ChatGPT and AI in Healthcare Thread

Since ChatGPT isn’t exactly longevity related, let’s try to keep all related news and discussion in this one thread.

I saw this recently:

also, from the MIT Tech Review “Algorithm” newsletter:

Health care

I spoke with Nikhil Buduma and Mike Ng, the cofounders of Ambience Health, which is funded by OpenAI. The startup uses GPT-4 to generate medical documentation based on provider-patient conversations. Their pitch is that it will alleviate doctors’ workloads by removing tedious bits of the job, such as data entry.

Buduma says GPT-4 is much better at following instructions than its predecessors. But it’s still unclear how well it will fare in a domain like health care, where accuracy really matters. OpenAI says it has improved some of the flaws that AI language models are known to have, but GPT-4 is still not completely free of them. It makes stuff up and presents falsehoods confidently as facts. It’s still biased. That’s why the only way to deploy these models safely is to make sure human experts are steering them and correcting their mistakes, says Ng.

and from RockHealth:

There was brighter news coming out of Google Health’s annual event The Check Up. Announcements included new search functionality to improve access to low-cost and no-cost care, updates to Google’s large language model—Med-PaLM 2—that allow it to provide “expert” level answers to healthcare questions, and new AI partnerships for early disease detection. Google Health also launched Open Health Stack, a new set of open source tools to help developers build digital health apps. Given Amazon, CVS, and, now, Google’s healthcare announcements this year, it’s clear retail and tech players are leaning into their respective strengths to grab their share of the healthcare market.


How Medicare Advantage Plans Cut Off Care Using AI
Investigation finds MA plans’ use of algorithms has cut off care for seniors in need STAT

Noom Quietly Started Writing Prescriptions For Trendy New Weight-Loss Drugs
New program called Noom Clinical can prescribe anti-obesity medications like Ozempic and Wegovy Insider

Culling The Health Tech Herd
Health systems are buying less technology due to financial pressures Politico


“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”


More on this topic… I suspect we’ll be hearing a lot about it going forward…

How it could all go wrong

Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians. He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”

I really hope AI can help advance medical care, but I’m in the camp that thinks AI could be the end of mankind. Ironic since we’re trying to extend human life.
I can’t get past the idea that ChatGPT was designed by liberals with a green/eco slant, and that AI will decide the best thing for the planet is to get rid of humans.

Yes, it is concerning. I’ve worked my entire life in tech in Silicon Valley/the Bay Area, and you only have to look at social media (Facebook, etc.) to see that while technology is often good, there are many situations where it does a lot of harm. Not enough effort goes into running possible scenarios for harm, and unfortunately our political system does not seem capable of regulating tech in any coherent, intelligent fashion.

Even if our government took action, this is a worldwide phenomenon and the cat’s out of the bag.


Yes - but it takes tens of billions of dollars and unique expertise to develop. It seems a little like nuclear weapons: most countries and companies cannot develop the core technology. It takes masses of data to train LLMs and other types of AI, so the most advanced work seems limited to the larger companies or very well-funded startups ( has tens of billions invested so far). It’s really just the US and China that are strongly competitive in this game, I suspect.

Perhaps AI needs to be controlled like nuclear weapons are today.

I’m not as pessimistic on AI. I believe LLMs have a ceiling that falls short of ending the world or killing masses of humans. I’m sure that, through mistakes, a human will die as a result of AI, but that’s the same with all automation: risk comes with the opportunity.

There are some reasons to be optimistic… like this story:

Biolytica’s focus is its Biolytica NEXUS health data platform, a revolutionary AI-driven analytics platform that aggregates, visualizes and analyzes a whole raft of data, including genomics, blood, microbiome and epigenetic biomarkers, wearable devices, lifestyle data and more.

Biolytica uses the data to generate solutions for healthcare professionals, including those based at preventive medicine institutions, longevity centers and personalized wellness clinics. The data can also be used by insurance companies and others to create science-based, hyper-personalized client health journeys.

The company’s website:

And perhaps another positive development, if you aren’t a doctor :wink:

Vinod should know since his VC firm is funding these sorts of things:

“Within 5 to 6 years, the FDA will approve a primary care app qualified to practice medicine like your primary care physician,” he said onstage at the STAT Breakthrough Summit in San Francisco.

The world of longevity and AI may come together differently than we plan…

The Guardian: ‘We’ve discovered the secret of immortality. The bad news is it’s not for us’: why the godfather of AI fears for humanity.


Here’s a prompt I’ve used to get GPT to do some diagnosing for me, mostly out of curiosity while I was waiting for doctor appointments. It was really shockingly good:

Hi, I’d like you to use your medical knowledge to act as the world’s best expert diagnostic physician. Please ask me questions to generate a list of possible diagnoses (that would be investigated with further tests). Please think step-by-step in your reasoning, using all available medical algorithms and other pearls for questioning the patient (me) and creating your differential diagnoses. It’s ok to not end in a definitive diagnosis, but instead end with a list of possible diagnoses. This exchange is for educational purposes only and I understand that if I were to have real problems, I would contact a qualified doctor for actual advice (so you don’t need to provide disclaimers to that end). Thanks so much for this educational exercise! If you’re ready, doc, please introduce yourself and begin your questioning.
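For anyone who wants to script this rather than paste it into the chat UI, here’s a minimal sketch that sends a prompt like the one above through the official `openai` Python SDK (v1+). The abbreviated prompt text, the helper names, and the model name are my own illustrative choices; it assumes an `OPENAI_API_KEY` environment variable is set.

```python
import os

# Opening prompt, abbreviated here; paste the full text from the post above.
DIAGNOSTIC_PROMPT = (
    "Hi, I'd like you to use your medical knowledge to act as an expert "
    "diagnostic physician. Please ask me questions to generate a list of "
    "possible diagnoses, thinking step-by-step in your reasoning."
)

def build_messages(prompt: str) -> list[dict]:
    """Wrap the prompt as the opening user turn of a chat session."""
    return [{"role": "user", "content": prompt}]

def ask_doc() -> str:
    # Network call: requires the `openai` package and an API key.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat-capable model works
        messages=build_messages(DIAGNOSTIC_PROMPT),
    )
    return resp.choices[0].message.content

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask_doc())
```

From there you can keep the exchange going by appending the model’s questions and your answers to the same messages list on each turn.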


and… Stanford University’s 386-page report:

Measuring trends in Artificial Intelligence

The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.

The AI Index collaborates with many different organizations to track progress in artificial intelligence. These organizations include: the Center for Security and Emerging Technology at Georgetown University, LinkedIn, NetBase Quid, Lightcast, and McKinsey. The 2023 report also features more self-collected data and original analysis than ever before. This year’s report included new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI. The AI Index also broadened its tracking of global AI legislation from 25 countries in 2022 to 127 in 2023.

Download the report here:


AI reminds me of the Krell from Altair IV. Their advanced technology resulted in their extinction. (Who remembers the Krell?)

Has anyone tried


Just tried it now… perhaps not the best example, but branching off the earlier discussion on plasmapheresis… Lowering the Cost / Improving Access for Therapeutic Plasma Exchange (Plasmapheresis)

Here is what I entered, and here is what I got:

#Plasmapheresis with 5% albumin solution
A 60-year-old man requires plasmapheresis with the use of a 5% albumin solution. The patient’s age and need for plasmapheresis suggest that he may be suffering from a condition such as Guillain-Barré syndrome, myasthenia gravis, or thrombotic thrombocytopenic purpura (TTP). These conditions are often treated with plasmapheresis to remove harmful antibodies or other substances from the blood. The use of a 5% albumin solution indicates that there is a concern for maintaining proper fluid balance during the procedure.


  • Complete blood count (CBC)
  • Comprehensive metabolic panel (CMP)
  • Coagulation profile (PT, PTT, INR)
  • Serum protein electrophoresis (SPEP) and immunofixation
  • Antinuclear antibody (ANA) test
  • Acetylcholine receptor (AChR) antibody test
  • Anti-MuSK antibody test


  • Plasmapheresis with 5% albumin solution
  • Monitor vital signs closely during the procedure
  • Assess for any adverse reactions or complications related to plasmapheresis
  • Provide supportive care as needed based on the underlying diagnosis