With the release of large language models like GPT-3 and PaLM, big tech companies have been experimenting with them for quite some time now. Recently, Google joined the party in response to OpenAI’s ChatGPT with Med-PaLM, a model evaluated on the MultiMedQA benchmark and aimed specifically at answering medical queries.
Here’s a big part of the “medical AI” problem: you can’t get the right answers without the right input. USMLE questions already hand you the right input on a shiny silver platter, a clear-cut vignette with a single correct answer (not quite the same as real patients). I’d be more bullish on using computer vision for non-interventional Radiology/Pathology.
LLMs and other “AI” models have certainly mildly surprised me with the speed of advancement. But we’ve also seen large failures, and even products that were essentially vaporware (cough IBM Watson Health cough).
I’ve induced ChatGPT to bypass its guardrails and give medical advice. It has been utterly wrong, even though it might sound believable to my partner (who isn’t in medicine); a simple Google search does slightly better than ChatGPT. I do, however, like the believable Radiology reports it has generated.
Instead of general queries, what if a patient chart is given to AI?
Suppose we feed the AI the posted bloodwork of members here and ask it for their biological age? Then we compare it with Levine’s PhenoAge, etc. We can also ask the AI what algorithm it based its conclusions on.
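For the bloodwork idea, the comparison baseline is easy to script. The sketch below is purely illustrative: the weights, intercept, and lab fields are hypothetical placeholders, not Levine’s published PhenoAge coefficients, so don’t use it for a real comparison — it only shows the shape of a transparent calculation you could check an LLM’s answer against.

```python
# Hypothetical linear biological-age estimator. The structure (weighted sum of
# lab values added to chronological age) mirrors clock-style models such as
# Levine's PhenoAge, but every number below is made up for illustration.
HYPOTHETICAL_WEIGHTS = {
    "albumin_g_dl": -2.0,   # higher albumin -> "younger"
    "glucose_mg_dl": 0.05,  # higher glucose -> "older"
    "crp_mg_l": 0.8,        # higher inflammation -> "older"
}
HYPOTHETICAL_INTERCEPT = 5.0

def biological_age_sketch(age_years, labs):
    """Return a toy biological-age estimate from a dict of lab values."""
    delta = HYPOTHETICAL_INTERCEPT + sum(
        HYPOTHETICAL_WEIGHTS[name] * value for name, value in labs.items()
    )
    return age_years + delta
```

If you asked the AI what algorithm it used, an answer in this form (named inputs, explicit weights) is at least auditable; a black-box number is not.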
I think this is the key issue here… Given the current trajectory, where do you think these AI systems will be in 5 or 10 years?
There are too many variables to make accurate predictions, and I’m not an expert in this. It’s challenging to anticipate all the possible uses or misuses, particularly given how easy it is to fool the average human.
My guess would be more misinformation overall in the short run, while power concentrates in the few firms with the best LLMs, making it hard for newcomers who lack quality training data, especially when private data is “siloed”… like in medicine.
That can be done with an Excel sheet. Charts already have simple calculators to run risk scores. I don’t see what “AI” adds there, unless you’re interested in “AI” vaporware startups that aren’t really doing AI.
Here’s the problem with patient charts: they’re scattered everywhere, and EHRs don’t automatically share data with each other. No one has an incentive to build an HIE (health information exchange). What patients don’t see behind the scenes is how many records requests are involved. Even then, some physicians deliberately leave a lot out of the chart, and for good reason: I doubt many psychiatrists put all their therapy notes in the chart, for fear of court subpoenas changing the outcomes of certain cases. Best to keep them brief and omit a ton of information. How are you going to get training data on that sort of “implicit knowledge”?
Not only that, I see so many charts with bad or super messy data in the first place (e.g., 20 meds the patient is not actually on). Who’s going to manually clean up all that data? Any doctor is only as good as the quality of information they have and where they get it. Garbage in, garbage out.
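A script can catch the mechanical part of that cleanup, but not the clinical part. The sketch below (field names are hypothetical) collapses exact duplicate entries in a med list; deciding which surviving entries the patient actually takes still requires a human who can ask.

```python
def dedup_meds(med_list):
    """Collapse duplicate medication entries by normalized drug name.

    This handles the mechanical mess (the same drug imported repeatedly
    from old records), but it cannot tell which remaining meds the patient
    actually takes -- that judgment is exactly the "garbage in" problem
    automation can't fix.
    """
    seen = set()
    cleaned = []
    for med in med_list:
        key = med["name"].strip().lower()
        if key not in seen:
            seen.add(key)
            cleaned.append(med)
    return cleaned
```

Run it over a 20-line med list and you might get down to 12 lines, all of which still need a human to verify against the patient.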
Even a theoretical “perfect medical AI system” in the most ideal situation would face the same constraint. Without quality training data, how do you get to even a “good enough” medical AI, assuming you can overcome the million other hurdles? The most obvious of those: empathizing with patients and getting a feel for which questions to ask (or not ask) based on emotional responses. That’s not something a computer can currently pick up, nor likely in the near future.
Recent interesting news in the area…
And a little humor…