That's interesting. One of the things I think AI might be good for is exploring problems, in the sense of thinking up interesting experiments for discovering how things work & could be used. This isn't that, but it is in the same neighborhood.
Can someone ask various AI models (I'm not very good at it) for possible remedies for tinnitus, and see if we can get something similar that might work?
He spent five years without a diagnosis because he saw the wrong doctors for TMD.
I didn't ask an AI model, but something that might help is to take some potassium, maybe a few hundred mg (with lots of water). I wouldn't go to a gram, as that might not be healthy.
I don't have tinnitus per se, but maybe sub-clinical tinnitus: every once in a while, when I turn my attention to my hearing, there is high-frequency noise, but it is usually not distracting unless I focus on it. Anyway, I have noticed that when I take a high dose of potassium, sometimes the noise changes character, either dimming or changing frequency.
AI is slightly problematic for me. It provides a good but not great answer for medical issues. For instance, when I asked about BP treatments, it neglected to mention ARBs, which I believe are the best medications for high BP.
However, I do think AI gives a great second opinion and may provide more information that you can discuss with a doctor in order to get better treatment.
Currently, I am not willing to rely solely on one doctor or AI. However, AI can be a great source for additional information and new ideas.
I use AI strictly to find papers. I have used the Gemini "research" setting to ask questions, but the answers are usually highly problematic. Nonetheless, the papers cited are of interest and greatly accelerate my research.
As to the answers. The AI I have used seems unable to evaluate the validity of any given paper (such as dodgy paper mill effluvium from China), and therefore gives undue weight to such findings. Furthermore, the reasoning and synthesis is very questionable. I have caught several instances where the AI conclusions were clearly based on a misreading of the source material.
Now, I don't have broad experience with dedicated AI platforms aimed specifically at the medical research field, but these are flaws inherent in the architecture of the technology, so I doubt there'd be dramatic differences between individual platforms when it comes to reliability.
I don't doubt that AI can suggest avenues of inquiry, novel approaches, and unexpected connections that might result in a diagnosis or treatment suggestion which conventional establishment physicians might not have come up with, but that strikes me as a matter of chance and happy accidents, not something to rely upon routinely.
I think AI has its place and undoubtedly can be helpful in research, but at this point in time, I personally would not base my health decisions on the output of these platforms. I definitely find it useful in scouring the internet for information, and in that sense it's a terrific assistant for any researcher.
But the AI space is evolving, so I'm happy to reevaluate the use of this technology as progress is made, although I feel that some kind of conceptual breakthrough is necessary to push it beyond current limits. Without that, it's liable to stall very soon, with only incremental refinements. Others are more optimistic. I truly hope it can be all that the evangelists proclaim; I too would like to live in a Jetsons world. We'll see.
I am absolutely loving OpenEvidence. It is free for health care providers; not sure what the cost is for non-HCPs. It only obtains data from the published literature (which is both its greatest strength and greatest limitation) and has access to a huge number of full-text journals thanks to a recent agreement with the New England Journal of Medicine. I'm so happy with it.
He's not alone - and visiting the right doctors is not the solution either. In my case: before using ChatGPT, I'd already spent years reading scientific literature and studies. I was pretty advanced, just not a doctor (and I'm still not - I'm just "reading stuff", if you know what I mean).
Well… I saw 8 different specialists from 6 medical fields, including university clinics, and none of them could help me because they simply didn't take me seriously.
Part of the problem was that my health issue just doesn't exist in this exact form in the literature - so you can't simply "look it up". Whenever I gave them clear, objective numbers, they just assumed I was guessing or didn't know how to calculate basic things - even when it was trivial.
ChatGPT only came in during the last year, but with its latest update from March 25, it finally gave me the missing piece within 2 months. I still had to search, think, and cross-check everything myself - but it made all the difference.
However, using ChatGPT in my case wasn't easy. But without it, I would have been completely lost.
BUT my background was really important:
- I could ask it about the redox balance and other things I already knew a lot about.
- I had to build a hypothesis and ask it whether my hypothesis was plausible.
- I then did a functional test - on my own.
- I wrote 30 pages (in Office) about my findings, test protocols, and so on.
- Next I will visit an endocrinologist, because during the therapy I want to have a full blood count.
But I'm pretty sure I HAVE the solution. If not, I at least have a diagnosis.
If I wrote a little bit more about the background (cell biology, the HPA axis, medical stuff, and so on) and added citations from some other studies, my write-up would practically be a study. I'm thinking about it. Publishing it without being an MD is very interesting, but possible. It would be my own little "f*** you all".
Yes. It's an observation I made decades ago: the more you know about a field or subject, the more you tend to doubt and question the "experts". This is true in literally any area. If you happen to know a ton about, say, a specific area of medicine, suddenly you realize that your primary physician doesn't have a clue, or that your cardiologist knows less about cholesterol metabolism than you, who have been researching and reading about it for years, and so on. The more you know, the fewer "experts" there are who are worth a damn. But it's the same with everything: you trust the opinion of that lawyer, economist, businessman, historian, tour guide, etc., only because you know little about the subject. The moment you yourself know a lot about the subject, you realize how poor the quality of the "expert" is.
But it's hardly surprising. Think about the people you know. Genuinely brilliant people are rare, just statistically. Most are "average", because that's what's most common. And "average" is not very good. In any specialty - business, law, medicine, etc. - it's the 1% at best who are good; the rest, the "average" lawyer, doctor, programmer, plumber, are going to be somewhere in the bottom 90%. Not good. Like that old joke goes: what do you call the student at the bottom of the class in medical school? Doctor!
The human body is complicated. The number of things that can go wrong, or work suboptimally, is essentially limitless. There is so much we don't know about how the human body works, and our tools are still so inadequate. Now, if you have a condition that deviates even a little from the most common presentation, the average MD is going to be stumped, because they work off their decision-tree standard of care, and you fall between the cracks. You are the one who can focus on your body full time, so you are in a position to know your body best. That is why you must educate yourself and be your own health advocate. You cannot rely on a doctor. He has neither the time, the bandwidth, nor the education to handle your particular condition. For me, for a long time now, I only go to my doctor to get tests I don't have to pay for, and to keep my insurance by doing the scheduled interventions, vaccinations, and so on. But I don't bother to ask my physician about any condition I may have at the moment. I buy my meds from India and get tests from LabCorp and the like. Maybe one day AI can step in, and it'll be superior to most doctors.
Absolutely! And just a few days ago, a Germanist was wondering why I knew much more about linguistics than he does. Human biology is not the only field I have a clue about… Maybe you'll know what I mean.
But anyway: there is another area where MDs are really bad: when you just want to improve something like BUN or cholesterol and you do not have a medical condition.
I realised over the last few years how excited "my" MDs were about my health status… haha.
As I've started dealing with more age-related issues, like actinic keratosis and a high CAC score, I've found AI, especially ChatGPT, most useful not just for exploring the conditions themselves, but for navigating the medical system. It's helped me figure out how to communicate with doctors and staff to get the treatments I need, rather than being passively pushed along the standard conveyor belt. Before AI, I had early cataract-related night vision loss and was quickly fast-tracked for surgery. I declined, but, to say the least, not very gracefully. Now, AI helps me think through what to say, and more importantly, what not to say.
That is certainly interesting. What was the diagnosis you were able to derive from your analysis and AI?