I found Claude’s Soul document to be interesting:
My major concern with all of these is the total disregard for privacy.
If you’re going to give OpenAI access to all your medical records, I’m sure ChatGPT will do an amazing job of analysing them and explaining things - especially for those of us who are not as “enthusiastic” as members of this forum. The models are very good at breaking down results and explaining them.
However - what is the cost of doing so? One cost is that your personal data becomes training data for more AI models. The companies are simply running out of legitimate human-created training data, and they now pay big bucks for access to it. That's why they love you to upload new original writing, scientific grant proposals, unpublished papers, your kids' homework, financial reports, images, etc. With patients consenting to share medical records, that is a massive new stream of valuable data for the companies.
Secondly, what about privacy? There are many implications here, because we know how dirty these corporations can be. After all, all of the AI companies have already committed mass copyright infringement, scraping websites for content that didn't belong to them. There are also all sorts of other people who would love to get their hands on those data. Imagine what a treasure trove your health data would be for advertising agencies, insurance companies, banks, mortgage lenders, etc. Hell, I'm sure the government would love to know everybody's ChatGPT history - imagine how much crime they could uncover from people asking how to hide their crypto profits, avoid taxes, etc. IMO, if you provide the information, it will eventually make its way to those people.
The point is - these companies are offering a very good service, which is super convenient. But I advise everybody not to lose sight of the bigger picture and long-term implications.
Personally, I am becoming very interested in local, offline models which can run on your own devices. There are several good models out there now, and some are specialised in medical knowledge. If you don’t have the compute power at home, some cloud GPU services are available - not perfect, but at least you’re not directly sending your most personal data to OpenAI.
Dose of uncertainty: Experts wary of AI health gadgets at CES
Health tech gadgets displayed at the annual CES trade show make a lot of promises. One smart scale promises a healthier lifestyle by scanning your feet to track your heart health, and an egg-shaped hormone tracker uses AI to help you figure out the best time to conceive.
Tech and health experts, however, question the accuracy of products like these and warn of data privacy issues — especially as the federal government eases up on regulation.
The Food and Drug Administration announced during the show in Las Vegas that it will relax regulations on “low-risk” general wellness products such as heart monitors and wheelchairs. It’s the latest step President Donald Trump’s administration has taken to remove barriers for AI innovation and use. The White House repealed former President Joe Biden’s executive order establishing guardrails around AI, and last month, the Department of Health and Human Services outlined its strategy to expand its use of AI.
You can turn off OpenAI's training on your data. And it's not just what's uploaded that's valuable; it's the entire conversation. After all, they are producing conversations, in some sense, rather than a new generation of what was uploaded.
Since most health data is already digital, in my opinion you should assume it's already public in some sense.
RAM prices are crazy right now, but you really could take an old gaming PC, upgrade the RAM, and run the 120-billion-parameter gpt-oss model well. It's basically ChatGPT at home.
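For anyone curious what "ChatGPT at home" looks like in practice, here is a minimal sketch of querying a locally hosted model. It assumes you are serving gpt-oss through an OpenAI-compatible local server such as Ollama; the port, endpoint path, and model tag are assumptions about a typical default setup, not verified settings for any particular install.

```python
# Hedged sketch: asking a question of a model served on YOUR machine.
# Assumes an OpenAI-compatible server (e.g. Ollama) listening on
# localhost:11434 and a model tagged "gpt-oss:120b" - adjust to your setup.
import json
import urllib.request


def build_chat_payload(model: str, question: str) -> dict:
    """Assemble a chat-completion request body. Nothing leaves your
    machine until you actually send this to your own local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }


def ask_local_model(question: str, model: str = "gpt-oss:120b") -> str:
    """Send the request to the local server and return the reply text."""
    payload = build_chat_payload(model, question)
    req = urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",  # local server only
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point of the sketch is the privacy property, not the plumbing: the medical question and the model's answer travel only over localhost, so nothing reaches OpenAI's (or anyone else's) servers.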
And the trend continues… though Anthropic, at least, does it with some safeguards, like HIPAA-oriented infrastructure (what that means exactly is a little unclear to me).
Livestream from an hour ago: life sciences and healthcare with Dario Amodei, running ~15 minutes starting around the 5-minute mark.
Wow, his sister and co-founder Daniela had her second child a few months ago. She had an infection during pregnancy, and many fancy doctors said it was viral. She got a second opinion from Claude, which suggested it was bacterial and that she needed antibiotics within 48 hours or it would go systemic, so she took them, and further testing showed Claude was right (11:40 mark).
Rapamycin is mentioned at the 48-minute mark of the livestream by David Fajgenbaum, co-founder and president of Every Cure:
Relatively brief and readable article (no need to AI-summarize it). Stanford has lots of sleep data to feed their AI.
“The most information we got for predicting disease was by contrasting the different channels,” Mignot said. Body constituents that were out of sync — a brain that looks asleep but a heart that looks awake, for example — seemed to spell trouble.