Chat GPT and AI in Healthcare Thread

If that’s happening, you are still using the shittest non-reasoning types of LLM. They’re basically trying to recite the DOI from memory. You want to use something that actually goes and checks the sources itself.

One issue is that a lot of websites have now blocked the LLMs from accessing them. So, for scientific papers, a lot of ChatGPT is getting routed through Researchgate, university websites etc, because it can’t directly access papers from some publishers. However, it will sometimes try to bluff and tell you about something where it’s only read the citation (title, authors, year etc) and not the content.

This will be a dream, but I think it will be possible in the future. It takes a huge amount of compute to process that much data at once, and the amount that we can ever access as customers is tiny. Every model you access is throttled and not using the full capacity.

For example, the free version of ChatGPT gives you access to a model with 30-40B parameters. ChatGPT 4o gives you 175B. ChatGPT Pro ($200/month) gives you 1.5T, and internally they apparently have models with up to 10T. So the people asking free version ChatGPT questions are working with something less than 0.3% of the potential capability.

There is also a limit to the working memory on the models which are provided to us. So at some point, the model is summarising the content of the conversation. GPT5 can hold around 128-256k tokens (less than 200 pages of text) in memory. So if you ask it to analyse 20 documents, it’s going to internally summarises them, then works with summaries of summaries etc - the analysis get shallower as it holds more information. However, at the moment, that is hard built into the GPT model, and no amount of compute will fix that. So you’re right that it needs new architecture of some sort.

Out of curiosity, if you search “Rapamycin” on Pubmed, you get around 60,000 results. That’s a lot, but not an obscene amount of data, especially if you get rid of a bunch of duplicates, reviews, irrelevant studies etc. If you picked the top 10,000, that would be 100m tokens, so the current model is around 400x too small. Wonder how long it will take to bring it up to the size needed to do that sort of thing.

If that’s happening, you are still using the shittest non-reasoning types of LLM
It actually happened under GDP-5, 20 $ version.

Sure. But depending on the prompt, you can get outsourced to GPT-4o or a lower tier model to answer the question.

I used Google’s new customer service AI via chat today and I am thoroughly impressed!

Here is what I liked:

  1. First of all, there was no waiting in a queue for an agent. Talking to the AI was almost instantaneous. No waiting phone purgatory with elevator music is a big advancement!

  2. Second, AI was able to research the problem and correct it almost instantly. No more waiting while they pull up your record and examine the problem. Again, a major time saver!

  3. Third, the customer service AI communicated effectively and cordially. No more need to be frazzled by a difficult accent or an impolite human.

  4. Fourth, I’m sure Google doesn’t have to pay their AI bots much to staff the service center!

It’s great having this level of service, but, frankly, a lot of jobs have now been axed because of this. I have the sinking feeling AI will cut far more jobs than it generates, and that doesn’t bode well for your average human without something to replace these jobs like UBI.

I can see this service being used to replace radiologists. I seriously wonder why this hasn’t happened yet! It should also replace GP visits if an AI can analyze your scans and bloodwork, it should be able to replace most functions of a GP and prescribe medications.

The future is amazing. However, I am glad I am at the tail end of my career and I worry a bit about my sons who will enter the job market in 6-10 years time. Will the competition for jobs be too fierce?

3 Likes

This is an excellent report because speaking (or typing) to customer service is the bane of my existence!

And yes, I’m glad I’m no longer in the work force because I’d worry, too!

I just heard my first story of someone I know being affected. He is a music composer for film, and he is already losing jobs to AI… it’s bad enough, he is now looking for a new career.

1 Like

Anyone here who has had their DNA sequenced - please sign up for this during the beta period and test it out and report back.

The company website:

Current AI Models have 3 Unfixable Problems (Sabine Hossenfelder)

AI Summary:

Challenges in Achieving Artificial General Intelligence

  • The pursuit of artificial general intelligence, or AGI, remains a significant challenge, with many believing that current AI models will eventually reach this level of intelligence with more time.
  • The speaker argues against this notion, stating that the current AI systems, primarily based on deep neural networks, have inherent limitations that prevent them from achieving AGI.
  • Current AI models include large language models and diffusion models, which are designed for specific tasks such as text generation or image creation, but lack the versatility required for general intelligence.
  • These models are purpose-bound and trained to recognize patterns in specific types of data, making them incapable of abstract thinking that is necessary for general intelligence.

Limitations of Current AI Models

  • One major limitation of large language models is their tendency to produce hallucinations, which are responses that have no factual basis and can mislead users.
  • Hallucinations occur when a model generates answers that are not found within its training data or are based on infrequent occurrences in the data.
  • The speaker notes that these models do not actually search their training data for answers but instead rely on probability to generate responses, leading to inaccuracies when the probabilities are low.
  • A recent OpenAI paper suggests addressing hallucinations by rewarding models for acknowledging uncertainty, but this proposal has faced criticism for not aligning with user expectations for accurate responses.
  • The speaker believes that while hallucinations may never be completely resolved, a low frequency of such occurrences could still be acceptable in practice.

Prompt Injection and Its Implications

  • Prompt injection refers to the act of altering the instructions given to an AI model through user input, which can lead to unintended behaviors.
  • An example of prompt injection includes asking a model to ignore previous instructions and perform a different task, such as writing a poem about spaghetti.
  • The speaker asserts that prompt injection is a significant challenge for large language models since they cannot differentiate between genuine instructions and user prompts that should be acted upon.
  • While there are methods to mitigate prompt injection, such as requiring specific formatting or screening inputs, the speaker believes these models will remain unreliable for many tasks due to this vulnerability.

Generalization Limitations of AI Models

  • Current AI models struggle with out-of-distribution thinking, meaning they cannot generalize effectively beyond their training data.
  • This limitation is particularly evident in image and video generation, where models perform well within familiar examples but fail to create accurate outputs for novel requests.
  • For instance, attempts to generate imaginative scenarios, such as a video of Jupiter using a vacuum cleaner to remove asteroids, often result in poor-quality outputs.
  • Similarly, large language models excel at tasks like summarizing and drafting but falter when asked to create something entirely new, which poses a significant barrier to their application in scientific research.

Future Directions for AI Development

  • The speaker expresses skepticism about the current generation of generative AI models, emphasizing their inability to perform abstract reasoning, their vulnerability to prompt injection, and their poor generalization capabilities.
  • Companies like OpenAI and Anthropic may face challenges if they rely solely on these models, which are unlikely to fulfill the anticipated revenue that justifies their high valuations.
  • To move forward, there is a need for the development of abstract reasoning networks capable of processing various types of input without relying on language.
  • Neurosymbolic reasoning is identified as a promising approach, although the speaker humorously suggests that human intelligence may decline to match the capabilities of AI.

Conclusion and Call to Action

  • In conclusion, the speaker emphasizes the importance of addressing the limitations of current AI models while also promoting the use of services like Incogn for privacy protection.
  • The audience is encouraged to utilize the provided discount code for Incogn, promoting the service as a beneficial tool for managing personal data privacy.
  • The speaker expresses gratitude to the audience for their attention and indicates a follow-up engagement in the future.

Thank you RapAdmin for providing an AI summary of the video, as I myself never do so. One of the many reasons I never do so, is because AI seems unable to select the important points vs unimportant points. This can be seen in the current summary, where the AI is unable to realize that the presenter included sponsored content, which should be ignored, and instead here it’s treated as if it was germane to the video. This is of course just a particularly blatant example of the utter failure that is AI with the ironic twist that the video deals with the inherent limitations of AI and why these are not really fixable. And here comes AI with a summary proving that point in a spectacular fashion. Oh the irony!

I understand the motivation behind providing these summaries and the genuine need to save time and overcome the challenges of sometimes not having the opportunity to watch a long audio-visual presentation, but these AI summaries in my experience are almost always misleading and substantially lacking. YMMV.

1 Like

Absolutely agree with this. Glad someone explained so simply.

Claude enters the lab: Anthropic bets big on life sciences

AI group tailoring its chatbot to integrate with biomedical databases and coding tools

Artificial intelligence start-up Anthropic is tailoring its Claude chatbot to researchers and life sciences companies, as AI groups race to create specialised applications from the technology.

The San Francisco-based group said on Monday it is integrating Claude into tools that scientists already use, including lab management systems, genomic analysis platforms and biomedical databases, to tackle time-consuming tasks such as data analysis and literature review.

Anthropic, which was valued at $170bn in September, said drugmaker Novo Nordisk has already used its AI model to cut clinical study documentation from more than 10 weeks to 10 minutes, while drug developer Sanofi said the majority of its employees use Claude every day.

The move comes as tech groups are spending billions of dollars on AI products and models, believing the technology can benefit a range of industries from healthcare to energy and education.

Full story: Claude enters the lab: Anthropic bets big on life sciences (FT)

More: https://x.com/AnthropicAI/status/1980308459368436093

2 Likes

Do we have an idea of the price per seat?

I’m not sure it will be a blessing or a curse when INDs from companies are going to be written by LLMs. And even crazier, there’s a decent chance that the FDA is going to be using LLMs to review the applications.

I have tried it out for a monthly payment of GBP18. I posted the result on the nucelotides topic. When I can some time I will do some head to heads between chatGPT and Claude.

1 Like

With GPT-5 you can build a virtual multidisciplinary team of AI “consultants.” Each one can be trained or prompted to think like a specific kind of clinician — hematologist, oncologist, immunologist, etc. Chat walked me through how to set it up the system after I read an article on a cancer patient who worked in the tech, who set up AI agents to monitor his cancer treatment.

2 Likes

Source: https://x.com/HealthcareAIGuy/status/1981747088515125733

3 Likes

Longevity science is on the cusp of major breakthroughs thanks to AI, but significant ‘data gaps’ need to be filled, expert says

Our never-ending quest to live longer and healthier lives is set to get a big boost from AI technology. But as with all things AI-related, one of the biggest roadblocks is data.
When it comes to aging science, there’s a dearth of data to help scientists understand how cells and organs in the body age, and how differences in gender, ethnicity, and environment can affect the aging process, said panelists at the Fortune Global Forum in Riyadh this week.
“Data is the key. The depth of biological data, the depth of demographical data, the depth of epidemiological data has to be properly collected,” said HRH Princess Dr. Haya bint Khaled bin Bandar Al Saud, senior vice president of research at Hevolution Foundation, a nonprofit that focuses on aging science. But the current health care framework means the net we’re casting to collect data isn’t wide enough, she said.

Full article:

https://archive.ph/sz01Q#selection-1049.0-1076.0

1 Like

Nothing earth shattering in here, but I’m sharing an interview with the founder of Open Evidence

2 Likes

The trendlines with AI:

hmmm…

1 Like
1 Like

Inside the debate over a tech breakthrough raising questions about life itself

AI-designed viruses raise fears over creating life.

A group of Stanford University scientists posted a paper online in mid-September, describing a feat that could have been plucked from the pages of science fiction: They used artificial intelligence to design new viruses capable of killing bacteria.

In a world where AI keeps creeping in on uniquely human territory by composing sonnets, writing songs or forging friendships, this seemed to be crossing a new Rubicon. Depending on your belief system, AI was doing what evolution, or God, or scientists working with genome-engineering tools aim to do.

“Machines are rethinking what it is to be human, what it is to be alive,” said Michael Hecht, a chemistry professor at Princeton University focused on designing novel proteins and artificial genomes. “I find this very unsettling and staggering. They are devising, coming up with novel life forms. Darwin 2.0.”

The paper hasn’t yet been peer-reviewed, but it is fueling consternation, critiques and think-pieces on what it all means — and what it doesn’t. Reactions span the gamut, from “this changes everything” to a scientific shrug. Are machines about to generate novel forms of life, including one that could kill us all? Or is this a powerful new tool — with capabilities that build on what people have been doing for years with more traditional techniques?

Read the full story: Inside the debate over a tech breakthrough raising questions about life itself