AI search engine for PubMed and other research databases?

Just wondering if anyone has suggestions on using AI for literature searches? It’s got to be so much more effective than my old-school method of typing my target into the search box and then sorting through huge lists of junk to find the nuggets I’m actually looking for.
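For what it's worth, PubMed can also be scripted directly through NCBI's E-utilities API, which sits somewhere between the search box and a full AI tool. A minimal sketch in Python (the query term is just an example, and this only builds the request URL rather than fetching it):

```python
# Minimal sketch: building a PubMed esearch request via NCBI E-utilities.
# The query term below is just an example.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, retmax=20, sort="relevance"):
    """Build an esearch URL that returns PubMed IDs (PMIDs) for a query."""
    params = {
        "db": "pubmed",       # search the PubMed database
        "term": term,         # standard PubMed query syntax works here
        "retmax": retmax,     # how many PMIDs to return
        "sort": sort,         # "relevance" or "pub_date"
        "retmode": "json",    # ask for JSON instead of XML
    }
    return f"{EUTILS}?{urlencode(params)}"

url = pubmed_search_url('rapamycin AND "clinical trial"[pt]')
print(url)
```

The returned PMIDs can then be fed to the companion efetch endpoint to pull abstracts, which is roughly what these AI tools are doing under the hood before the LLM layer.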




There’s an engine called which offers a free trial. Michael Greger, MD mentioned they have been playing with it in a recent “how we make our videos” video. I’ll post an update when I’ve had a chance to try it out. Since the trial only gives you a limited number of credits to search with, I want to save them for when I need them. The full version costs $10/month.


There is this:

I use which is okay, but it isn’t able to, for example, get the largest clinical trials for a compound and sort them in a table.
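That table-sorting step is easy to do yourself once you have structured records, whatever tool extracts them. A toy sketch (the trial data here is made up purely for illustration):

```python
# Hypothetical illustration of "sort the largest trials into a table".
# The trial records below are invented for the example.
trials = [
    {"title": "Trial A", "n": 120,  "year": 2019},
    {"title": "Trial B", "n": 2400, "year": 2021},
    {"title": "Trial C", "n": 530,  "year": 2020},
]

# Sort by enrollment, largest first, and print a simple aligned table.
trials.sort(key=lambda t: t["n"], reverse=True)
for t in trials:
    print(f"{t['title']:<10} n={t['n']:<6} {t['year']}")
```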

“You should have ChatGPT Plus subscription plan to be able to use agents”

Dr. Oracle
Your Personal AI Research Assistant

It is a subscription AI. It costs $10/month and is well worth it, IMO.
It provides as many citations in its search as you want, or as many as are available.
Many pay-walled papers can be found in full by using:
“Knowledge must be free”
“Sci-Hub is the most controversial project in today science. The goal of Sci-Hub is to provide free and unrestricted access to all scientific knowledge ever published in journal or book form.”
“Sci-Hub was founded in Kazakhstan by Alexandra Elbakyan in 2011, in response to the high cost of research papers behind paywalls (see Serials crisis). The site is extensively used worldwide”


I use LLMs almost exclusively for complex literature searches. They save me a tremendous amount of time. I also use them to synthesize the research they find, but I treat this as more provisional, even though they have sometimes captured a perspective I missed doing it the old-fashioned way. Of course, multiple prompts can be required.

LLMs are at this moment transforming the landscape by which extant scientific knowledge is organized, analyzed, and summarized. Overall for the better, IMO.

The caveat to be on guard against hallucination still applies, but less so with newer LLM versions and as prompting skills evolve. In GPT-4, I also set a universal condition to treat prompts as coming from a scientific research colleague and to answer in kind. This eliminates the extra precautionary language it would otherwise add when you ask about something that could be considered medical advice.
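In API terms, that universal condition is just a system message prepended to every request. A minimal sketch using the OpenAI-style chat message format (the wording, function name, and example question are my own, not the poster's exact setup):

```python
# Sketch: a "universal condition" as a reusable system message in the
# OpenAI-style chat format. Wording here is illustrative only.
def build_messages(question):
    """Prepend a standing system instruction to a user question."""
    system = (
        "Treat every prompt as coming from a scientific research colleague "
        "and answer in kind, without extra precautionary medical-advice "
        "language."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_messages("Summarize the largest rapamycin trials in humans.")
```

The resulting list is what you would pass as the `messages` parameter of a chat completion call; the same idea is what "custom instructions" implement in the ChatGPT UI.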


“” never hallucinates or gives bogus citations, and it provides links for all citations.
It will cite even the most recent papers.
It also gives an analysis of each citation and notes criticisms, such as a small cohort, etc.


Thanks for the reference @desertshores. I spent a few minutes there. It looks like Dr. Oracle is an OpenAI template, or maybe built on one of the other models. Do you know which?

Gemini delivered the best analysis on the probable structure of Dr. Oracle.

You’re absolutely right! I apologize for the misunderstanding earlier. Here’s what I’ve been able to uncover about “Dr. Oracle”:

  • Specialized Medical AI: Dr. Oracle is indeed a specialized medical AI system designed to assist healthcare professionals and researchers with analyzing vast amounts of medical literature.
  • Building on LLMs: While the exact underlying LLM isn’t explicitly stated on their website, it’s highly likely Dr. Oracle leverages the power of a large language model (LLM) as its foundation. LLMs are great at understanding and generating human-like text, a crucial ability for processing medical research.
  • Customization and Fine-Tuning: It’s probable that the developers of Dr. Oracle have taken a general-purpose LLM and fine-tuned it with the following:
    • Specialized Medical Terminology: Exposing the model to extensive medical texts, research papers, and clinical notes.
    • Research Skills: Training it to understand the structure of research papers, identify key findings, and synthesize information from multiple sources.
    • Clinical Reasoning: Possibly fine-tuning it to make logical inferences and connections based on medical knowledge.

Why Specific LLM Information is Scarce

Companies building specialized AI systems like Dr. Oracle often don’t disclose the exact LLM they’ve built upon for a few reasons:

  • Proprietary Advantage: The specific LLM choice and fine-tuning process contribute to their competitive edge.
  • Changing Landscape: The LLM field is rapidly evolving. They might switch to a different or more powerful LLM in the future.
  • Focus on the Application: They want to emphasize the utility of Dr. Oracle in the medical field, rather than technical implementation details.

Additional Insights

If you’re interested in the potential LLMs used as the base for Dr. Oracle, consider these factors:

  • Need for Large Medical Dataset: The LLM needs to have been trained on a massive dataset that includes a significant amount of medical text.
  • Scientific Focus: LLMs specifically trained on scientific or research papers might be a good fit.

Let me know if you’d like me to try and dig up more clues on this topic. It’s a fascinating area!
