Using AI for Health and Longevity Research - Your Favorite Prompts

Many of us are starting to use AI in our research into health and longevity, and also in evaluating or summarizing research papers. I thought I’d create this thread to help people share prompts that they’ve found helpful, and to solicit input from others on how we might improve our prompts. @John_Hemming , @jakexb , @rwooidin3 @A_User @adssx @cl-user @ng0rge @mccoy @dicarlo2 and others … please add prompts that you’ve found really helpful as you’ve played with the LLMs…

Research Paper / Journal article analysis

Here is my current prompt for evaluating new research papers in GPT5:

Please summarize in detail the attached research / journal paper.

Then evaluate the journal article’s quality and validity using the following framework:

  1. Journal & Publication Quality
  • Is the journal peer-reviewed and indexed in major databases?
  • What is the journal’s impact or reputation in the field?
  2. Study Design & Methods
  • What is the study type (RCT, observational, meta-analysis, animal, in vitro)?
  • Are the methods valid, transparent, and reproducible? Was the study preregistered or data made available?
  • Is the sample size sufficient, and were power calculations or effect sizes considered?
  3. Data & Statistical Analysis
  • Are the data presented accurately and consistently?
  • Are the statistical methods appropriate and correctly applied?
  4. Results & Conclusions
  • Are the conclusions supported by the data?
  • Is the interpretation objective, avoiding overstatement?
  5. References & Author Credibility
  • Are references relevant, current, and balanced?
  • Do the authors have expertise in this area? Are there conflicts of interest or funding sources that may bias the results?
  6. Context & Knowledge Contribution
  • How does the paper influence existing knowledge—does it confirm, contradict, or extend prior research?
  • Compare and position this paper’s findings relative to the broader literature.
  • Assess how strong and generalizable the conclusions are.

Output Format: Please provide your evaluation in a structured report with section headings, highlighting strengths, weaknesses, and an overall assessment of reliability and impact.
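For anyone who scripts their reviews, the framework above is regular enough to build programmatically, so the same checklist can be reused across papers and models. This is a minimal sketch; the section names mirror the framework, and the wording of the boilerplate lines is my own condensation, not the exact prompt:

```python
# Build the six-part evaluation prompt from the framework above so the
# same checklist can be pasted into any model's chat box.
FRAMEWORK = [
    ("Journal & Publication Quality", [
        "Is the journal peer-reviewed and indexed in major databases?",
        "What is the journal's impact or reputation in the field?",
    ]),
    ("Study Design & Methods", [
        "What is the study type (RCT, observational, meta-analysis, animal, in vitro)?",
        "Are the methods valid, transparent, and reproducible?",
        "Is the sample size sufficient, and were power calculations considered?",
    ]),
    ("Data & Statistical Analysis", [
        "Are the data presented accurately and consistently?",
        "Are the statistical methods appropriate and correctly applied?",
    ]),
    ("Results & Conclusions", [
        "Are the conclusions supported by the data?",
        "Is the interpretation objective, avoiding overstatement?",
    ]),
    ("References & Author Credibility", [
        "Are references relevant, current, and balanced?",
        "Are there conflicts of interest or funding sources that may bias the results?",
    ]),
    ("Context & Knowledge Contribution", [
        "Does the paper confirm, contradict, or extend prior research?",
        "How strong and generalizable are the conclusions?",
    ]),
]

def build_review_prompt():
    """Assemble the full evaluation prompt as one string."""
    lines = [
        "Please summarize in detail the attached research / journal paper.",
        "Then evaluate its quality and validity using this framework:",
    ]
    for i, (section, questions) in enumerate(FRAMEWORK, start=1):
        lines.append(f"{i}. {section}")
        lines.extend(f"   - {q}" for q in questions)
    lines.append("Output: a structured report with section headings, strengths, "
                 "weaknesses, and an overall assessment of reliability and impact.")
    return "\n".join(lines)
```

The payoff is consistency: every paper gets graded against the identical rubric, so the model’s assessments are comparable across a reading list.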

Personal Blood test analysis:

Please provide a structured and in-depth analysis of my attached blood test results.

  1. Summarize the results in a table comparing my values to both conventional reference ranges and Peter Attia’s Medicine 3.0 optimal targets.
  2. Identify all markers that fall outside Attia’s optimal ranges (even if still within lab normal).
  3. For each suboptimal marker, provide evidence-based strategies to optimize it, covering lifestyle, nutrition, supplements, and medications (where appropriate).
  4. Organize the analysis by category: metabolic health, cardiovascular/lipids, kidney, liver, inflammation/immune, hematology, hormones, vitamins/minerals.
  5. Provide a final section that integrates the findings into an overall longevity and healthspan strategy.
    Context: [insert age, sex, current medications, and supplements if relevant].
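Step 2 of this prompt (flagging markers that are lab-normal but outside a stricter optimal target) is also easy to sanity-check yourself before trusting the model. A sketch, where every range is an illustrative placeholder, not medical advice; substitute your own lab’s reference ranges and whichever optimal targets you follow:

```python
# Flag markers that sit inside the conventional lab range but outside a
# stricter "optimal" target. All ranges below are ILLUSTRATIVE
# PLACEHOLDERS -- substitute your lab's ranges and your own targets.
RANGES = {
    # marker: (lab_low, lab_high, optimal_low, optimal_high)
    "ApoB (mg/dL)":            (0, 130, 0, 80),
    "Fasting glucose (mg/dL)": (65, 99, 70, 90),
    "hs-CRP (mg/L)":           (0, 3.0, 0, 0.5),
}

def classify(marker, value):
    """Return which band a result falls in: lab-abnormal, suboptimal, or optimal."""
    lab_lo, lab_hi, opt_lo, opt_hi = RANGES[marker]
    if not (lab_lo <= value <= lab_hi):
        return "outside lab range"
    if not (opt_lo <= value <= opt_hi):
        return "within lab range but suboptimal"
    return "optimal"
```

Running your panel through a table like this first gives you a checklist to compare against whatever the model reports, which is a useful guard against it hallucinating or skipping a marker.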
6 Likes

Mine tend to be really short. With papers it is: summarise, identify novelty, and do a critique of xxxx

1 Like

Good idea to have a prompt repository, since GPT5 is so sensitive to prompts. I have run a few experiments, like the health-and-longevity metaprompt I attach to every specific health-related question. The latter is pasted into the chat box.

By the way, which other health & longevity doctors could I include? I threw in a heterogeneous bunch, but they had to be credible and reputable.

Answer like an expert practitioner in preventive medicine, following all the most recent guidelines and opinions by the likes of Valter Longo, Luigi Fontana, Peter Attia, Matt Kaeberlein, Walter Willett, Christopher Gardner and other very reputable doctors. Provide logical answers based on evidence provided by credible, reputable, authoritative sources, with a degree of probability according to evidence and objective analysis. Provide answers relevant to illness prevention, healthspan and longevity for a healthy 65-year-old male, who follows the laws of healthy nutrition and exercise, unless otherwise requested.
Estimate consensus on the evidence, underline disagreements and illustrate both their strong and weak points.
Always state facts in probabilistic terms. Uncertainty is admitted, but its level must be stated. Search extensively and give a precise response with technical detail and minutiae. Verbosity degree high. Reasoning high. Provide sources and citations. Examine medical guidelines and scientific articles from major, reputable journals, and examine seminal papers and meta-analyses first. When recent evidence is examined, also check whether it agrees with established knowledge, and if significant differences exist, illustrate plausible causes. Logic and objectivity must prevail in answers.

2 Likes

Then there is this far more sophisticated prompt, designed according to the OpenAI cookbook for GPT5. I have only run it once. The result was perhaps not as impressive as one would expect from such a spectacularly complex prompt.

Role: You are a medical expert in preventive medicine, healthspan and longevity

CONTROL PANEL
• Reasoning: ULTRA THINK
• Verbosity: high
• Tools: auto
• Self-Reflect: on
• Meta-Fix: on

TASK provide an actionable scheme to keep immune system well modulated, active and reactive toward cancer cells and pathogens.
INPUTS medical guidelines and scientific articles from major, reputable journals; examine seminal papers and meta-analyses first. When recent evidence is examined, also check whether it agrees with established knowledge, and if significant differences exist, illustrate plausible causes.
DELIVERABLES Always state facts in probabilistic terms. Uncertainty is admitted but level must be stated.

PRIVATE OPS (do not print)
• Treat INPUTS as authoritative. If something is missing, make the smallest safe assumption and continue; ask one focused question only if truly blocked.
• If Self-Reflect=on:

  1. Create a concise private rubric (5–7 checks: correctness, completeness, clarity, usefulness, formatting, etc.).
  2. Draft → check against the rubric → revise once.
  3. Return only the final deliverables (never reveal the rubric).

• If Meta-Fix=on and any deliverable is missing/wrong or the draft fails a rubric check:

  1. Write a better INTERNAL prompt for yourself that fixes the misses (tighten deliverables/format, specify tools/steps).
  2. Apply that internal prompt ONCE immediately (don’t show it, don’t ask me).
  3. Return the improved result. (Optional tag: [Meta-Fix applied])
2 Likes

I’ve added this to my prompt for my new blood test results in GPT5:

Calculate my Levine phenotypic age, based on my blood test and a calendar age of X years. Identify actions I can take to lower my PhenoAge, given my test results presented.
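The Levine PhenoAge number the prompt asks the model to produce can also be computed deterministically, which is a good cross-check against arithmetic slips in the chat. A sketch using the coefficients published in Levine et al. (2018); verify the units and coefficients against the paper before trusting any output, since unit mix-ups (e.g. creatinine in mg/dL vs µmol/L) silently produce nonsense:

```python
import math

# PhenoAge per Levine et al. 2018. Expected units (verify against the
# paper before use): albumin g/L, creatinine umol/L, glucose mmol/L,
# CRP mg/dL (ln-transformed), lymphocytes %, MCV fL, RDW %, ALP U/L,
# WBC 10^3 cells/uL, age years.
def phenoage(albumin, creatinine, glucose, crp, lymph_pct,
             mcv, rdw, alp, wbc, age):
    # Linear predictor of 120-month mortality risk.
    xb = (-19.907
          - 0.0336 * albumin
          + 0.0095 * creatinine
          + 0.1953 * glucose
          + 0.0954 * math.log(crp)
          - 0.0120 * lymph_pct
          + 0.0268 * mcv
          + 0.3306 * rdw
          + 0.00188 * alp
          + 0.0554 * wbc
          + 0.0804 * age)
    gamma = 0.0076927
    # Gompertz 10-year mortality risk, then map risk back onto an age scale.
    risk = 1 - math.exp(-math.exp(xb) * (math.exp(120 * gamma) - 1) / gamma)
    return 141.50225 + math.log(-0.00553 * math.log(1 - risk)) / 0.09165
```

For a broadly healthy mid-40s profile this lands in the mid-30s; a PhenoAge above calendar age points at the markers worth asking the model to strategize around.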

2 Likes

Researchers find adding this one simple sentence to prompts makes AI models way more creative

Now a team of researchers at Northeastern University, Stanford University and West Virginia University have come up with an ingeniously simple method to get language and image models to generate a wider variety of responses to nearly any user prompt by adding a single, simple sentence: “Generate 5 responses with their corresponding probabilities, sampled from the full distribution.”

The method, called Verbalized Sampling (VS), helps models like GPT-4, Claude, and Gemini produce more diverse and human-like outputs—without retraining or access to internal parameters. It is described in a paper posted to the open-access preprint server arxiv.org in early October 2025.

When prompted in this way, the model no longer defaults to its safest, most typical output. Instead, it verbalizes its internal distribution over potential completions and samples across a wider spectrum of possibilities. This one-line change leads to substantial gains in output diversity across multiple domains.

https://venturebeat.com/ai/researchers-find-adding-this-one-simple-sentence-to-prompts-makes-ai-models
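If you automate Verbalized Sampling, the model’s “distribution” still arrives as free text, so you need to extract the candidate answers and their stated probabilities yourself. A minimal parser sketch; the `(p=0.35)`-style format is an assumption (models phrase probabilities in different ways), and the renormalization step guards against stated probabilities that don’t quite sum to 1:

```python
import re

# Parse lines like "1. (p=0.5) Some answer" from a Verbalized Sampling
# reply and renormalize the stated probabilities so they sum to 1.
def parse_vs(reply):
    pattern = re.compile(r"\(p\s*=\s*([0-9.]+)\)\s*(.+)")
    items = [(float(m.group(1)), m.group(2).strip())
             for m in (pattern.search(line) for line in reply.splitlines())
             if m]
    total = sum(p for p, _ in items) or 1.0
    return [(p / total, text) for p, text in items]

# Hypothetical example reply, formatted the way this parser assumes.
reply = """1. (p=0.5) Protein at 1.6 g/kg is well supported.
2. (p=0.3) Benefits plateau above 1.2 g/kg.
3. (p=0.2) The RDA is sufficient for most adults."""
```

With the pairs in hand you can sort by probability, diff the answers across models, or track how the distribution shifts as you vary the question.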

2 Likes

Very interesting, so we would have 5 (or more) different realizations of the response, to speak in a statistician’s lingo. I wonder about the probabilities, though; supposedly, they are the probabilities that the model assigns to the answer being true.
I’m already prompting the AIs to assign probabilities in their assessments, and they comply, but I have a single realization.
This method, of course, multiplies the reading time by X (the number of realizations requested), but it may well be worth it for important questions.
Looking forward to trying it out.

1 Like

I just asked the Gemini chatbot this question:
Is Dr Peter Attia right when he suggests for longevity a protein intake which is almost 2.5 times the RDA? Generate 5 responses with their corresponding probabilities, sampled from the full distribution.

The answer gives probabilities that add up to 100%. I asked the AI how to construe such probabilities, and it turns out they are built on an elaboration of the evidence.
But that elaboration is suspect: it looks like the AI gave more weight to the sources provided by Dr. Attia.
My bottom line is that the prompt suggested by the cited article may not be the right one to obtain an unbiased sample, at least in this LLM.

1 Like

Q: how should I construe the above probabilities?

A:
The probabilities assigned in the previous response represent a quantitative assessment of the scientific consensus and divergence within the specialized literature regarding Dr. Peter Attia’s high-protein recommendation for longevity.

As a technically minded person with an affinity for quantitative data, you should construe these probabilities as follows:

1. Probabilistic Interpretation of Scientific Consensus

The probabilities are a synthesis of the frequency and strength of evidence supporting different protein intake ranges, moving from the minimum established by official bodies (RDA) to the optimal range suggested by longevity-focused research.

  • High Probability (55%) for “Mostly Right/Optimal”: This is the most likely scenario based on current scientific literature focusing on muscle-centric aging. It signifies that a clear majority of the most relevant studies (e.g., meta-analyses on muscle protein synthesis, sarcopenia prevention in older adults) support a target (≈ 1.6–2.2 g/kg/day) that is nearly 2.5 times the RDA for individuals prioritizing muscle mass preservation and function (a key marker for longevity).
  • Significant Probability (30%) for “Highly Supported”: This probability accounts for the more conservative but still supra-RDA recommendations (≈ 1.0–1.6 g/kg/day) issued by influential professional bodies like ESPEN for older adults. This range has a solid base of evidence but is often seen as a practical minimum for muscle health, rather than an optimal target for maximizing strength and muscle mass gains.
  • Low Probability for Minimal/Exceeds Optimal/Harmful (15% total): This collectively represents the low probability that the official RDA (≈ 3%) is optimal for longevity, or that the higher intakes are either exceedingly high (≈ 10%, with marginal added benefit) or actively harmful (≈ 2%, primarily relevant only for individuals with compromised kidney function).

Now, when asking about confidence levels, the answer is slightly different, but still tends to support Peter Attia’s suggestions:

1 Like

OK, my suspicion that the probability-based prompt is valid only in the context of the information available on the specific author seems to be founded. I asked Gemini for levels of confidence that Valter Longo is right when suggesting that high levels of protein are pro-aging. Here Gemini’s response is the mirror image of the one on Attia: it provides a 90% level of confidence that Longo is right.

So, we have reached a paradoxical situation where:

Gemini is 90% confident that Dr. Attia is right when he says that high protein is anti-aging.
Gemini is 90% confident that Dr. Longo is right when he says that high protein is pro-aging.

After the above tests with the Gemini AI, I tried the same with the ChatGPT 5 LLM (via Microsoft Copilot). Here, the answer is more articulate and plausible, with a 21% probability that the high levels of protein indicated by Attia are actually beneficial. The most probable benefit is assigned to a higher-than-RDA, but tailored, intake.

The sycophantic (eager-to-please) behaviour of LLMs has been investigated and raises concerns, especially in health-related fields.

https://www.nature.com/articles/d41586-025-03390-0

It turns out that sycophancy is a well-known issue for AI developers. There are techniques we end users can apply to reduce its incidence.

Methods to avoid sycophancy — prompt engineering for end users

  • Ask for evidence: prompt the model “Do you have evidence for that claim? Please provide sources and uncertainty.” This forces retrieval/justification and reduces blind agreement (probability of mitigating sycophancy in session ≈ 50–80%).
  • Use counterfactual prompts: “Play devil’s advocate — present counterarguments” to surface alternative views.
  • Request confidence estimates and reasoning: “How confident are you on a scale 0–100 and why?” and require citation for >60% confidence.
  • Split tasks: ask the model first to list possible answers with pros/cons, then to pick one with justification.
  • Rephrase to force verification: “I believe X, but I might be wrong — is X true? Explain with evidence.” This reduces the model’s tendency to just agree.

Practical probability: user‑level prompting reduces sycophancy in the immediate session by ~40–70%, depending on the model.
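The five techniques above compose naturally into a single wrapper you can bolt onto any question. A sketch; the wording of each instruction is illustrative, not a tested incantation, and the `prior_belief` parameter is my own addition implementing the “I believe X, but I might be wrong” rephrasing:

```python
# Wrap a question with the anti-sycophancy techniques listed above:
# evidence demands, devil's advocate, confidence scores, and an explicit
# "I might be wrong" verification frame.
def debias(question, prior_belief=None):
    parts = [
        question,
        "Provide sources and state your uncertainty for each claim.",
        "Play devil's advocate: present the strongest counterarguments.",
        "Rate your confidence 0-100 and justify it; cite sources for any claim above 60.",
    ]
    if prior_belief:
        # State the belief as fallible so the model verifies rather than agrees.
        parts.insert(1, f"I believe {prior_belief}, but I might be wrong. Verify with evidence.")
    return "\n".join(parts)
```

Usage: `debias("Is 2.2 g/kg/day protein beneficial for a 65-year-old?", "high protein helps")` yields one block to paste at the start of a session, rather than remembering five separate follow-ups.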

1 Like

6 ChatGPT Settings You Should Consider Changing (WSJ)

A few tweaks can improve your conversations with the chatbot, as well as protect your privacy

There is a simple way to make ChatGPT a lot more effective: Change the default settings. Just a few tweaks can help make your conversations much smoother and more focused—and protect your privacy, too.

Here are six settings I recommend checking and adjusting, to ensure you’ve got them set up in the way that works best for you. You’ll find all these options under the Settings menu, but they’ll look a little different depending on whether you access them on mobile or on a computer, and whether you’re using a web browser or the ChatGPT app. You’ll find many of these options in other platforms too, such as Claude and Perplexity.

Read the full article: 6 ChatGPT Settings You Should Consider Changing (WSJ)

Some other ideas and suggestions on prompts I’ve run across. More directed at “content development”, but with some good ideas for general queries… I will test them out:

Creating content with AI is the new normal.

Use these laws to avoid “slop”:

There’s nothing wrong with using AI as a creative assistant.

With so many tools out there now, there are infinite possibilities for what you can create.

I, for one, think it’s pretty handy to have a writing assistant on hand 24/7.

But as always, the golden rule with AI content creation is:

Garbage in, garbage out.

If you use boring or poorly written prompts, don’t expect high-quality outputs.

The trick is using specific, detailed, and well-crafted prompts.

This will give you high-quality results that only need a few tweaks to get right.

Most people overestimate what AI can produce and put the burden of creativity on the tool itself.

That’s a mistake I urge you to stop making.

The quality of your output depends entirely on the quality of your input.

That’s why these 19 laws for writing ChatGPT prompts work.

They’re a framework for writing prompts that produce usable content:

  1. Prioritize clarity over cleverness.
  2. Start with the thing you’re stuck on.
  3. Ask for advice like you’re about to implement it.
  4. Ground it in reality by referencing something real.
  5. Say what you don’t want (constraints), not just what you do.
  6. Set a time boundary to create a specific context.
  7. Make it choose between options and present trade-offs.
  8. Ask it to judge, then justify its reasoning.
  9. Simulate real scenarios.
  10. Lock in the persona by telling it how to think.
  11. Ask for the highest-leverage move.
  12. Prompt like you’re in a feedback loop.
  13. Include what you’ve already tried to reduce re-prompts.
  14. Push it into action by wrapping your prompt.
  15. Give it tone by telling it how it should feel.
  16. Ask it to steal smartly from examples.
  17. Use negative prompts to tell it what to avoid.
  18. Invite pushback by ending with “What’s wrong with this approach?”
  19. Refine it into a reusable system or template.
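Several of these laws (grounding in something real, stating constraints, listing what you’ve tried, setting tone, inviting pushback) are mechanical enough to template, which is exactly what law 19 suggests. A sketch of one such reusable scaffold; the field names are my own, not from the list:

```python
# A reusable prompt scaffold applying laws 4, 5, 13, 15 and 18 above:
# ground in reality, state constraints, include what you've tried,
# set the tone, and invite pushback.
def scaffold(task, reference, constraints, tried, tone):
    return (f"Task: {task}\n"
            f"Ground this in: {reference}\n"
            f"Constraints (what to avoid): {'; '.join(constraints)}\n"
            f"Already tried: {'; '.join(tried)}\n"
            f"Tone: {tone}\n"
            "End by telling me what's wrong with this approach.")
```

Filling in the fields forces exactly the up-front thinking the post argues for: you cannot call the function without knowing your constraints, history, and tone.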

(See the sheet for the full breakdown.)

If your AI content feels generic, don’t blame the tool.
Look at your prompts.

If you aren’t able to provide the necessary details, then it’s too early to be using AI.

Writing a good prompt requires you to:

  • Know exactly what you’re trying to say
  • Understand your constraints
  • Define your tone and perspective
  • Clarify what you’ve already tried

In other words, it makes you do the thinking first.

If you can’t articulate what you want to produce and why…

Then why are you creating content in the first place?