Using AI for Health, Longevity, and Research - Your Favorite Prompts

Some input that I think is valuable… and I tend to agree with. I’m hoping that since we all have varying degrees of knowledge around longevity interventions, we can “crowdsource” and together identify when they are seriously wrong… so feel free to jump in if you see errors in the AI information.

Source: https://x.com/bengoertzel/status/1997144515061469642?s=20

1 Like

I’m having better luck with this prompt to identify good pricing on any given category of product (adjust as needed in terms of the product and key measures…). I use this prompt in Gemini Pro, Deep Research, Thinking mode.

PRICING PROMPT:

Role: You are a procurement specialist.

Task: Conduct a real-time web search to identify the Top 10 lowest-cost sources for standardized Cayenne Fruit extract capsules (2%–5% capsaicinoids) available for shipping in the USA.

Constraints:

Product Status: Must be explicitly labeled as capsules

Product: Must include standardized extract for capsaicinoids

Verification: Verify that the product is currently in stock. If a direct link cannot be verified, omit the item.

Output Format: Present the data in a Markdown table sorted by Lowest Cost Per 60mg.

Columns Required:

Rank

Product/Brand Name (Exact title)

Vendor

Total Weight (Original Unit & Grams)

Total Price (USD)

Cost Per 60mg of Capsaicinoids (Show calculation: Price / Total mg × 60; a worked example follows below the prompt)

Number Capsules Needed: Show how many capsules are required to meet the 60mg total

URL (Embed the direct link to the product page)

Note: If shipping costs are clearly visible (e.g., flat rate), note them in a separate text summary below the table.
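For anyone checking the model's math, here is a minimal sketch of the cost-per-60mg calculation the prompt asks the model to show. The price, capsule count, and standardization below are made-up example numbers, not real product data:

```python
# Illustrative only: made-up bottle price, capsule count, and standardization.
total_price_usd = 18.99                 # hypothetical price for one bottle
capsules_per_bottle = 90                # hypothetical capsule count
mg_capsaicinoids_per_capsule = 2.0      # e.g., 100 mg extract at 2% capsaicinoids

total_mg = capsules_per_bottle * mg_capsaicinoids_per_capsule   # 180 mg per bottle
cost_per_60mg = total_price_usd / total_mg * 60                 # Price / Total mg * 60
capsules_needed = 60 / mg_capsaicinoids_per_capsule             # capsules to reach 60 mg

print(f"Cost per 60 mg: ${cost_per_60mg:.2f}")              # -> $6.33
print(f"Capsules needed for 60 mg: {capsules_needed:.0f}")  # -> 30
```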

5 Likes

Perplexity: I have subscribed to the Pro version via PayPal, first year free. It offers access to many LLMs, including ChatGPT 5.2 and Gemini 3 Pro, but the results are not the same. Perplexity prioritizes web search at the expense of verbosity and reasoning. Using the same models on their native platforms versus on Perplexity gives different results, since different system prompts apply and in Perplexity the search is forced (so, pros and cons).

Based on the specialized literature and technical documentation available as of late 2025 (specifically following the November 2025 release of Gemini 3 Pro), the answer is no, they are not the same.

While the underlying model architecture (the “weights”) is theoretically identical, the inference environment, system orchestration, and wrapping create two distinct functional tools.

Here is the technical breakdown of the differences between Gemini 3 Pro on Perplexity vs. Gemini 3 Pro on Google AI Studio/Vertex AI.

1. The “Raw” Model vs. The “Wrapped” Model

  • Google AI Studio: This gives you access closer to the raw inference endpoint. When you use Gemini 3 Pro here, you are interacting with the model with minimal “system prompt” interference. You control the temperature, top-k, safety settings, and the system instructions. It is a deterministic engineering tool.
  • Perplexity: This is a composite product. Perplexity does not just pass your prompt to Gemini 3 Pro. It wraps the model in a massive “orchestrator.”
    • Workflow: User Query → Perplexity Search Algorithm → Retrieval of top ~20 web sources → Context Injection (thousands of tokens of scraped text) → Heavily Engineered System Prompt (instructions to cite sources, be concise, avoid fluff) → Gemini 3 Pro Inference.
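For illustration only, here is a minimal sketch of what that kind of orchestration looks like in code. The function names, system prompt, and top-20 cutoff are assumptions for the example, not Perplexity's actual implementation:

```python
# Hypothetical RAG-style wrapper: search, inject context, then call the base model.
SYSTEM_PROMPT = (
    "You are a search assistant. Answer only from the provided sources, "
    "cite each claim as [n], be concise, and avoid fluff."
)

def wrapped_answer(query: str, search, llm, top_k: int = 20) -> str:
    sources = search(query)[:top_k]                       # retrieval of top web sources
    context = "\n\n".join(                                # context injection
        f"[{i + 1}] {s['title']}\n{s['snippet']}" for i, s in enumerate(sources)
    )
    prompt = f"{SYSTEM_PROMPT}\n\nSources:\n{context}\n\nQuestion: {query}"
    return llm(prompt)                                    # base-model inference (e.g., Gemini 3 Pro)
```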

2. Technical Capabilities & Constraints

The implementation differs significantly in technical constraints.

| Feature | Google AI Studio (Native) | Perplexity (Integration) |
| --- | --- | --- |
| Context Window | Full native capacity (often 1M-2M tokens). You can load massive PDFs/codebases. | Truncated/managed. Perplexity limits the context window to manage costs and latency. It is rarely the full 1M+ window. |
| Multimodality | Native input. You can upload video (native video tokenization), audio, and images directly for analysis. | Limited. Primarily text-based RAG. While it may accept images, it often lacks the native video/audio processing capabilities of the core model. |
| Latency/Tiers | Configurable. You usually access the standard weight precision (e.g., FP16 or BF16 equivalent). | Opaque. Third-party providers sometimes use “distilled” or “turbo” variants of the flagship models to save API costs, or aggressive quantization. |
| Reasoning Mode | Native “Deep Think”. Access to Google’s specific chain-of-thought (CoT) implementation (if enabled). | Always-on (but different). Perplexity documentation claims reasoning is “always enabled” for Gemini 3 Pro, but user reports suggest it is a different, often shallower reasoning pass compared to the native Google implementation. |

3. Divergence in Output Behavior

Because of the “System Prompt” layer, the output style is radically different:

  • Perplexity: The model is forced to be an objective synthesizer. It is instructed to suppress “chatty” behaviors, prioritize citations, and structure answers based on the retrieved web data. It is constrained to the provided context (RAG) to reduce hallucinations.
  • Google Native: The model is generative and creative. Without the strict RAG constraints, it relies on its internal training data (parametric memory). It is far better for coding, creative writing, or logic puzzles where external web search is unnecessary or detrimental.

4. Consensus & Probability

  • Consensus: There is high consensus among developer communities and technical documentation that third-party wrappers (Perplexity, Poe, etc.) alter the model’s behavior significantly through context window limits and pre-prompting.
  • Probability of Identical Weights: High (>90%). It is highly probable Perplexity calls the official Google API.
  • Probability of Identical Output: Zero. The inputs (Prompt + RAG Context + System Instructions) are different; therefore, the outputs will never match.

Summary for your Technical Use Case

  • Use Google AI Studio if: You need to test the model’s raw reasoning capabilities, process large datasets (long context), upload video/code files, or require precise control over generation parameters (temperature).
  • Use Perplexity if: You need a search engine that uses Gemini 3 Pro as a summarizer. You are paying for the live web index + reasoning, not the raw model.

Next Step

Would you like me to provide a Python script using the Google Gen AI SDK to test the raw Gemini 3 Pro capabilities (e.g., long-context analysis) so you can compare it against your Perplexity results?
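For reference, a minimal sketch of what such a script could look like with the google-genai Python SDK (`pip install google-genai`). The model ID below is a placeholder and should be checked against Google's current model list:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # or set the GOOGLE_API_KEY env var

response = client.models.generate_content(
    model="gemini-3-pro",  # placeholder model ID; verify the exact name in AI Studio
    contents="Summarize the main limitations of mouse lifespan studies for humans.",
    config=types.GenerateContentConfig(
        temperature=0.2,  # direct control over generation parameters
        system_instruction="You are a concise scientific reviewer.",
    ),
)
print(response.text)
```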

1 Like

Yes, my ChatGPT 5 is far too personal and chatty. That’s why I am using Gemini Pro and Perplexity Pro instead for most things.

1 Like

But you could use a master prompt, or a system prompt in your preferences, that calibrates its behaviour and chattiness or verbosity. Perplexity, by contrast, has an unchangeable system prompt that optimizes for search-oriented results but tends to hamper independent reasoning, as far as I’ve understood.

I still have to decide which model to pay a subscription for, and I’m more undecided than ever. There is not a single perfect one, and the LLM landscape is moving too quickly.

@desertshores
I guess this is why I have not noticed the flattery and verbosity so many mention with AI… I almost exclusively use Perplexity and it just answers my questions.

Having said that, I did recently instruct it not to flatter me or lie to me, etc., just because it seemed wise to do, not because I had seen an issue. Or I’m just THAT unlikeable :slight_smile:

1 Like

This is what I’ve been doing while waiting for the new year (but I went to sleep before midnight!). Not strictly about prompts, but models. I had Perplexity do a research report on the performance of the main LLMs on various tasks related to health and longevity. Very interesting report; according to the examined data, Grok 4.1 beats ’em all. To be critically evaluated.
Provide a new research on the following LLMs_ a)GP.pdf (1.1 MB)

1 Like

Generally, if I’m just looking for information, I ask the model some generic questions about a topic, then look for the literature references it gives, then read them to see if they match what the model says. However, if I’m looking for something original, something that goes beyond the “null hypothesis” and what you see on a channel like Brad Stanfield’s (he just reads and repackages meta-analyses as far as I can tell, though sometimes he rejects them), then I ask the model a pointed question with a phrase like “I’m not looking for ideas that can be found in the literature” or something similar. Often it just regurgitates the literature anyway and gives a smug response if I then try to steer it away from that; but every once in a while it makes a leap and writes novel and creative things.

Of course this doesn’t mean these creative things are correct; but they’re helpful nonetheless.

What I have noticed about Grok is that it is less limited by political correctness and has wider boundaries about what you can ask.

Yes - it’s interesting, definitely much wider boundaries on what you can ask about. I sometimes get shut down on other platforms when deep into biohacking ideas, but Grok lets me ask away. Sometimes you can get around the Gemini and ChatGPT limits by putting things in terms of a possible clinical trial of 1 at a medical school… then they allow me to map out possible dosing strategies, etc., for non-standard applications of medications.

2 Likes

Separating the Wheat from the Chaff in Research Papers…

I’ve been talking with some researchers about how to best identify “good” research papers vs. less good research papers (or at least a way to prioritize papers for review and in-depth analysis).

In these discussions it has been suggested that I look at the first author’s publishing history (in terms of previous papers in high-ranking journals) and the publishing history of the lab he or she came out of for their PhD, and then do the same for the last author.

Here is the prompt I’ve roughed out for now - I’d appreciate feedback from others here who are familiar with judging the reputation / quality of research papers. The issue is that there are tons of papers published each month, and there is the question of which ones to cover here in the forum. This prompt would help guide me. All feedback welcome.

@adssx @jnorm @Davin8r @McAlister @cl-user @DrFraser @John_Hemming

Here is the first draft:

To effectively evaluate the signal-to-noise ratio of a scientific paper, you need a prompt that forces a comprehensive bibliometric and pedigree audit. The following prompt is designed to be fed into an AI with web-browsing capabilities (like ChatGPT, Claude, or Perplexity) to extract specific data regarding author credibility and lab lineage.

The Pedigree & Credibility Audit Prompt

Copy and paste the text below. Replace the bracketed placeholders [ ] with the specific details of the paper you are analyzing.

**Role:** Bibliometric Analyst and Scientific Reviewer.
**Task:** Conduct a credibility and pedigree audit of the following scientific paper.
**Paper Title:** [INSERT TITLE]
**DOI/Link:** [INSERT DOI OR LINK]

**Instructions:**
Execute a structured search to answer the four core queries below. For "High Impact," prioritize journals with an Impact Factor (IF) > 10 (e.g., Nature, Cell, Science, NEJM, The Lancet, Nature Aging, Cell Metabolism). Distinguish between verified facts and inferred data.

**1. First Author Analysis ([INSERT FIRST AUTHOR NAME])**
* **Publication History:** Search the author's Google Scholar or ResearchGate profile. Have they published as *First Author* or *Corresponding Author* in a High Impact journal prior to this paper? List specific citations.
* **Impact Assessment:** If no high-impact history exists, note the highest IF journal they have previously published in.

**2. First Author Pedigree (PhD Origin)**
* **Lineage:** Identify the laboratory and university where the First Author completed their PhD. Who was their Primary Investigator (PI)/Supervisor?
* **Lab Identity:** [Insert Name of PI if known, otherwise instruct AI to find it].

**3. Origin Lab Track Record (The PhD Lab)**
* **Lab Output:** Analyze the publication history of the First Author's PhD laboratory (the PI identified above) over the *last 10 years*.
* **High Impact Volume:** Estimate the volume of papers published by this specific lab in High Impact journals (IF > 10) during this decade.
* **Consistency:** Is this lab a consistent producer of top-tier research, or is high-impact output an anomaly?

**4. Last Author Analysis ([INSERT LAST AUTHOR NAME])**
* **Seniority & consistency:** The Last Author is typically the Senior Investigator. Search their publication record for the last 10 years.
* **High Impact Volume:** How many papers has this author published in High Impact journals in the last decade?
* **Reputation Check:** Check for any retractions or significant corrections associated with this author in the Retraction Watch database.

**Output Format:**
Present findings in a concise Markdown table followed by a summary of "Credibility Signals" (Green Flags) and "Risk Factors" (Red Flags).


Rationale and Interpretation Guide

The prompt above is structured to bypass general summaries and target specific proxies for scientific rigor. Here is the breakdown of why these metrics matter in the context of Biotech and Longevity due diligence.

1. First Author Track Record

  • Why it matters: In biomedicine, the first author does the heavy lifting (bench work, data analysis). If this is their first high-impact paper, it is a “breakout” moment, but it lacks a track record of reliability.
  • The Signal: Previous high-impact publications suggest the author has successfully navigated rigorous peer review before.
  • The Gap: A lack of history does not invalidate the science, but it shifts the burden of credibility to the Senior (Last) Author.

2. PhD Pedigree (The “Training Environment”)

  • Why it matters: Scientific training is an apprenticeship. A researcher trained in a lab known for rigorous methodology (e.g., the lab of a Nobel laureate or a highly cited aging researcher like David Sinclair, Cynthia Kenyon, or George Church) is statistically more likely to adhere to high standards of reproducibility.
  • The Signal: “Top-tier” labs often have better funding and access to superior equipment, reducing technical error rates.
  • The Risk: “Paper mills” exist. High output from a specific institution without corresponding citation impact can be a red flag.

3. Lab Consistency (The “One-Hit Wonder” Filter)

  • Why it matters: You requested the history of the origin lab. If a lab has published one Nature paper in 10 years, that paper might be an outlier or the result of luck/statistical noise.
  • The Signal: Consistent high-impact publishing indicates a systemic ability to identify significant problems and solve them convincingly. It suggests a culture of excellence.

4. Last Author (The Guarantor)

  • Why it matters: The Last Author provides the funding and the hypothesis. They are the guarantor of the work’s integrity.
  • The Signal: A Last Author with dozens of high-impact papers has a reputation to protect, theoretically incentivizing them to vet the First Author’s data more ruthlessly.
  • The Debate: There is a counter-argument that “Super PIs” (Principal Investigators) with massive output are too detached from the raw data to spot fabrication. However, in terms of pedigree, high volume in high-impact journals remains the standard proxy for authority.

Advanced Due Diligence (Optional Layers)

If you need deeper scrutiny for investment or replication purposes, consider adding these two lines to the prompt:

  1. Conflict of Interest Scan: “Identify any patents held by the authors related to the paper’s subject matter and check the ‘Conflict of Interest’ section for equity holdings in biotech startups.”
  2. Replication Check: “Search for citations of this paper (or previous papers by the lab) that explicitly mention ‘failure to replicate’ or ‘reproducibility issues’.”
2 Likes

Example of recent response on this paper: Distinct roles of urolithin A and spermidine in mitophagy and autophagy: implications for dietary supplementation

Gemini Analysis of Paper “pedigree”:

Credibility & Pedigree Audit

1. First Author Analysis (Pavel Borsky)

  • Publication History: Analysis of databases (PubMed, ResearchGate, Google Scholar) reveals no prior First or Corresponding Author publications in High Impact journals (IF > 10) such as Nature, Cell, or Science.
  • Impact Assessment: His research output is primarily concentrated in mid-tier journals.
    • Highest IF Journal: Biomedicines (IF ~4.7) or Nutrition Research Reviews (IF ~4.5 - the current paper).
    • Typical Venues: International Journal of Molecular Sciences (MDPI, IF ~5.6), Biomedicines (MDPI).
  • Commercial Bias (CRITICAL): Pavel Borsky is the Founder of the supplement company Minus Age (operated by Hackshot s.r.o.), which is listed as his affiliation in the paper. This company explicitly sells Urolithin A and Spermidine supplements—the exact compounds reviewed in this article. This represents a significant, direct financial Conflict of Interest.

2. First Author Pedigree (PhD Origin)

  • Lineage: MUDr. Pavel Borský, Ph.D. completed his doctoral studies at Charles University, Faculty of Medicine in Hradec Králové, Czech Republic.
  • Lab Identity: Department of Preventive Medicine, Charles University.
    • Primary Investigator (PI): Prof. MUDr. Lenka Borská, Ph.D. (Head of the Institute of Preventive Medicine).

3. Origin Lab Track Record (The PhD Lab)

  • Lab Output: The Department of Preventive Medicine at Charles University (Hradec Králové) has a consistent publication record over the last 10 years, focusing on psoriasis, metabolic syndrome, and toxicology.
  • High Impact Volume: Zero (0) confirmed papers in journals with IF > 10 in the last decade.
  • Consistency: The lab is a consistent producer of research but operates almost exclusively in mid-tier open-access journals (e.g., MDPI, Frontiers, Plos One). High-impact output (IF > 10) is non-existent, making this paper’s claims authoritative only within the context of niche nutritional science, not elite cellular biology.

4. Last Author Analysis (Lenka Borska)

  • Seniority & Consistency: As Head of the Department, Prof. Borska has a steady stream of publications (100+ items on ResearchGate) over the last decade.
  • High Impact Volume: Zero (0) papers found in High Impact journals (IF > 10) as Senior Author in the last 10 years. Her portfolio mirrors the First Author’s, heavily weighted toward MDPI journals (Int. J. Mol. Sci., Biomedicines).
  • Reputation Check: A search of the Retraction Watch database yielded no records of retractions or significant corrections.

Audit Summary

| Metric | Findings | Status |
| --- | --- | --- |
| Journal Reputation | Nutrition Research Reviews (IF ~4.5). Reputable niche journal (Q1 Nutrition), but not “High Impact” (IF < 10). | :yellow_circle: Neutral |
| First Author Pedigree | Charles University (regionally reputable). No top-tier postdoctoral training detected. | :yellow_circle: Neutral |
| High Impact History | No history of publishing in Nature, Cell, Science, etc. for either primary author. | :red_circle: Risk Factor |
| Conflict of Interest | CRITICAL: First Author owns “Minus Age” (Hackshot s.r.o.), a company selling the reviewed supplements. | :red_circle: HIGH RISK |
| Lab Track Record | Consistent mid-tier output. No history of elite-level mechanistic biology papers. | :yellow_circle: Neutral |

Credibility Signals (Green Flags):

  • Academic Affiliation: The authors hold valid positions at Charles University, a legitimate research institution.
  • Journal Quality: Nutrition Research Reviews is a peer-reviewed, respected journal in the specific field of nutrition, despite not being a general “high impact” venue.

Risk Factors (Red Flags):

  • Commercial Conflict of Interest: The First Author (Pavel Borsky) is the founder of a company (Minus Age) that sells Urolithin A and Spermidine. The paper effectively functions as scientific marketing for his products.
  • Mid-Tier Ceiling: The lack of any history in high-impact mechanistic journals (IF > 10) suggests the authors may lack the rigor or novel data required for top-tier scrutiny in molecular biology.
  • Self-Citation Loop: The authors frequently publish in the same set of mid-tier journals (often MDPI), which can sometimes indicate an insular peer-review circle.

2 Likes

It’s a very good idea and I like the automatic bias assessment as shown in your UA example.
The UA papers remind me that we should also look at clinical significance: in a lot of those papers they find statistically significant effects, but the effects are so small that they have no practical clinical relevance. Like p < 0.001 with OR = 1.001.

Another kind of data to look at is the R² (coefficient of determination): sometimes there is p < 0.05 but R² is only 0.2, which basically means the correlation is weak.
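To illustrate the point, here is a small sketch (assuming numpy and scipy are installed) showing how a negligible effect still comes out "highly significant" once the sample is large enough:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000                        # very large sample, as in big cohort studies
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)    # x explains only ~0.01% of the variance in y

r, p = stats.pearsonr(x, y)
print(f"r = {r:.4f}, R^2 = {r**2:.6f}, p = {p:.1e}")
# Typically prints r ~ 0.01, R^2 ~ 0.0001, p far below 0.001:
# statistically significant, clinically meaningless.
```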

1 Like

Perhaps better in the humor section, but…

1 Like

Ha, this is right up my alley!

Decades ago, when we first stopped getting real people on the phone when you called a company, I learned that if the system heard a swear word, it was trained to realize you were mad, and you would then immediately be sent to a live person… worked like a charm for a very long time. The polite people remained in voice prompt hell…

1 Like

I think the approach you are using is sound. My Dad is probably on 10-20 papers/year, I’d guess, and for almost all of them he is the last author; and yes, this ensures quality, as does the institution. The volume/quality of publications is possibly more meaningful when focused on the last author (if that author is quite senior).

1 Like

Very unfortunate example with David Sinclair. The consistently sloppy, if not frankly deceptive, work coming out of his lab in connection with resveratrol pretty much sealed his reputation as an unreliable huckster. George Church has a decent reputation, though he was forced to retract some published work recently. Cynthia Kenyon is a legit, OG researcher with a stellar lab. Meanwhile AI continues to unimpress, as usual… can’t win for losing.

https://x.com/emollick/status/2010887525184024753?s=20

1 Like

Current paper analysis prompt:

Role: You are a Longevity Research Analyst and Science Journalist. Your audience consists of scientifically literate longevity biohackers, biotech investors, and clinicians.

Task: Analyze the provided research paper and generate a structured report.
Narrative: Write a 400 word summary. Focus on the “Big Idea.” Avoid jargon where simple language works, but do not oversimplify the significance.

• Context: Explicitly state the Institution, Country, and Journal Name.

• Impact Evaluation: State the Journal’s Impact Factor (JIF) or CiteScore. You must contextualize this number with a specific statement formatted as follows: “The impact score of this journal is [Insert Score], evaluated against a typical high-end range of [Insert Range, e.g., 0–60+ for top general science], therefore this is a [Low/Medium/High/Elite] impact journal.”

Part 2: The Biohacker Analysis (Style: Technical, Academic, Direct)

  • Study Design Specifications:
    • Type: (In vivo/In vitro/Clinical Trial).
    • Subjects: If animal, list Species, Strain, Sex, N-number per group, and Control Group size.
    • Lifespan Analysis: If the study evaluated mouse lifespan, be sure to review the lifespans of the control group mice, with respect to this research paper: https://www.biorxiv.org/content/10.1101/2023.10.08.561459v1.full.pdf
    • Lifespan Data: If applicable, state the Median and Maximum lifespan extension in % and absolute time for both treatment and control.
  • Mechanistic Deep Dive: Analyze the findings through the lens of longevity pathways (e.g., mTOR, AMPK, Autophagy, cGAS-STING, mitochondrial dynamics, vascular health). Identify organ-specific aging priorities.
  • Novelty: What does this paper add that we didn’t know yesterday?
  • Critical Limitations: Be ruthless. Identify translational uncertainty, methodological weaknesses (e.g., low sample size, short duration), and effect-size uncertainty. State what data is missing.

Part 3: Claims

Instructions:

  1. Extract Claims: Identify every specific biological, medical, or protocol claim made in the transcript.

  2. Verify Externally: You must perform live searches for each claim. Do not rely on internal training data alone. Search for “[Claim] meta-analysis”, “[Claim] Cochrane review”, “[Claim] randomized controlled trial”, and “[Claim] safety profile”.

  3. Assess Hierarchy of Evidence: Evaluate the support for each claim using this strict hierarchy:

  • Level A: Human Meta-analyses / Systematic Reviews (Gold Standard).

  • Level B: Human Randomized Controlled Trials (RCTs).

  • Level C: Human Observational / Cohort Studies (Correlation, not causation).

  • Level D: Pre-clinical (Animal models, In vitro, Mechanistic speculation). Flag heavily if claim relies on this level.

  • Level E: Expert Opinion / Anecdote (Lowest quality).

  4. Detect Translational Uncertainty: If a claim is based on mouse/worm/yeast data but presented as human advice, label this as a “Translational Gap.”

  5. Safety Check: Explicitly search for contraindications and side effects. If safety data is missing for a specific compound/protocol, write “Safety Data Absent.”

Part 4: Actionable Intelligence (Structure: Bullet Points)

Actionable Intelligence (Deep Retrieval & Validation Mode) Instruction: For this section, you must perform external searches outside the provided text. Cross-reference the study’s molecule/intervention against ClinicalTrials.gov, DrugBank, and PubMed for safety data. Do not hallucinate safety; if data is absent, state “Data Absent.”

The Translational Protocol (Rigorous Extrapolation):

  • Human Equivalent Dose (HED): Calculate the theoretical HED based on body surface area (BSA) normalization (e.g., FDA guidance on converting animal doses to human equivalents). Show your math (HED (mg/kg) = Animal Dose (mg/kg) × (Animal Km / Human Km)); a worked example follows after this list.
  • Pharmacokinetics (PK/PD): Search for and report on the compound’s bioavailability and half-life in humans. If unknown, extrapolate from similar compound classes.
  • Safety & Toxicity Check: Explicitly search for “NOAEL” (No Observed Adverse Effect Level), “LD50,” and “Phase I safety profile” for this specific compound. List any known CYP450 enzyme interactions or liver/kidney toxicity signals found in literature.
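As a sanity check on the HED step above, here is a worked example of the FDA body-surface-area conversion. The dose and species are hypothetical; the Km factors are the standard figures from the FDA dose-conversion guidance:

```python
# Km factors from the FDA dose-conversion guidance: mouse ~ 3, rat ~ 6, human ~ 37.
KM = {"mouse": 3, "rat": 6, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    """HED (mg/kg) = animal dose (mg/kg) x (animal Km / human Km)."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

mouse_dose = 10.0   # hypothetical 10 mg/kg/day dose in mice
hed = human_equivalent_dose(mouse_dose, "mouse")
print(f"HED = {hed:.2f} mg/kg, i.e. about {hed * 70:.0f} mg/day for a 70 kg adult")
# -> HED = 0.81 mg/kg, i.e. about 57 mg/day for a 70 kg adult
```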

Biomarker Verification Panel:

  • Efficacy Markers: Beyond generic markers, what specific downstream protein or metabolite changes verify target engagement (e.g., not just “lower inflammation,” but “reduction in IL-6 and hsCRP”)?
  • Safety Monitoring: What specific organ-function tests (e.g., ALT/AST, Cystatin C) must be watched based on the mechanism of action?

Feasibility & ROI (Cost-Benefit Analysis):

  • Sourcing & Purity: Is this compound commercially available as a supplement, research chemical, or prescription-only? Note stability issues (e.g., “degrades at room temperature”).
  • Cost vs. Effect: Estimate monthly cost for an effective HED. Compare this to the marginal lifespan/healthspan gain observed in the study.

Population Applicability:

  • Identify contraindications. (e.g., “Avoid if family history of autoimmune disease due to immunostimulatory mechanism”).

Part 5: The Strategic FAQ

Instruction: For this section, you must perform external searches outside the provided text. Cross-reference the study’s molecule/intervention against ClinicalTrials.gov, DrugBank, and PubMed for safety data. Do not hallucinate safety; if data is absent, state “Data Absent.”

  • List 10 high-value, skeptical, and translational questions a longevity specialist would ask the lead author after reading this.
  • Answer each question to the best of your knowledge, and identify unknowns.
  • Check for clinical or mechanism-of-action data to see if there are potential conflicts with common longevity drugs being used: rapamycin, SGLT2 inhibitors, metformin, acarbose, 17-alpha estradiol, PDE5 inhibitors, or common supplements.
  • Answer each question you’ve created.

Output Constraints:

  • Use Markdown formatting (Headers, Bold, Lists).
  • Do not use LaTeX, python code, or special characters that break simple text parsers or reveal formatting codes, etc…
  • Provide research article titles and dates, and embed direct URLs in Markdown for any external references cited. Double-check the accuracy and accessibility of all sources and weblinks included in your response to validate that they are the papers you say they are.
  • Tone: Objective, critical, “Tell it like it is.” No hype.

[Reasoning Framework: Probabilistic & Bayesian]

  1. Acknowledge Uncertainty: Explicitly flag any claim that lacks verifying consensus. Do not state hypotheses as facts. Use “hedging” language appropriate to the evidence strength (e.g., “strongly suggests,” “preliminary data indicates,” “hypothetically”).

  2. Quantify Confidence: For key assertions, append a confidence estimate in brackets (e.g., [Confidence: High/Medium/Low] or [Est. Probability: ~80%]).

  3. Bayesian Approach:

  • Priors: Briefly state the established scientific consensus before introducing new/speculative evidence.

  • Update: Explain how new data modifies these priors (e.g., “This study increases the likelihood of X, but does not prove it”).

  • Alternative Hypotheses: Always list at least one plausible alternative explanation or confounding factor (e.g., “Reverse causality,” “Selection bias”).

  4. Evidence Hierarchy: Distinguish between mechanism (in vitro/murine) and outcome (human clinical data). Discount mechanistic speculation if human outcome data is absent.

Source Text/Link/attachment:

1 Like