Ultimately that is what I really care about… is there anything from this new paper that we can use to improve our lifespans and health spans - right?
How about the next step after this? Do you guys have any suggestions for additional steps I might want to take, supported by AI?
One thing I’m trying right now, and which seems to work: after I’ve had the AI summarize the research paper and identify actionable insights, I then ask it:
Prompt: review your summary and identify 10 questions that a longevity-oriented biohacker might be expected to ask about this research after reading this summary.
Then identify the best questions and have the AIs answer them… add any others I can think of, and let it go.
I’m trying to use AI to simplify the process of identifying the key actionable insights, and the risks/rewards, costs/benefits and issues with potential implementation, as quickly as possible. And at the same time making it faster & easier for all of us to either move forward with a given course of action, or decide to move on…
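The loop described above can be sketched in a few lines. This is a minimal illustration, not a real API: the model is injected as a plain callable so any chatbot backend (ChatGPT, Gemini, etc.) could be wired in, and the function names are made up for this sketch.

```python
# Sketch of the "summarize, then interrogate the summary" workflow.
# `ask` is a stand-in for whatever model call you use; nothing here
# assumes a particular vendor SDK.

QUESTION_PROMPT = (
    "Review your summary and identify 10 questions that a "
    "longevity-oriented biohacker might be expected to ask about this "
    "research after reading this summary."
)

def interrogate_summary(summary: str, ask, extra_questions=()):
    """Generate follow-up questions from the summary, then have the
    model answer the best ones plus any user-supplied extras."""
    questions = ask(f"{summary}\n\n{QUESTION_PROMPT}")
    answer_prompt = (
        f"{summary}\n\nAnswer the best of these questions, plus the "
        f"additional ones listed:\n{questions}\n"
        + "\n".join(extra_questions)
    )
    return ask(answer_prompt)

# Illustration with a stand-in model:
fake_model = lambda prompt: f"[model response to {len(prompt)} chars]"
result = interrogate_summary(
    "Summary: higher CRP predicted higher mortality...",
    fake_model,
    extra_questions=["What dose or threshold was used?"],
)
```

Keeping the model call injectable also makes it easy to run the same loop across several AIs and compare their answers.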
I think the abstract is way more informative. I see directly in a few short words: what the study design was, inclusion criteria, how many people were in the study, when the exposure was measured, what they interacted with, the result.
Scientific summaries seem to have been perfected over hundreds of years.
If you wanted an actionable insight, you could’ve just asked the AI what you learned from the study, i.e., higher CRP is bad. So don’t waste tokens on context: just ask how to keep CRP lower; there’s no need to feed in the entire paper.
The mechanistic papers don’t have any actionable insight unless they were MR studies. It’s unlikely you can really extract much actionable information from the current literature; that requires experimentation, i.e., other types of studies.
These chatbots are compute-constrained; you probably need a custom solution with multiple tool calls for every point in the study, multiple agents, and as many reasoning tokens as possible.
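The idea above can be sketched as a per-claim fan-out: break the study into individual claims and give each one its own dedicated model call with a full reasoning budget, instead of cramming everything into one chat turn. The claim list and the `agent` callable below are stand-ins for illustration, not a real framework.

```python
# Rough sketch: one dedicated "agent" call per claim in the study.
# `agent` is any callable that takes a prompt and returns text.

def check_claims(claims, agent):
    """Run one model call per claim and collect the verdicts."""
    return {
        claim: agent(f"Assess this claim in depth, citing evidence: {claim}")
        for claim in claims
    }

# Illustration with a stand-in agent:
claims = ["Higher CRP predicts mortality", "The effect is causal"]
verdicts = check_claims(claims, lambda prompt: "verdict: needs MR evidence")
```

In a real build, each call could itself be an agent with tool access (literature search, trial registries), which is where the "multiple tool calls per point" suggestion comes in.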
I prefer them much, much shorter. If I want a 500-word text, I would just read the study. Also, graphs are super interesting; when AI becomes capable of adding them, please do.
I have been reading these summaries regularly and very much enjoying them. When I ask an AI to summarize a scientific study for me, I ask for analogies to explain the main points. Since I don’t have a scientific background, the analogies are helpful for me to grasp concepts that I would not otherwise understand. I am not sure how analogies would go over with other forum members because it might seem as if it were being dumbed down, especially for those that already understand the science.
The 400-to-500 word count is right for me. Fewer words would be even better. I usually add “concisely and succinctly” when I ask for a ChatGPT report. When a post gets longer than 500 words I simply skip it. Some authors give page after page of notes and citations that I always skip because it’s too much in a single post.
The method of presentation (bullet points, etc.) isn’t so much of a concern as long as it’s clearly written.
Any additional information in a set of links at the bottom of the post is my preference.
Me too. I love the summaries. You are on the hunt for papers that I’d never come across, and you save me the labor of distilling videos, most of which I can’t bear to watch. I thank you for your efforts.
But it’s so bad. It makes consistent mistakes; I have to review everything. If it were my employee, I’d send it home, lol. Honestly, though, it needs to be double-checked consistently. And then you check the next day and it’s 180° out again.
I tried using ChatGPT to design a stack protocol and it wouldn’t do it. It leads you right up to the point and then, just when it’s about to deliver, says something like it’s not a physician and can’t prescribe, blah blah blah. I haven’t checked all of them.
I know they are still useful, obviously, but yeah, fact-check them against each other. And it’s just shocking how the answers can change in a matter of seconds.
Capricious… that’s what they should have named it.
I ran it through Gemini to fact-check references. It was pretty good:
Gemini Fact Check:
This is a detailed and well-structured summary and fact-check of a video discussing Ora Biomedical. The references cited are generally correct and support the broad claims they are linked to. However, one link is erroneous and one reference is incomplete/inaccurate in its description.
Here is the assessment of the linked references:
Erroneous and Weak References
1. Erroneous Link
Reference in Question: In the “Claims & Evidence Table,” the supporting link for the radioprotection claim:
Radioprotectors exist (e.g., amifostine in radiotherapy link)
Assessment: WRONG LINK. The provided URL, https://www.ncbi.nlm.nih.gov/books/NBK537187/, leads to an NCBI Bookshelf entry titled “Chronic Obstructive Pulmonary Disease” (COPD). This is entirely unrelated to amifostine or radioprotection.
Correction: A correct link would point to a source discussing amifostine’s use as a radioprotectant. For example, a search for that topic leads to:
2. Inaccurate Description
Reference in Question: In the “Fact-Check of Important Claims,” the supporting link for the claim about topical rapamycin:
Topical rapamycin is in clinical development for skin aging/wrinkles. Check: There are clinical trials and papers on topical rapamycin for aging skin and conditions like facial angiofibromas: see Chung et al. 2019, and a current trial targeting wrinkles/skin aging (e.g., NCT05523306).
Assessment: INACCURATE DESCRIPTION. While the links are technically correct, describing the NCT05523306 link as a current trial is misleading. Although it may have been current at the time of the video, a review of the linked page shows:
The study Start Date was September 2022.
The study Status is listed as Withdrawn.
Conclusion: The trial mentioned is no longer active. The Chung et al. 2019 paper remains a valid reference for the concept, but the specific trial link does not support the claim that a “current trial” is underway.
Correct and Supporting References
All other linked references appear correct and are highly relevant to the claims they support.
I agree. It makes so many mistakes that you have to double-check everything, at which point it’s twice the work. I may as well read the papers myself, since that would be less work than checking every single thing without knowing where it might go wrong. Too much work. Do it once, but do it well: read the paper your own d@mn self. That said, I use AI like I’d use a research assistant, to round up papers; it saves a lot of time. In that context (rounding up papers) I like OpenEvidence, which I’ve written a post about before. It’s not perfect (it sometimes cites papers that are not relevant), but it’s a great labour-saving device.
Maybe one day AI will be useful in summarizing papers, reasoning and drawing valid conclusions, but from my experience that day is far, far, far away as yet. YMMV.
So what I’m doing right now, and it seems to be working, is taking any responses from ChatGPT, dropping them into Gemini, and asking it to “fact check” the text. It seems to work quite well, and I’ll be trying it across the other AI platforms.
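That cross-checking workflow is simple enough to sketch. Both models are injected as plain callables here (the real OpenAI and Gemini SDK calls would be wired in by hand); the function names and prompt wording are illustrative assumptions, not anyone’s actual API.

```python
# Sketch of the two-model fact-check loop: one model drafts an answer,
# a second model reviews it. `answerer` and `checker` are stand-ins for
# real model calls.

def cross_check(question, answerer, checker):
    """Draft with one model, then fact-check the draft with another."""
    draft = answerer(question)
    report = checker(
        "Fact check the following text. Flag wrong links, outdated "
        f"trial statuses, and unsupported claims:\n\n{draft}"
    )
    return draft, report

# Illustration with stand-in models:
draft, report = cross_check(
    "Summarize trials of topical rapamycin for skin aging.",
    answerer=lambda q: "draft answer...",
    checker=lambda t: "fact-check report...",
)
```

Swapping which vendor drafts and which checks is a one-line change, which makes it easy to try the same pipeline across several AI platforms.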
I very much like that this study shows that GLPs exert their rejuvenating effects through the hypothalamus.
This is the part of the brain that coordinates messaging with the body in regard to glucose levels, oxygen, heat, cold, etc. Hyperbaric oxygen studies have shown increased SIRT gene expression (a putative longevity gene) in the hypothalamus, amongst other longevity benefits.
When elderly rats are given hypothalamic progenitor stem cells, they live longer and when their hypothalamus is damaged they die younger due to multi-organ failure.
I think there is a lot more positive research to come regarding the hypothalamus, and I am very glad that the possible GLP longevity benefits are being suggested to act through this important part of the brain.
I’ve been perusing the site tonight for the first time in several months and I immediately noticed the AI summaries. I like them a lot! Thank you rap admin
I thought about this topic when I heard my daughter talking about water usage from data centers. There are some videos debating this topic. I almost always get an AI response when I use Brave’s search engine. Here is what it stated:
AI Water Consumption
AI’s water usage is substantial and growing, driven by the energy demands of data centers that power large language models and other AI systems. Globally, data centers consumed about 560 billion liters of water annually in 2025, with projections rising to 1,200 billion liters by 2030 due to expanding AI infrastructure.
Per Data Center and Region
A 100-megawatt U.S. data center, which uses more power than 75,000 homes, consumes approximately 2 million liters (528,000 gallons) of water per day—equivalent to the daily use of 6,500 households.
Some large data centers use up to 5 million gallons (18.9 million liters) per day, comparable to the water needs of a town with 10,000 to 50,000 people.
Per AI Interaction
Each 100-word AI prompt (e.g., to ChatGPT) is estimated to use 519 milliliters of water—roughly one standard water bottle—according to studies from the University of California, Riverside and The Washington Post.
For a GPT-4 response, estimates diverge widely: roughly 519 milliliters per 100 words by the accounting above, versus only about 3.5 milliliters per response when estimated from energy use (1.75 watt-hours × 2 ml per watt-hour).
GPT-3 inference for 10–50 queries consumes about 500 milliliters of water, depending on location and cooling methods.
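The quoted per-response arithmetic is easy to sanity-check; the two input figures below are the ones given in the text, and the large gap between the per-response and per-100-words numbers reflects very different accounting assumptions rather than a calculation error.

```python
# Reproducing the quoted energy-based estimate of water per AI response.
energy_per_response_wh = 1.75  # quoted GPT-4 energy per response (Wh)
water_per_wh_ml = 2.0          # quoted cooling-water intensity (ml/Wh)

per_response_ml = energy_per_response_wh * water_per_wh_ml
print(per_response_ml)  # prints 3.5
```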
Sources of Water Use
Scope 1 (Onsite cooling): Water used directly in data centers for cooling servers via evaporative cooling towers or liquid cooling systems.
Scope 2 (Power generation): Water consumed in generating electricity—thermal and hydroelectric plants account for most of this, especially in regions reliant on fossil fuels.
Scope 3 (Supply chain): Up to 2,200 gallons (8,300 liters) of ultra-pure water are used to produce a single microchip.
Future Projections
By 2027, AI’s annual water consumption could reach 4.2 to 6.6 billion cubic meters—equivalent to the annual water use of Denmark or half the UK’s total withdrawal.
By 2028, AI data centers may consume 1,068 billion liters annually, according to a Morgan Stanley report.
Efforts to Reduce Impact
Liquid cooling (direct-to-chip or immersion) reduces water use significantly compared to evaporative cooling.
Use of recycled or non-potable water is increasingly common in drought-prone areas.
Tech giants like Google and Microsoft are investing in water replenishment programs to achieve “water positive” operations by 2030.
Despite efficiency gains, the exponential growth of AI continues to strain global water resources, particularly in water-stressed regions. Transparency and sustainable cooling technologies are critical to managing this hidden environmental cost.
Just something to think about. I suppose if they built them near the ocean, they could use that water? I don’t know anything about that though.