Personally, I like shorter summaries. If the summary provides a basic insight into the YouTube video or paper, I can tell if it’s relevant to me, and then if I am interested, I can always take a deeper dive with my own AI queries.
I am going to watch this with interest, and I certainly wish you a lot of success with it. Based on my experiences with AI up until now, I personally don’t trust any AI summary at all, period. The only way I use AI at the moment is to find papers I might not be aware of otherwise, as a labour-saving device. But reasoning, comprehension, drawing conclusions - not a chance. But perhaps I’m a luddite in this case.
Yes, these summaries are very useful. I particularly like the “actionable insights”. Like DS, I prefer shorter summaries to identify areas of interest before embarking on a deeper read. Thank you RapAdmin!!
I don’t trust them for detailed information (always double check), but the high-level stuff generally seems reasonably good. And I’m trying the same prompts and papers on all the AIs to compare them. One might go off in a weird direction, but all of them going off in the same weird direction seems unlikely.
I’m also wondering if I can use one AI to check on the other AI’s summary? Has anyone tried this - fact checking on AI output with another AI?
My goal here is to help boost our “hive mind”… basically getting us all on the same page, asking questions and trying to figure out the issues, potential benefits and downsides of different new approaches that we can come up with, with the help of the AI systems. Let’s see if we can be a template for how AI can speed longevity research as it gets translated to human application.
It seems AI has the potential to really turbo-charge our fact finding and discovery, and ultimately implementation phases of our efforts here.
If you want to do that, starting a podcast is probably a better option. Have one every two weeks that the hive mind centers around afterwards; people learn, and maybe new audiences come here. Currently the posts are going in every direction, which means people don’t learn about prerequisite topics like how to evaluate a study, the evidence hierarchy, the pyramid of evidence, etc. If it’s not official in some respect, people don’t consider it essential listening (and reading), so they haven’t learned about these topics when participating here and don’t know what the forum represents. This idea has a better likelihood of “getting us all on the same page”, IMO.
Ultimately that is what I really care about… is there anything from this new paper that we can use to improve our lifespans and health spans - right?
How about the next step after this? Do you guys have any suggestions for additional steps I might want to take, supported by AI?
One thing I’m considering right now, which seems to work: after I’ve had the AI summarize the research paper and identify actionable insights, I then ask it:
Prompt: review your summary and identify 10 questions that a longevity-oriented biohacker might be expected to ask about this research after reading this summary.
Then I identify the best questions, add any others I can think of, have the AIs answer them, and let it go.
I’m trying to use AI to simplify the process of identifying the key actionable insights, and the risks/rewards, costs/benefits and issues with potential implementation, as quickly as possible. And at the same time making it faster & easier for all of us to either move forward with a given course of action, or decide to move on…
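The summarize → generate questions → answer chain described above can be sketched in a few lines. This is a minimal illustration, not anyone's official workflow: `ask` stands for whatever chat call your AI of choice exposes, and all function names and prompt wording (other than the quoted "10 questions" prompt) are my own assumptions.

```python
from typing import Callable

def summarize(ask: Callable[[str], str], paper_text: str) -> str:
    """Step 1: summary plus actionable insights (prompt wording is illustrative)."""
    return ask(
        "Summarize this research paper in roughly 400 words and list its "
        "actionable insights:\n\n" + paper_text
    )

def generate_questions(ask: Callable[[str], str], summary: str) -> str:
    """Step 2: the follow-up prompt quoted in the post above."""
    return ask(
        "Review your summary and identify 10 questions that a "
        "longevity-oriented biohacker might be expected to ask about "
        "this research after reading this summary.\n\n" + summary
    )

def answer_questions(ask: Callable[[str], str], summary: str, questions: str) -> str:
    """Step 3: have the AI answer the chosen questions against the summary."""
    return ask(
        "Answer each of these questions using the summary below, and flag "
        "any answer that is speculative.\n\nQuestions:\n"
        + questions + "\n\nSummary:\n" + summary
    )

def run_pipeline(ask: Callable[[str], str], paper_text: str) -> str:
    # Chain the three steps; `ask` can wrap any chatbot's API.
    summary = summarize(ask, paper_text)
    questions = generate_questions(ask, summary)
    return answer_questions(ask, summary, questions)
```

Because each step takes `ask` as a parameter, you could also run step 3 against a different model than steps 1 and 2, which fits the cross-checking idea raised earlier in the thread.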
I think the abstract is way more informative. I see directly in a few short words: what the study design was, inclusion criteria, how many people were in the study, when the exposure was measured, what they interacted with, the result.
Scientific summaries seem to have been perfected over hundreds of years.
If you wanted an actionable insight, you could’ve just asked the AI what you learned from the study, i.e. higher CRP is bad, so don’t waste tokens on context - just ask how to keep CRP lower; there’s no need to feed in the entire paper.
The mechanistic papers don’t have any actionable insight unless they were MR studies. It’s unlikely you can really find much actionable information in the current literature; that requires experimentation, i.e., other types of studies.
These chatbots are compute-constrained; you probably need a custom solution with multiple tool calls for every point in the study, with multiple agents and as many reasoning tokens as possible.
I prefer them much, much shorter. If I wanted a 500-word text I would just read the study. Also, graphs are super interesting - when AI becomes capable of adding them, please do.
I have been reading these summaries regularly and very much enjoying them. When I ask an AI to summarize a scientific study for me, I ask for analogies to explain the main points. Since I don’t have a scientific background, the analogies are helpful for me to grasp concepts that I would not otherwise understand. I am not sure how analogies would go over with other forum members because it might seem as if it were being dumbed down, especially for those that already understand the science.
The 400 to 500 word count is right for me. Fewer words would be even better. I usually add “concisely and succinctly” when I ask for a chatGPT report. When a post gets longer than 500 words I simply skip it. Some authors give page after page of notes and citations that I always skip because it’s too much in a single post.
The method of presentation (bullet points, etc.) isn’t so much of a concern as long as it’s clearly written.
Any additional information in a set of links at the bottom of the post is my preference.
Me too. I love the summaries. You are on the hunt for papers that I’d never come across, and you save me the labor of distilling videos, most of which I can’t bear to watch. I thank you for your efforts.
But it’s so bad. It makes consistent mistakes; I have to review everything. If it were my employee, I’d send it home, lol. Honestly though, it needs to be double-checked consistently. And then you check the next day and it’s 180° out again.
I tried using ChatGPT to design a stack protocol and it wouldn’t do it. It leads you up to the point when it’s just about to, then says something like it’s not a physician and can’t prescribe, bla bla bla. I haven’t checked all of them.
I know they are still useful, obviously, but yeah - fact-check them against each other. It’s just shocking how the answers can change in a matter of seconds.
Capricious… that’s what they should have named it.
I ran it through Gemini to fact-check the references. It was pretty good:
Gemini Fact Check:
This is a detailed and well-structured summary and fact-check of a video discussing Ora Biomedical. The references cited are generally correct and support the broad claims they are linked to. However, one link is erroneous and one reference is incomplete/inaccurate in its description.
Here is the assessment of the linked references:
Erroneous and Weak References
1. Erroneous Link
Reference in Question: In the “Claims & Evidence Table,” the supporting link for the radioprotection claim:
Radioprotectors exist (e.g., amifostine in radiotherapy link)
Assessment: WRONG LINK. The provided URL, https://www.ncbi.nlm.nih.gov/books/NBK537187/, leads to an NCBI Bookshelf entry titled “Chronic Obstructive Pulmonary Disease” (COPD). This is entirely unrelated to amifostine or radioprotection.
Correction: A correct link would point to a source discussing amifostine’s use as a radioprotectant. For example, a search for that topic leads to:
2. Inaccurate Reference Description
Reference in Question: In the “Fact-Check of Important Claims,” the supporting link for the claim about topical rapamycin:
Topical rapamycin is in clinical development for skin aging/wrinkles. Check: There are clinical trials and papers on topical rapamycin for aging skin and conditions like facial angiofibromas: see Chung et al. 2019, and a current trial targeting wrinkles/skin aging (e.g., NCT05523306).
Assessment: INACCURATE DESCRIPTION. The links themselves are technically correct, but describing the NCT05523306 link as a current trial is misleading. It may have been current at the time of the video, but a review of the linked page shows:
The study Start Date was September 2022.
The study Status is listed as Withdrawn.
Conclusion: The trial mentioned is no longer active. The Chung et al. 2019 paper remains a valid reference for the concept, but the specific trial link does not support the claim that a “current trial” is underway.
Correct and Supporting References
All other linked references appear correct and are highly relevant to the claims they support.
I agree. It makes so many mistakes that you have to double-check everything, at which point it’s twice the work. I may as well read the papers myself, since that would be less work than checking every single thing without knowing where it might go wrong. Too much work. Do it once, but do it well - read the paper your own d@mn self. That said, I use AI like I’d use a research assistant, to round up papers - it saves a lot of time. In that context - rounding up papers - I like OpenEvidence, which I’ve written a post about before. It’s not perfect - it sometimes cites papers that aren’t relevant - but it’s a great labour-saving device.
Maybe one day AI will be useful in summarizing papers, reasoning and drawing valid conclusions, but from my experience that day is far, far, far away as yet. YMMV.
So what I’m doing right now, which seems to be working, is taking any responses from ChatGPT, dropping them into Gemini, and asking it to “Fact Check” the text. It seems to work quite well, and I’ll be trying it across the other AI platforms.
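That cross-model checking loop can be sketched generically. This is a hypothetical illustration, not a specific product's API: `ask_model_a` and `ask_model_b` stand for the chat calls of two different AIs (say, ChatGPT and Gemini), and the fact-check prompt wording is my own assumption modeled on the "Fact Check" request described above.

```python
from typing import Callable, Tuple

# Illustrative prompt; adjust to taste for your own fact-checking requests.
FACT_CHECK_PROMPT = (
    "Fact check the following text. Verify that every cited link or "
    "reference actually supports the claim it is attached to, and flag "
    "anything erroneous, outdated, or misleadingly described:\n\n"
)

def cross_check(
    ask_model_a: Callable[[str], str],
    ask_model_b: Callable[[str], str],
    source_prompt: str,
) -> Tuple[str, str]:
    """Generate a draft with one model, then fact-check it with a different one."""
    draft = ask_model_a(source_prompt)
    review = ask_model_b(FACT_CHECK_PROMPT + draft)
    return draft, review
```

Keeping the two models as separate callables makes it easy to rotate in a third model as the checker, which is one way to test the earlier point that all the AIs going off in the same weird direction is unlikely.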