Use of AI in posts

By no stretch of the imagination is this a scientific forum.
It is a discussion forum.

Views and pointers on life extension are discussed.
People bring us up to date on the latest news and papers.
It is up to us to decide whether to follow up with our research.
AI is much better now, and many models, such as Google Gemini, provide links to papers for every opinion they express.

I am open-minded about people’s opinions and anecdotal evidence.
Rapa News is just a daily jumping-off point for further personal research.
Don’t get your skivvies in a bunch if you don’t like a post; just carry on—no need to block or make impolite criticisms.

I personally like other people to do some of the work, especially in summarizing long videos and blogs.

Bottom line: I don’t personally care if you use AI or not. It is helpful to provide the name of the AI used, the original query, and the relevant references.
I plead guilty to not always supplying all three of these in my posts.

11 Likes

Thank you for this. However, I think this is one of the most scientifically well-grounded forums on the net. We should put effort into not posting slop. I don’t think I post slop (AI-generated or otherwise), but I am happy to debate that.

3 Likes

I don’t think a policy is needed. BUT, an AI-generated synopsis is a form of tertiary (or secondary) information, like encyclopedia articles, Wikipedia, or review articles. I may have missed it, but I haven’t seen any of these posts make a discovery in which the person submitting offers up a primary source from the AI-generated synopsis that is relevant to the conversation. And it is also rare that anyone offers any critical thinking about its relevance to the topic. It sort of just sits there and deflates the room. It begets laziness.

I think I am probably the person who has posted the most AI-derived content in the last week, so I need to know what people want. Even AI content takes some effort (if you do it properly). I am not in any way stressed; in fact I am, unsurprisingly, quite drunk. However, if people don’t want the content, it is no effort not to provide it.

2 Likes

Well, that makes two of us. If people like or find such posts useful, more power to them. The only reason I brought it up is that I thought dealing with the issue would improve the user experience on this site. But I may have misjudged the situation, or my perceptions are skewed, in which case please ignore and carry on; sorry for the interruption.

1 Like

I am entirely happy that you brought the issue up. I need to know what people find useful. I think what I post is useful. It actually takes time to get an AI summary of a video. If people don’t want it, then OK.

So, I’m nonpartisan; do you care?
I don’t give a rat’s a$$ what your political views are.
I thought we had nipped that kind of post in the bud long ago.
Can we keep to the subjects of rapamycin and life extension?

1 Like

I think you misunderstood? We were not referring to the political party “Democrat”, but rather to the generic meaning of small-“d” democrat; and, by the way, John H. is British, so he could hardly be a US Democrat. FWIW, I’m a registered Independent, but none of these posts are political. I’m a “democrat” rather than a proponent of “autocracy”, i.e., we’ll go along with whatever the majority determines.

2 Likes

Sorry!
As Emily Litella said, "Never mind!"

1 Like

I use ChatGPT on a daily basis for my business and I notice it more than ever, especially in response to chemistry and biology queries. I also use Deep Research mode and it’s still an issue there.

2 Likes

Perhaps an ideal solution:

  • Special markdown block for AI ([AI] ... [/AI])
  • Users have an option in settings to hide/collapse posts that include AI blocks

But I don’t know how difficult this would be to implement.
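
For illustration only, a post following that convention might look something like the sketch below; the tag name, attributes, and model name are just placeholders for whatever the forum settles on, not an existing feature:

  [AI model="Gemini" prompt="Summarize the linked paper on rapamycin dosing"]
  ...pasted AI summary, together with the references it cited...
  [/AI]

That would let readers (or a future forum setting) spot and collapse the AI-generated portion at a glance.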

2 Likes

I really appreciate the respectful discussion here. Thank you all.

I think we’re all grappling with this, and all the views I’ve seen here are thoughtful and reasonable perspectives.

It might be that the best way forward, given the current reliability of AI, is to adopt the convention that we mark posts or threads as AI-generated when they are, and leave the rest to the reader.

1 Like

I am a democrat with a small “d”. In British English this means supporting not a political party but the principle of democracy. Hence it was not a political statement, but a statement that I will be guided by what the majority (of participants in the forum) wish to see. British English and American English have subtle differences which at times cause misunderstanding. This was one of them.

4 Likes

I tend to be pretty irritated by low quality ChatGPT slop copy pasted by people who don’t understand the underlying issue. It spreads misinformation because it tends to hallucinate.

However, I have had pretty good luck with perplexity.ai as it does citations. I’ve heard good things about NotebookLM too as you can upload your own papers.

The reports these tools create can give a good overview if you are doing your own research. However, even Perplexity can and will hallucinate and say things that are NOT in the cited papers/websites.

For that reason, I don’t like unvetted AI content from someone who just pastes in a question and does not vet the output.

2 Likes

Our time is a limited resource. An AI-generated answer can easily be too long. It would be great if answers were concise and carried references.

I hope to get correct and interesting information from the discussions here. For me, it does not matter whether a post comes from the “mind” of an AI or directly from the mind of a person, as long as it is short, correct, and referenced. Before posting AI content, it is ethically good practice to check and provide the references.

I agree that having a tag would be an appropriate solution. The models are very useful for things like summarising papers and YouTube video transcripts. (I like to get the key points without introductions, chit-chat, being interrupted by ads, and being told to smash the like button, thumbs up, comment and subscribe!)

However, I also don’t enjoy seeing a long entirely AI-generated post. The thing is, all of us have these tools, so pasting in whatever it splurges out isn’t very useful. This is especially true if the prompt is not included with the post. As we know, you can bully the models into saying anything, and they also very badly want to “please” you. So the reliability of the information varies widely, especially depending on how you prompt it.

Like, if I ask a question and someone says "well, here’s what Grok says about it… " what use is that? I can also ask Grok those same questions myself. I think the reason we’re here is to get different perspectives from people with different expertise. For example, just yesterday, I was reading a post from a dentist/MD who was commenting on the theories of rapamycin re-growing gums, the Kaeberlein eLife study, pocket depth, etc., and the dentist said it was effectively BS and that there are problems with the animal models, etc. That’s an incredibly useful insight, from a real human who has looked in thousands of mouths and evaluated thousands of dental X-rays.

Here, in this forum, we have doctors who have treated patients with lots of diseases, basic researchers (like me) who actually do the experiments and know how the data is generated, and doctors who have prescribed rapamycin. All of that still beats AI by a long way. On the other hand, the AI is only as good as the data it was trained on, and a lot of the material it’s trained on is flawed or incorrect. It will take all the papers about stem cell exosomes and tell you that they’re amazing, because a lot of the publications are pure hype. Take an actual expert (like me, who sits on advisory committees, goes to those conferences, and peer-reviews the studies and grants) and I believe I have a much more nuanced view to offer.

I think overall a good policy would be to have something like authorship on a journal. You can use AI to construct your post etc, but ultimately the user posting it should take responsibility. If the AI included references, you should check that they’re accurate and actually support the claims made. If there are chunks of text from AI, I think they should be included as a quote, potentially marked as AI-generated.

5 Likes

AI is good when used as an assistant to come up with more effective ways of communicating ideas. I’m not a fan of AI replacing people, as it doesn’t yet do the job right.

If you believe in Science (which we all do), it is extremely hard to be binary about the utility of AI (LLMs) in day-to-day life and beyond. LLMs are commodities. My views have evolved as I use them. These models depend heavily on the user-generated prompts, and they do have personalities.
For example, ChatGPT in the context of this forum can be chatty and may not be highly accurate.
For factual research, I tend to lean on Perplexity, which scans the entire web for the latest references.
ChatGPT, Grok and Claude are several months behind. ChatGPT tends to be more interactive and tries to please the user; it is geared to hook and engage users into spending more time on ChatGPT. It does hallucinate if you deliberately try to mislead it. Then there are users out there who intentionally feed these models misinformation to contaminate their learning.
But if used correctly, Perplexity can synthesize an up-to-date summary along with references and can even analyze complex data. It does a pretty damn good job!
It’s premature to be totally dismissive of them. Nor should we blindly rely on them.

I agree. It used to be that it would hallucinate sources: it would provide a link, and the title it gave the link was completely unrelated to the content at that link. Now it can think, find sources for and against its idea, judge which of them appears most accurate, then consider what that means in relation to its idea and the original request, and do this continuously until the initial request is resolved.

But even then it tends to ramble on, or miss important points.

It’s also super valuable for summarizing videos you don’t have time to watch. The browser extension YouTube Summary with ChatGPT & Claude lets you feed the transcript of a video into an AI with a single click and get a summary.