I agree, I think having a tag would be an appropriate solution. The models are very useful for things like summarising papers and YouTube video transcripts. (I like to get the key points without the introductions, the chit-chat, and being interrupted by ads and told to smash the like button, thumbs up, comment, and subscribe!)
However, I also don’t enjoy seeing a long, entirely AI-generated post. The thing is, all of us have these tools, so pasting in whatever the model splurges out isn’t very useful, especially if the prompt isn’t included with the post. As we know, you can bully the models into saying almost anything, and they very badly want to “please” you, so the reliability of the information varies widely depending on how you prompt them.
Like, if I ask a question and someone says, “Well, here’s what Grok says about it…”, what use is that? I can ask Grok those same questions myself. I think the reason we’re here is to get different perspectives from people with different expertise. For example, just yesterday I was reading a post from a dentist/MD commenting on the claims about rapamycin re-growing gums (the Kaeberlein eLife study, pocket depth, etc.), and the dentist said it was effectively BS and that there are problems with the animal models. That’s an incredibly useful insight from a real human who has looked in thousands of mouths and evaluated thousands of dental X-rays.
Here in this forum we have doctors who have treated patients with many different diseases, doctors who have prescribed rapamycin, and basic researchers (like me) who actually do the experiments and know how the data is generated. All of that still beats AI by a long way. After all, the AI is only as good as the data it was trained on, and a lot of what it’s trained on is flawed or incorrect. It will take all the papers about stem cell exosomes and tell you they’re amazing, because a lot of those publications are pure hype. An actual expert (like me: I sit on advisory committees, go to the conferences, and peer review the studies and grants) can offer a much more nuanced view.
I think overall a good policy would be something like authorship on a journal paper: you can use AI to help construct your post, but ultimately the user posting it takes responsibility for it. If the AI included references, you should check that they’re accurate and actually support the claims made. And if there are chunks of AI-generated text, I think they should be set off as quotes, potentially marked as AI-generated.