Published MAR 6, 2023 7:00 AM
This is related, but have you seen these chat AIs making up fake references that don’t exist anywhere? My recent searches have turned up many citations to science articles that I can’t find anywhere else. Obviously, this is concerning.
Soon I will be giving a talk on Subtle Neurodevelopmental Disorders in Youth. ChatGPT told me that there is an article, “Subtle Neurodevelopmental Delays and Their Manifestation as Aggressive Behaviour in Early Childhood: A Systematic Review” (2020). As far as I can tell, this is totally made up.
Yes, it’s referred to as hallucinating in this context.
From the GPT-4 announcement: "Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it “hallucinates” facts and makes reasoning errors). Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.
While still a real issue, GPT-4 significantly reduces hallucinations relative to previous models (which have themselves been improving with each iteration). GPT-4 scores 40% higher than our latest GPT-3.5 on our internal adversarial factuality evaluations"
There are many cases where you have to verify what it returns.
Yes, I noticed in the early stages that it would just make up references that didn’t exist.
And yes, it still does.
At this point, everything it says needs to be fact-checked before use, or at least quoted with a disclaimer.
Very interesting, the use of the term “hallucinate.” Analogous in many ways to human behaviors. Hmmm.
Yes - I see this all the time. They make up references. I really don’t believe a lot of what the AI bots produce… don’t trust it for anything serious yet.
You can read the full Stanford AI report here: https://aiindex.stanford.edu/report/
Here’s a new AI project I’m working on and having lots of fun with. A reporter with the Times of London interviewed me about it today (I have no idea if they’re going to publish anything about it).
Working with version 4 in researching nutritional supplements to induce iPSC on an organismal level: RIR. I use a large prompt that defines the parameters specifically, in an attempt to prevent confabulations, i.e., inventing things that seem logical but do not actually exist, also known as hallucinations.

As already stated, you must verify every data point provided by the chat AI. When you request a reference or a link to an article, ChatGPT will be glad to provide one. I long ago gave up asking for such links, because so far 100% of them have been unrelated to the work ChatGPT referenced or quoted.

I am convinced that it will continue to get better, and we should all provide rational and reasonable feedback by using the thumbs-up and thumbs-down buttons, which also let you add a comment. When prompts are correctly structured with detailed information, the organizational benefits are dramatic. Sometimes shocking, in a good way!
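For anyone curious what a parameter-defining prompt like that might look like in practice, here is a minimal sketch. All the field names, rules, and the helper function are hypothetical illustrations, not the poster’s actual prompt; the idea is just to spell out scope and anti-fabrication rules up front before the question:

```python
def build_research_prompt(topic, constraints):
    """Assemble a structured prompt that states scope and
    anti-hallucination rules before the actual request.
    (Hypothetical sketch, not the poster's real prompt.)"""
    rules = [
        "Only cite articles you can name with journal, year, and authors.",
        "If you are not certain a reference exists, say so explicitly.",
        "Do not invent URLs, DOIs, or article titles.",
    ]
    lines = [f"Topic: {topic}"]
    lines += [f"Constraint: {c}" for c in constraints]
    lines += [f"Rule: {r}" for r in rules]
    lines.append("Answer the question below within these limits.")
    return "\n".join(lines)

prompt = build_research_prompt(
    "nutritional supplements and induced pluripotency",
    ["peer-reviewed sources only", "human or murine studies"],
)
print(prompt)
```

Even with rules like these in the prompt, every reference the model produces still has to be verified by hand, as noted above.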
What are the best sources you’ve found on how to structure the prompts?
I understand that this is a hot new area… “prompt engineers”
Here are a few of the links that have helped me develop more effective prompts. Although they are helpful, depending on the discipline you are addressing, the most effective way to develop a good prompt is to read the guidelines and then write your own. Be guided by the results, then rewrite your prompt to focus the results so they align with your own desires and requirements. I have just started defining the formats and the sequence of information I want returned. It is not a closed-ended process but a constant refinement.
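One way to define the format and sequence of the returned information, as described above, is to request a fixed JSON shape and then check the reply actually matches it before trusting any field. This is a hypothetical sketch (the key names and helper are my own illustration, not from any linked guide):

```python
import json

# Keys we tell the model to return, in a fixed shape (hypothetical example).
REQUIRED_KEYS = {"summary", "key_points", "references"}

def format_instruction():
    """Text to append to a prompt, pinning down the reply format."""
    return (
        "Return ONLY a JSON object with exactly these keys: "
        "summary (string), key_points (list of strings), "
        "references (list of strings; leave empty if none are certain)."
    )

def validate_reply(raw):
    """Parse the model's reply and confirm it has the requested keys."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply missing keys: {sorted(missing)}")
    return data

# Example of checking a (made-up) reply:
reply = '{"summary": "ok", "key_points": ["a"], "references": []}'
parsed = validate_reply(reply)
```

A check like this catches malformed replies early, but it cannot tell a real reference from an invented one; that part still requires manual verification.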