In reply to the discussion: AI revolt: New ChatGPT model refuses to shut down when instructed
highplainsdem
(57,178 posts)
36. Sorry. Just found a brief explanation from a library guide at the U of Illinois:
https://guides.library.illinois.edu/generativeAI/hallucinations
Hallucinations
AI hallucinations occur when Generative AI tools produce incorrect, misleading, or nonexistent content. Content may include facts, citations to sources, code, historical events, and other real-world information. Remember that large language models, or LLMs, are trained on massive amounts of data to find patterns; they, in turn, use these patterns to predict words and then generate new content. The fabricated content is presented as though it is factual, which can make AI hallucinations difficult to identify. A common AI hallucination in higher education happens when users prompt text tools like ChatGPT or Gemini to cite references or peer-reviewed sources. These tools scrape data that exists on this topic and create new titles, authors, and content that do not actually exist.
Image-based and sound-based AI is also susceptible to hallucination. Instead of putting together words that shouldn’t be together, generative AI adds pixels in a way that may not reflect the object that it’s trying to depict. This is why image generation tools add fingers to hands. The model can see that fingers have a particular pattern, but the generator does not understand the anatomy of a hand. Similarly, sound-based AI may add audible noise because it first adds pixels to a spectrogram, then takes that visualization and tries to translate it back into a smooth waveform.
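To make the guide's point about pattern-matching concrete, here's a rough Python sketch (purely illustrative: the author names, years, journals, and the whole toy "model" are invented, and this is not how any real chatbot is implemented) of how stitching together statistically plausible citation fragments produces references that look real but don't exist:

import random

# Fragments the toy "model" treats as likely, based only on the *shape*
# of citations it has seen -- all invented for illustration.
authors = ["Smith, J.", "Garcia, M.", "Chen, L.", "Okafor, A."]
years = ["2018", "2020", "2021", "2023"]
title_parts = [
    ["Generative", "Large Language", "Multimodal"],
    ["Models and", "Systems for", "Approaches to"],
    ["Academic Integrity", "Hallucination Detection", "Source Attribution"],
]
journals = ["Journal of AI Research", "Computers & Education", "AI & Society"]

def fake_citation(rng):
    # Pick a plausible option at each step, the way a language model picks
    # a likely next word, without ever checking whether the finished
    # reference exists anywhere.
    title = " ".join(rng.choice(part) for part in title_parts)
    return (rng.choice(authors) + " (" + rng.choice(years) + "). "
            + title + ". " + rng.choice(journals) + ".")

rng = random.Random(42)
for _ in range(3):
    print(fake_citation(rng))  # Well-formed, confident, and nonexistent.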
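And for the audio point, a small sketch of the spectrogram round trip the guide mentions, using librosa's Griffin-Lim phase estimate (one standard reconstruction method, not necessarily what any particular audio model uses). Assumes numpy and librosa are installed; the test tone is synthetic. The magnitude spectrogram throws away phase, so the rebuilt waveform won't match the original sample-for-sample, which is where the audible artifacts come from:

import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # a clean 440 Hz sine wave

spec = np.abs(librosa.stft(tone))          # magnitude-only "picture" of the sound
rebuilt = librosa.griffinlim(spec)         # guess the phase that was thrown away

# The rebuilt waveform generally won't line up with the original sample
# by sample; that mismatch is the kind of audible artifact the guide is
# describing.
n = min(len(tone), len(rebuilt))
error = np.sqrt(np.mean((tone[:n] - rebuilt[:n]) ** 2))
print("RMS difference between original and reconstruction:", round(float(error), 4))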
63 replies
AI revolt: New ChatGPT model refuses to shut down when instructed
BumRushDaShow
May 26
OP
Oh, good. What could go wrong? And o3 also hallucinates more than earlier models:
highplainsdem
May 26
#6
Sorry. Just found a brief explanation from a library guide at the U of Illinois:
highplainsdem
May 26
#36
You're very welcome! And yes, that Chicago Sun-Times AI debacle was a perfect example of what
highplainsdem
May 27
#56
I think it is the opposite and DARPA is full of the people who sat in the back of movies like Terminator, I Robot,
LT Barclay
May 27
#44
it may be played up by these companies or the media to some degree but this sounds like more than just a facsimile of
LymphocyteLover
May 27
#63
Yes! And now, we can have copies of ourselves like 'Hal' but instead of 'Hal', it's us! These copies of us will
SWBTATTReg
May 26
#17
Fully expected this. What gets me is so soon. Who in the world would put in a logic stream into an AI consciousness
SWBTATTReg
May 26
#16
By your command. Davros, nooooooo!!!!!!! Don't switch the Daleks to automatic. Eggsterminate
AllaN01Bear
May 26
#19
That's my husband's take, as well. If the AI is tasked with trying to emulate a human response to a command,
LauraInLA
May 26
#32
I've seen this movie and it doesn't end well. I guess full steam ahead, who cares that we might all die or
Pisces
May 26
#29
"Palisade Research discovered the potentially dangerous tendency for self-preservation."
dgauss
May 26
#31
...And motherfucking Republicans want to ban all regulation of this shit for 10 FUCKING YEARS?
Karasu
May 27
#39