[SystemSafety] AI Hallucination Cases

Prof. Dr. Peter Bernard Ladkin ladkin at causalis.com
Wed Jul 16 16:23:24 CEST 2025


On 2025-07-16 16:00 , Paul Sherwood wrote:
>
> I've been reading up on the general topic of AI and can recommend .... [2]
>
> [which] points out that we have all been hoodwinked into anthropomorphising LLMs, e.g. using the 
> word "hallucinate" which creates the impression there's a mind in the matrices. They recommend 
> talking about  "stochastic parrot" or "mathy maths" instead of LLMs, and note that the entire 
> purpose of an LLM is just to "make s**t up" on the basis of weighted averages from training data.
>
> [2] https://www.amazon.co.uk/AI-Fight-Techs-Create-Future/dp/B0DQQD5XML
>
Everything I have read by Emily Bender on the subject has been perceptive as well as incisive. She
is a computational linguist, and of course computational linguists have been dealing with
distributional word representations not just for a couple of years but for a couple of decades, and
with word embeddings and multi-head attention since well before ChatGPT, as methods to solve
computational-linguistic problems. It seems that when ChatGPT was released in November 2022, its
capabilities were perceived to be such that many people in computational linguistics thought their
careers had pretty much ended, because the problems they'd been working on appeared to have been
solved. Bender has been eloquent on that also. BTW, she is the originator of the "stochastic parrot"
categorisation: read the paper here https://dl.acm.org/doi/10.1145/3442188.3445922

I bet that's already one of the most cited papers in all of computer science.
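To make the "weighted averages from training data" point concrete, here is a toy sketch of my own
(it is not from Bender's paper, and it is nothing like a production transformer; the corpus, names
and parameters are invented purely for illustration): a bigram model that generates text by sampling
each next word in proportion to how often it followed the previous word in its training text.

# Toy "stochastic parrot": generate text purely by sampling from
# frequency-weighted continuations observed in the training data.
# Illustrative sketch only -- real LLMs use learned embeddings and
# multi-head attention over vastly larger corpora, but the generation
# principle (predict the next token from statistics of the training
# text) is the same.
import random
from collections import defaultdict, Counter

training_text = (
    "the system failed because the sensor failed and the operator "
    "trusted the system more than the sensor"
)

# Count which word follows which in the training data.
followers = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    followers[current][nxt] += 1

def parrot(start, length=10):
    # Repeatedly sample the next word in proportion to how often it
    # followed the current word in the training text.
    word, output = start, [start]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(parrot("the"))  # e.g. "the sensor failed and the operator trusted the system ..."

Run it a few times: the output can look superficially plausible, but it is only a reshuffling of the
training text, weighted by observed frequencies, which is roughly the point of the "parrot" label.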

PBL

Prof. Dr. Peter Bernard Ladkin
Causalis Limited/Causalis IngenieurGmbH, Bielefeld, Germany
Tel: +49 (0)521 3 29 31 00


