[SystemSafety] AI Hallucination Cases

Paul Sherwood paul.sherwood at codethink.co.uk
Wed Jul 16 16:00:23 CEST 2025


Thanks for this, Derek.

I've been reading up on the general topic of AI and can recommend [1] 
and [2].

The latter book points out that we have all been hoodwinked into 
anthropomorphising LLMs, e.g. by using the word "hallucinate", which 
creates the impression that there is a mind in the matrices. The 
authors recommend talking about "stochastic parrots" or "mathy maths" 
instead, and note that the entire purpose of an LLM is simply to "make 
s**t up" on the basis of weighted averages drawn from its training data.

br
Paul

[1] 
https://www.amazon.co.uk/More-Everything-Forever-Overlords-Humanity/dp/B0F2N3F339
[2] https://www.amazon.co.uk/AI-Fight-Techs-Create-Future/dp/B0DQQD5XML

On 2025-07-16 14:28, Derek M Jones wrote:
> All,
> 
> This database tracks legal decisions in cases where generative AI
> produced hallucinated content – typically fake citations, but
> also other types of arguments.
> 
> 212 cases and counting
> 
> https://www.damiencharlotin.com/hallucinations/
> 
> via https://www.data-is-plural.com
