[SystemSafety] AI Hallucination Cases

Derek M Jones derek at knosof.co.uk
Thu Jul 17 13:14:36 CEST 2025


Les,

> IMHO hallucinations in these domains are not a huge problem. You can judge
> authenticity by triangulation - posing the same prompt to at least 3 AIs or
> engaging your brain with critical thinking. In the limit you could always ask
> a human being as I plan to do in an appointment with my oncologist
> momentarily.

The lawyer citation cases are an example of something that is probably
happening in many fields.

If people are using LLMs to save time, they are less likely to compare the
answers from multiple LLMs.

People have learned to treat computer output as accurate.  Belief in the
accuracy of computer output sent people to jail in the case of the Post Office
Horizon system.

If the output of LLMs is treated as good enough, we will die a death of a
thousand cuts.

LLMs don't just hallucinate new data, they also miss stuff out.

I use LLMs to help solve maths problems.  They are helpful because
they are great at pattern matching against theorems/identities that I
don't know.  My input is spotting when the LLM drops variables while
doing algebra and tweaking the question to be more precise.

-- 
Derek M. Jones           Evidence-based software engineering
blog: https://shape-of-code.com


