[SystemSafety] AI Hallucination Cases
Prof. Dr. Peter Bernard Ladkin
ladkin at causalis.com
Wed Jul 16 17:47:45 CEST 2025
Derek,
On 2025-07-16 16:41, Derek M Jones wrote:
>
> When LLMs first arrived I had trouble believing it was all next
> token prediction.
A request for clarification. Are you referring to November 2022 and the release of ChatGPT?
As I mentioned, LLMs "arrived" between one and two decades ago in computational linguistics. We were
using them in the first Harbsafe project nearly a decade ago because my colleagues thought the
techniques could be used to suggest semantic similarity of tokens. I don't think anyone would have
"trouble believing" any of that. I certainly didn't. Our worry was rather that our corpus was too
small for the technique to be effective. It turned out not to be.
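For concreteness, here is a minimal sketch of the general idea: derive token vectors from
co-occurrence counts in a corpus and treat cosine similarity as a proxy for semantic similarity.
The toy corpus, window size, and plain co-occurrence counts are illustrative assumptions on my
part, not the method Harbsafe actually used:

import numpy as np

# Toy corpus and window size: illustrative assumptions only, not the
# Harbsafe corpus or method.
corpus = [
    "the system shall detect the fault",
    "the system shall report the failure",
    "the operator shall log the fault",
]
window = 2

tokens = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(tokens)}
counts = np.zeros((len(tokens), len(tokens)))

# Count how often each pair of tokens co-occurs within the window.
for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        lo, hi = max(0, i - window), min(len(words), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[index[w], index[words[j]]] += 1

def similarity(a, b):
    """Cosine similarity of the two tokens' co-occurrence vectors."""
    u, v = counts[index[a]], counts[index[b]]
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# On a corpus this small the scores are dominated by shared function
# words ("the", "shall") -- precisely the too-small-corpus worry above.
print(similarity("fault", "failure"))
print(similarity("fault", "operator"))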
PBL
Prof. Dr. Peter Bernard Ladkin
Causalis Limited/Causalis IngenieurGmbH, Bielefeld, Germany
Tel: +49 (0)521 3 29 31 00