[SystemSafety] AI Hallucination Cases
Derek M Jones
derek at knosof.co.uk
Wed Jul 16 19:22:10 CEST 2025
Nick,
> with incorrect information (disinformation), bad use of language and are
> also targeting the training for cyber security AI. Use such open tools
> with caution...
The widely used open-weights models are created by the likes of Facebook
and other well-funded groups, several of them in China. They are as likely
to be poisoned as the proprietary models.
Why are the likes of Facebook and Chinese companies spending billions
and then giving the models away for free?
In Facebook's case, they want to stop any one company from getting a
stranglehold on the market by undercutting them, plus <insert nefarious
aims here>.
Some of the Chinese models are very good, and presumably they are
after market share, and ..., then profit!
The training of the Chinese models appears to discourage them from
answering questions about certain subjects, e.g., Tiananmen Square.
> Nick Tudor
> Tudor Associates Ltd
>
> On Wed, 16 Jul 2025 at 17:05, Derek M Jones <derek at knosof.co.uk> wrote:
>
>> Peter,
>>
>>>> token prediction.
>>>
>>> A request for clarification. Are you referring to October 2022 and the
>>> release of ChatGPT?
>>>
>>> As I mentioned, LLMs "arrived" between one and two decades ago in
>>> computational linguistics. We were using them in the
>>
>> People have been working with word sequence probabilities since
>> Shannon's famous 1948 paper, "A Mathematical Theory of Communication":
>>
>> https://web.archive.org/web/20090216231139/http://plan9.bell-labs.com//cm//ms//what//shannonday//shannon1948.pdf
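>>
>> As a toy illustration of the idea (my sketch, not anything from the
>> paper itself): a bigram model counts which word follows which, then
>> predicts the next word from those counts, e.g., in Python:
>>
>>     import collections
>>     import random
>>
>>     def train_bigrams(words):
>>         # count how often each word follows each other word
>>         counts = collections.defaultdict(collections.Counter)
>>         for w1, w2 in zip(words, words[1:]):
>>             counts[w1][w2] += 1
>>         return counts
>>
>>     def predict_next(counts, word):
>>         # sample a follower in proportion to its observed frequency
>>         followers = counts.get(word)
>>         if not followers:
>>             return None
>>         choices, weights = zip(*followers.items())
>>         return random.choices(choices, weights=weights)[0]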
>>
>> When did LLMs arrive on the scene and when does a tool
>> become an LLM? I will leave others to argue over this.
>>
>> The 2021 paper you cited
>> "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"
>> uses the term language model (LM) and does not use the term LLM (but
>> does talk about "... ever larger language models,").
>> Also the term does not appear in the famous 2017 paper
>> "Attention Is All You Need"
>> https://arxiv.org/abs/1706.03762
>>
>
--
Derek M. Jones        Evidence-based software engineering
blog: https://shape-of-code.com