[System Safety] FOSDEM talk by Paul Sherwood
Prof. Dr. Peter Bernard Ladkin
ladkin at causalis.com
Tue Feb 11 13:08:18 CET 2025
On 2025-02-11 11:43, Paul Sherwood wrote:
>
> Agreed, but as I said in the talk, the key point is that "we are not in Kansas anymore". The "old
> ways" served for microcontrollers, but some of them do not scale to the multicore microprocessor
> world we are now in.
Edition 3 of IEC 61508, which is now out for voting, explicitly addresses the issues of multicore.
Indeed, that has been a major topic of discussion for the 7+ years the HW part has been in development.
>> But it was pretty light on why "new will be sufficient
>> for acceptable safety" vs. deciding that doing the new ways really
>> well will automatically be fine.
>
> My understanding (I may be wrong) is that the accepted approach to "what is sufficient" is
> something along the lines of "we put suitably qualified+experienced people on the job and they
> tell us when they are finished".
Um, no. Nowhere near accurate. IEC 61508-1 has requirements for hazard and risk analysis (and has
had them ever since 1997). HRA is established in ISO/IEC Guide 51 as a requirement for any
international standard which concerns safety-related systems or components. The Guide 51
considerations obviously also apply to ISO 26262.
>> We can justify the old way in hindsight in that it seems to work, even
>> if we struggle to rigorously explain why. Do we want to jump to a new
>> way without understanding why it is expected to work and spend decades
>> of mishaps climbing the hill to getting to where it really needs to
>> be? Or is there a way to have some confidence about it before then?
>
> That is exactly what we are working on. First off we've had to categorically establish that
> software in these new systems exhibits random behaviour, and then show that we could apply
> statistical techniques to model failure rates with confidence intervals.
Littlewood and Strigini showed clearly in 1993 that you cannot use statistical evaluation to
establish in advance that safety functions meet SIL 2 - SIL 4 reliability requirements. (You can
use statistical evaluation in hindsight to assess whether your system achieved what it was
intended to achieve, but you usually need years - or decades - of operational experience.)
I find it astonishing that, 30+ years later, people are still unfamiliar with this basic result.
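To put numbers on that result: assuming a constant failure rate and T hours of failure-free
operation, the standard one-sided bound is lambda <= -ln(1 - C) / T at confidence C. The sketch
below is illustrative only; it takes 1e-8 dangerous failures per hour, the upper edge of the
SIL 4 continuous-mode band, as the target, and computes the operating experience you would need
before a prospective statistical claim could even get off the ground.

    import math

    def required_failure_free_hours(target_rate_per_hour: float,
                                    confidence: float = 0.99) -> float:
        """Hours of failure-free operation needed so that the one-sided
        upper confidence bound on a constant failure rate falls below
        target_rate_per_hour.  Bound: lambda <= -ln(1 - confidence) / T."""
        return -math.log(1.0 - confidence) / target_rate_per_hour

    # Upper edge of the SIL 4 continuous-mode band: 1e-8 dangerous failures/hour
    hours = required_failure_free_hours(1e-8)
    print(f"{hours:.3e} hours (~{hours / 8766:.0f} years of failure-free operation)")

That comes out at roughly 4.6e8 hours - tens of thousands of device-years - which is the
order-of-magnitude problem Littlewood and Strigini identified.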
> For safety specifically, we are mapping our argument for compliance with IEC 61508 using the
> Trustable Software Framework.
IEC 61508 has between 50 and 60 documentation requirements. Unless you can map TSF documentation
onto those 50-60 requirements, you won't be able to claim compliance with IEC 61508.
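To be concrete about what "mapping" means here (the requirement names and TSF evidence
identifiers below are invented purely for illustration), a compliance claim needs an explicit
trace from every one of those documentation requirements to a specific piece of evidence, with
any gaps surfaced rather than assumed away:

    # Illustrative traceability check: placeholder IEC 61508 documentation
    # requirements mapped onto placeholder TSF evidence items; unmapped
    # requirements are reported as gaps in the compliance claim.
    doc_requirements = {
        "61508-1 safety plan",
        "61508-1 hazard and risk analysis report",
        "61508-3 software safety requirements specification",
        "61508-3 software verification report",
        # ... the full list runs to 50-60 items
    }

    tsf_evidence = {
        "61508-1 safety plan": "TSF evidence item TA-PLAN-01",
        "61508-3 software safety requirements specification": "TSF evidence item TA-REQ-03",
    }

    for req in sorted(doc_requirements - tsf_evidence.keys()):
        print(f"NO EVIDENCE MAPPED: {req}")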
> While I agree that supply chain attacks are a huge issue (and to be clear three out of the six
> Trustable Tenets focus on this topic) I think you should be aware that this applies equally to
> systems running proprietary software. Most so-called proprietary systems these days involve a
> great deal of open source **anyway**, but this may be conveniently avoided (either accidentally or
> on purpose) during safety analysis.
>
You are not, and cannot be, talking about any systems which are compliant with IEC 61508. Software
of unknown pedigree doesn't make it into IEC 61508-compliant software.
PBL
Prof. Dr. Peter Bernard Ladkin
Causalis Limited/Causalis IngenieurGmbH, Bielefeld, Germany
Tel: +49 (0)521 3 29 31 00