[SystemSafety] FOSDEM talk by Paul Sherwood
Paul Sherwood
paul.sherwood at codethink.co.uk
Tue Feb 11 13:45:00 CET 2025
Thanks for getting involved, Peter - I was beginning to wonder if you
might be tacitly accepting what was expressed in my talk :)
Comments inline.
On 2025-02-11 12:08, Prof. Dr. Peter Bernard Ladkin wrote:
> On 2025-02-11 11:43 , Paul Sherwood wrote:
>>
>> Agreed, but as I said in the talk, the key point is that "we are not
>> in Kansas anymore". The "old ways" served for microcontrollers, but
>> some of them do not scale to the multicore microprocessor world we are
>> now in.
>
> Edition 3 of IEC 61508, which is now out for voting, explicitly
> addresses the issues of multicore. Indeed, that has been a major topic
> of discussion for the 7+ years the HW part has been in development.
Excellent. Obviously folks need to make progress in the meantime,
though. And ideally the standard would be properly public, so that folks
could discuss flaws and improvements (the open source way...)
>>> But it was pretty light on why "new will be sufficient
>>> for acceptable safety" vs. deciding that doing the new ways really
>>> well will automatically be fine.
>>
>> My understanding (I may be wrong) is that the accepted approach to
>> "what is sufficient" is something along the lines of "we put suitably
>> qualified+experienced people on the job and they tell us when they are
>> finished".
>
> Um, no. Nowhere near accurate. IEC 61508-1 has requirements for hazard
> and risk analysis (and has had ever since 1997). HRA is established in
> ISO/IEC Guide 51 as a requirement for any international standard which
> concerns safety-related systems or components. The Guide 51
> considerations obviously also apply to ISO 26262.
Hmmm... sorry, but I'm failing to grasp the relevance of your comment. I
was replying to "sufficient for acceptable safety", which is about the
whole of the work, not just risk/hazard analysis. My understanding (that
"sufficient" is when the experts say so) is based on feedback from
actual practitioners.
Risk/hazard analysis is clearly necessary (but not sufficient :-)) for
acceptable safety. For complex systems, the question of how much
analysis is sufficient may itself be debatable.
>>> We can justify the old way in hindsight in that it seems to work,
>>> even
>>> if we struggle to rigorously explain why. Do we want to jump to a new
>>> way without understanding why it is expected to work and spend
>>> decades
>>> of mishaps climbing the hill to getting to where it really needs to
>>> be? Or is there a way to have some confidence about it before then?
>>
>> That is exactly what we are working on. First off we've had to
>> categorically establish that software in these new systems exhibits
>> random behaviour, and then show that we could apply statistical
>> techniques to model failure rates with confidence intervals.
>
> Littlewood and Strigini showed clearly in 1993 that you can't use
> statistical evaluation to establish positively the safety requirements
> of safety functions with SIL2 - SIL4 reliability requirement. (You can
> use statistical evaluation in hindsight to assess whether your system
> did achieve what it was intended to achieve, but you usually need years
> - or decades - of operational experience.)
We're all standing on the shoulders of giants - but lots of things which
were shown clearly in the 90s have since been improved upon.
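As an aside for readers following along: the kind of statistical
technique mentioned above can be sketched in a few lines. This is a
minimal, hypothetical illustration (the thread does not describe the
actual method used), computing a one-sided upper confidence bound on a
failure rate from observed operating hours, under the assumption that
failures arrive as a homogeneous Poisson process:

```python
import math

def poisson_cdf(k: int, mu: float) -> float:
    """P(X <= k) for X ~ Poisson(mu), summed directly."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def rate_upper_bound(k: int, hours: float, confidence: float) -> float:
    """One-sided upper confidence bound on the failure rate, given k
    observed failures in `hours` of operation. Assumes a homogeneous
    Poisson process (an assumption for this sketch, not something the
    thread establishes). Solves P(X <= k; mu) = 1 - confidence for mu
    by bisection, then converts mu = rate * hours back to a rate."""
    lo, hi = 0.0, 10.0 * (k + 1)
    while hi - lo > 1e-9 * hi:
        mid = 0.5 * (lo + hi)
        if poisson_cdf(k, mid) > 1.0 - confidence:
            lo = mid  # mu still too small: tail probability too large
        else:
            hi = mid
    return lo / hours

# e.g. 2 failures observed in 100,000 device-hours, 95% confidence
print(rate_upper_bound(2, 1e5, 0.95))
```

The same bound follows from the chi-square relation
chi2(confidence, 2k+2) / (2 * hours); the bisection above just avoids a
dependency on scipy.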
> I find it astonishing that, 30+ years later, people are still
> unfamiliar with this basic result.
I'm not unfamiliar - I believe we've discussed the topic on this list
previously. But it can't be surprising that **most people** are
unfamiliar, since most people don't have time or inclination to read all
of the research.
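For anyone who hasn't met the Littlewood and Strigini result, a
back-of-the-envelope sketch shows the scale of the problem. Assuming a
homogeneous Poisson failure process (an assumption for this sketch; the
thread itself does not fix a model), the failure-free operating time
needed to claim a given failure-rate target at a given confidence is:

```python
import math

def hours_to_demonstrate(target_rate: float, confidence: float) -> float:
    """Failure-free operating hours needed before one can claim, at the
    given one-sided confidence, that the true failure rate is below
    target_rate, assuming a homogeneous Poisson process. From
    P(zero failures in T hours) = exp(-rate * T), solved for T at the
    boundary rate."""
    return -math.log(1.0 - confidence) / target_rate

# IEC 61508 continuous-mode SIL 3 band: 1e-8 <= rate < 1e-7 per hour
t = hours_to_demonstrate(1e-7, 0.95)
print(f"{t:.2e} hours, roughly {t / (24 * 365):.0f} years on a single unit")
```

For a SIL 3 continuous-mode target the answer is on the order of tens of
millions of failure-free hours, which is why "years - or decades - of
operational experience" (or a large deployed fleet) is needed for
hindsight statistical assessment.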
>> For safety specifically, we are mapping our argument for compliance
>> with IEC 61508 using the Trustable Software Framework.
>
> IEC 61508 has between 50 and 60 documentation requirements. Unless you
> can map TSF documentation onto those 50-60 requirements, you won't be
> able to claim compliance with IEC 61508.
Understood.
>> While I agree that supply chain attacks are a huge issue (and to be
>> clear three out of the six Trustable Tenets focus on this topic) I
>> think you should be aware that this applies equally to systems running
>> proprietary software. Most so-called proprietary systems these days
>> involve a great deal of open source **anyway**, but this may be
>> conveniently avoided (either accidentally or on purpose) during safety
>> analysis.
>>
> You are not, and cannot be, talking about any systems which are
> compliant with IEC 61508. Software of unknown pedigree doesn't make it
> into IEC 61508-compliant software.
FWIW I think folks in general (again outside the safety community) have
dropped the word "pedigree" in favour of "provenance".
I am talking about open source, which in many cases has much better
provenance (and evidence of its provenance) than most proprietary
software.
br
Paul