[SystemSafety] Software Safety Requirements according to IEC 61508

Matthew Squair mattsquair at gmail.com
Tue May 24 09:44:52 CEST 2016


Thanks! 

There's also Bev Littlewood and John Rushby's 'possibly perfect' papers on diverse two-channel systems, which I particularly like for their treatment of aleatory and epistemic uncertainty. 
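
For anyone who hasn't read them: the headline result, on my reading and with illustrative numbers only, is that for a 1-out-of-2 on-demand system in which channel A has an assessed probability of failure on demand, pfd_A (an aleatory quantity), and channel B is 'possibly perfect' with an assessed probability of non-perfection, pnp_B (an epistemic quantity), the system pfd is conservatively bounded by the product of the two. A minimal sketch in Python, all numbers made up:

    # Littlewood/Rushby 'possibly perfect' bound for a 1oo2 demand-based
    # system -- my sketch with illustrative numbers, not the papers' own.
    pfd_a = 1e-3  # assessed pfd of channel A (aleatory uncertainty)
    pnp_b = 1e-2  # assessed probability channel B is NOT perfect (epistemic)

    # Under the papers' independence assumptions:
    #   P(system fails on a random demand) <= pfd_a * pnp_b
    print(pfd_a * pnp_b)  # 1e-05

The neat part is that the aleatory and epistemic terms multiply, so a modest claim about each channel supports a much stronger claim about the pair.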

Matthew Squair

MIEAust, CPEng
Mob: +61 488770655
Email: Mattsquair at gmail.com
Web: http://criticaluncertainties.com

> On 23 May 2016, at 7:07 PM, Peter Bishop <pgb at adelard.com> wrote:
> 
> Alternatively, you could try an argument like this one.
> 
> "Does software have to be ultra reliable in safety critical systems?"
> http://openaccess.city.ac.uk/2465/
> 
> It argues that you can achieve better safety *for the overall system* *in the long term*
> *if*
> - there is some fallback in the overall system to prevent disaster
> - there are relatively few (dangerous) faults
> - faults, when found, are fixed with relatively high probability
> 
> In this argument, the initial failure rate of the software is not
> particularly significant, so 10^-5 or even 10^-3 does not change the
> long-term safety of the system very much.
> 
> This type of argument is not sensitive to changes in operational profile
> (which is always an issue with conventional software reliability arguments).
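> 
> As a minimal sketch of how I read that bound (my own toy model and
> numbers, not the paper's): if there are N dangerous faults and each
> failure leads to a fix with probability p, then each fault causes on
> average 1/p failures before it is removed, so the expected total number
> of dangerous failures is about N/p no matter how large or small the
> individual fault failure rates are:
> 
>     import random
> 
>     def failures_until_all_fixed(n_faults=5, fix_prob=0.9, rng=random.Random(1)):
>         # Each fault causes a geometric number of failures before its fix.
>         total = 0
>         for _ in range(n_faults):
>             while True:
>                 total += 1
>                 if rng.random() < fix_prob:
>                     break
>         return total
> 
>     runs = [failures_until_all_fixed() for _ in range(10_000)]
>     print(sum(runs) / len(runs))  # ~ 5 / 0.9 = 5.6 expected failures in total
> 
> Given a fallback that prevents disaster on those few failures, the
> long-term hazard is bounded regardless of the initial failure rate,
> which is the point made above.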
> 
> Peter Bishop
> 
>> On 22/05/2016 11:00, Peter Bernard Ladkin wrote:
>> On 2016-05-22 09:41 , Matthew Squair wrote:
>>>> Why do something that demonstrably brings you nothing, but costs resources?
>>> 
>>> Because the standard made me? :) 
>>> 
>>> Actually the legal formalism implicit in the wisecrack above is one justification.
>> 
>> I don't see how.
>> 
>> "Your honour, our prior calculation of reliability of this software-based safety function was
>> 10^(-5) per operating hour. We based this on our development procedures and prior experience with
>> safety functions of this type that we and others have implemented, and our estimate was accepted by
>> the assessor as thorough and provisionally accurate.
>> 
>> Neither we nor any of our colleagues and competitors have software, any software, ever, which
>> implements this safety function and which has run for the requisite period of time to support a
>> better estimate than 10^(-5) failure probability per operating hour. In order to
>> improve this estimate, such a piece of software would have to log some 45 million operating hours
>> without the slightest alteration in an identical environment to that in which our software runs. If
>> that had happened, we could be 99% certain that our software had a 10^(-6) or less probability of
>> failure. That would be an improvement. But no one has ever run any software of that sort, without
>> change, for that length of time, ever, and we doubt that anybody ever will. So no one has a better
>> estimate than ours of 10^(-5) failure probability.
>> 
>> Furthermore, we propose that it is not in any sense reasonably practicable to exercise our software
>> for 45 million operating hours in the exact environment in which it was installed. The ALARP
>> requirement is fulfilled by our existing estimate.
>> 
>> We are aware that the nominal reliability requirement for this function was set higher. We think
>> that is a design mistake. We are not responsible for the design. We discharged our responsibility
>> concerning this design mistake in that we argued to the client at the time that this was
>> unachievable in any practical sense. Our argument is correct and mathematically uncontentious. It
>> must be accepted by any engineer.
>> 
>> Our software achieved, and achieves, what is achievable. No one can do any better, and no one has
>> any software for this function which they can demonstrate is more reliable.
>> 
>> Far from being grossly negligent, or even negligent, our software is demonstrably amongst the most
>> carefully developed and most reliable there is for this task. We refute any claim of negligence, and
>> thereby the opposition's claim for compensation."
>> 
>> This isn't theory. It is what has actually been argued in court. Indeed, I don't see what other
>> argument would ever be used in these circumstances if this one is available.
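>> 
>> (A hedged aside on the arithmetic: what follows is the standard
>> zero-failure demonstration bound as I understand it, not necessarily
>> the exact calculation behind the 45 million hour figure. For a constant
>> failure rate lambda, t failure-free hours occur with probability
>> exp(-lambda*t), so claiming lambda <= target at confidence c requires
>> t >= -ln(1-c)/target:
>> 
>>     from math import log
>> 
>>     def hours_needed(target_rate_per_hour, confidence=0.99):
>>         # Failure-free exposure needed to claim the target rate at the
>>         # given confidence, assuming a constant-rate model and no changes.
>>         return -log(1.0 - confidence) / target_rate_per_hour
>> 
>>     print(f"{hours_needed(1e-6):.2e}")  # ~4.61e+06 failure-free hours
>> 
>> On this naive bound, 10^(-6) at 99% needs about 4.6 million failure-free
>> hours; the larger figure above presumably builds in additional
>> conservatism. Either way the conclusion stands: the exposure required
>> scales with the reciprocal of the claimed rate, which is what makes
>> such claims impracticable to demonstrate.)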
>> 
>> PBL
>> 
>> Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
>> Je suis Charlie
>> Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de
>> 
> 
> -- 
> 
> Peter Bishop
> Chief Scientist
> Adelard LLP
> Exmouth House, 3-11 Pine Street, London, EC1R 0JH
> http://www.adelard.com
> Recep:  +44-(0)20-7832 5850
> Direct: +44-(0)20-7832 5855