[SystemSafety] Collected stopgap measures

Peter Bernard Ladkin ladkin at causalis.com
Fri Nov 2 12:24:49 CET 2018


Some points concerning safety and SW.

If you are producing commercial safety-related software - non-medical, non-defence,
non-civil-aerospace - in GB, then HSE requires in effect that you are able to show compliance with
IEC 61508 Part 3, that is IEC 61508-3:2010. A couple of us here are on the IEC Maintenance Team for
this standard, and one of us is a Director of HSE. So this thread is talking to some direct experience.

In Germany, there is no general one-stop regulator across (many but not all) industries as there is
in the UK. But if something goes wrong, you will get assessed and maybe prosecuted by the state's
attorneys, and showing them that you have complied with applicable standards is a must. IEC 61508
and/or its sector-specific derivative is one of those applicable standards.

The way that IEC 61508 requires you address safety is as follows. You have Equipment Under Control
(EUC) and this equipment can theoretically behave in such a way that it causes damage (hurts or
kills people, damages non-related things, mucks up the environment).

A risk analysis must be performed (hazard identification, then hazard analysis - basically the
assignment of a severity to each hazard and some estimate of its likelihood - then risk assessment,
the combination of likelihood with severity). "Society" sets the acceptable risk, per hazard. If the EUC
risk from the risk analysis exceeds this acceptable risk, then a *safety function* (technical term,
henceforth SF) must be introduced, whose operation avoids or mitigates the risk ("mitigate" means
either reduce the likelihood of occurrence, or reduce the severity, or both). It is assumed the SF
may fail. It can't fail all the time, for then the risk is the plain EUC risk and that is not
acceptable by hypothesis. So the SF gets a reliability condition imposed, one of four. These
reliability conditions are called "safety integrity levels", or SILs. For random HW failures, the
SILs are quantitative (probability of failure per demand/per hour). SILs for SW (which is not taken
to be susceptible to "random" failure, but only to "systematic" failure, that is, reproducible
failure due to design) are not quantitative, and are conceptually a little more complicated,
referring to something called "systematic capability".
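For the quantitative case, the logic above can be sketched in a few lines. The band boundaries below are the low-demand-mode target failure measures from IEC 61508-1 (average probability of a dangerous failure on demand, PFDavg); the code itself is only an illustration of the bookkeeping, not anything from the standard.

```python
# Illustrative sketch only. Band boundaries are the low-demand-mode
# target failure measures of IEC 61508-1; the function is not from
# the standard.

def sil_for_pfd(pfd_avg):
    """Return the SIL band a given PFDavg falls into (low-demand mode),
    or None if it lies outside the SIL 1-4 range."""
    bands = {
        4: (1e-5, 1e-4),   # SIL 4: 10^-5 <= PFDavg < 10^-4
        3: (1e-4, 1e-3),   # SIL 3
        2: (1e-3, 1e-2),   # SIL 2
        1: (1e-2, 1e-1),   # SIL 1
    }
    for sil, (low, high) in bands.items():
        if low <= pfd_avg < high:
            return sil
    return None

# Example with made-up numbers: if the plain EUC risk is 1e-1 dangerous
# events per demand and the tolerable risk is 1e-5 per demand, the SF
# must deliver a PFDavg of at most 1e-5 / 1e-1 = 1e-4.
required_pfd = 1e-5 / 1e-1
print(sil_for_pfd(required_pfd))  # falls in the SIL 3 band
```

Note the SW side of the same SIL carries no such number; it is the "systematic capability" requirement instead.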

The idea is that the risk-analysis-and-consequence definition of safety function all happens at the
system level. The SW developers are given a set of requirements by the systems people, and then they
develop to those requirements. It is theoretically for the systems people to get the requirements
right, not the SW people. In practice, there will of course be negotiation. The SIL of a safety
function determines which techniques it is "highly recommended" be used. So if you are developing SW
with a SIL 3 or SIL 4 systematic capability, then it is "highly recommended" that formal methods be
applied in the specified places.

Developing SW according to IEC 61508-3:2010 will involve you in almost 60 documentation
requirements. You will have to produce those documents. About a third of them concern your
testing (protocols, execution, results). I think people can well imagine that, unless you start your
development process knowing you are going to have to produce those almost-60 documents, you will
very likely be unable to show compliance to an assessor (or a prosecutor if your client has had some
bad luck). Not only that, but there are a lot of tables in Annexes A and B saying in quite specific
terms what methods are "recommended" or "highly recommended" and where. So assessors are likely to
be checking on that too: things such as "formal proof" or "formal verification", "static analysis",
and forward/backward traceability between the SW safety requirements specification and the SW safety
validation plan. (There is a question of what a SW safety requirements specification is, but I won't
get into that.) You can go cross-eyed looking through it all (maybe you need to be cross-eyed
looking through it?).
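The forward/backward traceability demand is mechanical enough to illustrate. Here is a toy sketch - all requirement and test-case IDs are made up, and real traceability evidence is of course a good deal richer than set arithmetic:

```python
# Toy illustration (not from the standard): forward/backward
# traceability between safety requirement IDs and the test cases
# claimed to validate them. All IDs are invented for the example.

requirements = {"SR-1", "SR-2", "SR-3"}

# test case -> requirements it validates
validation_plan = {
    "TC-10": {"SR-1"},
    "TC-11": {"SR-2"},
    "TC-12": {"SR-4"},   # cites a requirement that does not exist
}

covered = set().union(*validation_plan.values())

# Forward traceability: every requirement is validated by some test.
untraced_requirements = requirements - covered

# Backward traceability: every test traces back to real requirements.
dangling_tests = {tc for tc, reqs in validation_plan.items()
                  if not reqs <= requirements}

print(sorted(untraced_requirements))  # SR-3 has no validating test
print(sorted(dangling_tests))         # TC-12 cites an unknown requirement
```

An assessor checking the Annex tables will want to see evidence of exactly this kind of closure, in both directions.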

I do hope that with the twenty-odd Assurance Points that we are developing in IEC 61508-3-2, much of
this will become more orderly. We'll see.

So that is the way it is done. You can't teach it in universities from the source, because the IEC
wants each user to buy a copy, and if you want the full set it will cost you thousands of €/$/£.

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de




