[SystemSafety] "Serious risks" in EC 765/2008

Peter Bernard Ladkin ladkin at rvs.uni-bielefeld.de
Mon Apr 8 16:09:08 CEST 2013


Asking EC 765/2008 to define "risk" is inappropriate. It is a proto-law, and the word is supposed to 
be used here in its everyday meaning, whatever that is. Let's go with that.

On 4/8/13 2:24 PM, peter.sheppard at uk.transport.bombardier.com wrote:
> There is probably as much chance of finding a definition of "serious risk" in the IEC standards as
> there is of finding a definition of "significant change" in the European Railway Agency's Common
> Safety Method!

It seems to me that a serious risk is a risk for which the combination of severity and probability 
is relatively high in comparison with other risks that are not regarded as "serious". An easy case 
is one in which the risk is higher than that of some example already deemed to have been a "serious 
risk". There is of course a definition of "risk" in IEC 61508, for example, but oddly enough it 
doesn't turn up in the Electropedia, the on-line version of IEC 60050. The IEC should really get it 
in there!
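
To make that reading concrete, here is a minimal sketch in Python. It is purely illustrative: the 
numeric scales, the use of a product as the "combination", and the precedent values are my 
inventions, not anything EC 765/2008 or IEC 61508 prescribes.

    # Sketch: a "serious risk" as a risk at least as high as some example
    # already deemed serious. All scales and values here are invented.

    def risk(severity: float, probability: float) -> float:
        """Risk as the combination of severity of harm and probability of
        occurrence; here, simply their product."""
        return severity * probability

    # Hypothetical precedents: risk levels already deemed "serious".
    PRECEDENTS = [
        risk(severity=0.9, probability=0.05),   # say, lead paint on a toy
        risk(severity=0.4, probability=0.30),
    ]

    def is_serious(severity: float, probability: float) -> bool:
        """The easy case: a risk counts as serious if it is at least as
        high as some risk already deemed serious."""
        return any(risk(severity, probability) >= p for p in PRECEDENTS)

    print(is_serious(0.9, 0.10))   # True: exceeds the toy-paint precedent
    print(is_serious(0.1, 0.01))   # False: below every precedent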


> <Thierry.Coq at dnv.com> wrote:
> [...]
> In EC 765/2008, what is considered a "serious risk"? Is there a reference?

First: see above. Second: no.

> How are the "serious risk" mitigations assessed, especially when "The feasibility of obtaining
> higher levels of safety or the availability of other products presenting a lesser degree of risk
> shall not constitute grounds for considering that a product presents a serious risk."?

It is a law, not an algorithm.

> This standard also mandates that the product should be recalled when the serious risk has
> materialized... and there is wording here to update the risk assessment with field reports.
> So is a "serious risk" in this standard in fact a materialized danger...?

Not necessarily. If someone identifies lead-based paint on a children's toy, then it will be 
withdrawn, because of precedent, before any kid chews on it: the analysis already exists and there 
is a prior case.

But if there is something for which no obvious prior exists, then maybe one has to wait until 
something untoward happens. If someone or some people get hurt or killed because a product 
functioned in such a way as to cause it, the lawyers step in. The lawyer for the prosecution says 
that this was normal use, or reasonably foreseeable misuse, and that the manufacturer's hazard 
analysis was clearly insufficient. The lawyer for the defence says that it was an anomaly 
essentially related to the unforeseeable deleterious behaviour of the operator, or the victims, or 
the weather, or someone or other. One of those two wins, or the judges decide it was a bit of both. 
So either the product will be recalled, or the manufacturer will stick a sign on the product to 
stop similar things coming to court again ("Warning: this refrigerator is not an edible substance 
either in whole or in part. Damage to teeth, mouth and gums may result from an attempt to eat it"). 
You can't say in advance of such proceedings how they will turn out, so you can't specify what a 
"serious risk" is in advance of having tested it in court. But once a court has so decided, the 
product is off the market, because the law says it must be. And if someone doesn't already have a 
ready-made risk analysis, then the court will do one, or have one done, because it has to in order 
to determine "serious risk"-hood: Article 20 says that is how you decide.

The wording about what does not constitute a serious risk seems to be a way of saying that ALARP 
as a principle does not necessarily apply here: under ALARP, the feasibility of reasonably 
practicable further risk reduction counts against the risk you retain, and that is precisely the 
consideration the Regulation excludes from the "serious risk" judgement.
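
To make the contrast concrete, here is a hypothetical sketch in Python. Nothing in the Regulation 
is expressed this way; the function names, the threshold parameter and the tolerability test are 
all my inventions.

    # Hypothetical contrast between an ALARP-style test and the Article 20
    # wording. Names, parameters and thresholds are invented for illustration.

    def unacceptable_under_alarp(risk: float, tolerable: float,
                                 safer_alternative_practicable: bool) -> bool:
        """ALARP-flavoured: risk above the tolerable level is unacceptable,
        and so is risk that a reasonably practicable safer alternative
        would have reduced."""
        return risk > tolerable or safer_alternative_practicable

    def serious_under_article_20(risk: float, threshold: float,
                                 safer_alternative_practicable: bool) -> bool:
        """Article 20-flavoured: the availability of safer products "shall
        not constitute grounds" for seriousness, so that input is
        deliberately ignored here."""
        return risk > threshold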

To Doug: there are a whole lot of conventions and precedents saying how risk must be evaluated in 
engineered systems, in medical technology, in aerospace, and so on. The "trade-offs" you speak of 
are largely specified. IEC 61508 has one set of procedures; civil aerospace certification another; 
medical technology a third; nuclear power a fourth; and now the automobile people have their own. 
And so on. You can argue about whether any given set is efficacious, and we do. But they are often, 
even mostly, there.

PBL

Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de





