[SystemSafety] Software reliability (or whatever you would prefer to call it)

Peter Bernard Ladkin ladkin at rvs.uni-bielefeld.de
Tue Mar 10 07:13:22 CET 2015



On 2015-03-09 17:54, RICQUE Bertrand (SAGEM DEFENSE SECURITE) wrote:
> If a system implementing software fails when confronted with real-world inputs, it is:
> 
> ·         either because these inputs were foreseen by the specification, but the software
> does not implement the specification properly and the tests did not detect this. So the software
> is WRONG;
> 
> ·         or because these inputs were not foreseen by the specification, although the software
> implements the specification properly. So the specification is WRONG;
> 
> ·         or because something happens in the hardware, and the software does not operate as
> planned.

That seems to be right for a uniprocessor, whose internal communications are regarded as part of the
HW.
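
For the record, the taxonomy amounts to a three-way classification. A minimal sketch (Python; the predicate names are mine, for illustration only, not Bertrand's):

from enum import Enum, auto

class FailureCause(Enum):
    SOFTWARE_WRONG = auto()       # input foreseen; implementation deviates from the spec
    SPECIFICATION_WRONG = auto()  # input not foreseen; implementation conforms to the spec
    HARDWARE_FAULT = auto()       # the software did not execute as built

def classify(input_foreseen, implements_spec, hw_fault):
    """Bin an observed failure into the three-way taxonomy above."""
    if hw_fault:
        # Hardware faults take precedence: the software never operated as planned.
        return FailureCause.HARDWARE_FAULT
    if input_foreseen and not implements_spec:
        return FailureCause.SOFTWARE_WRONG
    if not input_foreseen and implements_spec:
        return FailureCause.SPECIFICATION_WRONG
    raise ValueError("observation does not fit the taxonomy")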

> Any probabilistic assessment of a system implementing software will merge all of the above.

Not necessarily.

If you have a reliable means of telling when an input (including all causally relevant environmental
parameters) is outside the range foreseen by the specification, then you can distinguish the first
two cases. This is often done.
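
As a minimal sketch of such a means (Python; the parameter names and ranges are invented for
illustration), the check can be as simple as comparing each causally relevant parameter against its
specified range:

# Specified input domain: every causally relevant parameter and its range.
# The parameter names and bounds here are hypothetical.
SPEC_RANGES = {
    "airspeed_kt": (0.0, 600.0),
    "outside_air_temp_c": (-80.0, 60.0),
}

def input_in_specified_domain(inputs):
    """True iff every specified parameter is present and within its range."""
    try:
        return all(lo <= inputs[name] <= hi
                   for name, (lo, hi) in SPEC_RANGES.items())
    except KeyError:
        # A required parameter is missing: the situation was not foreseen.
        return False

def attribute_failure(inputs):
    """On an observed failure, separate the first two cases of the taxonomy."""
    if input_in_specified_domain(inputs):
        return "software WRONG: input was foreseen, implementation at fault"
    return "specification WRONG: input outside the specified domain"

Of course, for the distinction to be reliable the monitor itself must be assured; the sketch shows
only the bookkeeping.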

Similarly, in critical failure cases a causal investigation will often determine the contributions
of HW and other components to the failure. When failures are rare, the effort to analyse them in
depth is often made. In civil aerospace, for example, where failures of certain sorts prima facie
contravene the certification requirements, the analysis is always carried out.

PBL

Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de
