[SystemSafety] Practical Statistical Evaluation of Critical Software

Derek M Jones derek at knosof.co.uk
Mon Mar 2 00:30:17 CET 2015


Matthew,

> Isn't the model implicitly normative? After all we're talking about testing
> a designed thing over which we have control, rather than the world.

We are testing a thing over which we don't have full control: it
contains faults.

The world is the source of the inputs that were not used during
testing and that will trigger one or more of the remaining faults.

> So from that perspective the approach is to tell engineers that if you do
> A,B,C then your software's behaviour is going to be close enough to a
> memoryless process such that the statistical techniques of the annex are
> useful.

I could propose a fault model very similar to phone calls arriving
at a telephone exchange; I'm sure it could be made plausible.  The
maths has already been worked out and involves the Erlang
distribution (the sum of k independent, exponentially distributed,
i.e., memoryless, random variables).
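That construction is easy to check numerically. A minimal
stdlib-Python sketch (the parameter values k = 3 and rate 2.0 are
mine, purely for illustration): sum k independent exponential
(memoryless) variates and confirm the sample mean matches the
theoretical Erlang mean k/rate.

```python
import random

def erlang_sample(k, lam, rng):
    # An Erlang(k, lam) variate is the sum of k independent
    # exponential (memoryless) variates, each with rate lam.
    return sum(rng.expovariate(lam) for _ in range(k))

rng = random.Random(42)
k, lam = 3, 2.0
samples = [erlang_sample(k, lam, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# The theoretical mean of Erlang(k, lam) is k / lam.
```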

Where is the data to show that use of Bernoulli/Poisson gives more
accurate predictions than Erlang?  It might in some cases and not in
others.

We are a long way from having a good theory of software faults.
It is too soon to recommend a single model for fault analysis.

It would be wise to provide a few pointers and not force people down
specific paths of analysis.
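One concrete way of finding out what distribution the data has:
compare the maximised log-likelihood of a few candidate models on
the observed inter-failure times. A stdlib-Python sketch (the data
here is simulated, not real fault data, and the function names are
mine); note that Erlang with shape k = 1 is exactly the exponential
model, so this comparison subsumes the memoryless case.

```python
import math
import random

def loglik_exponential(xs):
    # Maximised log-likelihood under an exponential model.
    lam = len(xs) / sum(xs)  # MLE of the rate
    return sum(math.log(lam) - lam * x for x in xs)

def loglik_erlang(xs, k):
    # Maximised log-likelihood under an Erlang model with fixed
    # integer shape k (lgamma(k) == log((k-1)!)).
    lam = k * len(xs) / sum(xs)  # MLE of the rate for fixed k
    lgk = math.lgamma(k)
    return sum(k * math.log(lam) + (k - 1) * math.log(x) - lam * x - lgk
               for x in xs)

# Simulated inter-failure times that really are Erlang(3, 2.0):
rng = random.Random(1)
data = [sum(rng.expovariate(2.0) for _ in range(3)) for _ in range(5000)]

# Pick the shape with the highest profiled log-likelihood.
best_k = max(range(1, 6), key=lambda k: loglik_erlang(data, k))
```

With data truly generated by an Erlang(3, .) process the profiled
log-likelihood peaks at k = 3; on real failure data the winning k,
or a different family altogether, is an empirical question.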

> Of course A, B, C might not be very well articulated, and the intent
> implicit rather than explicitly stated. I don't know as I haven't read the
> Annex (apologies).
>
>
> On Mon, Mar 2, 2015 at 5:21 AM, Derek M Jones <derek at knosof.co.uk> wrote:
>
>> Peter,
>>
>> Let's apply a well-known adage:
>> "All models are wrong but some are useful".
>>
>> Let's put to one side the extent to which the proposed model is wrong.
>>
>> I don't see how the proposed model is useful.
>>
>> The use of Bernoulli/Poisson mathematics is proposed and you correctly
>> point out that this only works if the data has the desired properties.
>>
>> I could have proposed any other combination of distributions, said the
>> same thing, and been just as correct as you.
>>
>> Surely the approach should be to tell engineers to find out what
>> distribution(s) their data has and then apply the probability
>> analysis appropriate to those distribution(s)?
>>
>>
>>
>>   As I mentioned last week, Bev Littlewood and I have been writing a short
>>> practical guide to
>>> statistical evaluation of software with high-reliability requirements. It
>>> has been apparent to us
>>> for a while that IEC 61508-7 Annex D is an insufficient and in some
>>> respects misleading guide to the
>>> statistical evaluation of critical software, and it's been there in the
>>> standard by now for 18
>>> years. Time to fix that. This is the material we think should go into a
>>> revision.
>>>
>>> It's available at http://www.rvs.uni-bielefeld.de/publications/Papers/
>>> LadLitt20150301.pdf and has
>>> also been submitted for publication.
>>>
>>> PBL
>>>
>>> Prof. Peter Bernard Ladkin, Faculty of Technology, University of
>>> Bielefeld, 33594 Bielefeld, Germany
>>> Je suis Charlie
>>> Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> The System Safety Mailing List
>>> systemsafety at TechFak.Uni-Bielefeld.DE
>>>
>>>
>> --
>> Derek M. Jones           Software analysis
>> tel: +44 (0)1252 520667  blog:shape-of-code.coding-guidelines.com
>>
>>

-- 
Derek M. Jones           Software analysis
tel: +44 (0)1252 520667  blog:shape-of-code.coding-guidelines.com

