[SystemSafety] RE : Qualifying SW as "proven in use"

Nancy Leveson leveson.nancy8 at gmail.com
Fri Jun 28 13:15:59 CEST 2013


On Fri, Jun 28, 2013 at 3:16 AM, Peter Bernard Ladkin <ladkin at rvs.uni-bielefeld.de> wrote:

>
> On 6/27/13 4:23 PM, Nancy Leveson wrote:
>
>  Someone [Matthew Squair] wrote:
>> > I've been thinking about Peter's example a good deal, the developer
>> seems to me to have made an
>> > implicit assumption that one can use a statistical argument based on
>> successful hours run to justify
>> > the safety of the software.
>> And Peter responded:
>> > It is not an assumption. It is a well-rehearsed statistical argument
>> with a few decades of
>> > universal acceptance, as well as various successful applications in the
>> assessment of emergency
>> > systems in certain English nuclear power plants.
>>
>> "Well-rehearsed statistical arguments with a few decades of universal
>> acceptance" are not proof.
>> They are only well-rehearsed arguments. Saying something multiple times
>> is not a proof.
>>
>
> What an odd comment, if I have understood it.
>
> Following:
> 1. One can perform a statistical evaluation of executing SW, based on
> successful hours run, and sometimes use such an evaluation to justify one's
> level of confidence in safety properties of the software;
> 2. This is not an assumption, but a mathematically well-established fact;
> 3. It is, however, of limited application, and the explicit assumptions
> under which one can use it mostly serve to make it impractical for use with
> real SW and real systems;
> 4. No, there is no "proof" (meaning: certainty) of anything established by
> using (most) statistically-valid arguments. Such arguments are mostly
> concerned with levels of confidence around 90-95%.
>
> This is really basic stuff. I don't understand why anyone would want to
> quibble with any of it.
>
Because we too often accept things simply because they are "well-rehearsed
arguments" that everyone else accepts. In reality, people only accept them
because they have heard everyone else say the same thing for a long time.
To make progress, we need to question such "truths."
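
For concreteness, here is a minimal sketch of the calculation behind Peter's
points 1-4 above, under the usual (and strong) assumptions: failures arrive
as a constant-rate Poisson process, no failures were observed, and the
operational profile during the observed hours matches the profile for which
the claim is made. The function name and the numbers are illustrative only,
not taken from any standard or from the original posts.

import math

def hours_needed(rate_bound: float, confidence: float) -> float:
    """Failure-free operating hours needed to claim, at the given confidence,
    that the failure rate is below rate_bound (failures per hour).

    With zero failures in T hours, P(no failure | rate r) = exp(-r * T), so
    confidence 1 - alpha that r <= rate_bound requires
    T >= -ln(alpha) / rate_bound.
    """
    alpha = 1.0 - confidence
    return -math.log(alpha) / rate_bound

# To assert "failure rate below 1e-4 per hour" at 95% confidence takes about
# 3e4 failure-free hours; for 1e-9 per hour the requirement is about 3e9 hours.
for bound in (1e-4, 1e-9):
    print(bound, round(hours_needed(bound, 0.95)))

The output of such an exercise is exactly an <assertion, confidence> pair,
e.g. ("failure rate below 1e-4 per hour", 95%), and the required failure-free
hours grow in direct proportion to how small a rate one wants to claim.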

>
>  I agree with the original commenter about the implicit assumption, which
>> the Ariane 5 case disproves
>> (as well as dozens of others).
>>
>
> Ariane has to do with using SW proven reliable in one environment and
> using it in another environment with input parameters whose distribution
> intersects that of the previous use *in the null set*. It violates one of
> the main conditions of the most common method for statistical evaluation of
> SW to which I refer in Point 1 above. I don't see anything in that method
> that it "disproves". Neither do I understand why you're confused about that.


The intersection was not the null set. Only one point out of perhaps
thousands did not apply to the new system. And the testing was done with the
old Ariane 4 assumptions because those were what was in the specifications. I
don't care how many tests you run in that situation; they will not prove
anything about the behavior in the new system. And, in fact, the software
involved was (as it turned out in retrospect) executed only a very few times
(once?) in the Ariane 4, because it was error-handling software that was
needed only in a rare case. Just because the entire software package has been
executed millions of times does not mean that each statement has been
executed that many times. Some error-handling software is never executed at
all (even in testing, if statistical testing is used). Those rarely or never
executed routines may be included simply to protect against mistaken
assumptions about the environment.
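
A contrived sketch (not the actual Ariane code) of that coverage point: a
guard branch that runs only on out-of-range input is effectively dead under
the old operational profile, so millions of "successful" executions exercise
it zero times, while under the new profile it is precisely the statement that
decides the outcome. The profile bounds and function names below are invented
for illustration.

import random

def convert_bias(value: float) -> int:
    """Convert a sensor value to a 16-bit integer, with a guard branch
    (the 'error handling') that runs only when the value is out of range."""
    if abs(value) > 32767:  # rarely or never taken under the old profile
        raise OverflowError("value outside representable range")
    return int(value)

def count_guard_hits(profile_max: float, runs: int) -> int:
    """Count how often the guard branch is exercised when inputs are drawn
    from an operational profile bounded by profile_max."""
    hits = 0
    for _ in range(runs):
        try:
            convert_bias(random.uniform(-profile_max, profile_max))
        except OverflowError:
            hits += 1
    return hits

# Old environment (Ariane-4-like profile): inputs never approach the limit,
# so a million successful runs exercise the guard branch zero times.
print(count_guard_hits(profile_max=20_000, runs=1_000_000))  # prints 0

# New environment (Ariane-5-like profile): larger trajectory values make the
# previously unexecuted branch the one that determines the outcome.
print(count_guard_hits(profile_max=60_000, runs=1_000))      # prints a non-zero count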

>
>  Perhaps the reason why software reliability modeling still has pretty
>> poor performance after at
>> least 40 years of very bright people trying to get it to work is that the
>> assumptions underlying it
>> are not true.
>>
>
> To my mind, the reason why it doesn't have more application is that you
> have to do a lot of hard work and have a lot of hard data to make a limited
> inference, and the hard data is mostly not there in most cases.
>
> Also, as evinced by much of the discussion around such matters, many
> engineers (and not only engineers) are not familiar with reasoning using
> <assertion, confidence> pairs. And people don't use stuff with which they
> are not familiar.
>
> In this sense, "statistical evaluation" might be the new "formal methods".
> Let's just skip a decade of "don't work/does too work" discussion, shall
> we? I'll have better things to do in old age, such as practicing to be a
> rock star.


The problem is not that it requires a lot of hard work. I don't know of any
empirical evaluations that determine whether it works in practice, but I do
know of a lot of counter-examples in practice. Even one counter-example
should raise questions.

>
>
>  When someone wrote:
>>  > I don't think that's true,
>> Peter Ladkin wrote:
>>  >>You might like to take that up with, for example, the editorial board
>> of IEEE TSE.
>>
>> [As a past Editor-in-Chief of IEEE TSE, I can assure you that the entire
>> editorial board does not
>> read and vet the papers; in fact, I was lucky if one editor actually read
>> the paper. Are you
>> suggesting that anything that is published should automatically be
>> accepted as truth? That nothing
>> incorrect is ever published?]
>>
>
> No, none of that, obviously.
>
> PBL
>
>
> Prof. Peter Bernard Ladkin, Faculty of Technology, University of
> Bielefeld, 33594 Bielefeld, Germany
> Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de
>
>
>
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
>



-- 
Prof. Nancy Leveson
Aeronautics and Astronautics and Engineering Systems
MIT, Room 33-334
77 Massachusetts Ave.
Cambridge, MA 02142

Telephone: 617-258-0505
Email: leveson at mit.edu
URL: http://sunnyday.mit.edu