[SystemSafety] OpenSSL Bug

Derek M Jones derek at knosof.co.uk
Mon Apr 14 23:59:25 CEST 2014


Peter,

> I find a discussion about "empirical evidence" beside the point.
>
> Suppose it is known that people make lots of mistakes of type X.

That is empirical evidence,

> Suppose technical methods T are known to avoid, definitively, mistakes of type X,

more empirical evidence,

> and T are practical.

and yet more empirical evidence,

> Suppose there is an area of engineering, SCS, in which mistakes of type X have potentially very serious consequences

I would tot up the various costs and benefits using the empirical
evidence, and make a decision based on the weight of that evidence.

Others might prefer to ignore the evidence-based approach
and read tea leaves, use astrology, or some other method much
loved by superstitious folk.

>
> Then we say: in SCS, using T is essential/best practice/the way to avoid lawsuits/whatever.
>
> What would be the relevance of any "empirical evidence" that some subset of T is "effective" in avoiding mistakes of type X?
>
> Since programming is a human endeavor, any "empirical evidence" that some set A of programmers in some artificial environment E using some subset of T produced programs P with fewer, or only marginally fewer, instances of mistake X than some other subset B of programmers in E who didn't use T is subject to question on a number of fronts. What training/culture did A and B have in common? How does one determine that all relevant characteristics of E were taken into account? If it is possible to avoid X without using T, how do we know that most people in A weren't cognisant of how to avoid X while few people in B were, quite independent of using T? Did people in A+B know they were being assessed on avoiding X? If not explicitly, were they able to infer it covertly? And were the people in A more capable of so inferring than those in B? And how do we determine that people in A and B didn't covertly find out what the point of the test was and determine to justify it by, respectively, paying more attention and paying less attention to what they were doing? You can go on for ever.
>
> It is much easier with statistical methods on human populations to show that something you presumed didn't or shouldn't matter actually does matter. As with much experimentation, discovering a negative is straightforward and proving a positive almost impossible.
>
> PBL
>
> Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
>
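The A-versus-B comparison described above is, at bottom, a two-sample significance test. A minimal sketch of the statistical machinery is below; the defect counts are invented purely for illustration and are not from any real experiment:

```python
import random

# Hypothetical defect counts per program for two groups of programmers:
# group A used technique T, group B did not. These numbers are invented
# to illustrate the method, not empirical data.
group_a = [2, 1, 3, 0, 2, 1, 2, 1]
group_b = [3, 4, 2, 5, 3, 4, 3, 2]

def mean(xs):
    return sum(xs) / len(xs)

# Observed difference in mean defect counts (B minus A).
observed = mean(group_b) - mean(group_a)

# Permutation test: if the group labels are irrelevant, reshuffling them
# should produce a difference at least as large as the observed one
# reasonably often. The fraction of shuffles that do is the p-value.
random.seed(0)
pooled = group_a + group_b
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:len(group_a)], pooled[len(group_a):]
    if mean(b) - mean(a) >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

A small p-value says only that the labels are unlikely to be irrelevant; it says nothing about the confounds listed above (training, culture, covert inference of the test's purpose), which is precisely the point being argued.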

-- 
Derek M. Jones                  tel: +44 (0) 1252 520 667
Knowledge Software Ltd          blog: shape-of-code.coding-guidelines.com
Software analysis               http://www.knosof.co.uk

