[SystemSafety] Safety-Critical Systems eJournal

Peter Bernard Ladkin ladkin at causalis.com
Tue Feb 1 09:33:58 CET 2022



On 2022-01-31 21:26, Derek M Jones wrote:
> 
>> Better said, they did a good enough job of highlighting the kinds of traps you fall into if you 
>> don't have a good understanding of the statistical reasoning you want to use.
> 
> Which kind of statistical reasoning techniques are
> applicable for software reliability?
> 
> We don't know,

A less eccentric view would be that the papers on the subject which appear in the most highly 
regarded software-engineering journals are likely to give you such techniques. But many of those 
techniques have prerequisites which are difficult to assure in many practical situations.

Someone looking to claim that her company's Linux version is appropriate for SIL 4 applications 
can't just gather data from her company's clients for ten years, bang them into some piece of formal 
mathematics she took out of IEEE TSE, see what number comes out, and then put that number on the 
company's advertisements.
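To put a number on the difficulty (a minimal sketch in Python, not any particular standard's 
method; the fleet size, duration and confidence level are invented for illustration): under the 
textbook assumptions of a constant failure rate and statistically independent operation, i.e. a 
homogeneous Poisson process, T hours of failure-free operation bound the failure rate lambda 
above by -ln(alpha)/T at confidence 1 - alpha.

import math

def zero_failure_lambda_bound(hours, confidence=0.99):
    # Upper confidence bound on the failure rate lambda, given
    # `hours` of failure-free operation, assuming a constant
    # failure rate and independent demands (homogeneous Poisson).
    alpha = 1.0 - confidence
    return -math.log(alpha) / hours

# Invented fleet: 500 installations running continuously for ten years.
fleet_hours = 500 * 10 * 8760
print(f"lambda <= {zero_failure_lambda_bound(fleet_hours):.2e} per hour")
# Prints: lambda <= 1.05e-07 per hour.

That is one to two orders of magnitude short of the 10^-9 to 10^-8 dangerous failures per hour 
which IEC 61508 associates with SIL 4 in continuous operation, and that is before asking whether 
Linux, patched and reconfigured over those ten years, remotely satisfies the constant-rate 
assumption.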

There has to be some analysis of the quality of the data, and of whether the prerequisites of the 
technique being used are plausibly fulfilled. That in turn may require some grey-box or white-box 
analysis of the software, which in cases as complex as Linux tends to show that the prerequisites 
are a long way from being fulfilled.

The standard conundrum of statistical analysis is that determining what the numbers are telling 
you is a finicky business, full of pitfalls. I think in engineering in general, and software in 
particular, we are fortunate that engineers tend to react "oo, err, don't like it, don't feel 
comfortable, let's do something else." The Linux situation above is recounted by many colleagues 
as a typical "horror story".

It could be different. In medicine or social science there are way too many cases in which people 
collect numbers, bang them into some software package, get "answers" out, and send it all off to 
some journal which may or may not succeed in finding reviewers who can accurately assess it for 
plausibility. And that is just the innocent stuff. Then there is the scientific fraud ...

> It's opinions all the way down.

Granted that opinions abound concerning the statistical evaluation of software, it does not follow 
that opinions are all there are.

Take, for example, the Monty Hall problem. There is a book on it by Jason Rosenhouse, in which he 
goes through all kinds of opinions that have been expressed on the probabilistic analysis of the 
situation. But there is a fact of the matter: you can play the game to find out what it is. Any 
analysis which does not reproduce those data is wrong; facts trump opinion. (But, as Rosenhouse 
remarks, not for all holders of fallacious opinions ...)
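
Indeed, playing the game is a few lines of code. Here is a minimal simulation sketch (in Python; 
the function name is mine): the car is behind one of three doors, the player picks one, the host 
opens another door hiding a goat, and the player either sticks or switches.

import random

def play(switch):
    # One round of the Monty Hall game; returns True if the player wins.
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    host = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != host)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate {wins / trials:.3f}")

Switching wins about two-thirds of the time and sticking about one-third, which is the fact of the 
matter that any correct analysis must reproduce.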

I think there is a role for the study of mistaken reasoning. In logic, it started as the study of 
logical fallacies. Similarly, there could be a role for the study of mistaken statistical reasoning, 
such as those examples which D&T adduce. (Not all of my statistically expert colleagues agree with 
this, though. One difficulty is that there is so much of it.)

PBL

Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de



