[SystemSafety] What do we know about software reliability?

Derek M Jones derek at knosof.co.uk
Wed Sep 16 15:48:38 CEST 2020


yorklist at philwilliams.f2s.com,

> I also recall presentations about software reliability where the theories prevailing at the time held completely opposite opinions as to whether a greater number of detected defects in software indicated more or less reliable software. If there is now consensus about the direction of the trend, maybe there is hope for assessments of the magnitude.

Most existing data does not include the number of users, or any other usage measure.
The reported faults just magically appear.

The Adams paper from 1984, cited earlier by Martyn, is one of the few to contain
usage data (a consequence of IBM charging by machine usage).  The data set is tiny,
but it is still one of the few that are publicly available.

More recent data counts downloads, app installs, or page views.  Far from perfect,
but that is all there seems to be.

Next time you encounter a fault prediction paper, ask how usage is
incorporated into the model.  You will probably be met with a blank look, or
some sob story about that data not being available.
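
As a back-of-the-envelope illustration of why the usage measure matters
(all numbers below are made up, and Python is used just to do the arithmetic):

    # Illustrative only: invented fault counts and usage figures.
    # The point: a raw fault count says little without an exposure measure.
    releases = {
        # release: (reported_faults, usage_in_machine_hours)
        "A": (120, 2_000_000),
        "B": (30, 50_000),
    }

    for name, (faults, usage_hours) in releases.items():
        rate = faults / usage_hours  # faults per machine-hour
        print(f"release {name}: {faults} faults, {rate:.1e} faults/machine-hour")

A model fitted to the raw counts ranks A as the less reliable release; once
the counts are normalised by usage, A has a tenth of B's fault rate and the
ranking reverses.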

> Phil
> 
> -----Original Message-----
> From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> On Behalf Of Peter Bernard Ladkin
> Sent: 16 September 2020 11:33
> To: systemsafety at lists.techfak.uni-bielefeld.de
> Subject: Re: [SystemSafety] What do we know about software reliability?
> 
> 
> 
> On 2020-09-16 11:46 , yorklist at philwilliams.f2s.com wrote:
>> If A is dependent on some temporal event, and the testing is conducted
>> prior to that event – what does the testing tell you about the outcome after that event?
> 
> Such things are issues, but I am not sure of the value of posing the question so abstractly.
> 
> The abstract answer is that the occurrence of the temporal event TE is an environmental predicate:
> the characteristic pre-TE or post-TE is part of the environment. So the answer to your question logically is: it tells you nothing at all because the environment has changed.
> 
> But that is hardly helpful. Here is a more concrete example. What does statistical testing prior to Y2K tell you about how your system works post-Y2K? Or, what does statistical testing prior to 2038 tell you about the operation of your 32-bit Unix system in 2038?
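
For anyone who has not looked at the 2038 rollover in detail: a signed 32-bit
time_t runs out at 03:14:07 UTC on 19 January 2038 and wraps to December 1901.
A small sketch that simulates the wrap (Python, purely illustrative):

    from datetime import datetime, timedelta, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

    def as_32bit_time(t: int) -> int:
        # Reinterpret t as a signed 32-bit time_t (two's-complement wrap).
        t &= 0xFFFFFFFF
        return t - 2**32 if t >= 2**31 else t

    last_ok = 2**31 - 1  # 03:14:07 UTC, 19 January 2038
    for t in (last_ok, last_ok + 1):
        wrapped = as_32bit_time(t)
        print(t, "->", wrapped, EPOCH + timedelta(seconds=wrapped))

One second past the limit, the simulated clock reads 13 December 1901.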
> 
> The answer is: design your statistical tests so that both environmental states are represented. Then you know.
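
In sampling terms that means stratifying the test inputs over the clock,
rather than drawing them all from "now".  A minimal sketch of one way to do
it (the boundary, window and 50/50 split are placeholder choices):

    import random

    BOUNDARY = 2**31 - 1   # e.g. the 2038 rollover, in Unix seconds
    WINDOW = 10**7         # roughly 116 days either side; placeholder value

    def sample_test_time(rng: random.Random) -> int:
        # Draw half the test clock values from before the boundary and
        # half from after it, so both environmental states are exercised.
        if rng.random() < 0.5:
            return rng.randrange(BOUNDARY - WINDOW, BOUNDARY)
        return rng.randrange(BOUNDARY, BOUNDARY + WINDOW)

    rng = random.Random(1)
    times = [sample_test_time(rng) for _ in range(1000)]
    print(sum(t >= BOUNDARY for t in times), "of", len(times), "post-boundary")

Everything in the operational profile other than the clock stays the same;
only the environmental state is forced to take both values.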
> 
> Those are known possible-dependencies. You can - and did, and would - shield your system from such effects.
> 
> Then there are unknown ones. An easter egg triggered by the clock. A GPS-dependency tracing its way through a library you used.
> 
> I don't know of any general answer/prophylaxis in abstract terms. The known dependencies you just handle individually in whatever way is appropriate. I think you may be able to detect easter eggs by modified dead-code analysis. I think you handle internal temporal dependencies by performing an impact analysis on clock values. GPS dependencies can be detected through jamming in the environment E. And so on.
> 
> PBL
> 
> Prof. Peter Bernard Ladkin, Bielefeld, Germany Styelfy Bleibgsnd
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
> 
> 
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
> 

-- 
Derek M. Jones           Evidence-based software engineering
tel: +44 (0)1252 520667  blog:shape-of-code.coding-guidelines.com

