[SystemSafety] Software reliability (or whatever you would prefer to call it)

Heath Raftery heath.raftery at restech.net.au
Mon Mar 9 22:20:57 CET 2015


On 10/03/2015 4:48 AM, Steve Tockey wrote:
 > If one had the ability to characterize input space S for some software,
 > and if someone were able to comprehensively cover all possible inputs
 > (in all possible sequences…) then one could truly measure the
 > "reliability" of the code with respect to S—"What percent of the members
 > of input space S lead to failures?", or, "On average, what time period
 > passes between members of input space S that lead to failures?"

Thank you, Steve, for attempting to peer beneath the obfuscation and 
academic territorialism and find some features we can build from. This 
is precisely the notion we so flippantly refer to when we casually use 
a phrase like Martyn's example: 'Is it meaningless to say that one 
release of a software system is "more reliable" than an earlier 
release?'. Doing so is just a rough way of saying that, over some small 
input space, fewer bugs were observed, which is a fine statement in 
context. There is (ATM, IMHO) an unbridgeable chasm between that 
statement and quantifying software reliability (or worse, quantifying 
software degradation!).
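
For concreteness, here is a minimal sketch (my own illustration, not 
anything from Steve's post) of his first measure, estimated by random 
sampling. The buggy_abs function and the small integer input space are 
entirely hypothetical toys:

    import random

    def estimate_failure_rate(system, oracle, input_space, samples=10_000):
        # Estimate "what percent of the members of input space S
        # lead to failures?" by sampling S and comparing the system
        # under test against a trusted oracle.
        failures = 0
        for _ in range(samples):
            x = random.choice(input_space)
            if system(x) != oracle(x):
                failures += 1
        return failures / samples

    # Toy example: an absolute-value function that is wrong only at -1.
    def buggy_abs(x):
        return 0 if x == -1 else (x if x > 0 else -x)

    space = list(range(-100, 101))
    print(estimate_failure_rate(buggy_abs, abs, space))  # ~1/201, about 0.005

Note that even this toy assumes S is enumerable and every member 
equally likely; real operational profiles are neither, which is 
exactly the chasm above.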

I think we're all guilty at times of strenuously defending our 
objections when the goal may indeed be the same.

Bev attempted to find the common ground with this:

On 10/03/2015 5:19 AM, Littlewood, Bev wrote:
> So how about “the reliability of software in its environment”. Or “the reliability of a (software, environment) pair”?

While it is commendable to define common terms when one wishes to use 
them unusually, it seems a tortured way to make the argument sound. Why 
not just talk of the reliability of the environment to match 
expectations? What's it got to do with the software? The software will 
just do what the software will do.

*Lurk mode back on*

Heath
