[SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Steve Tockey Steve.Tockey at construx.com
Mon Jul 1 18:18:47 CEST 2013


Martyn,

"The safety goal is to have sufficient evidence to justify high
confidence that the software has specific properties that have been
determined to be critical for the safety of a particular system in a
particular operating environment."

Agreed, but my fundamental issue is this: ignoring the obviously contrived
cases where the defects are in non-safety-related functionality, how could
software--or the larger system it is embedded in--be considered "safe" if
the software is full of defects? Surely there are many elements that go
into making safe software, but just as surely, IMHO, the quality of that
software is one of those elements. And if we can't get the software
quality right, the others may be somewhat moot.


Regards,

-- steve



-----Original Message-----
From: Martyn Thomas <martyn at thomas-associates.co.uk>
Reply-To: "martyn at thomas-associates.co.uk" <martyn at thomas-associates.co.uk>
Date: Saturday, June 29, 2013 8:37 AM
Cc: "systemsafety at techfak.uni-bielefeld.de"
<systemsafety at techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Qualifying SW as "proven in use"
[Measuring Software]

There are, as you say, many things that have been said to aid program
understanding, with some justification.

The use of strongly typed languages, for example, or the avoidance of
go-to statements, global variables, operator overloading, and
inheritance.

Many programmers resist such advice, and you can usually construct a
situation where the deprecated language construct is the clearest and
simplest way to implement a design, which makes it hard to get such
advice generally adopted.
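
For instance, operator overloading is routinely deprecated in coding
standards, yet a sketch along the following lines (purely illustrative,
with an invented class and names) shows a case where it is arguably the
clearest expression of the design:

    # Hypothetical example: overloading "+" so that client code mirrors
    # the underlying mathematics instead of a chain of method calls.
    class Vector2:
        def __init__(self, x: float, y: float):
            self.x, self.y = x, y

        def __add__(self, other: "Vector2") -> "Vector2":
            return Vector2(self.x + other.x, self.y + other.y)

    # Client code then reads like the algebra it implements:
    #     resultant = thrust + drag
    # rather than:
    #     resultant = thrust.add(drag)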

But, even if these factors and the measures that you cite could be shown
to have a strong impact on defect density, they are about costs and
time, not directly about safety.

The safety goal is to have sufficient evidence to justify high
confidence that the software has specific properties that have been
determined to be critical for the safety of a particular system in a
particular operating environment. That evidence cannot depend on human
inspection - it will always need automated analysis.
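
As a purely illustrative sketch, with invented names and limits: a
safety-related property such as "the commanded valve position never
leaves its physical range" can be written as a machine-checkable
predicate, so that static analysis, model checking, or property-based
testing, rather than human inspection, provides the evidence:

    # Hypothetical safety-related property, expressed so tools can check it.
    VALVE_MIN, VALVE_MAX = 0.0, 100.0

    def command_valve(demand: float) -> float:
        position = min(max(demand, VALVE_MIN), VALVE_MAX)  # clamp to limits
        # Tool-checkable postcondition; an analyser or prover, not a human
        # reviewer, would be expected to discharge this.
        assert VALVE_MIN <= position <= VALVE_MAX
        return position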

If the software has fewer defects introduced by the programmers and if
it is easier to understand, then achieving the evidence necessary for
high assurance will be more likely to succeed and require less rework
and cost less. But these are quality criteria (albeit very important
ones), not safety criteria.

Martyn




On 28/06/2013 22:09, Steve Tockey wrote:
>
> That's what I've been trying to get across all along. The evidence
> that I have is that three indicators are pretty good measures [of lack
> of clarity]:
>     Cyclomatic complexity
>     Depth of (decision) nesting
>     Fan out
>
> As I've said, I'm sure there are other relevant indicators as well.
> What's missing is the correlation analysis that gives us the empirical
> evidence that the indicators we look at are truly relevant. Both the
> Lorenz & Kidd and Chidamber & Kemerer publications (cited earlier)
> proposed about 20 indicators, but I'm not aware of any serious
> correlation analysis having been done. I'm sure that most of what's in
> those publications is irrelevant, but I'm sure a couple of them are
> relevant. We just need to find a way to get the data and do the analysis.

_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE


