[SystemSafety] Data on Proof effectiveness from real projects

Les Chambers les at chambers.com.au
Sun Apr 3 09:58:21 CEST 2016


Steve

RE: " Inspections efficiency is 0.8 to 1.2 labor hours per defect found"

Impressive: the spread here is 0.4, not 0.45 or 0.39 but 0.4. But what does this
mean to the average Joe in the development shop in Outer Mongolia?

I should point out that while it is laudable for an organisation to collect
these stats for the purposes of process improvement, they are rarely
transferable across organisations (sometimes they're useless between projects
in the same organisation) as they are wholly a function of the inspection
processes used and the skills of the inspectors. As a rookie instrument
engineer I was once on a team inspecting instrument installations in a
chemical plant. Our team was led by a guy with 30 years' experience. I was in
awe of this guy. Just about every installation had problems. The phone rang
in the control room and he excused himself to take the call, instructing us
to carry on without him. Like magic we stopped finding problems. After 10
minutes we smelt a rat, stopped for a convenient cup of coffee and awaited
his return.

Organisations that do not have a "stable" development environment - that is,
people come and go and much use is made of contract labour, not to mention
the usual churn in development technology with the resulting "learning
experiences" - really struggle to make sense of these numbers because they
vary wildly. 

 

I'm not saying we shouldn't try though. The answer must lie in more
effective automation of inspection tasks. In my experience human beings are
just so bad at it. Meanwhile I guess we will just have to put up with gross
measures. 

 

As a matter of interest, one gross measure that fascinates me is how many
lines of the 10,000,000+ line body of code in my car have actually been
reviewed by someone other than the author. Right now we have a debate in
Australia over the definition of free range eggs. We are moving towards one
square metre per chook (Australian for chicken). Surely VW et al. owe us an
eyeball count for the lines of code in our vehicles (I don't even want to
think about our aircraft).

 

Cheers

Les


From: systemsafety
[mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
Steve Tockey
Sent: Sunday, April 3, 2016 5:11 AM
To: martyn at thomas-associates.co.uk;
systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Data on Proof effectiveness from real projects


Martyn wrote:

 

"Is anyone actually interested in the number of defects found? Surely that's
not a measure of anything useful about the software."

 

Maybe it doesn't tell me much about the software, but it speaks volumes
about the software process being used. I care about how many requirements
defects were found when requirements work was done. I also care about how
many requirements defects were found after that requirements work was
claimed to have been done. Dividing the first by the sum of the first and
the second gives me an effectiveness measure for my requirements QA/QC work.
As well, dividing the labor hours (or money) invested in requirements QA/QC
work by the number of requirements defects found gives me a measure of
efficiency.
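
To make those two ratios concrete, here is a minimal Python sketch with
made-up numbers (the defect counts and hours below are purely illustrative,
not from any real project):

# Purely illustrative numbers -- not from any real project.
found_during_requirements = 40   # requirements defects found by requirements QA/QC
found_later = 10                 # requirements defects that leaked and were found downstream
qa_labor_hours = 35.0            # labor hours invested in requirements QA/QC

# Effectiveness: fraction of requirements defects caught by the requirements QA/QC work
effectiveness = found_during_requirements / (found_during_requirements + found_later)

# Efficiency: labor hours invested per defect found
hours_per_defect = qa_labor_hours / found_during_requirements

print(f"Effectiveness: {effectiveness:.0%}")                    # Effectiveness: 80%
print(f"Efficiency: {hours_per_defect:.2f} hours per defect")   # Efficiency: 0.88 hours per defect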

 

Published data comparing Testing to Inspections is striking:

 

Testing effectiveness is typically 60-70%

Testing efficiency is typically 6-8 labor hours per defect found

Effort to repair defects found in Testing averages 3.4 to 27.2 hours per defect

 

Inspections effectiveness is reported to be 60-90%, although I've had
inspections run at 95% effectiveness

Inspections efficiency is 0.8 to 1.2 labor hours per defect found

Effort to repair defects found in Inspections averages 1.1 to 2.7 hours per defect

 

I think it's very useful to know that Inspections can be far more effective
than Testing. I also think it's very useful to know that Inspections are
between 5x and 10x more efficient than Testing (0.8 to 1.2 labor hours per
defect versus 6 to 8). But I could never know either of those if I didn't
count the number of defects found.


If I have a historical base of data for estimating total defects, I can
subtract the defects already found from the estimated total to get an
approximation of the defects remaining. Of course, it is only an
approximation. But even so, it can be useful to know.
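
As a toy illustration of that subtraction (the historical estimate and the
defect count below are invented, not real data):

# Purely illustrative numbers -- the historical estimate is hypothetical.
estimated_total_defects = 120   # from the historical base, e.g. past defects per KLOC scaled to this project
defects_found_so_far = 95       # defects already found by inspections and testing

defects_remaining = estimated_total_defects - defects_found_so_far   # approximation only
print(f"Approximate defects remaining: {defects_remaining}")          # 25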


-- steve


From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on
behalf of Martyn Thomas <martyn at thomas-associates.co.uk>
Reply-To: "martyn at thomas-associates.co.uk" <martyn at thomas-associates.co.uk>
Date: Saturday, April 2, 2016 8:02 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de"
<systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Data on Proof effectiveness from real projects

 

On 02/04/2016 06:37, Steve Tockey wrote:

Generally speaking, efficiency is looking at investment per unit of work
done. In other words, increasing efficiency means that the same work is done
at lower investment: the same (number of?) defects are found for a lower
investment. Maximizing the chance of finding defects would be an issue of
effectiveness. Effectiveness is looking at the rate at which the job is done
correctly (i.e., a defect is found, not missed). One needs to look at both
efficiency and effectiveness of processes to make a fair comparison.



Is anyone actually interested in the number of defects found? Surely that's
not a measure of anything useful about the software. 

An important benefit of verification is that it can tell you (for some
classes of defect) that the number of defects remaining is zero.

Martyn
