[SystemSafety] Data on Proof effectiveness from real projects

Steve Tockey Steve.Tockey at construx.com
Sat Apr 2 21:10:54 CEST 2016


Martyn wrote:

"Is anyone actually interested in the number of defects found? Surely that's not a measure of anything useful about the software."

Maybe it doesn't tell me much about the software, but it speaks volumes about the software process being used. I care about how many requirements defects were found while the requirements work was being done. I also care about how many requirements defects were found after that requirements work was claimed to have been done. Dividing the first by the sum of the two gives me an effectiveness measure for my requirements QA/QC work. Likewise, dividing the labor hours (or money) invested in requirements QA/QC work by the number of requirements defects found gives me a measure of efficiency.
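
To make the arithmetic concrete, here is a minimal sketch in Python (the counts and hours are purely hypothetical, just to show the calculation):

    # Requirements defects found while the requirements work was being done,
    # and requirements defects found after that work was claimed complete.
    found_during_req_work = 45    # hypothetical count
    found_after_req_work = 5      # hypothetical count
    req_qa_labor_hours = 40.0     # hypothetical investment in requirements QA/QC

    # Effectiveness: fraction of requirements defects caught by requirements QA/QC.
    effectiveness = found_during_req_work / (found_during_req_work + found_after_req_work)

    # Efficiency: labor hours invested per defect found.
    efficiency = req_qa_labor_hours / found_during_req_work

    print(f"Effectiveness: {effectiveness:.0%}")             # 90%
    print(f"Efficiency: {efficiency:.2f} hours per defect")  # 0.89 hours per defect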

Published data comparing Testing to Inspections is striking:

Testing effectiveness is typically 60-70%
Testing efficiency is typically 6-8 labor hours per defect found
Effort to repair defects found in Testing averages 3.4 to 27.2 hours per defect

Inspections effectiveness is reported to be 60-90%, although I've had inspections run at 95% effectiveness
Inspections efficiency is 0.8 to 1.2 labor hours per defect found
Effort to repair defects found in Inspections averages 1.1 to 2.7 hours per defect

I think it's very useful to know that Inspections can be far more effective than Testing. I also think it's very useful to know that Inspections are between 5x and 10x more efficient than Testing. But I could never know either of those if I didn't count the number of defects found.


If I have a base of historical data for estimating total defects, I can subtract the defects already found from the estimated total to get an approximation of the defects remaining. Of course, it is only an approximation. But even as an approximation, it can be useful to know.
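
A minimal sketch of that approximation, again in Python with purely hypothetical numbers (the estimate of total defects would come from your own historical base):

    # Hypothetical inputs: an estimate of total defects derived from
    # historical data, and the count of defects found so far.
    estimated_total_defects = 120   # hypothetical, from the historical base
    defects_found_so_far = 97       # hypothetical count

    # Approximate defects remaining -- only as good as the historical estimate.
    defects_remaining = max(estimated_total_defects - defects_found_so_far, 0)
    print(f"Approximate defects remaining: {defects_remaining}")   # 23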


-- steve



From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Martyn Thomas <martyn at thomas-associates.co.uk>
Reply-To: "martyn at thomas-associates.co.uk" <martyn at thomas-associates.co.uk>
Date: Saturday, April 2, 2016 8:02 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de" <systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Data on Proof effectiveness from real projects

On 02/04/2016 06:37, Steve Tockey wrote:
Generally speaking, efficiency looks at the investment per unit of work done. In other words, increasing efficiency means the same work is done for a lower investment: the same number of defects are found at a lower cost. Maximizing the chance of finding defects is a matter of effectiveness. Effectiveness looks at the rate at which the job is done correctly (i.e., a defect is found rather than missed). One needs to look at both the efficiency and the effectiveness of processes to make a fair comparison.



Is anyone actually interested in the number of defects found? Surely that's not a measure of anything useful about the software.

An important benefit of verification is that it can tell you (for some classes of defect) that the number of defects remaining is zero.

Martyn