[SystemSafety] Big claims and duff data

Steve Tockey Steve.Tockey at construx.com
Thu Sep 20 22:46:02 CEST 2018


Olwen wrote:

“Here's a challenge: I claim that the cheapest way to do anything is to get it right first time. Do any of the leprechaun-quoters disagree with that? Or, to put it another way, what serious objections are there to doing all you can to prevent defects in software?”

A consulting buddy of mine once quipped:

“The number of iterations needed to make it right is an indicator of just how badly you did it the first time”


— steve



From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Olwen Morgan <olwen.morgan at btinternet.com>
Date: Thursday, September 20, 2018 at 8:28 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de" <systemsafety at lists.techfak.uni-bielefeld.de>
Subject: [SystemSafety] Big claims and duff data


Getting decent cost and effort metrics data has always been difficult (actually virtually impossible) in software engineering. Even if the data from a single project is reliable, conclusions drawn from it may well not be validly generalisable to other projects. I once read a paper about a project in a European telco for which it was claimed that use of QAC had cut testing effort by 50%. A few years afterwards, I worked with someone who had worked on that project. When I showed him the paper, his reply was, "Frankly, that project was in such a mess that any common-sense improvement would have produced comparable savings." Nuff said.

Relying on data is OK provided that you are getting repeatable and reproducible results from it - and that goes as much for the results-wavers as for the debunkers. Most firms don't even get results that are repeatable (the same people measuring the same thing again get the same numbers), let alone reproducible (different people, or different projects, get comparable numbers). I am always pretty suspicious of published metrics data because it's very difficult to ensure its accuracy, let alone its general validity.

Nevertheless, it is prudent to recall that absence of evidence is not evidence of absence.

I think it is true to say that projects using state-of-the-art static checking tools *do* find that their testing phases entail less effort than they would otherwise have done (someone who uses SPARK Ada, please help me out here ;-). And I know of another telco at which QAC found an error that testing had not revealed and which, I was told by management accountants, would have cost UKP 6.3m to fix after system deployment in the field.
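
To make the point concrete, here is a small, hypothetical illustration - not the actual telco defect, which I cannot disclose - of the kind of fault a static checker such as QAC reports on every analysis run but that testing can easily miss:

    /* Hypothetical example: an off-by-one array read. A static
       analyser flags the out-of-bounds access unconditionally;
       tests only fail if the stray element happens to change
       the answer. */
    #include <stdio.h>

    #define N_CHANNELS 8

    static int channel_load[N_CHANNELS];

    int busiest_channel_load(void)
    {
        int max = channel_load[0];
        /* Bug: <= lets i reach N_CHANNELS, reading one element
           past the end of the array. Should be i < N_CHANNELS. */
        for (int i = 1; i <= N_CHANNELS; i++) {
            if (channel_load[i] > max)
                max = channel_load[i];
        }
        return max;
    }

    int main(void)
    {
        for (int i = 0; i < N_CHANNELS; i++)
            channel_load[i] = (i + 1) * 10;
        printf("busiest load: %d\n", busiest_channel_load());
        return 0;
    }

On most runs the out-of-bounds read fetches a value smaller than the true maximum, so every test passes and the fault ships; a static bounds check finds it without executing the code at all.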

Here's a challenge: I claim that the cheapest way to do anything is to get it right first time. Do any of the leprechaun-quoters disagree with that? Or, to put it another way, what serious objections are there to doing all you can to prevent defects in software?


O




