[SystemSafety] Big claims and duff data

DREW Rae d.rae at griffith.edu.au
Thu Sep 20 23:15:58 CEST 2018


Olwen,
The argument against "getting it right first time" is that we don't always know what "right" looks like first time. Design, as it is practiced in the real world, is iterative, and complex systems involve many such parallel and serial iterative design processes.

In any other field of practice, if I saw managers being as hypercritical of frontline practices as software safety people are about software development - particularly with the moral overtones - I would be deeply skeptical that they really understood the constraints and concerns of frontline work.

Most workers, most of the time, are neither stupid nor evil. If they aren't working the way managers think they should be working, chances are there's something the managers don't understand about the work.

Drew
________________________________
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Olwen Morgan <olwen.morgan at btinternet.com>
Sent: Friday, September 21, 2018 1:28:46 AM
To: systemsafety at lists.techfak.uni-bielefeld.de
Subject: [SystemSafety] Big claims and duff data


Getting decent cost and effort metrics data has always been difficult (actually virtually impossible) in software engineering. Even if the data from a single project is reliable, conclusions drawn from it may well not be validly generalisable to other projects. I once read a paper about a project in a European telco for which it was claimed that use of QAC had cut testing effort by 50%. A few years afterwards, I worked with someone who had worked on that project. When I showed him the paper, his reply was, "Frankly, that project was in such a mess that any common-sense improvement would have produced comparable savings." 'Nuff said.

Relying on data is OK provided that you are getting repeatable and reproducible results from it - and that goes as much for the results-wavers as for the debunkers. Most firms don't even get results that are repeatable, let alone reproducible. I am always pretty suspicious about published metrics data because it's very difficult to ensure its accuracy, let alone its general validity.

Nevertheless, it is prudent to recall that absence of evidence is not evidence of absence.

I think it is true to say that projects using state-of-the-art static checking tools *do* find that their testing phases entail less effort than they would otherwise have done (someone who uses SPARK Ada - please help me out here ;-) And I know of another telco at which QAC found an error that testing had not revealed and which, I was told by management accountants, would have cost UKP 6.3m to fix after system deployment in the field.
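
As a purely illustrative sketch of my own (not taken from either project above), here is the kind of boundary defect that a static checker such as QAC reports routinely, yet which testing misses unless the exact boundary value happens to be exercised:

    #include <stdio.h>

    #define N_CHANNELS 8

    static int channel_reading[N_CHANNELS];

    /* Off-by-one in the range check: ch == N_CHANNELS slips through
       and reads one element past the end of the array. */
    int read_channel(int ch)
    {
        if (ch < 0 || ch > N_CHANNELS)
            return -1;
        return channel_reading[ch];
    }

    int main(void)
    {
        printf("%d\n", read_channel(3));           /* typical test input: passes */
        printf("%d\n", read_channel(N_CHANNELS));  /* boundary: silent out-of-bounds read */
        return 0;
    }

A typical test suite exercises channels 0 to 7 and passes; the out-of-bounds read at the boundary fails silently on most targets. That is exactly the class of latent error static analysis earns its keep by finding.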

Here's a challenge: I claim that the cheapest way to do anything is to get it right first time. Do any of the leprechaun-quoters disagree with that? Or, to put it another way, what serious objections are there to doing all you can to prevent defects in software?


O




