[SystemSafety] Collected stopgap measures

Peter Bernard Ladkin ladkin at causalis.com
Sun Nov 4 09:33:52 CET 2018


On 2018-11-04 08:47 , Andrew Banks wrote:
> 
>>> However, this documentation is often done after the fact (retrofitting 
>>> documentation to existing systems/software), and it seems to me that any 
>>> realistic process or checklist needs to accept that reality.
> 
> Retrospective documentation serves no purpose, other than allowing boxes to be ticked
> 
> Doing it at the right time helps influence design decisions, and thus contributes to getting things right, first time.

I agree with your second point about timely performance, but I am not sure I agree with the first.

Documenting retrospectively requires you to revisit what has been done, and if the documentation is
part of an assurance case, then you have to (re)construct the arguments for assurance. That is not
"serv[ing] no purpose", but serving a very good purpose: if the reconstruction succeeds, your system
(however built) is assured (in the way the assurance criteria formulate it); if it does not succeed,
then the system as built will be rejected.

This of course depends on the assurance criteria. If they are box-ticking exercises (is this another
name for the checklists recently praised so effusively here?), then they are less likely to be
effective for fitness-for-purpose demonstrations than formal demonstrations that (say) the built
system rigorously fulfils the requirements specification.

An example. Suppose you have developed a system down to object code from requirements that were
never checked. Your system is nominally ready for customer deployment. An assessor asks, "How do you
know your requirements are consistent and (relatively) complete?" Performing the checks and deriving
the arguments at this stage is neither more nor less effective than having done so before any of the
forward development from the requirements specification had taken place. (Of course, it costs you
and your client a lot more if you find out the requirements are inconsistent and you have to go back
and retrofit.)
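
As an aside on what such a consistency check can look like in the small, here is a minimal sketch
(my own illustration; the requirements and their encoding are invented, not taken from any real
system). It brute-forces the satisfiability of a handful of requirements expressed as propositional
constraints, which is one elementary way of demonstrating that they are jointly consistent.

    # Illustration only: three invented requirements over three Boolean system
    # states, checked for joint satisfiability (i.e. consistency) by brute force.
    from itertools import product

    VARS = ["auto_mode", "manual_override", "alarm_active"]

    REQUIREMENTS = [
        # R1: manual override is only available outside automatic mode
        lambda v: not v["manual_override"] or not v["auto_mode"],
        # R2: the alarm must be active whenever manual override is engaged
        lambda v: not v["manual_override"] or v["alarm_active"],
        # R3: automatic mode and the alarm are never active together
        lambda v: not (v["auto_mode"] and v["alarm_active"]),
    ]

    def consistent(reqs):
        """Return a satisfying assignment if the requirements are jointly satisfiable, else None."""
        for values in product([False, True], repeat=len(VARS)):
            assignment = dict(zip(VARS, values))
            if all(r(assignment) for r in reqs):
                return assignment
        return None

    print(consistent(REQUIREMENTS) is not None)  # True: these three are consistent

For realistic requirement sets one would of course use a proper formal method and tool support
rather than enumeration; the point here is only what "checking consistency" amounts to.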

Moving on to the second point about timely performance, there have been all sorts of rumours about
data on catching flaws "early" in development. I remember Morris Chudleigh wrote a paper in the
1990s for a medical informatics conference in which he used a simplified waterfall enumeration of
SW development stages and suggested (I think) that the cost of retrofit went up by a factor of ten
for each stage you went through before the flaw was discovered. But I haven't found my copy, Morris
doesn't have one either, and I don't remember which dataset was being referenced. Peter Amey of
Praxis was giving talks based on this phenomenon a decade ago. His data set was (at least) SW
completed and deployed by Praxis (as it then was). Tim Schürmann found the NASA study (his mail of
20th September). Then last Friday I revisited a classic, Barry Boehm's Software Engineering
Economics, from 1981. I have had his second COCOMO II book for a long time, but that doesn't include
the basic material available in the 1981 book. The 1981 book had been out of print, but
Prentice-Hall seem to have brought it back as print-on-demand, for what counts as a reasonable price
nowadays.

There it is, in Figure 4.2, "Increase in cost-to-fix or change software throughout life-cycle". The
data are gathered largely from TRW, whose Software Research and Technology Division Boehm ran,
although an IBM example is included. Getting it right first time is a good idea.
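
To see what the factor-of-ten claim implies arithmetically, here is a minimal sketch (my own
illustration, with an assumed stage enumeration and an assumed constant multiplier of ten per stage;
neither is taken from Chudleigh's paper or Boehm's Figure 4.2):

    # Illustration only: assumes a simplified waterfall enumeration and a constant
    # tenfold cost escalation per stage passed after the flaw was introduced.
    STAGES = ["requirements", "design", "code", "unit test", "integration", "deployment"]
    ESCALATION_PER_STAGE = 10  # assumed constant multiplier

    def relative_fix_cost(introduced, discovered):
        """Cost of fixing a flaw, normalised to 1 at the stage where it was introduced."""
        i, d = STAGES.index(introduced), STAGES.index(discovered)
        if d < i:
            raise ValueError("a flaw cannot be discovered before it is introduced")
        return ESCALATION_PER_STAGE ** (d - i)

    print(relative_fix_cost("requirements", "design"))      # 10
    print(relative_fix_cost("requirements", "deployment"))  # 100000

Under these assumptions a requirements flaw that survives to deployment costs five orders of
magnitude more to fix than one caught during requirements analysis.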

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de




