[SystemSafety] systemsafety Digest, Vol 66, Issue 5

Roderick Chapman roderick.chapman at googlemail.com
Thu Jan 4 12:16:56 CET 2018


On 04/01/2018 11:00, systemsafety-request at lists.techfak.uni-bielefeld.de wrote:

> When I said "Sometimes it takes no more than a brief look to know that a code sample is not good enough to include in the final product."
I agree. The SEI's PSP is a strong proponent of the "Broken Windows" approach to defect management. In short: to have any chance of verifying the big, important things, you have to take care of all the "little things" first. In software, the "little things" are just what several contributors have mentioned: consistent coding style, choice of names, presence of meaningful (and up-to-date) comments, and (yes) some basic limits on a set of the simpler metrics, such as McCabe's cyclomatic complexity.
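As a concrete illustration of what such a simple-metrics gate might look like, here is a minimal sketch in Python. The limit of 10 and the exact counting rules are illustrative assumptions on my part (a real project would more likely use an off-the-shelf checker); it approximates McCabe complexity per function and fails if any function exceeds the limit.

import ast
import sys

# Node types that each add one decision point (and hence +1 complexity).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(func_node):
    # McCabe-style count: 1 + the number of decision points in the body.
    # (For brevity, nested functions are counted into their parent here.)
    complexity = 1
    for node in ast.walk(func_node):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds len(values) - 1 decision points.
            complexity += len(node.values) - 1
    return complexity

def check_file(path, limit=10):
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    ok = True
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            cc = cyclomatic_complexity(node)
            if cc > limit:
                print(f"{path}:{node.lineno}: {node.name}: "
                      f"complexity {cc} exceeds limit {limit}")
                ok = False
    return ok

if __name__ == "__main__":
    results = [check_file(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)

The point is that this kind of check is completely mechanical: it runs in seconds, so there is no excuse for not enforcing it before code ever reaches review.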

On the subject of "gaming" metrics, I would really like to see anyone try to "game" the SPARK theorem prover. The teams at Altran use "proof completeness" (i.e. essentially the false-alarm rate of the prover) as a pre-commit target. Bottom line: either the theorem prover says your code is 100% OK with no false alarms, or it doesn't go into CM. That is the basic discipline on the NATS iFACTS programme, for example.
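To make the pre-commit discipline concrete, here is a minimal sketch of such a gate as a Python hook. It assumes the standard GNATprove invocation ("gnatprove -P <project>") and the usual "medium:"/"high:" message format for unproved checks; the project file name is hypothetical, and the real process at Altran certainly differs in detail.

import subprocess
import sys

PROJECT = "ifacts.gpr"   # hypothetical project file name

def proof_is_complete():
    # 'gnatprove -P <project>' is the standard invocation; unproved
    # checks are typically reported as 'medium:' or 'high:' messages.
    result = subprocess.run(
        ["gnatprove", "-P", PROJECT],
        capture_output=True, text=True)
    unproved = [line for line in result.stdout.splitlines()
                if " medium: " in line or " high: " in line]
    for line in unproved:
        print(line)
    return result.returncode == 0 and not unproved

if __name__ == "__main__":
    # A non-zero exit aborts the commit (standard pre-commit behaviour).
    sys.exit(0 if proof_is_complete() else 1)

Wired in as a pre-commit hook, this makes "100% proved, no false alarms" a property of everything in CM, not an aspiration checked after the fact.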

  - Rod



