[SystemSafety] Collected stopgap measures

Paul Sherwood paul.sherwood at codethink.co.uk
Sat Nov 17 18:38:01 CET 2018


Hi Matthew
>>> So no you don't 'necessarily' need a software specification to 
>>> develop a software product but to convince a regulator that you 
>>> haven't broken the chain of custody from system specification to 
>>> source code* you'll need to bring evidence. And as we know strong 
>>> claims demand strong evidence.
>> 
>> This is new to me... what's the value of "chain of custody from system 
>> specification to source code"? Are you saying that the system spec has 
>> to lead to the existence of source code somehow?
>> 
On 2018-11-16 00:48, Matthew Squair wrote:
> The value of a ‘chain of custody’ approach is twofold. To
> demonstrate that the software does what the system requires it to do,
> and that it does nothing else. The looser that chain gets the more
> likely it is that a requirement will be missed (somewhere) or a
> misinterpretation will not be picked up.

In other contexts I've understood 'chain of custody' to mean evidence of 
who did and/or looked after things, with no gaps.

If I understand you correctly, I think you are using 'chain of custody' 
here to describe the mapping from requirements (probably via 
intermediate documents, e.g. architecture) to source code?

Is that a common use of this phrase in the safety community?

> Doing it in a stepwise fashion is important from a practical point of
> view if your ability to verify on a representative target system or in
> the actual environment is limited. For example landing on Mars for
> real is not a good place to be formally demonstrating the squat
> software function won’t accidentally turn the engines off (as
> happened to the MPL).

I accept that in some cases it may be possible, and perhaps even clearly 
best, to proceed stepwise from requirements down the waterfall to code.

My iterations on this discussion are an attempt to establish what is 
**generally** useful/practicable/true etc.

In particular I'm wondering how to approach re-use of established 
software which has arisen via other methods, without ticking any of the 
standards boxes. Clearly if this is to be justified at all, some 
evidence needs to be gathered and/or constructed.

> If you’re interested in seeking it out I think it was Parnas who
> wrote a paper on ‘how to fake a rational design process’ which
> addresses your comments about the difference between aspiration and
> messy reality.

Yes, I am interested, and thanks to Martyn for following up with the 
link [1].

It strikes me as interesting that Parnas' reasoning seems to align with 
core practices that the Linux kernel community and others have 
established at the code contribution and review level. There anyone 
making a code change must ultimately deliver it as an idealised series 
of logically separate, non-breaking patches which can be reviewed and 
applied in order. In other words, the history and content of the change 
are adjusted until they satisfy the reviewers, and the change is 
committed once approved. As a result the evidenced history is tidy, yet 
obviously very different from how the development actually happened.
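For anyone unfamiliar with that workflow, here is a rough sketch using 
standard git commands (the repository, file names and commit messages 
are invented for illustration; this is not any project's literal 
tooling, just the general shape of "tidy the history before review"):

```shell
# Sketch: rewrite a messy local history into a tidy, reviewable series.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial"

# Messy development history: WIP commits and fixups, as it really happened.
echo "parse input"     > parser.txt;    git add .; git commit -qm "wip"
echo "parse input, v2" > parser.txt;    git add .; git commit -qm "fix typo"
echo "validate input"  > validator.txt; git add .; git commit -qm "more wip"

# Rewrite it as two logically separate, self-contained patches
# (a non-interactive equivalent of what `git rebase -i` is used for).
git reset -q --soft HEAD~3   # back to "initial", all changes staged
git reset -q                 # unstage everything
git add parser.txt;    git commit -qm "parser: handle input parsing"
git add validator.txt; git commit -qm "validator: check parsed input"

git log --oneline   # the evidenced history is now the idealised series
```

The point, as with Parnas, is that the published record is a rational 
reconstruction: the reviewers see the clean series, not the detours.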

BTW I'd like to thank you for your summary of the Saltzer and Schroeder 
principles [2] - that's one of the most helpful short documents I've 
come across in quite a long time.

br
Paul

[1] 
https://www.researchgate.net/publication/260649064_A_Rational_Design_Process_How_and_Why_to_Fake_it
[2] 
https://criticaluncertainties.com/reference/saltzer-and-schroeders-principles/
