[SystemSafety] A small taste of what we're up against

paul_e.bennett at topmail.co.uk
Thu Oct 25 14:22:34 CEST 2018


On 25/10/2018 at 1:02 PM, "Olwen Morgan" <olwen at phaedsys.com> wrote:
>
>@all:
>
>Is it me, or are some of us, to reverse the metaphor, not seeing the
>trees for the wood? It seems to me that the variable quality of
>available data and studies means that we could go round in circles
>arguing about how much the use of measures to ensure code quality
>improves dependability. To my mind one does not need a big-picture
>argument here. What do we actually agree on?
>
>1.    Do people agree that finding and correcting errors is cheaper
>the earlier it is done in the development process?

Yes. Especially if the errors can be eradicated during the finalisation of
requirements before any other design effort goes in.
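
To give a small, purely hypothetical illustration of catching an error at
the earliest possible point: a constraint that is fixed when requirements
are finalised can often be checked at compile time, before a single test
exists. The names and numbers below are invented for the purpose.

    /* Hypothetical: the requirement says the telemetry buffer must
       hold at least one full 48-byte frame.                         */
    #include <assert.h>

    #define FRAME_SIZE_BYTES    48u
    #define TELEMETRY_BUF_SIZE  32u   /* mistakenly sized too small */

    /* Rejected at compile time (C11), long before unit or
       integration testing would have exposed the mismatch.          */
    static_assert(TELEMETRY_BUF_SIZE >= FRAME_SIZE_BYTES,
                  "telemetry buffer cannot hold a full frame");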

>2.    Do people agree that detecting errors by static analysis is
>significantly cheaper than detecting them by testing? (I've seen
>claims that the per-error detection cost is from 30% to 150% higher
>for testing than for static analysis.)

If you couple static analysis with technical inspections, then yes. There
is, though, a limit to how much inspection is reasonable (especially if
the design has become very complex).
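
As a small, invented illustration (not from any real project) of the kind
of defect static analysis reports without executing anything:

    #include <stddef.h>

    #define N_CHANNELS 8u

    static int readings[N_CHANNELS];

    /* Deliberately faulty fragment: an uninitialised read and an
       off-by-one over-run, both reportable by static analysis
       before a single test case is written or run.                 */
    int worst_reading(void)
    {
        int    worst;                        /* never initialised         */
        size_t i;

        for (i = 0u; i <= N_CHANNELS; i++) { /* '<=' overruns readings[]  */
            if (readings[i] > worst) {       /* reads indeterminate value */
                worst = readings[i];
            }
        }
        return worst;
    }

Finding the same faults by testing, in contrast, needs test cases that
happen to expose them and a harness in which to run them.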

>If the answer to *either* of these questions is yes, then *any* system
>development process should be using static analysis. Even if it does
>not improve dependability, surely making s/w at lower cost makes sense
>financially.


>While it is true that language design strongly influences the
>complexity of static analysis, I don't entirely share your pessimism
>over C. To compensate for its dilapidations, my approach has always
>been to use a paranoiacally draconian subset and throw the best tools
>at the static analysis problem. I still think that approach is viable
>because, although the subset has to be severe, if you write code that
>way, *existing* tools can do a pretty good job of error detection -
>although it is often no trivial task to configure them to get things
>right.

Tools used for such tasks may need to be configured so that developers
cannot change the settings without proper approval all round (a
lock-the-settings approach to prevent cheating).
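
For what it's worth, a purely hypothetical sketch of what code written to
such a draconian subset tends to look like (fixed-width types, explicit
range checks on every input, no pointer arithmetic, a single point of
exit), which is the style that existing tools, with their settings locked
down, analyse well:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define N_CHANNELS 8u

    /* Subset-style rework of a simple job: every input is range-checked,
       widths are explicit, and there is one return statement.            */
    bool worst_reading(const int16_t readings[], uint8_t n_readings,
                       int16_t *worst)
    {
        bool    ok = false;
        uint8_t i;

        if ((readings != NULL) && (worst != NULL) &&
            (n_readings >= 1u) && (n_readings <= N_CHANNELS)) {
            *worst = readings[0];
            for (i = 1u; i < n_readings; i++) {
                if (readings[i] > *worst) {
                    *worst = readings[i];
                }
            }
            ok = true;
        }
        return ok;
    }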

>The need to subset things is not confined to language. People work
>with cut-down UML, or, as I do, a chopped-off-at-the-knees-and-adapted
>subset of SSADM. Cutting things down is forced upon us by the lousy
>state of current standards, but I continue to think it a workable
>strategy - possibly the only strategy - until the standardisation
>processes become less dysfunctional.

Starting at the other end of the complexity scale is an approach others
might consider: instead of sub-setting, you super-set, growing from a
minimal base only to the level of complexity the application actually
requires. That is my preference for high-integrity requirements.


Regards

Paul E. Bennett IEng MIET
Systems Engineer
Lunar Mission One Ambassador
-- 
********************************************************************
Paul E. Bennett IEng MIET.....
Forth based HIDECS Consultancy.............
Mob: +44 (0)7811-639972
Tel: Due to relocation - new number TBA. Please use Mobile.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************


