[SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Steve Tockey Steve.Tockey at construx.com
Mon Jul 1 18:52:07 CEST 2013


Derek,

"The real elephant in the room is the performance of the people writing
the software.

Measurements of source code may be thin on the ground, but they are
still orders of magnitude more plentiful than measurements of developers."

Yes, I agree. But now that raises the obvious follow-on question of "what
criteria would you measure developers against?"

Just as a data point, how many people on this list are aware of the
"Software Engineering Body of Knowledge" (aka "SWEBOK")?


-- steve



-----Original Message-----
From: Derek M Jones <derek at knosof.co.uk>
Organization: Knowledge Software, Ltd
Date: Sunday, June 30, 2013 6:12 PM
To: "systemsafety at techfak.uni-bielefeld.de"
<systemsafety at techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

All,

> I am intrigued at how long this conversation has endured. It seems to
> ignore the elephant in the room: the massively complex, fragmented and
> error-prone environment in which modern software products run.

The real elephant in the room is the performance of the people writing
the software.

Measurements of source code may be thin on the ground, but they are
still orders of magnitude more plentiful than measurements of developers.

> I'm not against calculating McCabe numbers. In fact, getting all coders
> to run their code through a McCabe analyser prior to code review is a
> great idea (a sketch of such a check appears after the list below). It
> encourages people to keep it simple. But the complexity of the
> application code is the tip of the iceberg when you are considering
> failure modes. As I have said before (a web app being a worst-case
> scenario), several environmental factors impact what a user actually
> encounters at the user interface:
> - various language libraries downloaded from the web
> - the language engine and its configuration
> - the web page markup language and its configuration via cascading
>   style sheets
> - the database server and its configuration
> - the web server and its configuration
> - various brands of browser, with their random interpretations of the
>   served page
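
A minimal sketch of the kind of pre-review gate described above,
assuming Python source and only the standard-library ast module; the
decision-node list and the threshold of 10 are illustrative choices,
not anything from the original message:

    import ast
    import sys

    # Decision points that each add one to the McCabe measure; counting
    # one per BoolOp node is a simplification of the usual definition.
    DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp,
                 ast.ExceptHandler, ast.BoolOp)

    def mccabe(func):
        """Cyclomatic complexity = 1 + number of decision points."""
        return 1 + sum(isinstance(n, DECISIONS) for n in ast.walk(func))

    tree = ast.parse(open(sys.argv[1]).read())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            cc = mccabe(node)
            print(f"{node.name}: {cc}" + ("  <-- review" if cc > 10 else ""))

Run over each source file before review; anything flagged gets a second look.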
>
> This is the real world that software lives in these days. I encourage
> the brains trust on this list to engage with the aggressive ugliness
> that is the real world and consider how we might deal with it. Take,
> for example, the rise of complex configurable systems. Over the years
> it has become cool to avoid writing software by manipulating behaviour
> with configuration data. All this has done is move complexity into the
> configuration data and the tools that maintain that data. There is
> nothing more obscure than a bucket of configuration data. It is
> frighteningly easy to lose touch with what it means. The other
> disturbing thing is that the tools that manage this information are
> often held together with dental floss and baling wire, together with
> the knowledge of one or two critical people. A friend of mine once held
> this position. One day he walked into the boss's office and said, "If I
> am to continue in this job, two things are going to happen: 1) I will
> be the only person with write access to the configuration database;
> 2) I will receive a $10,000 a year raise." The boss's response: "Done!
> Carry on." It was a short meeting. A screw-up in system configuration
> could bring a 26,000-point SCADA system, performing critical control
> tasks in a rail network, to its knees in a microsecond. The boss was
> an insightful dude.
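
One way to stay in touch with what a bucket of configuration data means
is to validate it against an explicit schema. A minimal sketch, assuming
Python; the record layout is hypothetical, loosely modelled on a
SCADA-style tag database:

    def validate_point(point):
        """Return a list of problems with one configuration record."""
        errors = []
        if not isinstance(point.get("tag"), str) or not point.get("tag"):
            errors.append("missing or empty tag name")
        if point.get("type") not in {"analog", "digital"}:
            errors.append("unknown point type: %r" % point.get("type"))
        lo, hi = point.get("low_limit"), point.get("high_limit")
        if point.get("type") == "analog" and not (
                isinstance(lo, (int, float)) and
                isinstance(hi, (int, float)) and lo < hi):
            errors.append("analog point needs numeric low_limit < high_limit")
        return errors

    # A record like this would be flagged long before it reached the plant:
    print(validate_point({"tag": "PUMP_1_FLOW", "type": "analog",
                          "low_limit": 100, "high_limit": 100}))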
>
> Our ability to control anything rests on the accuracy of our mental
> model of its behaviour. Right now, large systems projects in complex
> environments defy our feeble attempts at modelling. Hopefully this
> will not always be the case. Until that time there are brute-force
> solutions. I once worked on an embedded system responsible for nuclear
> reactor shutdown. They wouldn't let us use an operating system.
>
> Cheers
> Les
>
> -----Original Message-----
> From: systemsafety-bounces at techfak.uni-bielefeld.de
> [mailto:systemsafety-bounces at techfak.uni-bielefeld.de] On Behalf Of
> Martyn Thomas
> Sent: Sunday, June 30, 2013 1:37 AM
> Cc: systemsafety at techfak.uni-bielefeld.de
> Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring
> Software]
>
> There are, as you say, many practices that have been said to aid
> program understanding, with some justification.
>
> The use of strongly typed languages, for example, or avoiding the use
> of go-to statements, global variables, operator overloading, and
> inheritance.
>
> Many programmers resist such advice, and you can usually construct a
> situation where the deprecated language construct is the clearest and
> simplest way to implement a design, which makes it hard to get such
> advice generally adopted.
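
As a concrete illustration of why such advice meets resistance, a sketch
in Python (the Vec2 type is hypothetical): operator overloading, one of
the constructs deprecated above, is arguably the clearest way to write a
small value type.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Vec2:
        x: float
        y: float

        def __add__(self, other):
            return Vec2(self.x + other.x, self.y + other.y)

        def __mul__(self, k):
            return Vec2(self.x * k, self.y * k)

    # The same computation, with and without the overloaded operators:
    p = Vec2(1.0, 2.0) + Vec2(3.0, 4.0) * 0.5
    # p = vec_add(Vec2(1.0, 2.0), vec_scale(Vec2(3.0, 4.0), 0.5))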
>
> But, even if these factors and the measures that you cite could be shown
> to have a strong impact on defect density, they are about costs and
> time, not directly about safety.
>
> The safety goal is to have sufficient evidence to justify high
> confidence that the software has specific properties that have been
> determined to be critical for the safety of a particular system in a
> particular operating environment. That evidence cannot depend on human
> inspection - it will always need automated analysis.
>
> If the software has fewer defects introduced by the programmers and if
> it is easier to understand, then achieving the evidence necessary for
> high assurance will be more likely to succeed and require less rework
> and cost less. But these are quality criteria (albeit very important
> ones) not safety criteria.
>
> Martyn
>
>
>
>
> On 28/06/2013 22:09, Steve Tockey wrote:
>>
>> That's what I've been trying to get across all along. The evidence
>> that I have is that three indicators are pretty good measures [of lack
>> of clarity]:
>>      Cyclomatic complexity
>>      Depth of (decision) nesting
>>      Fan out
>>
>> As I've said, I'm sure there are other relevant indicators as well.
>> What's missing is the correlation analysis that gives us the empirical
>> evidence that the indicators we look at are truly relevant. Both the
>> Lorenz & Kidd and Chidamber & Kemerer publications (cited earlier)
>> proposed about 20 indicators, but I'm not aware of any serious
>> correlation analysis having been done. I'm sure that most of what's in
>> those publications is irrelevant, but I'm sure a couple of them are
>> relevant. We just need to find a way to get the data and do the
>> analysis.
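
That missing step is mostly mechanical once the data exists. A sketch of
the correlation analysis in Python (3.10+ for statistics.correlation);
the per-module numbers below are hypothetical placeholders, not real
measurements:

    from statistics import correlation

    # Hypothetical data: (cyclomatic, nesting depth, fan-out, defects)
    # per module, gathered however the project can manage it.
    modules = {
        "parser":    (38, 6, 14, 9),
        "scheduler": (12, 3,  7, 2),
        "ui_glue":   (25, 5, 21, 5),
        "config_io": ( 7, 2,  4, 1),
    }

    defects = [row[3] for row in modules.values()]
    for i, name in enumerate(["cyclomatic", "nesting", "fan_out"]):
        values = [row[i] for row in modules.values()]
        print("%s: r = %.2f" % (name, correlation(values, defects)))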
>

-- 
Derek M. Jones                  tel: +44 (0) 1252 520 667
Knowledge Software Ltd          blog: shape-of-code.coding-guidelines.com
Software analysis               http://www.knosof.co.uk
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE


