[SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Steve Tockey Steve.Tockey at construx.com
Mon Jul 1 18:46:59 CEST 2013


Les,

"I am intrigued at how long this conversation has endured. It seems to
ignore
the elephant in the room: the massively complex, fragmented and error prone
environment in which modern software products run."

I (personally) haven't been ignoring it, I'm just trying to take things
one issue at a time.

"This is the real world that software lives in these days. I encourage the
brains trust on this list to engage with the aggressive ugliness that is
the
real world and consider how we might deal with it."

Actually, there are some very effective strategies for dealing with the
ugliness you're referring to.

Ultimately, complexity is the enemy. The complexity isn't ever going to
simply go away, but there are ways to manage it. First is to realize that
(software) complexity comes in two fundamental types, "essential" and
"accidental".

"Essential complexity" is the complexity that's inherent in the problem
being solved--like how to navigate a commercial airliner. "Accidental
complexity" is the complexity that appears in the solution space:
multi-threaded code, stored procedures, database denormalization, etc.
Further, the accidental complexity comes in two flavors, "necessary" and
"unnecessary". "Necessary accidental complexity" is the accidental
complexity that must be present in order to meet the (non-functional)
requirements. "Unnecessary accidental complexity" is there because someone
did something stupid, essentially.

The first step is to simply go in with an attitude of, "Eliminate
unnecessary accidental complexity and manage the rest."
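
(A hypothetical illustration, not from any real system: the same
requirement, "sum the non-negative readings", written first with
unnecessary accidental complexity and then without it. Python, with
invented names.)

from functools import reduce

# Unnecessary accidental complexity: a nested-lambda one-liner whose only
# virtue is looking clever on a resume.
def total_clever(readings):
    return reduce(lambda acc, r: acc + (lambda x: x if x >= 0 else 0)(r),
                  readings, 0)

# The same requirement with the accidental complexity removed.
def total_plain(readings):
    total = 0
    for r in readings:
        if r >= 0:
            total += r
    return total

assert total_clever([3, -1, 4]) == total_plain([3, -1, 4]) == 7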

So my questions back to you on your web application example are:

"How much of that complexity is essential to the problem being solved?" My
feeling is that--having seen a lot of the web applications out there--not
much of the complexity is truly essential.

"How much of that complexity is necessary accidental complexity?" Clearly
some of it.

"How much of that complexity is unnecessary accidental complexity?" I
might propose that a non-zero amount of the complexities you are talking
about could be unnecessary. The programmer did it that way because it was
hip, trendy, sexy, or it looked really cool on their resume/CV.


Second, there is a set of very effective management tools that software
professionals can use to manage the essential and necessary accidental
complexities:
	Abstraction
	Encapsulation
	Cohesion
	Coupling
	Design-to-invariants & Design-for-change
When these are properly applied by professionals who know what they are
doing, what would otherwise be massively complex software systems become
not that big of a deal after all. I assert that much of the apparent
complexity in software systems was put there by "highly paid amateur
programmers" who really shouldn't have been doing software development in
the first place.
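
(To make two of those tools concrete, a minimal sketch in Python; the
Setpoint class and its range are invented for illustration. Encapsulation
hides the representation; design-to-invariants means the safe range is
enforced in exactly one place rather than at every call site.)

class Setpoint:
    def __init__(self, low, high, value):
        if not (low <= value <= high):
            raise ValueError("initial value violates the invariant")
        self._low, self._high = low, high
        self._value = value

    def get(self):
        return self._value

    def set(self, new_value):
        # The invariant low <= value <= high is checked here and only here.
        if not (self._low <= new_value <= self._high):
            raise ValueError("new value violates the invariant")
        self._value = new_value

sp = Setpoint(low=0.0, high=100.0, value=50.0)
sp.set(75.0)     # fine
# sp.set(250.0)  # raises ValueError instead of corrupting state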

"Our ability to control anything rests on the accuracy of our mental model
of
its behaviour."

Agreed.

"Right now large systems projects in complex environments defy
our feeble attempts at modelling."

Here is where I disagree. There are, actually, reasonable approaches to
modeling software in some very complex systems.

"Hopefully this will not always be the
case."

It doesn't need to be; IMHO, the problem was largely solved 20+ years ago.


Cheers,

-- steve



-----Original Message-----
From: Les Chambers <les at chambers.com.au>
Date: Sunday, June 30, 2013 5:01 PM
To: "martyn at thomas-associates.co.uk" <martyn at thomas-associates.co.uk>
Cc: "systemsafety at techfak.uni-bielefeld.de"
<systemsafety at techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring
Software]

I am intrigued at how long this conversation has endured. It seems to
ignore the elephant in the room: the massively complex, fragmented and
error prone environment in which modern software products run. I'm not
against calculating McCabe numbers. In fact getting all coders to run
their code through a McCabe analyser prior to code review is a great idea.
It encourages people to keep it simple. But the complexity of the
application code is the tip of the iceberg when you are considering
failure modes. As I have said before (taking a web app as a worst-case
scenario), several environmental factors impact what a user actually
encounters at the user interface:
- various language libraries downloaded from the web
- the language engine and its configuration
- the web page markup language and its configuration via cascading style
  sheets
- the database server and its configuration
- the web server and its configuration
- various brands of browser with their random interpretation of the served
  page
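
(For concreteness, a minimal sketch of the kind of count a McCabe analyser
performs, assuming Python and its standard ast module; the rule set is a
rough simplification of the full metric.)

import ast

def cyclomatic_complexity(source: str) -> int:
    # Approximate McCabe number: 1 plus one per decision point.
    decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                 ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, decisions)
                   for node in ast.walk(ast.parse(source)))

code = """
def classify(x):
    if x < 0 and x != -1:
        return "negative"
    for i in range(x):
        if i % 2:
            return "odd seen"
    return "done"
"""
print(cyclomatic_complexity(code))  # 5 under this simplified rule set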

This is the real world that software lives in these days. I encourage the
brains trust on this list to engage with the aggressive ugliness that is
the real world and consider how we might deal with it. For example, the
rise of complex configurable systems. Over the years it's become cool to
avoid writing software by manipulating behaviour with configuration data.
All this has done is move complexity into the configuration data and the
tools that maintain that data. There is nothing more obscure than a bucket
of configuration data. It is frighteningly easy to lose touch with what it
means. The other disturbing thing is that the tools that manage this
information are often held together with dental floss and baling wire,
together with the knowledge of one or two critical people. A friend of
mine once held this position. One day he walked into the boss's office and
said, "If I am to continue in this job, two things are going to happen:
1) I will be the only person who will have write access to the
configuration database; 2) I will receive a $10,000 a year raise." The
boss's response: "Done! Carry on." It was a short meeting. A screw-up in
system configuration could bring a 26,000 point SCADA system, performing
critical control tasks in a rail network, to its knees in a microsecond.
The boss was an insightful dude.
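
(A hypothetical sketch of the obscurity being described; the dispatch
table, device names, and actions are invented. Nothing in the code says
what happens when a pump trips; that knowledge lives only in the data and
in the heads of the one or two people who maintain it.)

# Behaviour "configured", not written: (device, event) -> (target, action)
RULES = {
    ("pump_3", "trip"):  ("valve_7", "close"),
    ("pump_3", "start"): ("valve_7", "open"),
    # ...and hundreds more entries in a real system...
}

def on_event(device, event):
    return RULES.get((device, event), (None, "ignore"))

print(on_event("pump_3", "trip"))  # ('valve_7', 'close'), but why? Ask the maintainer.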

Our ability to control anything rests on the accuracy of our mental model
of its behaviour. Right now large systems projects in complex environments
defy our feeble attempts at modelling. Hopefully this will not always be
the case. Until that time there are brute force solutions. I once worked
on an embedded system responsible for nuclear reactor shutdown. They
wouldn't let us use an operating system.

Cheers
Les

-----Original Message-----
From: systemsafety-bounces at techfak.uni-bielefeld.de
[mailto:systemsafety-bounces at techfak.uni-bielefeld.de] On Behalf Of Martyn
Thomas
Sent: Sunday, June 30, 2013 1:37 AM
Cc: systemsafety at techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring
Software]

There are, as you say, many things that have been said to aid program
understanding, with some justification.

The use of strongly typed languages, for example, or avoiding the use of
go-to statements, global variables, operator overloading, and inheritance.

Many programmers resist such advice and you can usually construct a
situation where the deprecated language construct is the clearest and
simplest way to implement a design, which makes it hard to get such
advice generally adopted.
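
(A hypothetical illustration of why one deprecated construct, the global
variable, resists both review and automated analysis; the names and the
formula are invented. The first function's result cannot be judged from
its arguments alone, so it cannot be checked in isolation.)

friction = 0.7  # module-level global, mutated from elsewhere

def braking_distance(speed_mps):
    # Hidden dependency: correctness depends on who last wrote to friction.
    return speed_mps ** 2 / (2 * 9.81 * friction)

def braking_distance_explicit(speed_mps, friction):
    # The global removed: every input is now visible and checkable locally.
    return speed_mps ** 2 / (2 * 9.81 * friction)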

But, even if these factors and the measures that you cite could be shown
to have a strong impact on defect density, they are about costs and
time, not directly about safety.

The safety goal is to have sufficient evidence to justify high
confidence that the software has specific properties that have been
determined to be critical for the safety of a particular system in a
particular operating environment. That evidence cannot depend on human
inspection - it will always need automated analysis.

If the software has fewer defects introduced by the programmers and if
it is easier to understand, then achieving the evidence necessary for
high assurance will be more likely to succeed and require less rework
and cost less. But these are quality criteria (albeit very important
ones) not safety criteria.

Martyn




On 28/06/2013 22:09, Steve Tockey wrote:
>
> That's what I've been trying to get across all along. The evidence
> that I have is that three indicators are pretty good measures [of lack
> of clarity]:
>     Cyclomatic complexity
>     Depth of (decision) nesting
>     Fan out
>
> As I've said, I'm sure there are other relevant indicators as well.
> What's missing is the correlation analysis that gives us the empirical
> evidence that the indicators we look at are truly relevant. Both the
> Lorenz & Kidd and Chidamber & Kemerer publications (cited earlier)
> proposed about 20 indicators, but I'm not aware of any serious
> correlation analysis having been done. I'm sure that most of what's in
> those publications is irrelevant, but I'm sure a couple of them are
> relevant. We just need to find a way to get the data and do the analysis.
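
(A sketch of how two of these indicators might be counted, assuming Python
and its standard ast module; the counting rules are illustrative, not
taken from the cited publications.)

import ast

def max_decision_nesting(node, depth=0):
    # Deepest nesting of if/for/while/try constructs.
    depth += isinstance(node, (ast.If, ast.For, ast.While, ast.Try))
    return max([depth] + [max_decision_nesting(child, depth)
                          for child in ast.iter_child_nodes(node)])

def fan_out(node):
    # Number of distinct simple names this code calls.
    return len({n.func.id for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)})

func = ast.parse("""
def f(xs):
    for x in xs:
        if x:
            print(len(str(x)))
""").body[0]
print(max_decision_nesting(func), fan_out(func))  # 2 3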

_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
