[SystemSafety] OpenSSL Bug

Steve Tockey Steve.Tockey at construx.com
Thu Apr 17 00:27:27 CEST 2014


Bertrand,
Here are my answers to your questions:

"Is the programmer expected to have a brain and, if so, of which type
(requirements developer)?"
In today's software development environment, yes, the typical programmer
must be expected not only to have a workable brain but also to be
reasonably adept at requirements development. Remarkably few, if any,
organizations are in a position to give the programmer a complete set
of requirements. Requirements development, as practiced today, is woefully
inadequate. The programmer is forced--by necessity--to do some
requirements work on their own. The only other alternative is for the
programmer to simply guess at what they think the requirements might be,
and history shows that nobody is very good at that. And, if it's not
already obvious, guessing wrong --> defect.


"Is a programmer expected to develop requirements when they think
requirements are missing?"
At an absolute minimum, a marginally competent programmer should be
expected *to go ask someone* when they think requirements are missing.


"Isn't a healthy development process (and this is not restricted to SW)
based on sticking to the requirements and, if necessary, going back to
requirements capture rather than engineering requirements in lower phases
of the development cycle?"
Yes. The problem is that the vast majority of software teams have
amazingly unhealthy development processes which include amazingly
unhealthy requirements practices, thus forcing the requirements capture
into the lower phases of the development cycle. It's possible to do a much
better job on requirements. Simply, most people don't know or don't care
about those better ways.


-- steve



-----Original Message-----
From: "RICQUE Bertrand   (SAGEM DEFENSE SECURITE)"
<bertrand.ricque at sagem.com>
Date: Wednesday, April 16, 2014 8:47 AM
To: Steve Tockey <Steve.Tockey at construx.com>, David MENTRE
<dmentre at linux-france.org>, "systemsafety at lists.techfak.uni-bielefeld.de"
<systemsafety at lists.techfak.uni-bielefeld.de>
Subject: RE: [SystemSafety] OpenSSL Bug

"
And even if there were no explicitly stated requirement, had the developer
had any brains and used Design-by-Contract then they would surely have
noticed that the value range of that parameter's data type was much bigger
than what the function was supposed to be able to handle. The need for a
range check would have been bloody obvious.
"
There are two things here.

Is the programmer expected to have a brain and, if so, of which type
(requirements developer)?

Is a programmer expected to develop requirements when they think
requirements are missing?

Isn't a healthy development process (and this is not restricted to SW)
based on sticking to the requirements and, if necessary, going back to
requirements capture rather than engineering requirements in lower phases
of the development cycle?

Bertrand Ricque
Program Manager
Optronics and Defence Division
Sights Program
Mob : +33 6 87 47 84 64
Tel : +33 1 58 11 96 82
Bertrand.ricque at sagem.com



-----Original Message-----
From: systemsafety-bounces at lists.techfak.uni-bielefeld.de
[mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
Steve Tockey
Sent: Wednesday, April 16, 2014 5:36 PM
To: David MENTRE; systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] OpenSSL Bug


David,

"Probably because the whole discussion started from a bug (in OpenSSL)
that precisely lies in those 7%. :-)"

Actually, I'm going to respectfully disagree with that. I'm going to claim
I can trace it back to incomplete requirements and the resulting inadequate
design. A parameter on a function call was missing a range check. Why?
Because there wasn't an explicit requirement that the value represented in
that parameter be within a specific range. Had there been an explicit
requirement, and had the designer/programmer paid attention to that
requirement, then they would have needed a range-checking 'if' statement to
satisfy that requirement.

And even if there were no explicitly stated requirement, had the developer
had any brains and used Design-by-Contract then they would surely have
noticed that the value range of that parameter's data type was much bigger
than what the function was supposed to be able to handle. The need for a
range check would have been bloody obvious.
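In C terms, the missing precondition amounts to something like the
following. This is a simplified, hypothetical sketch of a heartbeat-style
handler, not the actual OpenSSL code; `copy_payload` and its parameters
are illustrative:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical heartbeat-style record handler: the claimed payload
 * length arrives inside the message itself and must be validated
 * against what was actually received. */
int copy_payload(unsigned char *out, size_t out_cap,
                 const unsigned char *msg, size_t msg_len,
                 size_t claimed_len)
{
    /* The range check being discussed -- a Design-by-Contract style
     * precondition. Without it, memcpy reads past the end of msg,
     * leaking whatever memory happens to follow it. */
    if (claimed_len > msg_len || claimed_len > out_cap)
        return -1;                  /* reject the malformed request */

    memcpy(out, msg, claimed_len);
    return 0;
}
```

The contract view makes the check obvious: `claimed_len` is a `size_t`,
whose value range is vastly larger than any payload the function can
legitimately handle.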

It wasn't at all a 7% defect, it was an 83% defect.


"More seriously, I would make the distinction between two kinds of
approaches: ..."

Again, slight disagreement. I don't know the exact source of the defect
injection data (i.e., what specific kinds of projects the data came from).
However, I do know (of) the person who published the data: James Martin.
Knowing (of) Mr. Martin, I'm hard pressed to say that the data came from
any safety critical projects.

Besides that, my own experience with both safety-critical and
non-safety-critical development projects doesn't show any appreciable
difference in the defect *injection* statistics. The big difference, IMHO,
is that the safety-critical projects make much better use of reviews, and
the defect *detection* statistics are heavily skewed towards defects being
found and fixed earlier.


"Thus I have a practical question: suppose I work at a company that lacks
such good practices (at least the review or safe coding ones). I'm not a
project manager, just a low-level software engineer working on a given
project. How can I help improve the development process to reach such a
level of good practice?"

Contact me directly and I'll send you a paper called "How Healthy is your
Software Process?". By instrumenting the project's defect tracking system
with very simple data, we can easily show that the majority of defects are
requirements- and design-injected, and that more than half of the
organization's capacity to do development work is being wasted on finding
and fixing mistakes that were made much earlier. Managers have a tendency
to discount what the techies are telling them, but it's hard to ignore the
data when it's staring them in the face.
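The instrumentation being described could be as simple as tagging each
defect with the phase in which it was injected and tallying the results.
The sketch below is hypothetical; the categories and helper name are
illustrative, not from the paper:

```c
/* Phase in which a defect was injected, recorded when each defect is
 * root-caused during fixing. */
enum phase { REQUIREMENTS, DESIGN, CODE, OTHER, NUM_PHASES };

/* Percentage of defects that were injected before coding started,
 * i.e. the requirements- and design-injected share. */
int pre_code_injection_pct(const enum phase *defects, int n)
{
    int early = 0;
    for (int i = 0; i < n; i++)
        if (defects[i] == REQUIREMENTS || defects[i] == DESIGN)
            early++;
    return n ? (100 * early) / n : 0;
}
```

One extra field in the defect tracker is enough to produce this number,
which is what makes the argument hard for managers to wave away.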

If anyone wants the paper, I'm happy to share it.


-- steve




-----Original Message-----
From: David MENTRE <dmentre at linux-france.org>
Date: Wednesday, April 16, 2014 6:32 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de"
<systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] OpenSSL Bug

Hello,

On 16/04/2014 13:23, Steve Tockey wrote:
> No offense inferred, but thanks for pointing it out.

The same for me. I should have kept the classical "if (a == SUCCESS)"
example. Another example: the C language does not force you to check the
return value of a function (which might flag an error case). Most other
languages I know would force you to check it (or at least issue a warning).
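For instance, a minimal sketch. The `parse_config`/`load_or_default`
functions and the config path are hypothetical; `warn_unused_result` is a
real GCC/Clang attribute, not standard C:

```c
#include <stdio.h>

/* GCC/Clang extension: warn if a caller discards this result.  C itself
 * will happily let `parse_config(path, &v);` drop the error silently;
 * the warning is strictly opt-in. */
__attribute__((warn_unused_result))
int parse_config(const char *path, int *value_out)
{
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;                       /* error case: file missing */
    int ok = (fscanf(f, "%d", value_out) == 1) ? 0 : -1;
    fclose(f);
    return ok;
}

/* A caller that actually checks the return, falling back to a default. */
int load_or_default(const char *path, int fallback)
{
    int v;
    if (parse_config(path, &v) != 0)     /* the check C never forced us to write */
        return fallback;
    return v;
}
```

In languages with sum types or checked results, ignoring the error branch
here simply would not compile.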

> Besides, echoing this and Derek M. Jones' last post, I think this
> whole discussion is focusing on relatively minor issues and completely
> missing the big picture.
[...]
> the simple fact is that 83% of
> all defects exist before a single line of code was ever written. Why
> aren't we attacking the 83%, not the 7%???

Probably because the whole discussion started from a bug (in OpenSSL) that
precisely lies in those 7%. :-)

More seriously, I would make the distinction between two kinds of
approaches:

  1. People who produce software following the good practices you
describe below, probably to satisfy DO-178C or EN 50128 standards. In such
cases, their error figures are probably in the above range (83%
requirements + design, 7% code).

  2. People who produce software *without* following proper practices,
probably like the OpenSSL developers and most companies developing
software in non-safety-critical domains. In that case, coding errors are
much more prevalent. And several people on this list think that using
better tools could help catch more of these errors. I do agree that
following good practices could have a similar effect, but the simple fact
is that they are not doing it, probably due to lack of knowledge, time, or
money, or because this is not a paid-for job. You might counter-argue that
if they are not using good practices, they won't use static analysis tools
either. You might be right. ;-)

> A
> team with good requirements, design, and review practices can write
> good, safe code in any language, including C.

I agree.

Thus I have a practical question: suppose I work at a company that lacks
such good practices (at least the review or safe coding ones). I'm not a
project manager, just a low-level software engineer working on a given
project. How can I help improve the development process to reach such a
level of good practice?

Sincerely yours,
david


_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE

#
" This e-mail and any attached documents may contain confidential or
proprietary information and may be subject to export control laws and
regulations. If you are not the intended recipient, you are notified that
any dissemination, copying of this e-mail and any attachments thereto or
use of their contents by any means whatsoever is strictly prohibited.
Unauthorized export or re-export is prohibited. If you have received this
e-mail in error, please advise the sender immediately and delete this
e-mail and all attached documents from your computer system."
#



