[SystemSafety] Component Reliability and System Safety

Olwen Morgan olwen.morgan at btinternet.com
Mon Sep 17 16:00:23 CEST 2018


On 2018-09-17 11:06, Paul Sherwood wrote:

> But software is a very big field. It seems to me that most of the software we are relying on these
> days was developed without following coding standards in general, ....

There are some valid points here as regards avoiding a one-size-fits-all 
approach.

If you are writing a compiler test suite, and especially tests 
designed to determine the nature of implementation-defined or 
unspecified characteristics (e.g. order of evaluation), you typically 
need very contrived side-effects. A good example is determining whether 
the left or right operand of an assignment statement is evaluated 
first. (Mini challenge - but not for Derek Jones, because he knows all 
these tricks - write a C program to make that check.) When one is 
writing that kind of program, a goodly few common coding rules can 
cheerfully be ignored.
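
For illustration only, here is a minimal sketch of one way to do it, 
using contrived side-effects in helper functions to record the order in 
which the operands of '=' are evaluated. The helper names are invented, 
and the answer the program prints is, of course, only what one 
particular compiler happens to do:

    #include <stdio.h>

    static char order[3];   /* records which operand was evaluated when */
    static int  pos = 0;

    static int *left_operand(int *p)
    {
        order[pos++] = 'L';  /* side-effect: note that the left side ran */
        return p;
    }

    static int right_operand(void)
    {
        order[pos++] = 'R';  /* side-effect: note that the right side ran */
        return 42;
    }

    int main(void)
    {
        int x;

        /* The order in which the two operands of '=' are evaluated is
           unspecified, so either helper may be called first. */
        *left_operand(&x) = right_operand();

        printf("%s operand of '=' was evaluated first\n",
               order[0] == 'L' ? "left" : "right");
        return 0;
    }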

Also, in my experience, the reasons for putting rules into coding 
standards are often left without entirely *succinct* justifications. 
When I was doing C and C++ coding standards for BT, about 25 years ago, 
we identified several reasons for having a coding rule for a language 
construct. Among these were that the construct concerned was known or 
believed to:

(a)    be implemented incorrectly in at least one compiler - this was 
in the days when compilers were still trying to catch up with C90,

(b)    be often used incorrectly by programmers,

(c)    be inadequate for its purported purposes, e.g. locale facilities 
were not at that time fully adequate for internationalisation,

(d)    rely on unspecified or implementation-defined values or 
behaviours and thereby be non-portable,

(e)    impair readability, e.g. through low comment density or 
inconsistent indentation (as far as I recall, BT opted for exdented 
style because most of its programmers used it),

(f)    lead to unnecessarily large or complex control flow graphs, or 
function call graphs,

(g)    make code unnecessarily hard to analyse for static analysers (the 
likes of QAC were OK but some groups were at the time using PolySpace),

(h)    make for difficulties in interoperability with separately 
developed systems,

(i)    be inefficient (e.g. deeply nested loops with operations that 
could benefit from strength reduction - see the sketch after this list),

(j)    exhibit poor characteristics of coupling or cohesion.
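
As an aside on (i), a small illustrative example of the kind of hand 
strength reduction meant there (the functions and the row-major array 
are invented purely for illustration):

    #include <stddef.h>

    /* Naive version: the index multiplication i * m is redone on
       every pass of the inner loop. */
    void clear_naive(int *a, size_t n, size_t m)
    {
        size_t i, j;
        for (i = 0; i < n; i++)
            for (j = 0; j < m; j++)
                a[i * m + j] = 0;
    }

    /* After strength reduction by hand: the multiplication is replaced
       by a running addition carried across the outer loop. */
    void clear_reduced(int *a, size_t n, size_t m)
    {
        size_t i, j, base = 0;
        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++)
                a[base + j] = 0;
            base += m;
        }
    }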

Given the diversity of software used in BT's then network management 
systems, different development groups had different sets of standard 
concessions against the department-wide coding standard that I wrote. 
There was no one-size-fits-all, if only because of the large amount of 
legacy code under maintenance, much of which needed to be migrated away 
from platforms as various technical rationalisation steps occurred.

One weakness of the work was that I had neither the time nor the 
resources to look for documented justifications for rules. Much (but by 
no means all) was done on the basis of what other coding guidelines had 
said (notably Plum-Hall and those from Koenig's book "C Traps and 
Pitfalls") and general consensus about whether a rule "sounded like a 
good idea" - hardly satisfactory, but you have to start somewhere.

Another issue was that I thought it prudent to be proactive in the rules 
that went into the standards. Given the anticipation that abstract 
interpretation tools were about to come into widespread use at BT (real 
uptake turned out to be much slower, as far as I recall), there was 
sense in including several rules solely because they helped to limit 
the prevalence of usages that the tools might find hard to deal with. 
(At the time PolySpace was somewhat notorious for telling programmers 
that it couldn't be sure about large sections of code.) Ironically, the 
theoretical justifications for those rules were actually a lot sounder 
than the consensus justifications used for other rules.
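
To give a flavour of the kind of usage such a rule might target - this 
is purely an invented illustration, not a rule from the BT standard - 
consider dispatch through a function-pointer table indexed by run-time 
data. A sound analyser has to account for every possible callee at the 
indirect call, which is one classic way of ending up with stretches of 
code that a tool can only report as "unproven":

    #include <stddef.h>

    typedef void (*handler_t)(int);

    static void on_open(int fd)  { (void)fd; /* ... */ }
    static void on_close(int fd) { (void)fd; /* ... */ }

    static handler_t handlers[2] = { on_open, on_close };

    /* Indirect call: which function runs depends on run-time data, so
       a sound analyser must account for every possible callee. */
    void dispatch(size_t code, int fd)
    {
        handlers[code % 2u](fd);
    }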

As a result of my experience at BT and of contributing to MISRA C, I 
came to the view that coding standards should make a sharp distinction 
between which constructs need to be controlled (so-called "Designated 
Constructs") and what restrictions are to be applied to them. If you do 
things this way, the document that identifies the constructs gives each 
one a unique identifier, and an application-specific coding standard 
then simply consists of a table that lists, against each designated 
construct identifier, whether the construct is permitted, deprecated or 
subject to some stated form of control.
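
A purely hypothetical excerpt of such a table (the identifiers and the 
decisions shown are invented for illustration) might look like this:

    Designated construct     Identifier   Status in this application
    ---------------------    ----------   -------------------------------
    goto statement           DC-012       Prohibited
    union types              DC-027       Permitted, subject to review
    recursion                DC-033       Deprecated; concession required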

Doing a coding standard this way is not particularly difficult and it 
makes for a much more convenient document when its use is spread across 
a diverse set of development teams.


regards,
Olwen


