[SystemSafety] Koopman replies to concerns over Toyota UA case

Steve Tockey Steve.Tockey at construx.com
Wed Jan 3 19:59:07 CET 2018


Les wrote:

“Granted we are talking forensic analysis here”

Why does it have to be forensic? If there were a rule that said, "No cyclomatic complexity > 15" (with an appropriate allowance for CASE statements), then it's a before-the-fact rather than an after-the-fact situation. The developer knows the rule is in place, and as long as it is consistently enforced it tends not to get broken any more.

I know of at least one situation where the organization had the source code to their compiler. To calculate Cyclomatic complexity inside of a compiler is a fairly easy job, a dozen lines of code at most. They added those lines, plus one more:

If( cyclomaticComplexity > 15 ) {
   FatalCompilerError( "Your code is too complex, it can't be compiled. Simplify it and try again" );
}
When the limit is enforced by the compiler, it can’t ever get out of range. Similarly, many source code configuration management systems can be connected to a static analyzer and reject check-in when structural complexity limits get violated. It is not difficult to enforce it before it could ever turn into a problem.
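The same check is easy to bolt onto a check-in hook when you can't patch the compiler. A minimal sketch in Python (names and the 15 limit are illustrative; the standard-library ast module is used, and McCabe's number is approximated as 1 + the number of decision points):

```python
import ast

# Node types that each add one decision point to McCabe's number.
_DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's cyclomatic complexity of a piece of code:
    1 + the number of decision points found in it."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, _DECISIONS):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'and'/'or' short-circuit: one extra branch per extra operand.
            decisions += len(node.values) - 1
    return 1 + decisions

def reject_if_too_complex(source: str, limit: int = 15) -> None:
    """Hook-style gate: refuse the check-in, mirroring FatalCompilerError."""
    cc = cyclomatic_complexity(source)
    if cc > limit:
        raise SystemExit(
            f"Your code is too complex (McCabe {cc} > {limit}). "
            "Simplify it and try again.")
```

Wired into a pre-commit or server-side hook, this gives the same property as the compiler version: the limit can never silently get out of range.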


“No model based development. Code hacked from a blank page with no consideration of highly effective modelling techniques - the ones that support simplicity and cut LOC counts. The classic example is control systems that do not make extensive use of state engines.”

Someone should write a book about that, huh? (Hahaha)


“My point is that the critical review is the design review not the code review.”

I’m going to go with Paul E. Bennett on this one. The critical review is the requirements (model) review.


— steve



From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Les Chambers <les at chambers.com.au>
Date: Monday, January 1, 2018 at 5:30 PM
To: 'Matthew Squair' <mattsquair at gmail.com>, 'Derek M Jones' <derek at knosof.co.uk>
Cc: 'Bielefield Safety List' <systemsafety at techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Koopman replies to concerns over Toyota UA case

Granted we are talking forensic analysis here - but I can't help myself. A McCabe number is a lag indicator. I know Phil, detecting a big one gives the forensic analyst a warm feeling (much like peeing in your pants) "Arr huh. I've found the problem!". But from a practical point of view it's too late. Projects never have the money to rewrite the offending code. Better to look at the upstream causes of excessive complexity and to develop lead indicators for them. A few examples are:

1.       No architecture. Code bodies that grow like Topsy and are hacked over the years, by itinerant coders with widely varying skill levels, into an unintelligible spaghetti mess. This is easy to spot: find the architect and ask him to brief you on his architectural approach. What architectural design pattern is he using? What framework is he using ... ?

2.       Bad architectures that do not properly partition the solution into manageable code chunks, and that do not lay down the inviolable rules (for example: thou shalt not use globals). I once asked a so-called architect to brief me on his partitioning strategy. He had no idea what I was talking about.

3.       Good shelfware architectures. Beautiful system architecture specifications that become shelfware: admired from afar but never implemented, especially under deadline death-march conditions.

4.       No model based development. Code hacked from a blank page with no consideration of highly effective modelling techniques - the ones that support simplicity and cut LOC counts. The classic example is control systems that do not make extensive use of state engines.
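A table-driven state engine is one way those modelling techniques cut branching and LOC compared with ad-hoc flag-and-if code. A minimal sketch (the states, events, and transition table are hypothetical examples, not from any system discussed here):

```python
# States and events of a toy controller, expressed as a transition table
# rather than nested conditionals. The table *is* the model: adding a
# transition is one line, and the complexity of step() stays constant.
IDLE, RUNNING, FAULT = "IDLE", "RUNNING", "FAULT"

TRANSITIONS = {
    (IDLE,    "start"): RUNNING,
    (RUNNING, "stop"):  IDLE,
    (RUNNING, "error"): FAULT,
    (FAULT,   "reset"): IDLE,
}

def step(state: str, event: str) -> str:
    """Perform one transition; illegal (state, event) pairs are rejected
    explicitly rather than silently ignored."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")
```

The same table can be reviewed against the state-machine model directly, which is the point of doing the modelling in the first place.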

My point is that the critical review is the design review not the code review. The code review is the too-late-review especially if it's done close to delivery, or in this case over a dead body.

Les
PS: I'm not against McCabe. It's a good tool to have on code check-in. If your number is in Steve Tockey's red zone, just be ready with a good story. It keeps coders on their toes.

From: systemsafety [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of Matthew Squair
Sent: Monday, January 1, 2018 4:47 AM
To: Derek M Jones
Cc: Bielefield Safety List
Subject: Re: [SystemSafety] Koopman replies to concerns over Toyota UA case


As I see it, the problem/opportunity to 'game' McCabe's metric is similar to how you can game risk assessments by picking a lower level in the system (say, the subsystem level) at which to evaluate risk. Because each subsystem contributes only a part of the total, their individual risks will be less (yes, I have seen folk do this). The antidote in that case is to consider the total risk rather than the individual subsystem risks alone.
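Toy numbers make the point concrete: two subsystems can each pass an acceptance threshold on their own while the system as a whole fails it. A sketch (the probabilities and threshold are invented for illustration; independence of failures is assumed):

```python
def system_risk(subsystem_risks):
    """Probability of at least one hazardous failure across independent
    subsystems: 1 minus the probability that none of them fails."""
    p_none = 1.0
    for p in subsystem_risks:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Each subsystem individually sits under a 0.05 threshold...
risks = [0.04, 0.04]
# ...but the total, 1 - 0.96 * 0.96 = 0.0784, does not.
```

Assessing only at the subsystem level hides exactly this aggregation, which is the gaming opportunity described above.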

In the case of McCabe, by salami-slicing the function up (part/whole-style hierarchy) as Derek proposed, we're pushing the complexity up into the next higher system level where, as McCabe doesn't have a concept of hierarchy, it somewhat conveniently 'disappears'. Again, the antidote is to recognize, through a simple budgeting exercise, that the complexity hasn't gone anywhere. In line with other budgeting practices I'd (aspirationally) look for who owns that higher level of system design and what the practices were for proposing and accepting such transfers.
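The budgeting exercise can be sketched as summing McCabe numbers over the call hierarchy, so slicing a big function into helpers cannot make its complexity 'disappear'. The per-function numbers and call graph below are hypothetical inputs, e.g. as produced by a static analyzer:

```python
def total_complexity(func, mccabe, calls, _seen=None):
    """Budgeted complexity of `func`: its own McCabe number plus that of
    everything it calls (transitively). `mccabe` maps function name to
    its McCabe number; `calls` maps function name to its callees."""
    seen = set() if _seen is None else _seen
    if func in seen:          # guard against recursion/cycles
        return 0
    seen.add(func)
    return mccabe[func] + sum(
        total_complexity(callee, mccabe, calls, seen)
        for callee in calls.get(func, ()))

# Salami-slicing a McCabe-18 function into a dispatcher plus two helpers
# leaves each piece under a 15 limit, but the budget is still 18:
mccabe = {"control": 3, "helper_a": 8, "helper_b": 7}
calls = {"control": ["helper_a", "helper_b"]}
```

Whoever owns the next level up then has a number against which to accept or reject such transfers.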

Touching on complexity theory, as I see it McCabe is more a measure of local (flat) complicatedness (which still poses difficulties) than of 'true' complexity (i.e. hierarchy, abstraction, emergence). To make it more a measure of complexity it should have, or be augmented by other measures to have, an ability to deal with hierarchy at least.

The other dimension is to consider the system from a means-end rather than a part-whole hierarchy. 'Generally' people don't write complicated code for the hell of it, so if the 'means' has a high metric, chances are that if we look back up at the 'end' required, it's going to be the driver, and there may be an opportunity to refactor the requirements.

Apologies for the scattergun of the above, on a short layover in NZ.

Regards,

On 1 January 2018 at 2:32:54 am, Derek M Jones (derek at knosof.co.uk) wrote:
Steve,


Are you saying that there should be NO constraints whatsoever on the code
a developer writes? Are you willing to accept the following because I have

I'm saying that people should stop demonizing an easily gamed metric.
It increases the pressure on people at the sharp end to commit
account fraud.

--
Derek M. Jones Software analysis
tel: +44 (0)1252 520667 blog: shape-of-code.coding-guidelines.com
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE

