[SystemSafety] Another unbelievable failure (file system overflow)

Matthew Squair mattsquair at gmail.com
Sat May 30 06:34:44 CEST 2015


Les,

I'm not sure that being a manager, or an engineer, is something that
carries distinct genetic markers. Your early mentoring scheme may run into
some practical difficulties here.. :))

A personal belief of mine is that the sort of 'engineer' I turned out to be
was very much dictated by the first couple of chief engineers I worked for.
In that respect I was very lucky (I think).

Matthew Squair

MIEAust, CPEng
Mob: +61 488770655
Email: Mattsquair at gmail.com
Web: http://criticaluncertainties.com

On 30 May 2015, at 1:24 pm, Les Chambers <les at chambers.com.au> wrote:

  Steve

Clap, clap, clap, clap. At last, a serious metric, guaranteed to make a
difference because it uses story patterns, the only facility guaranteed to
change attitudes. George should go underground and embrace the onion
router. He is clearly a dangerous radical.

However, Dilbert aside, it behoves us to dig deeper and look at causal
factors. Somewhere further back in this stream the point was made that the
good programmer/bad manager metaphor gets trotted out too often. This is
very true, I've been guilty of it myself, having socialist leanings and
being in the presence of far too many disgustingly poor management
decisions in my 40 year career. But. We should ask, "How does a programmer
or a manager become BAD?" I put it to the list that this is the exact same
question as, "How does a person become a criminal?"

Most serial killers are the product of child abuse. Indeed most criminals
have had damaged childhoods. Incompetent child rearing or no child rearing
- not brought up, just kicked and told to get up. No role models or the
wrong role models: Street gangs, drug dealers, thieves and murderers. Bill
Clinton addressed this once:

"People who grew up in difficult circumstances and yet are successful have
one thing in common: at a crucial juncture in their adolescence, they had a
positive relationship with a caring adult." (More at:
http://www.chambers.com.au/public_resources/mentoring_wp.pdf)



The FBI specialists who hunt down serial killers have a saying, "The best
indicator of future behaviour is past behaviour."



So, any way you want to look at this problem, the only way to break the
endless cycle of "glitches" is: better child rearing. Anyone responsible
for the rearing of a software developer or his or her manager should
reflect on this.



Cheers

Les



PS: This "... has become clear" (at least to me), "later on."



*From:* systemsafety-bounces at lists.techfak.uni-bielefeld.de [
mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de
<systemsafety-bounces at lists.techfak.uni-bielefeld.de>] *On Behalf Of *Steve
Tockey
*Sent:* Saturday, May 30, 2015 4:34 AM
*To:* Robert Schaefer at 300
*Cc:* systemsafety at lists.techfak.uni-bielefeld.de
*Subject:* Re: [SystemSafety] Another unbelievable failure (file system
overflow)





From what I remember about Scott Adams, at least in the early days he used
a "three company rule". The majority of his comics come from ideas
submitted by readers. His rule was that he had to see the same basic idea
come from at least three different companies before he had confidence the
problem was widespread enough to be understood/funny for a majority of
readers. I don't know if he follows the same rule now, but it would make
sense.



I agree, a socio-economic study of the insights of Dilbert would be fascinating.



And, by the way, if anyone remembers the Software Engineering institute's
"Capability Maturity Model" (CMM), here's a proposed update:



----- cut here -----



API Austin - First there were software metrics.  With these, software
developers
and their management could finally measure something for the output of the
software creation process.  In the 80's these techniques flourished.  Funny
names for these measurements emerged, like "McCabe complexity" and
"software volume".



Soon it was realized that there needed to be a way not only to measure the
quality of the software output, but also to measure the quality of the
engineering organization itself.  The Capability Maturity Model, CMM, was
developed in the early 90's.  Organizations are audited by professionals
and rated on a scale of 1 to 5.  Low scores mean the software production
process is chaotic, while 5 means that all aspects of software development
are fully understood and carefully applied.  Most organizations today weigh
in at a meager 1, and there's a surprising number of 0's out there.



Now, a revolutionary new measurement technique has been developed by a small
startup consulting firm in Austin, Texas.  The new system is simply known
as DCF.  The simplicity and elegance of the new measuring system belies its
power in accurately judging the soundness of a software organization.



The inventor of DCF and founder of the DiCoFact Foundation, George Kritsonis,
says the new measurement system is "simple and fool-proof, but
modifications are being made to make it management-proof as well".



One Sunday morning George was performing his normal ritual of reading the
most important parts of the newspaper first, when he came across his
favorite comic strip, "Dilbert" by Scott Adams.  George and his
work colleagues loved this comic strip and were amazed by how many of
the silly storylines reminded them of actual incidents at their company.

They even suspected that Scott Adams was working there in disguise, or at
least that there was a spy in the company feeding Scott daily material.
That morning George hit upon an idea that promised to make him millions:
The Dilbert Correlation Factor (DCF).



George's idea was simple:  "Take 100 random Dilbert comic strips and present
them in a survey to all your engineering personnel.  Include both engineers
and management.  Each person reads the strips, and puts a check mark on
each strip that reminds him of how his company operates.  Collect all
surveys and count the check marks.  This gives you your Dilbert
Correlation Factor, which can of course range from 0% to 100%.  Average
out the engineers' scores.  Throw out the managers' surveys; we just have
them do the survey to make them feel important; however, if many of them
scowl during the survey, add up to 5 points to the DCF (in technical terms,
this is your Management Dissing Fudge Factor, MDFF).  Make sure to also
throw out surveys of engineers that laugh uncontrollably during the whole
survey (remember their names for subsequent counseling).  And that's all
there is to it!  Oh yeah, then walk around the building and count Dilbert
cartoons on the walls.  Don't forget coffee bars, bulletin boards, office
doors and of course, bathrooms".  Add up to 10 points for this
Dilbert Density Coefficient Adjustment (DDCA).
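George's recipe above is, at heart, simple arithmetic. Purely as an illustrative sketch (the function and parameter names are my own invention, not part of George's patent application):

```python
def dilbert_correlation_factor(engineer_checks, num_strips=100,
                               scowling_managers=0, managers_surveyed=1,
                               dilbert_cartoons_on_walls=0):
    """Compute a DCF per George's recipe (all names here are illustrative).

    engineer_checks: list of check-mark counts, one per engineer's survey
    (surveys from engineers who laughed uncontrollably already removed).
    """
    # Average the engineers' scores as a percentage of strips checked.
    dcf = 100.0 * sum(engineer_checks) / (len(engineer_checks) * num_strips)

    # Management Dissing Fudge Factor: up to 5 points if many managers scowl.
    mdff = 5.0 * min(1.0, scowling_managers / managers_surveyed)

    # Dilbert Density Coefficient Adjustment: up to 10 points for cartoons
    # on walls, coffee bars, bulletin boards, office doors and bathrooms.
    ddca = min(10.0, dilbert_cartoons_on_walls)

    # Cap at 100% so the scale stays interpretable.
    return min(100.0, dcf + mdff + ddca)
```

For example, three engineers checking 60, 70 and 80 strips, with three of four managers scowling, yields a DCF of 73.75 — squarely in the "most typical" band described below.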



Interpreting the results is simple.  Let's look at some ranges:



0% - 25%:  You probably have a quality software organization.  However, you
guys need to lighten up!  Maybe a few surprise random layoffs, or perhaps
initiating a Quality Improvement Program, will do the trick to boost your
company's DCF to a healthier level.



26% - 50%:  This is also a sign of a good software organization, and is
nearly ideal.  You still manage to get a quality product out, and yet you
still have some of the fun that only Dilbert lovers can identify with...
Mandatory membership in social committees, endless e-mail debates about the
right acronyms to use for the company products, and of course detailed
weekly status reports where everyone lists "did status report" under
accomplishments.



51% - 75%:  This is the most typical DCF level for software houses today.
Your software products are often in jeopardy due to the Dilbert-like
environment they are produced in. You have a nice healthy dose of routine
mismanagement, senseless endless meetings with no conclusions,
miscommunications at all levels of the organization, and arbitrary
commitments made to customers which send engineers into cataplexy.



76% - 100%:  The best advice for this organization is this:  Get the hell
out of the software business.  Hire the best cartoonist you can afford,
have him join your project teams and document what he sees in comic
strips...  get 'em syndicated and you'll make a fortune!



George has applied for a patent on his unique DCF system.  He is anxious to
become a high-priced consultant, going to lots of companies, doing his
survey, getting the fee, and getting out before management realizes they've
been ripped off and have to hire another high-priced consultant to come in
and set things right.  George reports, "I'm thinking about a do-it-yourself
version for the future, too.  I'd put Dilbert cartoons on little cards so
they can be passed out to the engineers for the survey...  I'll probably
call it 'Deal-a-Dilbert'. I'm also thinking about a simple measurement
system that lets employees find out their personality type and where they
best fit into the organization.  I call this the 'Dilbert/Dogbert Empathy
Factor' or 'DDEF' for short."



----- end cut here -----





Cheers,



-- steve









*From: *Robert Schaefer at 300 <schaefer_robert at dwc.edu>
*Date: *Friday, May 29, 2015 5:11 AM
*Cc: *"systemsafety at lists.techfak.uni-bielefeld.de" <
systemsafety at lists.techfak.uni-bielefeld.de>
*Subject: *Re: [SystemSafety] Another unbelievable failure (file system
overflow)



I would claim that this is not always prospect theory; sometimes it is
dysfunction due to greed.



By deliberately not testing you can:

 1. get the customer to become your beta tester, i.e. work for you for free

 2. directly or indirectly get the customer to pay you again for fixing
your own mistakes

 3. leave no evidence of criminal negligence (when you are indeed
criminally negligent: if you did detect safety issues during testing,
those issues would be recorded in the testing documentation).



I would like to see, someday, a serious socio-economic study of the
insights of the Dilbert comic (dilbert.com).

I have read in interviews with the cartoonist (Scott Adams) that people
email him what they've experienced, and he just draws it up.  One might
claim that what he does is all made up, but I have my doubts given what
I've experienced as a programmer in several large corporations over the
past decades.


 ------------------------------

*From:* systemsafety-bounces at lists.techfak.uni-bielefeld.de <
systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Matthew
Squair <mattsquair at gmail.com>
*Sent:* Friday, May 29, 2015 2:13 AM
*To:* Heath Raftery
*Cc:* systemsafety at lists.techfak.uni-bielefeld.de
*Subject:* Re: [SystemSafety] Another unbelievable failure (file system
overflow)



An example of prospect theory?

Matthew Squair



MIEAust, CPEng

Mob: +61 488770655

Email: Mattsquair at gmail.com

Web: http://criticaluncertainties.com


On 29 May 2015, at 7:43 am, Heath Raftery <heath.raftery at restech.net.au>
wrote:

 On 28/05/2015 11:50 PM, Chris Hills wrote:

 Static analysis isn't free. Testing isn't free.

Who determines the need for, or business case for, static analysis and test?

 [CAH] normally (every report I have seen) static analysis saves a lot of

 time and money.

 The same is true of structured testing.


Funnily enough, the only experience I've had recommending static analysis
is as the programmer to the manager. This is indeed the argument I use. A
strange thing happens in business though (and perhaps my lack of
comprehension explains why I'm the programmer and not the manager ;-) ) -
capital costs and investment are worse than running costs. Buying and
applying static analysis, even if it is cheaper in the long run, is always
seen as less attractive than paying labour to deal with the consequences
later.

Heath
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
