[SystemSafety] power plant user interfaces

Les Chambers les at chambers.com.au
Tue Jul 21 02:43:34 CEST 2015


Steve

You have a point; however, the terms metaphor and heuristic have some
properties and functions in common. They are both simplifying tools, and they
both help people understand and deal with the unfamiliar. I think the
definition of terms is less important than the utility of these concepts in
helping people design better systems.

 

Metaphors are strong in the area of creativity enhancement. The rich imagery
of a good metaphor gathers what you know about one domain and helps you
apply it to another. 

 

Examples are:

- life is a road

- venture capitalists are sharks

- paintbrushes are pumps

- "the group W bench" a place of rejection for the suspect and unwonted of
society

- Skills are like ropes, woven from many strands and braided together. This
stimulates discussion of the elements of a skill and how they can be combined
to perform some useful function.

- The concept of software as an engine stimulates thoughts of wear and the
cost of maintenance.

- The metaphorical view of this list could also be instructive. Is it a war
or a journey? Are the posts weapons used to flame each other, or gifts of
knowledge to fellow travellers?

 

A high-quality metaphor originates from a well-understood concept rich in
imagery and potential associations. Surprises are good: the more unrelated the
source concept is to your domain of discourse, the more effective the metaphor
can be. And the more primal the metaphor, the longer it will live. There are
forces out there that disrupt interpretation (I suspect you would have to be
over 50 to make any sense of "the Group W bench"). Cultural norms change and
metaphors can be hijacked.

 

As a case study on applying metaphor in systems engineering we can analyse
your response to my negative example:

Steve said:

".. my understanding is that design criteria for private planes (and, thus,
ultralights?) is that they have positive stability. Just let go of the
controls and the airplane is designed to return to straight-and-level. Of
course this doesn't help if there's something to run into, but it does solve
the loss of orientation problem. Just trust physics to point you back at
level and hope to run out of cloud before you run out of airspace (better
yet, stay the h--- away from clouds unless you're instrument rated and in an
IFR equipped plane)."

 

Les responds:

The "death-in-90-seconds" comment came from a man who flies these aircraft:
http://jabiru.net.au/

I therefore accept it as a legitimate hazard of ultralight flight.

Your suggestion to "just let go of the controls" seems rational.
Unfortunately it channels the metaphor of
pilot-as-rational-preprogrammed-robot, which, when tested in real life, is
commonly proven bad, bad, bad. In that situation most pilots panic. This is
why they die. A good example is the Air France flight AF447 copilot who
continued to stall the plane even though stall alarms, voices and instruments
were advising him to the contrary.

A metaphor of pilot-as-kangaroo-in-the-headlights would probably be more
apt. In moments of high stress, they stand there dazzled, incapable of
rational thought. Then you hit them.

Systems that are designed around bad metaphors, or without considering the
implied metaphors inadvertently created by designers with little experience
of the application domain, are a hazard.

 

This is why I believe that metaphor design should be a subject in all
systems engineering courses. Not just an elective but a core course
component.

Resources exist:

- http://aeon.co/magazine/culture/how-to-design-a-metaphor/

- Metaphors We Live By (1980), George Lakoff and Mark Johnson

 

Lastly I'd like to share a compelling personal experience that demonstrates
the power of a good metaphor:

I once complained to a client that I saw no value in the laptop touchpad as
a replacement for a mouse. I just couldn't make it work as effectively. Then
he uttered three words from another domain that completely changed my
attitude and improved the utility of this HMI then and for evermore. 

"It's a clitoris," he said.

 

Cheers

Les

 

From: Steve Tockey [mailto:Steve.Tockey at construx.com] 
Sent: Monday, July 20, 2015 8:23 AM
To: Les Chambers
Cc: Matthew Squair; systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] power plant user interfaces

 

 

Les,

Seems you and I have different definitions of "metaphor". Again, "metaphor"
means using something people are already familiar with to help them
understand something they aren't familiar with. The network switch to
railroad switching yard was my earlier example. Metaphor always involves
"the A that you don't know is like the B you do know". IMHO, the helmsman's
trick would qualify as a "heuristic", not a metaphor. The spreaders aren't
"like" west, it's just a convenient way to help solve a problem (which is
what a heuristic does).

 

One of my hobbies is home brewing beer. The temperature of the water when
it's mixed with the malted barley is fairly critical, 175F to 180F. When
water is being heated to boiling, the surface gets as smooth as glass when
it's in that range. That's how people made beer before thermometers were
invented: heat the mash water to the smooth-as-glass point and then add the
malted barley. "Smooth as glass" is a metaphor for understanding the
appearance of the water, however it's use in brewing is as a heuristic
because it indicates when to mix in the grains.Said another way,
"(Heuristic: add the malted barley to the mash water when the surface of
that water is (metaphor: as smooth as glass))"
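
If you wanted to write the brewing rule down in code, the nesting looks
something like this (a toy sketch with hypothetical names, not something any
brewer actually needs):

# Toy sketch: the metaphor names an observation, the heuristic turns that
# observation into an action.
def surface_smooth_as_glass(temp_f: float) -> bool:
    # Metaphor: "smooth as glass" describes how the mash water looks in
    # roughly the 175F-180F band, before a rolling boil breaks the surface.
    return 175.0 <= temp_f <= 180.0

def time_to_add_malt(temp_f: float) -> bool:
    # Heuristic: add the malted barley when the surface is smooth as glass.
    return surface_smooth_as_glass(temp_f)

for t in (150.0, 178.0, 205.0):
    print(t, "add the malt" if time_to_add_malt(t) else "wait")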

 

On your negative example, my understanding is that design criteria for
private planes (and, thus, ultralights?) is that they have positive
stability. Just let go of the controls and the airplane is designed to
return to straight-and-level. Of course this doesn't help if there's
something to run into, but it does solve the loss of orientation problem.
Just trust physics to point you back at level and hope to run out of cloud
before you run out of airspace (better yet, stay the h--- away from clouds
unless you're instrument rated and in an IFR equipped plane).

 

 

Cheers,

 

-- steve

 

 

 

 

From: Les Chambers <les at chambers.com.au>
Date: Thursday, July 16, 2015 5:59 PM
To: Steve Tockey <Steve.Tockey at construx.com>
Cc: Matthew Squair <mattsquair at gmail.com>,
"systemsafety at lists.techfak.uni-bielefeld.de"
<systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] power plant user interfaces

 

Thanks for this, Steve.

An excellent set of pointers to good HMI design.

I hold to my initial proposition though: that the metaphor is at the root
of all HMI design. It is not something off to one side. All your points are
either an attribute of a good metaphor or an analysis process that yields a
good one.

The metaphor is at the nexus of human cognition and the real world. Its
quality is the core objective in HMI design.

Case study: 

1. The negative example: most ultralight aircraft are not fitted with
instruments. Any UL pilot will tell you that if you fly into cloud you have
around 90 seconds to live.

2. The positive example: all helmsmen will tell you that the most potent
metaphor for due west in the northern hemisphere is the position of the
constellation of Orion relative to the mast in the hours before dawn. Put
the three stars of Orion's belt underneath the spreaders and you are dead on
course 270. It's primal, even beautiful, and real-time, unlike the compass
or digital readouts, which lag.





When we find a good metaphor we've invested substantial intellectual
effort in discovering the truth. And the truth is what an operator needs,
especially in emergency situations when there is no time to think.

 

Cheers

Les

Les Chambers 

Director

Chambers & Associates Pty Ltd

www.chambers.com.au

0412 648992


On 17/07/2015, at 5:43 AM, Steve Tockey <Steve.Tockey at construx.com> wrote:

 

Les,

Yes, I'm here. Just too buried in travel and client work to spend the time
responding to this one.

 

"So my point is: the key to a good HMI is excellent metaphor design. The FAA
standard lists all the HMI Lego blocks in stupefying detail but there is no
guidance on how to assemble them into a compelling metaphor. Where is the
standard for that?"

 

I'll agree that excellent metaphor is A key, but it's not THE key. Other
things are entirely relevant. Here's a good quote to start things off:

 

"The most powerful interaction design tool used by the authors is simple on
the surface: a precise descriptive model of the user, what he wishes to
accomplish, and why."

- Alan Cooper & Robert Reimann (from About Face 2.0: The Essentials of
Interaction Design. New York: Wiley, 2003. ISBN 0764526413)

 

I'm a huge fan of doing "Task Analysis". Task analysis examines both the
work to be done and the work environment to better understand the context of
the system and its requirements, particularly user interface requirements.
There's a description of Task Analysis in section 7.6 of "Human Factors
Methods for Design: Making Systems Human-Centered" by Christopher P. Nemeth.

 

 

In addition, here are eight useful principles from Ben Shneiderman:

 

Strive for consistency

Enable frequent users to use shortcuts

Offer informative feedback

Design dialogs (read: interactions) to yield closure

Offer error prevention and simple error handling

Permit easy reversal of actions

Support internal locus of control

Reduce short-term memory load

 

 

Here are the first 10 of 30 principles from Paul Heckel (The Elements of
Friendly Software Design: The New Edition. San Francisco, CA: SYBEX, 1991.
ISBN 0895887681):

 

Know your subject

Know your audience

Maintain the user's interest

Communicate visually

Leverage the user's knowledge

Speak the user's language

Use metaphors

Focus the user's attention

Anticipate user's perceptual problems

Communicate only if you can

 

 

So my point is that while a good metaphor is very important, there's a lot
of other stuff that's also very important. Starting with a good task
analysis, and using the kinds of design principles here (and from Norman's
Design of Everyday Things), one can then assemble the Lego Blocks in the FAA
standard to build an effective interface. Well, or at least have a better
chance of not doing something stupid.

 

 

Cheers,

 

-- steve

 

 

 

 

 

From: <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Les
Chambers <les at chambers.com.au>
Date: Tuesday, July 14, 2015 6:12 PM
To: 'Matthew Squair' <mattsquair at gmail.com>
Cc: "systemsafety at lists.techfak.uni-bielefeld.de"
<systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] power plant user interfaces

 

Matthew

Just so. I'm in furious agreement.

Not long ago I spent 40 hours on the helm of a yacht crossing the Atlantic.
The yacht had no autopilot. It occurred to me one night, "My God, I am
become an automaton", which led me to the following thought process that may
shed light on the fundamental quality factors in HMI design.

 

Picture life as a control system. Your job is to ride herd on some equipment
against the constraints that the humans have given you. All you can see are
a few measured variables. It's like looking at life through a straw. All you
can do is pull some levers that the humans have given you. If the
behavioural model of the equipment under your control matches well enough
with real life, the script the humans gave you (read: control algorithm) has
a chance of working. There are a few things in your favour. You are
attentive 24/7 and you never fall asleep. You can also respond to disturbances
in milliseconds. You're capable of processing large amounts of data and
taking many control actions almost in parallel. That is of course if your
model is a good reflection of real life. The model can be as complex as you
like, just as long as you can run it in real time.
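
For the programmers on the list, the shape of that loop in code (a sketch
with hypothetical names, nothing more):

import time

# Sketch of the controller's life: read a few measured variables, run the
# script the humans gave you against your internal model, pull the few
# levers you have. Repeat forever, never sleeping.
def control_loop(read_sensors, model, control_algorithm, actuate):
    while True:
        measured = read_sensors()          # the straw you look at life through
        state = model.update(measured)     # your behavioural model of the plant
        levers = control_algorithm(state)  # the script the humans gave you
        actuate(levers)                    # the few levers you can pull
        time.sleep(0.01)                   # milliseconds, 24/7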

 

Your other job is to pass on a simplified picture of what's going on to the
humans. And you do this through a user interface. The humans are stupid and
slow, prone to sleepiness with an unhealthy penchant for sex, drugs and rock
'n' roll. They're also carrying a lot of distracting baggage: mortgages,
family dramas, gambling habits. They don't have your eye for detail and may
panic and run when things go pear shaped. So if you can possibly help it,
don't bother them with drama unless it is absolutely necessary. Even if you
are not feeling well, heal yourself and tell the maintenance guys later.
They tend to be the more rational of humans. 

But take pity on the operators because they are looking at life through a
straw also, the one you gave them in the HMI. So what you give them had
better be rich in simple metaphor, explaining a lot with a little. A
metaphor is no good if you've got to explain it to someone. They need to
glance at it and say, "Oh yeah, I got it, life's like that." This is why
drag and drop has been so successful. This is why the concepts of "reactor
step" and "sequence control unit" were so successful in chemical reactor
operations. They were a simplification of the concepts of state and state
engine: Mealy models, Moore models, Harel statecharts and the like. It was
all the operators needed to know through their narrow straw, the one we gave
them in the HMI.
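
A sketch of what that simplification looks like in code (hypothetical step
names, not from any real plant): the sequence logic underneath can be as
elaborate as you like, but the only thing that ever reaches the HMI is the
step number and its name.

from enum import IntEnum

# Toy sequence control unit: a state engine whose only public face is a step
# number. The operator never sees the transition machinery.
class ReactorStep(IntEnum):
    IDLE = 0
    CHARGE = 1
    HEAT = 2
    REACT = 3
    COOL = 4
    DISCHARGE = 5

class SequenceControlUnit:
    def __init__(self):
        self.step = ReactorStep.IDLE

    def advance(self, interlocks_ok: bool):
        # The full Mealy/Moore/statechart machinery would live here;
        # the HMI only ever needs self.step.
        if interlocks_ok and self.step < ReactorStep.DISCHARGE:
            self.step = ReactorStep(self.step + 1)

    def hmi_view(self) -> str:
        return f"Step {int(self.step)}: {self.step.name}"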

 

So my point is: the key to a good HMI is excellent metaphor design. The FAA
standard lists all the HMI Lego blocks in stupefying detail but there is no
guidance on how to assemble them into a compelling metaphor. Where is the
standard for that? 

 

Steve. Hallo. Are you there?

 

Les

 

From: Matthew Squair [mailto:mattsquair at gmail.com] 
Sent: Tuesday, July 14, 2015 9:52 PM
To: Les Chambers
Cc: Gergely Buday; Gareth Lock; systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] power plant user interfaces

 

Actually an HMI is a little more than 'just a window'. I think you're looking
in the wrong direction.

 

Complex HMIs actively mediate the interaction between the system under
control and the operator. The more complex that mediation, the more the
operator must herself maintain a model of the interface, not just of the
system under control.

 

So you have to make not just the system under control understandable to the
operator, so that they can do their job, but also the interface itself. That
requires a very good practical understanding of how people think, perceive,
etc., a bit more than the 'ilities'; some call it cognitive engineering.

Matthew Squair

 

MIEAust, CPEng

Mob: +61 488770655

Email: Mattsquair at gmail.com

Web: http://criticaluncertainties.com


On 14 Jul 2015, at 8:55 am, Les Chambers <les at chambers.com.au> wrote:

Can we get back to first principles here? A human machine interface (HMI) is
just a window into a process that allows human beings to observe what's
going on, understand what's going on and manipulate what's going on (when
human intervention is required) such that the target system succeeds in its
mission (in our case, without killing anyone).

A 'good' HMI therefore supports: observe-ability, understand-ability and
control-ability.
If you like, the devil is in the 'ilities'.

After 10 years working with chemical processing reactors of all levels of
complexity, sizes, shapes and chemical processing technologies, followed by
a further few years working in wide area control systems in the rail
industry, interspersed with a year working on development standards for
computers in the shutdown loop of nuclear reactors, I have concluded the
following:

Observe-ability
A groovy HMI (with all the right contrast ratios and menu hierarchies) is
useless if you don't have the instrumentation to observe what's going on in
the process. Case study: QF32 would not have had an engine explosion if
Rolls-Royce had done a mass balance around the lubricating oil flow in their
jet engines. A mass balance would have revealed an oil leak that ultimately
caused the explosion and the near-death experience of over 450 people.
Further, these days, just the presence of sensors is not enough. You need
the computing power to calculate secondary variables such as mass balance
and rates of change. It can get even more complicated in chemical processing
when the output of chemical analysis equipment requires significant
processing to come up with numbers that mean something to human beings. Some
of these numbers need to be calculated at high rates depending on the time
constants of the target process.

A simple example: in a latex reactor control system I once worked on, the
most important number in the plant was the rate of change of reactor
temperature. It was a lead indicator of trouble, maybe 6 to 8 hours in the
future. My point is that the HMI could display this number in the most
primitive and clunky way and it would still be a potent tool in the man
machine interface. The fact that it existed was what mattered, not the way
it was displayed.
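
To be concrete, the sort of calculated secondary variable I mean is almost
embarrassingly simple once the sensor and the computing power exist; a sketch
with made-up names and numbers:

from collections import deque

# Sketch of a calculated secondary variable: rate of change of reactor
# temperature over a sliding window. The value matters far more than how
# prettily it is displayed.
class RateOfChange:
    def __init__(self, window_s: float = 600.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp_s, temperature)

    def update(self, t_s: float, temp: float) -> float:
        self.samples.append((t_s, temp))
        while t_s - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        (t0, v0), (t1, v1) = self.samples[0], self.samples[-1]
        return 0.0 if t1 == t0 else (v1 - v0) / (t1 - t0)  # degrees per second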

Understand-ability
In response to those who might say, "Aw shucks, these systems are highly
complex these days and operators can easily get confused. Geez, look at what
happened at Chernobyl and Three Mile Island" ... I say, "squeeze out the
tears, you sorry bugger."

Professional control systems engineers have known for years that highly
complex systems can be simplified using the right metaphors in design.
Cooperating state engines are one good example that is pretty well
universally applied ... although some industries do go through dark ages
where this is forgotten. For example, in one project I actually had to fight
to apply this model uniformly across a smoke extraction system. In the end
I won by pulling rank and (metaphorically) executing anyone who disagreed
with me.

My point is that at the root ball of understand-ability is the ability to
understand system state. And you cannot achieve this without a well
thought-out design using a state model.
Returning to the latex reactor: in common with every other chemical
processing plant I ever worked on, apart from some critical raw or
calculated process variables, the next most important set of numbers was the
state of each unit operation in the process (we called it the step number, a
concept easily understood by anybody). If the controls for these unit
operations were implemented with state engines this became a simple matter.
Some operations (the ones that were potentially explosive) were more
critical than others. So you could walk into a control room, look at a
couple of numbers, and very quickly get the complete picture of where the
plant was up to, or whether it was, in fact, in a dangerous state. It was
observe-ability heaven!

Once again, the display of these numbers could be as clunky as you like; the
fact that they existed was the important thing. And they would not have
existed without a design totally focused on understand-ability through
human-friendly metaphors.
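
A toy sketch of that walk-into-the-control-room view (made-up units, steps
and thresholds): clunky as you like, but the whole plant state in a handful
of numbers.

# Toy control-room summary: the step number of every unit operation, with
# the potentially explosive ones flagged.
UNIT_STEPS = {"Reactor A": 3, "Reactor B": 0, "Blowdown": 1}  # made-up data
CRITICAL_STEPS = {"Reactor A": {3, 4}}  # steps that deserve a second look

def plant_summary(unit_steps, critical_steps):
    for unit, step in unit_steps.items():
        flag = "  <-- WATCH THIS" if step in critical_steps.get(unit, set()) else ""
        print(f"{unit}: step {step}{flag}")

plant_summary(UNIT_STEPS, CRITICAL_STEPS)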

Control-ability
Once again, a groovy HMI is useless unless you have the final control
elements to actually control the process. In chemical processing this took a
massive leap of faith, as large sums of money had to be spent on installing
elements such as control valves that could be manipulated by a computer. It
got so expensive that 30 percent of plant capital went into instrumentation
and final control equipment. Just the act of running a pipe down off a pipe
rack and installing a control valve with all its associated block and bleed
equipment could cost upwards of 20,000 dollars (in the 1970s).

Another thing I noticed about controllability is that the less control you
give to a human being, the better off you are. I experienced some plants that
could not be manually controlled by human beings. One tubular reactor a
colleague worked on could only be controlled by a computer algorithm. If the
algorithm or the field equipment looked like it was failing, they would shut
the plant down.
Here's the interesting thing: once you're committed to proper
instrumentation and final control elements, computers, and state engine
models, you tend to take it all the way and make automation total. Google and
friends have reached this conclusion with the automobile. The less our hands
touch the steering wheel, the better off we all will be. (I invoke a previous
post, the aphorism from Apocalypse Now: never get out of the boat, absolutely
goddamn right, unless you're prepared to take it all the way. No matter what
happens.)

One downside of total automation is that operators need to understand what is
going on in the rare situations where they need to intervene. I am told that
the most common expletive in an aircraft cockpit is, "What the f... is it
doing now?" Once again, this is where good metaphors in design play a
critical role.

A note on engineers not understanding user needs.
In my experience this was solved by chemical plant engineers actually
writing the control software after appropriate training in control theory
and the target computer control systems. Plant engineers were then
responsible for maintaining their own software. Unfortunately this is
impractical in other domains such as aviation.

I will say one thing though: understanding the fundamentals of any process,
chemical or nuclear, is one thing, but knowing how to control it safely is
another. In operating any process you need to look at it from the point of
view of set points, measured variables, lead and lag indicators, time
constants, dead time, gains and rates of change. I found this perspective
missing in a lot of plant engineers, that is, before they were properly
trained in control theory. Some of them seemed helpless to solve their
process problems purely because they were not looking at the issue through
the eyes of a control systems engineer. Indeed, Chernobyl was a result of
some punter not understanding that running those reactors at low power
created an unstable system, which got out of control and ran away when they
attempted to control it manually (just like the tubular reactor I mentioned
above).
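
For anyone who never got that control-theory training, a toy simulation
(entirely made-up numbers, no particular plant) shows the flavour of problem:
a first-order process with dead time, driven by an impatient manual
proportional correction, swings harder on every cycle.

from collections import deque

def simulate(gain=6.0, dead_time_steps=5, tau=10.0, setpoint=1.0, steps=40):
    pv = 0.0
    pending = deque([0.0] * dead_time_steps, maxlen=dead_time_steps)
    trace = []
    for _ in range(steps):
        delayed = pending[0]                 # the correction made dead_time ago
        pv += (delayed - pv) / tau           # first-order lag toward that input
        correction = gain * (setpoint - pv)  # impatient manual "gain"
        pending.append(correction)
        trace.append(round(pv, 2))
    return trace

print(simulate())          # the swings grow: the loop runs away
print(simulate(gain=0.5))  # a gentle hand does not
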
There is also a dilemma here, which I experienced many years ago when it fell
to me to train operators in technologies and ways of operating that they
could not possibly visualise with their current experience. Henry Ford
framed it well (with a metaphor, of course): "If I asked them what they
wanted they'd tell me, 'faster horses'."

There is an element of this in many new things we design these days. For
example, Steve Jobs never had focus groups. Page, Brin and Musk at some point
in their careers were all viewed as crazy (Musk once asked his latest
biographer, "Am I crazy?" as if he was unsure himself).

Steve
I had a quick flip through the FAA Human Factors Design Guide. All good
stuff, but I noted that none of the above issues were addressed. It's as if I
was reading the syntax manual with the bit on semantics missing. Was all
that stuff in another chapter? Tell me it was, mate, or are we entering
another dark age.

Cheers
Les

-----Original Message-----
From: systemsafety-bounces at lists.techfak.uni-bielefeld.de
[mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
Gergely Buday
Sent: Tuesday, July 14, 2015 5:44 AM
To: Gareth Lock
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] power plant user interfaces

On 13 July 2015 at 21:27, Gareth Lock <gareth at humaninthesystem.co.uk> wrote:





The answer is yes, but one of the problems is that engineers are not
normally users, so they have a different perspective on what short-cuts or
'misuse' might happen. This means that the end user needs to be engaged in
the design process too but from my perspective, they aren't normally that
bothered because they can't see or touch it.  In addition, we sometimes get
into the 'but why would anyone do it that way, I designed it this way!'
discussion!


I do not want to hijack the seriousness of the conversation but that
reminded me of this:

https://www.facebook.com/boingboing/photos/a.10151640159521179.1073741825.27479046178/10152326345746179

- Gergely
