[SystemSafety] power plant user interfaces

Matthew Squair mattsquair at gmail.com
Tue Jul 14 13:52:07 CEST 2015


Actually, an HMI is a little more than 'just a window'. I think you're
looking in the wrong direction.

Complex HMIs actively mediate the interaction between the system under
control and the operator. The more complex that mediation, the more the
operator must maintain a model of the interface itself, not just of the
system under control.

So you have to make not just the system under control understandable to the
operator, so that they can do their job, but the interface as well. That
requires a very good practical understanding of how people think, perceive
and act; a bit more than the 'ilities'. Some call it cognitive engineering.

Matthew Squair

MIEAust, CPEng
Mob: +61 488770655
Email: Mattsquair at gmail.com
Web: http://criticaluncertainties.com

On 14 Jul 2015, at 8:55 am, Les Chambers <les at chambers.com.au> wrote:

Can we get back to first principles here? A human machine interface (HMI)
is just a window into a process that allows human beings to observe what's
going on, understand what's going on and manipulate what's going on (when
human intervention is required) such that the target system succeeds in its
mission (in our case, without killing anyone).

A 'good' HMI therefore supports observe-ability, understand-ability and
control-ability.
If you like, the devil is in the 'ilities'.

After 10 years working with chemical processing reactors of all levels of
complexity, sizes, shapes and chemical processing technologies, followed by
a further few years working on wide area control systems in the rail
industry, interspersed with a year working on development standards for
computers in the shutdown loop of nuclear reactors, I have concluded the
following:

Observe-ability
A groovy HMI (with all the right contrast ratios and menu hierarchies) is
useless if you don't have the instrumentation to observe what's going on in
the process. Case study: QF32 would not have had an engine explosion if
Rolls-Royce had done a mass balance around the lubricating oil flow in
their jet engines. A mass balance would have revealed the oil leak that
ultimately caused the explosion and the near-death experience of 300-odd
people.
Further, these days, just the presence of sensors is not enough. You need
the computing power to calculate secondary variables such as mass balance
and rates of change. It can get even more complicated in chemical
processing when the output of chemical analysis equipment requires
significant processing to come up with numbers that mean something to human
beings. Some of these numbers need to be calculated at high rates depending
on the time constants of the target process.
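The mass-balance idea is simple enough to sketch in a few lines. This is a
minimal, hypothetical illustration only; the function names, flow figures
and alarm threshold are all invented, not taken from any real oil system:

```python
# Hypothetical sketch of a mass-balance check on a lubricating-oil loop:
# oil supplied minus oil returned should be roughly zero, and a
# persistent positive residual points to a leak. All figures and
# thresholds are invented for illustration.

def residual(supply_kg_s, return_kg_s):
    """Instantaneous imbalance between supply and return flows (kg/s)."""
    return supply_kg_s - return_kg_s

def leak_suspected(residuals, threshold_kg_s=0.005):
    """Alarm on the average residual, so one noisy sample can't trip it."""
    return sum(residuals) / len(residuals) > threshold_kg_s

healthy = [0.001, -0.002, 0.002, -0.001]   # residuals hovering around zero
leaking = [residual(1.000, 0.990)] * 4     # small but persistent oil loss

print(leak_suspected(healthy))   # no alarm
print(leak_suspected(leaking))   # alarm
```

The averaging is the important design choice: the alarm fires on a
sustained imbalance, not on a single noisy flow sample.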
A simple example: in a latex reactor control system I once worked on, the
most important number in the plant was the rate of change of reactor
temperature. It was a lead indicator of trouble, maybe 6 to 8 hours in the
future. My point is that the HMI could display this number in the most
primitive and clunky way and it would still be a potent tool in the
man-machine interface. The fact that it existed was what mattered, not the
way it was displayed.
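A calculation like that lead indicator can be sketched in a few lines.
Again this is only an illustration; the sample values, sampling interval
and alarm threshold are invented:

```python
# Hypothetical sketch of the lead-indicator calculation: the rate of
# change of reactor temperature, smoothed over a few samples to damp
# sensor noise. Sample values and the alarm threshold are invented.

def smoothed_rate(samples, dt_minutes, window=3):
    """Average of the last `window` backward differences, in deg C/min."""
    diffs = [(b - a) / dt_minutes for a, b in zip(samples, samples[1:])]
    return sum(diffs[-window:]) / window

# Temperature creeping upward, sampled every 5 minutes.
temps = [80.0, 80.5, 81.1, 81.4, 82.0]
rate = smoothed_rate(temps, dt_minutes=5.0)

# Even the clunkiest display of this one number is valuable.
print(f"dT/dt = {rate:.2f} deg C/min")
print("trend alarm" if rate > 0.05 else "steady")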

Understand-ability
In response to those who might say, "Aw shucks, these systems are highly
complex these days and operators can easily get confused. Geez, look at
what happened at Chernobyl and Three Mile Island" ... I say, "squeeze out
the tears, you sorry bugger."
Professional control systems engineers have known for years that highly
complex systems can be simplified using the right metaphors in design.
Cooperating state engines are one good example that is pretty well
universally applied. ... Although some industries do go through dark ages
where this is forgotten. For example, on one project I actually had to
fight to apply this model universally across a smoke extraction system. In
the end I won by pulling rank and (metaphorically) executing anyone who
disagreed with me.
My point is that at the root ball of understand-ability is the ability to
understand system state. And you cannot achieve this without a well
thought out design using a state model.
Returning to the latex reactor: in common with every other chemical
processing plant I ever worked on, apart from some critical raw or
calculated process variables, the next most important set of numbers was
the state of each unit operation in the process (we called it the step
number, a concept easily understood by anybody). If the controls for these
unit operations were implemented with state engines this became a simple
matter. Some operations (the ones that were potentially explosive) were
more critical than others. So you could walk into a control room, look at
a couple of numbers, and very quickly get the complete picture of where
the plant was up to, or whether it was, in fact, in a dangerous state. It
was observe-ability heaven!
Once again, the display of these numbers could be as clunky as you like;
the fact that they existed was the important thing. And they would not have
existed without a design totally focused on understand-ability through
human-friendly metaphors.
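The step-number idea can be sketched as a toy state engine. The unit name,
step numbers and transitions below are invented; a real plant has many
more steps and interlocks. The point is that "where is the plant up to?"
collapses to one readable number per unit operation:

```python
# Toy state engine for one hypothetical unit operation. Steps and
# legal transitions are invented for illustration.

class UnitOperation:
    STEPS = {
        0: "idle",
        1: "charging",
        2: "reacting",       # the potentially explosive step
        3: "cooling",
        4: "discharging",
    }
    # Only these transitions are legal; anything else is rejected.
    TRANSITIONS = {0: {1}, 1: {2}, 2: {3}, 3: {4}, 4: {0}}

    def __init__(self, name):
        self.name = name
        self.step = 0

    def advance(self, new_step):
        if new_step not in self.TRANSITIONS[self.step]:
            raise ValueError(
                f"{self.name}: illegal transition {self.step} -> {new_step}")
        self.step = new_step

    def status(self):
        """The one number (plus label) an operator reads at a glance."""
        return f"{self.name}: step {self.step} ({self.STEPS[self.step]})"

reactor = UnitOperation("R-101")
reactor.advance(1)
reactor.advance(2)
print(reactor.status())   # R-101: step 2 (reacting)
```

Because illegal transitions are rejected in one place, the step number an
operator sees can be trusted to reflect a state the design actually allows.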

Control-ability
Once again, a groovy HMI is useless unless you have the final control
elements to actually control the process. In chemical processing this took
a massive leap of faith, as large sums of money had to be spent on
installing elements such as control valves that could be manipulated by a
computer. It got so expensive that 30 percent of plant capital went into
instrumentation and final control equipment. Just the act of running a
pipe down off a pipe rack and installing a control valve with all its
associated block and bleed equipment could cost upwards of 20,000 dollars
(in the 1970s).
Another thing I noticed about control-ability is that the less control you
give to a human being, the better off you are. I experienced some plants
that could not be manually controlled by human beings. One tubular reactor
a colleague worked on could only be controlled by a computer algorithm. If
the algorithm or the field equipment looked like it was failing, they
would shut the plant down.
Here's the interesting thing: once you're committed to proper
instrumentation, final control elements, computers, and state engine
models, you tend to take it all the way and make automation total. Google
and friends have reached this conclusion with the automobile. The less our
hands touch the steering wheel, the better off we all will be. (I invoke a
previous post, the aphorism from Apocalypse Now: never get out of the boat,
absolutely god damn right, unless you're prepared to take it all the way.
No matter what happens.)
One downside of total control is that operators need to understand what is
going on in the rare situations where they need to intervene. I am told
that the most common expletive in an aircraft cockpit is, "What the f...
is it doing now?" Once again, this is where good metaphors in design play
a critical role.

A note on engineers not understanding user needs.
In my experience this was solved by having chemical plant engineers
actually write the control software, after appropriate training in control
theory and the target computer control systems. Plant engineers were then
responsible for maintaining their own software. Unfortunately this is
impractical in other domains such as aviation.
I will say one thing though: understanding the fundamentals of any
process, chemical or nuclear, is one thing, but knowing how to control it
safely is another. In operating any process you need to look at it from
the point of view of set points, measured variables, lead and lag
indicators, time constants, dead time, gains and rates of change. I found
this perspective missing in a lot of plant engineers, that is, before they
were properly trained in control theory. Some of them seemed helpless to
solve their process problems purely because they were not looking at the
issue through the eyes of a control systems engineer. Indeed, Chernobyl
was the result of some punter not understanding that running those
reactors at low power created an unstable system, which got out of control
and ran away when they attempted to control it manually (just like the
tubular reactor I mentioned above).
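That perspective can be illustrated with a toy simulation: a first-order
process with dead time under a proportional controller. The model and
every number below are invented, not any real plant; the point is only
that dead time plus too much gain turns a docile loop into a runaway one:

```python
# Toy closed loop: first-order process (time constant tau) whose input
# is delayed by dead_time samples, driven by a proportional controller.
# All numbers are invented for illustration.

def simulate(gain, steps=200, tau=10.0, dead_time=5, setpoint=1.0):
    """Return the output history of the closed loop."""
    y = 0.0
    in_transit = [0.0] * dead_time        # control actions not yet felt
    history = []
    for _ in range(steps):
        u = gain * (setpoint - y)         # proportional control law
        in_transit.append(u)
        u_effective = in_transit.pop(0)   # action issued dead_time ago
        y += (u_effective - y) / tau      # first-order process response
        history.append(y)
    return history

tame = simulate(gain=1.0)   # settles at 0.5: the classic P-only offset K/(1+K)
wild = simulate(gain=8.0)   # dead time plus high gain: growing oscillation
print(round(tame[-1], 3))
print(max(abs(y) for y in wild) > 10)
```

Same process, same controller; the only difference is the gain. Without
feeling for the time constant and the dead time, an operator (or engineer)
tuning that gain by intuition is flying blind.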
There is also a dilemma here, which I experienced many years ago when it
fell to me to train operators in technologies and ways of operating that
they could not possibly visualise with their current experience. Henry
Ford framed it well (with a metaphor, of course): "If I had asked them
what they wanted, they'd have told me, 'faster horses'."
There is an element of this in many new things we design these days. For
example, Steve Jobs never had focus groups. Page, Brin and Musk at some
point in their careers were all viewed as crazy (Musk once asked his
latest biographer, "Am I crazy?" as if he was unsure himself).

Steve
I had a quick flip through the FAA Human Factors Design Guide. All good
stuff, but I noted that none of the above issues were addressed. It's like
I was reading the syntax manual with the bit on semantics missing. Was all
that stuff in another chapter? Tell me it was, mate, or are we entering
another dark age?

Cheers
Les

-----Original Message-----
From: systemsafety-bounces at lists.techfak.uni-bielefeld.de On Behalf Of Gergely Buday
Sent: Tuesday, July 14, 2015 5:44 AM
To: Gareth Lock
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] power plant user interfaces

On 13 July 2015 at 21:27, Gareth Lock <gareth at humaninthesystem.co.uk> wrote:

The answer is yes, but one of the problems is that engineers are not
normally users, so they have a different perspective on what short-cuts or
‘misuse’ might happen. This means that the end user needs to be engaged in
the design process too, but from my perspective, they aren’t normally that
bothered because they can’t see or touch it. In addition, we sometimes get
into the ‘but why would anyone do it that way, I designed it this way!’
discussion!


I do not want to hijack the seriousness of the conversation but that
reminded me of this:

https://www.facebook.com/boingboing/photos/a.10151640159521179.1073741825.27479046178/10152326345746179

- Gergely
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE



