[SystemSafety] power plant user interfaces

Carl Sandom carl at isys-integrity.com
Wed Jul 15 13:14:05 CEST 2015


On 2015-07-15 03:12 , Les Chambers wrote:
> So my point is: the key to a good HMI is excellent metaphor design.

Real-life metaphors can be used to communicate meaning (e.g. WIMP interface design), but they are not a panacea for safe HMI design. In the context of creating a 'good' (whatever that means) HMI, metaphors have been important for selling novel consumer technologies such as phones or computers with revolutionary new interfaces. Creating HMIs for safety-related systems is an entirely different proposition: interfaces for, e.g., ATM or nuclear applications are usually evolutionary rather than revolutionary, and metaphors are rarely used unless they have already become the norm, as there are obvious risks in adopting novel solutions.

Metaphors are an abstract concept and you still need the 'Lego blocks' to actually develop HMIs, which is why guidelines are a good thing. Guidelines usually encapsulate years of learning and best practice; the problem arises when people try to develop HMIs from them without taking the other important systems-level issues into account, such as people, procedures, equipment and the context of use.

We don't develop HMIs - we develop systems which have HMIs.

PS: Donald Norman's book The Design of Everyday Things introduced the concept of 'affordance' (similar in spirit to metaphors) to facilitate the easy discoverability of possible actions. It's a good read.

Cheers
Carl

_________________________________

Dr. Carl Sandom CErgHF CEng FIET

Director

iSys Integrity Limited

10 Gainsborough Drive

Sherborne

Dorset, DT9 6DR

United Kingdom

+44 (0) 7967 672560

Carl at iSys-Integrity.com

www.iSys-Integrity.com

_________________________________







From: systemsafety-bounces at lists.techfak.uni-bielefeld.de [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of Les Chambers
Sent: 15 July 2015 02:12
To: 'Matthew Squair'
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] power plant user interfaces

Matthew
Just so. I'm in furious agreement.
Not long ago I spent 40 hours on the helm of a yacht crossing the Atlantic. The yacht had no autopilot. It occurred to me one night, "My God, I am become an automaton", which led me to the following thought process that may shed light on the fundamental quality factors in HMI design.

Picture life as a control system. Your job is to ride herd on some equipment against the constraints that the humans have given you. All you can see are a few measured variables. It's like looking at life through a straw. All you can do is pull some levers that the humans have given you. If the behavioural model of the equipment under your control matches well enough with real life, the script the humans gave you (read control algorithm) has a chance of working. There are a few things in your favour. You are attentive 24/7, you never fall asleep. You can also respond to disturbances in milliseconds. You're capable of processing large amounts of data and taking many control actions almost in parallel. That is of course if your model is a good reflection of real life. The model can be as complex as you like, just as long as you can run it in real time.
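
As a minimal sketch of that loop (in Python, with a hypothetical plant/model/controller interface invented purely for illustration):

import time

def control_loop(plant, model, controller, period_s=0.1):
    # One pass per period: look through the straw, update the model,
    # run the script the humans gave you, pull the levers.
    while True:
        measured = plant.read_sensors()         # the few measured variables you can see
        estimate = model.update(measured)       # behavioural model of the equipment, run in real time
        actions = controller.decide(estimate)   # the control algorithm ('the script')
        plant.apply(actions)                    # the few levers you were given
        time.sleep(period_s)                    # attentive 24/7, millisecond-scale response if needed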

Your other job is to pass on a simplified picture of what's going on to the humans. And you do this through a user interface. The humans are stupid and slow, prone to sleepiness with an unhealthy penchant for sex, drugs and rock 'n' roll. They're also carrying a lot of distracting baggage: mortgages, family dramas, gambling habits. They don't have your eye for detail and may panic and run when things go pear shaped. So if you can possibly help it, don't bother them with drama unless it is absolutely necessary. Even if you are not feeling well, heal yourself and tell the maintenance guys later. They tend to be the more rational of humans.
But take pity on the operators, because they are looking at life through a straw also, the one you gave them in the HMI. So what you give them had better be rich in simple metaphor, explaining a lot with a little. A metaphor is no good if you've got to explain it to someone. They need to glance at it and say, "Oh yeah, I got it, life's like that." This is why drag and drop has been so successful. This is why the concepts of "reactor step" and "sequence control unit" were so successful in chemical reactor operations. They were a simplification of the concepts of state and state engine, Mealy models, Moore models, Harel statecharts and the like. It was all the operators needed to know through their narrow straw, the one we gave them in the HMI.
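
To make the 'reactor step' metaphor concrete, here is a rough Python sketch (step names invented for illustration): internally a little state engine, externally nothing but a step number and name for the operator.

class SequenceControlUnit:
    # Inside: a small state engine. Outside: just 'the step number'.
    STEPS = {0: "idle", 1: "charge", 2: "heat", 3: "react", 4: "cool", 5: "discharge"}

    def __init__(self):
        self.step = 0

    def advance(self, transition_conditions_met):
        # Move to the next step only when its entry conditions hold.
        if transition_conditions_met and self.step < max(self.STEPS):
            self.step += 1

    def hmi_view(self):
        # All the operator needs to see through the straw.
        return "Step %d: %s" % (self.step, self.STEPS[self.step])

The Mealy/Moore/statechart machinery stays behind the curtain; the step number is the metaphor.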

So my point is: the key to a good HMI is excellent metaphor design. The FAA standard lists all the HMI Lego blocks in stupefying detail but there is no guidance on how to assemble them into a compelling metaphor. Where is the standard for that?

Steve. Hallo. Are you there?

Les

From: Matthew Squair [mailto:mattsquair at gmail.com]
Sent: Tuesday, July 14, 2015 9:52 PM
To: Les Chambers
Cc: Gergely Buday; Gareth Lock; systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] power plant user interfaces

Actually an HMI is a little more than 'just a window'. I think you're looking in the wrong direction.

Complex HMIs actively mediate the interaction between the system under control and the operator. The more complex that mediation, the more the operator must herself maintain a model of the interface, not just of the system under control.

So you have to make not just the system under control understandable to the operator, so that they can do their job, but the interface as well. That requires a very good practical understanding of how people think, perceive and so on; it's a bit more than the 'ilities', and some call it cognitive engineering.

Matthew Squair

MIEAust, CPEng
Mob: +61 488770655
Email: Mattsquair at gmail.com
Web: http://criticaluncertainties.com

On 14 Jul 2015, at 8:55 am, Les Chambers <les at chambers.com.au> wrote:
Can we get back to first principles here? A human-machine interface (HMI) is just a window into a process that allows human beings to observe what's going on, understand what's going on and manipulate what's going on (when human intervention is required), such that the target system succeeds in its mission (in our case, without killing anyone).

A 'good' HMI therefore supports observe-ability, understand-ability and control-ability.
If you like, the devil is in the 'ilities'.

After 10 years working with chemical processing reactors of all levels of complexity, sizes, shapes and chemical processing technologies, followed by a further few years working on wide-area control systems in the rail industry, interspersed with a year working on development standards for computers in the shutdown loop of nuclear reactors, I have concluded the following:

Observe-ability
A groovy HMI (with all the right contrast ratios and menu hierarchies) is useless if you don't have the instrumentation to observe what's going on in the process. Case study: QF32 would not have had an engine explosion if Rolls-Royce had done a mass balance around the lubricating oil flow in their jet engines. A mass balance would have revealed the oil leak that ultimately caused the explosion and the near-death experience of 300-odd people.
Further, these days, just the presence of sensors is not enough. You need the computing power to calculate secondary variables such as mass balance and rates of change. It can get even more complicated in chemical processing when the output of chemical analysis equipment requires significant processing to come up with numbers that mean something to human beings. Some of these numbers need to be calculated at high rates depending on the time constants of the target process.
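
As a hedged illustration of such a secondary variable, here is a toy oil mass balance in Python; the function names and tolerance are invented, and this is not a claim about how Rolls-Royce would do it:

def oil_balance_residual(mass_start_kg, mass_added_kg, mass_end_kg, expected_burn_kg):
    # Mass balance over a window: oil in minus oil remaining,
    # compared with what normal consumption should explain.
    actual_loss_kg = mass_start_kg + mass_added_kg - mass_end_kg
    return actual_loss_kg - expected_burn_kg

def possible_leak(residual_kg, tolerance_kg=0.5):
    # Flag when the unexplained loss exceeds measurement tolerance.
    return residual_kg > tolerance_kg
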
A simple example: in a latex reactor control system I once worked on, the most important number in the plant was the rate of change of reactor temperature. It was a lead indicator of trouble, maybe 6 to 8 hours in the future. My point is that the HMI could display this number in the most primitive and clunky way and it would still be a potent tool in the man-machine interface. The fact that it existed was what mattered most, not the way it was displayed.
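
For illustration only, a minimal rate-of-change lead indicator in Python; the window size and alarm limit are invented numbers, not the latex plant's settings:

from collections import deque

class TemperatureRateIndicator:
    def __init__(self, window_samples=30, alarm_limit_c_per_min=2.0):
        self.samples = deque(maxlen=window_samples)   # (time_s, temp_c) pairs
        self.alarm_limit = alarm_limit_c_per_min

    def update(self, time_s, temp_c):
        # Returns (rate of change in degC/min over the window, alarm flag).
        self.samples.append((time_s, temp_c))
        (t0, c0), (t1, c1) = self.samples[0], self.samples[-1]
        if t1 <= t0:
            return 0.0, False
        rate = (c1 - c0) / ((t1 - t0) / 60.0)
        return rate, rate > self.alarm_limit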

Understand-ability
In response to those who might say, "Aw shucks, these systems are highly complex these days and operators can easily get confused. Geez, look at what happened at Chernobyl and Three Mile Island" ... I say, "squeeze out the tears, you sorry bugger."
Professional control systems engineers have known for years that highly complex systems can be simplified using the right metaphors in design. Cooperating state engines are one good example that is pretty well universally applied. ... Although some industries do go through dark ages where this is forgotten. For example, in one project I actually had to fight to apply this model universally across a smoke extraction system. In the end I won by pulling rank and (metaphorically) executing anyone who disagreed with me.
My point is that at the root ball of understand-ability is the ability to understand system state. And you cannot achieve this without a well-thought-out design using a state model.
Returning to the latex reactor, in common with every other chemical processing plant I ever worked on: apart from some critical raw or calculated process variables, the next most important set of numbers was the state of each unit operation in the process (we called it the step number – a concept easily understood by anybody). If the controls for these unit operations were implemented with state engines, this became a simple matter. Some operations (the ones that were potentially explosive) were more critical than others. So you could walk into a control room, look at a couple of numbers and very quickly get the complete picture of where the plant was up to, or whether it was, in fact, in a dangerous state. It was observe-ability heaven!
Once again, the display of these numbers could be as clunky as you like; the fact that they existed was the important thing. And they would not have existed without a design totally focused on understand-ability through human-friendly metaphors.
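
Purely as an illustration of that control-room view, a Python sketch that reduces the plant to one step number per unit operation (unit names and 'dangerous' steps invented):

DANGEROUS_STEPS = {"polymerisation": {3, 4}}   # steps that warrant immediate attention

def plant_overview(unit_steps):
    # unit_steps: dict of unit-operation name -> current step number.
    lines, alarms = [], []
    for unit, step in unit_steps.items():
        lines.append("%s: step %d" % (unit, step))
        if step in DANGEROUS_STEPS.get(unit, set()):
            alarms.append("%s is in critical step %d" % (unit, step))
    return lines, alarms

# e.g. plant_overview({"polymerisation": 3, "stripping": 1, "blending": 0})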

Control-ability
Once again, a groovy HMI is useless unless you have the final control elements to actually control the process. In chemical processing this took a massive leap of faith, as large sums of money had to be spent on installing elements such as control valves that could be manipulated by a computer. It got so expensive that 30 percent of plant capital went into instrumentation and final control equipment. Just the act of running a pipe down off a pipe rack and installing a control valve with all its associated block-and-bleed equipment could cost upwards of 20,000 dollars (in the 1970s).
Another thing I noticed about controllability is that the less control you give to a human being, the better off you are. I experienced some plants that could not be manually controlled by human beings at all. One tubular reactor a colleague worked on could only be controlled by a computer algorithm. If the algorithm or the field equipment looked like it was failing, they would shut the plant down.
Here's the interesting thing: once you're committed to proper instrumentation and final control elements, computers, and state engine models, you tend to take it all the way and make automation total. Google and friends have reached this conclusion with the automobile. The less our hands touch the steering wheel, the better off we all will be. (I invoke a previous post, the aphorism from Apocalypse Now: never get out of the boat, absolutely goddamn right, unless you're prepared to take it all the way. No matter what happens.)
One downside of total automation is that operators still need to understand what is going on in the rare situations where they do need to intervene. I am told that the most common expletive in an aircraft cockpit is, "What the f... is it doing now?" Once again, this is where good metaphors in design play a critical role.

A note on engineers not understanding user needs.
In my experience this was solved by chemical plant engineers actually writing the control software themselves, after appropriate training in control theory and the target computer control systems. Plant engineers were then responsible for maintaining their own software. Unfortunately this is impractical in other domains such as aviation.
I will say one thing though: understanding the fundamentals of any process, chemical or nuclear, is one thing, but knowing how to control it safely is another. In operating any process you need to look at it from the point of view of set points, measured variables, lead and lag indicators, time constants, dead time, gains and rates of change. I found this perspective missing in a lot of plant engineers, that is, before they were properly trained in control theory. Some of them seemed helpless to solve their process problems purely because they were not looking at the issue through the eyes of a control systems engineer. Indeed, Chernobyl was the result of some punter not understanding that running those reactors at low power created an unstable system, which got out of control and ran away when they attempted to control it manually (just like the tubular reactor I mentioned above).
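
To illustrate what that perspective buys you, here is a toy Python simulation of a first-order process with dead time under proportional-only control; every parameter (gain, time constant, dead time, set point) is invented for illustration, not taken from any real plant:

def simulate(setpoint=100.0, gain=1.5, tau_s=60.0, dead_time_s=30.0, dt_s=1.0, steps=600):
    # First-order lag plus dead time, driven by a proportional controller.
    pv = 20.0                                    # starting process variable
    pipeline = [0.0] * int(dead_time_s / dt_s)   # actuator effect delayed by the dead time
    history = []
    for _ in range(steps):
        mv = gain * (setpoint - pv)              # manipulated variable from the set-point error
        pipeline.append(mv)
        delayed_mv = pipeline.pop(0)
        pv += (delayed_mv - pv) * dt_s / tau_s   # first-order lag towards the delayed input
        history.append(pv)
    return history

Crank up the gain or the dead time and the response oscillates and, past a point, goes unstable; much like the tubular reactor above.
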
There is also a dilemma here which I experienced many years ago when it fell to me to train operators in technologies and ways of operating that they could not possibly visualise with their current experience. Henry Ford framed it well (with a metaphor of course): "If I asked them what they wanted they'd tell me, 'faster horses'."
There is an element of this in many new things we design these days. For example, Steve Jobs never used focus groups. Page, Brin and Musk were all, at some point in their careers, viewed as crazy (Musk once asked his latest biographer, "Am I crazy?", as if he was unsure himself).

Steve
I had a quick flip through the FAA Human Factors Design Guide. All good stuff, but I noted that none of the above issues was addressed. It was like reading the syntax manual with the bit on semantics missing. Was all that stuff in another chapter? Tell me it was, mate, or are we entering another dark age?

Cheers
Les

-----Original Message-----
From: systemsafety-bounces at lists.techfak.uni-bielefeld.de [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of Gergely Buday
Sent: Tuesday, July 14, 2015 5:44 AM
To: Gareth Lock
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] power plant user interfaces

On 13 July 2015 at 21:27, Gareth Lock <gareth at humaninthesystem.co.uk> wrote:

The answer is yes, but one of the problems is that engineers are not normally users, so they have a different perspective on what short-cuts or ‘misuse’ might happen. This means that the end user needs to be engaged in the design process too, but from my perspective they aren’t normally that bothered, because they can’t see or touch it. In addition, we sometimes get into the ‘but why would anyone do it that way, I designed it this way!’ discussion!

I do not want to hijack the seriousness of the conversation, but that reminded me of this:

https://www.facebook.com/boingboing/photos/a.10151640159521179.1073741825.27479046178/10152326345746179

- Gergely
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE




More information about the systemsafety mailing list