[SystemSafety] Fwd: "Protected" Environments

Peter Bernard Ladkin ladkin at causalis.com
Sun Nov 11 07:00:09 CET 2018



On 2018-11-11 02:31, Matthew Squair wrote:
> The first view might also be driven by the inherent vulnerability of legacy systems that are 20 to
> 30 years old and a belief that the only practical approach is the castle defence. 

Neither Stuxnet nor Trisis targeted legacy systems. The kit they targeted was contemporary.

Can anyone name any civil system which has successfully established a "security environment" (it
looks a lot like a "zone" from IEC 62443 but apparently it is not), within which safety engineers
can perform safety analyses and design safety functions assuming everything is cybersecure?

What the second view has going for it is that maybe most complex industrial plant is quite
cyberinsecure and really needs to be brought "up to date" in its cybersecurity measures. The Chatham
House report on cybersecurity in NPPs showed that (to those who didn't already know).

Where the second view is misleading is that it suggests you can succeed. You can, provided your
system has very limited capability, is designed by the world's leading experts, and has heavy layers
of physical security. According to Roger Schell, specific US government agencies have some such
systems. I think we can guess what they look like. They are not networked; they are perfectly
emissions-shielded, and they are physically guarded by soldiers with authority to use deadly force.
And they probably run an OS whose MLS has been formally verified. None of that is a model for
civilian industrial installations with a purpose of, say, providing electricity.

Also, let's see what happens in an incident. Suppose an operator is encouraged to provide a
"security environment". He or she installs a digital controller and leaves it in "program mode" for
operations because (a) it may then be "adjusted" from a location other than standing at its cabinet,
and (b) because it is in a "security environment", the operator need not take the possibility of
malfeasant intrusion into account. Now say the controller is infiltrated and the system gets nailed.
Who is responsible? There are traditionally three organisations involved: the manufacturer of the
vulnerable kit; the system integrator who procured and installed the kit in the plant; the plant
operator.

The manufacturer provided vulnerable kit; the system integrator procured and installed that kit,
believing it was fit for purpose ("due" but not perfect "diligence"); the operator ... manifestly
failed to provide the required "security environment". So it looks like (at least to the legal
eagles) the operator gets to shell out.

Such arrangements could have the salutary secondary effect that system integrators and operators may
require in the future a contractual guarantee from OEMs that their kit is free of cybersecurity
vulnerabilities, in order to provide/build the required "security environment".

They are going to have fun with that. Take a component which one might use to build such a
"security environment", say an industrial Ethernet switch. Perform a Google search on "ics-cert
<your-favorite-industrial-Ethernet-switch>". Wouldn't it be nice if the search came up with nothing?
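For the curious, the suggested search can be sketched as a trivial shell helper that builds the
query URL; the switch model below is a made-up placeholder, not a reference to any real product:

```shell
#!/bin/sh
# Build a Google search URL for ICS-CERT advisories mentioning a given
# industrial Ethernet switch. The model name is purely illustrative.
SWITCH="ExampleSwitch-100"
QUERY="ics-cert ${SWITCH}"
# Replace spaces with '+' to make the query safe for a URL query string.
URL="https://www.google.com/search?q=$(printf '%s' "$QUERY" | tr ' ' '+')"
echo "$URL"
```

Substitute a real model name and open the resulting URL; an empty result set would be the pleasant
surprise.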

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de






