[SystemSafety] A Fire Code for Software?

Peter Bernard Ladkin ladkin at rvs.uni-bielefeld.de
Mon Mar 19 06:46:13 CET 2018



On 2018-03-18 17:15, Martyn Thomas wrote:
> 
> SFAIRP has been legally defined to mean that an employer who chooses to claim that they have met the
> SFAIRP duty must show that they have assessed the costs of reducing the risks further, and that they
> have assessed the benefit of reducing the risks further, and that the costs are "grossly
> disproportionate" to the benefit obtained.
> 
> I doubt that many developers of safety-related systems would be able to pass that test. Notice that
> the burden of proof rests on the party seeking to rely on the claim that the risks *have* been
> reduced SFAIRP.
> 
> ..............
> 
> I have no knowledge that these legal duties have ever been used as the basis for a prosecution or
> claim for damages following an accident attributed in any way to the unsafe behaviour of a software
> based system. If anyone knows of such a case, please send me a reference.

So let us suppose that a cyberattack shuts down a piece of critical infrastructure, and that in
doing so some employee (to make it simple) is injured.

First, there is an existing standard, IEC 61508. If the critical infrastructure is process plant,
then it falls under IEC 61511:2016, which refers to IEC 61508-3 for SW. Note that there is nothing
in IEC 61508-3 concerning hazard or threat analysis. That is all supposed to have been covered in
Parts 1 and 2. So let us turn to Part 1. If malicious action is "reasonably foreseeable", a
"security threats analysis" shall have been carried out. If threats have been identified, then a
"vulnerability analysis" shall have been performed.

But it doesn't say who is responsible for that.

The typical first-line responsible party is the plant operator. The first hook is "reasonably
foreseeable". As far as I know, most intrusions come about through a combination of social
engineering and weak operational security (default passwords and so on). These are intuitively
the responsibility of the operator, or perhaps the plant integrator if the kit has been in place
since service introduction.

Is there any point at all in doing a cost-benefit analysis (CBA) on not changing the default
password on a piece of critical kit? Note that CBA is probability-based, and while the ontology of
cybersecurity has its own notion of risk, that notion is not and cannot be probability-based under
any reasonable construal. So a CBA is a non-starter.
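
To make the point concrete, consider what such a CBA would have to compute: an expected benefit,
i.e. a probability of the harmful event multiplied by the cost of the harm, to be weighed against
the cost of the measure. The sketch below uses invented figures purely for illustration; the one
number the calculation cannot do without is the one nobody can defensibly supply.

    # Minimal CBA sketch. All figures are invented for illustration; none come from
    # IEC 61508/61511 or from any real plant.

    def expected_annual_benefit(p_event_per_year, harm_cost, risk_reduction):
        # Expected benefit of a control = p(event) x cost of harm x fraction of risk removed.
        return p_event_per_year * harm_cost * risk_reduction

    control_cost   = 500.0        # hypothetical cost of changing a default password
    harm_cost      = 2_000_000.0  # hypothetical cost of an injury-causing shutdown
    risk_reduction = 0.9          # hypothetical fraction of intrusion risk removed

    # The whole exercise hinges on this number, and for a deliberate, adaptive
    # adversary there is no defensible way of estimating it.
    p_attack_per_year = None

    if p_attack_per_year is None:
        print("No defensible attack probability: the CBA cannot be completed.")
    else:
        benefit = expected_annual_benefit(p_attack_per_year, harm_cost, risk_reduction)
        print("Grossly disproportionate?", control_cost > 10 * benefit)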

Are operational personnel trained never, ever, to give out any sensitive information over a remote
communication device such as a telephone?

Those methods of access both fall under the rubric of "access control", which is explicitly
addressed by IEC 61511. The OEM surely has, in its operating manual, a line which says "change the
default password" - even off-the-shelf routers from 10 years ago had that, and it is all over
everyone's advice - see for example HMG's CyberAware program at
https://www.theguardian.com/cyber-aware/2018/feb/28/no-excuses-how-to-tighten-up-your-online-security-in-10-minutes
So this is a matter for the operator or system integrator and not the OEM.

Let us now suppose a piece of kit is hacked. Say the access controls (login + password) were
brute-forced. The OEM could argue that, at the time of design and production, access controls were
"state of the art"; that such brute-forcing capability was not "reasonably foreseeable" at the time,
and so their responsibility is fulfilled. Someone else chose the kit, installed it and maintained
it; if the access control has since become weak, it is the maintainer's responsibility to modify the
kit as necessary (and most manufacturers of kit are nowadays involved in some ICS-CERT-like program
to be informed about, and to issue advice concerning, security vulnerabilities). If the access
came through a software vulnerability other than brute-forcing controls, and the vulnerability was
known, then it has a date of discovery. If that date of discovery is after the manufacture of the
kit, there is a prima facie argument that it was not "reasonably foreseeable" at time of
manufacture, and that it was thereby up to the system integrator and operator to reevaluate the
kit's suitability at that date of discovery. OEM off the hook again.

Suppose the date of discovery was way before the date of manufacture. Somebody carried out a
"security threats analysis" and "vulnerability analysis". One can argue it should have been
identified at that time. If it was identified, then responsibility surely lies with the system
integrator or operator for deciding to install a piece of kit with a known vulnerability. OEM off
the hook again. If it was not identified, then the security threats analysis and vulnerability
analysis were faulty. Neither of those would have been the responsibility of the OEM.
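
The branch structure of that argument can be laid out explicitly. The sketch below is merely my
schematic rendering of the reasoning above (the function and its inputs are my own framing, drawn
from nothing in IEC 61508 or 61511); in every branch someone other than the OEM carries the prima
facie responsibility.

    # Schematic rendering of the argument above; my framing, not the standards'.
    from datetime import date

    def prima_facie_responsible(discovery: date, manufacture: date,
                                identified_in_analysis: bool) -> str:
        # Who bears prima facie responsibility for a known software vulnerability?
        if discovery > manufacture:
            # Not "reasonably foreseeable" at time of manufacture; reevaluating the
            # kit's suitability falls to the integrator and operator.
            return "system integrator / operator"
        if identified_in_analysis:
            # Known and identified, yet the kit was installed anyway.
            return "system integrator / operator"
        # Known but missed: the threats/vulnerability analysis was faulty.
        return "whoever performed the security threats / vulnerability analysis"

    # In no branch does the OEM appear (the dates are hypothetical).
    print(prima_facie_responsible(date(2015, 6, 1), date(2017, 1, 1), False))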

I don't yet see the legal leverage of the HSWA on OEMs here. Further, I don't yet see how to
construct a lever. If as an OEM I give you a piece of kit and say "this kit has a four-ASCII-letter
root password" and you use it for something and someone gets hurt because access control on the kit
is brute-forced by a malicious intruder, then surely you are on the hook for inappropriate use, not
I. There is nothing except pure commercial pressure (if that) motivating me to install stronger
access control.
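
For what it is worth, the arithmetic on such a password is not reassuring even at very modest
guessing rates. A back-of-the-envelope sketch, in which "letter" is read as upper- or lower-case
alphabetic and the guess rate is my assumption rather than a measured figure:

    # Back-of-the-envelope keyspace for a four-ASCII-letter password.
    import string

    alphabet = len(string.ascii_letters)      # 52 upper- and lower-case letters
    keyspace = alphabet ** 4                  # 7,311,616 candidates
    guesses_per_second = 1_000                # a very modest online guessing rate (assumption)

    hours_to_exhaust = keyspace / guesses_per_second / 3600
    print(f"keyspace = {keyspace:,}; exhausted in at most {hours_to_exhaust:.1f} hours")
    # keyspace = 7,311,616; exhausted in at most 2.0 hours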

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de




