[SystemSafety] Making Standards available .....

Martyn Thomas martyn at thomas-associates.co.uk
Sun May 15 14:15:32 CEST 2016


On 15/05/2016 05:28, Daniel Grivicic wrote:
> What I would like to understand is why there is little coverage of
> security in IEC61508. My reading of it only discovered high-level
> information. If safety and security were applied based on how much
> coverage they have in IEC61508, I think you may find that the current
> balance is the status quo: a lot of safety and a little security. What
> do others think?
>
> As a general question, do engineers (or others) really need this
> (security) spelt out in a standard when current knowledge already treats
> security as an important partner to safety? Can best practice
> evolve without a standard?


Daniel

My recollection is that IEC 61508 was derived from IEC (or possibly BS)
1508 and that 1508 was developed in the late 1980s and early 1990s by
Ron Bell (of the UK Health and Safety Executive) and others.

That was 20+ years ago, when cybersecurity was hardly an issue, and the
software part of 1508 was based on safety ideas from the discrete-component
hardware world (and perhaps PLCs), where "systematic faults" (design
errors) had not been a major problem, because of the far lower complexity
of discrete-component hardware systems. So the standard did not address
cybersecurity.

The world has changed, and cyberattacks are now part of our operating
environment, like the weather and corrosion. Search engines such as
Shodan will now find all industrial control systems connected to the
internet, and pentest tools like Metasploit will allow thousands of
known exploits to be run against them. For a designer or operator of a
safety-related computer-based system to ignore the safety consequences
of a cyberattack would be incompetent today. But the standards process
is slow (for good reasons and bad) and 61508 has not caught up with the
changed environment.
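To make that concrete, here is a minimal Python sketch of the kind of query
an auditor (or an attacker) can run, assuming the official shodan client and
a valid API key; the filter "port:502" (Modbus/TCP) is only one illustrative
ICS-related search, and the key shown is a placeholder.

# Illustrative only: list some internet-exposed Modbus/TCP devices via Shodan.
# Assumes the "shodan" Python client (pip install shodan) and a valid API key.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

# Modbus/TCP normally listens on port 502; many other ICS filters exist.
results = api.search("port:502")

print("Exposed hosts reported:", results["total"])
for match in results["matches"][:5]:
    # Each match carries the host IP, the owning organisation and banner data.
    print(match["ip_str"], match.get("org", "unknown"))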

Incorporating cybersecurity issues fully in 61508 would require a
fundamental change, in my opinion, because separate failures caused by
cyberattacks cannot reasonably be treated as independent: an attacker
may have the capability and the motivation to attack multiple subsystems
simultaneously. That undermines probabilistic risk analysis. (Reliance on
such probabilistic arguments is already a fundamental weakness of 61508,
because the standard is built around the probability of a safety function
failing, even though it is acknowledged that safety functions may contain
software and that it is not practical to provide adequate evidence of
extremely low failure probabilities for complex software.) 61508 currently
sidesteps this issue.
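A toy calculation makes the point; the probabilities below are invented
purely for the arithmetic and have nothing to do with any real system or
with figures in 61508.

# Toy arithmetic: coordinated attacks break the independence assumption.
# All numbers are invented for illustration only.
p_channel_fails = 1e-4                 # assumed random failure probability per channel
p_both_random = p_channel_fails ** 2   # independence gives 1e-8 for two channels

# A single attack campaign can target both channels through a shared weakness,
# so the joint failure probability is roughly the chance the attack succeeds.
p_attack_succeeds = 1e-2               # assumed, and essentially unknowable in practice
p_both_under_attack = p_attack_succeeds

print(f"two independent random failures: {p_both_random:.0e}")        # 1e-08
print(f"one common-cause cyberattack:    {p_both_under_attack:.0e}")  # 1e-02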

It seems to me that a revision to take adequate account of cybersecurity
would need to require that developers prove that many classes of
cyberattack are impossible. That will require either major constraints on
system architectures, such as requiring that every safety function be
isolated from, and independent of, all external sources of control or
data, or proof that the software is invulnerable to many classes of
attack, including all the exploits in the databases of known attacks
(such as the Offensive Security Exploit Database Archive at
https://www.exploit-db.com/).
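As a very small illustration of the second option, here is a Python sketch
of a safety function that acts only on externally supplied data that has
passed an explicit check; the message format, field names and numeric ranges
are all hypothetical, and the hard part, showing that no code path can
bypass the check, is exactly what would need proof rather than testing.

# Hypothetical sketch: a safety function that only acts on validated input.
# The message layout, ranges and threshold are invented for illustration.
SENSOR_MIN_KPA = 0.0
SENSOR_MAX_KPA = 1600.0

def validated_pressure(raw: bytes) -> float:
    """Parse an externally supplied pressure reading, rejecting anything
    outside the specification instead of passing it on."""
    if len(raw) != 8:
        raise ValueError("unexpected message length")
    value = int.from_bytes(raw[:4], "big") / 100.0
    checksum = int.from_bytes(raw[4:], "big")
    if checksum != sum(raw[:4]):                    # hypothetical integrity check
        raise ValueError("checksum mismatch")
    if not (SENSOR_MIN_KPA <= value <= SENSOR_MAX_KPA):
        raise ValueError("pressure outside specified range")
    return value

def safety_trip(raw: bytes) -> bool:
    """Demand a trip only on the basis of a validated reading."""
    return validated_pressure(raw) > 1500.0         # hypothetical trip threshold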

I don't know of any practical way to do that, other than through formal
methods and proof tools, which would require a major change to current
industrial software development practices. I do not expect that the
standards committees will adopt such a standard, since they are
controlled by industrial interests.

Martyn
These views are mine and should not be taken as representing the policy
of the UK Health and Safety Executive.

