[SystemSafety] AI and the virtuous test Oracle - action now!

Prof. Dr. Peter Bernard Ladkin ladkin at techfak.de
Wed Jul 26 10:18:26 CEST 2023



On 2023-07-26 08:52 , Dr. Brendan Patrick Mahony wrote:
> 
>> [PBL] Am I right that this is a regime that concerns cybersecurity? And that it is a US
>> Government process? And that it concerns largely IT systems?
> 
> All good questions I was hoping someone wiser might have answers to.

OK; I'll turn my first two questions into assertions. It seems to me that ATO/cATO concerns just 
cybersecurity. It also seems to me that ATO/cATO is a requirement for USG IT operations.

Many governments now have cybersecurity requirements for their operations. In the UK there is the 
Cyberessentials scheme, which has been running since 2014. I can't talk about ATO, but I can talk a 
bit about Cyberessentials. The NCSC notes that Cyberessentials certification is required for *some* 
government contracts. I recall there was a lot of discussion about the scheme at the 2016 IET System 
Safety and Cyber Security conference in London.

I have a passing acquaintance with Cyberessentials, but I have personally neither self-certified nor 
been through certification. It was originally conceived as a "basic" scheme, but a knowledgeable 
colleague of mine spent a couple of years trying to self-certify his consultancy, and there was also 
a lot of very odd talk at the 2016 conference from certification organisations disparaging their 
customers (not a business practice I would recommend to anyone, let alone at a technical 
conference...). The NCSC recognised at the time that Cyberessentials was not as easy as HMG had 
initially intended (the programme is in fact older than the NCSC). The cybersecurity officer of a 
major multinational told me that Cyberessentials certification was one of the most expensive 
internal exercises the company had ever undertaken.

There is nothing to criticise here. The point is that cybersecurity "basics" turn out not to be as 
easy as people thought, and don't completely work. If people ask for advice on inhibiting burglary, 
they will be told "have good locks on all your doors and locking devices on your accessible windows, 
and use them." Simple, good advice. But that advice has been around for years and burglary has not 
ceased.

The major issue with IT cybersecurity (not just with IT) is the supply chain. Bruce Schneier opines 
that it is an insoluble problem. Supply-chain issues are well illustrated in USG IT circles by the 
SolarWinds incident.

I think ATO/cATO concerns IT, but I ask for correction if this is mistaken.

IT means systems whose useful outputs are information, or information structures, displayed (when 
they are displayed) largely on screens. None of these are safety-related under the IEC definition of 
functional safety. Functional safety largely concerns operational technology (OT), that is, control 
systems.

I would say that the difference between IT cybersecurity and OT cybersecurity is quite significant, 
because the general difference in architectures is quite significant. Detection is generally easier 
in OT: it is easier to spot that a control system isn't doing what the operator thinks it should be 
doing, and to invoke shutdown or other prophylaxis, than it is (in IT) to detect somebody reading 
all your confidential records, or diverting a few extra pence of a transaction somewhere it 
shouldn't be going. (Financial transaction systems have traditionally been considered IT. Since they 
perform transactions it is possible in principle to consider them as OT, but here "harm" would be 
interpreted as financial error, which is not in the IEC definition.) OT of necessity has system 
architectures which are clearer than those of large complex IT systems.
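
Here is a toy sketch of what that OT-style detection can look like (plain Python, with invented 
names and an invented tolerance, not drawn from any real control system): a monitor compares what 
was commanded with what is observed and trips to a safe state on divergence.

    # Toy illustration only: a plausibility monitor in the OT style.
    # All names and the tolerance value are invented for this sketch.

    TOLERANCE = 5.0  # assumed acceptable deviation, in engineering units

    def safe_shutdown(reason: str) -> None:
        """Stand-in for tripping the plant to a known safe state."""
        print(f"SHUTDOWN: {reason}")

    def monitor_step(setpoint: float, measured: float) -> None:
        """Compare commanded and observed behaviour; act on divergence."""
        deviation = measured - setpoint
        if abs(deviation) > TOLERANCE:
            safe_shutdown(f"deviation {deviation:+.1f} exceeds tolerance {TOLERANCE}")

    # Operator commands 100.0; the plant is observed at 92.0 -> shutdown.
    monitor_step(setpoint=100.0, measured=92.0)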

The difference between inhibiting/mitigating something (cybersecurity incidents) and preventing 
something (accidents) is also much more than a matter of degree.  A HazAn performed on a large IT 
system for cybersecurity hazards will turn up masses of potential vulnerabilities, most of which the 
operator just has to swallow/"learn to live with" (or cease operations); whereas for safety hazards 
each and every one of the hazards with severity above the tolerable risk level *must* be avoided or 
mitigated as a matter of law (this is how ALARP, a legal requirement, plays out in common-law 
countries).
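
To caricature that asymmetry in a few lines (a hypothetical sketch in Python, with invented fields, 
an invented ordinal risk scale and threshold, and made-up example hazards; not any real HazAn method 
or tooling):

    # Hypothetical sketch: invented fields, risk scale, threshold and
    # example hazards; not any particular HazAn method or tool.

    from dataclasses import dataclass

    TOLERABLE_RISK = 3  # assumed tolerability threshold on some ordinal scale

    @dataclass
    class Hazard:
        name: str
        risk: int      # ordinal risk level assigned by the HazAn
        safety: bool   # True for a safety hazard, False for an IT/security finding

    def disposition(h: Hazard) -> str:
        if h.safety and h.risk > TOLERABLE_RISK:
            # Safety case: risk reduction is mandatory (the ALARP obligation).
            return "must be avoided or mitigated"
        if not h.safety:
            # IT/security case: most findings are accepted and lived with.
            return "accept / learn to live with (or cease operations)"
        return "tolerable as assessed"

    for h in (Hazard("credential phishing of back-office staff", 4, safety=False),
              Hazard("loss of braking command", 4, safety=True)):
        print(f"{h.name}: {disposition(h)}")

The only point of the sketch is the branch structure: the security finding is accepted by default, 
while the safety hazard above the threshold is not negotiable.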

> My concerns are that it seems to be being promoted as an approach to in-servicing “intelligent” 
> systems, meaning they intend to use cATO to authorise the fielding of whole swathes of “operator” 
> replacement software.
Certainly something worthy of discussion. If no one here answers, I'll see if I can find pointers 
elsewhere.

PBL

Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de





