[SystemSafety] Collected stopgap measures

Ross Hannan - Sigma ross_hannan at sigma-aerospace.com
Mon Nov 5 13:30:35 CET 2018


The reason for the departure from the airborne guidance for COTS was the significantly higher usage of COTS in ground systems compared to airborne systems. The airborne guidance contains no explicit objectives for COTS; it simply states that the COTS needs to meet the guidance of the document and that, where there are shortfalls, the data should be augmented to satisfy the objectives of the document. ED-109A/DO-278A provides 14 COTS-related objectives but is still aiming at the goal of full compliance of the COTS. That has proven problematic for applicants, and for those authorities, CVEs and DERs assessing the software, where the COTS is assigned anything above Assurance Level 5 or Software Level D. So as we move forward we need to provide guidance material which is less onerous and more practical than what we have now, but it would still likely be based on ED-109A/DO-278A.

 

Ross

 

From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> On Behalf Of Matthew Squair
Sent: 04 November 2018 22:54
To: Paul Sherwood <paul.sherwood at codethink.co.uk>
Cc: systemsafety at techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Collected stopgap measures

 

In DO-278A - the ground (CNS/ATM) version of 178C - there's an interesting approach to COTS, which, having just completed an SOI review on planning for the program I'm working on, I'm painfully familiar with. 

 

First off, the standard has additional explicit COTS-related objectives, activities and evidence of compliance for software lifecycle processes covering planning, acquisition, verification and configuration management. This is over and above what 178C has. 

 

Specifically, the 278A approach to COTS is (a rough sketch of how these steps might be recorded follows the list):

 

*	Develop a plan for COTS acquisition and integration, documented in the Plan for Software Aspects of Approval (PSAA); this is approved by the approval authority.
*	Conduct a gap analysis against the extant (non-COTS) objectives (Sections 4 to 9 of the standard, common with 178C).
*	Record the gap analysis in an assurance case for software integrity.
*	Identify how any gaps will be filled, appropriate to the Assurance Level, and gain agreement from the approving authority.
*	Identify derived requirements for the COTS software and provide these to the system safety process.
*	Provide evidence for those objectives of Sections 4 to 9 which can be achieved.
*	For the gaps, provide assurance that the same level of confidence has been achieved as would have been if the objectives of Sections 4 through 9 had been met.
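
To make the bookkeeping concrete, here's a minimal sketch (in Python) of how the per-objective gap analysis might be recorded for the assurance case. The class, field and function names are my own illustration, not terminology from DO-278A, and a real record set would of course be far richer:

    from dataclasses import dataclass, field

    @dataclass
    class ObjectiveRecord:
        objective_id: str                    # e.g. a Sections 4-9 objective reference
        met: bool                            # True if COTS evidence satisfies it as-is
        evidence: list[str] = field(default_factory=list)      # compliance evidence refs
        gap_fill: str | None = None          # closure technique agreed via the PSAA
        derived_reqs: list[str] = field(default_factory=list)  # fed to system safety

    def assurance_summary(records: list[ObjectiveRecord]) -> dict:
        """Summarise the gap analysis for the software integrity assurance case."""
        gaps = [r for r in records if not r.met]
        return {
            "objectives": len(records),
            "gaps": len(gaps),
            "awaiting_agreement": [r.objective_id for r in gaps if r.gap_fill is None],
        }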

 

For gap assurance (section 12.4.11.2.4 of 278A) the applicant can rely on service experience, additional testing, functional restriction, monitoring and recovery, design knowledge, audits and inspections, or prior approval. The standard provides guidance for each of these techniques to generate appropriate evidence to support the claim of equivalent assurance. You can use others, but you’d need to get the agreement of the approver (through the PSAA).
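
Those seven techniques enumerate naturally. Here's a short, self-contained follow-on to the sketch above, again with invented names and placeholder objective IDs rather than anything taken from the standard, pairing each gap with its chosen techniques and supporting evidence:

    from enum import Enum

    class GapAssurance(Enum):
        SERVICE_EXPERIENCE = "service experience"
        ADDITIONAL_TESTING = "additional testing"
        FUNCTIONAL_RESTRICTION = "functional restriction"
        MONITORING_AND_RECOVERY = "monitoring and recovery"
        DESIGN_KNOWLEDGE = "design knowledge"
        AUDITS_AND_INSPECTIONS = "audits and inspections"
        PRIOR_APPROVAL = "prior approval"

    # Hypothetical closure plan: objective reference -> (techniques, evidence).
    # The IDs and evidence strings below are placeholders, not real citations.
    gap_plan = {
        "obj-6.3.x": (
            [GapAssurance.SERVICE_EXPERIENCE, GapAssurance.ADDITIONAL_TESTING],
            ["in-service history report", "targeted regression test results"],
        ),
    }

    for obj, (techniques, evidence) in gap_plan.items():
        print(obj, [t.value for t in techniques], evidence)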

 

This is something of a departure from the airborne side of the house, but it reflects the way in which CNS/ATM systems are built and the challenges thereof. Yes, there's usually new or modified application software for an ATM system, but that's also usually riding on a COTS OS, like Linux. I'd also note as an aside that the issue for CNS/ATM systems is data veridicity, as this is still a very human-driven process (PBL wrote an article on this, worth a read).

 

Just as a BTW, Falcon 9 and Dragon run a variant of Linux as flight software, just sayin'.

 

On 5 Nov 2018, at 7:53 am, Paul Sherwood <paul.sherwood at codethink.co.uk> wrote:

 

On 2018-11-04 11:41, Martyn Thomas wrote:



Please don't take offense at the style of some of the responses on this
list. The signal-to-noise ratio is generally reasonably high, there's a
lot of expertise here (and a lot of frustration because so many
safety-related systems are built unprofessionally and unsafely and it
seems impossible to achieve the necessary culture changes).


Noted.




Your questions and challenges have been constructive and useful, in my
opinion.


Thank you.




You are right of course that Linux is used in critical systems but it is
an open question whether that is adequately safe, secure or (in some
countries) legal, because of the problem of establishing its effect on
the dependability of the system.


Yup, understood, and I recognise the systemic difficulties in attempting to answer that question.

We can affect dependability in a multitude of ways, though.

One anti-pattern I've grown a bit tired of is people choosing a micro-kernel instead of Linux because of the notional 'safety cert', and then having to implement tons of custom software in an attempt to match off-the-shelf Linux functionality or performance. When application of the standards leads to "develop new, from scratch" instead of using existing code which is widely used and known to be reliable, something is clearly weird imo.




(There's been a lot of debate here
about the "proven in use" approach to assurance. Summarising that
deserves a separate thread but, in essence, there's insufficient
scientific basis for almost all such claims).





So please hang in here. We need people who are doing their best and
willing to engage with others who are doing the same.


Agreed. I think I'll need to get some air, first :-)

_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety

 


