[SystemSafety] Fwd: NYTimes: The Next Accident Awaits

Nancy Leveson leveson.nancy8 at gmail.com
Tue Feb 4 11:52:13 CET 2014


There are more alternatives than just checklists and "objectives," and
usually more than one may be used in combination, for example checklists
plus other methods. Here is how I characterized the alternatives in my
paper on safety cases:

*Types of Regulation*

     Certification methods differ greatly among industries and countries.
Approaches commonly used can be broken into two general types, which
determine the type of evidence used in the certification process:

   1. *Prescriptive*: Standards or guidelines are provided for product
   features or development processes, and these are used to determine
   whether a system should be certified.

      1.1 *Product*: Specific design features are required, which may be
      (a) specific designs or (b) more general features such as fail-safe
      design or the use of protection systems.

         i. *Specific designs and features* provide a way to encode and
         pass on knowledge about past experience and lessons learned from
         past accidents. In some industries, practitioners are licensed
         based on their knowledge of the standards or codes of practice;
         electrical codes built on past experience with various designs
         are one example. For software, some completeness criteria for
         requirements have been identified [Leveson, 1995], as have
         specific design features [Leveson, 2012], based on common flaws
         that have led to many past accidents. Certification then becomes
         the responsibility of the licensed practitioner, who can lose
         that license by failing to follow the standards. Organizations
         may also be established that produce standards and provide
         certification, such as UL (Underwriters Laboratories) and its
         certification marks. It is difficult to fathom any argument that
         such encoded knowledge should not be included in a certification
         effort: requiring every project to reinvent this past experience
         would be prohibitively costly, and potentially incomplete and
         error-prone, without any clear advantage.

         ii. Different industries face different safety problems, and
         therefore the *general approach to safe design* may differ among
         them. For example, commercial aviation has created various types
         of fail-safe techniques to protect against component failures
         [Follensbee]. Nuclear power, because its problem is different,
         has traditionally relied on defense in depth and protection
         systems. For software, such features might include exception
         handling, checking for out-of-range variables, and designing to
         reduce the potential for human error when interacting with the
         software (a small illustrative sketch follows this list).
         Certification is usually based on inspection to confirm that the
         required design features are effective and implemented properly.



      1.2 *Process*: Here the standards specify the process to be used
      in producing the product or system, or in operating it (e.g.,
      maintenance or change procedures), rather than specific design
      features of the product or system itself. Assurance is based on
      whether the process was followed and, sometimes, on the quality of
      the process or its artifacts. The process requirements may specify:

         i. The general product or system *development processes and
         their artifacts*, such as requirements specifications, test
         plans, reviews, analyses to be performed, and documentation
         produced (e.g., DO-178B); or

         ii. The *process to be used in the safety engineering* of the
         system (e.g., MIL-STD-882). Only the safety engineering process
         is specified; the general development process is left to the
         individual system developers.



   2. *Performance-based or goal-setting*: These approaches focus on
   desired, measurable outcomes rather than on required product features
   or prescriptive processes, techniques, or procedures. The
   certification authority specifies a threshold of acceptable
   performance and, often but not always, a means for assuring that the
   threshold has been met. Basically, the standards set a goal, which may
   be a risk target, and it is usually up to the assurer to decide how to
   accomplish that goal. Performance-based regulation specifies defined
   results without specific direction regarding how those results are to
   be obtained. Examples are a requirement that an aircraft navigation
   system must be able to estimate its position to within a circle with a
   radius of 10 nautical miles with some specified probability, or the
   requirement for new aircraft in-trail procedure (ITP) equipment that
   "The likelihood that the ITP equipment provides undetected erroneous
   information about accuracy and integrity levels of own data shall be
   less than 1E-3 per flight hour" [RTCA, 2008]. (A second sketch below
   illustrates comparing such a quantitative target with observed
   experience.)
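
To make the software design features in item 1.1(ii) concrete, here is a
minimal, purely illustrative sketch of such defensive checks in Python.
The valid range, fallback value, and control law are invented for the
example and are not taken from any standard or real system.

    # Illustrative only: fail-safe handling in a software-controlled function.
    # The valid range and the fallback command below are hypothetical.

    SAFE_FALLBACK_DEG = 0.0            # known-safe commanded position
    VALID_RANGE_DEG = (-30.0, 30.0)    # plausible range for the sensor input

    def compute_command(reading_deg: float) -> float:
        # Placeholder control law for the example.
        return 0.5 * reading_deg

    def command_actuator(sensor_reading_deg: float) -> float:
        """Return a commanded position, reverting to a fail-safe value on bad input."""
        try:
            low, high = VALID_RANGE_DEG
            # Out-of-range check: treat an implausible input as a detected failure.
            if not (low <= sensor_reading_deg <= high):
                raise ValueError(f"reading {sensor_reading_deg} outside {VALID_RANGE_DEG}")
            return compute_command(sensor_reading_deg)
        except (ValueError, ArithmeticError):
            # Exception handling: fall back to a known-safe output rather than
            # passing an undefined value on to the actuator.
            return SAFE_FALLBACK_DEG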

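To illustrate the quantitative nature of the goal in item 2, here is a
second minimal sketch: comparing an observed event count against a
per-flight-hour target such as 1E-3. The flight hours and event count are
hypothetical, and in practice such a claim would normally rest on analysis
of the design as well as on service experience.

    # Illustrative only: checking an observed rate against a performance target.
    # The flight hours and event count below are hypothetical.

    TARGET_RATE_PER_FH = 1e-3   # "less than 1E-3 per flight hour"

    def observed_rate(events: int, flight_hours: float) -> float:
        """Point estimate of the per-flight-hour rate of the undesired event."""
        return events / flight_hours

    def rule_of_three_upper_bound(flight_hours: float) -> float:
        """Approximate 95% upper confidence bound on the rate when no events
        have been observed (the 'rule of three')."""
        return 3.0 / flight_hours

    if __name__ == "__main__":
        hours = 5_000.0   # hypothetical accumulated flight hours
        events = 0        # hypothetical count of undetected erroneous outputs
        print(f"point estimate: {observed_rate(events, hours):.2e} per flight hour")
        if events == 0:
            bound = rule_of_three_upper_bound(hours)
            print(f"~95% upper bound (rule of three): {bound:.2e} per flight hour")
            print("target met" if bound < TARGET_RATE_PER_FH else "target not yet shown")
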
Nancy


On Tue, Feb 4, 2014 at 5:40 AM, SPRIGGS, John J <John.SPRIGGS at nats.co.uk> wrote:

>  As one of "the grey beards" someone mentioned several posts ago, I first
> encountered goal-based regulation about 2001, when my Customers' Regulator
> said they were about to introduce it.  Customers had asked for Safety Cases
> long before that.  The argument would be along the lines of "You told us to
> do this, here is the evidence we did it; we also did this other stuff to
> address the hazards we identified along the way; therefore we consider it
> safe enough for the purpose you stated."
>
> So should the debate be about whether Regulation should be based on
> checklists or objectives, rather than whether you need a safety case or
> some other assurance document?
>
>
>
> John Spriggs
>
> Head of System Integrity @ NATS
>


-- 
Prof. Nancy Leveson
Aeronautics and Astronautics and Engineering Systems
MIT, Room 33-334
77 Massachusetts Ave.
Cambridge, MA 02142

Telephone: 617-258-0505
Email: leveson at mit.edu
URL: http://sunnyday.mit.edu