[SystemSafety] Bursting the anti formal methods bubble

Steve Tockey Steve.Tockey at construx.com
Fri Oct 27 20:34:43 CEST 2017


Michael J. Pont wrote:

“Will FMs help such small automotive teams?”

I would argue that some amount of at least semi-formal methods will help. However, no change in common practice will happen unless it can be shown that there is a business benefit to an increase in formality, like:

*) a reduction in development project cost and/or schedule
*) a reduction in defects delivered (e.g., ability to reduce or avoid product recall)
*) a reduction in product lifetime maintenance costs

Without a demonstration of benefit to the business, you’re unlikely to get an organization to adopt whatever you are selling.



“If the development process for such teams is to become more formal, how should they get started?”

I would argue that there are two very powerful first steps:

*) Adopt semi-formal modeling of requirements and design, like using UML state charts
*) Adopt Design-by-Contract as part of standard practice (a minimal sketch follows below)
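By way of illustration, here is a minimal sketch of what Design-by-Contract can look like in plain C. The function and its contract are invented for this example; the preconditions and postcondition are stated with assert(), which is checked at run time during testing:

    #include <assert.h>
    #include <stddef.h>

    /* Contract: the caller guarantees buf is non-NULL and len > 0
       (preconditions); the function guarantees the returned index
       never exceeds len (postcondition). */
    size_t first_zero(const unsigned char *buf, size_t len)
    {
        assert(buf != NULL);   /* precondition */
        assert(len > 0);       /* precondition */

        size_t i = 0;
        while (i < len && buf[i] != 0u)
            i++;

        assert(i <= len);      /* postcondition: i == len if no zero found */
        return i;
    }

Even this modest step makes interface obligations explicit and machine-checkable; languages and tools with native contract support (Eiffel, SPARK, ACSL annotations for C) take the same idea further.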



“How much ‘FM’ is enough to protect the company if the product goes wrong and they end up in court (a risk that you raised in a recent post)?”

I’m not a lawyer; however, I have been involved in some expert-witness work in a couple of software cases. At least in the US, companies can avoid or reduce their legal liability as long as they can show that they used “generally accepted professional practices”. Being able to show that you did more than just generally accepted practice, by using FMs, can only help your case.



— steve



From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of "Michael J. Pont" <M.Pont at SafeTTy.net>
Organization: SafeTTy Systems Ltd
Reply-To: "M.Pont at SafeTTy.net" <M.Pont at SafeTTy.net>
Date: Friday, October 27, 2017 at 12:01 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de" <systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Bursting the anti formal methods bubble

Martyn,

Thank you for your reply.

I am happy to accept that cybersecurity represents a significant challenge for developers of a wide range of modern embedded systems.

I see another key challenge (it is not unrelated).

In my experience, key parts of the automotive supply chain involve companies much smaller than Bosch or Siemens.  The team developing such embedded systems may involve ten people or fewer (covering both hardware and software).  Companies with teams of this size are building safety-critical components (such as sensors) that will be used in highly autonomous vehicles in the next few years.

Such organisations typically work in ‘C’ and target small microcontrollers (something like a ‘Cortex-M4’).  They will often (in my experience) struggle to get to grips with ISO 26262.  They have a lot of work to do, and often produce products with small profit margins (around 2% is typical for high-volume automotive components).

Will FMs help such small automotive teams?

If the development process for such teams is to become more formal, how should they get started?

How much ‘FM’ is enough to protect the company if the product goes wrong and they end up in court (a risk that you raised in a recent post)?

In my view, this isn’t just an automotive problem.  For example, I also work with small teams (often at the heart of organisations with several hundred employees) developing safety-critical industrial systems in compliance with IEC 61508.   The challenges here are very similar.

Overall, if – as I believe you have argued – FMs represent ‘state of the art’, then we either need to provide FM tools and clearly defined processes that can be shown to work effectively with very small teams, or we need to restructure the automotive supply chain (and probably the supply chains in other safety-related sectors too) …

Michael.

Michael J. Pont, PhD
SafeTTy Systems Ltd

From: systemsafety [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of Martyn Thomas
Sent: 26 October 2017 16:28
To: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Bursting the anti formal methods bubble


Michael J Pont and C Michael Holloway,

Thank you for your constructive comments. I would like to try to build some progress from them.

Firstly, let me explain that I understand the difficulties of introducing formal methods into an existing commercial development process. Readers on this list may not know that I have never been an academic: I have worked in commercial software development and support since 1969. The company, Praxis, that I founded in 1983 was only 40 people strong when we first started using VDM, and our Critical Systems Division was fewer than 60 people when we won the development of the air traffic management system CDIS and the navy helicopter landing system SHOLIS, and bet the future of the company on the success of these projects using Z and SPARK. More recently I chaired the steering committee for a large EU project, DEPLOY, that introduced the Event-B methods and Rodin toolset into Siemens Transportation, Bosch, Space Systems Finland, SAP and elsewhere, and studied the difficulties encountered and the solutions found. I co-edited the resulting book <https://link.springer.com/book/10.1007%2F978-3-642-33170-1> and contributed a chapter on "Introducing Formal Methods into Existing Industrial Practices". I also worked on Daniel Jackson's study for the National Academy of Sciences, Software for Dependable Systems: Sufficient Evidence? (2007), and co-edited the final report <https://www.nap.edu/catalog/11923/software-for-dependable-systems-sufficient-evidence>. That study consulted a lot of practitioners.

So I'm not ignorant of the limitations and difficulties. Most systems contain legacy components; some parts of most developments cannot be fully formalised; we don't have a formal specification of the behaviour of people, or of much of the real world in which we operate. A very small fraction of the required proofs will have to be resolved by inspection because of limitations in the prover or model checker. Training is expensive and takes time. Many programmers are unfamiliar with formal logic or operations on sets and states. These and many other things are real barriers to change.

But I have seen from experience that the use of formal methods can be practical and cost-effective, and that, despite these and other limitations, introducing FMs dramatically reduces software defects and provides far stronger evidence for assurance than can be achieved in any other way. I believe that there are strong empirical reasons why this would be the case.

We strive to be engineers, and engineering involves making the right decisions and the right compromises. As an engineer and a businessman, I have learnt the value of increased formality.

If we are to make constructive progress, I would like us to agree two things:

  1.   That the cybersecurity threat is a game-changer. When the only failures that had to concern us were the randomly occurring failures triggered by combinations of events in our operating environment, we were able to make some important assumptions that simplified the task of assuring fitness-for-purpose. We could assume that failures in independent components would occur independently (many safety cases depend on this assumption). We could assume that a failure that occurred randomly in one car in a fleet of ten thousand would not suddenly occur simultaneously in every car in the fleet. We could assume that unlikely combinations of events remained unlikely. These and other assumptions are not valid if our systems may face well-resourced and technically sophisticated cyber-attack. And the attacker could be a low-level criminal or even a script kiddie, in a world where the agencies of nation states seek out vulnerabilities that they can exploit, weaponise them with scripts, apps and user guides, and then have them stolen and dumped on the Internet (as happened with the EternalBlue exploit that was developed by the NSA, stolen by the Shadow Brokers, and used in the WannaCry and Petya/NotPetya ransomware attacks). It would be complacent to rely on an assumption that no-one has both the motive and the means to mount a successful cyber-attack.
  2.  That the software development methods that are widely used by the industries that develop safety-critical software cannot provide adequate assurance against such attacks, and that therefore some way must be found to move from assurance that is largely based on testing towards assurance that is substantially supported by automated analysis of our specifications, designs and code.

If we can agree this starting point then we can move to discussing the incremental steps that can improve our processes and our products, step by step, person by person, component by component, subsystem by subsystem and interface by interface. It will be a long journey and we shall need to learn from each other how to overcome the many barriers, but I genuinely believe that the current situation is unsustainable and that our profession must come together to start making the necessary progress.

Regards

Martyn



On 26/10/2017 10:21, Michael J. Pont wrote:



Peter,



Let me re-phrase my questions (and address them to anyone on this list who cares to express a view).



Suppose Organisation X currently develops software for use in safety-related embedded systems in compliance with one or more international safety standards / guidelines (e.g. IEC 61508, ISO 26262 - or whatever).



Organisation X does not currently use formal methods / tools.



Organisation X wishes to improve confidence in the safety of the systems that employ their software.



What level of 'confidence improvement' could Organisation X expect to see through use of tools that allow static analysis of code pre- and post-conditions?   What evidence can be provided to support this?



[Here I'm assuming something like SPARK or Frama-C tools.]
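To make the question concrete, the following is a minimal sketch of the kind of pre-/post-condition annotation such tools work on, here in Frama-C's ACSL; the function and its contract are invented for illustration:

    #include <stddef.h>

    /*@ requires n > 0;
      @ requires \valid_read(a + (0 .. n-1));
      @ ensures 0 <= \result < n;
      @ ensures \forall integer i; 0 <= i < n ==> a[i] <= a[\result];
      @*/
    size_t max_index(const int *a, size_t n)
    {
        size_t best = 0;
        size_t i = 1;
        /*@ loop invariant 1 <= i <= n;
          @ loop invariant 0 <= best < i;
          @ loop invariant \forall integer k; 0 <= k < i ==> a[k] <= a[best];
          @ loop assigns i, best;
          @ loop variant n - i;
          @*/
        while (i < n) {
            if (a[i] > a[best])
                best = i;
            i++;
        }
        return best;
    }

A prover plug-in (such as Frama-C's WP) then attempts to discharge the proof obligations showing that the code meets its contract for all inputs, which is the step that goes beyond testing.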



What other formal methods / tools could Organisation X consider (if any) if they want to reduce the number of defects in their code by a factor of 100 to 1000?  What evidence can be provided to support this level of defect reduction (from Organisation X's starting point)?



What level of FM support should Organisation X consider if they want to reduce the threat of civil lawsuits etc (raised in Martyn's original email)?



---



Some more general comments.



There are people on this list who are formal-method 'fans' (some would say 'fundamentalists').



In some cases, I think such FM people 'need to get out more' - and spend time with the development teams in real (often fairly small) organisations that are struggling to create safety-related systems.   Such teams usually have very limited time and resources available: this does not mean that they are populated by 'bad people' who want to create poor-quality systems.



Shouting at people because they are not using formal methods doesn't really help ...   Offering suggestions about processes / tools that may help organisations to become 'more formal' would - in my view - be more productive.



Simply my take (others may well see the world in a different way).



Michael.



---



Michael,



On 2017-10-26 09:05, Michael J. Pont wrote:

What is your baseline - what do you mean by “typical industrial software methods”?  Do you – for example - mean products developed in compliance with IEC 61508 (e.g. ‘SIL 3’)?  ISO 26262 (e.g. ‘ASIL D’)?



In https://www-users.cs.york.ac.uk/tpk/nucfuture.pdf, John McDermid and Tim Kelly talk about typical fault densities. They suggest that 1 error per KLOC is considered to be "world-class" for some areas of safety-critical systems. The C130J software had about 23 errors per KLOC and 1.4 safety-critical errors per KLOC.



In contrast, Praxis, now Altran UK, has instrumented its code quality for a long time and has achieved fault densities as low as 1 per 25 KLOC.
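For scale: 23 faults per KLOC against 1 fault per 25 KLOC is a ratio of 23 × 25 = 575, i.e. between two and three orders of magnitude.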



What do you mean by FMs?   Would use of SPARK Pro qualify?  Use of Frama-C?  Something else?



The answer is trivially no, because you are asking about apples and your examples are oranges.



SPARK Pro and Frama-C are program development environments (PDEs). An FM is first and foremost a method. Both SPARK Pro and Frama-C use FMs: for example, SPARK Pro uses Bergeretti-Carré information flow analysis integrally, and the SPARK Examiner uses Hoare Logic for static analysis, as well as other methods.
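For readers who have not met the notation: a Hoare triple {P} S {Q} asserts that if precondition P holds before statement S executes, then postcondition Q holds afterwards. A minimal example:

    { x <= 10 }   x := x + 1   { x <= 11 }

Tools in this family propagate such assertions through the code and generate proof obligations that a prover (or, failing that, a human) must discharge.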



If you can provide evidence that use of a specific FM is capable of providing “two or three orders of magnitude fewer defects” in an IEC 61508 / ISO 26262 project, I’d be interested to see this.

That is a different ask from what Martyn said. Also, in a safety-critical context, just using one FM is not going to get you very far. Using Z to write your requirements specifications, for example, achieves little if you then simply give your Z requirements to designers who can't read Z. You need more stuff to support that one technique, and you need your techniques to be joined up.



But I can give you some obvious examples from the past. First, use of strong typing would have eliminated 80% of the networked-SW vulnerabilities listed by US CERT in the 1990s. Strong data typing is an FM introduced 49 years ago. Second, use of higher-level languages and their compilers led to much more reliable code than writing it all in assembler. HLL compilers are FMs (if you are surprised by that assertion, a quick revisit of Aho, Hopcroft and Ullman will make it clear).
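To illustrate the strong-typing point with a hypothetical C fragment: wrapping raw values in distinct struct types makes structurally identical values incompatible to the type checker, so accidental mixing is rejected at compile time:

    #include <stdint.h>

    /* Two wrapper types with identical layout; the C type checker
       nevertheless treats them as incompatible. */
    typedef struct { uint32_t raw; } sensor_id_t;
    typedef struct { uint32_t raw; } actuator_id_t;

    static void actuate(actuator_id_t id) { (void)id; /* ... */ }

    int main(void)
    {
        sensor_id_t   s = { 42u };
        actuator_id_t a = { 7u };
        (void)s;           /* s exists only to show the rejected call */

        actuate(a);        /* accepted */
        /* actuate(s); */  /* would be rejected at compile time */
        return 0;
    }

A strongly typed language makes this discipline the default rather than an idiom the programmer must remember to apply.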



PBL



Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de

_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
