[SystemSafety] Scalability of Formal Methods [was: Logic]

C. Michael Holloway c.m.holloway at nasa.gov
Thu Feb 27 14:20:21 CET 2014


In theory, one need only justify that

1. the requirements are sufficient to ensure safety
2. the object code that runs on the target machine satisfies the requirements 
and provides no unintended function
3. the target machine behaves properly

Refinement steps inserted between steps 1 and 2 are necessary only because 
of the limitations of current methods.
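
Put differently: if each artefact is modelled as a predicate on observable 
behaviours and each justification step as a refinement, the intermediate 
steps compose away, leaving only the end-to-end claim. A minimal sketch in 
Lean, with all names here hypothetical:

    variable {Behaviour : Type}

    -- "Refines impl spec": every behaviour of impl is allowed by spec.
    def Refines (impl spec : Behaviour → Prop) : Prop :=
      ∀ b, impl b → spec b

    -- Intermediate steps compose away: if the object code refines the
    -- design and the design refines the requirements, then the object
    -- code refines the requirements directly.
    theorem refines_trans {code design reqs : Behaviour → Prop}
        (h₁ : Refines code design) (h₂ : Refines design reqs) :
        Refines code reqs :=
      fun b hb => h₂ b (h₁ b hb)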

-- 
/*cMh*/

*C. Michael Holloway*, Senior Research Engineer
Safety Critical Avionics Systems Branch, Research Directorate
NASA Langley Research Center / MS 130 Hampton VA 23681-2199 USA
office phone: +1.757.864.1701 /often forwarded to/ +1.757.598.1707

The words in this message are mine alone; neither blame nor credit NASA for 
them.

On 2/27/14 8:03 AM, Peter Bishop wrote:
> I think Patrick has a point.
> To demonstrate safety you need to justify every step:
>
> requirements
> requirements to code
> code to object code
> runtime behaviour of the chip
>
> For example, there was a formal proof of requirements-to-code and 
> code-to-object-code correctness for the Sizewell B primary protection 
> system, and the Viper chip was an attempt to prove correctness at the 
> chip level.
>
> But even if the whole chain is proved correct, to justify safety we also 
> need to justify that the hardware/software system is robust to 
> malfunction (e.g. the single event upset that might have caused a 
> malfunction in the Toyota engine controller).
>
> Peter Bishop
> Adelard
>
> Patrick Graydon wrote:
>> On 26 Feb 2014, at 22:33, Peter Bernard Ladkin 
>> <ladkin at rvs.uni-bielefeld.de> wrote:
>>
>>> Fetzer is a red herring. His argument rests on an equivocation and admits 
>>> of a trivial counterexample. Proved-correct is not a unary predicate of 
>>> programs. What does "correct" mean, when speaking of programs? Answer: 
>>> that it does what you want. So you have to have some idea of what you 
>>> want, some specification. And then you may ask whether the program is 
>>> correct wrt this specification. Or, as we may say, whether the program 
>>> fulfils the specification. Correctness is (at least) a binary predicate, 
>>> of which one argument is a program and another a specification of some 
>>> kind, a saying-what-is-wanted.
>>> Fetzer claims to show that programs can never be proved correct. Here is 
>>> a simple, trivial counterexample to that claim. Consider the 
>>> specification "P OR (NOT P)", where "P" is any proposition you care to 
>>> express. Constructivists, who may doubt excluded middle, may rather 
>>> consider "NOT NOT (P OR (NOT P))". Call this specification S. Take any 
>>> program you like: Prog. There is a rigorous proof that Prog fulfils S. 
>>> For the classical statement, the proof is a couple lines long (think of 
>>> it as the two lines of the truth table); for the constructivist about 
>>> double that.
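
In Lean, for instance, both proofs might be written out as follows (the 
theorem names are illustrative); note that S never mentions Prog, which is 
why any program at all fulfils it:

    -- Classical case: a couple of lines.
    theorem classical_S (P : Prop) : P ∨ ¬P :=
      Classical.em P

    -- Constructive case: the double-negated form, about twice as long.
    theorem constructive_S (P : Prop) : ¬¬(P ∨ ¬P) :=
      fun h => h (Or.inr (fun hp => h (Or.inl hp)))
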
>>>
>>> See? Logic helps. Here, in showing why somebody who is arguing that 
>>> program verification should lose its research funding is canonically 
>>> mistaken.
>>>
>>> Not that the CACM would print such a refutation. The editor's comment 
>>> when I sent it in was that Fetzer's article was somewhat old (a couple of 
>>> years); did I have a more contemporary contribution to CACM? That's when 
>>> I realised the marketing people had taken over and I let my membership 
>>> lapse.
>>
>> My point in bringing up Fetzer's article is that what I actually want to 
>> know is what my software will cause real chips to do in the real world.  
>> What you are telling me is that you can prove what the software means 
>> according to idealised semantics.  I'm not even going to address that, 
>> because those are two different things ... which is what I understood 
>> Fetzer to be saying.  Unless you can somehow deductively prove the 
>> behaviour of a real, physical chip?
>>
>> I don't disagree that knowing what software will do according to idealised 
>> semantics is useful.  But safety depends on knowing what will happen in 
>> the real world on real chips.  And if you can't prove that, then you need 
>> to bring in a different form of evidence to complete the picture.  Some 
>> sort of inference based on observation.  Testing, in other words.
>>
>> As to Martyn's point, I care much less about whether responsibility for 
>> completing the evidence about what the software will cause real chips to 
>> do in the real world lies with a person who calls himself or herself a 
>> software engineer or any other job title (or combination of titles).  I 
>> care that the evidence *is* completed by someone.  I think showing that 
>> the software is correct according to some abstract semantics is a useful 
>> goal ... provided that somebody covers the other gaps. This begins with 
>> being clear that those gaps exist.  Since I have met several people 
>> who seemed to think that software proof was going to tell them what the 
>> real chips would do in all circumstances, I think a little reminder of 
>> what it does mean is useful from time to time.
>> --- Patrick
>>
>> Dr Patrick John Graydon
>> Postdoctoral Research Fellow
>> School of Innovation, Design, and Engineering (IDT)
>> Mälardalens Högskola (MDH), Västerås, Sweden
>>
>> _______________________________________________
>> The System Safety Mailing List
>> systemsafety at TechFak.Uni-Bielefeld.DE
>
