[SystemSafety] Scalability of Formal Methods [was: Logic]

Martyn Thomas martyn at thomas-associates.co.uk
Thu Feb 27 17:54:49 CET 2014


I agree with David.

In my opinion, an engineer wants their system to be fit for purpose and
chooses methods, tools and components that are expected to achieve
fitness for purpose. It's poor engineering to have a system fail in
testing, partly because that puts the budget and schedule at risk but
mainly because it reveals that the chosen methods, tools or components
have not delivered a system of the required quality, and that raises
questions about the quality of the development processes.

Finding a fault in testing also strongly suggests the presence of
several other faults that you have not found, because of the well-known
limitations of testing: tests can exercise only a tiny fraction of the
possible inputs and execution paths, so the faults that testing reveals
are rarely the only ones present.

So, as David says, the testing should be designed to disprove the claim
that the system is fit for purpose. And if the tests find a fault in the
system, they have also found a fault in the chosen methods, tools or
components, and that is where the correction should be made first. If you
just fix the fault without fixing the process that should have prevented
the error from getting through to testing, then you may compromise the
design or introduce further errors, and you have missed a chance to make
important improvements to your engineering.

When, after trying your hardest, you can't find any faults, it may mean
that your tests are not good enough, but it should also increase your
justified confidence in your methods, tools and components; ultimately,
that confidence will be the root of any justifiable confidence in the
system.

And, of course, your development process has improved - so the next
system will be quicker, cheaper and better!

Martyn

On 27/02/2014 15:39, David Crocker wrote:
> I've always considered that the role of testing is fundamentally
> different when software is developed using a correct-by-construction
> approach that formally proves that the code satisfies the software
> requirements.
>
> This approach uses a chain of tools (formal verification tools,
> compiler, linker, etc.) and hardware, each of which we believe produces
> output that has a defined relationship to the input. For example, the
> formal tool provides proof that the source code meets the requirements;
> the compiler produces object code that faithfully implements the
> semantics of the source code in the target instruction set; and the
> hardware faithfully implements that instruction set. We further believe
> that we have combined these tools and hardware in a process such that if
> they all function correctly, the target system must meet the requirements.
>
> The purpose of testing is to challenge those beliefs and ultimately
> establish confidence in them. If even one test fails, then there is a
> problem in one of the tools, or the hardware, or the process. So when a
> test fails, we don't just treat it as a bug that needs fixing; we must
> establish which tool or hardware or part of the process is at fault, and
> correct it so that it can't happen again.
>
> Of course, in practice some sorts of test may also expose deficiencies
> in the original specification.
>
> David Crocker, Escher Technologies Ltd.
> http://www.eschertech.com
> Tel. +44 (0)20 8144 3265 or +44 (0)7977 211486
>
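
To make concrete the step David describes, in which the formal tool
provides proof that the source code meets the requirements, here is a
minimal sketch of a formally specified function. It is illustrative
only: it assumes the ACSL contract language and a verifier such as
Frama-C/WP, not the particular tools used in David's process, and the
function max_of is an invented example.

    /* Illustrative sketch: the requirement ("return the maximum element
       of a non-empty array") is stated as a machine-checkable contract. */
    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        ensures \forall integer j; 0 <= j < n ==> \result >= a[j];
        ensures \exists integer j; 0 <= j < n && \result == a[j];
    */
    int max_of(const int *a, int n)
    {
        int max = a[0];
        int i;
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> max >= a[k];
            loop invariant \exists integer k; 0 <= k < i && max == a[k];
            loop assigns i, max;
            loop variant n - i;
        */
        for (i = 1; i < n; i++) {
            if (a[i] > max)
                max = a[i];
        }
        return max;
    }

If the verifier discharges the proof obligations, the contract is known
to hold for every possible input, not just the tested ones; a later test
failure then points at the compiler, the hardware, the process, or the
written requirement itself, which is exactly the diagnostic stance
David describes.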


