[SystemSafety] Number Theorist Fears All Published Math Is Wrong

Derek M Jones derek at knosof.co.uk
Mon Nov 4 14:05:34 CET 2019


David,

> Yes, in fact from the machine architecture up to application level, this
> is DeepSpec project: https://deepspec.org/main

This new project looks interesting.  Let's see what they produce.

> No, nowadays CompCert has proofs from the text parser to the code
> generator!

It contains bits of code that have been involved in formal verification.

> proved. And this is not "soap powder advertising", this is science. See
> this paper https://hal.inria.fr/hal-01643290 for an overview, and

This paper is end-to-end soap powder advertising.

> reference [11] within this paper for the proofs linking the text LR(1)
> parser and C language semantics.

Ok, let's look at this paper:
http://gallium.inria.fr/~fpottier/publis/jourdan-leroy-pottier-validating-parsers.pdf

A common way of building a parser involves feeding a grammar into a tool
that generates some tables, which are used by a finite state machine to
decide whether some input conforms to the grammar (the state transitions
can also be used to trigger useful actions, such as building an abstract
syntax tree, but these actions have to be added manually).  The finite
state machine is one component among many in a parser.
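To make the table-driven idea concrete, here is a minimal sketch (nothing to do with CompCert's actual machinery): the "tool-generated" transition table is hand-written for the toy language a+b, i.e. one or more 'a' tokens followed by a single 'b'.

```python
# Transition table: (current state, input token) -> next state.
# In a real parser generator this table is produced from a grammar.
TABLE = {
    ("start", "a"): "seen_a",
    ("seen_a", "a"): "seen_a",
    ("seen_a", "b"): "accept",
}

def recognize(tokens):
    """Run the finite state machine over a token stream."""
    state = "start"
    for tok in tokens:
        state = TABLE.get((state, tok))
        if state is None:
            return False          # no transition defined: input rejected
    return state == "accept"

print(recognize("aab"))           # True
print(recognize("ba"))            # False
```

A full LR(1) parser also maintains a stack of states so it can handle nesting, but the table-plus-machine structure is the same.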

This paper describes some work done to try to verify the correct
behavior of the finite state machine.  It works by differential testing.
A finite state machine does its job (using tables generated from an
unverified grammar by an untrusted tool), and another tool (called
a validator; how formal methods people love pompous names) decides
whether it thinks the behavior of the finite state machine is correct.
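The validation idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual tooling: an untrusted recognizer produces a verdict, and a separately written checker confirms that verdict against an independent reference for the same toy language (a+b). All names here are illustrative.

```python
import re

def untrusted_recognizer(s):
    # Stand-in for a table-driven machine built by an untrusted generator.
    state = 0
    for ch in s:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            state = 2
        else:
            return False
    return state == 2

def validator(s, verdict):
    # Independent check of the same language via a regular expression;
    # flags any input on which the two implementations disagree.
    reference = re.fullmatch(r"a+b", s) is not None
    return verdict == reference

for s in ["aab", "ab", "ba", "aabb"]:
    assert validator(s, untrusted_recognizer(s)), f"disagreement on {s!r}"
```

The point of the arrangement is that the checker can be much simpler than the component it checks, so it is easier to trust (or to verify formally).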

The paper says:
"In summary, the approach to high-confidence parsing developed in
this paper..."
What is claimed is high confidence; there are no claims of verified or
guaranteed correctness.

So a piece of a parser has been involved in some arm-waving maths.

> And yes, some parts are not proved, mainly pre-processing, and testing
> is used there (again,see above paper for details).

None of it is proved, but some pieces of it have been involved in a
process that claims high confidence.

> We all know your skepticism regarding formal methods, and I agree with

My skepticism is aimed at many of the unsubstantiated claims being made.

> But they are industrial cases where formal methods are more efficient:

More efficient than what?

> If those companies are using formal methods, for some of them for more
> than 20 years, it is not due to "soap powder advertising" but due to
> real cost benefit over several projects.

Large companies are involved in a wide range of activities.
Please provide links to sources showing real cost benefit savings.

> To give just one example, safety critical data verification used for
> example in CBTC projects, goes down from 1 month to 1 or 2 days
> (compared to manual human review process), with rapid identification of
> faulty data and strong confidence brought to the verification process.

A cost benefit analysis needs to include the startup costs.
How much did it cost to create the tooling to automate the process?

> For me now the real question is: when is it cost effective to use formal
> methods?

This has always been the question.  Perhaps the answer is almost never.

-- 
Derek M. Jones           Software analysis
tel: +44 (0)1252 520667  blog:shape-of-code.coding-guidelines.com
