[SystemSafety] Machine models under formal methods

David Crocker dcrocker at eschertech.com
Fri Nov 10 21:20:41 CET 2017


Todd,

Although a lot of what you say is true, the use of formal methods to
prove *some* aspects of correctness of processor hardware designs has
been mainstream for many years. Sadly, it's still generally the case
that *no* aspects of software are proven correct. This is despite the
fact that some aspects of software correctness (e.g. freedom from
exceptions or undefined behaviour) are not difficult to prove with the
right tools, and other aspects (e.g. functional correctness in
single-threaded systems) can be proven with a little more work.

David Crocker, Escher Technologies Ltd.
http://www.eschertech.com
Tel. +44 (0)20 8144 3265 or +44 (0)7977 211486

On 10/11/2017 14:39, Todd Carpenter wrote:
> Correct me if I'm wrong (I'm sure I am), but the "[SystemSafety]
> Bursting the anti formal methods bubble" discussion seems to assume
> that we have a clue about what's going on beneath our software. As a
> caveat, I agree with the premise that if you can't even assure your
> software using a simple machine model (instruction X copies the values
> stored in memory locations to/from registers, instruction Y performs
> ALU operations on registers, and memory values don't change unless you
> use instructions X or Y), then you can't hope to know what your
> software does on a complex machine.
>
> Unfortunately, we no longer have simple machines. I've heard claims
> from microkernel, RTOS, and secure OS vendors that they verify their
> code (using a variety of means, including formal proofs, analysis,
> etc.) against instruction set models. But those instruction sets are
> often backed by microcode, which was not developed under formal
> methods. Worse, on some processors, that microcode is mutable, and
> loaded at run-time. Furthermore, there are all sorts of other things
> going on under your simple ALU, including Turing complete MMUs,
> IOMMUs, and DMA engines.
>
> If you want to be entertained by what the vendors are doing to you,
> check out the following recent articles about Intel's Management
> Engine back door that they baked into their chipsets. Sorry, it's not
> a back door, it is a "management portal." Whatever. These portals are
> on endpoint machines. They even put support for this capability in
> ATOM SoCs. These management engines can arbitrarily access memory.
> (Oh, wait, Intel doesn't do evil. It's not arbitrary... Trust them.)
> Server class machines have the equivalent "Baseboard management
> controllers" which can also interact with memory. All the BMCs we've
> studied use old versions of BusyBox, and are riddled with
> opportunities. Think about how that impacts the argument that starts
> with, "assuming a solid foundation, we can formally show that this
> program does X."
>
> http://www.theregister.co.uk/2017/10/21/purism_cleanses_laptops_of_intel_management_engine/
> https://puri.sm/posts/deep-dive-into-intel-me-disablement/
> http://www.zdnet.com/article/minix-intels-hidden-in-chip-operating-system/
> http://www.theregister.co.uk/2017/11/09/chipzilla_come_closer_closer_listen_dump_ime/
>
>
> (My apologies to the academics: these are popular press articles
> rather than peer reviewed articles. Several of the articles point to
> the real thing.)
>
> So now let's assume that we develop processors, MMUs, caches, and
> inter-core networks that are well behaved. It's possible. What happens
> when that machine suffers Single Event Errors?
>
> Note that I am NOT arguing against formal methods in the slightest. I
> am merely pointing out that unless someone has a good model of the
> machine, it might not be sufficient.
>
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
