[SystemSafety] "Ripple20 vulnerabilities will haunt the IoT landscape for years to come"

Olwen Morgan olwen at phaedsys.com
Wed Jul 1 18:10:10 CEST 2020


Not so. There is always the possibility that the verification and 
testing tools contain errors.

If your experiment contradicts a hypothesis, you check *everything* - 
including the design and operation of your experimental tools, which 
include your verifier and test tools.

You use SPARK as an example, but what about users of C? Any C verifier 
depends on assumptions about the nature of implementation-defined 
and unspecified language features (of which C, alas, has many). Although 
you can determine the nature of such characteristics by experiment 
(using some very devious programming techniques), you cannot guarantee 
that the behaviour elicited by an unspecified usage is the same at all 
places at which it occurs. A perverse but conforming implementation 
could provide different behaviour at different places.
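
By way of illustration, here is a minimal sketch (C11; the probe and 
its names are my own invention, not taken from any tool) of both 
kinds of feature: an implementation-defined characteristic that you 
*can* pin down by experiment, and an unspecified behaviour that the 
standard does not oblige a compiler to resolve the same way at every 
occurrence:

    /* The right-shift result below is implementation-defined
       (C11 6.5.7); argument evaluation order is unspecified
       (C11 6.5.2.2). */
    #include <stdio.h>

    static int last;                   /* which argument ran last? */
    static int arg1(void) { last = 1; return 0; }
    static int arg2(void) { last = 2; return 0; }

    static void probe(int a, int b)
    {
        (void)a; (void)b;
        printf("argument evaluated last: arg%d\n", last);
    }

    int main(void)
    {
        /* Implementation-defined: arithmetic or logical shift? */
        printf("-1 >> 1 == %d (%s shift here)\n", -1 >> 1,
               (-1 >> 1) == -1 ? "arithmetic" : "logical");

        /* Unspecified: a conforming compiler may evaluate arg1()
           and arg2() in different orders at these two textually
           identical call sites. */
        probe(arg1(), arg2());
        probe(arg1(), arg2());
        return 0;
    }

The shift experiment settles the matter for a given implementation; 
the evaluation-order experiment settles nothing, because nothing 
obliges the second call site to behave like the first.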

I'll defer to those more familiar with the Ada RM, but I'd be very 
surprised if the language specification were entirely free from lacunae 
related to implementation-defined features that could, at least in 
principle, lead to similar problems. Does the RM tie down the compiler 
strictly enough to preclude different instances of 
implementation-defined constructs behaving differently in different 
places? Given the gung-ho approach of the designers of optimising 
compilers, I wouldn't bet on it. OK, you might attempt to switch off 
optimisations, but controlling which optimisations are applied is 
typically far from easy (at least in C compilers). Can Ada compilers 
truly claim to be any better? Could such a claim be convincingly 
substantiated by appeal to the provisions of the RM?
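
For what it's worth, here is a GCC-specific sketch of what "switching 
off optimisation" looks like in C (these knobs do exist in GCC; other 
compilers differ, which is rather the point):

    /* GCC's own manual cautions that the optimize attribute is
       intended for debugging rather than production use, and
       neither knob says "everything except transformation X". */
    #include <stdio.h>

    /* 1. Per function, via an attribute: */
    __attribute__((optimize("O0")))
    static int checksum_unopt(const unsigned char *p, int n)
    {
        int s = 0;
        while (n-- > 0)
            s += *p++;
        return s;
    }

    /* 2. Per region: every function defined after this pragma
       (including main below) is compiled at -O0. */
    #pragma GCC optimize ("O0")

    int main(void)
    {
        const unsigned char buf[] = { 1, 2, 3 };
        printf("%d\n", checksum_unopt(buf, 3));
        return 0;
    }

Even then you get only coarse optimisation levels, not a guarantee 
about which individual transformations run - which is why I doubt 
that such control could be substantiated by appeal to a language 
standard.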

Don't get me wrong. I'll gladly use the best verifiers and testing tools 
I can get my hands on. But I will not depart from viewing testing as an 
experiment that tries to falsify the correctness hypothesis. Once you 
adhere to that view, the results of testing always have the possibility 
of contradicting the results of verification. Any other stance would be 
like doing physics without ever checking your experimental apparatus 
when you get unexpected results. One can hardly base sound engineering 
on rejection of the scientific method.


Olwen




On 01/07/2020 16:27, Peter Bernard Ladkin wrote:
>
> On 2020-07-01 17:17 , Olwen Morgan wrote:
>> On 01/07/2020 16:11, Peter Bernard Ladkin wrote:
>>> <snip>
>>>
>>> ... If you are programming, it may be that you can avoid unit tests in favor of verification, ...
>> <snip>
>>
>>
>> ABSOLUTELY NOT!
> Absolutely so.
>
> If the programmer had been using CbyC methods/tools such as in SPARK Pro appropriately, then the
> program does indeed do what the specification says (modulo discharging proof obligations), which is
> the point of unit testing the various components. Thereby unit testing can be avoided if you use CbyC.
>
> Your example dialogue ensues when the spec is wrong. That is not what unit-testing modules is for.
> The dialogue would also not occur. The programmer would not contradict the tester; she would say "I
> am the wrong person to talk to about this; go talk to so-and-so along the corridor who developed the
> spec".
>
> PBL
>
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> Styelfy Bleibgsnd
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de