[SystemSafety] "Ripple20 vulnerabilities will haunt the IoT landscape for years to come"

Steve Tockey steve.tockey at construx.com
Wed Jul 1 16:36:42 CEST 2020



Quoting Boris Beizer:


"It only takes one failed test to show that the software doesn't work, but
even an infinite number of tests won't prove that it does"



Quoting Cem Kaner:

"If you think you can fully test a program without testing its response to
every possible input, fine. Give us your test cases. We can write a
program that will pass all of your tests but still fail spectacularly on
an input you missed. If we can do this deliberately, our contention is
that we or other programmers could do it accidentally"



Quoting Boris Beizer (again):

"Our objective must shift from an absolute proof to a suitably convincing
demonstration"



Alternatively, quoting me:

"Depending on testing alone, as the sole means of determining code
correctness, is a hopelessly lost cause"
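Kaner's point is easy to make concrete. The sketch below is my illustration, not code from Kaner or anyone on this thread; the target function (`abs`) and the helper name are arbitrary choices. Given any finite test suite, it builds a program that passes every listed test yet is wrong on every input the suite missed:

```python
# Hypothetical sketch of Kaner's construction: for ANY finite test suite
# covering abs(), produce a function that agrees with abs() on exactly
# those inputs and is wrong everywhere else.

def make_adversarial_abs(test_inputs):
    """Return a function that matches abs() only on test_inputs."""
    covered = set(test_inputs)

    def bad_abs(x):
        if x in covered:
            return abs(x)       # correct on every tested input
        return -abs(x) - 1      # deliberately wrong on every other input
    return bad_abs

# The "complete" test suite someone hands us:
suite = [-2, -1, 0, 1, 2]
f = make_adversarial_abs(suite)

assert all(f(x) == abs(x) for x in suite)   # passes all supplied tests
assert f(3) != abs(3)                       # fails on a missed input
```

If such a program can be constructed deliberately in a dozen lines, nothing prevents an ordinary coding mistake from producing the same pass-the-suite, fail-in-the-field behaviour by accident.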





-----Original Message-----
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
on behalf of Olwen Morgan <olwen at phaedsys.com>
Date: Wednesday, July 1, 2020 at 7:17 AM
To: Martyn Thomas <martyn at 72f.org>
Cc: "systemsafety at lists.techfak.uni-bielefeld.de"
<systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] "Ripple20 vulnerabilities will haunt the IoT
landscape for years to come"

Good question.

As far as I can see, all I can possibly know is that a (hopefully
well-designed) set of tests has failed to falsify the assertion that the
software meets its specification.

What else could one claim of any experiment?

Olwen


On 26/06/2020 21:46, Martyn Thomas wrote:
> I like to ask "what do you know after your software has passed your
> tests that you didn't know before - other than that it passes these
> specific tests run in this specific order today? And if there is
> anything, how do you know that?"
>
> I have never received an answer that addresses the question.
>
> Regards
>
> Martyn
>
>> On 26 Jun 2020, at 20:35, Olwen Morgan <olwen at phaedsys.com> wrote:
>>
>>
>> On 26/06/2020 19:36, paul_e.bennett at topmail.co.uk wrote:
>>>> A lot of software source code I have seen from others would
>>>> immediately fall into the rejected category, mainly through lack of
>>>> included documentation, very high MCC scores, and lack of a clear
>>>> enough interface.
>> Arghhh ... another perennial hobby-horse of mine!
>>
>> Why do so many software engineers never even think of using test metrics
>> to help them *minimise* the number of test cases they require?
>>
>> I usually try to design my own code so that every set of test cases
>>that attains 100% boundary value coverage also attains 100% simple path
>>coverage. It means that you have only the number of simple paths you
>>need to make the relevant logical distinctions among the input
>>conditions (easy to achieve in functional languages and, alas, easier
>>still to fail to achieve in imperative languages).
>>
>> But when I suggest this to other software "engineers", they usually ask
>>me what "boundary value coverage" and "simple path" mean. ...
>>
>>
>> ... and they wonder why I fantasise about their suffering long and
>>excruciating deaths ... ?
>>
>>
>> Brooding in dark, technostalinist hyperbole,
>>
>> Olwen
>>
>>
>>
>>
>>
>>
>>
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
Manage your subscription:
https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety


