[SystemSafety] Misunderstanding of the nature of mathematics

Olwen Morgan olwen at phaedsys.com
Wed Jul 1 21:10:32 CEST 2020


You fail to grasp my point.

Mathematics is not some absolute in a world of forms. Its use in CbyC 
software development cannot entirely obviate testing since its efficacy 
rests on how closely it describes the behaviour of the systems with 
which it deals and on the assumptions it makes about them.

You are quite right to make the observation about a typed "2" being 
taken for a "5". But that merely illustrates the magnitude of the 
assumptions modulo which proofs provide the claimed level of
assurance. One is relying on, among other things:

(1) The requirements specification permitting the generation of 
acceptance tests that give a suitably rigorous exercise of functional 
behaviours specified in the requirements.

(2) Corresponding conditions being met by both the architectural and 
detailed design specifications.

(3) A coding process that is designed faithfully to reflect the software 
unit specifications and the requirements of verification tools.

(4) The correctness of the compilers for all programming languages used.

(5) The soundness of the verifier's reasoning about what the compiler 
will do with the code. (Admittedly far less dubious for SPARK Ada
than, as far as I am aware, for any C verifier on any known subset
of C).

(6) The correctness of test coverage measurement tools.

... to mention only the most salient.

Yes, it is true that when we question things at this level of detail, we 
feel we are getting into a Gödelian regress (which we aren't - the
regress is finite even if it is large). But that does not alter the fact 
that CbyC rests critically on some pretty big assumptions about the tool 
chain. Soundly done, CbyC can reduce the risks of not unit-testing. It 
cannot eliminate them. If you get a test failure at the next level up, 
you still have to check backwards and accept that you may well find an 
error in a coded unit ... or somewhere in the tool chain.

As regards testing on iFacts, I worked on that project and was asked at 
one stage by one of the client's engineers to look into how well a 
certain routine had been tested. That particular routine tracked the 
movement of military aircraft across (non-convex) sectors of air space. 
My conclusion was that to give a greater degree of confidence that the 
testing was adequate, the system should internally partition the 
airspace into convex subsets - in this case triangular prisms, because 
movement across such subsets is easy to track and the tracking routine 
correspondingly easier to test. This upset the engineer who had 
specified the airspace model in Z because that model was as minimal as 
one could get. As far as I could see, it specified a basis for a
topology, which struck me as abstraction shooting itself in the foot.
Why weaken the model to something weaker than a topology when the
stereographic projection by which the airspace was defined already
gave an entire, coherent *geometry*? Abstracting away from an agreed,
well-defined geometry hardly demonstrates mathematical finesse.
Needless to say, there
was a bit of an atmosphere in the office for a while - not least because 
the engineer responsible for that bit of the Z spec was a Cambridge 
mathematics graduate (who IMO should have known *a lot* better). I was 
eventually told that my comments would be borne in mind if evidence 
should arise that the testing of the relevant routines had in fact been 
inadequate. NOBODY suggested that my concerns lacked a sound technical 
basis.
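
To illustrate why convexity helps testing, here is a toy sketch in C
(my own illustration for this message, not iFacts code - the names
and representation are entirely hypothetical): membership in a convex
cell reduces to "on the inner side of every bounding half-space", a
single bounded loop with trivially enumerable branch outcomes, and
tracking then reduces to locating the cell containing the current
position.

#include <stddef.h>

typedef struct { double x, y, z; } vec3;

/* A half-space: points p with dot(n, p) <= d are inside it. */
typedef struct { vec3 n; double d; } halfspace;

/* A convex cell (e.g. a triangular prism) is the intersection of a
   small, fixed number of half-spaces. */
typedef struct { const halfspace *faces; size_t n_faces; } convex_cell;

static double dot(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Inside iff inside every bounding half-space - easy to exercise
   face by face. */
static int point_in_cell(const convex_cell *c, vec3 p)
{
    for (size_t i = 0; i < c->n_faces; ++i)
        if (dot(c->faces[i].n, p) > c->faces[i].d)
            return 0;
    return 1;
}

/* Tracking across a convex partition reduces to finding which cell
   the current position lies in. */
static long locate(const convex_cell *cells, size_t n_cells, vec3 p)
{
    for (size_t i = 0; i < n_cells; ++i)
        if (point_in_cell(&cells[i], p))
            return (long)i;
    return -1; /* outside the modelled airspace */
}

A non-convex sector admits no comparably simple membership invariant,
which is precisely what made the original routine hard to exercise
convincingly.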

As regards Altran omitting unit tests, if I were advising a client on
whether that was advisable, my own view would be that it was not.
And the reason for this is simple. Errors are cheaper to correct the 
earlier you detect them. Not unit testing risks detecting errors in
units later than they would otherwise have been found. In taking
that view, I would be open to arguments about relative risk PROVIDED 
that I was supplied with relevant data, details of how the data had been 
obtained, specification of the statistical analyses used and convincing 
arguments as to the repeatability and reproducibility of the results 
that the statistical testing was claimed to support. In this, I think, I 
would be taking a view entirely consistent with sound scientific practice.

My responsibility on iFacts was to provide initial consulting on a
suitable subset of C and corresponding subset enforcement tools, and
later to examine all C checker tool outputs for each build myself to
ensure that they met the specified requirements. I ought to point
out, in that connection, that the software that paints the AT
controller's radar
display screen is written entirely in C using X libraries for which many 
deviations from the MISRA C subset were required (an average build 
required examination of 12,000 error messages to check that all observed 
deviations were permitted ones - though I got this down to 300 by 
supervising the development and use of a post-processing tool that put 
diagnostic suppression messages at contextually identified positions in 
the code, any redundant ones being detectable from unused suppression 
reports in the tool output). So much, then, for CbyC in iFacts. ONLY
THE ADA BITS WERE CbyC! True, the only bits in C were as
inconsequential as painting the radar display screens that the ATCs
looked at, but, if I apprehend you correctly, that was a minor
detail?
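
For a flavour of what that post-processing amounted to, here is an
illustrative sketch in C (written for this message, not the actual
tool; names and data layout are hypothetical): each checker
diagnostic is matched against an in-code suppression record for the
same file, line and rule; anything unmatched on either side is
flagged, and redundant suppressions therefore show up as "unused".

#include <stdio.h>
#include <string.h>
#include <stddef.h>

typedef struct {
    const char *file;
    int line;
    const char *rule;   /* e.g. a MISRA C rule identifier */
} diagnostic;

typedef struct {
    diagnostic where;
    int used;           /* set when a checker message matches it */
} suppression;

static int same(const diagnostic *a, const diagnostic *b)
{
    return a->line == b->line &&
           strcmp(a->file, b->file) == 0 &&
           strcmp(a->rule, b->rule) == 0;
}

/* Returns the number of diagnostics not covered by any suppression;
   marks every suppression that was actually needed as used. */
static size_t reconcile(const diagnostic *diags, size_t n_diags,
                        suppression *supps, size_t n_supps)
{
    size_t unmatched = 0;
    for (size_t i = 0; i < n_diags; ++i) {
        int covered = 0;
        for (size_t j = 0; j < n_supps; ++j) {
            if (same(&diags[i], &supps[j].where)) {
                supps[j].used = 1;
                covered = 1;
            }
        }
        if (!covered) {
            printf("UNSUPPRESSED: %s:%d %s\n",
                   diags[i].file, diags[i].line, diags[i].rule);
            ++unmatched;
        }
    }
    for (size_t j = 0; j < n_supps; ++j)
        if (!supps[j].used)
            printf("UNUSED SUPPRESSION: %s:%d %s\n",
                   supps[j].where.file, supps[j].where.line,
                   supps[j].where.rule);
    return unmatched;
}

The unused-suppression report is the part that makes redundant
deviations detectable, as described above.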

I confess to a degree of skepticism that results largely from my mixed 
experiences with C. I would like to see C verifiers used much more 
widely. To support efforts in that direction, some twelve years ago I
offered a meta-standard allowing C subsets to be formally defined
and tailored to verifier capabilities. Nobody has taken up the
(absurdly
simple) ideas that I presented. Notably, the outstandingly inept MISRA C 
working group has resolutely refused to adopt a rigorous specification 
even of a subset as weak as MISRA C - and I've been into that sorry 
story before on this list. (Their reason: it would "alienate MISRA
C's readership base" ... WTF?!)
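
To give a flavour of the kind of thing I mean - an illustrative
sketch only, not the meta-standard itself, with entirely hypothetical
rule names - the idea is that each subset constraint becomes a named,
machine-readable record stating which construct it restricts and how
conformance is demonstrated, so that a subset is a checkable artefact
tailored to named tools rather than prose in a guideline document:

/* Illustrative only - not any published standard's text. */

typedef enum {
    ENFORCED_BY_COMPILER,   /* rejected at translation time        */
    ENFORCED_BY_CHECKER,    /* flagged by a named static checker   */
    ENFORCED_BY_REVIEW      /* no tool support: manual review only */
} enforcement;

typedef struct {
    const char *id;          /* e.g. "SUBSET-01" (hypothetical)     */
    const char *construct;   /* language construct being restricted */
    const char *constraint;  /* the restriction, stated precisely   */
    enforcement how;         /* how conformance is demonstrated     */
    const char *tool;        /* named checker, or NULL for review   */
} subset_rule;

static const subset_rule example_rules[] = {
    { "SUBSET-01", "goto statement",
      "shall not appear", ENFORCED_BY_CHECKER, "hypothetical-checker" },
    { "SUBSET-02", "recursion",
      "call graph shall be acyclic", ENFORCED_BY_CHECKER,
      "hypothetical-checker" },
    { "SUBSET-03", "union type",
      "shall not be used for type punning", ENFORCED_BY_REVIEW, NULL },
};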

I'd expect SPARK tools relying on the SPARK language to get pretty good 
results. But SPARK developments are not the whole of software 
engineering. If unit-testing is found to be dispensable without risk in 
SPARK developments, great. Modulo my reservations above, I'd countenance 
giving it (heavily) qualified support - or at least an absence of
destructive opposition. Not so for C - nor for any poorly defined
language, with no safety/reliability subset and no verification tools. 
In suboptimal conditions, I would never dispense with unit testing and I 
question the wisdom of so doing even in the best conditions currently 
sustainable.

In the end it's all down to risk and numbers. Give me hard, *robustly 
reproducible* evidence that using CbyC makes unit-testing redundant and 
I'll consider it. But please do not expect me to be a pushover when it 
comes to examination of claims of reproducibility. I was trained to 
assess these things to UKAS standards - which are considerably more 
stringent than almost any others: an assessor can raise non-compliance
issues based not only on a tick-the-boxes compliance checklist but also 
on fitness-for-purpose in the assessor's considered technical opinion. 
When it comes to software engineering processes, I'm not at all easy to 
convince.


Olwen




On 01/07/2020 17:53, Peter Bernard Ladkin wrote:
>
> On 2020-07-01 18:25 , Olwen Morgan wrote:
>>
>> Sorry, Peter, but if I have a choice between believing you and believing Arnold, I'm afraid Arnold wins.
> Which makes my point better than I could have made it myself.
>
> Which bits of Newton do you believe? The stuff in Principia or the stuff about metals vegetating? Or
> gravity being caused by salniter? Why?
>
> PBL
>
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> Styelfy Bleibgsnd
> Tel+msg +49 (0)521 880 7319  www.rvs-bi.de
>
>
>
>
>
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety