[SystemSafety] Correctness by Construction - tortoises all the way down

SPRIGGS, John J John.SPRIGGS at nats.co.uk
Fri Jul 17 17:22:20 CEST 2020


... and if you want to follow up on Les's foray into story theory, I recommend David Baboulene's story book http://www.amazon.co.uk/gp/product/0955708923

I proofread it, but do not get royalties...

Is proofreading a sub-process of Construction by Correction?

From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> On Behalf Of Les Chambers
Sent: 16 July 2020 02:23
To: 'Michael Jackson' <jacksonma at acm.org>
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Correctness by Construction - tortoises all the way down

Michael
Thank you for initiating this conversation. Somewhat mischievously, I
suspect. But it's good mischief. Achieving correctness is essential to any
successful automated system.

"the limits of correctness" is an example of the problem of infinite regress
where the truth of proposition P1 requires the support of proposition P2,
the truth of proposition P2 requires the support of proposition P3, and so
on, ad infinitum.

It is well known in systems engineering that correctness means compliance
with some pre-existing specification (read: proposition P). Most projects
have a long chain of them - concept of operations, system requirements,
software requirements, software system architecture, detailed design ...
correctness of the system depends on each of these artefacts being
complete, correct, unambiguous and verifiable. The verifiability of a
specification is a direct function of its formality, and formal
languages/models are the best we have at the moment. In my experience there
are degrees of formality. In my view you have reached complete formality
when you can press a button and generate code from the formal
specification. This is true of the finite state engine: it can be both
understood by a human being and compiled into an executable.
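
To make the point concrete, here is a minimal sketch in Python (a notation
of convenience, not anything from this thread) of a state engine expressed
as a transition table that a human can review and a machine can execute
directly. The states and events are hypothetical, invented for illustration:

    # A state machine captured as data: the table a reviewer reads is
    # the table the program executes, so spec and code cannot drift apart.
    TRANSITIONS = {
        ("idle", "start_cmd"): "running",
        ("running", "stop_cmd"): "idle",
        ("running", "fault_detected"): "tripped",
        ("tripped", "reset_cmd"): "idle",
    }

    def step(state, event):
        # An undefined (state, event) pair is a specification gap,
        # surfaced immediately rather than silently ignored.
        if (state, event) not in TRANSITIONS:
            raise ValueError("unspecified transition: %s on %s" % (state, event))
        return TRANSITIONS[(state, event)]

    # Example run: idle -> running -> tripped -> idle
    state = "idle"
    for event in ["start_cmd", "fault_detected", "reset_cmd"]:
        state = step(state, event)
    assert state == "idle"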

The core of our problem is that this proposition chain is not infinite. It
ends at the real world - a phenomenon that tech actors find difficult to
formalise. Don't feel bad; humanity has struggled with this problem for
centuries.
I cite an 1854 transcript of remarks by preacher Joseph Frederick Berg
addressed to Joseph Barker:
"My opponent's reasoning reminds me of the heathen, who, being asked on what
the world stood, replied, "On a tortoise." But on what does the tortoise
stand? "On another tortoise." With Mr. Barker, too, there are tortoises all
the way down. "

With requirements we have reached "the limits of correctness" on the back of
the root tortoise. Where to from here?

The beginnings of a solution to "How can software developers reason
reliably about the physical world?" lie in a deeper study of how humans
engage with metaphor. (Definition: a metaphor describes this in terms of
that, where this is new and that is familiar - Robert Frost.)

Marooned on the back of your root tortoise, it's instructive to recast the
problem statement.
"How can software developers reason reliably about the physical world?"
becomes "How can software developers help their users reason reliably
about the physical world?" In my experience a large proportion of
ambiguities, errors and omissions arise from users who don't know what they
want at the level of detail required to build an automated system.
Technologists fill in the gaps, sometimes erroneously. In this way we can
miss the fundamental behaviours of the system that justify its existence.

The solution lies in shared metaphors: concepts understood by both the
non-technical subject matter expert and the technologist that provide a
framework for discourse and improve the precision and completeness of the
specification. A good metaphor helps people visualise a problem, sparking
inspiration as they wander through its n-dimensional space. Good metaphors
also colonise the subconscious, which proceeds to work on the problem while
you're asleep. The truest statement I have ever heard is, "they never
remember what you said, only what they were visualising while you were
talking." So give them something to imagine.

The state engine is a good example. With a minimal amount of training I have
taught plant operators without a high school education the basic concepts of
the finite state model. This metaphor then helped us eliminate the
ambiguities, errors and omissions that typically occur in rambling free text.

But the state engine is not a solution to everything (Steve Tockey breathes
a sigh of relief).
There is, in fact, a higher-order metaphor that does model real life,
facilitating informed reflection on the core functionality required. The
fascinating thing about this metaphor is that knowledge of it is tacit in
all humans, not explicit. They can't tell you why they understand it; they
just know truth when they see it (heard that before?). Applied thoughtfully,
this metaphor can not only govern the development of early concepts but also
inform the way a project is delivered. And if your project does not follow
its dictates, the probability of failure is high.

It is, of course, the monomyth: the overarching pattern that describes the
underlying architecture of most stories you have ever heard and most movies
you have ever watched.

Guys, if you are still with me, please suspend judgement for a few more
lines. What follows may explain why Microsoft trains all its people in story
theory.

To engage with this metaphor you need to accept that embarking on a complex
software and electronic systems project casts you as a character in a story
that will, whether you like it or not, unfold as neatly described by the
monomythic pattern. I've developed these ideas further here:
https://www.specnative.com/story/Story-Tutorial-V4.0.pdf

Here is one use case:
All stories feature a hero who embarks on a quest for some elixir that will
solve a problem in their normal world. The need for the quest becomes
apparent when their normal world is in some way thrown out of balance by
forces outside their control. The elixir is expected to restore that
balance. The hero is usually aware, either consciously or unconsciously, of
the imbalance but refuses to embark on the quest until forced by the
occurrence of some horrendous inciting incident. Example: Luke Skywalker's
aunt and uncle are murdered by Imperial stormtroopers, causing him to take
Obi-Wan Kenobi's advice and become a Jedi Knight.

So in your first meeting with the client, ask them:
You've been living in a normal, comfortable world; what has happened lately
to throw it out of balance?
Have there been any horrendous incidents that have convinced you that
something has to be done?

Another use case, one that speaks to a productive attitude in a requirements
engineer:
A story never rings true to an audience unless they know who the hero is,
they know what the hero wants, and they want the hero to have it.
Empathy, guys, empathy.

Story theory gives you a form; it is not a formula. Understanding the story
pattern will not help you write a bestseller (or build the perfect system);
it will, however, help you identify what is missing in your candidate
bestseller. In contrast, a formula lets you push the button and generate
code.

So Michael I'll leave it with you. Hope this helps.

Cheers
Les

-----Original Message-----
From: systemsafety
[mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
Michael Jackson
Sent: Thursday, July 16, 2020 1:15 AM
To: David Crocker; Olwen Morgan; Brent Kimberley; Dewi Daniels;
njt at tudorassoc.com; Peter Bernard Ladkin; Martyn Thomas
Cc: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Correctness by Construction

Thank you all for your emails on this topic. I have quoted below what I
think are the most relevant parts. My own comments follow the double dashed
line after the last quote.

----------------------

> On 14 Jul 2020, at 20:20, David Crocker <dcrocker at eschertech.com> wrote:
>
> On 14/07/2020 17:06, Brent Kimberley wrote:
>> >> how are the software developers to reason reliably about the physical
>> >> problem world where the important requirements are located and defined,
>> >> and will---or will not---be satisfied?
>>
>> An automated World view inquiry framework? epistemology automation? ;)
> The way I suggested doing that when using Perfect Developer to develop and
> (to some extent) verify the software requirements is to include a model of
> the external physical system in the requirements specification. It is then
> possible to reason and construct proofs about the combined behaviour of the
> software + physical system, including the effects of external inputs. This
> approach is in principle applicable to other tool-supported formal
> specification languages, for example Z.
>
> Although this is conceptually simple, the problem lies in constructing a
> model of the physical system and possible external inputs that is
> sufficiently accurate and complete to make the proofs meaningful and useful.
>
> Cheers
>
> David Crocker, Escher Technologies Ltd.

----------------------

On 15 Jul 2020, at 04.21, Brent Kimberley wrote:

Hi Michael.
Perhaps I misinterpreted the question. I thought the question was: how can
software developers reason reliably about the physical world?

My response was: write more code. ;)
More precisely, have the software develop inferences WRT sensor / actuator /
FRU / logic / bus failure, and train operators. ;)

For example, write the requested logic plus an engine that progressively
refines its world view.


For example, for a fly-by-wire or environmental control system: there may be
certain assumptions about the physical sensors, actuators, structural
members, buses, DAQs, energy sources, clocks and FRUs by which the logic
engages with the physical world. The epistemology engine would continuously
update its world-view assumptions and inform an ontology layer - used by
the requested software logic - to interact with the physical world.

Take four engines delivering power (and data) to four propulsion units over
a transmission grid. The epistemology engine identifies events such as
faults, updating the ontology model, and the requested software logic,
using the ontology layer, could control the available propulsion units -
routing power (and data) around faults - in a way that ideally preserves
assets.
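
As a loose illustration of this two-layer idea, here is a sketch in Python;
the unit names, the fault threshold and the belief-revision rule are all
invented for the example:

    class WorldView:
        """Epistemology layer: beliefs about which units are healthy,
        revised as evidence arrives."""
        def __init__(self, units):
            self.healthy = {u: True for u in units}

        def observe(self, unit, commanded, measured):
            # Belief revision: a large disagreement between commanded
            # and measured output marks the unit as suspect.
            if abs(commanded - measured) > 0.2 * max(abs(commanded), 1.0):
                self.healthy[unit] = False

        def available(self):
            # Ontology layer: the world as the control logic sees it.
            return [u for u, ok in self.healthy.items() if ok]

    def distribute_power(view, demand):
        # The "requested logic": route demand across the units believed
        # healthy, i.e. around faults.
        units = view.available()
        if not units:
            raise RuntimeError("no propulsion units believed available")
        return {u: demand / len(units) for u in units}

    view = WorldView(["engine1", "engine2", "engine3", "engine4"])
    view.observe("engine3", commanded=100.0, measured=12.0)  # a fault event
    print(distribute_power(view, 300.0))  # the other three share the load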

-----------------------

On 15 Jul 2020, at 21.10, Olwen Morgan wrote:

On 14/07/2020 20:20, David Crocker wrote:
<snip>
>
> include a model of the external physical system in the requirements
> specification. It is then possible to reason and construct proofs about the
> combined behaviour of the software + physical system, including the effects
> of external inputs. This approach is in principle applicable to other
> tool-supported formal specification languages, for example Z.
>
> Although this is conceptually simple, the problem lies in constructing a
> model of the physical system and possible external inputs that is
> sufficiently accurate and complete to make the proofs meaningful and useful.
>
Just so. And in my experience an even greater problem lies in the difficulty
of persuading people that it is, in some cases, necessary. I once walked
out of, and blew the whistle on, a train safety project where fellow
engineers ignored me on this issue.


Olwen

----------------------
----------------------

Some comments:

1. One way to think about a bipartite cyber-physical system is to regard
the interface between the processor hardware and the physical problem world
as providing a (bidirectional and hugely complex) API allowing the processor
to monitor and control phenomena of the problem world. This API needs a
full, maximally reliable, model of the physical domains and phenomena of the
problem world, showing the causal chains terminated at the actuators and
sensors and involving phenomena that are not directly connected to the
interface.
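
As a toy illustration of that view (a Python sketch; the valve / flow /
level chain is invented), only the actuator and sensor sit at the API,
while the phenomenon of interest lies behind a modelled causal chain:

    class ProblemWorldModel:
        # Only command_valve() and read_level_sensor() are interface
        # phenomena. The tank level itself is a world phenomenon,
        # reachable only through the causal chain
        # valve command -> flow -> level -> sensor reading.
        def __init__(self):
            self.valve_open = False
            self.level = 50.0              # true state: not directly visible

        def command_valve(self, open_):    # interface: actuator
            self.valve_open = open_

        def read_level_sensor(self):       # interface: sensor
            return self.level + 0.5        # modelled sensor bias

        def evolve(self):                  # world dynamics, behind the API
            self.level += 2.0 if self.valve_open else -1.0

    world = ProblemWorldModel()
    world.command_valve(True)
    world.evolve()
    assert world.read_level_sensor() == 52.5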

2. As David says: "Although this is conceptually simple, the problem lies
in constructing a model of the physical system and possible external inputs
that is sufficiently accurate and complete to make the proofs meaningful and
useful." Yes. But this problem is exactly parallel to the problem of
describing the semantics of the programming language---both the source code
and the binary code executed by the processor. It can't be relegated to an
optional extra: without a solution the software developer is programming a
machine---the physical problem world---for which there is no adequate
semantic definition.
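
A toy version of what checking the combined software + physical-system
behaviour can look like (a simulation sweep in Python rather than a proof;
the tank dynamics and thresholds are invented):

    HIGH, LOW = 80.0, 20.0        # hypothetical pump-on / pump-off levels
    INFLOW, OUTFLOW = 3.0, 5.0    # per time step

    def controller(level, pump_on):
        # The software: hysteresis, start the pump at HIGH, stop at LOW.
        if level >= HIGH:
            return True
        if level <= LOW:
            return False
        return pump_on

    def plant(level, pump_on):
        # The (crude) model of the physical system: one time step.
        return level + INFLOW - (OUTFLOW if pump_on else 0.0)

    # Check the combined behaviour over a sweep of initial levels:
    # the safety property is that the tank never reaches 100.
    for initial in range(100):
        level, pump = float(initial), False
        for _ in range(1000):
            pump = controller(level, pump)
            level = plant(level, pump)
            assert level < 100.0, "overflow from initial level %d" % initial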

3. For a realistic system, the problem world is a large and complex
system, and modelling demands some major help from structure. One approach
to the problem is to structure the physical model according to the structure
of the desired system behaviour. (This is what I aim at in my current work.)
This approach makes sense because the physical world model supports the
behaviour exactly as the processor supports the software execution. Each
'constituent behaviour' (my terminology) relies on certain properties of the
problem world expressed in an associated model.
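
One way to render that structure in code (a Python sketch; the behaviour
name, world model and properties are invented) is to package each
constituent behaviour with the world properties it relies on:

    from dataclasses import dataclass, field

    @dataclass
    class ConstituentBehaviour:
        name: str
        # The world properties this behaviour relies on, each checkable
        # against the current world model.
        assumptions: list = field(default_factory=list)

        def assured(self, world):
            return all(prop(world) for prop in self.assumptions)

    # Hypothetical world-model fragment and behaviour.
    world = {"brake_line_pressure": 6.2, "door_sensors_ok": True}

    normal_running = ConstituentBehaviour(
        "normal running",
        assumptions=[
            lambda w: w["brake_line_pressure"] > 5.0,
            lambda w: w["door_sensors_ok"],
        ],
    )

    # The behaviour may be relied on only while its world model holds.
    assert normal_running.assured(world)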

4. Perhaps the structuring of the physical model according to the
structure of the system behaviour is also a characteristic of David's
suggestion: "... to include a model of the external physical system in the
requirements specification." That depends, of course, on the structure,
format, and content of the Requirements Specification. I am keen to hear
more about Requirements from anyone here who deals with them, either as a
consumer or a producer.

5. A fundamental difficulty for formal models of the physical world is
that it is not itself a formal system. Brian Cantwell Smith explained the
difficulty in his 1986 paper "The Limits of Correctness", pointing out the
absence of a theory of informal-to-formal modelling. I think that a
discipline, rather than a theory, is needed (and I aim at this too in my
current work).

-- Michael

www.theworldandthemachine.com


_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
