[SystemSafety] AI and the virtuous test Oracle - intelligence

Steve Tockey steve.tockey at construx.com
Fri Jun 30 20:34:29 CEST 2023


Les,
Thanks for the thoughtful reply.

As always, it depends on definitions of terms. I might counter Max Tegmark’s definition of intelligence by asking, for some arbitrarily complex computing system:

“Is the intelligence an inherent part of the computing system itself, or was it put there by the programmer(s) who programmed it to behave that way?”

The computer, being ultimately a Turing Machine, isn’t going to make the kind of creative leap that, say, Newton and Leibniz made when they invented calculus in the 1660s, or that Shockley, Bardeen, and Brattain made when they invented the transistor in 1947. No computing system today can go beyond the code and the data that have been put into it. Given the code, and given the data, and possibly knowing a little more about the internal state of execution, the system’s response is 100% predictable. No computer/program is capable of programming itself into any kind of useful, non-trivial behavior. And yet humans do this kind of thing all the time. That’s what I mean by determinism.
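
To illustrate with a minimal sketch (a toy example of mine, not any production system): once a network's code and weights are fixed, inference is a pure function, and repeated evaluation on the same input is bit-for-bit identical.

    import numpy as np

    # Toy fixed-weight "trained" network. With code, weights and input all
    # fixed, the output is fully determined: a pure function.
    rng = np.random.default_rng(seed=42)   # fixed seed, reproducible weights
    W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

    def net(x):
        return np.tanh(x @ W1) @ W2        # two-layer forward pass

    x = np.array([[0.1, 0.2, 0.3, 0.4]])
    assert np.array_equal(net(x), net(x))  # same input -> same output, always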

I don’t know enough about quantum computing to say that the above holds there, too.


You wrote:
On the subject of determinism, Max goes on:
"Just as we don't fully understand how our children learn, we still don't fully 
understand how such neural networks learn, and why they occasionally fail."

I attended a conference earlier this year in Mysuru, India, where an invited speaker, Dr. Seema Chopra from Boeing, talked about adding elements to AI systems whereby the system can be asked to “explain” why it gave the result that it did. It seemed like an interesting area of research that could help here.
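
One common technique in that area is perturbation-based attribution: knock out each input in turn and measure how far the output moves. A minimal sketch of the idea (my illustration, not Dr. Chopra's method; it assumes a model exposed as a predict function returning a scalar score):

    import numpy as np

    def occlusion_importance(predict, x, baseline=0.0):
        # Score each feature by how much replacing it with a baseline value
        # shifts the model's output: a crude "explain this result" signal.
        base = predict(x)
        scores = np.empty(x.shape[-1])
        for i in range(x.shape[-1]):
            x_pert = x.copy()
            x_pert[..., i] = baseline
            scores[i] = abs(predict(x_pert) - base)
        return scores  # larger score = that input mattered more to this result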


Re: computational substrate—some of my customers are the biggies in the semiconductor industry. At least in terms of silicon semiconductors, Moore’s Law is reaching its limit. Chip designers and manufacturers are wrestling with, for example, conductors on the chip that are only one or a few atoms wide. You can’t shrink below that.


Cheers,

— steve







On Jun 29, 2023, at 9:12 PM, Les Chambers <les at chambers.com.au> wrote:

Steve

RE : " I argue, human intelligence is not something that a computer could ever 
actually be capable of"

According to Max Tegmark the jury is still out on this question.
Max is a professor of physics at MIT and president of the Future of Life 
Institute (ref Tegmark, Max. Life 3.0, Penguin Books Ltd). His claim to fame is 
that he was in the room when Larry Page called Elon Musk a speciesist.

Max provides evidence that the nature of intelligence is a subject for debate - 
as yet, unresolved.
Max: "My wife and I recently had the good fortune to attend a symposium on 
artificial intelligence organized by the Swedish Nobel Foundation, and when a 
panel of leading AI researchers were asked to define intelligence, they argued 
at length without reaching consensus.
...
there are many competing ones, including capacity for logic, understanding, 
planning, emotional knowledge, self-awareness, creativity, problem solving and 
learning."

Max's pet definition is:
"intelligence = ability to accomplish complex goals"

Max makes some outrageous assertions: 
"This substrate [platform] independence of computation implies that AI is 
possible: intelligence doesn't require flesh, blood or carbon atoms."
Max uses the following metaphor:
"Waves, for instance: they have properties such as speed, wavelength and 
frequency, and we physicists can study the equations they obey without even 
needing to know what particular substance they're waves in."

Oh well, they say, "All great truths begin as blasphemies" (George Bernard 
Shaw)

On the subject of determinism, Max goes on:
"Just as we don't fully understand how our children learn, we still don't fully 
understand how such neural networks learn, and why they occasionally fail.
...
Stuart Russell told me that he and many of his fellow AI researchers had 
recently experienced a "holy s**t!" (HS) moment, when they witnessed AI doing 
something they weren't expecting to see for many years."

My question here is: "How do you run a hazard analysis on a system you don't 
understand that is likely, for further unknown reasons, to occasionally fail?"

This situation would be amusing but for the fact that "they" are already amongst 
us. Cruise (General Motors), Waymo (Google), and Zoox (Amazon) are currently 
operating driverless taxis on the streets of San Francisco. YouTube reports 
that San Franciscans amuse themselves by jumping in front of them to see if 
they will automatically stop.

This AI non-determinism is a fact, confirmed by all practitioners who feel free 
to speak out. It's one of the core reasons why some senior practitioners have 
left developers like OpenAI and Google DeepMind. It works too well and they 
don't know why: the neural network is a very simple approximation of the human 
brain and shouldn't work as well as it does. And they don't know what it might 
be capable of in the future. Which brings me to the unbounded nature of 
artificial intelligence.

From Max:
"Nobody knows for sure what the next blockbuster computational substrate will 
be, but we do know that we're nowhere near the limits imposed by the laws of 
physics. My MIT colleague Seth Lloyd has worked out what this fundamental limit 
is, and as we'll explore in greater detail in chapter 6, this limit is a 
whopping 33 orders of magnitude (10^33 times) beyond today's state of the art 
for how much computing a clump of matter can do. So even if we keep doubling 
the power of our computers every couple of years, it will take over two 
centuries until we reach that final frontier."
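
(Checking the arithmetic: a factor of 10^33 is log2(10^33) ≈ 33 × 3.32 ≈ 110 
doublings, and at one doubling every two years that is roughly 220 years, 
hence "over two centuries".)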

The debate continues. Keep your powder dry, Steve.

Les


> Les,
> 
> "silicon is unbounded"
> 
> I disagree. Silicon is bounded, just in different ways than humans. For 
> one, Turing Computability. Turing Machines, which all modern silicon 
> computers are a kind of, are 100% deterministic. Human intelligence is 
> non-deterministic. Thus, I argue, human intelligence is not something 
> that a computer could ever actually be capable of. See, for example:
> 
> https://youtu.be/i2trJEIFIvY
> 
> I agree that, "We are embarked; we need to deal with it". But we also 
> have to be aware of the limits of so-called AI.
> 
> -- steve
> 
> On Jun 26, 2023, at 9:15 PM, Les Chambers <les at chambers.com.au> wrote:
> 
> Hi
> 
> I put it to you and the list in general, a "discuss later" mentality is 
> foolish in the extreme. The process of deploying intelligence in automated 
> systems has changed fundamentally, and will continue to change. Among other 
> things, international bodies that currently regulate software-intensive 
> safety-critical systems - who cling to regulating processes that have 
> ceased to exist - are likely to be overrun and made redundant.
> 
> In favour of organisations such as:
> 
> - The Center for Human-Compatible AI at UC Berkeley
> - The Future of Life Institute
> - The Center for AI Safety (CAIS)
> - Stanford Center for AI Safety
> 
> My view is that this is not a steady-as-she-goes situation. This is a major
> inflection point in the evolution of intelligence. Carbon hosts will always 
be
> limited; silicon is unbounded. We are embarked; we need to deal with it.
> 
> Suggested reading: Max Tegmark, Life 3.0
> 
> Les
> 
> 
> Hello,
> 
> For the moment, I don't see in industry any attempt to realise E/E/PE 
> safety-related functions with on-line AI. All I see is focused on 
> off-line AI, meaning that the training is done on specified training 
> datasets, and the validation is done on specified test datasets. We don't 
> see any performance better than roughly 3.3 × 10^-3 in automotive. So 
> let's wait for the achievement of a 10^-8 to 10^-5 error rate on test 
> datasets and discuss the acceptability later.
> 
> Attention: due to increased cybersecurity screening I may receive external 
> emails 30 minutes after their emission.
> 
> Bertrand RICQUE
> Safety and security standardisation Expert
> Program Support Manager
> Optronics and Defence Division/Customer Support
> Safran Electronics & Defense
> 
> P +33 (0)1 58 11 96 82   M +33 (0)6 87 47 84 64
> bertrand.ricque at safrangroup.com
> 102 Avenue de Paris
> 91300 MASSY FRANCE
> www.safran-electronics-defense.com
> 
> -----Original Message-----
> From: Les Chambers <les at chambers.com.au>
> Sent: Monday, 26 June 2023 06:03
> To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)
> <bertrand.ricque at safrangroup.com>; les at chambers.com.au;
> koopman.cmu at gmail.com; systemsafety at lists.techfak.uni-bielefeld.de
> Subject: RE: [SystemSafety] AI and the virtuous test Oracle
> 
> 
> RE your comment: "as it is impossible, by construction, to identify all 
> dangerous situations"
> 
> True. But it is possible to identify the subset of highest-probability 
> "dangerous situations" and deal with them. I had 10 years of experience 
> with this issue in computer control of chemical processing reactors. In 
> the 1970s, we had a safety authority write software to: 1. identify 
> well-established dangerous states of a reactor; 2. transition the reactor 
> to a safe state - usually shut down. It was called the "abort" 
> programming. This abort code overrode the control software written by 
> another team. By today's standards it was primitive but still very 
> effective.
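> 
> In outline, the architecture looked something like this (a minimal modern 
> sketch, not the original 1970s code; the states and thresholds are 
> invented for illustration):
> 
>     # Independent "abort" layer that overrides the control layer whenever
>     # the reactor enters a known dangerous state.
>     DANGEROUS_STATES = [
>         lambda s: s["temp_C"] > 350,        # runaway temperature (illustrative)
>         lambda s: s["pressure_kPa"] > 900,  # overpressure (illustrative)
>     ]
> 
>     def control(state):
>         return {"valve": "open"}            # placeholder for the control team's code
> 
>     def abort(state, command):
>         # The abort code has the last word: it can override any command.
>         if any(check(state) for check in DANGEROUS_STATES):
>             return {"valve": "closed", "shutdown": True}  # force a safe state
>         return command
> 
>     state = {"temp_C": 360, "pressure_kPa": 500}
>     command = abort(state, control(state))  # -> abort overrides: safe shutdown
> 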
> This experience has influenced my thinking on ways and means of dealing 
> with "bad" or "hallucinating" AIs. We need a separate and distinct 
> "abort" AI capable of recognising evil and dealing with it.
> Sal Khan (Khan Academy) has implemented another idea in creating his 
> Khanmigo personal tutor. To improve its accuracy in mathematics, he allows 
> the AI to have "personal thoughts". The AI equivalent of mindfulness 
> (straight out of Marcus Aurelius). He gives the impression that he feeds 
> back the AI's response to the AI, so the AI can have a think about what 
> it is thinking. A bit like briefing a fish on the concept of water.
> 
> This supports my pitch for engineers to be taught philosophy. The concepts
> of virtue and evil were defined simply 2000 years ago. These definitions
> have
> stood outside of time and can be implemented with appropriate pattern
> matching.
> If Marcus Aurelius were reincarnated, he'd be running international seminars 
> on mindfulness. The foundation ideas have not changed a whit.
> 
> Cheers
> Les
> 
> 
> Answer to the first question. Once the NN is trained, it will always give 
> the same output for the same input. It is stupid software logic.
> 
> Answer to the second question. The current safety paradigm is based on 
> perception-interpretation-decision-action loops, based on the fact that:
> 1 Either, all states of the system, leading to all identified losses, can 
> be exhaustively defined and their probability of occurrence remains, by 
> concept and construction of the system, below a tolerable level, enabling 
> fully autonomous functions (e.g. emergency shutdown functions in process 
> industries),
> 
> 2 Or, there is a human in control of the system with the ability to 
> arbitrate the decisions, meaning the ability to at least perceive and 
> interpret.
> 
> AI defeats point 1, as it is impossible, by construction, to identify all 
> dangerous situations (the training dataset is incomplete, and whatever its 
> size, the operational domain being infinite (or close to infinite from a 
> practical point of view), training dataset size divided by infinity = 0).
> 
> To defeat the second assumption, just replace a car windscreen with a 
> video screen. No need for AI.
> 
> 
> 
> -----Original Message-----
> From: Les Chambers <les at chambers.com.au>
> Sent: Thursday, 22 June 2023 23:46
> To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)
> <bertrand.ricque at safrangroup.com>; koopman.cmu at gmail.com;
> systemsafety at lists.techfak.uni-bielefeld.de
> Subject: RE: [SystemSafety] AI and the virtuous test Oracle
> 
> 
> RE your comment, "A neural network is software logic."
> 
> A neural network is a component of an AI agent. An AI agent is a suite of 
> general-purpose software tools. The agent is configured from a dataset.
> 
> ChatGPT goes on (see dialog below): "Synthesizing an AI agent involves 
> training the machine learning model on a large dataset."
> 
> This indicates that the training dataset determines the agent's behaviour 
> (transfer function - inputs to outputs). In the case of a Tesla motor 
> vehicle, the inputs are camera sensors and the outputs will soon be 
> controls to the accelerator, brake and steering.
> 
> RE your comment, "The problem is not AI. It is the exhaustive 
> deterministic relationship between the undesired losses and the system 
> states. Anything breaking this, breaks the current safety paradigm. And 
> AI is not the single thing to achieve this."
> 
> Could you please expand on this idea, possibly using an example?
> 
> How would your thought apply here: I am a driver at the wheel of a Tesla 
> travelling at 110 km/h on the freeway. I have switched the vehicle to 
> full self-driving mode. I am fast asleep. The FSD agent detects a false 
> positive - a lady with a pram. It jams on the brakes, initiating a 
> rear-end collision with a following semi.
> Q: In this case how is it that "The problem is not AI"?
> Les
> 
> ---------- ChatGPT Dialog ----------
> 
> ChatGPT prompt:
> "Describe an AI agent from a technical perspective. What are its 
> components and how is it synthesized?"
> 
> Response:
> An AI agent, from a technical perspective, typically consists of several 
> components that work together to perform specific tasks. Here are the key 
> components and how they are synthesized:
> 
> 1. Data: The foundation of any AI agent is data. It can be in various 
> forms such as text, images, audio, or structured data. This data is used 
> for training and providing context to the AI model.
> 
> 2. Machine Learning Model: The AI agent utilizes a machine learning model 
> to process and generate responses. These models are trained on vast 
> amounts of data and learn patterns to make predictions or generate 
> outputs.
> 
> 3. Natural Language Processing (NLP): NLP is a crucial component for 
> understanding and processing human language. It involves tasks like 
> language translation, sentiment analysis, intent recognition, and entity 
> extraction. NLP enables the AI agent to comprehend user inputs and 
> generate relevant responses.
> 
> 4. Knowledge Representation: Knowledge representation refers to how 
> information is stored and organized within the AI agent. It can employ 
> various techniques such as semantic networks, ontologies, or graph 
> databases to store facts and relationships. This allows the agent to 
> access and utilize knowledge when responding to user queries.
> 
> 5. Dialogue Management: Dialogue management focuses on handling the flow 
> of conversation. It involves tracking the dialogue context, managing user 
> requests and responses, and generating meaningful and coherent replies. 
> Reinforcement learning techniques can be applied to improve dialogue 
> management.
> 
> 6. Integration and APIs: AI agents can integrate with external systems 
> and services using APIs (Application Programming Interfaces). This allows 
> them to access relevant data sources, databases, or external 
> functionalities to provide more comprehensive and accurate responses.
> 
> 7. User Interface: The AI agent needs a user interface to interact with 
> users effectively. It can be a chat interface, voice-based interface, or 
> a combination of both. The UI should provide a seamless experience for 
> users to interact with the agent.
> 
> Synthesizing an AI agent involves training the machine learning model on 
> a large dataset. This process includes steps like preprocessing the data, 
> designing the model architecture, selecting appropriate algorithms, 
> training the model using optimization techniques, and fine-tuning it for 
> specific tasks. The trained model is then integrated with other 
> components, such as NLP, knowledge representation, and dialogue 
> management, to create a functioning AI agent. Regular updates, feedback 
> loops, and continuous improvement are essential to enhance the agent's 
> performance over time.
> 
> --------END Chat ----------
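> 
> To ground the "training" step above in code (a minimal sketch of my own: 
> one linear model fitted by gradient descent, standing in for the much 
> larger process ChatGPT describes):
> 
>     import numpy as np
> 
>     # Tiny stand-in for "training a model on a dataset": fit y = w*x by
>     # gradient descent on mean squared error.
>     rng = np.random.default_rng(0)
>     x = rng.normal(size=100)
>     y = 3.0 * x + rng.normal(scale=0.1, size=100)  # data with a known slope of 3
> 
>     w = 0.0
>     for _ in range(200):                        # optimization loop
>         grad = -2 * np.mean((y - w * x) * x)    # d(MSE)/dw
>         w -= 0.05 * grad                        # gradient step
>     # w is now ~3.0: the behaviour was learned from data, not hand-coded.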
> 
> 
> The problem is not AI. It is the exhaustive deterministic relationship 
> between the undesired losses and the system states. Anything breaking 
> this, breaks the current safety paradigm. And AI is not the single thing 
> to achieve this.
> 
> 
> 
> -----Original Message-----
> From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
> On behalf of Phil Koopman
> Sent: Thursday, 22 June 2023 03:32
> To: les at chambers.com.au; systemsafety at lists.techfak.uni-bielefeld.de
> Subject: Re: [SystemSafety] AI and the virtuous test Oracle
> 
> 
> Les,
> 
> Since you welcome riffs, I have something that is not as all-encompassing, 
> but might have more immediate application.
> 
> I propose that to the degree that "AI" technology is deployed in a way 
> that supplants practical human judgement, the manufacturer of that 
> system (in some cases just the AI part if it is an add-on component) 
> should be held accountable for any action (or inaction) that, if 
> associated with the human that was supplanted, would have constituted 
> negligence. This should include situations in which a human is put in an 
> untenable situation of supervising an AI in a way that puts unreasonable 
> demands upon them, amounting to a "moral crumple zone" approach 
> (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236). 
> Liability/negligence if an AI is in substantive control of such a 
> situation should attach to the manufacturer.
> 
> This leads to a narrower oracle than you propose, but perhaps still a 
> useful one. If a loss event is caused by a lack of "reasonable" behavior 
> by an AI, the manufacturer is on the hook for negligence, and the 
> AI/manufacturer owes a duty of care the same as the human who was 
> supplanted would have owed to whoever/whatever might be affected by that 
> negligence. It has the advantage of reusing existing definitions of 
> "reasonable person" that have been hammered out over decades of law. (To 
> be sure, that is not in the form of an engineering specification, but 
> case law has a pretty robust set of precedents; for example, crashing 
> into something after your properly functioning vehicle ran a red light 
> is likely to lead to the driver being found negligent.)
> 
> This does not require the AI to behave the same as people, and is not a 
> full recipe for "safe" AI. But it puts a floor on things in a way that is 
> readily actionable using existing legal mechanisms and theories. If a 
> reasonable person would have avoided a harm, any AI that fails to avoid 
> the harm would be negligent.
> 
> I've worked with a lawyer to propose this approach for automated 
> vehicles, and it is starting to get some traction. What I write in this 
> post (above) is a generalization of the concept beyond the narrow 
> automated vehicle application.
> Details here:
> https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html
> 
> -- Phil
> 
> On 6/21/2023 7:14 PM, Les Chambers wrote:
> Hi All
> 
> I find myself reflecting on what will become of us, as systems 
> engineering best practice is overrun by AI.
> 
> Practitioners report that neural networks are eating code.
> Example 1: The vector field surrounding a Tesla motor vehicle is an 
> output of a neural network, not the result of software logic. Soon the 
> neural net - not code - will generate controls. The size of the code 
> base is reducing. (Elon Musk)
> Example 2: The ChatGPT transformer code base is only 2000 LOC (Mo Gawdat, 
> https://youtu.be/bk-nQ7HF6k4)
> 
> The intelligence resides in terabytes of data, perceptrons and millions 
> of weighting parameters. All are gathered by automated means. Not 
> subject to human review.
> 
> Ergo, what will become of our trusty barriers to dangerous failure?
> 1. Safety functions - gone
> 2. Verification - gone
> 3. Code reviews - gone
> 4. Validation - How?
> 
> On validation, may I suggest the moral AI. A test oracle built
> on a virtuous dataset, capable of interrogating the target
> system to determine virtue. Test outcomes will morph from
> pass/failure to
> moral/immoral.
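> 
> To make that concrete (a purely hypothetical sketch: the "virtuous 
> dataset" and the matching rule are stand-ins, not a real system):
> 
>     # Hypothetical "virtuous test oracle": grade target-system responses
>     # as moral/immoral instead of pass/fail.
>     VIRTUOUS_EXEMPLARS = {"tells the truth", "preserves human life"}  # assumed dataset
> 
>     def moral_oracle(response: str) -> str:
>         # Toy stand-in for a classifier trained on a virtuous dataset.
>         ok = any(v in response for v in VIRTUOUS_EXEMPLARS)
>         return "moral" if ok else "immoral"
> 
>     print(moral_oracle("the system preserves human life"))  # -> moral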
> 
> Credible industry players have predicted that soon we will have AIs 
> orders of magnitude smarter than us. Especially when they start talking 
> to each other. The bandwidth will be eye-watering - the increase in 
> intelligence, vertical.
> 
> New barriers are required. Time to develop an AI that is on our side - 
> the side of ethics and the moral life. An adult in the room if you like. 
> We should birth this creature now and raise it as good parents.
> 
> Let us not panic. May I put the proposition: virtue, like creativity, 
> can be algorithmic.
> I have a sense of starting from the beginning - tabula rasa. I suggest 
> that high-level thinking on the subject could begin with ChatGPT prompts:
> 1. What is the stoic philosopher's concept of virtue?
> 2. What are the elements of philosophy relevant to AI?
> 
> Let us not forget our engineering mission: Guardians of the
> divine Logos, the organizing principle of the universe,
> responsible for its creation, maintenance, and order.
> 
> Would anyone care to riff on this?
> 
> Les
> 
> --
> 
> Les Chambers
> 
> les at chambers.com.au
> systemsengineeringblog.com
> 
> +61 (0)412 648 992
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription:
> https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
> 
> --
> Prof. Phil Koopman   koopman at cmu.edu   (he/him)
> https://users.ece.cmu.edu/~koopman/
> 
> 
> 
> --
> 
> Les Chambers
> 
> les at chambers.com.au
> 
> +61 (0)412 648 992
> 
> 
> --
> 
> Les Chambers
> 
> les at chambers.com.au
> 
> +61 (0)412 648 992
> 
> 
> --
> 
> Les Chambers
> 
> les at chambers.com.au
> 
> +61 (0)412 648 992



--

Les Chambers

les at chambers.com.au

+61 (0)412 648 992




