[SystemSafety] Does "reliable" mean "safe" and/or "secure" or neither?

Nick Tudor njt at tudorassoc.com
Sat Apr 23 16:32:08 CEST 2016


Tsk tsk, Bev... the thread on this subject last year ended with no conclusions
accepted by either side; some just shouted louder (and got personal), and I,
among others, gave up arguing... as I'm going to do now.

Best regards

Nick Tudor
Tudor Associates Ltd
Mobile: +44(0)7412 074654
www.tudorassoc.com

77 Barnards Green Road
Malvern
Worcestershire
WR14 3LR
Company No. 07642673
VAT No: 116495996

www.aeronautique-associates.com

On 23 April 2016 at 15:18, Littlewood, Bev <Bev.Littlewood.1 at city.ac.uk>
wrote:

> Where “As previously established” = “If I say it often enough it will be
> true”
>
>
> On 23 Apr 2016, at 12:34, Nick Tudor <njt at tudorassoc.com> wrote:
>
> Peter
>
> As previously established...software does not have a reliability.
>
> Cheers
>
> On Saturday, 23 April 2016, Peter Bernard Ladkin <
> ladkin at rvs.uni-bielefeld.de> wrote:
>
>> On Thursday 2016-04-21 05:00 , I wrote:
>> >  On 2016-04-20 23:18 , Les Chambers wrote:
>> >>> But here's the thing, any standards body that goes down this path
>> will soon encroach upon the territory of
>> >>> established religion whose moral codes often diverge even though
>> their collective central core is probably the same.
>> >
>> >  That is utter nonsense.
>>
>> Les deprecated this as somehow lacking intellectual rigor. So here's a
>> more rigorous derivation.
>>
>> Suppose you have a computer-based system S with function F on which you
>> want to rely. You've done
>> your best to write software So1 for S so that it executes F, but
>> sometimes it doesn't. You run So1
>> in environment E1, and mostly it executes F (75%) but sometimes it
>> doesn't (25%). You run So1 in
>> environment E2, and mostly it executes F (60%) but sometimes it doesn't
>> (40%).
>>
>> Someone else has done their best to write software So2 for S so that it
>> executes F, but sometimes it
>> doesn't. You run So2 in environment E1, and mostly it executes F (70%)
>> but sometimes it doesn't
>> (30%). You run it in environment E2, and mostly it executes F (65%) but
>> sometimes it doesn't (35%).
>>
>> You can talk about the reliability of So1 in E1 and in E2, and about the
>> reliability of So2 in E1
>> and in E2. Reliability is a function of software and its environment:
>> reliability(So1, E1) = 75%.
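>>
>> To make that concrete, here is a minimal Python sketch (the names are
>> purely illustrative; the numbers just restate the figures above) of
>> reliability as a function of the pair (software, environment):
>>
>> reliability = {
>>     ("So1", "E1"): 0.75, ("So1", "E2"): 0.60,
>>     ("So2", "E1"): 0.70, ("So2", "E2"): 0.65,
>> }
>>
>> def rel(software, environment):
>>     # Observed proportion of runs in which `software` executed F in
>>     # `environment`, taken straight from the figures above.
>>     return reliability[(software, environment)]
>>
>> print(rel("So1", "E1"))  # 0.75, i.e. reliability(So1, E1)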
>>
>> Now suppose you have a new environment E3 and you want to know whether to
>> choose So1 to run in it or
>> So2. How do you choose? How So1 and So2 run in E1 and E2 might not be a
>> reliable guide to how either
>> is going to run in E3, although you'd imagine that there would be some
>> kind of correlation. You want
>> somehow a measure of So1 against So2 which is going to guide your choice.
>>
>> And then you've heard that someone else has written So3 to perform F, and
>> has some reliability stats
>> in some other environments, but not E1, E2 or E3. So now you want a guide
>> to choosing from So1, So2
>> and So3. And so on for So4 from yet another vendor, and ........
>>
>> What's that measure going to be? Let's call it XXXXXX. If you had
>> software So7 and So8, and So7 ran
>> more reliably in every environment you tried than did So8, you might want
>> to conclude that So7 had
>> more of XXXXXX than So8. You can't say that So1 has more XXXXXX than So2
>> per se, if you just look at
>> their reliabilities in E1 and E2, because one is not dominant over the
>> other.
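>>
>> As a small illustrative sketch (again Python, names assumed for the
>> example), the dominance comparison just described looks like this:
>>
>> def dominates(rel_a, rel_b, environments):
>>     # True iff software A ran more reliably than B in *every*
>>     # environment both were tried in.
>>     return all(rel_a[e] > rel_b[e] for e in environments)
>>
>> so1 = {"E1": 0.75, "E2": 0.60}
>> so2 = {"E1": 0.70, "E2": 0.65}
>>
>> print(dominates(so1, so2, ["E1", "E2"]))  # False: So1 wins only in E1
>> print(dominates(so2, so1, ["E1", "E2"]))  # False: So2 wins only in E2
>> # Neither dominates, so these reliabilities alone don't rank So1 against So2.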
>>
>> And doesn't XXXXXX seem like a silly name? It's hard to pronounce, for
>> one thing. So you might want
>> to call it, I dunno, ... how about "integrity"?
>>
>> You know that whatever integrity is, it's not the same as reliability.
>> You also have a pretty good
>> hunch that it might be correlated with the care taken in developing the
>> software. But you have no
>> exact measure.
>>
>> So what you might do is say: I can't be exact, so I'll stick with five
>> discrete categories of
>> "integrity" for software: no-measure, low, medium, high and exceptionally
>> high. You might even want
>> to call these "levels" (except for the first). And give them numbers, say
>> no-number, 1, 2, 3 and 4.
>> And develop software for IL 1 by whatever means people think are
>> appropriate for the cases where you
>> need the kind of performance hopefully associated with IL 1-designated
>> applications. Mutatis
>> mutandis for IL 2, 3, 4.
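>>
>> A sketch of that discretisation (the enum name and labels below are
>> mine, for illustration only):
>>
>> from enum import Enum
>>
>> class IntegrityLevel(Enum):
>>     NO_MEASURE = None   # deliberately given no number
>>     IL1 = 1             # low
>>     IL2 = 2             # medium
>>     IL3 = 3             # high
>>     IL4 = 4             # exceptionally high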
>>
>> Well, actually, I've left a bit out. The developers already did that. So1
>> and So2 were both
>> developed using means thought appropriate for IL 2. So3 was developed
>> using means thought
>> appropriate for IL 4.
>>
>> So, which software do you procure for executing F in E3? Other things
>> being equal, I'd probably choose the
>> one with IL 4. Of course, that doesn't mean So3 will actually perform
>> more reliably in E3 than the
>> others. I don't know that for sure and won't find out until I try (all
>> three, and document carefully
>> their performance over a long period of time).
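>>
>> The procurement heuristic, as one last illustrative sketch: with no
>> reliability data for E3, fall back on the integrity level each candidate
>> was developed to (the dict below just restates the ILs given above):
>>
>> candidates = {"So1": 2, "So2": 2, "So3": 4}   # software -> claimed IL
>>
>> def choose_for_new_environment(candidates):
>>     # Highest claimed IL wins; this is a guide, not a reliability claim.
>>     return max(candidates, key=candidates.get)
>>
>> print(choose_for_new_environment(candidates))  # "So3"; only trials in E3 will confirm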
>>
>> None of this is subjective, although some of it is vague and some is
>> uncertain (irredeemably
>> uncertain, according to some people). There is also a difference between
>> what software is intended to do,
>> what it is needed to do, and what it actually does. (Ingo Rolle has some
>> insight into how this
>> affects claims about the software, and has written it up. The final
>> version is due to go up on my
>> blog soon.) Still, it does a job right now which people need doing, but
>> probably can't do perfectly.
>>
>> Then someone comes along, quotes a couple of common-language dictionary
>> definitions of "integrity",
>> and says something about established religion and moral codes. Utter
>> nonsense you might say, and I did.
>>
>> PBL
>>
>> Prof. Peter Bernard Ladkin, Faculty of Technology, University of
>> Bielefeld, 33594 Bielefeld, Germany
>> Je suis Charlie
>> Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de
>>
>
> --
> Nick Tudor
> Tudor Associates Ltd
> Mobile: +44(0)7412 074654
> www.tudorassoc.com
>
> 77 Barnards Green Road
> Malvern
> Worcestershire
> WR14 3LR
> Company No. 07642673
> VAT No: 116495996
>
> www.aeronautique-associates.com
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
>
>
> _______________________________________________
>
> Bev Littlewood
> Professor of Software Engineering
> Centre for Software Reliability
> City University London EC1V 0HB
>
> Phone: +44 (0)20 7040 8420  Fax: +44 (0)20 7040 8585
>
> Email: b.littlewood at csr.city.ac.uk
>
> http://www.csr.city.ac.uk/
> _______________________________________________
>
>

