[SystemSafety] CbyC and unit testing

Brent Kimberley brent_kimberley at rogers.com
Tue Jul 7 01:19:24 CEST 2020


 Hi Olwen. I was asking about monitoring - Mission Ops Info Mgmt.
    In the case of a controller, how often did the target exceed the laws of physics?
    In a real-time system, how often were design assumptions violated?
    In a sort algorithm, you might want to know statistics about the sort: wide/narrow; fixed-length/variable-length; runtime.
    In simulated annealing: iterations, restarts, etc.
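A minimal sketch of the kind of monitoring described above, in Python. The names here (AssumptionMonitor, MAX_SPEED, the assumption label) are purely illustrative, not from any real system: the idea is simply to count how often each design assumption is checked and how often it is violated, so that the ratio can be reported back to mission-ops information management.

```python
from collections import Counter

class AssumptionMonitor:
    """Record how often each named design assumption is checked and violated."""

    def __init__(self):
        self.checks = Counter()
        self.violations = Counter()

    def check(self, name, condition):
        """Log one evaluation of an assumption; return the condition unchanged."""
        self.checks[name] += 1
        if not condition:
            self.violations[name] += 1
        return condition

    def report(self):
        """Map each assumption name to (violations, total checks)."""
        return {name: (self.violations[name], self.checks[name])
                for name in self.checks}

# Illustrative use: a controller target should never exceed a physical bound.
monitor = AssumptionMonitor()
MAX_SPEED = 100.0  # hypothetical physical limit
for target in [20.0, 55.0, 140.0, 90.0]:
    monitor.check("target_within_physics", target <= MAX_SPEED)

print(monitor.report())  # {'target_within_physics': (1, 4)}
```

The same pattern extends to the other examples: a real-time system would label each timing or scheduling assumption, a sort would record input-shape statistics, and an annealer would count iterations and restarts.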

    On Monday, July 6, 2020, 2:51:34 p.m. EDT, Olwen Morgan <olwen at phaedsys.com> wrote:  
Not sure, but I've certainly been involved in the design and use of a suite of tests that comprehensively destroyed the claims of the developers of some CASE tools. That was the suite of 8500+ tests that Derek Jones and I (almost entirely Derek - he actually developed the tests in between our telephone brainstorming sessions) used to test Purify and Sentinel, pretty well to destruction, in the mid 1990s.
 
We both strongly suspected the tools were claiming much more than they could actually do, so I'd regard that as justifiably malicious destructive testing. ... :-))
 
Indeed that was one of the origins of my approach to software testing in general that I've cited in the CbyC/UT thread. You could fairly have called that particular exercise "saturation-bombing testing".
 
Olwen
 On 06/07/2020 18:58, Brent Kimberley wrote:
  Tangential question: who uses non-destructive testing to detect errors?
  
      On Monday, July 6, 2020, 1:37:49 p.m. EDT, Martyn Thomas <martyn at thomas-associates.co.uk> wrote:  
  
   On 05/07/2020 12:47, Olwen Morgan wrote:
 > Does anyone here honestly believe that you could successfully defend
 > omitting UT in an action for negligence if a system developed using
 > CbyC failed and killed someone as a result of a defect that could have
 > been detected by UT?
 
 Can you guarantee that your UT will detect all the errors that any
 possible UT would have detected? If so, how?
 
 Are you using successful tests as the axioms on which you can develop a
 rigorous inductive proof of correctness, which (if I recall correctly)
 Tony Hoare said was how testing should be used?
 
 If not, in your hypothetical example, how are you going to defend having
 omitted the unit tests that would have detected the errors that caused
 the failure that killed someone?
 
 I think you are doing what the opponents of FMs often do and assuming
 that the proponent of C-by-C is claiming they can deliver perfection.
 I'm certainly not - I'm saying that software engineering seeks to make
 software that is as fit as is reasonably practicable for its intended
 purpose and that in my experience, being as rigorous as reasonably
 practicable is tautologically how to achieve that.
 
 In my experience, most software teams don't even try to be rigorous. At
 best they are skilled craftspeople, not professional engineers.
 Sometimes that's good enough. Sometimes it may even be what you need.
 Caveat emptor. 
 
 Martyn
 
 
 _______________________________________________
 The System Safety Mailing List
 systemsafety at TechFak.Uni-Bielefeld.DE
 Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety