[SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Ben Bradshaw Ben.Bradshaw at TRW.COM
Tue Jul 2 11:50:43 CEST 2013


Regarding the values quoted for 90% confidence and the number of hours of fault-free testing: I believe that if you have 3 x 10^X hours of failure-free operation, you can be 95% confident that the average failure rate is no worse than one failure every 10^X hours, since under a constant failure-rate assumption the confidence is 1 - exp(-3), which is approximately 0.95.
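
A minimal sketch of the calculation, assuming a constant failure rate (exponential model) and perfect failure detection; the helper names are mine, purely for illustration:

import math

# Zero-failure reliability demonstration under a constant failure-rate
# (exponential) model with perfect failure detection.

def confidence(failure_free_hours, mtbf_hours):
    # Confidence that the MTBF is at least mtbf_hours after
    # failure_free_hours of failure-free operation.
    return 1.0 - math.exp(-failure_free_hours / mtbf_hours)

def hours_required(target_confidence, mtbf_hours):
    # Failure-free hours needed to claim mtbf_hours at the given
    # confidence level.
    return -mtbf_hours * math.log(1.0 - target_confidence)

print(confidence(3e4, 1e4))       # ~0.95: 3 x 10^4 h against a 10^4 h MTBF
print(hours_required(0.90, 1e4))  # ~2.3 x 10^4 h for 90% confidence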


Ben Bradshaw BSc PhD MSc CEng MIMechE
Principal Engineer, Systems and Safety
TRW Conekt
Stratford Road
Solihull
West Midlands B90 4GW
 -----------------------------------------------------------------
E-mail:     ben.bradshaw at trw.com 
Tel:	+44 (0)121 627 3556
Fax:	+44 (0)121 627 3584 
Web:	www.conekt.co.uk

 -----------------------------------------------------------------




-----Original Message-----
From: systemsafety-bounces at techfak.uni-bielefeld.de [mailto:systemsafety-bounces at techfak.uni-bielefeld.de] On Behalf Of Peter Bernard Ladkin
Sent: 02 July 2013 04:43
To: Steve Tockey
Cc: systemsafety at techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]



On 2 Jul 2013, at 00:47, Steve Tockey <Steve.Tockey at construx.com> wrote:

> I think that sounds good in theory but it may not work effectively in practice. The issue is that almost all of the test teams I know don't have inside (aka "white box") knowledge of the software they are testing. They are approaching it purely from an external ("black box") perspective. They can't tell if the code has high cyclomatic complexity or not.

That sounds like the wrong way to assure SW. If you want to be assured that the SW is reliable to the average frequency of one SW-caused failure in 10^X operational hours, you need to observe 3 x 10^X hours of failure-free operation to be 90% confident of it, under the assumption that you have perfect failure detection.
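
(Under the same constant-failure-rate assumption, 3 x 10^X failure-free hours gives about 95% confidence, since 1 - exp(-3) is roughly 0.95; 90% confidence needs only about 2.3 x 10^X hours, since -ln(0.1) is roughly 2.3. That discrepancy is what the reply above addresses.)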

I guess it's OK if you don't mind if your SW croaks every hundred or thousand hours or so. But that is hardly what one might term quality assurance.

> In principle, testers should be given the authority to reject a piece of crap product. That's their job in all other industries. But in software (non-critical, mind you), testing is usually window dressing that's mostly overridden if it means threatening promised ship dates.

By which I take it you mean failures are seen, in which case not even the above applies.

I had thought that the main point of testing a product which you hope to be of moderate quality was to make sure you have the requirements right and haven't forgotten some obvious things about the operating environment.

PBL

Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE



