[SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Peter Bernard Ladkin ladkin at rvs.uni-bielefeld.de
Tue Jul 2 05:43:06 CEST 2013



On 2 Jul 2013, at 00:47, Steve Tockey <Steve.Tockey at construx.com> wrote:

> I think that sounds good in theory but it may not work effectively in practice. The issue is that almost all of the test teams I know don't have inside (aka "white box") knowledge of the software they are testing. They are approaching it purely from an external ("black box") perspective. They can't tell if the code has high cyclomatic complexity or not.

That sounds like the wrong way to assure SW. If you want to be assured that the SW is reliable to an average rate of one SW-caused failure per 10^X operational hours, you need to observe 3 x 10^X hours of failure-free operation to be 90% confident of it, under the assumption that you have perfect failure detection.
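
For the record, a minimal sketch of the arithmetic behind a figure of that order, assuming a constant failure rate and perfect failure detection (the function name below is purely illustrative):

import math

def failure_free_hours_required(x, confidence=0.90):
    # Hours of failure-free operation needed to support a claim of at most
    # one failure per 10**x operational hours at the given confidence,
    # assuming a constant (exponential) failure rate and perfect detection.
    claimed_rate = 10.0 ** -x                        # failures per hour
    return math.log(1.0 / (1.0 - confidence)) / claimed_rate

# failure_free_hours_required(4)        -> about 23,000 hours (ln(10) ~ 2.3 times 10^4)
# failure_free_hours_required(4, 0.95)  -> about 30,000 hours (ln(20) ~ 3.0 times 10^4)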

I guess it's OK if you don't mind if your SW croaks every hundred or thousand hours or so. But that is hardly what one might term quality assurance.

> In principle, testers should be given the authority to reject a piece-of-crap product. That's their job in all other industries. But in software (non-critical, mind you), testing is usually window dressing that's mostly overridden if it means threatening promised ship dates.

By which I take it you mean failures are seen. In which case not even the above applies.

I had thought that the main point of testing a product which you hope to be of moderate quality was to make sure you have the requirements right and haven't forgotten some obvious things about the operating environment.

PBL

Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited

