[SystemSafety] Thorough Test Suite?

Dewi Daniels dewi.daniels at software-safety.com
Sun Nov 17 09:59:04 CET 2019


Rod,

About half the software was analysed by Lloyd's Register, the other half
was analysed by Aerosystems. We only looked at the Level A and Level B code
because it would have been too expensive to analyse all of the software on
the aircraft. I was one of the team leaders at Lloyd's Register, so I was
personally involved in conducting some of the static analysis and I was
also aware of the work being done by other Lloyd's Register teams and by
Aerosystems. Some of the software may have been through the full DO-178B
verification process, but that was not the case for the majority of it. I
personally analysed the worst C program cited
in the article, which had an anomaly rate of 500 anomalies per thousand
lines of code. The design didn't match the requirements and the code didn't
match the design.

I remember that the mission computer software was written in SPARK. That
program had the lowest anomaly rate of any of the programs that were
analysed. The reported anomaly rate was four anomalies per thousand lines
of code. As you know, Praxis typically achieved a residual defect rate of
less than one defect per thousand lines of code for software developed using
a similar process.

Yours,

Dewi Daniels | Director | Software Safety Limited

Telephone +44 7968 837742 | Email dewi.daniels at software-safety.com

Software Safety Limited is a company registered in England and Wales.
Company number: 9390590. Registered office: Fairfield, 30F Bratton Road,
West Ashton, Trowbridge, United Kingdom BA14 6AZ


On Fri, 15 Nov 2019 at 11:31, Roderick Chapman <rod at proteancode.com> wrote:

> On 14/11/2019 17:11, Dewi Daniels wrote:
>
> I helped conduct the static analysis on C-130J when I was at Lloyd's
> Register. QinetiQ's analysis is flawed. Due to timescale pressures, we were
> asked to conduct the static analysis before the code had been tested, so I
> don't see that you can draw any conclusions about the efficacy (or
> otherwise) of the DO-178B verification process.
>
> Dewi,
>
>  When you say "before the code had been tested", do you mean _all_ the
> code, or only the subsystems that L-R and you personally looked at? Were
> those systems the Level-A and Level-B systems that German compared wrt the
> efficacy of MC/DC structural coverage?
>
> (Note: I also had a hand in this: a team from Praxis worked on the Mission
> Computer development at L-M in late 1995, during the phase when they
> adopted SPARK. I recall the experience fondly.)
>
>  - Rod
>
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
> Manage your subscription:
> https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety