[SystemSafety] Autonomous Vehicles and "Hacking" Threats

Nick Lusty nl887 at my.open.ac.uk
Thu Dec 11 20:40:56 CET 2014


Hi Dewi,

It is interesting reading this, because about five years ago, whilst 
working at Silver with you, I was sent down to Aerosystems 
International to repeat the SPARK proofs against the Parnas tables for 
a new release of the software.  The new Mission Computer software was 
still in Ada, but not in a SPARK-compatible subset, especially in the 
operator interfaces, where variable-sized parameters were in common 
use.  Much of the time was therefore spent producing SPARK shadow 
specifications that represented the real Ada code in SPARK, then adding 
SPARK annotations to help the Simplifier, and finally using the Proof 
Checker to demonstrate things that the Simplifier could not figure out.  
I didn't find any functional defects, and it was very labour intensive.  
A shame, as writing SPARK from scratch would have been quicker and 
certainly more reliable.

Regards,

Nick

On 25/11/2014 11:41, Dewi Daniels wrote:
> I helped carry out static code analysis of the airborne software for the
> Lockheed C-130J in 1995 while I was working at Lloyd's Register. We used
> MALPAS to verify software written in Ada, C and Assembler. This is the
> project described in Andy German's CrossTalk article
> (http://www.crosstalkonline.org/storage/issue-archives/2003/200311/200311-0-Issue.pdf).
> I later spent 6 months at the Lockheed factory in Marietta, GA,
> helping to write the software safety case for the C-130J. Before working for
> Lloyd's Register, I'd worked for Program Validation Limited, where I helped
> to develop the SPARK Examiner. After Lloyd's Register, I went to work for
> Praxis, who had purchased Program Validation Limited and rights to the SPARK
> toolset.
>
> One needs to be careful in interpreting the evidence from the C-130J static
> code analysis. I seem to remember there were about 7 categories of defect,
> ranging from functional defects that affected safety of flight at one
> extreme to spelling mistakes in supporting documentation at the other
> extreme. Also, in many cases, the static code analysis was carried out
> before the software testing had been completed. The software that Andy
> German reported to have a defect rate of 4 anomalies per thousand lines of
> code was the mission computer software, which was specified using Parnas
> Tables and implemented in SPARK by Lockheed with help from Praxis. I believe
> they did program proof of the SPARK code against the Parnas Tables. I
> therefore believe the 4 anomalies per kLOC reported by Andy German is
> consistent with the < 1 post-delivery defect per KLOC reported by Praxis on
> other SPARK projects, given the anomalies reported on C-130J would have
> included spelling mistakes and other non-functional defects, and that the
> static code analysis was done in parallel with the software development.
> Incidentally, Lockheed's positive experience of developing the mission
> computer software in SPARK was documented in Jim Sutton's book, Lean
> Software Strategies
> (http://www.amazon.co.uk/Lean-Software-Strategies-Techniques-Developers/dp/1563273055).
>
> When comparing defect rates, we need to be careful to define whether we mean
> just those functional defects that affect safety, all functional defects, or
> all defects including non-functional defects, and also whether we count
> defects found prior to entry into service, or only defects found after entry
> into service.
>
> I think what was clear from the C-130J experience was that the (only) SPARK
> program exhibited a much lower defect rate than the average Ada program,
> which exhibited a much lower defect rate than the average C program. Having
> said that, there was considerable variability between suppliers (the best C
> program had a lower defect rate than the average Ada program). As it
> happens, I helped conduct the static code analysis on the worst program (500
> anomalies per thousand lines of code according to Andy German's article).
> The code didn't match the design, and the design didn't match the
> requirements! I understand that Lockheed cancelled the contract with that
> particular supplier and sourced an alternative part from another supplier.
>
> Yours,
>   
> Dewi Daniels | Managing Director | Verocel Limited
> Direct Dial +44 1225 718912 | Mobile +44 7968 837742 |
> Email ddaniels at verocel.com
>   
> Verocel Limited is a company registered in England and Wales. Company
> number: 7407595. Registered office: Grangeside Business Support Centre, 129
> Devizes Road, Hilperton, Trowbridge, United Kingdom BA14 7SZ
>
> -----Original Message-----
> From: systemsafety-bounces at lists.techfak.uni-bielefeld.de
> [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
> Peter Bernard Ladkin
> Sent: 23 November 2014 05:19
> To: systemsafety at lists.techfak.uni-bielefeld.de
> Subject: Re: [SystemSafety] Autonomous Vehicles and "Hacking" Threats
>
> Dewi Daniels, who did much of the code inspection on this project I
> understand, is on this list.
>
> For an unquantified list of what was discovered see slides 16-20 of
> http://www.rvs.uni-bielefeld.de/publications/Talks/LadkinIETSysSafe2012.pdf
>
> PBL
>
> On 2014-11-22 23:30, Martyn Thomas wrote:
>> I think that his numbers are just the discovered anomalies.
>>
>> On 22 Nov 2014, at 22:03, Brent Kimberley <brent_kimberley at rogers.com> wrote:
>>
>>> How does Andy et al estimate the volume of undiscovered anomalies?
>>>
>>> On Saturday, November 22, 2014 10:47 AM, Martyn Thomas
>>> <martyn at thomas-associates.co.uk> wrote:
>>>   
>>> On 21/11/2014 16:38, Stefan Winter wrote:
>>>> I had hoped for some better estimate of defect densities for the
>>>> latter. The best approximation I had come up with so far is the
>>>> product of "lines of code in a modern car" (100 million for a
>>>> premium car in 2009) and "defect count per line of code in really
>>>> critical software" (10^-4). I had taken these numbers from an IEEE
>>>> Spectrum publication and a short paper by Gerard Holzmann, hoping
>>>> that critical NASA software contains on average fewer bugs than
>>>> common automotive code and that the calculation hence gives me a
>>>> conservative estimate. If anyone has a better idea or wants to share
>>>> more accurate numbers, please let me know. :-)
>>> Andy German's Crosstalk article
>>> (http://www.crosstalkonline.org/storage/issue-archives/2003/200311/200311-0-Issue.pdf)
>>> described the analysis of a range of software in a military aircraft. He
>>> reported a range of "anomaly" densities, ranging from 4/KLoC to 250/KLoC.
>
>
> Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld,
> 33594 Bielefeld, Germany
> Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de
>
>
>
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
>
>


