[SystemSafety] OpenSSL Bug

Patrick Graydon patrick.graydon at gmail.com
Fri Apr 11 17:36:50 CEST 2014


On 11 Apr 2014, at 16:38, Mike Rothon <mike.rothon at certisa.com> wrote:

> 1) How did we arrive at a situation where a large proportion of seemingly mission / financially critical infrastructure relies on software whose licence clearly states " This software is provided by the openSSL project ``as is`` and any expressed or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed."? 

I don’t know the history about which you ask.  But it seems inevitable to me that gratis software would not be warranted fit for any purpose: how could a loose collection of unpaid volunteer developers possibly underwrite such a warranty?

I don’t have too much of a problem with gratis software being offered as-is.  I doubt most people are capable of judging the fitness of software.  (Even if they are experts, how much can one person check?)  But I don’t see why such software shouldn’t be sold by a vendor who charges for the value-add of verifying, validating, and warranting it.

The real questions, I think, are (a) why do we put up with such disclaimers on software that is part of a commercial offering, and (b) what can be done to make vendors take responsibility for their software?  I realise that each of us as individuals faces Hobson’s choice with respect to (a), but if enough people demanded better, the situation might be different.  Chris’s paper explores some of the options for addressing (b).


> 2) Is it implicit that FOSS is less secure than proprietary software because exploits can be found by both analysis and experimentation rather than just experimentation? Or will this start a gold rush analysis of FOSS by security organisations resulting in security levels that are close to or better than proprietary software?

There are people who claim the opposite actually: the thinking is that more eyeballs make software more secure.  I’ve heard rhetoric from both sides, but if there is solid empirical evidence either way, I am not aware of it.

We’ve discussed programming languages, but what this episode makes me wonder more about is basic engineering in the form of architecture, verification, and validation.

I’ve read a couple of articles about this suggesting that this particular code wasn’t considered critical because the heartbeat function itself has no particular security implications.  (Sorry, the citations escape me at the moment.)  That worries me because it reflects a misunderstanding of partitioning and isolation, a topic DO-178B addressed two decades ago: code that shares a process and address space with private keys and session data cannot be dismissed as non-critical unless it is actually partitioned from them.

I also wonder how this code was tested before being put into service.  Static analysis might have been a good idea, but shouldn’t basic robustness testing, as per DO-178B §6.4.2.2’s two-decade-old advice, have caught this?  I suppose a tester could submit a heartbeat whose claimed payload length exceeds the data actually sent, get back a response longer than what was sent, and not recognise that as a problem.  But that seems a bit doubtful to me.
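
To make that concrete, here is a minimal sketch in C of the pattern at issue.  It is not the actual OpenSSL source and the names are mine; the point is only that the length field in the request is trusted without being checked against the number of bytes actually received, and that the eventual fix amounts, roughly, to adding exactly that check.

    /* Minimal sketch of the vulnerable pattern -- not the actual
       OpenSSL code.  'record' holds the received heartbeat message and
       'record_len' is the number of bytes that actually arrived. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *build_heartbeat_response(const unsigned char *record,
                                            size_t record_len,
                                            size_t *resp_len)
    {
        /* Byte 0 is the message type; bytes 1-2 claim the payload length. */
        size_t claimed = ((size_t)record[1] << 8) | record[2];

        /* The missing bounds check -- roughly what the fix added:
           if (3 + claimed > record_len) return NULL;               */

        unsigned char *resp = malloc(3 + claimed);
        if (resp == NULL)
            return NULL;
        resp[0] = 2;            /* heartbeat_response message type */
        resp[1] = record[1];    /* echo the claimed length */
        resp[2] = record[2];

        /* Copies 'claimed' bytes even though the peer may have sent far
           fewer; the excess is read from whatever happens to follow the
           record in memory. */
        memcpy(resp + 3, record + 3, claimed);

        *resp_len = 3 + claimed;
        return resp;
    }

A robustness test of the kind §6.4.2.2 describes would simply send a request whose claimed payload length exceeds the data actually transmitted and fail if the response echoes back more than was sent.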

What kind of engineering did the people who developed this code and the people who put it into service do?!?


> Just in case anyone missed the news, the original source code for MS-DOS and Word for Windows 1.1a is available online from the Computer History Museum (http://www.computerhistory.org).

Might be worth revisiting the bad old days of LPARAMs, HANDLEs, LocalAlloc, GetProcAddress, GetProfileString, InvalidateRect, CreateWindow, MessageBox, and thousands-of-lines-long WndProc functions.  :)

— Patrick


