[SystemSafety] Comparing reliability predictions with reality

Derek M Jones derek at knosof.co.uk
Mon Feb 24 14:47:57 CET 2025


Robert,

Thanks for the links.

> being an old guy, I see there’s a new generation of software engineers out there who are trying to improve on previous generation’s efforts

I wish this were true, but we old guys seem to be
the ones doing all the real work.

> https://osf.io/preprints/psyarxiv/tfjyw_v1
> Hicks, C. M., & Hevesi, A. (2024, November 21). A Cumulative Culture Theory for Developer Problem-Solving.

Yes, human cognition is a woefully understudied aspect
of software engineering.  Several academics in
software departments have told me that it is not
an area of interest to their department, while others
have said that they would never get anything in this
area published in the software engineering journals.

See chapter 2 of my book Evidence-based Software Engineering: http://knosof.co.uk/ESEUR/

> Analyze This! 145 Questions for Data Scientists in Software Engineering
> Andrew Begel, Thomas Zimmermann

A lot of the questions are sensible ones, involving
cost/benefit and return on investment.
However, in the few cases where I have good enough data,
developers don't like where the analysis leads.
People like doing what they are doing, and tend to resist
suggestions for major changes.
Cognitive capitalism is the title of the third chapter of my book.

In system safety there is a belief that following a process
will lead to reliable code.  And the evidence for this is?

It's easy enough to create a process that produces unreliable code
and then rebrand it as the opposite.

> ICSE 2014: Proceedings of the 36th International Conference on Software Engineering
> Pages 12 - 23
> https://doi.org/10.1145/2568225.2568233
> 
> still standing,
>    bob s
> 
>> On Feb 24, 2025, at 8:10 AM, Derek M Jones <derek at knosof.co.uk> wrote:
>>
>> All,
>>
>> Having spent some time reading lots of papers on
>> models of software reliability, based on
>> CPU time between fault experiences, I have not found
>> one that measures the accuracy of the models.  By
>> accuracy I mean comparing the predicted time to the
>> next fault against the actual time to the next fault.
>>
>> There is something of a cottage industry of papers
>> that compare probability distributions fitted to historical
>> data.  Researchers seem shy about taking the next step of
>> comparing predictions of future faults.  Perhaps because
>> the results are not very good?
>>
>> Does anybody know of papers that compare predictions against
>> actual?
>>
>> Yes, lack of data is a perennial problem.
>>
>> Some related discussion here
>> https://shape-of-code.com/2025/02/23/deep-dive-looking-for-good-enough-reliability-models/
>>
>> -- 
>> Derek M. Jones           Evidence-based software engineering
>> blog: https://shape-of-code.com
>>
>> _______________________________________________
>> The System Safety Mailing List
>> systemsafety at TechFak.Uni-Bielefeld.DE
>> Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
> 
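As an illustration of the comparison described in the quoted message, the sketch below fits the simplest possible reliability model, an exponential distribution whose rate is estimated from the inter-fault times seen so far, then walks forward through the data comparing each predicted time-to-next-fault against the actual one. The inter-fault times are invented for demonstration; they are not data from any real project.

```python
# Walk-forward check of a simple exponential reliability model:
# predict the time to the next fault from history, then compare
# against what actually happened.  Times below are made up.

inter_fault_times = [12.0, 7.5, 30.2, 18.1, 44.0, 25.3, 61.7, 38.4, 90.5, 52.0]

def walk_forward_errors(times, warmup=3):
    """For each fault after `warmup`, predict the time to the next
    fault as the mean of all earlier inter-fault times (the MLE for
    an exponential model), and record the relative error vs actual."""
    errors = []
    for i in range(warmup, len(times)):
        predicted = sum(times[:i]) / i   # exponential MLE of mean time
        actual = times[i]
        errors.append(abs(predicted - actual) / actual)
    return errors

errs = walk_forward_errors(inter_fault_times)
print("mean relative error: %.2f" % (sum(errs) / len(errs)))
```

Even this toy example shows why the walk-forward step matters: a distribution can fit the historical data well while its point predictions of the next inter-fault time are badly off.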

-- 
Derek M. Jones           Evidence-based software engineering
blog: https://shape-of-code.com


