[SystemSafety] Fwd: Contextualizing & Confirmation Bias

Derek M Jones derek at knosof.co.uk
Wed Feb 5 15:53:21 CET 2014


All,

 > Recent NYT article on the scientific method and confirmation bias in
 > experiments in the hard sciences for those who are still interested
 > in this topic:
 >
 > http://www.nytimes.com/2014/02/02/opinion/sunday/scientific-pride-and-prejudice.html?_r=0

There are several themes playing out here.
Some of the underlying problems are:

    o lots of monkeys, or researchers, running experiments, with only
those results that reach statistical significance at the 5% level
getting published (see the simulation sketch after this list);
"Why Most Published Research Findings Are False"
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

http://simplystatistics.org/2013/12/16/a-summary-of-the-evidence-that-most-published-research-is-false/

    o researchers have a hard time publishing negative results, so
once something is out there it can remain as received wisdom for a
long time

    o journals tend to be more interested in publishing new research
than replications of previously published work.
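To make the first point concrete, here is a minimal simulation sketch
(mine, not from the NYT article or the Ioannidis paper; the fraction
of true hypotheses and the statistical power are assumed values,
chosen only for illustration). It shows how filtering on p < 0.05
alone can leave the published record with a large share of false
positives:

import random

random.seed(1)

TRIALS    = 10_000  # experiments run across all the "monkeys"
TRUE_FRAC = 0.1     # assumed: 1 in 10 tested hypotheses is actually true
POWER     = 0.8     # assumed: chance a true effect reaches p < 0.05
ALPHA     = 0.05    # significance level used as the publication filter

published_true = published_false = 0
for _ in range(TRIALS):
    is_true = random.random() < TRUE_FRAC
    # a result is "publishable" if it comes out significant
    significant = random.random() < (POWER if is_true else ALPHA)
    if significant:
        if is_true:
            published_true += 1
        else:
            published_false += 1

total = published_true + published_false
print(f"published: {total}, false positives: {published_false}"
      f" ({published_false / total:.0%})")

Under these assumptions roughly a third of the "significant", and
hence publishable, results are false, even though every experimenter
honestly applied the 5% threshold.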

The fact that experimental data tends not to be kept very long
is another problem for those of us interested in checking past results:
http://www.cell.com/current-biology/retrieve/pii/S0960982213014000

Some of physics' famous experiments, e.g., Millikan's oil drop
experiment, had a great deal of confirmation bias built into their
original reporting:
http://www.albany.edu/~scifraud/data/sci_fraud_0732.html

I did see a photograph of a page in Millikan's original notebook
that showed exactly what he had done with his data, but I cannot find
the link.

-- 
Derek M. Jones                  tel: +44 (0) 1252 520 667
Knowledge Software Ltd          blog:shape-of-code.coding-guidelines.com
Software analysis               http://www.knosof.co.uk
