[SystemSafety] Bounded rationality or ignorance?

Robert P. Schaefer rps at mit.edu
Thu Oct 11 15:15:57 CEST 2018


In the "bigger picture", given that there will always be software problems, the "bigger solution" is to consider the turn-around time from
detection of the software problem to correction of the software problem. The detection is political, (i.e. who is allowed to say
there IS a problem?). The correction is also political, (i.e. who will PAY for the correction?). As political processes tend towards
non-action (i.e. I’ve got mine, you f* off), the turnaround time from problem detection (by someone without political power)
to problem fix (by someone who doesn’t want to pay for it) tends towards infinity unless suitible forces are pressed into
action (i.e. whateve counter-balancing political and legal forces that exist, align to do good).


On Oct 11, 2018, at 8:59 AM, Olwen Morgan <olwen at phaedsys.com> wrote:



The points Peter and Martyn raise about why software engineers make poor decisions are IMO well worth exploring.

Behavioural economists use what are known as "bounded rationality models" to model the behaviour of economic actors in cases where their decision making can be affected by (among other things):

(a)    the tractability of the decision problem,

(b)    the cognitive limitations of their minds, and

(c)    the time available to make the decision.

Here's an example of (b) and (c) (sort-of): A project has decided to use MCDC coverage as the test coverage domain to be used in unit testing. The aim is to get 100% MCDC coverage of developed code units. Owing to slippage, a manager decides that to make up for lost time, the project will stop unit testing when 80% MCDC coverage has been achieved.
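To make the gap between the 80% and 100% targets concrete, here is a minimal sketch in C. The decision function and test vectors below are purely illustrative (they are not from any real project): even a single decision with three conditions needs four carefully paired test vectors to reach full MC/DC, because each condition must be shown to flip the outcome independently of the others. Vectors dropped to recover schedule are exactly where untested independent effects hide.

/* Illustrative sketch only: 100% MC/DC of the decision  a && (b || c).
 * Three conditions, so at least n+1 = 4 vectors are needed; each pair
 * noted below differs in exactly one condition and flips the outcome.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical decision under test. */
static bool decision(bool a, bool b, bool c)
{
    return a && (b || c);
}

int main(void)
{
    struct { bool a, b, c, expected; } vec[] = {
        { true,  true,  false, true  },  /* with row 1: only 'a' differs, outcome flips */
        { false, true,  false, false },
        { true,  false, false, false },  /* with row 0: only 'b' differs, outcome flips */
        { true,  false, true,  true  },  /* with row 2: only 'c' differs, outcome flips */
    };

    for (size_t i = 0; i < sizeof vec / sizeof vec[0]; i++) {
        bool got = decision(vec[i].a, vec[i].b, vec[i].c);
        assert(got == vec[i].expected);
        printf("vector %zu: decision = %d\n", i, got);
    }
    puts("All 4 vectors pass: 100% MC/DC of this one decision.");
    puts("Dropping any vector leaves one condition's independent effect untested.");
    return 0;
}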

Here we have, typically, a manager who does not realise the risks involved in settling for only 80% coverage. Is this a "cognitive limitation" or just ignorance? Also, the factor involved was not the time available to make the decision but the delay that would accrue if the project stuck to the 100% coverage target. OK, let's suppose that these quibbles are actually implied to be part of (a), (b), and (c) above. In any case, cutting testing short is a very common way in which managers seek to recover schedule on delayed projects, and the scenario above is far from unrealistic.

Now the question: How does a bounded rationality model cope with this kind of behaviour? One way to look at it is that the manager makes a decision based on how he feels he will be evaluated for his performance on the project. Unfortunately that does not actually capture the problem that the manager is, more than likely, mis-evaluating technical risks because he is technically ignorant - as many lamentably are. Using the concepts of bounded rationality, one may easily come up with a narrative that describes the problem by citing the factor of "ignorance" but, as is lamentably common in behavioural economic models, that ends up merely as an exercise in labelling and does not capture the inner dynamics of decision-making hampered by ignorance.

For this reason I'm really not optimistic about the prospects for behavioural economics actually helping in correcting the problems of poor technical decision-making. In fact, though I won't set any of them out here, after much time reading in economics, my opinions on what I've seen of economics and economists are, even on a good day, largely unprintable.

Olwen


_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
