When common sense goes wrong

Dear colleagues,

Are Jacques Derrida, Michel Foucault and Jürgen Habermas famous because their work is more insightful than that of others? Or could something else be involved?

Is Wollongong University's increasing stature due to its policies? Or are other factors more important?

One of the shortcomings of common sense is that explanations are given only after we know the answer. This is the claim of Duncan J. Watts in Everything is obvious: when common sense fails (Atlantic Books, 2011). Watts, with a degree in physics and a PhD in engineering, worked as a sociologist at Columbia University and now works for Yahoo! Research. His book is an accessible treatment of a range of research with surprising conclusions for everyday life - and academic work.

One of Watts' examples is Leonardo da Vinci's famous painting the Mona Lisa. Is it famous because of its special characteristics? That's what many say, but Watts argues that circumstances, including a good deal of luck, led to the popularity of the painting. People then assumed that it must be special because of intrinsic features. Watts: "this kind of circular reasoning - X succeeded because X had the attributes of X - pervades commonsense explanations for why some things succeed and others fail" (p. 60).

Watts and co-workers have carried out fascinating research into the way judgements about pop songs can diverge purely by chance. Listeners are more likely to download songs that others have downloaded, causing a bandwagon effect. The intrinsic quality of the song, as independently rated by listeners, explains only a portion of a song's success.
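The dynamic can be illustrated with a toy cumulative-advantage simulation (my own sketch, loosely inspired by the Salganik-Dodds-Watts "music lab" experiments; the parameter values and the 80/20 split between social copying and independent choice are illustrative assumptions, not the published design). Every song has identical intrinsic appeal, yet different random histories produce different winners:

```python
import random

def simulate_market(n_songs=8, n_listeners=500, social_weight=0.8, seed=0):
    """Toy bandwagon model: songs are identical in quality, but most
    listeners pick in proportion to existing download counts."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # seed each song with one download
    for _ in range(n_listeners):
        if rng.random() < social_weight:
            # copy the crowd: choose in proportion to current popularity
            pick = rng.choices(range(n_songs), weights=downloads)[0]
        else:
            # independent listener: choose uniformly at random
            pick = rng.randrange(n_songs)
        downloads[pick] += 1
    return downloads

# Same identical songs, twenty parallel "worlds" (seeds): the
# top song differs from world to world, purely by chance.
winners = {max(range(8), key=lambda i: simulate_market(seed=s)[i])
           for s in range(20)}
print("distinct winners across worlds:", winners)
```

Since the songs are indistinguishable by construction, any divergence in the winners across runs is attributable to chance and social feedback alone, which is the point of the experiment.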

The normal assumption is that success implies superior performance. So when companies do well, analysts try to figure out why, looking at leadership, business strategies and the like. But there are intrinsic flaws in this approach, because some of these businesses, with the same leaders and strategies, fail miserably later on. So the leaders and strategies can't be the full explanation.

The trouble is that there are no controlled trials. History cannot be repeated thousands of times to see whether things might have turned out otherwise. Because history did happen the way it did, people assume that it must have been that way - there must be an explanation. Pure chance is not favoured as an historical explanation.

Watts has a fair bit to say about history. He says it can't be written by people in the middle of it, because the very concepts used to describe events require hindsight. For example, the French Revolution is a concept that was not available to those storming the Bastille.

Another reasoning flaw analysed by Watts is the treatment of groups - such as governments or corporations - as individuals. This sort of shorthand, for example "Canberra today announced ..." masks a confusion between individuals and groups.

When people make predictions, they regularly make mistakes, but routinely ignore them. This is okay for everyday purposes, but not good for strategic planning. One of the problems is that we don't know today the sorts of issues that need to be anticipated. In retrospect, 9/11 tells planners about the risks of terrorists boarding aeroplanes with box cutters. But this is using hindsight. In 2001, there was no way of knowing whether the key question was airline security or risks from biological or chemical agents or something else.

So to return to the question of famous intellectuals: it is quite possible that some of them are prominent due to factors other than their intellectual contributions. Perhaps they were simply in the right place at the right time, gaining recognition that snowballed as more and more academics cited them.

Watts draws on examples from sport, business and politics, but not education. Yet it is easy to see the relevance of his analysis in all sorts of academic areas. For example, in predicting student demand or emerging research strengths, one traditional approach is to rely on experts. Watts cites much evidence that combining predictions from many individuals, experts or non-experts, is far better than relying on one person - especially yourself! Using a simple model can do nearly as well.
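Why aggregation helps can be seen in a minimal sketch (my illustration of the general wisdom-of-crowds effect, not an example from Watts' book; the noise level and number of forecasters are arbitrary assumptions). Independent errors tend to cancel when forecasts are averaged:

```python
import random
import statistics

def crowd_vs_individual(truth=100.0, n_forecasters=50, noise=20.0, seed=1):
    """Each forecaster guesses the true value with independent noise.
    Compare the typical individual's error with the error of the
    crowd's average guess."""
    rng = random.Random(seed)
    guesses = [rng.gauss(truth, noise) for _ in range(n_forecasters)]
    individual_error = statistics.mean(abs(g - truth) for g in guesses)
    crowd_error = abs(statistics.mean(guesses) - truth)
    return individual_error, crowd_error

ind, crowd = crowd_vs_individual()
print(f"average individual error: {ind:.1f}, crowd error: {crowd:.1f}")
```

The crowd's error shrinks roughly with the square root of the number of independent forecasters, which is why averaging many predictions usually beats asking one expert.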

One of the most important lessons is that success is not necessarily a reflection of superior strategies. Methods need to be evaluated on their own, without knowing the outcomes. That's what happens in science, with double-blind trials. Yet all too often in social science and policy, assessments are made based on single cases in which the result is known.

"The Halo Effect ... turns conventional wisdom about performance on its head. Rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process" (p. 221). Apply this to Excellence in Research for Australia or grant outcomes or various key performance indicators and see where it gets you.

A lot of what Watts has to say is counter-intuitive. Of course. He's writing about when common sense fails. He is critical of social science at times, but ultimately he is a powerful defender against the criticisms and unrealistic expectations of hard scientists. Everything is obvious is a highly stimulating book that is well worth attention and reflection. It might even give you some good ideas for research or organisational performance.

Brian Martin
15 October 2011

I thank Majken Sørensen for helpful comments on a draft of this comment.

