The philosopher Isaiah Berlin liked to pose a seemingly rhetorical question: “If we have the possibility of knowing the truth, why would we choose to be deceived?” To this puzzling question the psychologist Daniel Kahneman has uncovered an answer: finding the truth demands too much effort and is usually not sufficiently rewarding.
Daniel Kahneman suggests we have two ways of thinking when we make decisions. System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to effortful mental activities, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice and concentration. [p.21] Unless there is very strong pressure otherwise, we will typically employ System 1 thinking and settle for the result, activating System 2 to check our conclusions only with considerable reluctance. On the whole, the problems caused by the resulting errors are less troublesome than the challenges created by checking the evidence and correcting our results. Excessive use of System 2 would simply paralyse us without making our lives better. It is worth adding that System 2 not only strains our own capacities but also carries social costs - it is antisocial to question lazy opinions that could be left undisturbed.
System 1 decision making has the advantage of speed and is a great help when evading tigers in dense jungle, something we all have to do whenever we find ourselves in a dense jungle with tigers. For such reasons, we have evolved to place great value on its rapid judgements. In the very different environment created by civilisation and modern life, there are many more occasions when we really need to employ System 2 thinking, and we are not well adapted for this. We persist in applying System 1 thinking, and the resulting errors of judgement have substantial consequences.
Jumping to conclusions is widespread. We place too much faith in evidence from very small or unrepresentative samples, ending up with a view of the world that is oversimplified and more coherent than the data really justify.
Our predilection for causal thinking predisposes us to assume that an association reveals cause and effect, and makes us reluctant to accept that it arises by chance. When events do arise by chance, we are invariably wrong to seek a causal explanation.
Anchoring effects are very reliable in experiments. They may arise from “priming” - when asked to estimate any quantity, people’s answers are consistently skewed by first being asked whether the figure should be more or less than a given number, which acts as an “anchor.” They may also arise from insufficient adjustment, as when a negotiator starts out with a ridiculous proposition that is far too high or too low. The psychological effect of anchoring makes us far more suggestible than we would wish to admit, and plenty of people are prepared to exploit our gullibility in this way.
The “availability heuristic” operates when we judge frequency on the basis of how easily instances come to mind. We are thus influenced by salient and dramatic events, and by personal experiences rather than things that happen to other people. People are more vulnerable to availability effects when they rely on System 1 thinking, and less so with System 2. People assess risk with regard to availability, with the result that the public often form concerns at odds with the opinions of risk experts. There is some evidence that the public are not always mistaken: they sometimes place perfectly legitimate values on risks that differ from the way experts value them. However, there is also evidence of a so-called “availability cascade,” in which popular perceptions generate a media and political outcry at odds with the rational evidence of experts, often resulting in ill-considered and economically defective legislation and regulation.
Any question about probability or likelihood is difficult, and we tend to answer an easier question instead. One of the easier routes is an automatic assessment of representativeness - in other words, we rely on stereotypes, for example when comparing men and women drivers. Stereotypes are sometimes plausible but often fail, and the problem is aggravated because we are far too willing to make predictions about highly unlikely events with wholly inadequate information. In particular, we fail to establish from the outset a plausible base rate for the likely frequency of an event, and we too often fail to ask whether the evidence to hand is sufficiently “diagnostic.” Representativeness can be a stronger influence than logic when evaluating likelihoods, as shown in the “conjunction fallacy.” In the “Linda Problem,” subjects consistently judged that she was more likely to be a feminist bank teller than to be a bank teller, even though the second category (bank tellers) is larger than and includes the first (feminist bank tellers) by definition, so the conjunction cannot possibly be more likely.
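The arithmetic behind the conjunction fallacy is worth spelling out. Whatever probabilities one assigns (the figures below are purely illustrative, not from the book), the probability of a conjunction can never exceed that of either of its components:

```python
# Hypothetical, illustrative probabilities for the Linda problem
p_bank_teller = 0.05            # P(Linda is a bank teller)
p_feminist_given_teller = 0.60  # P(Linda is a feminist, given she is a teller)

# The conjunction is a product of probabilities, each at most 1,
# so it can only shrink relative to either component
p_feminist_teller = p_bank_teller * p_feminist_given_teller

print(f"P(bank teller)           = {p_bank_teller}")
print(f"P(feminist bank teller)  = {p_feminist_teller}")
assert p_feminist_teller <= p_bank_teller  # holds for ANY choice of figures
```

However Linda is described, and however strongly the description fits the feminist stereotype, the inequality holds; the experimental subjects’ judgement reverses it.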
System 1 represents categories in terms of norms and prototypical exemplars. For people, this means that we refer to stereotypes automatically. This feature of System 1 can sometimes yield good enough judgements but statistical facts or general statements of any kind which may be valid when applied to a group or a population are not usually valid when applied to individual members of that group or population.
Regression to the mean is a statistical phenomenon that routinely catches us out. An especially good performance will almost of necessity be followed by a less good one closer to the norm, and an especially poor performance by improvement towards the norm, regardless of how we respond. In general, every exceptional result should be followed by regression towards the mean. Francis Galton established a general rule: whenever two variables are not perfectly correlated, they will display regression. So, for example, because the correlation of intelligence scores for spouses is less than perfect, unusually intelligent people will normally marry someone of lesser intelligence. It is terribly difficult to accept this necessary consequence of simple statistics, and we unavoidably search for causal explanations where none is required.
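A small simulation makes the effect tangible (a sketch under my own illustrative assumptions: each observed score is stable skill plus transient luck). Nothing in the model rewards or punishes the performers, yet the top group reliably falls back on the second measurement:

```python
import random
import statistics

random.seed(1)

# Model: observed score = stable skill + transient luck (illustrative figures)
n = 10_000
skill = [random.gauss(100, 10) for _ in range(n)]
day1 = [s + random.gauss(0, 10) for s in skill]
day2 = [s + random.gauss(0, 10) for s in skill]

# Select the top 10% of day-1 performers, then look at their day-2 scores
cutoff = sorted(day1)[int(0.9 * n)]
top = [(d1, d2) for d1, d2 in zip(day1, day2) if d1 >= cutoff]
mean1 = statistics.mean(d1 for d1, _ in top)
mean2 = statistics.mean(d2 for _, d2 in top)
print(f"top group mean - day 1: {mean1:.1f}, day 2: {mean2:.1f}")
```

The day-2 average sits well below the day-1 average but still above the population mean of 100: the group was selected partly for good luck, and luck does not repeat. No causal story about complacency or pressure is needed.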
I have reviewed only a part of the material presented in this lively and thought-provoking book. Kahneman himself concentrates on the difficulty of educating people so that they will make better judgements and more rational choices. He does not think it can really be achieved - our brains just work this way.
To my mind, the more sinister consideration is that not only are all humans susceptible to being manipulated into making or accepting irrational judgements, but there are also powerful people who understand this vulnerability very well and put it to systematic use. Education is unlikely to inoculate the public as a whole against the power of manipulation and distraction which these findings make available to governments and powerful interest groups. When watching political debate in the media, it becomes impossible to miss the widespread and often very well-practised employment of such techniques in ways that can be deeply cynical, and this just cannot be an accident.
I started working on a few examples and suddenly appreciated that they would all be seen as contentious, provocative and worse, precisely because the examples are so effective. I decided it is not a useful diversion here. I have to add the provocative remark, which may be misunderstood, that Kahneman and his colleague Tversky did much of their work in Israel and for the Israel Defense Forces, and also for the U.S. military, which Kahneman often mentions in passing as though that is quite neutral information. One can readily see why those agencies would be happy to support their work.