Two leading British economists have launched a scathing critique of the unrealistic assumptions their peers have added to conventional economics to make it more mathematically tractable – and they haven't spared one of my great favourites, "behavioural economics". It has lost its way, too.
The economists are Professor John Kay of Oxford University and Professor Mervyn King, a former governor of Britain's central bank, the Bank of England. Their criticism is in the book, Radical Uncertainty: Decision-Making for an Unknowable Future.
As I wrote in this column last week, economists have been working for decades to make their discipline more academically “rigorous” by using mathematical techniques better suited to the “stationary” physical world – where everything that happens is governed by the unchanging laws of physics – or to games of chance, where the probability of something happening can be calculated easily and accurately.
Kay and King call this modelling “small worlds”, where the right and wrong answers are clearly identified, whereas the large worlds occupied by consumers, businesses and government policymakers are characterised by “radical uncertainty”. We must make decisions with so little of the information we need – about the present and the future – that we can never know whether we jumped the right way, even after the event.
Economists’ analysis and predictions are based on the assumption that everything individuals and businesses do is “rational” – a word to which they attach their own, highly technical meaning. They think it means the decision-maker was able to consider every possibility and think completely logically.
Behavioural economics – which has been a thing for at least 40 years – involves economists using the findings of psychology to help explain the way people actually behave when they make economic decisions. It takes the assumption that people always act “rationally” and subjects it to empirical testing. Where’s the hard evidence that people really behave that way?
It shouldn’t surprise you that behavioural economists have found much behaviour doesn’t fit the economists’ definition of rational. They’ve done many laboratory experiments asking people (usually their students) questions about whether they prefer A, B, C or D, and have put together a list of about 150 “biases” in the way people think.
These "biases" include that people suffer from optimism and overconfidence, overestimating the likelihood of favourable outcomes. We are guilty of "anchoring" – attaching too much weight to the limited information we hold when we start to think about a problem. We are victims of "loss aversion" – hating losses more than we love the equivalent gains. And much more.
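For readers who like to see the numbers, loss aversion has a standard formalisation in Kahneman and Tversky's prospect theory (the column doesn't cite this; it's background). Their estimated "value function" treats a loss as weighing roughly twice as heavily as an equal-sized gain:

```latex
% Prospect-theory value function (Tversky & Kahneman, 1992 estimates)
% x > 0 is a gain, x < 0 is a loss
v(x) =
\begin{cases}
  x^{\alpha} & \text{if } x \ge 0 \\
  -\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\; \lambda \approx 2.25
```

The parameter $\lambda$ is the loss-aversion coefficient: losing \$100 feels about as bad as winning \$225 feels good, which is what "hating losses more than we love the equivalent gains" amounts to.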
But this is where Kay and King object. As has happened before in economics, some highly critical finding is taken by the profession and reinterpreted in a way that’s less threatening to the conventional wisdom.
Over the years, I’ve written about many of these findings, taking them to mean the economists’ theory is deficient and needs to be changed.
But Kay and King claim the profession has turned this on its head, seeing the findings as meaning that a lot of people behave irrationally and need to be shown how to be more sensible.
This is an old charge against conventional economists: they don’t want to change their model to fit the real world, they want to change the world so it fits their model.
Why? Because economists think they know what behaviour is right and what’s wrong. What’s rational and what’s irrational. There is, indeed, a popular book about behavioural economics called Predictably Irrational. (The economists love the “predictable” bit – it implies they can get their own predictions right with only minor modifications.)
Kay and King object that most (though not all) of the listed "biases" are not the result of errors in beliefs or logic. Most are the product of a reality in which decisions must be made in the absence of a precise and complete description of the world in which people live.
“Real people do not optimise, calculate subjective probabilities and maximise expected utilities; not because they are lazy, or do not have the time, but because they know that they cannot conceivably have the information required to engage in such calculation,” they say.
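The calculation Kay and King say real people cannot perform is the textbook one – choose the action that maximises expected utility, given a subjective probability for every possible state of the world (this formula is standard theory, not from their book):

```latex
% Expected-utility maximisation: pick action a to maximise the
% probability-weighted utility of its outcome x(a,s) across all states s
a^{*} = \arg\max_{a \in A} \; \sum_{s \in S} p(s)\, u\!\left(x(a,s)\right)
```

Kay and King's point is that under radical uncertainty the set of states $S$ is not even listable, let alone assignable probabilities $p(s)$ – so the formula, however elegant, has nothing to operate on.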
They note that whereas the American behavioural economists led by the Nobel-prize-winning psychologist Daniel Kahneman have put a negative connotation on the "heuristics" – mental short-cuts – people take in making their decisions, a rival group led by the German psychologist Gerd Gigerenzer sees them as proof of how good humans are at coping with radical uncertainty. It's amazing how often we get it right.
Kay and King agree, saying that if humans don’t make decisions in the computer-like way economists assume we do, “it is not because we are stupid but because we are smart. And it is because we are smart that humans have become the dominant species on Earth.
“Our intelligence is designed for large worlds, not small. Human intelligence is effective at understanding complex problems within an imperfectly defined context, and at finding courses of action which are good to get us through the remains of the day and the rest of our lives. [Which aren’t the best solutions, but are “good enough”.]
“The idea that our intelligence is defective because we are inferior to computers in solving certain kinds of routine mathematical puzzles fails to recognise that few real problems have the character of mathematical puzzles.
"The assertion that our cognition is defective by virtue of systematic 'biases' or 'natural stupidity' is implausible in the light of the evolutionary origins of that cognitive ability. If it were adaptive [in the survival-of-the-fittest sense] to be like computers we would have evolved to be more like computers than we are ...
“Our knowledge of context and our ability to interpret it has been acquired over thousands of years. These capabilities are encoded in our genes, taught to us by our parents and teachers, enshrined in the social norms of our culture,” they conclude.