Showing posts with label maths. Show all posts

Monday, February 1, 2021

How economics could get better at solving real world problems

The study of economics has lost its way because economists have laboured for decades to make their social science more mathematical and thus more like a physical science. They’ve failed to see that what they should have been doing is deepening their understanding of how the behaviour of “economic agents” (aka humans) is driven by them being social animals.

In short, to be of more use to humanity, economics should have become more of a social science, not less.

This is the conclusion I draw from the sweeping criticism of modern economics made by two leading British economics professors, John Kay and Mervyn King, in their book, Radical Uncertainty: Decision-making for an unknowable future.

But don’t hold your breath waiting for economists to see the error of their ways. There are two kinds of economist: academic economists and practising economists, who work for banks, businesses and particularly governments or, these days, are self-employed as “economic consultants”.

Whenever I criticise “economists” – which I see as part of the service I provide to readers – the academics always assume I’m talking about them. It rarely occurs to them that I’m usually talking about their former students, economic practitioners – the ones who matter more to readers because they have far more direct influence over the policies governments and businesses pursue.

You see from this just how inward-looking, self-referential and self-sustaining academic economics has become. The discipline’s almost impervious to criticism. Criticism from outside the profession (including “the popular press”) can usually be dismissed as coming from fools who know no economics. If you’re not an economist, how could anything you say have merit?

But Kay and King are insiders. As governor of the Bank of England, King was highly regarded internationally. Kay has had a long career as an academic, author, management consultant, Financial Times columnist and head of government inquiries.

So their criticism will just be ignored, as has been most of the informed criticism that came before them. Their arguments will be misrepresented – such as that they seem opposed to all use of maths and statistics in economics. They’re not. But there’ll be little face-to-face debate. Too discomforting.

Trouble is, the push to increase the “mathiness” of economics has gone on for so long that all the people at the top of the world’s economics faculties got there by being better mathematicians than their rivals.

They don’t want to be told their greatest area of expertise was a wrong turn. Similarly, all the people at the bottom of the academic tree know promotion will come mainly by demonstrating how good they are at maths.

Kay and King complain that economics has become more about technique – how you do it – than about the importance of the problems it is (or isn’t) helping people grapple with in the real world. (This may help explain why, in many universities, economics is losing out to business faculties.)

In support of their case for economics needing to be more of a social science, Kay and King note there are three styles of reasoning: deductive, inductive and “abductive”. Deductive reasoning reaches logical conclusions from stated premises.

Inductive reasoning seeks to generalise from observations, and may be supported or refuted by later experience. Abductive reasoning seeks to provide the best explanation for a particular event. We do this all the time. When we say, for instance, “I think the bus is late because of congestion in Collins Street”.

Kay and King say all three forms of reasoning have a role to play in our efforts to understand the world. Physical scientists (and mathy economists) prefer to stick to deductive reasoning. But this is possible only when we study the “small world” where all the facts and probabilities are known – the world of the laws of physics and games of chance.

In the “large world”, where we must make decisions with far from complete knowledge, we have to rely more on inductive and abductive reasoning. “When events are essentially one-of-a-kind, which is often the case in the world of radical uncertainty, abductive reasoning is indispensable,” they say.

And, so far from thinking “as if” we were human calculating machines, “humans are social animals and communication plays an important role in decision-making. We frame our thinking in terms of narratives.”

Able leaders – whether in business, politics or everyday life – make decisions, both personal and collective, by talking with others and being open to challenge from them.

The Nobel prize-winning economist Professor Robert Shiller, of Yale, has cottoned on to the importance of narratives in explaining the behaviour of financial markets, but few others have seen it. Most academic economists just want to be left alone to play the mathematical games they find so fascinating.


Saturday, January 30, 2021

Humans beat computers at knowing when to leap into the unknown

Two leading British economists who’ve launched a scathing critique of the unrealistic assumptions their peers have added to conventional economics to make it more tractable mathematically have not spared one of my great favourites: “behavioural economics”. It has lost its way, too.

The economists are Professor John Kay of Oxford University and Professor Mervyn King, a former governor of Britain’s central bank, the Bank of England. Their criticism is in the book, Radical Uncertainty: Decision-making for an unknowable future.

As I wrote in this column last week, economists have been working for decades to make their discipline more academically “rigorous” by using mathematical techniques better suited to the “stationary” physical world – where everything that happens is governed by the unchanging laws of physics – or to games of chance, where the probability of something happening can be calculated easily and accurately.

Kay and King call this modelling “small worlds”, where the right and wrong answers are clearly identified, whereas the large worlds occupied by consumers, businesses and government policymakers are characterised by “radical uncertainty”. We must make decisions with so little of the information we need – about the present and the future – that we can never know whether we jumped the right way, even after the event.

Economists’ analysis and predictions are based on the assumption that everything individuals and businesses do is “rational” – a word to which they attach their own, highly technical meaning. They think it means the decision-maker was able to consider every possibility and think completely logically.

Behavioural economics – which has been a thing for at least 40 years – involves economists using the findings of psychology to help explain the way people actually behave when they make economic decisions. It takes the assumption that people always act “rationally” and subjects it to empirical testing. Where’s the hard evidence that people really behave that way?

It shouldn’t surprise you that behavioural economists have found much behaviour doesn’t fit the economists’ definition of rational. They’ve done many laboratory experiments asking people (usually their students) questions about whether they prefer A, B, C or D, and have put together a list of about 150 “biases” in the way people think.

These “biases” include that people suffer from optimism and overconfidence, overestimating the likelihood of favourable outcomes. We are guilty of “anchoring” – attaching too much weight to the limited information we hold when we start to think about a problem. We are victims of “loss aversion” – hating losses more than we love the equivalent gains. And much more.

But this is where Kay and King object. As has happened before in economics, some highly critical finding is taken by the profession and reinterpreted in a way that’s less threatening to the conventional wisdom.

Over the years, I’ve written about many of these findings, taking them to mean the economists’ theory is deficient and needs to be changed.

But Kay and King claim the profession has turned this on its head, seeing the findings as meaning that a lot of people behave irrationally and need to be shown how to be more sensible.

This is an old charge against conventional economists: they don’t want to change their model to fit the real world, they want to change the world so it fits their model.

Why? Because economists think they know what behaviour is right and what’s wrong. What’s rational and what’s irrational. There is, indeed, a popular book about behavioural economics called Predictably Irrational. (The economists love the “predictable” bit – it implies they can get their own predictions right with only minor modifications.)

Kay and King object that most (though not all) the listed “biases” are not the result of errors in beliefs or logic. Most are the product of a reality in which decisions must be made in the absence of a precise and complete description of the world in which people live.

“Real people do not optimise, calculate subjective probabilities and maximise expected utilities; not because they are lazy, or do not have the time, but because they know that they cannot conceivably have the information required to engage in such calculation,” they say.

They note that whereas the American behavioural economists led by the Nobel-prize-winning psychologist Daniel Kahneman have put a negative connotation on the “heuristics” – mental short-cuts – people take in making their decisions, a rival group led by the German psychologist Gerd Gigerenzer sees it as proof of how good humans are at coping with radical uncertainty. It’s amazing how often we get it right.

Kay and King agree, saying that if humans don’t make decisions in the computer-like way economists assume we do, “it is not because we are stupid but because we are smart. And it is because we are smart that humans have become the dominant species on Earth.

“Our intelligence is designed for large worlds, not small. Human intelligence is effective at understanding complex problems within an imperfectly defined context, and at finding courses of action which are good to get us through the remains of the day and the rest of our lives. [Which aren’t the best solutions, but are “good enough”.]

“The idea that our intelligence is defective because we are inferior to computers in solving certain kinds of routine mathematical puzzles fails to recognise that few real problems have the character of mathematical puzzles.

“The assertion that our cognition is defective by virtue of systematic ‘biases’ or ‘natural stupidity’ is implausible in the light of the evolutionary origins of that cognitive ability. If it were adaptive [in the survival-of-the-fittest sense] to be like computers we would have evolved to be more like computers than we are …

“Our knowledge of context and our ability to interpret it has been acquired over thousands of years. These capabilities are encoded in our genes, taught to us by our parents and teachers, enshrined in the social norms of our culture,” they conclude.


Friday, January 22, 2021

Why economists get so many of their predictions wrong

Sometimes the study of economics – which has gone on for at least 250 years – can take a wrong turn. Many economists would like to believe their discipline is more advanced than ever, but in the most important economics book of 2020 two leading British economists argue that, in its efforts to become more “rigorous”, it’s gone seriously astray.

The book is Radical Uncertainty: Decision-making for an unknowable future, by Professor John Kay of Oxford University and Professor Mervyn King, a former governor of the Bank of England.

The great push in economics since World War II has been to make the subject more rigorous and scientific by expressing its arguments and reasoning in mathematical equations rather than words and diagrams.

The physical sciences have long been highly mathematical. Economists are sometimes accused of trying to distinguish their discipline from the other social sciences by making it more like physics.

Economics is now so dominated by maths it’s almost become a branch of applied mathematics. Sometimes I think that newly minted economics lecturers know more about maths than they do about the economy.

Kay and King don’t object to the greater use of maths (and I think economists have done well in using advanced statistical techniques to go beyond finding mere correlations to identifying causal relationships).

But the authors do argue that, in their efforts to make conventional economic theory more amenable to mathematical reasoning, economists have added some further simplifying assumptions about the way people and businesses and economic policymakers are assumed to behave which take economic theory even further away from reality.

They note that when, in 2004, the scientists at NASA launched a spacecraft to orbit Mercury, they calculated that it would travel 4.9 billion miles and enter orbit in March 2011. They got it exactly right.

Why? Because the equations of planetary motion have been well understood since the 17th century. Because those equations describing the way the planets move are “stationary” – meaning they haven’t changed in millions of years. And because nothing that humans do or believe has any effect on the way the planets move.

Then there’s probability theory. You know that, in games of chance, the probability of throwing five heads in a row with an unbiased coin, or the probability that the next card you’re dealt is the ace of spades can be exactly calculated.
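Those games-of-chance probabilities really can be calculated exactly. If you want to check them, a few lines of Python do the arithmetic, using exact fractions rather than decimals:

```python
from fractions import Fraction

# Probability of throwing five heads in a row with an unbiased coin:
# each independent flip has probability 1/2, so multiply five of them.
p_five_heads = Fraction(1, 2) ** 5
print(p_five_heads)  # 1/32

# Probability the next card dealt from a full, well-shuffled deck
# is the ace of spades: one favourable card out of 52.
p_ace_of_spades = Fraction(1, 52)
print(p_ace_of_spades)  # 1/52
```

That's the "small world" Kay and King are talking about: every outcome and its probability is known in advance.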

In 1921, Professor Frank Knight of Chicago University famously argued that a distinction should be drawn between “risk” and “uncertainty”. Risk applied to cases where the probability of something happening could be calculated with precision. Uncertainty applied to the far more common cases where no one could say with any certainty what would happen.

Kay and King argue that economics took a wrong turn when Knight’s successor at Chicago, a chap called Milton Friedman, announced this was a false distinction. As far as he was concerned, it could safely be assumed that you could attach a probability to each possible outcome, multiply each outcome by its probability, and add the results to get the “expected outcome”.

So economists were able to get on with reducing everything to equations and using them to make their predictions about what would happen in the economy.

The authors charge that, rather than facing up to all the uncertainty surrounding the economic decisions humans make, economics has fallen into the trap of using a couple of convenient but unwarranted assumptions to make economics more like a physical science and like a game of chance where the probability of things happening can be calculated accurately.

There’s a big element of self-delusion in this. If you accuse an economist of thinking they know what the future holds, they’ll vehemently deny it. No one could be so silly. But the truth is they go on analysing economic behaviour and making predictions in ways that implicitly assume it is possible to know the future.

Kay and King make three points in their book. First, the world of economics, business and finance is “non-stationary” – it’s not governed by unchanging scientific laws. “Different individuals and groups will make different assessments and arrive at different decisions, and often there will be no objectively right answer, either before or after the event,” they say.

Why not? Because we so often have to make decisions while not knowing all there is to know about the choices and consequences we face in the world right now, let alone what will happen in the future.

Second, the uncertainty that surrounds us means people cannot and do not “optimise”. Economics assumes that individuals seek to maximise their satisfaction or “utility”, businesses maximise shareholder value and public policymakers maximise social welfare – each within the various “constraints” they face.

But, in reality, no one makes decisions the way economic textbooks say they do. Economists know this, but have convinced themselves they can still make accurate predictions by assuming people behave “as if” they were following the textbook. That is, people do it unconsciously and so behave “rationally”.

Kay and King argue that people don’t behave rationally in the narrow way economists use that word to mean, but neither do they behave irrationally. Rather, people behave rationally in the common meaning of the word: they do the best they can with the limited information available.

Third, the authors say humans are social animals, which means communication with other people plays an important role in the way people make decisions. We develop our thinking by forming stories (“narratives”) which we use to convince others and to debate which way we should jump. We’ve built a market economy of extraordinary complexity by developing networks of trust, cooperation and coordination.

We live in a world that abounds in “radical” uncertainty – having to make decisions without all the information we need. Rather than imagining they can understand and predict how people behave by doing mathematical calculations, economists need to understand how humans press on with life and business despite the uncertainty – and usually don’t do too badly.


Saturday, April 11, 2020

Some major contagions have nothing to do with you-know-what

It’s a long weekend so, though we’re barred from enjoying it in the usual way, let’s at least forget the V-word. How about a quiz?

Let’s say the government is preparing for the outbreak of an unusual disease (no, not that kind of disease) that, should we take no action, is expected to kill 600 people. The government could act to combat the disease in either of two ways.

If program A is adopted, 200 people will be saved. If program B is adopted, there’s a one-third chance that 600 people will be saved, and a two-thirds chance that no one will be saved. Which one would you choose?

If you chose A, congratulations. You’re in good company. When this psychology experiment is run, about 72 per cent of subjects favour A and only 28 per cent favour B.

But then the government consults the epidemiologists. Their advice is: forget A and B, and consider program C or program D. If C is adopted, 400 people will die. If program D is adopted, there’s a one-third chance no one will die and a two-thirds chance that 600 will die. Which one would you choose?

If you chose D, more applause. In laboratory experiments, that’s what 78 per cent of subjects choose, leaving only 22 per cent choosing C.

But if you look at the four options again you find that program A and program C are the same. Under A, 200 out of 600 are saved; under C, 400 out of 600 die. It’s just that A highlights the positive, whereas C highlights the negative.

That 72 per cent of subjects favoured A, but only 22 per cent favoured C, tells us that most of us instinctively favour the safer, more certain outcome. Program B, remember, contained a two-thirds chance that no one would be saved. This instinctive preference confirms economists’ conventional assumption that most people are “risk-averse”.

But a closer look also reveals that program B and program D are the same. Program B offers a one-third chance that 600 people will be saved and a two-thirds chance that no one will be saved, whereas program D offers a one-third chance no one will die and a two-thirds chance that 600 will die.

(If you can’t see that, remember that, in probability theory, the expected outcome is each possible outcome multiplied by the probability of it happening, added together. Measured in lives saved, B is ⅓(600) + ⅔(0) = 200. And D – a one-third chance that all 600 are saved and a two-thirds chance that none are – is also ⅓(600) + ⅔(0) = 200.)
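If you’d rather let a computer check the arithmetic, here’s a short Python sketch. The way the four programs are encoded as probability-weighted outcomes, measured in lives saved, is my own, not the experimenters’:

```python
from fractions import Fraction

TOTAL = 600  # people at risk if nothing is done

# Each program as a list of (probability, people_saved) outcomes.
# "400 die" is re-expressed as 600 - 400 = 200 saved, and so on.
programs = {
    "A": [(Fraction(1), 200)],
    "B": [(Fraction(1, 3), 600), (Fraction(2, 3), 0)],
    "C": [(Fraction(1), TOTAL - 400)],
    "D": [(Fraction(1, 3), TOTAL - 0), (Fraction(2, 3), TOTAL - 600)],
}

def expected_saved(outcomes):
    # Expected value: each outcome times its probability, summed.
    return sum(p * saved for p, saved in outcomes)

for name, outcomes in programs.items():
    print(name, expected_saved(outcomes))  # every program comes out at 200
```

All four programs print the same expected value of 200 lives saved – the differences are entirely in the framing.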

But if options B and D are the same thing expressed in different ways, how come the experiments show only 28 per cent of subjects choosing B, but 78 per cent choosing D? It’s because, relative to option C, which offered only the certainty that 400 people would die, option D offered a one-third chance that no one would die, and most subjects thought that was a risk worth taking.

This shows that, while it’s generally true that most people are risk-averse, as conventional economics assumes, a more powerful human characteristic – which conventional economics ignores – is that most of us are “loss-averse”.

A key insight of behavioural economics is that we hate losing something much more than we love gaining something of the same value. So much so that, surprisingly, we’re willing to run risks to avoid any loss.

If you hadn’t noticed, when you look closely you see that all four options offered the same “expected value”: 200 people saved, 400 lost. If everyone had realised this at the time, they should have been equally divided between the options.

Why were we so sure that A and C were much more attractive than B and D? Well, one possibility is that most of us aren’t much good at maths. But the more important explanation is that we are heavily influenced by the way a proposition is presented to us – by the way it’s “framed”, as psychologists say. The same proposition can be packaged in a way we find attractive or repellent.

This, too, is a truth that conventional economics knows nothing of, but behavioural economics – the school of economic thought that uses psychology to throw light on economic issues – has brought to economists’ attention.

Putting it differently, the choices we make are heavily influenced by the context in which we make them. This is one of the key arguments advanced by Robert Frank, an economics professor at Cornell University, in his new book, Under the Influence.

Frank notes that standard economic theory says the spending decisions we make depend only on our incomes and relative prices. People’s assessments of their needs and wants are assumed to be completely independent of the spending decisions of others around them.

But this too is where the assumptions of standard theory are unrealistic. In real life, the things we buy and do are often heavily influenced by the “context” of what our friends are buying and doing.

We wear the clothes we think are fashionable, and we judge what’s fashionable by what our friends are wearing. The best way to predict whether a young person will take up smoking is whether their friends smoke.

We have an impulse to conform – which is stronger than we often realise. That’s why we can’t resist buying toilet paper when others are grabbing it, or selling our shares when others are quitting the market.

Psychologists call this phenomenon “behavioural contagion” – our tendency to mimic the behaviour of others. When some things start to become popular, they often become very popular. Same if they start becoming unpopular.

Frank notes that our tendency to copy what others are doing can have positive consequences (as when people exercise more because their friends are doing it) or negative consequences (as when we drink heavily because the people we live with are).

He argues that economists ought to be more conscious of behavioural contagion because of the opportunities it presents for governments to use taxation to encourage us to make better choices.

Saturday, January 28, 2017

Think you're pretty sharp? Try this simple quiz

It's the last (unofficial) holiday weekend of summer before the new year really gets down to business on Monday. So let's have some fun. Try yourself on this simple quiz.
Q1: Linda is 31 years old, single, outspoken and very bright. At uni, she majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which of these two is more likely: that Linda is a bank teller or that Linda is a bank teller and is active in the feminist movement?
If you went for a feminist bank teller - sorry, wrong.
Q2: As an investor you're trying to decide between buying shares in three listed companies when you notice that one of them's been chosen as company of the year by a business magazine. Would that make it the best bet of the three?
Q3: You're trying to decide which super fund to put your savings in, so you look up the figures to see which one had the highest returns last year. Would it be the best bet?
If you answered yes to those questions you're likely to be disappointed.
Q4: The instructors of fighter pilots found that pilots who were praised when they'd flown well always performed worse the next time, whereas those who were criticised for performing badly always performed better the next time.
The instructors concluded that criticism was more effective than praise. Were they right?
If you answered yes - sorry, wrong.
Q5: You flip an unbiased coin and it comes up five heads in a row. Which is more likely from the sixth throw: heads or tails?
Q6: Which is the more likely birth order in a family of six kids: B B B G G G or G B B G B G?
In the first case the sixth throw is just as likely to be another head as a tail. In the second, the two birth orders are equally likely.
Q7: Which would you prefer, an operation with a 90 per cent success rate, or a different one with a 10 per cent failure rate?
Answer: Have another think about the question.
Apart from the investment questions (which I threw in to please the business editor) all those questions come from best-selling business writer Michael Lewis' latest book, The Undoing Project.
It's the story of two Israeli-American academic psychologists, Daniel Kahneman and Amos Tversky, who demonstrated how wide of the mark is the assumption of conventional economics that we're all "rational" - coldly logical - in the decisions we make, thus giving a huge push to the new school of behavioural economics.
A lot of their experiments involved our understanding of maths. Don't feel bad if you failed many of them. Most of us do, even people good at maths.
The moral is, however much or little people know about maths, particularly the rules of probability, we have trouble applying this to our daily lives because we let our emotions distract us.
Q1 was about the rules of probability. Linda certainly sounded like a feminist, but every feminist bank teller is, by definition, also a bank teller. So it can never be more likely that she's a bank teller and a feminist than that she's simply a bank teller.
All that guff about her interests at uni engaged our emotions and distracted us from the simple probabilities.
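The rule at work is the conjunction rule: the probability of two things both being true can never exceed the probability of either one alone. A quick Python sketch makes the point – the 5 per cent and 90 per cent figures below are made-up illustrative numbers, not anything from the experiment:

```python
from fractions import Fraction

# Hypothetical numbers: suppose 5 per cent of women fitting Linda's
# description are bank tellers, and fully 90 per cent of those tellers
# are active feminists.
p_teller = Fraction(5, 100)
p_feminist_given_teller = Fraction(90, 100)

# Probability of the conjunction: teller AND feminist.
p_teller_and_feminist = p_teller * p_feminist_given_teller

# However feminist the tellers are, the conjunction can never be more
# probable than "bank teller" on its own.
print(p_teller, p_teller_and_feminist)
```

Even with 90 per cent of tellers assumed to be feminists, “teller and feminist” comes out less probable than “teller” – the vivid description just distracts us from that.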
The questions about investment choices and fighter pilots were about a key statistical regularity most of us haven't heard of, called "reversion to the mean".
The performance of companies, super funds or fighter pilots in any year is a combination of skill and luck. We're always tempted to attribute good luck to high skill.
The luck factor is random, so a performance that's way above average is likely to have been assisted by luck, just as a really bad performance is likely to have been worsened by bad luck.
If good luck and bad luck average out over time, an outstandingly good performance is more likely to be followed by a performance closer to the average than by another rip-snorter. Similarly, a really bad performance is more likely to be followed by one not so bad.
Note that we're only accounting for the luck factor in performance, so a policy of always predicting reversion to the mean gives you a slight advantage in the forecasting stakes, not a sure thing.
The pilot trainers were observing reversion to the mean, but falsely attributing it to their own efforts in awarding praise or criticism.
Sadly, this has left many of the world's bosses suffering the delusion that criticism works better than praise.
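If you’re sceptical, reversion to the mean is easy to demonstrate by simulation. In this Python sketch – a toy model of my own, not from Lewis’s book – every “pilot” has the same fixed skill, and each performance is that skill plus pure luck:

```python
import random

random.seed(42)

SKILL = 50.0  # every pilot's true, unchanging ability

def performance():
    # Observed performance = fixed skill + random luck (pure noise).
    return SKILL + random.gauss(0, 10)

scores = [performance() for _ in range(100_000)]

# What happens straight after an unusually good or unusually bad flight?
after_good = [scores[i + 1] for i in range(len(scores) - 1) if scores[i] > 65]
after_bad = [scores[i + 1] for i in range(len(scores) - 1) if scores[i] < 35]

def mean(xs):
    return sum(xs) / len(xs)

# Both averages land back near 50: standouts and stinkers alike are
# followed by performances close to the mean, with no praise or
# criticism involved.
print(round(mean(after_good), 1), round(mean(after_bad), 1))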
The questions on coin tosses and baby order were about the "law of large numbers", which says that, over enough repetitions, events will occur in proportion to their probabilities – so events with equal probability will eventually occur a roughly equal number of times.
We all know that if you toss a coin enough times you'll get a roughly equal number of heads and tails. And we all know the numbers of boys and girls being born are almost equal.
Trouble is, you need thousands of samples to be sure of getting that result. By expecting to see equal numbers in a sample as small as six, we've turned the statisticians' law of large numbers into our own imaginary "law of small numbers".
Remember, probability theory applies to independent events, where what's gone before has no effect on what happens next.
Humans are pattern-seeking animals, but sometimes we go too far and see patterns that aren't real. Five heads in a row, or three boys followed by three girls, may look unlikely but, because the law applies only to large numbers, are perfectly consistent with a random draw.
Whether it's heads or tails, boy or girl, the safest bet remains 50/50. In the case of the five heads in a row, no one told the coin its duty was to make its sixth toss a tail.
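A short simulation makes the point: an exactly even split is actually uncommon in a sample as small as six, while the proportion of heads in a very large sample does settle close to a half. (A Python sketch of my own, not from the book.)

```python
import random
from math import comb

random.seed(1)

# Exact chance that six fair flips split exactly 3 heads / 3 tails:
# C(6,3) equally likely orderings out of 2^6 possible sequences.
p_even_split = comb(6, 3) / 2 ** 6
print(p_even_split)  # 0.3125 - an even split happens less than a third of the time

# But over a large number of flips, the proportion of heads settles near 1/2.
n = 100_000
frac_heads = sum(random.random() < 0.5 for _ in range(n)) / n
print(round(frac_heads, 3))
```

So a 3–3 split of six babies is the single most likely split, yet it still happens less than a third of the time – small samples are simply too noisy for the law of large numbers to bite.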