Monday, April 25, 2011

Games show how economists lead us astray

In universities these days they play a lot of games - though when the economists play they prefer to call it "game theory". And game playing is one of the most potentially useful things academics do.

The most famous game played by social scientists is "the prisoner's dilemma". As described by Wikipedia, two suspects are arrested by the police. The police have insufficient evidence for a conviction, but they keep them separate and offer each the same deal.

If one testifies for the prosecution against the other (that is, "defects" from a position of solidarity with the other) and the other remains silent (that is, "co-operates" with the other), the defector goes free and the silent accomplice receives the full 10-year sentence.

If both remain silent, both are sentenced to only six months for a minor offence. If each betrays the other, each receives a five-year sentence. So each prisoner must choose whether to betray the other or remain silent.

Each is assured the other won't know about the betrayal before the end of the investigation. So how should the prisoners act?

As a group, the two prisoners are better off if they each stay silent - each gets only six months' jail.

As individuals, however, the risk of being betrayed by the other means the "rational" choice is always to dob in the other guy. If he stays silent, you get off while he gets 10 years. If he dobs you in too, you both get half the full sentence, whereas if you were to stay silent while he dobbed you in you'd cop the full 10 years.

Barry Schwartz and Kenneth Sharpe, in their new book Practical Wisdom, observe that social scientists love the prisoner's dilemma game because it embodies many situations in life in which co-operation would make everyone better off, but choosing to co-operate makes you vulnerable to exploitation by people who choose not to co-operate.
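For readers who like to see the arithmetic spelt out, here is a minimal sketch in Python of the payoffs just described (the figures follow the example above; the names SENTENCES and my_best_response are my own labels for illustration, not anything from the book or from Wikipedia). It confirms that dobbing in the other guy minimises your own sentence whichever way he jumps, even though the pair of you do best by both staying silent.

# A minimal sketch of the prisoner's dilemma payoffs described above.
# Sentences are in years of jail, so lower is better for the prisoner.

SENTENCES = {
    # (my choice, other's choice): (my sentence, other's sentence)
    ("cooperate", "cooperate"): (0.5, 0.5),   # both stay silent: six months each
    ("cooperate", "defect"):    (10,  0),     # I stay silent, he dobs me in
    ("defect",    "cooperate"): (0,   10),    # I dob him in, he stays silent
    ("defect",    "defect"):    (5,   5),     # we betray each other: five years each
}

def my_best_response(others_choice):
    """Return the choice that minimises my own sentence, given the other's choice."""
    return min(("cooperate", "defect"),
               key=lambda mine: SENTENCES[(mine, others_choice)][0])

# Whatever the other prisoner does, defecting is the "rational" choice...
for theirs in ("cooperate", "defect"):
    print(f"if he {theirs}s, my best response is to {my_best_response(theirs)}")

# ...yet both defecting (5 years each) leaves the pair worse off than
# both co-operating (six months each) - the dilemma in a nutshell.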

It's noteworthy that, though the economists' model leads them to predict that everyone will make the "rational" choice to be unco-operative, when the once-only game is played with experimental subjects a significant minority of people choose to co-operate.

See what's happening? It turns out that the economists' conventional, neo-classical model is just one way of "framing" the economic problem - the problem of how to make a living.
The model frames the problem as a problem for individuals: how do I look after myself in a world composed of other individuals whose main aim is to look after themselves as individuals?

In other words, the model sees the economic world as fundamentally competitive. It highlights the risk that others will choose not to co-operate with me, and highlights the benefit to me of "free-riding" - taking advantage of those who do choose to co-operate.

The one thing it doesn't highlight is the opportunity cost - whether to me or to all of us - of our mutual failure to reap the benefits of co-operation.

So "the economists' way of thinking" is a way of framing the economic problem that's biased in favour of competition and against co-operation. But it's just one way of framing the problem; framing it another way could emphasise the benefits of co-operation and the costs of excessive competitiveness.

When we're taught to think about the economic problem the way economists conventionally think about it, our thinking becomes biased against recognising the benefits of co-operative solutions: "communitarian" or "collective" solutions, whether agreed between people informally or - to overcome the problem of free-riding - delivered by governments using compulsory mechanisms such as taxation.

Conventional economic analysis will always be biased against government intervention because it frames the economic problem as one to be solved by individuals, not by society.

A crude reading of evolution says it's all about competition - the survival of the fittest. A more modern, sophisticated reading says the supremacy of the human animal is as much the product of co-operation between humans as of competition between them. Both co-operation and competition are key components of our winning formula.

The fact is that a huge proportion of economic activity involves co-operation between people rather than competition in markets. There are all the goods and services produced within households.

And there's all the activity that occurs inside big companies, including trade between the different parts of national and trans-national corporations. Economists know surprisingly little about this activity.

To emphasise the point that conventional economics (and, indeed, all economics) involves framing, Schwartz and Sharpe note that the participants in one experiment were given the same version of the prisoner's dilemma game, except that one group was told it was the Wall Street Game whereas the other group was told it was the Community Game.

You guessed it: people playing the Wall Street Game were much more likely to defect. In a similar game, those told they were taking part in the Social Exchange Study were more likely to co-operate than those told they were taking part in the Business Transaction Study.

The latter researchers say the social-exchange frame induced a motivation for the players to do what was right, whereas the business-transaction frame induced the motivation to get as much money from playing the game as possible.

All this suggests the success economists have had in recent decades in propagating their way of framing the choices we face has subtly influenced our thinking and behaviour, making us more competitive and self-seeking and less co-operative and public-spirited.

If so, we're the poorer for it. We need to frame the economic problem more carefully.

Saturday, April 23, 2011

It's all in the frame (behavioural economics)

It's a long weekend, so let's play a game. Tell me this: are eagles large? And, next, are cabins small? If you said yes to both, congratulations - you're right. But if you said no to both, you're not wrong. In fact, you're just as right as the others are.

Relative to other birds, eagles are large. And relative to other buildings, cabins are small. But if you compare an eagle with a cabin, eagles are small and cabins are large.

Get it? Whether eagles and cabins are large or small depends on what you're comparing them with. Or, as they say in the classics, everything's relative.

And this, believe it or not, is one of the great discoveries of cognitive psychology.

Part of that discovery is that the way we react to situations or propositions is heavily dependent on the way they're framed, as psychologists say - the way they're packaged, the context in which they're put.

We can react differently to the same proposition depending on how it's framed. A classic example: even doctors say a 90 per cent success rate for operations is more acceptable than a 10 per cent failure rate.

The people who didn't need psychologists to tell them our reactions to things are influenced by the way they're framed are advertising and marketing types. They know that draping a girl in a bikini over a sports car can help sell more of them. What's the logical link between a good-looking young woman and a motor car? There's none - but the young bucks (and ageing baby boomers) who buy sports cars can imagine one.

Although it comforts economists to kid themselves that advertising is purely informational, in truth almost all advertising is about framing - drawing unspoken links between the product you're trying to flog and some attractive situation or emotion. The advertisers' not-so-subtle message is, buy my margarine (or sliced bread) and you'll have a happy, healthy family. In their adage, you sell the sizzle, not the steak.

But framing goes far wider than advertising. It's the reason you should be sceptical of the results interest groups quote from the opinion polls they commission. It's too easy to influence the answers you get by the way you frame the questions you ask.

And don't forget that political spin is a form of framing. It's about portraying situations or decisions in ways that reflect more favourably on the pollies involved.

Their opponents, of course, try to frame the same situations or decisions in a more negative light.

But in Practical Wisdom, a new book by Barry Schwartz and Kenneth Sharpe, two academics at Swarthmore College near Philadelphia, the authors observe that stories like these have given framing a bad name it doesn't deserve.

Why? Because there's no alternative to framing. That's the great discovery of cognitive psychology: just about the only way we can get our minds around anything is to compare it with something we already know about.

Years ago an editor reminded me of the classical rule of rhetoric that argument by analogy is invalid. Sorry, it turns out that the only way we learn is by comparing things we don't understand with things we do understand.

This doesn't mean every analogy-based argument is correct, of course, just that there's no other way to argue.

The term frame is itself a metaphor. Schwartz and Sharpe say it's a wonderful one because it emphasises our capacity to take the chaos of the social world around us and organise it in an understandable way.

The capacity we have to frame enables us to do one of the most important things the exercise of practical wisdom demands: discern what's relevant about a particular context or event in regard to the decision we face.

Wednesday, April 20, 2011

Looking to Aristotle for a guide on reform

How things have changed. When I was growing up Labor portrayed itself as the party of reform, out to fix an unjust world. The Liberals were conservatives, satisfied with the world as it was and trying to keep change to a minimum. Needless to say, the Libs kept winning.

These days, however, both sides portray themselves as parties of reform. And the faster the world changes the more certain both sides become of the need for further reform - even if, as with Work Choices, the new lot's reform is merely to reverse the reforms of the previous lot.

There is one small problem with all this reform: it's not always clear the changes actually make things better. The pollies see things that aren't working well, make changes intended to improve the situation, but often don't succeed. Then they, or their successors, do more in the same vein or try the opposite approach, with neither seeming to work.

When politicians see institutions they think aren't performing - the health system, the education system, the courts, the banks - they tend to apply one of two tools. The first is to toughen up the rules and regulations governing the institution; be more explicit about what people are required to do.

The second is to sharpen the incentives (and disincentives) faced by people in the institutions. With private-sector institutions - banking, for example - the approach is usually to reduce government regulation and then rely on competition and the profit motive to improve performance.
With public-sector institutions - health and education, say - the approach is to impose numerical tests and targets (''key performance indicators'') and maybe introduce monetary rewards for good performance.

As the international experience with banking indicates, the reformers sometimes alternate between the two approaches when they find the one they've been using hasn't worked. After the Great Depression we tightly regulated the banks, but in the 1980s we decided they weren't performing well and the answer was to deregulate them. Now, after the global financial crisis, the world has swung back to thinking tighter regulation is the key to better performance.

A long memory, however, suggests it won't be that simple. Why is it that neither rules nor incentives seem to do the trick? And what else can we do that stands a better chance of working? Well, while I was away on holiday in Italy I read a book that offers some answers. It's Practical Wisdom, by Barry Schwartz, a professor of psychology at Swarthmore College in Pennsylvania, and Kenneth Sharpe, a professor of political science at the same college.

It's noteworthy that both approaches proceed from a low opinion of the people working in these institutions: they don't really care about their work. The notion that tightening up the rules will improve the performance of practitioners assumes they are dumb (they don't know the right thing to do) and uncommitted to doing their job well. The notion that introducing numerical targets and monetary incentives will improve performance assumes practitioners are lazy and motivated only by self-interest. Both approaches are top-down: the politicians know what should be done to improve the performance of the courts or whatever, and seek to impose their judgment on the practitioners.

That gives us a clue as to why neither approach is particularly effective. Both are demoralising - in both senses of the word. They reduce the practitioners' scope to exercise their discretion when objectives conflict (as they often do in this increasingly complex world) and the circumstances of individual cases differ.

This demotivates professionals as well as removing the moral element from their jobs. They become responsible for obeying rules or meeting targets, not ensuring the ultimate objectives are achieved.

Modern jobs are multi-faceted, with multiple objectives. Numerical targets and monetary incentive payments inevitably narrow practitioners' objectives and increase their focus on monetary rewards, driving out other motivations.

And when you eliminate the moral element you encourage people to try to beat the system. The more rules you make, the more you encourage demoralised workers to look for loopholes. The more you measure people's performance with numerical indicators, the more you encourage them to game the system. Whatever elements of their performance aren't covered by a performance indicator will be cannibalised to help achieve those you are measuring.

Under both approaches quantity improves at the expense of quality, partly because quantity is easy to measure and quality is hard.

So what's the answer? Schwartz and Sharpe say that, though we will always need rules and rewards in the running of institutions, increasing the emphasis on rules and incentives discourages and diminishes the third, more elusive element needed to make institutions work well: what Aristotle called phronesis, which translates as practical wisdom.

People exercising practical wisdom use their skills and experience to achieve to the best of their ability the ''telos'' or true purpose of their activity. Practical wisdom involves finding the right way to do the right thing in the particular case you are dealing with.
People are motivated to exercise practical wisdom not to obey rules or increase their income but because they know it's the right thing to do, to benefit their students, patients, clients or customers and obtain personal satisfaction in the process. It's about intrinsic motivation - doing a good job for its own sake - rather than the extrinsic motivation of obeying rules or making more money.

Institutions would work better if, rather than discouraging practical wisdom by tighter rules and bigger incentives, they gave practitioners more flexibility to innovate, improvise and generally exercise their own judgment in doing the right thing by the individuals they help. Reformers haven't got far by assuming doctors, teachers, judges, public servants and the rest are dumb and lazy and must be compelled or bribed to do better. Why not assume the majority of these professionals want to do a good job and give them more scope to do the right thing in the right way?
