Talk to ABS seminar, Canberra
June 17, 2009
When Geoff Neideck invited me to talk to you he said you’d be interested in getting a perspective on the use of ABS data in the media, and I’m happy to oblige. I’m happy to be here talking to the bean counters because I’ve been a bean counter all my working life. I started as an accountant, then graduated to the economy. As that epithet implies, people concerned with the intricacies of counting things are not highly regarded. The glory goes to those who make the beans, or those who use the counts to draw interesting conclusions.
But years in the counting business have convinced me of an under-rated truth: what gets measured gets taken seriously, whereas things that aren’t measured tend to be ignored. The problem is that we tend to measure what’s easily measured, but many things difficult to measure are more important. The problem is compounded when we seize on a readily available measurement without bothering to inquire of the boring bean counters whether it measures what we think it does.
I want to talk about the media’s use and abuse of ABS statistics, but first I want to make a qualification. Among journalists there are two kinds of users of your statistics, the professionals and the amateurs. The most intensive users of ABS stats are the economic journalists, who are professional users. These are people such as me, Tim Colebatch, Alan Mitchell, Michael Stutchbury, Alan Wood, Peter Martin, David Uren and Stephen Long who, by both education and experience, can be expected to use and interpret your stats with care and accuracy. If you see us misinterpreting your data we’d be most grateful for a quiet phone call explaining where we went wrong. We have younger economic reporters working for us, often with less experience of the intricacies of economic statistics, but rest assured that we’re training them in those mysteries and drawing to their attention any misunderstandings we find in their copy, even if (as in my case) it’s after they appear in print.
That’s enough about the professional, specialist users of ABS data. You’re entitled to expect high standards from them and, for the most part, I think you get it. All the rest of what I’ve got to say about the media’s use and abuse of statistics applies to the amateur users: journalists whose use of your data is infrequent and quite unqualified. These users range from political journalists here in Canberra to reporters in the state capitals who occasionally get hold of social statistics, right up to the editors. The main thing I want to do is explain why the media so often use stats in ways you disapprove of. I want to give you an insight into how it is from our perspective. This will contribute little to reducing the misuse of statistics, but it will help you understand what you’re up against.
Many of the complaints about misuse of stats arise from the headlines on stories and the truth is that the headline on a story heavily influences a reader’s perception of what the story is saying. But headlines are written by sub-editors, not reporters, and sometimes there’s a gap between what the story actually says and what the headline says it says. If there is, most readers won’t notice it. Such gaps can occur for three reasons: because the hard-pressed sub doesn’t accurately comprehend what the story’s actually saying; because the reporter has left some ambiguity in his copy and the sub, who generally knows far less about the topic than the reporter, has jumped the wrong way; or because the sub knowingly writes a headline that makes the story sound more exciting than it actually is. The first two explanations - misunderstandings - are more likely to be the case on broadsheet newspapers; the third - misrepresentation - is more likely to be found in tabloid newspapers.
In one offending Herald story, the NSW Bureau of Crime Statistics issued a report and an accompanying press release saying that the prison terms for most offences had increased, whereas the headline on the story said they’d fallen. The interesting question is why the reporter wrote his story in a way that encouraged that error to be made - why he focused on unrepresentative falls rather than the representative rises. I’ll try to answer that when we get to the question of motive - why the media behave the way they do. Perhaps here I should remind you that journalists have to draw the essence from sometimes long and complex reports or events in just an hour or two - under pressure from bosses to make it quick and make it sexy - so it’s not surprising errors and misinterpretations occur.
Now let me give you some relevant background information. Much of the news the media publish comes to them in the form of press releases. The ABS’s releases have some of the characteristics of a press release, and sometimes they’re accompanied by an actual summarising press release. It’s often alleged that the media are so lazy they largely publish uncritically the press releases sent to them by powerful government, business and other interests. In my experience that’s usually not the case; quite the reverse. These days most interest groups seek to use the media to advance their own interests. They employ PR people to put their own spin on the information they release to the media. Most journalists aren’t lazy and they see it as their job to get past the spin, finding the news their audience would like to know about but which the powerful interest would like to conceal. When they receive a report or a press release they think: there’s probably an interesting story in here somewhere, but I’ll have to dig for it; certainly, it won’t be the one the people who put out the press release put at the top of the release. There’s so much spin in the world that many journalists come to the conclusion that everyone’s trying to pull the wool over their eyes. You may regard the ABS as a beacon of independent truth-seeking, but I guess many journalists would suspect it’s just another government agency pumping out bromide at the behest of its political masters. There’s a saying in journalism that news is anything somebody somewhere doesn’t want you to know. My guess is that the Herald journalist in question waded through the crime bureau’s report until he found the bit he thought the NSW Government wouldn’t want people to know: that in the case of five significant offences, rates of imprisonment are going down not up.
Much of the misrepresentation of ABS data arises from statistical misinterpretation. You can misrepresent a time series in a host of obvious ways: by choosing a convenient time period for your comparison, by ignoring random variation (ie failing to ignore outliers), by ignoring seasonal variation, by ignoring base effects (eg saying some rate has doubled when it’s gone from 2 a year to 4 a year) and by ignoring the effect of government policy.
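Two of those distortions, base effects and cherry-picked time periods, are easy to see with a little arithmetic. This is a minimal sketch with invented figures (not real ABS data):

```python
# A hypothetical illustration of two statistical sins: base effects and
# choosing a convenient start year. All numbers here are invented.

offences = {2007: 2, 2008: 4}  # a rare offence: 2 cases, then 4

# "Offence rate DOUBLES" - technically true, but the base is tiny
pct_change = (offences[2008] - offences[2007]) / offences[2007] * 100
print(f"Headline change: {pct_change:.0f}%")  # 100 per cent - sounds dramatic
print(f"Absolute change: {offences[2008] - offences[2007]} extra cases")  # just 2

# Cherry-picking: the same (invented) series can be 'up' or 'down'
# depending on which year you measure from
series = {2000: 90, 2001: 110, 2002: 95, 2003: 105, 2004: 100}
since_trough = (series[2004] - series[2002]) / series[2002] * 100  # rising
since_peak = (series[2004] - series[2001]) / series[2001] * 100   # falling
print(f"Since 2002: {since_trough:+.1f}%")
print(f"Since 2001: {since_peak:+.1f}%")
```

Same data, opposite headlines: the choice of comparison period does all the work.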
The question is whether the journos who commit these statistical crimes are knaves or fools. I couldn’t deny there’s a lot of knavery - journos who know they’re distorting the statistics’ message, but don’t care - but there are more fools than you may imagine. Most journalists are arts-degree types with a very weak grasp on maths and little clue about how to interpret statistical information. If they did understand those things they’d be an economics editor by now. But the question goes deeper: many journalists wouldn’t be sure the diligent performance of their job required them to take account of those statistical niceties. The rules of statistical interpretation aim to ensure the user draws from the stats an accurate or representative picture of the aspect of the world the stats relate to. But that’s simply not the objective of journalism. Journalism pays no heed to the scientific method.
So let’s turn to the question of why the media sometimes misuse statistics and misrepresent their message. Let’s look at motive. Much of the criticism of the media rests on the unspoken assumption that the media’s role is to give us an accurate picture of the world around us. We don’t have first hand experience of much of what’s happening around us and we need the media to inform us.
If that’s the role you think the media play - or should play - I have shocking news. The news media are on about news. What is newsworthy? Anything happening out there that our audience will find interesting or important, although the interesting will always trump the important. Paris Hilton is interesting but of no importance; the latest change in the superannuation rules is important but deadly dull - guess which one gets more media coverage?
Maybe 99 per cent of what happens in the world is of little interest: 99 per cent of the motorists who crossed the Sydney Harbour Bridge today made it without incident; someone you’ve never heard of went to work as usual and sold a new ring to someone you don’t know; Australia didn’t declare war on New Zealand . . . the list of uninteresting things that happen is endless. Journalists sort through all the things that happen looking for things they believe their audience will find interesting: the 10-car pile-up on the Bridge, Brad Pitt bought a ring for Angelina Jolie to make up after a fight, the Dutch withdrew their troops from Afghanistan.
When social scientists take a random sample they may examine the sample and discard any outliers that could distort their survey, throwing them on the floor. A journalist is someone who comes along, finds them on the floor and says, ‘these would make a great story’. I happened to be in the Herald’s daily news conference last February on the day Kevin Rudd’s $42 billion stimulus package was announced, with all its (then) $950 cash handouts. We discussed searching for a farmer who’d get $950 because he was in exceptional circumstances, $950 because he paid tax last year, $950 because his wife also works, $4750 because he has five school-age kids, and maybe another $950 because one of the kids is doing a training course. And, of course, he’d have a big mortgage, meaning he’d also save $250 a month because of the 1 per cent cut in interest rates announced the same day. Had we found such a person and taken a good photo of him he’d have been all over our front page. The point is that we were searching for the most unrepresentative person we could find. Why? Because our readers would have been fascinated to read about him. It’s reasonable to expect the media to be accurate in the facts they report but, even if they are, it’s idle to expect them to give us a representative picture of the world.
And that takes me to an even more shocking thought: if the media aren’t on about giving us a representative picture of the world around us, why would journalists bother adhering to the rules of statistical interpretation? Why not highlight a quite unrepresentative statistical comparison if it happens to be the most interesting comparison?
It’s often claimed that the media focus heavily on bad news, often ignoring good news. Guilty as charged. But we do so for a simple reason: we know our audience finds bad news a lot more interesting than good news. So I’m not particularly apologetic for this state of affairs: our failings are the failings of our audience, which are the failings of human nature. Why do people find bad news more interesting than good news? As I’ve written elsewhere (SMH 12.4.2006), I believe the explanation can be found in our evolutionary history. Our brains are hardwired to perpetually scan our environment for threats, and now the chances of our being eaten by a lion have diminished we’re left with a strong appetite for bad news about, for instance, the threat of crime.
Communications research tells us we read much more for reinforcement than enlightenment. While there’s a niche market for columns that challenge the conventional wisdom, and news about some new and unexpected twist in a standard story will be found interesting, journalists know the news that goes down best is the news that confirms people’s prejudices. Perhaps thanks to the efforts of the media themselves, most people know as a self-evident truth that crime is increasing. Most stories about crime are intended to reinforce that belief.
The media’s defence against criticism is that their failings are those of their audience; they do what they do because their audience demands it of them. But shouldn’t we hold the media to a higher standard than we hold ourselves? Yes we should. We can expect less crass commercialism and more professionalism. Doctors, for instance, don’t ask patients what disease they want to be told they have and don’t let patients pick the medicine they want prescribed.
And there’s a limit to inaccuracy and sensationalism below which market punishment sets in. Mediums that play too lightly with the truth eventually lose their credibility and their audience’s respect. This means there are checks and balances. Mediums that value their credibility - in commercial as well as ethical terms - often employ commentators who set great store on making sure their audience isn’t misled, even when those commentators spend a fair bit of time highlighting the media’s own failings and trying to beat down some of the things that get beaten up on the front page. My guess is that, as information overload and infotainment continue to grow, at least the better-educated audience will gravitate to those journalists and journals they perceive to be committed to the search for truth. What’s more, it is possible to be truthful and interesting at the same time.
Turning to the question of community expectations and perceptions of the ABS, from where I sit the community knows little about the role and functions of the ABS and spends very little time thinking about it. In particular, people have no understanding of the bureau’s independence and see it as just another government department doing what the government tells it to do.
Some years ago someone from the bureau came to the Herald’s office to give a few of our senior people a little seminar on the virtues of the trend estimates over the seasonally adjusted figures. After it was over the editor at the time said to me: ‘Well, we won’t be using trend figures - they’re only estimates.’ He was quite surprised when I explained that almost all the bureau’s figures were estimates. When I was an economic reporter in Canberra 34 years ago, the chief sub-editor told me not to use seasonally adjusted figures because the Herald only reported the real figures, not figures some statistician had played around with. These days, of course, we use the seasonally adjusted figures as a matter of course without even bothering to say we’re doing so.
But you will have noted that, notwithstanding all the bureau’s efforts to give greater prominence to the trend figures, the media - like the business economists - largely ignores them and continues to highlight the seasonally adjusted estimates. We do this mainly because, like the financial markets, we have a vested interest in volatility. The more the figures bounce around, the more interesting the stories we can write - and the more exciting the markets’ betting games. But the econocrats prefer the seasonally adjusted figures, too. And whatever our true motives, we all have a good statistical excuse: our interest is in the figure for the most recent month or quarter, and here the trend estimate runs into the ‘end-point problem’ - the inability to centre the moving average.
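The end-point problem is easy to demonstrate: a centred moving average needs observations on both sides of each point, so no trend estimate exists for the most recent periods - exactly the ones the media care about. Here’s a minimal sketch with invented monthly figures (the bureau’s actual trend method is more sophisticated than a simple moving average):

```python
# Sketch of the 'end-point problem' with a simple centred moving average.
# The monthly figures are invented for illustration.

def centred_ma(series, window=5):
    """Centred moving average; None where the window runs off either end."""
    half = window // 2
    out = []
    for i in range(len(series)):
        if i < half or i >= len(series) - half:
            out.append(None)  # not enough observations on one side
        else:
            out.append(sum(series[i - half:i + half + 1]) / window)
    return out

monthly = [100, 103, 99, 104, 102, 106, 101, 107]  # invented data
trend = centred_ma(monthly)
print(trend)
# The final entries are None: no trend estimate for the latest months,
# which is one reason the seasonally adjusted figure gets the headline.
```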
You probably know that many people - maybe most - regard the CPI as something that’s made up in a government department somewhere with the intention of understating the true inflation rate. That’s because their own mental estimate of price increases is so much higher than the bureau’s. The question of why that’s the case is one to which I’ve given much thought over the years. You can say that, were I to carefully calculate a personal CPI, it would differ from the official figure because the weights in my basket would differ from the eight-capital average. You can say that I may not adequately distinguish between quality improvements and pure price increases.
That’s true, but it doesn’t get to the heart of the disparity. A bigger problem is that people don’t weight the price changes they encounter. An even bigger problem is what psychologists call the ‘availability heuristic’. Large price rises stick in our mind more than small increases, and price rises are easier to remember than price falls. And get this: in most people’s mental CPI, prices that don’t change would get a weighting of zero.
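The gap between the official index and the mental one is easy to reproduce with a toy calculation. All the items, weights and price changes below are invented; the point is only the mechanism: remember the rises, forget the falls, give unchanged prices zero weight.

```python
# An invented illustration of why a 'mental CPI' runs higher than the
# official one. Items, weights and price changes are all made up.

basket = [
    # (item, basket weight, percentage price change)
    ("petrol",      0.05, 15.0),   # big rise - very memorable
    ("rent",        0.25,  3.0),
    ("groceries",   0.30,  2.0),
    ("electronics", 0.10, -8.0),   # a fall - easily forgotten
    ("services",    0.30,  0.0),   # unchanged - mentally weighted zero
]

# Official-style measure: every item, weighted by basket share
official = sum(weight * change for _, weight, change in basket)

# Mental measure: only the rises are remembered, all weighted equally
remembered = [change for _, _, change in basket if change > 0]
mental = sum(remembered) / len(remembered)

print(f"Official-style inflation: {official:.2f}%")
print(f"'Mental' inflation:       {mental:.2f}%")
```

With these invented numbers the mental estimate comes out several times the weighted figure, without anyone lying about any individual price.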
There’s probably not a lot the bureau can do about that, but there’s one key economic indicator whose low credibility with the public it can act to improve - the measure of unemployment - and now it has. A large number of people believe the official unemployment figures are a fraud and have been manipulated by the Government to understate the true position. They have a vague but firm memory of the Howard government changing the definition of unemployment. They get muddled between being unemployed and being on the dole. They have no perception of the bureau’s independence and no notion of international conventions that haven’t changed in decades.
I have to tell you, however, that I’ve tired of trying to dispel the public’s misconceptions on this issue and my sympathy for the bureau has run out. The unvarnished truth is that, for whatever reason, the official unemployment figures are misleading, they do significantly understate the true extent of the problem, and the bureau could publish less misleading figures if it wanted to. The fact is that the international rule that doing an hour’s work a week means you’re not unemployed may have made sense once and may still make sense in some countries, but it makes no sense in a country like ours where part-time employment accounts for 28 per cent of total employment. And if the bureau can publish estimates of underemployment once a year it’s hard to see why it can’t publish broader estimates of labour underutilisation every month. I’m here to tell you I can’t think of an issue that has done greater damage to the bureau’s credibility with the public, so I’m delighted to see your latest decision to publish the underutilisation rate monthly.
Read more >>
June 17, 2009
When Geoff Neideck invited me to talk to you he said you’d be interested in getting a perspective on the use of ABS data in the media, and I’m happy to oblige. I’m happy to be here talking to the bean counters because I’ve been a bean counter all my working life. I started as an accountant, then graduated to the economy. As that epithet implies, people concerned with the intricacies of counting things are not highly regarded. The glory goes to those who make the beans, or those who use the counts to draw interesting conclusions.
But years in the counting business have convinced me of an under-rated truth: what gets measured gets taken seriously, whereas things that aren’t measured tend to be ignored. The problem is that we tend to measure what’s easily measured, but many things difficult to measure are more important. The problem is compounded when we seize on a readily available measurement without bothering to inquire of the boring bean counters whether it measures what we think it does.
I want to talk about the media’s use and abuse of ABS statistics, but first I want to make a qualification. Among journalists there are two kinds of users of your statistics, the professionals and the amateurs. The most intensive users of ABS stats are the economic journalists, who are professional users. These are people such as me, Tim Colebatch, Alan Mitchell, Michael Stutchbury, Alan Wood, Peter Martin, David Uren and Stephen Long who, by both education and experience, can be expected to use and interpret your stats with care and accuracy. If you see us misinterpreting your data we’d be most grateful for a quiet phone call explaining where we went wrong. We have younger economic reporters working for us, often with less experience of the intricacies of economic statistics, but rest assured that we’re training them in those mysteries and drawing to their attention any misunderstandings we find in their copy, even if (as in my case) it’s after they appear in print.
That’s enough about the professional, specialist users of ABS data. You’re entitled to expect high standards from them and, for the most part, I think you get it. All the rest of what I’ve got to say about the media’s use and abuse of statistics applies to the amateur users: journalists whose use of your data is infrequent and quite unqualified. These users range from political journalists here in Canberra to reporters in the state capitals who occasionally get hold of social statistics, right up to the editors. The main thing I want to do is explain why the media so often use stats in ways you disapprove of. I want to give you an insight into how it is from our perspective. This will contribute little to reducing the misuse of statistics, but it will help you understand what you’re up against.
Many of the complaints about misuse of stats arise from the headlines on stories and the truth is that the headline on a story heavily influences a reader’s perception of what the story is saying. But headlines are written by sub-editors, not reporters, and sometimes there’s a gap between what the story actually says and what the headline says it says. If there is, most readers won’t notice it. Such gaps can occur for three reasons: because the hard-pressed sub doesn’t accurately comprehend what the story’s actually saying; because the reporter has left some ambiguity in his copy and the sub, who generally knows far less about the topic than the reporter, has jumped the wrong way; or because the sub knowingly writes a headline that makes the story sound more exciting than it actually is. The first two explanations - misunderstandings - are more likely to be the case on broadsheet newspapers; the third - misrepresentation - is more likely to be found in tabloid newspapers.
In one offending Herald story, the NSW Bureau of Crime Statistics issued a report and an accompanying press release saying that the prison terms for most offences had increased, whereas the headline on the story said they’d fallen. The interesting question is why the reporter wrote his story in a way that encouraged that error to be made - why he focused on unrepresentative falls rather than the representative rises. I’ll try to answer that when we get to the question of motive - why the media behave the way they do. Perhaps here I should remind you that journalists have to draw the essence from sometimes long and complex reports or events in just an hour or two - under pressure from bosses to make it quick and make it sexy - so it’s not surprising errors and misinterpretations occur.
Now let me give you some relevant background information. Much of the news the media publish comes to them in the form of press releases. The ABS’s releases have some of the characteristics of a press release, and sometimes they’re accompanied by an actual summarising press release. It’s often alleged that the media are so lazy they largely publish uncritically the press releases sent to them by powerful government, business and other interests. In my experience that’s usually not the case; quite the reverse. These days most interest groups seek to use the media to advance their own interests. They employ PR people to put their own spin on the information they release to the media. Most journalists aren’t lazy and they see it as their job to get past the spin, finding the news their audience would like to know about but which the powerful interest would like to conceal. When they receive a report or a press release they think: there’s probably an interesting story in here somewhere, but I’ll have to dig for it; certainly, it won’t be the one the people who put out the press release put at the top of the release. There’s so much spin in the world that many journalists come to the conclusion that everyone’s trying to pull the wool over their eyes. You may regard the ABS as a beacon of independent truth-seeking, but I guess many journalists would suspect it’s just another government agency pumping out bromide at the behest of its political masters. There’s a saying in journalism that news is anything somebody somewhere doesn’t want you to know. My guess is that the Herald journalist in question waded through the crime bureau’s report until he found the bit he thought the NSW Government wouldn’t want people to know: that in the case of five significant offences, rates of imprisonment are going down not up.
Much of the misrepresentation of ABS data arises from statistical misinterpretation. You can misrepresent a time series in a host of obvious ways: by choosing a convenient time period for your comparison, by ignoring random variation (ie failing to ignore outliers), by ignoring seasonal variation, by ignoring base effects (eg saying some rate has doubled when it’s gone from 2 a year to 4 a year) and by ignoring the effect of government policy.
The question is whether the journos who commit these statistical crimes are knaves or fools. I couldn’t deny there’s a lot of knavery - journos who know they’re distorting the statistics’ message, but don’t care - but there are more fools than you may imagine. Most journalists are arts-degree types with a very weak grasp on maths and little clue about how to interpret statistical information. If they did understand those things they’d be an economics editor by now. But the question goes deeper: many journalists wouldn’t be sure the diligent performance of their job required them to take account of those statistical niceties. The rules of statistical interpretation aim to ensure the user draws from the stats an accurate or representative picture of the aspect of the world the stats relate to. But that’s simply not the objective of journalism. Journalism pays no heed to the scientific method.
So let’s turn to the question of why the media sometimes misuse statistics and misrepresent their message. Let’s look at motive. Much of the criticism of the media rests on the unspoken assumption that the media’s role is to give us an accurate picture of the world around us. We don’t have first hand experience of much of what’s happening around us and we need the media to inform us.
If that’s the role you think the media play - or should play - I have shocking news. The news media are on about news. What is news worthy? Anything happening out there that our audience will find interesting or important, although the interesting will always trump the important. Paris Hilton is interesting but of no importance; the latest change in the superannuation rules is important but deadly dull - guess which one gets more media overage?
Maybe 99 per cent of what happens in the world is of little interest: 99 per cent of the motorists who crossed the Sydney Harbour Bridge today made it without incident; someone you’ve never heard of went to work as usual and sold a new ring to someone you don’t know; Australia didn’t declare war on New Zealand . . . the list of uninteresting things that happen is endless. Journalists sort through all the things that happen looking for things they believe their audience will find interesting: the 10-car pile-up on the Bridge, Brad Pitt bought a ring for Angelina Jolie to make up after a fight, the Dutch withdrew their troops from Afghanistan.
When social scientists take a random sample they may examine the sample and discard any outliers that could distort their survey, throwing them on the floor. A journalist is someone who comes along, finds them on the floor and says, ‘these would make a great story’. I happened to be in the Herald’s daily news conference last February on the day Kevin Rudd’s $42 billion stimulus package was announced, with all its (then) $950 cash handouts. We discussed searching for a farmer who’d get $950 because he was in exceptional circumstances, $950 because he paid tax last year, $950 because his wife also works, $4750 because he has five school-age kids, and maybe another $950 because one of the kids is doing a training course. And, of course, he’d have a big mortgage, meaning he’d also save $250 a month because of the 1 per cent cut in interest rates announced the same day. Had we found such a person and taken a good photo of him he’d have been all over our front page. The point is that we were search for the most unrepresentative person we could find. Why? Because our readers would have been fascinated to read about him. It’s reasonable to expect the media to be accurate in the facts they report but, even if they are, it’s idle to expect them to give us a representative picture of the world.
And that takes me to an even more shocking thought: if the media aren’t on about giving us a representative picture of the world around us, why would journalists bother adhering to the rules of statistical interpretation? Why not highlight a quite unrepresentative statistical comparison if it happens to be the most interesting comparison?
It’s often claimed that the media focus heavily on bad news, often ignoring good news. Guilty as charged. But we do so for a simple reason: we know our audience finds bad news a lot more interesting than good news. So I’m not particularly apologetic for this state of affairs: our failings are the failings of our audience, which are the failings of human nature. Why do people find bad news more interesting than good news? As I’ve written elsewhere (SMH 12.4.2006), I believe the explanation can be found in our evolutionary history. Our brains are hardwired to perpetually scan our environment for threats, and now the chances of our being eaten by a lion have diminished we’re left with a strong appetite for bad news about, for instance, the threat of crime.
Communications research tells us we read much more for reinforcement than enlightenment. While there’s a niche market for columns that challenge the conventional wisdom, and news about some new and unexpected twist in a standard story will be found interesting, journalists know the news that goes down best is the news that confirms people prejudices. Perhaps thanks to the efforts of the media themselves, most people know as a self-evident truth that crime is increasing. Most stories about crime are intended to reinforce that belief.
The media’s defence against criticism is that their failings are those of their audience; they do what they do because their audience demands it of them. But shouldn’t we hold the media to a higher standard than we hold ourselves? Yes we should. We can expect less crass commercialism and more professionalism. Doctors, for instance, don’t ask patients what disease they want to be told they have and don’t let patients pick the medicine they want prescribed.
And there’s a limit of inaccuracy and sensationalism below which market punishment sets in. Mediums that play too lightly with the truth eventually lose their credibility and their audience’s respect. This means there are checks and balances. Mediums that value their credibility - in commercial as well as ethical terms - often employ commentators who set a high store on making sure their audience isn’t misled, even when those commentators spend a fair bit of time highlighting the media’s own failings and trying to beat down some of the things that get beaten up on the front page. My guess is that, as information overload and infotainment continue to grow, at least the better-educated audience will gravitate to those journalists and journals they perceive to be committed to the search for truth. What’s more, it is possible to be truthful and interesting at the same time.
Turning to the question of community expectations and perceptions of the ABS, from where I sit the community knows little about the role and functions of the ABS and spends very little time thinking about it. In particular, people have no understanding of the bureau’s independence and see it as just another government department doing what the government tells it to do.
Some years ago someone from the bureau came to the Herald’s office to give a few of our senior people a little seminar on the virtues of the trend estimates over the seasonally adjusted figures. After it was over the editor at the time said to me: ‘Well, we won’t be using trend figures - they’re only estimates.’ He was quite surprised when I explained that almost all the bureau’s figures were estimates. When I was an economic reporter in Canberra 34 years ago, the chief sub-editor told me not to use seasonally adjusted figures because the Herald only reported the real figures, not figures some statistician had played around with. These days, of course, we use the seasonally adjusted figures as a matter of course without even bothering to say we’re doing so.
But you will have noted that, notwithstanding all the bureau’s efforts to give greater prominence to the trend figures, the media - like the business economists - largely ignores them and continues to highlight the seasonally adjusted estimates. We do this mainly because, like the financial markets, we have a vested interest in volatility. The more the figures bounce around, the more interesting the stories we can write - and the more exciting the markets’ betting games. But the econocrats prefer the seasonally adjusted figures, too. And whatever our true motives, we all have a good statistical excuse: our interest is in the figure for the most recent month or quarter, and here the trend estimate runs into the ‘end-point problem’ - the inability to centre the moving average.
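For readers unfamiliar with the end-point problem, a minimal sketch may help. This is not the bureau’s actual trend method (the ABS uses Henderson moving averages with equal-weighted assumptions I won’t reproduce here); it is a simple equal-weighted centred moving average, with made-up numbers, just to show why the most recent observations are exactly where a centred trend estimate cannot be computed symmetrically.

```python
# Sketch only: a simple equal-weighted centred moving average, not the
# ABS's Henderson trend filter. Data values are invented for illustration.
def centred_moving_average(series, window=5):
    """Return centred moving averages; None where a symmetric window won't fit."""
    half = window // 2
    out = []
    for i in range(len(series)):
        if i < half or i >= len(series) - half:
            # The end-point problem: no symmetric window exists here,
            # so the latest observations have no centred trend estimate.
            out.append(None)
        else:
            out.append(sum(series[i - half:i + half + 1]) / window)
    return out

monthly = [100, 102, 98, 105, 103, 107, 104, 110]  # hypothetical index values
trend = centred_moving_average(monthly, window=5)
print(trend)  # the last two entries are None - the months we care most about
```

In practice the bureau fills in those end points with asymmetric weights, which is why the trend estimate for the latest month is the one most subject to revision as new data arrive.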
You probably know that many people - maybe most - regard the CPI as something that’s made up in a government department somewhere with the intention of understating the true inflation rate. That’s because their own mental estimate of price increases is so much higher than the bureau’s. The question of why that’s the case is one to which I’ve given much thought over the years. You can say that, were I to carefully calculate a personal CPI, it would differ from the official figure because the weights in my basket would differ from the eight-capital average. You can say that I may not adequately distinguish between quality improvements and pure price increases.
That’s true, but it doesn’t get to the heart of the disparity. A bigger problem is that people don’t weight the price changes they encounter. An even bigger problem is what psychologists call the ‘availability heuristic’. Large price rises stick in our minds more than small increases, and price rises are easier to remember than price falls. And get this: in most people’s mental CPI, prices that don’t change would get a weighting of zero.
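The gap those two problems create can be made concrete with a toy calculation. The basket, weights and price changes below are entirely invented - they are not ABS figures - but they show how an expenditure-weighted index, an unweighted average, and an availability-biased ‘mental’ average of only the memorable big rises can diverge from the same underlying prices.

```python
# Hypothetical basket: (item, expenditure weight, % price change over the year).
# All numbers invented for illustration; not ABS data.
basket = [
    ("rent",        0.25,   3.0),
    ("groceries",   0.15,   8.0),
    ("petrol",      0.05,  15.0),
    ("electronics", 0.05, -10.0),   # a price fall - easily forgotten
    ("other",       0.50,   1.0),   # barely moves - weighted zero in the mind
]

# CPI-style: weight each change by its share of spending.
weighted = sum(w * change for _, w, change in basket)

# Failing to weight: a simple average of all the changes you noticed.
unweighted = sum(change for _, _, change in basket) / len(basket)

# Availability heuristic: only the large, salient rises stick in the mind.
memorable = [change for _, _, change in basket if change >= 5]
mental = sum(memorable) / len(memorable)

print(f"weighted (CPI-style) rise:   {weighted:.1f}%")   # 2.7%
print(f"unweighted average:          {unweighted:.1f}%") # 3.4%
print(f"'mental CPI' of big rises:   {mental:.1f}%")     # 11.5%
```

Same prices, three answers: the mental estimate built only from memorable rises comes out several times the properly weighted figure, which is roughly the shape of the public’s complaint about the official CPI.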
There’s probably not a lot the bureau can do about that, but there’s one key economic indicator whose low credibility with the public it can act to improve - the measure of unemployment - and now it has. A large number of people believe the official unemployment figures are a fraud and have been manipulated by the Government to understate the true position. They have a vague but firm memory of the Howard government changing the definition of unemployment. They get muddled between being unemployed and being on the dole. They have no perception of the bureau’s independence and no notion of international conventions that haven’t changed in decades.
I have to tell you, however, that I’ve tired of trying to dispel the public’s misconceptions on this issue and my sympathy for the bureau has run out. The unvarnished truth is that, for whatever reason, the official unemployment figures are misleading, they do significantly understate the true extent of the problem, and the bureau could publish less misleading figures if it wanted to. The fact is that the international rule that doing an hour’s work a week means you’re not unemployed may have made sense once and may still make sense in some countries, but it makes no sense in a country like ours where part-time employment accounts for 28 per cent of total employment. And if the bureau can publish estimates of underemployment once a year it’s hard to see why it can’t publish broader estimates of labour underutilisation every month. I’m here to tell you I can’t think of an issue that has done greater damage to the bureau’s credibility with the public, so I’m delighted to see your latest decision to publish the underutilisation rate monthly.