Saturday, September 30, 2017

Our bulldust detectors are on the blink

The world has always been full of bulldust, which is why everyone should come equipped with a bulldust detector.

Trouble is, we're living in a time of bulldust inflation. Some of the things we're being told are harder and harder to believe. But a lot of people's detectors seem to be on the blink.

Part of the reason for the step-up may be that there are so many people shouting that anyone else hoping to be heard has to start shouting too.

These thoughts are prompted by the runaway success of the claim that 40 per cent of jobs in Australia are likely to be automated in the next 10 to 15 years.

This is a fantastic claim in the original, dictionary sense: imaginative or fanciful; remote from reality.

And yet it seems many thousands of people have accepted its likelihood without question.

Similar predictions have been made about America, and are just as widely believed.

As I've written before, two economists, Jeff Borland and Michael Coelli, of Melbourne University, who didn't believe it – because they could find no evidence to support it – traced the origins of the claim and the flimsy assumptions on which it was based.

Which led them to ask the question I'm asking: why do people so readily believe propositions they should find hard to believe?

The authors found a quote from a leading American economist, Alan Blinder, of Princeton University, in his book, After the Music Stopped.

"The consequences of adverse economic events are typically exaggerated by the Armageddonists​ – a sensation-seeking herd of pundits, seers and journalists who make a living by predicting the worst.

"Prognostications of impending doom draw lots of attention, get you on TV, and sometimes even lead to best-selling books . . .

"But the Armageddonists are almost always wrong," Blinder concludes.

What? Journalists? Bad news?

Blinder is right in concluding we take a lot more notice of bad news than good. Borland and Coelli observe that "You are likely to sell a lot more books writing about the future of work if your title is 'The end of work' rather than 'Everything is the same'.

"If you are a not-for-profit organisation wanting to attract funds to support programs for the unemployed, it helps to be able to argue that the problems you are facing are on a different scale to what has been experienced before.

"Or if you are a consulting firm, suggesting that there are new problems that businesses need to address, might be seen as a way to attract extra clients.

"For politicians as well, it makes good sense to inflate the difficulty of the task faced in policy-making; or to be able to say that there are new problems that only you have identified and can solve," the authors say.

I'd add that if you're a think tank churning out earnest reports you hope will be noticed – if only so your generous funders see you making an impact – it's tempting to lay it on a bit thicker than you should.

By now, however, it's better known that there are evolutionary reasons why the human animal – maybe all animals – takes more interest in bad news than good news.

It's because we've evolved to be continually searching our environment for signs of threat to our wellbeing.

All of us are this way because we've descended from members of our species who were pretty nervy, cautious, suspicious types. We know that must be true because those of our species who weren't so cautious didn't survive long enough to have offspring.

In ancient days, the threats we were most conscious of were to life and limb – being eaten by a wild animal. These days we keep well away from wild animals, but there are still plenty of less spectacular, more psychological threats – real or imagined – to our wellbeing.

This instinctive concern for our own safety is no bad thing. It helps keep us safe. It's an example of the scientists' "precautionary principle" – the dire prediction may not come to pass, but better to be on the safe side and take out some insurance, so to speak.

By contrast, failing to take notice of good news is less likely to carry a cost.

Except that, like many good things, it can be overdone. If we're too jumpy, reacting to every little thing that comes along, we're unlikely to be terribly happy. And unremitting stress can take its toll on our health.

Which brings us to the media. Journalists didn't need evolutionary psychologists to tell them the customers find bad news more interesting. Bad news has always received a higher weighting in the assessment of "newsworthiness".

But I have a theory that the news media have responded to greater competition – not just among themselves but, more importantly, with the ever-increasing number of other ways of spending leisure time – by turning up the volume on bad news.

This can create a feedback loop. People wanting their messages broadcast by media that have become ever more obsessed with bad news respond by making those messages more terrible.

I'm not sure the media have done themselves a favour by making the news they're trying to sell more depressing, BTW.

But Borland and Coelli offer a further possible explanation of why we're inclined to believe that the technological change which has been reshaping the jobs market for two centuries without great conflagration is about to turn disastrous: the cognitive bias that causes people to feel "we live in special times" – also known as "this time is different".

"An absence of knowledge of history, the greater intensity of feeling about events which we experience first-hand, and perhaps a desire to attribute significance to the times in which we live, all contribute to this bias," they say.

If so, a lot of people will continue believing stuff they should doubt.

Wednesday, September 13, 2017

How the threat from robots was exaggerated

You'd have to have been hiding under a rock not to know that 40 per cent of jobs in Australia – about 5 million of them – are likely to be automated in the next 10 to 15 years.

Ask a young person what they know about the future of work and that's it. Which may help explain why so many of them seem angry and depressed about the economic future they're inheriting.

This information is widely known because it's the key finding of a major report, Australia's future workforce?, published in 2015 by the Committee for Economic Development of Australia, a well-regarded business think tank, derived from modelling it commissioned.

It's the sort of proposition you see many references to on social media, particularly because it chimes with a similar widely known prediction made in 2013 that 47 per cent of American jobs could be automated in the next 20 years.

Neither figure is a fact, of course, just a prediction about the distant future based on "modelling".

Why is it that if a prediction is big enough and gloomy enough, everyone keeps repeating it and no one thinks to question it? Why do we accept such frightening claims without asking for further particulars?

Why doesn't anyone ask the obvious question: how – would – they – know?

Because the prediction is based on "modelling"? That if it came out of a computer, it must be true?

Because the modelling for Australia reached similar results to the modelling for America? Sorry, it's actually the same model applied to different figures for each country.

Fortunately, not everyone is as easily convinced that the sky is falling. Two economists from Melbourne University, Professor Jeff Borland and Dr Michael Coelli, have taken a very hard look at the modelling undertaken for the committee by Professor Hugh Durrant-Whyte, of Sydney University, and other engineers at National Information and Communication Technology Australia.

Durrant-Whyte's modelling simply applies to Australia the modelling of US occupations by Carl Frey, an economic historian, and Dr Michael Osborne, an engineer, of the Oxford Martin School for a sustainable future at Oxford University.

Frey and Osborne provided some colleagues with descriptions of 70 US occupations and asked them to judge whether they were "automatable" or not. This sample was then analysed and used to classify all 702 US occupations according to their likelihood of being automated.

Any occupation with a predicted probability of automation of more than 70 per cent was classed as being at "high risk" of automation.
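
For readers who like to see the mechanics, here's a toy sketch in Python of how that kind of headline number gets produced. Everything in it is invented – the occupation names, the probabilities and the employment figures – and the pre-cooked probabilities stand in for the statistical classifier Frey and Osborne actually trained on their hand-labelled sample. Only the 70 per cent cut-off and the "count every job in a high-risk occupation" step come from the description above.

```python
# Toy illustration of the occupation-level approach described above.
# All of it is invented: the occupation names, the "model" probabilities
# and the employment counts. In the real study, hand labels for 70
# occupations were fed into a statistical classifier that then scored
# all 702 US occupations; here a small dictionary of pre-cooked
# probabilities stands in for that classifier.

# Predicted probability that each occupation is "automatable".
automation_prob = {
    "watch repairer": 0.98,
    "truck driver": 0.85,
    "accountant": 0.79,
    "marketing specialist": 0.72,
    "nurse": 0.10,
    "teacher": 0.05,
}

# People employed in each occupation (made-up figures).
employment = {
    "watch repairer": 2_000,
    "truck driver": 200_000,
    "accountant": 190_000,
    "marketing specialist": 70_000,
    "nurse": 300_000,
    "teacher": 280_000,
}

HIGH_RISK_THRESHOLD = 0.70  # cut-off for labelling an occupation "high risk"

# The crucial assumption: once an occupation clears the threshold,
# every job in it is counted as likely to be automated.
high_risk_jobs = sum(
    employment[occ]
    for occ, prob in automation_prob.items()
    if prob > HIGH_RISK_THRESHOLD
)
total_jobs = sum(employment.values())

print(f"Share of jobs 'at high risk': {high_risk_jobs / total_jobs:.0%}")
```

Note the crucial move: once an occupation clears the threshold, every single job in it goes into the numerator.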

Borland and Coelli make some obvious criticisms of this methodology. First, the colleagues found that 37 of the sample of 70 occupations were at risk of automation. Should these subjective assessments prove wrong, the whole exercise is wrong.

For instance, the colleagues judged that surveyors, accountants, tax agents and marketing specialists were automatable occupations, whereas Australian employment in these has grown strongly in the past five years.

Frey and Osborne say the need for dexterous fingers is an impediment to automation, but their method predicts there is an automation probability of 98 per cent for watch repairers.

Second, Frey and Osborne's modelling makes the extreme assumption that if an occupation is automated then all jobs in that occupation are destroyed. The advent of driverless vehicles, for instance, is assumed to eliminate all taxi drivers and chauffeurs, truck drivers, couriers and more.

Third, their modelling assumes that if it's technically feasible to automate a job it will be, without any need for employers to decide it would be profitable to do so. Similarly, it assumes there will be no shortage of the skilled workers needed to set up and use the automated technology.

More broadly, their modelling involves no attempt to take account of the jobs created, directly and indirectly, by the process of automation.

No one gets a job selling, installing or servicing all the new robots. Competition between the newly robotised firms doesn't oblige them to lower their prices, meaning their customers don't have more to spend – and hence create jobs – in other parts of the economy.

All that happens, apparently, is that employment collapses and profits soar. But if it happens like that it will be the first time in 200 years of mechanisation and 40 years of computerisation.

In 2016, the Organisation for Economic Co-operation and Development commissioned Professor Melanie Arntz and colleagues at the Centre for European Economic Research to offer a second opinion on Frey and Osborne's modelling.

Arntz and co noted that occupations categorised as at high risk of automation often still contain a substantial share of tasks that are hard to automate.

So they made one big change: rather than assuming whole occupations are automated, they assumed that particular tasks would be automated, meaning employment in particular occupations would fall, but not be eliminated.

They found that, on average across 21 OECD countries, the proportion of jobs that are automatable is not 40 per cent, but 9 per cent.
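
To see why that one change of assumption moves the answer so much, here's another toy calculation, again with invented occupations and figures. The "share of workers with mostly automatable tasks" column is a pure assumption, standing in loosely for the task-level survey data Arntz and her colleagues used; only the 70 per cent cut-off carries over from above.

```python
# Toy contrast between the two ways of counting, using invented figures.
# Occupation-level (Frey and Osborne style): every job in an occupation
# whose overall automation probability exceeds 0.7 counts as at risk.
# Task-level (roughly the Arntz approach): only workers whose own mix of
# tasks is largely automatable are counted.

# For each invented occupation: employment, overall automation probability,
# and the assumed share of its workers whose tasks are >70% automatable.
occupations = {
    "truck driver":   (200_000, 0.85, 0.25),
    "accountant":     (190_000, 0.79, 0.15),
    "retail cashier": (150_000, 0.90, 0.30),
    "nurse":          (300_000, 0.10, 0.02),
    "teacher":        (280_000, 0.05, 0.01),
}

total = sum(emp for emp, _, _ in occupations.values())

# Method 1: whole occupations vanish once they clear the threshold.
occupation_level = sum(emp for emp, prob, _ in occupations.values() if prob > 0.70)

# Method 2: only the workers whose own bundle of tasks is largely automatable.
task_level = sum(emp * share for emp, _, share in occupations.values())

print(f"Occupation-level estimate: {occupation_level / total:.0%} of jobs at high risk")
print(f"Task-level estimate:       {task_level / total:.0%} of jobs at high risk")
```

With these made-up numbers the occupation-level count puts nearly half of all jobs at high risk, while the task-level count puts it closer to one in ten – the same kind of gap as 40 per cent versus 9 per cent.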

Those countries didn't include Oz, so Borland and Coelli did the figuring – "modelling" if you find that word more impressive – and found that "around 9 per cent of Australian workers are at high risk of their jobs being automated".

Why are we so prone to believing those whose claims are the most outlandish?