Remember this headline?
“47% of total US employment is at risk from automation.”
This quote comes from a famous 2013 Oxford University study, which became essentially “patient zero” in the discussion of job loss from AI. I’ve analyzed multiple major job-automation studies and written about them here at Quartz. The conclusions these studies share are important, and they are not obvious from headlines warning that half of all human work is going away. These three are at the top of my list:
Every study grapples with uncertainty about the timing of technological change. This matters because wide time bands make it difficult for policymakers to plan - which means individuals may be on their own.
Everyone agrees that the current generation of AI - machine learning - is fundamentally different from the AI that came before. AI can now be creative, emotional, and conversational. This matters because many automated systems can now adapt and learn on their own.
Jobs are not islands; boundaries change. Jobs rarely disappear. Instead, they delaminate into individual tasks. This matters because it makes predicting how automation affects jobs inherently difficult and fraught with uncertainty: breaking a job down into tasks and predicting which parts an AI can do is a hard problem. (Don’t believe me? Try it on your own job.)
So this “47%” number is, in the words of our re-founder, Mike Burn, “high precision horsesh*t”.
But it sure caught people’s attention. I wrote about a survey we did at Quartz, in which 90% of respondents thought it would be other people who would lose their jobs to a robot. So it seems everyone is worried about something that is going to happen to somebody else. What’s up with that? How can we reconcile that most people see the opportunity in AI yet believe they themselves cannot be replaced by a machine?
Here’s the thing about robots and jobs: it’s counterintuitive. Automating one task in a job increases productivity, which raises the value of the whole chain of tasks that make up that job. It’s called the paradox of automation, and it’s the piece of the puzzle most critical to understanding what makes humans indispensable.
What if this paradox applies to new forms of automation - such as intelligent assistants - just as it does to traditional manufacturing robotics? As we automate our conversations - with AI performing simple conversational tasks - the paradox, if it holds, will raise the value of the rest of what we say.
Some argue that, when it comes to collaboration, machines will beat us. When we talk, information - conveyed as language - transfers between us at a maximum rate of around 60 bits per second. Between computers, it’s gigabits per second. But this misses the point. Humans have empathy and a theory of mind: we model each other all the time, second-guessing what the other person is going to say. As Harvard University professor Steven Pinker explains in this video, empathy massively amplifies our human-to-human transfer rate.
AI needs data to make a prediction. If people are working to solve a problem for which there is no data - an “unknown known” or an unknown future state - then collaboration, coupled with intuition, experience, and judgment, will be the most important investment any of us can make, individually or collectively.
This collaboration - what we call the superintelligence of diverse human teams - will unlock the opportunity to work on new solutions and to stay ahead of the robot apocalypse.