The world needs a pro-human AI agenda

Judging by the current paradigm in the technology industry, we cannot rule out the worst of all possible worlds: none of the transformative potential of AI, but all the labour displacement, misinformation, and manipulation. But it’s not too late to change.

These are uncertain and confusing times. Not only are we contending with pandemics, climate change, societal ageing in major economies, and rising geopolitical tensions, but artificial intelligence is poised to change the world as we know it. What remains to be seen is how quickly things will change and for whose benefit.

If you listen to industry insiders or technology reporters at leading newspapers, you might think artificial general intelligence (AGI) – AI technologies that can perform any human cognitive task – is just around the corner. Accordingly, there is much debate about whether these amazing capabilities will make us prosperous beyond our wildest dreams (with even less hyperbolic observers forecasting GDP growth more than 1-2 percentage points faster), or instead bring about the end of human civilization, with super-intelligent AI models becoming our masters.

But if you look at what is going on in the real economy, you will not find any break from the past so far.

There is no evidence yet of AI delivering revolutionary productivity benefits. Contrary to what many technologists promised, we still need radiologists (more than before, in fact), journalists, paralegals, accountants, office workers, and human drivers. As I noted recently, we should not expect much more than about 5% of what humans do to be replaced by AI over the next decade. It will take significantly longer for AI models to acquire the judgment, multi-dimensional reasoning abilities, and social skills necessary for most jobs, and for AI and computer vision technologies to advance to the point where they can be combined with robots to perform high-precision physical tasks (such as manufacturing and construction).

Of course, these are predictions, and predictions can always be wrong. With industry insiders becoming ever more vocal about the pace of progress, perhaps game-changing AI breakthroughs will come sooner than expected. But the history of AI is replete with ambitious predictions by insiders. In the mid-1950s, Marvin Minsky, arguably the grandfather of AI, predicted that machines would surpass humans within just a few years, and when that didn’t happen, he remained adamant. In 1970, he was still insisting that:

“In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point, the machine will begin to educate itself with fantastic speed. In a few months, it will be at genius level and a few months after that, its powers will be incalculable.”

Similarly optimistic predictions have recurred since then, only to be abandoned in periodic “AI winters.” Could this time be different?

To be sure, generative AI’s capabilities far exceed anything that the industry has produced before. But that does not mean that the industry’s expected timelines are correct. AI developers have an interest in creating the impression of imminent revolutionary breakthroughs in order to stoke demand and attract investors.

But even a slower pace of progress is cause for concern, given the damage that AI can already do: deepfakes, voter and consumer manipulation, and mass surveillance are just the tip of the iceberg. AI can also be leveraged for large-scale automation, even when such uses make little sense. We already have examples of digital technologies being introduced into workplaces without a clear idea of how they will increase productivity, let alone benefit existing workers. With all the hype surrounding AI, many businesses feel pressure to jump on the bandwagon before they know how AI can help them.

Such trend-chasing has costs. In my work with Pascual Restrepo, we show that so-so automation represents the worst of both worlds. If a technology is not yet capable of increasing productivity by much, deploying it extensively to replace human labour across a variety of tasks yields all pain and no gain. In my own forecast – where AI will replace about 5% of jobs over the next decade – the implications for inequality are quite limited. But if hype prevails and companies adopt AI for jobs that cannot be done as well by machines, we may get higher inequality without much of a compensatory boost to productivity.

We therefore cannot rule out the worst of all possible worlds: none of AI’s transformative potential, but all of the labour displacement, misinformation, and manipulation. This would be tragic, not only because of the negative effects on workers and on social and political life but also because it would represent a huge missed opportunity.

Progress for Whom?
It is both technically feasible and socially desirable to have a different type of AI – one with applications that complement workers, protect our data and privacy, improve our information ecosystem, and strengthen democracy.

AI is an information technology. Whether in its predictive form (such as the recommendation engines on social media platforms) or its generative form (large language models), its function is to sift through massive amounts of information and identify relevant patterns. This capability is a perfect antidote to what ails us. We live in an age where information is abundant, but useful information is scarce. Everything that you want is on the internet (along with many things you don’t want), but good luck finding what you need for a specific job or purpose.

Useful information drives productivity growth, and as David Autor, Simon Johnson, and I have argued, it is more important than ever in today’s economy. Many occupations – from nurses and educators to electricians, plumbers, blue-collar workers, and other modern craft workers – are hampered by the lack of specific information and training to deal with increasingly complex problems. Why are some students falling behind? Which equipment and vehicles need preemptive maintenance? How can we detect faulty functioning in complex products such as aeroplanes? This is exactly the kind of information AI can provide.

When applied to such problems, AI can deliver much larger productivity gains than those envisioned in my own meagre forecast. If AI is used for automation, it will replace workers; but if it is used to provide better information to workers, it will increase the demand for their services, and thus their earnings.

Unfortunately, three formidable barriers are blocking us from this path. The first is the fixation on AGI. Dreams of superintelligent machines are pushing the industry to ignore the real potential of AI as an information technology that can help workers. Accurate knowledge in the relevant domain is what matters, but this is not what the industry has been investing in. Chatbots that can write Shakespearean sonnets will not empower electricians to perform sophisticated new tasks. But if you genuinely believe that AGI is near, why bother helping electricians?

The problem is not just the obsession with AGI. As a general principle, tools should do things that humans are not good at doing efficiently. This is what hammers and calculators do, and it is what the internet could have done if it had not been corrupted by social media. However, the tech industry has adopted the opposite perspective, favouring digital tools that can substitute for humans rather than complement them. This is partly because many tech leaders underappreciate human talent and exaggerate human limitations and fallibility. Obviously, humans make mistakes; but they also bring a unique blend of perspectives, talents, and cognitive tools to every task. We need an industry paradigm that, rather than celebrating the superiority of machines, emphasizes their greatest strength: augmenting and expanding human capabilities.

A second obstacle is underinvestment in humans. AI can be a tool for human empowerment only if we invest just as much in training and skills as we do in the technology itself. AI tools that complement workers will amount to nothing if most humans cannot use them, or cannot acquire and process the information they provide. It took humans a long time to figure out how to manage the information from new sources such as the printing press, radio, TV, and the internet, and the timeline for AI will be far more compressed (even if the “imminent AGI” scenario remains so much hot air).

The only way to ensure that humans benefit from AI, rather than being fooled by it, is to invest in training and education at all levels. That means going beyond the trite advice to invest in skills that will be complementary to AI. While that is, of course, necessary, it is woefully insufficient. What we really need is to teach students and workers to coexist with AI tools and use them in the right way.

The third barrier is the tech industry’s business models. We will not get better AI unless tech companies invest in it, but the sector is now more concentrated than ever, and the dominant firms are completely devoted to the quest for AGI and human-replacing and human-manipulating applications. A huge share of the industry’s revenues comes from digital ads (based on collecting extensive data from users and getting them hooked on platforms and their offerings), and from selling tools and services for automation.

However, new business models are unlikely to emerge by themselves. The incumbents have built large empires and monopolized key resources – capital, data, talent – leaving aspiring entrants at an increasing disadvantage. Even if some new player breaks through, it is more likely to be acquired by one of the tech giants than to challenge their business model.

The bottom line is that we need an anti-AGI, pro-human agenda for AI. Workers and citizens should be empowered to push AI in a direction that can fulfil its promise as an information technology. But for that to happen, we will need a new narrative in the media, policymaking circles, and civil society, and much better regulations and policy responses. Governments can help to change the direction of AI, rather than merely reacting to issues as they arise. But first policymakers must recognize the problem.

Writer: Daron Acemoglu | Project Syndicate
