At the latest Worldwide Developers Conference, Apple announced that it is not only launching its own suite of artificial-intelligence models, but also integrating OpenAI’s ChatGPT into its devices and software. As AI penetrates ever more of our lives, debates about the technology’s risks and potential are reaching a fever pitch.
One challenge – which is particularly relevant to Apple, with its much-touted commitment to customer security and privacy – lies in developers’ reliance on user-provided data to train AI models. As the University of Hong Kong’s Angela Huyue Zhang and London Business School’s S. Alex Yang warn, OpenAI’s new multimodal AI tool, GPT-4o, is “designed to gobble up user data, much of which is copyrighted.” Since users may not own the data they are sharing, their consent is not enough to guard against copyright infringement; regulators must take charge.
But while “true technological advances” are always disruptive, points out Michael R. Strain of the American Enterprise Institute, one must not forget that “the problems created by a new technology can also be solved by it.” In fact, from detecting AI-enabled cheating in schools to mitigating the risks of AI-coordinated weapons, AI tools are already doing just that. Past experience, he concludes, “should inspire confidence – but not complacency – that generative AI will lead to a better world.”
The technology certainly has the potential to improve family life, suggest New America’s Anne-Marie Slaughter and Milo’s Avni Patel Thompson. By taking over “repetitive and mundane tasks,” it would “enable human caregivers to spend more time establishing emotional connections and providing companionship.” Though developing an “AI for caregivers” would “test the technology’s technical limits and determine the extent to which it can account for moral considerations and societal values,” it would undoubtedly be “worth the effort.”
Jamie Metzl of OneShared.World offers an even more sweeping vision of AI’s potential, arguing that the technology could help people incorporate into their “traditional identities” a “global consciousness and a greater awareness of how to meet the collective needs of society.” To this end, we should prompt AI systems to “help us imagine a better path forward,” including a “global framework for addressing” common challenges.
In the view of MIT’s Simon Johnson and McKinsey & Company’s Eric Hazan, AI offers the promise of “significantly faster productivity growth, especially in Europe, and shared prosperity.” But it can fulfill this promise “only if its adoption is accompanied by upgraded human skills and more proactive worker redeployment.” This will require executives to be “as candid as possible about nascent skills gaps,” and governments to “focus on making it as easy as possible for all workers to upgrade their skills in a timely and appropriate fashion.”
In the near term, however, AI can probably deliver only limited productivity gains, warns MIT’s Daron Acemoglu. In fact, “neither economic theory nor the data support” the “exuberant forecasts” being touted by “tech industry leaders, business-sector forecasters, and much of the media.” Recognizing this is crucial, because “if we embrace techno-optimism uncritically or let the tech industry set the agenda,” much of AI’s potential “could be squandered.”
Source: Project Syndicate