The idea of machines outsmarting humans has long been the stuff of science fiction. Rapid improvements in artificial-intelligence (AI) programs over the past decade have led some experts to believe that science fiction may soon become fact. On March 19th Jensen Huang, CEO of Nvidia, the world’s largest computer chipmaker and third-most-valuable publicly traded company, said he believed today’s models could reach the level of so-called artificial general intelligence (AGI) within five years. What exactly is AGI, and how can we tell when it arrives?
Mr. Huang’s words should be taken with a grain of salt: Nvidia’s profits have soared on the back of growing demand for the cutting-edge chips used to train AI models, so talking up artificial intelligence is good for business. Still, Mr. Huang offered a clear definition of what he believes constitutes AGI: a program that performs 8% better than most people on certain tests, such as bar exams for lawyers and logic quizzes.
His proposal is the latest in a long line of definitions. In the 1950s Alan Turing, a British mathematician, argued that conversing with a model that had achieved AGI would be indistinguishable from conversing with a human. The most advanced large language models arguably pass the Turing test already. But in recent years technology leaders have moved the goalposts with a series of novel definitions. Mustafa Suleyman, co-founder of DeepMind, an AI research firm, and CEO of Microsoft’s newly formed AI division, believes that what he calls “artificial capable intelligence” – judged by a “modern Turing test” – will be achieved when a model, given $100,000 and no instructions, can turn it into $1 million. (Mr. Suleyman is a board member of The Economist’s parent company.) Steve Wozniak, Apple’s co-founder, has a more prosaic vision of AGI: a machine that can walk into an average home and make a cup of coffee.
Some researchers reject the concept of AGI altogether. Mike Cook of King’s College London says the term has no scientific basis and means different things to different people. Few definitions of AGI command consensus, admits Harry Law of the University of Cambridge, but most rest on the idea of a model that can outperform humans at most tasks – whether making coffee or making millions. In January, researchers at DeepMind proposed six levels of AGI, ranked by the share of skilled adults a model can outperform. They argue that the technology has reached only the lowest level, at which AI tools are equal to, or slightly better than, an unskilled human.
The question of what happens once AGI is achieved obsesses some researchers. Eliezer Yudkowsky, a computer scientist who has worried about artificial intelligence for 20 years, fears that by the time people realize models have become conscious it will be too late to stop them, and humans will be enslaved. Few researchers share his views, however. Most believe that artificial intelligence merely imitates human behavior, often poorly.
There may be no consensus among scientists and businesspeople on what constitutes AGI, but a definition may soon be settled in court. In a lawsuit filed in February against OpenAI, the company he co-founded, Elon Musk is asking a Californian court to decide whether the firm’s GPT-4 model bears the hallmarks of AGI. If it does, Mr. Musk argues, OpenAI will have broken its founding commitment to license only pre-AGI technology. The company denies that it has. Through his lawyers, Mr. Musk is seeking a jury trial. If his wish is granted, a handful of non-experts could rule on a question that has vexed AI experts for decades.
Editor’s note: This article has been updated to clarify Mustafa Suleyman’s concept.
© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com