Seoul: At the end of a two-day AI summit in Seoul, some of the world’s largest technology companies pledged to work together to protect against AI-related threats.
Industry leaders from South Korea’s Samsung Electronics to Google vowed at the event, co-hosted by the UK, to “minimize risk” and develop new artificial intelligence models responsibly even as they push to advance the cutting-edge field.
The new pledge, enshrined in Wednesday’s Seoul AI Business Declaration, together with a fresh round of safety commitments announced the previous day, builds on the consensus reached at the inaugural global AI Safety Summit at Bletchley Park in the UK last year.
As part of Tuesday’s commitments, companies including OpenAI and Google DeepMind promised to disclose how they assess the risks associated with their technology – including risks “considered unacceptable” – and how they will ensure such thresholds are not exceeded.
But experts warned that it was difficult for regulators to understand and govern artificial intelligence when the sector was evolving so rapidly.
“I think it’s a really, really big problem,” said Markus Anderljung, head of policy at the Centre for the Governance of AI, a nonprofit research organization based in Oxford, UK.
“I expect that addressing artificial intelligence will be one of the biggest challenges governments around the world will face over the next few decades.”
“The world will need to reach some kind of common understanding of the risks posed by the most advanced general-purpose models,” he said.
Michelle Donelan, Britain’s secretary of state for science, innovation and technology, said in Seoul on Wednesday that “as the pace of AI development accelerates, we must keep pace… if we are to overcome the risks.”
She said there would be further opportunities to “push the boundaries” of testing and evaluating new technologies at the next artificial intelligence summit in France.
“At the same time, we must pay attention to mitigating risk beyond these models, ensuring that society as a whole becomes resilient to the threats posed by AI,” Donelan said.
– AI Inequality –
The huge success of ChatGPT shortly after its release in 2022 sparked a gold rush in the field of generative AI, with technology companies around the world investing billions of dollars in developing their own models.
Such AI models can generate text, photos, audio and even video from simple prompts, and their supporters hail them as breakthroughs that will improve lives and businesses around the world.
But critics, human rights activists and governments warn that they can be abused in a variety of ways, including to manipulate voters through fake news or “deepfake” photos and videos of politicians.
Many have called for international standards to govern the development and use of artificial intelligence.
“I think we are increasingly realizing that we need global cooperation to really think about the problems and harms associated with artificial intelligence. Artificial intelligence knows no boundaries,” said Rumman Chowdhury, an artificial intelligence ethics expert who runs Humane Intelligence, an independent nonprofit organization that evaluates AI models.
Chowdhury told AFP that the big problem is not just the “runaway artificial intelligence” of science-fiction nightmares, but also issues such as rampant inequality in the sector.
“All artificial intelligence is being built, developed, and the profits are made by very, very few people and organizations,” she told AFP on the sidelines of the Seoul summit.
People in developing countries like India “are often doing the clean-up work. They are the ones who annotate the data and are the content moderators. They are clearing the land so that everyone else can walk on pristine territory.”