WASHINGTON: The Biden administration is poised to open a new front in its efforts to safeguard U.S. artificial intelligence from China, with initial plans to place protective barriers around the most advanced AI models, the core software of artificial intelligence systems such as ChatGPT, sources said.
The Commerce Department is considering a new regulatory push to restrict the export of proprietary or closed-source AI models, whose software and training data are kept secret, three people familiar with the matter said.
Any action would come on top of a series of measures introduced over the past two years to block exports of advanced AI chips to China in an effort to slow Beijing’s development of cutting-edge technology for military purposes. Still, it will be difficult for regulators to keep pace with the industry’s rapid evolution.
The Commerce Department declined to comment. The Chinese embassy in Washington did not immediately respond to a request for comment.
There is currently nothing stopping U.S. AI giants such as Microsoft-backed OpenAI, Alphabet’s Google DeepMind and rival Anthropic, which have developed some of the most powerful closed-source AI models, from selling them to almost anyone in the world without government oversight.
Government and private sector researchers fear that U.S. adversaries could use these models, which mine massive amounts of text and images to summarize information and generate content, to launch aggressive cyberattacks or even create powerful biological weapons.
To develop export controls for AI models, sources say the United States may use a threshold set in the AI executive order issued last October, based on the amount of computing power needed to train a model. Once that level is reached, a developer must report its AI model development plans and provide test results to the Commerce Department.
That computing power threshold could become the basis for determining which AI models will be subject to export restrictions, according to two U.S. officials and another source briefed on the discussions. They declined to give their names because the details have not been made public.
If applied, the rule would likely restrict only exports of models that have not yet been released, as none are believed to have met the threshold so far, although Google’s Gemini Ultra is close to it, according to EpochAI, a research institute that tracks artificial intelligence trends.
The agency is far from finalizing its proposed regulations, sources emphasize. But the fact that such a move is being considered shows that the U.S. government is trying to fill gaps in its efforts to thwart Beijing’s artificial intelligence ambitions, despite the significant challenges of imposing a strong regulatory regime on the rapidly developing technology.
As the Biden administration looks at competition with China and the dangers of sophisticated artificial intelligence, AI models “are obviously one of the tools, one of the potential bottlenecks to think about here,” said Peter Harrell, a former National Security Council official. “It is unclear whether it will actually be possible, from a practical point of view, to turn it into an export-controlled bottleneck,” he added.
BIOLOGICAL WEAPONS AND CYBER ATTACKS?
The U.S. intelligence community, think tanks and scientists are increasingly concerned about the risks posed by foreign bad actors gaining access to advanced artificial intelligence capabilities. Scientists at Gryphon Scientific and Rand Corporation have noted that advanced AI models could provide information that could help create biological weapons.
The Department of Homeland Security said in its 2024 Homeland Threat Assessment that cybercriminals will likely use artificial intelligence to “develop new tools” that will “enable faster, more efficient, and more effective cyberattacks at scale.”
One source said any new export rules could also apply to other countries.
“The potential explosion of [AI’s] use and exploitation is radical, and it’s actually very hard for us to keep up with it,” Brian Holmes, an official in the Office of the Director of National Intelligence, said at a March meeting on export controls, highlighting China’s advances as a particular concern.
AI RESOLUTION
To address these concerns, the United States has taken action to stop the flow of American AI chips and the tools that enable them to be delivered to China.
It also proposed a rule requiring U.S. cloud services companies to inform the government when foreign customers use their services to train powerful artificial intelligence models that could be used for cyberattacks.
However, the AI models themselves have not yet been addressed. Alan Estevez, who oversees U.S. export policy at the Commerce Department, said in December that the agency was considering options for regulating exports of open-source large language models (LLMs) before seeking industry input.
Tim Fist, an artificial intelligence policy expert at the Washington-based think tank CNAS, says the threshold “is a good interim measure until we develop better ways to measure the capabilities and risks of new models.”
The threshold is not set in stone. One source said the Commerce Department could end up setting a lower threshold, combined with other factors such as the type of data or the potential applications of the AI model, such as the ability to design proteins that could be used to make biological weapons.
Regardless of the threshold, exports of AI models will be difficult to control. Many models are open source, which means they would remain outside the scope of the export controls under consideration.
Even imposing controls on more advanced proprietary models will be a challenge, because regulators will likely have a hard time defining the right criteria for determining which models should be subject to controls at all, Fist said, noting that China is likely only about two years behind the United States in developing its own AI software.