Alexandra Alper
WASHINGTON: The Biden administration is poised to open a new front in its effort to safeguard U.S. artificial intelligence from China and Russia, with preliminary plans to place guardrails around the most advanced AI models, Reuters reported on Wednesday.
Government and private-sector researchers fear that U.S. adversaries could use these models, which mine massive amounts of text and images to summarize information and generate content, to launch aggressive cyberattacks or even create potent biological weapons.
Here are some of the threats posed by artificial intelligence:
DEEPFAKES AND DISINFORMATION
Deepfakes – realistic but fabricated videos created by AI algorithms trained on a wealth of material available on the Internet – are appearing on social media, blurring fact and fiction in the polarized world of American politics.
While such synthetic media has been around for several years, it has been turbocharged over the past year by a slew of new "generative AI" tools such as Midjourney that make it cheap and easy to create convincing deepfakes.
AI-powered imaging tools from companies such as OpenAI and Microsoft may be used to create images that could promote election or voting-related disinformation, despite each having policies in place to prevent the creation of misleading content – researchers said in a March report.
Some disinformation campaigns simply harness AI's ability to mimic real news articles as a means of spreading false information.
While major social media platforms such as Facebook, Twitter and YouTube have made efforts to ban and remove deepfakes, their effectiveness in policing such content varies.
For example, last year a Chinese government-controlled news outlet, using a generative AI platform, amplified a previously circulated false claim that the United States was running a laboratory in Kazakhstan to create biological weapons for use against China, the Department of Homeland Security (DHS) said in its 2024 homeland threat assessment.
National security adviser Jake Sullivan, speaking at an AI event in Washington on Wednesday, said the problem has no easy solutions because it combines the capabilities of AI with "the intent of state and non-state actors to use disinformation at a large scale, to disrupt democracies, to advance propaganda, to shape perception in the world."
“Right now, the offense is definitely beating the defense,” he said.
BIO WEAPONS
The U.S. intelligence community, think tanks and academics are increasingly concerned about the risks posed by foreign bad actors gaining access to advanced AI capabilities. Researchers at Gryphon Scientific and the Rand Corporation have noted that advanced AI models can provide information that could help create biological weapons.
Gryphon studied how large language models (LLMs) – computer programs that draw on huge amounts of text to generate responses to queries – could be used by hostile actors to cause harm in the life sciences, and found they "can provide information that could aid a malicious actor in creating a biological weapon by providing useful, accurate and detailed information across every step in this pathway."
For example, they found that an LLM could provide the postdoctoral-level knowledge needed to solve problems when working with a virus capable of causing a pandemic.
Rand research showed that LLMs could help plan and execute a biological attack, finding, for example, that an LLM could suggest aerosol delivery methods for botulinum toxin.
CYBER WEAPONS
In its 2024 homeland threat assessment, DHS said cybercriminals would likely use AI to "develop new tools" to "enable larger-scale, faster, efficient, and more evasive cyberattacks" against critical infrastructure, including pipelines and rail lines.
DHS says China and other adversaries are developing artificial intelligence technologies that could undermine U.S. cyber defenses, including generative artificial intelligence programs that support malware attacks.
In a February report, Microsoft said it had tracked hacking groups affiliated with the Chinese and North Korean governments, as well as Russian military intelligence and Iran's Revolutionary Guard, as they tried to hone their hacking campaigns using large language models.
The company announced the findings as it rolled out a blanket ban on state-backed hacking groups using its AI products.
NEW EFFORTS AGAINST THE THREATS
A bipartisan group of lawmakers introduced a bill late Wednesday that would make it easier for the Biden administration to impose export controls on AI models, in a bid to safeguard prized American technology against foreign bad actors.
The bill, sponsored by House Republicans Michael McCaul and John Molenaar and Democrats Raja Krishnamoorthi and Susan Wild, would also give the Commerce Department explicit authority to bar Americans from working with foreigners to develop AI systems that pose risks to U.S. national security.
Tony Samp, an AI policy adviser at DLA Piper in Washington, said policymakers are trying to "support innovation and avoid stringent regulations that stifle innovation" as they seek to address the many threats posed by the technology.
But he warned that "restricting AI development through regulation could inhibit potential breakthroughs in areas like drug discovery, infrastructure, national security, and elsewhere, and cede ground to competitors overseas."