Long before the rise of generative artificial intelligence, a Silicon Valley company contracted to collect and analyze unclassified data on China’s illicit fentanyl trade was making a compelling case for U.S. intelligence agencies to adopt the technology. The operation’s results far exceeded those of human-only analysis, finding twice as many companies and 400% more people engaged in illegal or suspect trade in the deadly opioid. Excited U.S. intelligence officials publicly touted the results – the AI made connections based largely on data from the internet and the dark web – and shared them with authorities in Beijing, urging a crackdown. One critical aspect of the 2019 operation, called Sable Spear, had not been previously reported: the company used generative AI to provide U.S. agencies – three years before the release of OpenAI’s groundbreaking ChatGPT product – with evidence summaries for potential criminal cases, saving countless hours of work.
“This wouldn’t have been possible without artificial intelligence,” said Brian Drake, then director of artificial intelligence at the Defense Intelligence Agency and coordinator of the project.
The contractor, Rhombus Power, would later use generative AI to predict Russia’s full-scale invasion of Ukraine with 80% confidence four months in advance, for a different U.S. government client. Rhombus says it also alerts government customers, whom it declined to name, to imminent North Korean missile launches and Chinese space operations.
U.S. intelligence agencies are scrambling to embrace the AI revolution, convinced they will otherwise be overwhelmed by the exponential growth of data as sensor-generated surveillance technology further blankets the planet.
But officials are well aware that the technology is young and brittle, and that generative AI – predictive models trained on enormous troves of data to generate human-like text, images, video and conversation on demand – is hardly tailor-made for a perilous trade riddled with deception.
Analysts need “sophisticated artificial intelligence models that can digest massive amounts of open-source and covertly acquired information,” CIA Director William Burns recently wrote in Foreign Affairs. But it won’t be effortless.
The CIA’s inaugural chief technology officer, Nand Mulchandani, thinks that because generative AI models “hallucinate,” they are best treated like a “crazy, drunk friend” – capable of great insight and creativity, but also prone to bias. There are also security and privacy issues: adversaries can steal and poison the models, and they may contain sensitive personal data that officers aren’t authorized to see.
None of that has stopped experimentation, though, most of it happening in secret.
One exception: thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed generative AI called Osiris. It runs on unclassified, publicly or commercially available data – what’s known as open source. It writes annotated summaries, and a chatbot function lets analysts dig deeper with questions.
Mulchandani said Osiris employs AI models from multiple commercial vendors he would not name. Nor would he say whether the CIA is using generative AI for any major task on classified networks.
“This is just the beginning,” Mulchandani said, “and our analysts must be able to determine with complete certainty where information is coming from.” The CIA is trying out all the major generative AI models – without committing to any – in part because the models keep leapfrogging one another in capability, he said.
Mulchandani says generative AI works mainly as a virtual assistant hunting for the “needle in the needle stack.” What it will never do, officials say, is replace human analysts.
Linda Weissgold, who retired last year as the CIA’s deputy director of analysis, believes war gaming will be a “killer application.”
During her tenure, the agency was already using traditional AI – algorithms and natural language processing – for translation and for tasks including alerting analysts after hours to potentially important developments. The AI wouldn’t be able to describe what had happened – that would be classified – but it could say, “there’s something here you need to look at.”
Generative AI is expected to streamline such processes.
Anshu Roy, CEO of Rhombus Power, believes the technology’s most powerful intelligence use will be in predictive analytics: “This will probably be one of the biggest paradigm shifts in the entire field of national security – the ability to predict what your adversaries are likely to do.”
Rhombus’ AI machine draws on more than 5,000 data streams in 250 languages gathered over 10 years, including global news sources, satellite imagery and data from cyberspace. All of it is open source. “We can track people and objects,” Roy said.
Among the leading AI contenders competing for U.S. intelligence business is Microsoft, which announced on May 7 that it is offering OpenAI’s GPT-4 for top-secret networks, although the product must still be accredited for work on those networks.
Competitor Primer AI counts two unnamed intelligence agencies among its customers, which also include the military, according to documents published online as part of a recent workshop on military AI. It offers AI-powered search in 100 languages to “detect emerging signals of breaking events” from sources including Twitter, Telegram, Reddit and Discord, and to help identify “key people, organizations and locations.”
Primer lists targeting among the advertised uses of its technology. In a demonstration at an army conference just days after Hamas attacked Israel on Oct. 7, company executives described how their technology separated fact from fiction in the deluge of online information from the Middle East.
Company executives declined to be interviewed.
In the near term, how U.S. intelligence officials use AI may matter less than countering how adversaries use it: to pierce U.S. defenses, spread disinformation and try to undermine Washington’s ability to read their intentions and capabilities.
And with Silicon Valley driving the technology, the White House is also concerned that any generative AI models adopted by U.S. agencies could be infiltrated and poisoned – something research shows is a serious threat. (AP)