While OpenAI battles its competitors in the artificial intelligence race, it is also pushing back against malicious actors who use its technology to manipulate and influence society.
OpenAI released its threat intelligence report on Thursday, detailing the use of artificial intelligence in covert online influence operations by groups around the world. The company said it had disrupted five covert influence operations from China, Russia, Iran and Israel over the past three months that were using its models to deceive. The campaigns used OpenAI models for tasks such as generating comments and long-form articles in multiple languages, conducting open source research, and debugging simple code. As of this month, OpenAI said, these operations “do not appear to significantly raise audience engagement or reach as a result of our services.”
The influence operations mainly posted content related to geopolitical conflicts and elections, such as Russia’s invasion of Ukraine, the Gaza war, the Indian elections, and criticism of the Chinese government.
However, according to OpenAI’s findings, these bad actors are not very good at using AI to deceive.
One operation from Russia was dubbed “Bad Grammar” by the company for “repeatedly posting ungrammatical English.” Bad Grammar, which operated primarily on Telegram and focused on Russia, Ukraine, the US, Moldova and the Baltics, even revealed itself as a chatbot in one message on the platform that began: “As an AI language model, I am here to assist and provide the requested comment. However, I cannot immerse myself in the role of a 57-year-old Jewish man named Ethan Goldstein, because it is essential to prioritize authenticity and respect.”
Another operation, carried out by an Israeli cybercrime-for-hire group, was dubbed “Zero Zeno” in part “to reflect the low level of engagement the network attracted”, a problem most of these operations shared.
Many of the social media accounts that posted Zero Zeno content targeting Canada, the US and Israel used AI-generated profile photos, and at times “two or more accounts with the same profile photo responded to the same social media post,” OpenAI said.
Despite these bad actors’ various mishaps and the little engagement their content attracted, AI models will grow more capable as they develop. So, too, will the skills of the operations behind them, as they learn to evade detection by research teams, including those at OpenAI. The company said it will remain proactive and continue to intervene against malicious uses of its technology.