OpenAI, the maker of ChatGPT, says it acted within 24 hours to disrupt the deceptive use of artificial intelligence in covert operations focused on the Indian elections. File. | Photo source: AP
OpenAI, the maker of ChatGPT, said it acted within 24 hours to disrupt the deceptive use of artificial intelligence in covert operations focused on the Indian elections, operations that did not achieve any significant increase in audience engagement. In a report on its website, OpenAI said that STOIC, an Israeli political campaign management company, generated content about the Indian elections as well as the conflict in Gaza.
“This is a very dangerous threat to our democracy. It is clear that this is driven by vested interests within and outside India and needs to be thoroughly scrutinised and exposed. At this point, in my opinion, these platforms should have released this information much earlier, not this late, when the elections are ending,” he added.
OpenAI said it is committed to developing safe and broadly beneficial artificial intelligence. “Our investigations into suspected covert influence operations (IO) are part of a broader strategy to achieve our goal of safely deploying AI,” it said. OpenAI added that it is committed to enforcing rules to prevent abuse and to improving the transparency of AI-generated content, especially when it comes to detecting and disrupting covert influence operations (IO) that attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.
“In the last three months, we have disrupted five covert influence operations that sought to use our models in support of deceptive activity across the internet. As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” it said.
Describing its actions, OpenAI said it disrupted the activity of a commercial company in Israel called STOIC, noting that it disrupted the operation’s activity, not the company itself.
“We nicknamed this operation Zero Zeno, after the founder of the Stoic school of philosophy. The people behind Zero Zeno used our models to generate articles and comments that were then posted across multiple platforms, including Instagram, Facebook, X and websites associated with this operation,” it said.
Content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the elections in India, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments.
OpenAI said it is taking a multi-pronged approach to combating abuse of its platform, including monitoring and disrupting threat actors such as state-linked groups and advanced persistent threats. “We are investing in technology and teams to identify and disrupt the activities of actors like those discussed here, including by using AI tools to help combat abuse,” it said. The company added that it collaborates with others in the AI ecosystem to flag potential misuse of AI and shares its findings with the public.