This marks OpenAI’s second global collaboration amid mounting concerns about the growing influence of artificial intelligence on various forms of content, particularly in the run-up to and during elections in major economies such as India and the US.
OpenAI’s decision to join C2PA comes a day after the Election Commission of India issued a notice to political parties outlining potential punitive actions that could be initiated under various provisions of the law for the use of deepfakes in political campaigns.
The cited provisions, including Sections 66C and 66D of the Information Technology Act, 2000; Section 123(4) of the Representation of the People Act, 1951; and Sections 171G, 465, 469 and 505 of the Indian Penal Code, carry penalties of several years of imprisonment and fines for perpetrators.
However, global enforcement of such regulations has so far been challenging given the evolving nature of artificial intelligence and digital content.
To address these challenges, 20 leading technology companies, including Adobe, OpenAI, IBM, LinkedIn, Snap and TikTok, signed an agreement on February 16 in Munich to identify political content altered by AI, limit its distribution and improve “cross-industry resilience” against “misleading AI-based election content.”
The C2PA initiative, which includes Adobe, Google, Intel, Microsoft and Sony, among others, aims to further advance the effort. Its focus is to develop standards for content credentials, including “tamper-proof metadata” that reveals full details of a piece of content’s origin, whether in text, image, video or audio format.
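The idea behind tamper-proof provenance metadata can be illustrated with a minimal sketch in Python. This is not C2PA’s actual manifest format (which uses certificate-based signatures embedded in the file itself); it only shows the underlying principle, under an assumed shared signing key, that a manifest binds a cryptographic hash of the content to its stated origin, so any later edit to the content invalidates the record:

```python
import hashlib
import hmac
import json

# Stand-in for a real signing credential; C2PA uses X.509 certificates instead.
SECRET_KEY = b"demo-signing-key"

def attach_manifest(content: bytes, origin: str) -> dict:
    """Build a provenance manifest binding the content's hash to its origin."""
    manifest = {
        "origin": origin,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Return True only if the signature and the content hash both still match."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image = b"original pixels"
m = attach_manifest(image, "generator: example-model")
print(verify(image, m))             # True: untouched content verifies
print(verify(b"edited pixels", m))  # False: any alteration breaks the binding
```

The same mechanism is why the standard is described as tamper-proof: a forger can alter the content or the manifest, but without the signing credential cannot make the two agree again.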
While efforts to identify content sources have been met with intense scrutiny, the dangers of deepfakes – altered versions of existing videos created to convey alternative political messages – have prompted governments and the industry to curb the threat, especially with the increasing integration of artificial intelligence capabilities with mainstream social media and mobile applications.
With India’s ongoing seven-phase general election, public figures including Aamir Khan, Ranveer Singh and Union Home Minister Amit Shah have been forced to file police complaints over doctored videos of altered speeches circulating in their names.
Such developments have prompted many public policy leaders to call for a universal standard that could help track AI-generated content around the world and close the technical gaps that exist between services from different technology companies.
Rohit Kumar, founding partner of public policy research firm Quantum Hub, said: “There is an urgent need to intensify public outreach efforts to address and identify deepfakes. Building social resilience and promoting critical evaluation of all content will be crucial in this election cycle to discourage blind trust in information.”
According to a statement issued by Anna Makanju, vice president of global affairs at OpenAI, the decision to join C2PA could “improve common standards for digital provenance.”
“The existing adoption, support and ongoing commitment to content credentials will provide a critical voice in the working efforts to guide the development of the (common standard),” Andy Jenks, C2PA chairman, said in a statement.
While discussions are ongoing to develop such standards, implementation has so far been a challenge. But last September, in an interview with Mint, Nick Clegg, vice president of global affairs at Meta Platforms, said that policies enabling global communication were “not only possible… they are highly desirable.”
Posted: May 7, 2024, 8:43 pm EST