Union IT Minister Ashwini Vaishnaw confirmed the meetings in response to queries from Mint.
Since participating in these meetings, both Google and Meta have released notes on combating AI-altered content and advertising across AI, search and conversational platforms, which include ChatGPT, Facebook, Gemini, Google Search, Instagram, WhatsApp and YouTube, among others. Each company was advised to take a “cautious” approach to AI-generated information, including clearly labelling such content in political ads and limiting the ability of AI to generate search results about key political figures, parties or any opinions related to the upcoming 2024 general elections.
US-based Adobe, which makes Photoshop, one of the world’s most widely used image-editing tools, has taken an equally cautious approach to how its generative AI tool, Firefly, can be used to manipulate or create images for political campaigns, said Andy Parsons, senior director of Adobe’s Content Authenticity Initiative, in an interview with Mint.
The Centre also discussed how the above-mentioned intermediaries (except Adobe), which benefit from safe harbour protections as defined in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, may face criminal liability if they fail to limit the spread of AI-powered disinformation on their platforms. The discussions took place in light of the continued proliferation of AI-generated content on the internet, which is prompting leading technology companies to implement techniques such as watermarking and metadata tagging, all the officials cited above said.
The Centre’s ability to insist on censoring specific keywords stems from a “better understanding of the impact that artificial intelligence can have on public discourse,” Adobe’s Parsons said. “Governments understand that there is no silver bullet to prevent disinformation, and are only now beginning to realize how the Munich accord could impact Big Tech and elections. This could help guide government decisions on how democracies like India can deal with sensitive AI-based disinformation.”
On February 16, 20 companies, including Adobe, Google, Meta, Microsoft, OpenAI and X (formerly Twitter), signed the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” The agreement, signed at the Munich Security Conference, proposed “the implementation of technology to mitigate the risks associated with deceptive AI-based election content, the assessment of… companies’ artificial intelligence models.”
Google and Meta’s election strategy disclosures, released on March 12 and March 19, respectively, detail the agreement. In a post attributed to the “Google India team”, the tech company said it will disclose when AI is used in political ads, flag AI-generated content on YouTube, and use a digital watermark to identify modified content. Referring to its generative artificial intelligence platform Gemini, the post said: “We have begun to place limits on the types of election-related queries that Gemini will return responses to.”
Meta, in its post, stated that the company works with 12 fact-checking teams that will independently verify AI-generated content, and that altered political content will be restricted across all its platforms. “When content is rated as ‘altered’, or we detect it as nearly identical, it appears lower on Facebook and we significantly limit its distribution. On Instagram, altered content is less visible in Feed and Stories. This greatly reduces the number of people who see it,” the post said.
Neither of the two companies responded to Mint’s emailed questions about the details of their meetings with MeitY officials and ministers.
Senior law and policy experts said that existing clauses in both the IT Rules, 2021 and the Indian Penal Code (IPC) could apply both to Big Tech firms and to users promoting such content, depending on the issues under consideration.
“If an intermediary faces a court order due to its inaction in proactively curbing AI-based disinformation on its platforms, it risks violating Rule 7 of the IT Rules, 2021, which would shift tackling AI-based disinformation at election time from a responsibility to a liability for these companies,” said N.S. Nappinai, senior counsel at the Supreme Court and founder of the Cyber Saathi Foundation.
A senior partner at a leading law firm, who requested anonymity because the firm represents one or more of the Big Tech companies mentioned here, added that a key challenge in effectively mitigating AI risks is “the general definition of intermediaries.”
“There is no clear definition of platforms and intermediaries, which leaves our regulatory mechanism with a broad-brush approach to who bears responsibility and accountability. This may pose a challenge to restricting AI content effectively and urgently during the election period,” the lawyer said.
Rule 7, mentioned above, states that if a company fails to exercise due diligence to curb identity breaches or manipulation in their various forms, it shall be “liable for punishment under any law for the time being in force, including the provisions of the Act and the Indian Penal Code”.
Kazim Rizvi, founding director of policy think tank The Dialogue, added that effective criminalization, a step that could help curb disinformation, would require “greater efforts to enforce existing legal frameworks, rather than an overemphasis on creating new laws.”
“The current regulatory environment already provides a comprehensive framework to combat deepfakes, including Rule 3(1)(b) of the IT Rules 2021. The nature of synthetic media is not inherently harmful and holds significant potential in areas such as education, content creation, crime prevention and awareness of government programs. Over-regulation may unintentionally limit these positive applications, thereby reducing the wider benefits of AI-based technological advances. The key, therefore, is to smoothly operationalize existing legal structures, enhance law enforcement capacity, ensure platforms are compliant with regulations, and educate the public on their role in identifying and reporting deepfakes, effectively creating a more informed and proactive digital environment for communities,” Rizvi added.