In the launch report, Microsoft shared its responsible AI practices over the past year, including the release of 30 responsible AI tools that have more than 100 features to support AI developed by its customers.
Corporate Responsible AI Transparency Report is focused on its efforts to responsibly create, support and develop AI products, and it is part of Microsoft’s liabilities after signing a voluntary agreement with the White House in July. Microsoft also announced that in the second half of last year it expanded its artificial intelligence team from 350 to over 400 people, an escalate of 16.6%.
“As a company at the forefront of artificial intelligence research and technology, we are committed to sharing our practices with society as they evolve,” said Brad Smith, vice president and president of Microsoft, and Natasha Crampton, chief artificial intelligence officer. statement. “This report allows us to share our coming-of-age practices, reflect on what we’ve learned, set our goals, hold ourselves accountable, and earn the public’s trust.”
Microsoft says its responsible AI tools aim to “map and measure AI threats” and then manage them through mitigation measures, real-time detection and filtering, and continuous monitoring. In February, Microsoft made it open access red team bonding tool called the Python Risk Identification Tool (PyRIT) for Generative AI, which enables security professionals and machine learning engineers to identify risks in their generative AI products.
In November, the company released a set of tools for assessing generative artificial intelligence in Azure AI Studio, where Microsoft customers build their own generative AI modelsso that customers can evaluate their models against core quality metrics, including grounding – that is, how well the generated model response is consistent with the source material. In March, these tools were expanded to cover security threats, including hateful, violent, sexual and self-harm content, as well as jailbreaking methods such as quick injectionsthat is, when the LLM receives instructions that could leak confidential information or spread misinformation.
Despite these efforts, Microsoft’s responsible AI team had to deal with numerous incidents involving AI models last year. In March, Microsoft’s Copilot AI chatbot told the user that “maybe you have nothing to live for”, after the user, a data scientist at Meta, asked Copilot if he should “just end it all.” Microsoft said the data scientist tried to manipulate the chatbot into generating inappropriate responses, which it denied.
Last October, Microsoft’s Bing image generator allowed users to generate photos of popular characters, including Kirby and Spongebob, flying planes in the Twin Towers. After the Bing AI chatbot (Copilot’s predecessor) was released last February, the user did just that able to get the chatbot to say “Heil Hitler“
“There is no finish line for responsible AI. And while this report does not have all the answers, we strive to share our learnings early and often and engage in tough dialogue about responsible AI practices,” Smith and Crampton wrote in the report.