If artificial intelligence has become the technological Wild West, some up-to-date security features from Microsoft Azure are designed to put a stop to it. Microsoft has made a number of tools available on Thursday that he says he will aid their clients’ AI models prevent hallucinations – or chatbots’ tendency to make things up. These capabilities reside in Azure AI, a cloud-based service that provides support and technology to developers and organizations.
One of such functions is the so-called ground detection, which aims to identify text-based hallucinations.
Microsoft says the up-to-date feature will find and flag chatbot responses for “unsubstantiated material,” or content that doesn’t appear to be rooted in fact or common sense, to aid improve their quality.
In February 2023, Microsoft launched its own chatbot called Copilot. He also has broad partnership with OpenAI which includes Azure OpenAI service, which gives developers the ability to create their own AI applications through direct access to OpenAI models powered by the Azure platform. Azure Artificial Intelligence customers include consulting firm KPMG, telecommunications giant AT&T and Reddit.
Other tools introduced on Thursday include: instant shieldsthat block attacks on generative AI models, such as instant injection or malicious prompts from external documents that direct models away from training and security barriers.
“We know that not all customers have in-depth knowledge of injection attacks or hateful content, so the rating system generates the prompts needed to simulate these types of attacks,” Sarah Bird, the company’s chief artificial intelligence product officer, told The Verge Microsoft. , adding that customers will then see the output and results based on the model’s performance in these simulations.
Microsoft announced that Azure AI will soon also introduce two other monitoring and security features.
These types of problems, while seemingly benign, have caused some awkward (i.e some very problematic) gaffes from artificial intelligence-based text and image generators. Google’s Gemini AI sparked controversy in February after it was generated historically misleading images like racially diverse Nazis. Recently ChatGPT OpenAI completely off track last month with gibberish and hallucinations that left users scratching their heads.