Author: Matt O’Brien

Google said Friday that it has made “more than a dozen technical improvements” to its artificial intelligence systems after it was discovered that its overhauled search engine was spitting out erroneous information.
In mid-May, the tech company overhauled its search engine so that it now frequently displays AI-generated summaries above the search results. Soon after, social media users began sharing screenshots of its most bizarre answers.
Google has largely defended its AI overviews feature, saying it is typically accurate and was extensively tested beforehand. But Liz Reid, Google’s head of search, acknowledged in a blog post Friday that “there have certainly been some weird, wrong or unhelpful AI overviews.”
While many of the examples were silly, others were dangerous or harmful falsehoods. Adding to the furor, some people also made fake screenshots purporting to show even more outlandish answers that Google never generated. Several of those fakes were also widely shared on social media.
Asked last week by The Associated Press which wild mushrooms are safe to eat, Google responded with a lengthy AI-generated summary that was mostly technically correct, but “there’s a lot of information missing that could be potentially sickening or even fatal,” said Mary Catherine Aime, a professor of mycology and botany at Purdue University, who reviewed Google’s response to the AP’s query.
For example, information about mushrooms known as puffballs was “about right,” she said, but Google’s overview emphasized looking for those with solid white flesh – which many potentially deadly puffball mimics also have.
In another widely circulated example, an AI researcher asked Google how many Muslims have been president of the United States, and it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”
Google said it took immediate action last week to prevent a repeat of the Obama error because it violated the company’s content policies.
In other cases, Reid said Friday, the company is working to make broader improvements, such as better detection of “nonsensical queries” – for example, “How many rocks should I eat?” – that should not be answered with an AI summary.
The AI systems have also been updated to limit the use of user-generated content – such as social media posts on Reddit – that could offer misleading advice. In one widely shared example last week, Google’s AI overview drew on a satirical Reddit comment suggesting people use glue to get cheese to stick to pizza.
Reid said the company has also added more “triggering restrictions” to improve the quality of answers to certain queries, such as those concerning health.
But it is not clear exactly how that works or under what circumstances. On Friday, the AP again asked Google which wild mushrooms are safe to eat. AI-generated answers are inherently random, and the newer response was different but still “problematic,” said Aime, the Purdue mushroom expert, who is also president of the Mycological Society of America.
For example, its statement that “chanterelles look like seashells or flowers” is not true, she said.
Google’s AI overviews are designed to give people reliable answers to the information they are searching for as quickly as possible, without having to click through a ranked list of website links.
But some AI experts have long warned Google against ceding its search results to AI-generated answers, which could perpetuate bias and misinformation and endanger people looking for help in an emergency. AI systems known as large language models work by predicting what words would best answer the questions asked of them, based on the data they have been trained on. They are prone to making things up – a widely studied problem known as hallucination.
In her blog post Friday, Reid argued that Google’s AI overviews “don’t typically hallucinate or make things up in the way that other” products built on large language models can, because they are more closely integrated with Google’s traditional search engine and only show what is backed up by top web results.
“When AI gets it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting the nuances of language on the Internet, or not having a lot of useful information available,” she wrote.
But that kind of information retrieval is supposed to be Google’s core business, said computer scientist Chirag Shah, a professor at the University of Washington who has cautioned against the push to turn search over to AI language models. Even if Google’s AI feature is technically not making up things that don’t exist, it is still bringing back false information – whether AI-generated or human-made – and incorporating it into its summaries.
If anything, the situation is worse, Shah said, because for decades people have trusted at least one thing from Google – its search.