Google finally explains what the hell happened to its AI Overviews.
For those who haven’t caught up, AI Overviews rolled out to Google Search on May 14, graduating from the beta Search Generative Experience and becoming available to everyone in the US. The feature was supposed to put an AI-powered answer at the top of almost every search, but it soon started suggesting that humans put glue on their pizza or follow potentially fatal health advice. While technically still live, AI Overviews seem to be less prominent on the site, with fewer and fewer searches from the Lifehacker team returning answers from Google’s bots.
In a blog post yesterday, Google Search Vice President Liz Reid explained that while the feature was tested extensively before launch, “there’s nothing better than millions of people using it across a multitude of creative searches.” The company admitted that AI Overviews don’t have the best reputation (the blog post is titled “About Last Week”), but also said it had discovered where the failures were occurring and was working to fix them.
“AI Overviews work very differently than chatbots and other LLM products,” Reid said. Rather than “simply generating output based on training data,” they perform “traditional ‘search’ tasks” and surface information from the top results on the web. Therefore, she doesn’t attribute the errors to hallucinations: the model simply misread what was already online.
“We have seen AI Overviews containing sarcastic or trolling content from message boards,” she continued. “Forums are often a great source of genuine, first-hand information, but in some cases they can lead to not very useful advice.” In other words, because the bot can’t tell sarcasm from sincere advice, it may sometimes present the former as the latter.
Likewise, when certain topics have “data gaps,” meaning little has been written about them seriously, Reid said the Overviews inadvertently drew from satirical rather than legitimate sources. To address these failures, the company says it has made improvements to AI Overviews:
We’ve built better mechanisms to detect nonsensical queries that shouldn’t show an AI Overview, and limited the inclusion of satire and humor content.
We’ve updated our systems to limit the use of user-generated content in responses that could offer misleading advice.
We’ve added triggering restrictions for queries where AI Overviews were not proving as helpful.
For topics like news and health, we already have strong guardrails in place. For example, we aim not to show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections.
All of these changes mean AI Overviews probably aren’t going away anytime soon, even as people keep finding new ways to remove Google AI from Search. Despite the social media backlash, the company said that “user feedback shows that people are more satisfied with their search results thanks to AI Overviews,” and went on to mention how committed Google is to “strengthening [its] protections, including for edge cases.”
Nevertheless, there still seems to be some disconnect between Google and its users. Elsewhere in its post, Google called out users for making “nonsensical new searches, seemingly aimed at producing erroneous results.”
Specifically, the company questioned why someone would search for “How many rocks should I eat?” unless they were trying to find data gaps, and while Google said these questions “highlighted some specific areas that we needed to improve,” the implication seems to be that problems mostly arise when people go looking for them.
Google similarly disclaimed responsibility for several viral AI Overview responses, claiming that “dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression” were faked.
There’s definitely a note of defensiveness in the post, even as Google spends billions on AI and employs engineers who were presumably paid to catch these kinds of errors before launch. Google claims that AI Overviews only “misinterpret language” in a “small number of cases,” but we feel sorry for anyone sincerely trying to improve their workout routine who nonetheless followed the “squat” advice.