Google has made changes to its new AI Overviews search feature after it produced strange and misleading results, such as telling people to put glue on pizza and to eat rocks. The company has introduced new rules to stop the artificial intelligence from surfacing bad information.
Last week, Google made AI Overviews available to everyone in the United States. The new feature uses artificial intelligence to summarize information from websites and provide direct answers to search queries. However, people quickly found examples of it giving very strange and incorrect answers.
In one case, when someone searched for “How many rocks should I eat?”, an AI Overview answered that eating rocks may provide health benefits, which is not true. In another example, the AI told people to apply glue to pizza to make the cheese stick better.
Liz Reid, Google’s search chief, wrote a blog post on Thursday explaining what went wrong. She said that, in the case of the rock query, almost no one had searched for it before. One of the few websites devoted to the topic carried a humorous article, but the artificial intelligence took it seriously.
“Before these screenshots went viral, virtually no one was asking Google this question,” Reid wrote. “There is not a lot of online content that seriously considers this question. This is often called a ‘data void’ or ‘information gap,’ where there is a limited amount of high-quality content on a given topic.”
Reid said the pizza glue answer came from a forum post. She explained that while forums often contain useful first-hand information, they can also contain bad advice, which is what the artificial intelligence picked up.
She defended Google, noting that some of the worst alleged examples spread on social media, such as an AI Overview supposedly saying that pregnant women can smoke, were fake screenshots that never actually appeared.
Google is making changes to its search engine’s AI Overviews so it no longer tells people to eat rocks or glue
The search executive said Google’s AI Overviews are designed differently from chatbots because they are integrated with the company’s core ranking systems to surface relevant, high-quality results. For this reason, she argued, they typically do not “hallucinate” information the way other large language models do, and their accuracy rate is comparable to that of Google’s Featured Snippets.
Reid acknowledged that “there have certainly been some strange, misleading or unhelpful AI Overviews,” pointing out areas for improvement. Google has now made more than a dozen changes to try to fix these issues.
Google has improved its artificial intelligence’s ability to recognize, as Reid put it, “nonsensical” questions that it shouldn’t answer. It has also made the AI less reliant on forum and social media posts that could lead it astray.
For serious topics like health and news, Google already had stricter rules about when the AI could provide direct answers. Now it has added even more restrictions, especially for health-related searches.
Google says it will closely monitor AI Overviews going forward and quickly fix any issues. “We will continually improve when and how we display AI overviews and strengthen our safeguards, including for edge cases,” Reid wrote. “We really appreciate your ongoing feedback.”