Google earlier this month introduced an AI-generated search results overview tool, which summarizes search results so that users don’t have to click through multiple links to get quick answers to their questions. But the feature came under fire this week after it provided false or misleading information in response to some users’ questions.
For example, several users posted on X that Google’s AI summary said that former President Barack Obama is a Muslim, a common misconception. In fact, Obama is a Christian. Another user posted that a Google AI summary said that “none of Africa’s 54 recognized countries start with the letter ‘K’” — clearly forgetting Kenya.
Google confirmed to CNN on Friday that the AI overviews for both queries had been removed for violating the company’s policies.
“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web,” Google spokesperson Colette Garcia said in a statement, adding that some other viral examples of Google AI flubs appear to have been manipulated images. “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies.”
The bottom of each Google AI search overview acknowledges that “generative AI is experimental.” And the company says it conducts testing designed to imitate potential bad actors in an effort to prevent false or low-quality results from showing up in AI summaries.
Google’s search overviews are part of the company’s larger push to incorporate its Gemini AI technology across all of its products as it attempts to keep up in the AI arms race with rivals like OpenAI and Meta. But this week’s debacle shows how adding AI, which has a tendency to confidently state false information, risks undermining Google’s reputation as a trusted source for finding information online.
Even on less serious searches, Google’s AI overview appears to sometimes provide wrong or confusing information.
In one test, CNN asked Google, “how much sodium is in pickle juice.” The AI overview responded that an 8-fluid-ounce serving of pickle juice contains 342 milligrams of sodium, but that a serving less than half that size (3 fluid ounces) contains more than double the sodium (690 milligrams). (Best Maid pickle juice, for sale at Walmart, lists 250 milligrams of sodium in just 1 ounce.)
CNN also searched: “data used for google ai training.” In its response, the AI overview acknowledged that “it’s unclear if Google prevents copyrighted materials from being included” in the online data scraped to train its AI models, referencing a major concern about how AI firms operate.
It’s not the first time Google has had to walk back the capabilities of its AI tools over an embarrassing flub. In February, the company paused the ability of its AI photo generator to create images of people after it was blasted for producing historically inaccurate images that largely showed people of color in place of White people.
Google’s Search Labs webpage lets users toggle the feature on and off in areas where AI search overviews have rolled out.