Google’s ‘legitimately dangerous’ AI search sparks ridicule, concern

Join us as we return to New York on June 5 to collaborate with executive leaders in exploring comprehensive methods for auditing AI models for bias, performance, and ethical compliance across diverse organizations. Find out here how you can attend.


Should you add glue to your pizza, stare directly at the sun for 30 minutes a day, eat rocks or a poisonous mushroom, treat a snake bite with ice, and jump off the Golden Gate Bridge?

According to answers provided by Google Search’s new “AI Overviews” feature, these obviously absurd and harmful suggestions are not only good ideas, but are among the top results a user sees when performing a search with Google’s flagship product.

What’s going on, where is all this bad information coming from, and why is Google surfacing it at the top of its search results pages right now? Let’s dive in.

What is Google AI Overviews?

In its attempt to catch up with its rival OpenAI and its successful chatbot ChatGPT in the large language model (LLM) chatbot and search game, Google introduced a new feature called “Search Generative Experience” almost a year ago, in May 2023.


It was then described as “an AI-powered snapshot of key information to consider, with links to dig deeper,” and appeared as a new paragraph of text just below the Google Search entry bar, above the traditional list of blue links the user usually gets when searching on Google.

The feature was said to be powered by AI models specialized for search. At the time, it was an opt-in service, and users had to jump through a series of hoops to activate it.

But 10 days ago at Google’s I/O conference, amid a flurry of AI-related announcements, the company announced that the Search Generative Experience had been renamed AI Overviews and would become the default experience in Google Search for all users, starting with those in the United States.

There are ways to disable it or perform Google searches without AI Overviews (e.g. the “Web” tab in Google Search), but users now need to take a few extra steps to do so.
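One workaround users have shared is that the “Web” tab can also be reached directly via a URL parameter (`udm=14`). A minimal sketch in Python, assuming that parameter keeps its currently observed, undocumented meaning:

```python
from urllib.parse import urlencode


def web_only_search_url(query: str) -> str:
    """Build a Google Search URL that requests the 'Web' tab,
    which shows traditional link results without AI Overviews.

    Note: udm=14 is observed behavior, not a documented API,
    and could change or stop working at any time.
    """
    params = {"q": query, "udm": 14}
    return "https://www.google.com/search?" + urlencode(params)


print(web_only_search_url("glue on pizza"))
# e.g. https://www.google.com/search?q=glue+on+pizza&udm=14
```

Some browsers also let users set a URL like this as a custom search engine, making Web-only results the default without any per-search steps.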

Why is Google’s AI overview controversial?

Since Google turned on AI Overviews as the default option for users in the US, some have taken to X to share screenshots of the results.

In some cases, the AI-powered feature displays wildly incorrect, inflammatory, and downright dangerous information.

Even celebrities, including musician Lil Nas X, have joined in:

Other results are more harmless but are still incorrect and make Google look stupid and unreliable:

Poor-quality AI-generated results have taken on a life of their own and even become a meme, with some users doctoring the answers in screenshots to make Google look even worse than the real results already do:

Google calls the AI Overviews feature “experimental,” placing the following text at the end of each result: “Generative AI is experimental,” and linking to a page that describes the new feature in more detail.

On that page, Google writes: “AI overviews can make searching easier by providing an AI-generated snapshot with key information and links to dig deeper… With user feedback and human reviews, we evaluate and improve the quality of our results and products responsibly.”

Will Google Retire AI Overviews?

Some users are now calling on Google to pull the feature, pointing to how the company paused Gemini’s AI image generation feature earlier this year after it created racially and historically inaccurate images, inflaming prominent Silicon Valley libertarians and politically conservative figures such as Marc Andreessen and Elon Musk.

In a statement to The Verge, a Google spokesperson said of the AI Overviews feature that the queries behind the examples users were sharing “are generally very rare and not representative of most people’s experiences.”

Additionally, The Verge reported that: “The company has taken action against violations of its policies… and is using these ‘isolated examples’ to further refine the product.”

However, as some have noted on X, this sounds an awful lot like victim blaming.

Others have raised the possibility that AI developers could be held legally responsible for dangerous results like those shown in AI Overviews:

Importantly, technology journalists and other digitally literate users have noted that Google appears to be using its AI models to summarize content it has previously crawled into its search index, content it did not originate but still trusts enough to present to its users as “key information.”

Ultimately, it’s hard to say what percentage of searches show this misinformation.

But one thing is clear: AI Overviews appears more prone than traditional Google Search to surfacing misinformation from untrustworthy sources, or content posted as a joke that the underlying AI models cannot recognize as such and instead treat as serious.

It remains to be seen whether users actually act on the information provided in these results, but if they do, it could clearly pose risks to their health and safety.

Let’s hope users are smart enough to check alternative sources, say, rival AI search startup Perplexity, which seems to have less trouble displaying correct information than Google’s AI Overviews at the moment (an unfortunate irony for the search giant and its users, given Google’s role in conceiving and articulating the transformer machine learning architecture at the heart of the modern generative AI/LLM boom).