Google’s AI Is Destroying Search, the Internet, and Your Brain


Yesterday the Pew Research Center released a report, based on the internet browsing activity of 900 U.S. adults, which found that Google users who encounter an AI summary are less likely to click on links to other websites than users who don’t. To be precise, only 1 percent of users who encountered an AI summary clicked the link to the page Google was summarizing. 

Essentially, the data shows that Google’s AI Overview feature, introduced in 2023 to replace the “10 blue links” format that turned Google into the internet’s de facto traffic controller, will end the flow of all that traffic almost completely and destroy the business of countless blogs and news sites in the process. Instead, Google will feed people into a faulty AI-powered alternative that is prone to errors, and it presents those errors with so much confidence that we won’t even be able to tell they are errors. 

Here’s what this looks like from the perspective of someone who makes a living finding, producing, and publishing what I hope is valuable information on the internet. On Monday I published a story about Spotify publishing AI-generated songs from dead artists without permission. I spent most of my day verifying that this was happening, finding examples, contacting Spotify and other companies responsible, and talking to the owner of a record label who was impacted by this. After the story was published, Spotify removed all the tracks I flagged and removed the user who was behind this malicious activity, which resulted in many more offending, AI-generated tracks falsely attributed to human artists being removed from Spotify and other streaming services. 

Many thousands of people think this information is interesting or useful, so they read the story, and then we hopefully convert their attention to money via ads, but primarily by convincing them to pay for a subscription. Cynically aiming only to get as much traffic as we can isn’t a viable business strategy because it compromises the very credibility and trustworthiness that we think convinces people to pay for a subscription, but what traffic we do get is valuable because every person who comes to our website gives us the opportunity to make our case. 

The Spotify story got decent traffic by our standards, and the number one traffic source for it so far has been Google, followed by Reddit, “direct” traffic (meaning people who come directly to our site), and Bluesky. It’s great that Google sent us a bunch of traffic for that, but we also know that it should have sent us a lot more, and that it did a disservice to its own users by not doing that. 

We know it should have sent us more traffic because when you search for “AI music spotify” on Google, the first thing I see is a Google Snippet summarizing my article. But that summary isn’t from, nor does it link to, 404 Media; it’s a summary of and a link to a blog on a website called dig.watch that reads like it was generated by ChatGPT. The blog doesn’t have a byline and reads like the endless stream of AI-generated summaries we saw when we created a fully automated AI aggregation site of 404 Media. Dig.watch itself links to another music blog, MusicTech, whose post is an aggregation of my story that links to it in the lede. 

When I use Google’s “AI mode,” Google provides a bullet-pointed summary of my story, but instead of linking to it, it links to three other sites that aggregated it: TechRadar, Mixmag, and RouteNote. 

Gaming search engine optimization in order to come up as the first result on Google regardless of merit has been a problem for as long as Google has been around. As the Pew research makes clear, AI Overview just ensures people will never click the link where the information they are looking for originates. 

We reserve the right to whine about Google rewarding aggregation of our stories instead of sending the traffic to us, but the problem here is not what is happening to 404 Media, which we’ve built with the explicit goal of not living or dying by the whims of any internet platform we can’t control. The problem is that this is happening to every website on the internet, and if the people who actually produce the information that people are looking for are not getting traffic, they will no longer be able to produce that information. 

This ongoing “traffic apocalypse” has been the subject of many articles and opinion pieces saying that SEO strategies are dead because AI will take the ad dollar scraps media companies were fighting over. Tragically, what Google is doing to search is not only going to kill big media companies, but tons of small businesses as well.

Luckily for Google and the untold number of people who are being fed Snippets and AI summaries of our Spotify story, so far that information is at least correct. That is not guaranteed to be the case with other AI summaries. We love to mention that Google’s AI summaries told its users to eat glue whenever this subject comes up because it’s hilarious and perfectly encapsulates the problem, but it’s also an important example because it reveals an inherently faulty technology. More recently, AI Overview insisted that Dave Barry, a journalist who is very much alive, was dead.

The glue incident went viral and was embarrassing for Google, but the company still dominates search, and it’s very hard for people to meaningfully resist that dominance given our limited attention spans and the fact that Google is the default search option in most cases. AI Overviews are still a problem, but it’s impossible to keep this story in the news forever. Eventually Google shoves the feature down users’ throats, and there’s not much they can do about it.

Google’s AI summaries told users to eat glue because they were pulling from a Reddit post that jokingly told another user to put glue on their pizza so the cheese wouldn’t slide off. Google’s AI didn’t understand the context and served that answer up deadpan. This mechanism doesn’t only produce other similar errors; it is also vulnerable to abuse. 

In May, an artist named Eduardo Valdés-Hevia reached out to me after he discovered he had accidentally fooled Google’s AI Overview into presenting a fictional theory he wrote for a creative project as if it were real. 

“I work mostly in horror, and my art often plays around with unreality and uses scientific and medical terms I make up to heighten the realism along with the photoshopped images,” Valdés-Hevia told me. “Which makes a lot of people briefly think what I talk about might be real, and will lead some of them to google my made-up terms to make sure.”

In early May, Valdés-Hevia posted a creepy image and short blurb about “The fringe Parasitic Encephalization Theory,” which “claims our nervous system is a parasite that took over the body of the earliest vertebrate ancestor. It captures 20% of the body’s resources, while staying separate from the blood and being considered unique by the immune system.”

Someone who saw Valdés-Hevia’s post Googled “Parasitic Encephalization” and showed him that AI Overview presented it as if it were real. 

Valdés-Hevia then decided to check whether he could get Google’s AI Overview to similarly present other made-up concepts as if they were real, and found that it was easy and fast. For example, Valdés-Hevia said it took only two hours after he and members of his Discord server started posting about “AI Engorgement,” a fake “phenomenon where an AI model absorbs too much misinformation in its training data,” for Google’s AI Overview to start presenting it uncritically. It still does so at the time of writing, months later. 

Other recent examples Valdés-Hevia flagged to me, like the fictional “Seraphim Shark,” were at first presented as real by AI Overview, but have since been updated to say they are “likely” fictional. In some cases, Valdés-Hevia even managed to get AI Overview to conflate a real condition—Dracunculiasis, or guinea worm disease—with a fictional condition he invented, Dracunculus graviditatis, “a specialized parasite of the uterus.” 

Valdés-Hevia told me he wanted to “test out the limits and how exploitable Google search has become. It’s also a natural extension of the message of my art, which is made to convince people briefly that my unreality is real as a vehicle for horror. Except in this case, I was trying to intentionally ‘trick’ the machine. And I thought it would be much, much harder than just some scattered social media posts and a couple hours.” 

“Let’s say an antivaxx group organizes to spread some disinformation,” he said. “They just need to create a new term (let’s say a disease name caused by vaccines) that doesn’t have many hits on Google, coordinate to post about it in a few different places using scientific terms to make it feel real, and within a few hours, they could have Google itself laundering this misinformation into a ‘credible’ statement through their AI overview. Then, a good percentage of people looking for the term would come out thinking this is credible information. What you have is, in essence, a very grassroots and cheap approach to launder misinformation to the public.”

I wish I could say this is not a sustainable model for the internet, but honestly there’s no indication in Pew’s research that people understand how faulty the technology that powers Google’s AI Overview is, or how it is quietly devastating the entire human online information economy that they want and need, even if they don’t realize it.

The optimistic take is that Google Search, which has been the undisputed king of search for more than two decades, is now extremely vulnerable to disruption, as people in the tech world love to say. Predictably, most of that competition is now coming from other AI companies that think they can build better products than AI Overview and become the new, default, AI-powered search engine for the AI age. Alternatively, as people get tired of being fed AI-powered trash, perhaps there is room for a human-centered and human-powered search alternative: products that let people filter out AI results or that don’t have an ads-based business model.

But it is also entirely possible, and maybe predictable, that we’ll continue to knowingly march towards an internet where drawing the line between what is and isn’t real is not profitable “at scale” and therefore not a consideration for most internet companies and users. Which doesn’t mean it’s inconsequential. It is very, very consequential, and we are already knee deep in those consequences.
