Disaster tends to strike when you let automated systems distribute news, especially on sensitive topics.
The latest case in point: Google's automated news alerts accidentally threw more fuel onto an already painful racial slur controversy that unfolded at this weekend's BAFTA awards.
As Deadline reported, Google pushed out a notification linking to an article with the headline, “How the Tourette’s Fallout Unfolded at the BAFTA Film Award.” The problem was the message appended to the notification, which prompted readers to “see more on n****rs.”
The blunder went viral after Instagram influencer Danny Price posted a screenshot of the alert to his followers, calling it “absolutely f**ked.”
“What an interesting Black History month this has turned out to be,” Price wrote.
Google apologized for the slur after the incident gained news coverage.
“We’re very sorry for this mistake. We’ve removed the offensive notification and are working to prevent this from happening again,” a spokesperson told media outlets.
Understandably, the error prompted fiery discussions online about the irresponsibility of allowing AI systems to report and repackage the news. But Google claims that no AI was involved in the blunder.
In a clarification provided to Deadline, which had originally reported that the notification was AI-generated, Google told the outlet that its systems “recognized a euphemism for an offensive term on several web pages, and accidentally applied the offensive term to the notification text.”
“This system error did not involve AI,” the company emphasized. “Our safety filters did not properly trigger, which is what caused this.”
Nonetheless, the criticisms of AI butting its way into journalism aren’t unwarranted. When Apple launched an AI feature that summarized headlines in 2024, it falsely told users that Luigi Mangione had shot himself, among other falsehoods, leading the BBC to file a formal complaint against the tech giant after the tool repeatedly butchered its stories. Last December, the Washington Post deployed an AI feature for creating personalized podcasts that summarized its stories; the tool immediately invented and misattributed quotes, among other mistakes.
Google, whose AI models are notorious for producing outrageous hallucinations, has itself been guilty of similar blunders. Last month, its Discover feed was caught showing users sensationalized AI-generated headlines in place of publications’ original headlines, The Verge found.
The post Google Apologizes for Sending the Worst Push Notification You Can Possibly Imagine appeared first on Futurism.


