Unethical brain rot: why are millions watching AI fruits have affairs on TikTok?


If you’ve spent much time on TikTok recently, you may have noticed a strange new type of AI brain rot taking over: fruit dramas.

These AI-generated short dramas feature odd-looking anthropomorphic fruit characters engaging in a range of ethically problematic behaviours. Many storylines, for instance, are based around affairs, racist attitudes, and the sexual assault of women characters.

At face value, the videos come across as so bizarre and grotesque they can be hard to take seriously. That is, until you realise they’re amassing hundreds of millions of views. One account called ai.cinema021, which has launched a parody series called Fruit Love Island, has more than 3 million followers.

This content is, at best, a water-guzzling affront to the art of animation and, at worst, actively helping to normalise racism and misogyny. So why does it have so many fans?

Tapping into the brain’s reward system

These videos exploit core features of human psychology. Combined with addictive platform features (such as infinite scroll), the result is an endless stream of content that keeps us engaged – even if the message is immoral, or simply ridiculous.

Short-form video feeds such as TikTok and Instagram Reels operate on similar principles to those used in gambling systems. The human brain is highly sensitive to novelty and unpredictability, both of which are linked to dopamine signalling in reward learning.

When rewards are delivered unpredictably, behaviour becomes more persistent. This pattern, known as “variable reinforcement”, has long been shown to sustain repeated actions, even when rewards are inconsistent.
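To make the mechanism concrete, here is a minimal sketch (in Python, with illustrative numbers – it does not model any real platform) of a variable-ratio schedule, where each swipe pays off unpredictably but at a steady average rate:

```python
import random

def variable_ratio_feed(mean_ratio=5, n_swipes=30, seed=1):
    """Simulate swiping through a feed where 'hits' (genuinely engaging
    videos) arrive on a variable-ratio schedule: each swipe pays off
    with probability 1/mean_ratio, so any individual reward is
    unpredictable even though hits average one per mean_ratio swipes."""
    rng = random.Random(seed)
    return ["hit" if rng.random() < 1 / mean_ratio else "miss"
            for _ in range(n_swipes)]

# The uneven run of misses and hits is exactly what makes the next
# swipe feel like it might be the one that pays off.
print(variable_ratio_feed())
```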

AI slop videos offer rapid visual novelty and unexpected emotional turns. You don’t know whether the next one will be absurd, funny, tragic, or strangely compelling.

The videos also compress big emotional experiences. A single clip may move from betrayal, to sadness, to revenge, to humour in seconds. This creates emotional volatility, which increases arousal and sustains attention.

Research shows emotionally charged content, especially when it is negative or surprising, is more likely than neutral material to get our attention.

The pull of things that feel ‘kinda wrong’

Many viewers describe a sense that these videos feel “off”. The characters are expressive, but often not fully coherent. The narratives resemble human drama, but lack internal logic.

This relates to the idea of the uncanny valley, where near-human representations produce discomfort. Importantly, these videos rarely become disturbing enough to trigger avoidance. Instead they sit in a middle zone. They are strange enough to provoke curiosity, but not uncomfortable enough to make you stop watching.

This creates cognitive tension. According to cognitive dissonance theory, people are motivated to resolve such inconsistencies. And the way to resolve tension in this case is to keep watching, in search of closure. The mind keeps asking: what is this and where is it going?

We’re also more likely to ignore the unethical messaging because of the format. The characters are highly synthetic. This makes the scenarios feel fictional – even when they reflect real social behaviours.

Research on moral disengagement shows people are more likely to relax ethical judgement when the harm appears abstract or indirect. Fruit videos with themes of betrayal, humiliation or assault can be consumed without the discomfort that would arise if real people were involved.

Influence through many minor interactions

Social media algorithms, much like the generative systems behind AI slop, don’t prioritise meaning or quality. They prioritise content that captures our attention.

Recommendation systems are driven by metrics such as “watch time”, “completion rate” and “interaction”. High engagement leads to greater visibility, which encourages the production of more similar content, creating a feedback loop.
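As a rough illustration (the metric names and weights below are assumptions made for this sketch, not any platform’s actual formula), an engagement-driven ranker might combine those signals like this:

```python
# Toy engagement score: the weights and metrics are invented for
# illustration; real recommender systems are far more complex.
def engagement_score(watch_time_s, video_length_s, interaction_rate):
    completion_rate = min(watch_time_s / video_length_s, 1.0)
    return 0.6 * completion_rate + 0.4 * interaction_rate

candidates = {
    "fruit_drama_ep12": engagement_score(28, 30, interaction_rate=0.9),
    "long_documentary": engagement_score(45, 300, interaction_rate=0.2),
}

# Higher-scoring videos get surfaced more often, earn more engagement,
# and score higher still next time round: the feedback loop.
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

On these toy numbers, a 30-second fruit drama watched almost to completion easily outranks a long documentary sampled for 45 seconds, regardless of which one is “better”.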

From an AI governance perspective, these videos highlight an often overlooked risk: generative systems don’t just produce content; they can gradually shape our behaviours – often without us realising. This aligns with broader concerns in AI ethics about behavioural influence and manipulative design operating at scale.

Reclaiming your time and attention

Avoiding social media entirely is not realistic for many people. But small changes can reduce the pull of AI-generated brain rot.

One approach is to introduce a pause before scrolling to the next video. Even a brief interruption can weaken the reward loop in your brain and make it easier to put your phone down. When you notice yourself thinking “this feels pointless” or “this is strange”, that’s the best time to stop. In some cases a digital detox might be helpful.

You can also retrain your algorithm. Quickly skip or select “not interested” on videos you don’t want to see – and replace passive scrolling with intentional viewing by seeking out specific content.

Finally, create friction. This might involve disabling automatic playback, or limiting your access to the feed by turning off the app’s notifications or removing it from your home screen.

AI fruit videos may seem trivial and absurd, but they reveal something important about the digital environment. As generative systems scale up, they will only get better at capturing and directing our attention. Understanding the psychology behind this is the first step to resisting it.


The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
