Non-consensual AI porn doesn’t violate privacy – but it’s still wrong


It rarely takes long before new media technologies are turned to the task of creating pornography. This was true of the printing press, photography, and the earliest days of the internet. It’s also true of generative artificial intelligence (AI).

Face-swapping tech has been around for more than a decade. It almost immediately gave rise to “deepfakes” – fake, yet convincing images and videos of people.

Generative AI has supercharged the spread of deepfake pornography, making it easier than ever to fabricate explicit pictures and videos of others.

And it’s not just celebrities who are victimised. Deepfake nudes of classmates and teachers are rife in schools around the world, sometimes targeting children as young as 11. Image-based abuse is widespread, and victims say the law doesn’t offer enough protection.

So what does the law say about this? And even when not illegal, is it ever ethical to use this technology for one’s private fantasies?

Deepfake pornography and the law

In 2024, Australia amended its criminal code to explicitly include AI-generated porn in the law against distributing sexual material of others without their consent. As a result, digitally manipulated sexual imagery of others now falls within the same legal category as genuine photographs or video footage.

There are gaps in this legislation. Most notably, the relevant offence prohibits transmitting such material via a carriage service (such as the internet). But there is no standalone offence for creating such material. Only sharing is explicitly prohibited.

There is some ambiguity here. Many AI tools used to create sexual imagery are online services. To use them, you send data to the service, which then sends sexual imagery back. It’s unclear whether this counts as “transmitting” sexual material in the relevant legal sense.

Also, the offence requires that the person distributing the sexual material is either aware the target did not consent to its distribution, or is reckless as to whether they consented. But what, exactly, does “reckless” mean?

If Neera created deepfake pornography of Julian without even considering whether he would consent, this would be reckless. But what if Neera claimed that she (wrongly) assumed Julian wouldn’t mind because the footage isn’t a true depiction of him? Would this count as “reckless” in the relevant legal sense? This, too, remains unclear.

Legal doesn’t make it ethical

As the law doesn’t clearly prohibit private creation and use of deepfake pornography, individuals must make their own moral choices.

Moreover, the law has only a limited impact on how people behave online. Internet piracy, for example, is widely known to be illegal yet remains widespread, presumably because people expect they won't be punished for it and don't regard piracy as a serious moral wrong.

By contrast, many people have the strong intuition that even private use of deepfake pornography is wrong. But it’s surprisingly difficult to articulate why. After all, far fewer people morally condemn others for having private sexual fantasies of celebrities, acquaintances or strangers.

If private fantasies are not seriously wrong, is computer-assisted fantasising any different?

The case for privacy

Most commonly, deepfake pornography has been described as a privacy violation. It’s easy to see the appeal of this view. AI outputs appear to depict, in concrete form, what somebody looks like unclothed, or engaged in sex.

Some victims report a sense that others have “seen them naked”, or that the outputs feel like “real images”. This seems more invasive of privacy than an image held only in someone’s imagination.

However, there is a problem with the privacy argument.

AI tools can swap a person’s face onto existing porn footage or generate entirely new imagery from patterns learned during training. What they can’t do is depict what the person is actually like. The deepfakes look convincing because most human bodies are roughly similar in ways that matter for sexualised imagery.

This matters because sexual privacy concerns information that is particular to us – such as identifying details about our bodies, or how we express ourselves sexually.

Assumptions we make based on generic facts about humans are different. You can violate someone’s privacy by sharing specific details from their sexual history. You can’t violate their privacy by announcing they probably have nipples, and probably sometimes have sex.

This distinction is not trivial. AI "nudify" apps trade on the fantasy that they give access to another person's body without their consent. And if we believe deepfake porn offers genuinely personal information about its targets, that belief makes the deepfakes more harmful. It's a misconception that shouldn't be encouraged.

It’s still morally wrong

We are not suggesting that private creation of deepfake pornography is morally benign.

It might not violate a person’s privacy, and it might not break the law. But people also have a broader interest in how they’re depicted and seen by others. Deepfake porn is vivid and can be visually convincing. If someone sees such imagery of you, their view of you can be distorted more than if they were just fantasising in their head.

It is also well established that many people experience others viewing deepfaked sexual depictions of them as psychologically and emotionally ruinous. That alone is sufficient reason to condemn the use of these tools.

While powerful in some respects, AI tools can’t reveal the genuinely private aspects of our sexual lives. But their use for deepfake porn remains a small-minded and morally unjustifiable act of disrespect.


Neera Bhatia has previously received funding from the UK Arts and Humanities Research Council for an unrelated project.

Julian Koplin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
