Women on being targeted for deepfake pornography: “I was being spoken to like I was a pornstar, it was mortifying”

Loopholes in the law are making it difficult to prosecute deepfake creators, with no bill in place to protect young women.

Deepfakes are a buzzword floating around the internet, and have been since they first appeared in 2017. The AI technology is the ultimate tool in the spread of misinformation. However, the law surrounding it is patchwork at best, and the gaps could have significant consequences if they are not closed. Many are rallying for a new bill to crack down on deepfakes, but in the meantime members of the public are being targeted with no legal leg to stand on, and with very serious consequences for their personal lives. 

Delve into the law and you find loopholes that make prosecuting a creator of deepfakes an uncertain process. For members of the public especially, the law is not sufficient. You do not own the copyright to your own face, so a creator’s defence may rest on grounds of parody and caricature. That is, of course, if you ever manage to identify the creator on the dark web, and even then questions of jurisdiction are likely to arise. 

Against this backdrop, there has been a wave of doctored images targeting women for online pornography. Worryingly, anyone can create a deepfake. They can be made in 30 seconds on free websites, and all you need is a few images. Anonymous sites house disturbing groups that prey on women by playing ‘games’ to collect their information and pictures. If they don’t manage to do so, they concoct a deepfake pornographic video instead. 

One young woman who was targeted by groups of men on these websites spoke to HUNGER about her experience and what she saw when trying to protect herself. “The scariest thing was when it first started,” she said. “I didn’t understand what they wanted or why they chose to target me. It wasn’t just me, there were so many women there and even underage girls. There was a whole page about a young dancing influencer. The images were stamped with whether they had won or lost, and they played a game by forging your trust. They messaged me saying they would take it down, so that you panic and reply. I then realised the person trying to help me was a part of it.”

The struggle extended into her personal life, as she started “reacting to people cautiously, especially online. I was wary of all my followers; I deactivated my accounts; blocked numbers and stopped going on my phone. The fact they got my number and knew about my personal life confirmed it wasn’t just anyone. I got multiple phone calls and messages every day for months and they haven’t stopped, even now. It changed the way I was looking at everyone, including my friends.” 

The feelings this stirred up were difficult for the young woman to deal with. Online forums host some discussion, and there are a few Twitter threads about the issue, but overall the support is close to non-existent. “Embarrassment was the main thing for me. I had to explain to my mum that there was a site with this awful name saying these things about her daughter. Having to explain that these 25 year old incels were out to get you… so embarrassing, so vulgar. I don’t associate myself with that side of sexuality online so when I was being spoken to like I was a pornstar, it was just… mortifying.”

As she looked into legal action to take down the doctored images, she found the law did not extend far enough to help her. The existing patchwork of laws is not advanced enough for the peculiar nature of deepfakes. In June 2022, the European Commission strengthened its code of practice on disinformation to threaten tech companies that fail to take action. A similar model of oversight is reflected in the Online Safety Bill proposed by the UK government, which would make tech companies responsible for policing the darkest corners of their sites, on pain of considerable fines. 

To understand how such an image can be taken down, it helps to understand how one is made. The technology itself is undeniably innovative. A deepfake is created by a deep-learning system that studies a person’s images and voice in order to build a digital clone. A second system then tests this clone against the original to perfect the likeness. The two are pitted against each other until the final deepfake sits within the uncanny valley: it has traits that can be detected, such as a lack of blinking or slightly unsynchronised lip movements, but looks normal to the untrained eye.
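For readers curious about the mechanics, the two-system setup described above is what machine-learning researchers call a generative adversarial network. The sketch below is a minimal, illustrative outline of that idea in Python using PyTorch; it is not the code behind any particular deepfake tool, and all the names, sizes and placeholder data are assumptions made purely for demonstration.

```python
# Minimal, illustrative GAN training loop (PyTorch).
# One network (the generator) learns to produce fakes; a second (the
# discriminator) learns to tell fakes from real data. Sizes and data
# here are toy placeholders, not a real face model.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy dimensions

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how likely an input is to be real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, image_dim)   # stand-in for real training images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Train the discriminator: label real images 1, generated images 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each round of this contest nudges the generator towards fakes the discriminator can no longer distinguish from the real thing, which is why the finished images can fool the untrained eye.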

The finished likeness can then be controlled by its creator and made to say or do whatever instruction is fed in. What may seem like a clever tool has previously drawn backlash for its role in informational warfare in the UK. In 2019, a deepfake video of Keir Starmer stumbling over his words on daytime TV was posted to the Conservative Party’s official Twitter account. The following year, Channel 4 broadcast a deepfake of the Queen delivering an alternative Christmas message, which drew over 200 complaints to Ofcom. 

However, a report by the Digital, Culture, Media and Sport Committee criticised the draft bill, particularly over how it would affect the public, and suggested new primary legislation to tackle deepfakes before the issue gets out of control. For many, though, the issue is already out of hand. Time and time again the law has failed to catch up with new technologies, and this dystopian turn of events for the women being targeted may become everyone’s problem before they know it.

Writer: Ella Chadwick
Banner Image Credit: Unsplash