Deepfake sex crimes surge: AI could create a new generation of abusers

The current dilemma is that technological development has far outpaced legislation.

Deepfake abuse is emerging as a new frontier in violence against women. Recently, a number of suspects in South Korea used artificial intelligence to graft real women's faces onto fabricated bodies without their knowledge or consent, creating pornographic images that were then widely circulated in social media groups.

In recent years, the number of deepfake sex crimes in South Korea has been rising: from 156 cases in 2021 to 180 in 2023, with 297 cases reported in the first seven months of this year. Most of the victims are young women, including students, teachers and soldiers, and nearly two-thirds are teenagers.

The surge in deepfake sex crimes has caused alarm among internet users around the world. But it was not a sudden outburst.

In fact, as early as the beginning of this year, deepfake pornographic images of the American singer Taylor Swift spread widely on social media; before the platform removed them, they had been viewed more than 47 million times. Unfortunately, the incident caused shock mainly within her fan community.

CNN also published an opinion piece in late January this year arguing that "our biggest mistake is to believe that this kind of damage only happens to public figures." In 2019, the security firm Deeptrace published a study finding that 96% of online deepfakes were non-consensual pornography. "This is not a niche issue."

The deepfake cases that have drawn attention this time point to the fact that online sexual crime has entered a new stage. Combing through the discussion around them, we find that this seemingly non-physical violence has sparked such widespread concern because the real worry is a broader, hostile online environment seeping into the real world.

AI tools could breed a generation of young people with a "my wish is AI's command" mentality. "If we are not careful, we will not only create a new generation of victims, but also a new generation of abusers."

These discussions also remind us once again that gender bias, the world's oldest prejudice, has not gone away, and that there is still a long way to go to achieve true equality between the sexes. Technology has always carried risk; it is a tool, but never the answer.

Tracing the deepfake cases: revenge porn?

In some ways, deepfakes are to video what Photoshop is to images: both blur the line between the real and the virtual. Before deepfake technology became widespread, most people were already familiar with "cheapfakes." The latter manipulate existing footage, and the manipulation is limited to slowing down or speeding up parts of a video, or selectively editing and splicing content.

Nancy Pelosi, then Speaker of the US House of Representatives, was a well-known victim: a doctored video slowed her speech to make her appear impaired.

In contrast, deepfakes can swap one face onto another, or transfer one person's expressions to another's face; combined with voice acting or voice cloning, the technology can produce completely fictional yet highly realistic videos at very low cost. One early example that circulated on social media was a deepfake of US President Nixon delivering the contingency speech prepared in case the moon landing ended in disaster.

Before deepfake sex crimes in South Korea drew widespread attention, most mainstream media coverage of the technology focused on politics: the fear that it could spread disinformation, deepen social and political divisions, and gravely undermine democracy. These concerns are valid, but this framing filtered out the technology's more immediate, everyday harms.

According to a recent report by Sensity, a European technology consulting firm, less than 5% of deepfake videos circulating online are political in nature; more than 95% involve non-consensual pornography.

The report also found that some deepfakes focusing on the bodies of female celebrities and ordinary women had drawn more than 100 million views. A separate investigation into fake nude photos generated on Telegram found that photos of more than 680,000 women had been taken from their social media accounts or private conversations.

By one count, there are more than 9,500 websites "specializing in the production of non-consensual intimate images." Before Taylor Swift became the most high-profile victim, the technology had already disrupted many people's lives.
