Not so sexy: How to combat deepfakes and sextortion
- ESET Expert
- Aug 18
- 5 min read

AI deepfakes used for sextortion highlight a growing problem with consumer privacy — but are we ready to forgo our online comforts for security?
While some argue that AI has run its course and the bubble is ready to burst, the challenges surrounding its use will surely remain, even after the marketing machines fall silent.
Some of these challenges relate to AI-generated media content, like cloned voices, doctored images, or videos — also known as deepfakes — which are often used for various criminal behaviors, such as social engineering or extortion.
Among the most troubling is sextortion, which uses explicit AI-generated content, created without the victim's consent, to extort them.
Social engineering and extortion on the rise
It doesn’t take much to see the impact generative AI (GenAI) has had on criminal behavior. According to a security report by SlashNext, phishing (a form of social engineering) has seen a 4,151% rise since the release of ChatGPT in 2022.
The human element remains a prime target: per Verizon’s 2025 Data Breach Investigations Report, it was involved in 60% of business data breaches. Social engineering was the method of choice in roughly a fifth (22%) of those cases, with phishing and pretexting the top two techniques.
Orange Cyberdefense paints a similar picture: the number of cyber extortion victims has grown by a record 77%, with attackers opportunistically going after high-value targets. While the report doesn’t link this to generative AI, it does posit an interesting theory that the technology will likely globalize social engineering, amplifying its future impact.
(Deep)faking it: GenAI a likely culprit?
Nonetheless, the rising use of GenAI correlates readily with the uptick in social engineering, phishing, and extortion, and no one is immune. Consider, for example, the damage a business could suffer if employees acted on instructions from a deepfaked boss.
Just how easy is it to deepfake a person with GenAI? As demonstrated by Jake Moore, global security advisor at ESET, all you need is a large enough voice or image sample to make a clone. While a cloned media sample may still feel slightly off, it can be convincing enough to fool others, and the speed of the technology’s development suggests it will only become more convincing over time.
A close approximation of someone’s likeness, whether voice or looks, carries an allure of authenticity that convinces bad actors of the method’s reliability. All the more so when we consider the darker implications: stealing identities for sexual exploitation.
A world marred by (s)extortion
According to a study from 2023, 96% of online deepfake videos are sexual in nature. Another study found a whopping 550% rise in deepfake content compared with 2019.
Despite the wonders the internet can and does bring to the world, it is widely acknowledged that many developments we all benefit from (such as high-speed internet, video streaming, and online payment systems) stemmed from the adult content industry, or rather, from its audience’s appetite. We also know that online criminals are always looking for new tactics to meet their ultimate aims, which are often financial. With this in mind, is it any surprise that they are embracing the capabilities of GenAI, sordid as their tactics are?
It takes less than 25 minutes to create a minute-long deepfake porn video of anyone, using only a single clear image of the victim’s face. Proportionally, most victims are from South Korea, the U.S., Japan, and the United Kingdom.
In South Korea specifically, the situation with deepfake porn is so severe that people have had to demand that the government take serious action, as the faces of famous singers, actresses, and others are morphed into explicit content without their consent.
In the U.S., the threat is so prevalent that the FBI itself issued an advisory in 2023, warning about schemes in which people’s images or videos, including those of minors and non-consenting adults, are altered into sexual content and then shared on social media and porn websites for harassment and sextortion.
Sextortion vs. sextortion scams
There is a difference between sextortion and sextortion scams. The former is a form of blackmail: a threat actor fools or pressures a victim into sharing sexual content, then threatens to release it unless the victim bows to their demands. In the latter, no actual sexual content needs to exist; a threat actor claims to have spied on the victim through their computer, recording them watching pornography, and demands payment in exchange for deleting the supposed recording.
Children in particular are easy targets for extortion, and the consequences can be tragic; several children have died by suicide as a result of sextortion. As children continue to use the internet, where will we draw the line between what’s convenient and what’s necessary to safeguard them? Are we so preoccupied with accessibility that we forget the importance of privacy?
How to protect against AI-powered sextortion
The single most important piece of advice for preventing sextortion is to limit the number of images you share online, but that addresses only part of the problem.
Due to the constant erosion of privacy across the internet and even in real life, we face externalities that can see our likenesses recorded without our consent. From public social media profiles to CCTV footage to strangers deliberately or inadvertently sharing our images, privacy intrusions surround us.
However, there are a few steps you can take to at least limit your online visibility:
- Audit search results about yourself and request the deletion of particularly problematic ones (sensitive data such as images, addresses, and emails).
- Switch your public social profiles to private to lower your visibility.
- Edit your online pictures, particularly of your face, to make AI reproduction more difficult: consider posting only low-quality images, applying filters or other software treatments, or using avatars (where possible) instead of actual selfies; see the sketch after this list.
- Never send explicit pictures to strangers. If you are comfortable with such activity, consider cropping the images to omit your face, or use services with timed, single-view picture functions where available.
- Establish clear communication with kids about these online dangers, and build trust between parents, teachers, and children so that they won’t feel ashamed to tell a caregiver if they face any cyber risk, including sextortion.
- Report. If you face sextortion demands or fall victim to a sextortion scam, report it to your local police as well as to the online platform where the threat or harmful content appeared.
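As a rough illustration of the “low-quality images and filters” advice above, here is a minimal Python sketch that downscales, blurs, and recompresses a photo before it is shared. It assumes the Pillow imaging library is installed; the file names, function name, and parameter values are illustrative choices, not a vetted anti-deepfake recipe.

```python
# Minimal sketch: degrade a photo before posting so it is a poorer
# source for AI face cloning. Assumes Pillow (pip install pillow);
# file names and parameters below are hypothetical examples.
from PIL import Image, ImageFilter

def degrade_for_posting(src_path: str, dst_path: str, max_side: int = 640) -> None:
    """Downscale, slightly blur, and re-encode an image at low JPEG quality."""
    img = Image.open(src_path).convert("RGB")  # drop alpha so JPEG export works

    # Shrink so the longest side is at most max_side pixels,
    # discarding fine facial detail a cloning model could exploit.
    img.thumbnail((max_side, max_side))

    # A mild Gaussian blur softens identifying features further.
    img = img.filter(ImageFilter.GaussianBlur(radius=1.5))

    # Low JPEG quality layers compression artifacts on top.
    img.save(dst_path, format="JPEG", quality=40)

if __name__ == "__main__":
    degrade_for_posting("selfie.jpg", "selfie_lowres.jpg")
```

Bear in mind that this only raises the bar; a determined attacker can still work with degraded images, so treat it as a complement to the other steps above, not a replacement.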
Extortion-free
Reducing how often your face appears online is a surefire way to limit the possibility of AI-generated deepfakes, especially those that can end up as sextortion material or on porn sites. Trusting that a friend, date, or partner won’t share something (willingly or not) is not enough. Our bodies are our own, and we shouldn’t give them away so freely, even if only as an image.