ESET Expert

Deepfaking it: What to know about deepfake‑driven sextortion schemes


Criminals increasingly create deepfake nudes from people’s benign public photos in order to extort money from them, the FBI warns.


The U.S. Federal Bureau of Investigation (FBI) is warning about an increase in extortion campaigns where criminals tap into readily available artificial intelligence (AI) tools to create sexually explicit deepfakes from people’s innocent photos and then harass or blackmail them.


According to its recent Public Service Announcement, the Bureau has received a growing number of reports from victims “whose photos or videos were altered into explicit content.” The videos, featuring both adults and minors, are circulated on social media or porn sites.


Worryingly, this fast-emerging technology enables almost anybody to create spoofed explicit content that appears to feature non-consenting adults, and even children. This, in turn, opens the door to harassment, blackmail and, in particular, sextortion.


Sometimes the victim finds the content themselves, sometimes they are alerted to it by someone else, and sometimes they are contacted directly by the malicious actor. What then happens is one of two things:

  • The bad actor demands payment, or else they’ll share the content with the victim’s friends and family

  • They demand genuine sexually explicit images or videos instead

Another driver for sextortion

The latter may involve sextortion, a form of blackmail where a threat actor tricks or coerces a victim into sharing sexually explicit content of themselves and then threatens to release it unless the victim pays up or sends more images or videos. It’s another fast-growing trend that the FBI has been forced to issue public warnings about over the past year.


In typical sextortion cases, the victim is befriended online by an individual pretending to be someone else, who strings the victim along until they hand over explicit images or videos. In deepfake-powered extortion, the fake images themselves are the means by which victims are held to ransom – no befriending is needed.


On a related note, some criminals run sextortion scams via email, claiming to have installed malware on the victim’s computer that allegedly enabled them to record the individual watching porn. To make the threat – almost always an idle one – seem more credible, they include personal details such as an old email password obtained from a historical data breach. The sextortion scam email phenomenon itself arose from increased public awareness of sextortion.


The problem with deepfakes

Deepfakes are built using neural networks that let their creators convincingly fake a person’s appearance or voice. In the case of visual content, the models are trained to take video frames, compress them with an encoder and then rebuild them with a decoder. This can be used to transpose a target’s face onto someone else’s body and have it mimic that person’s facial movements.
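
To make that encoder-decoder idea more concrete, here’s a minimal, purely illustrative sketch in Python (PyTorch) of the shared-encoder, per-identity-decoder design commonly used for face swapping. The class names, image resolution and layer sizes are assumptions chosen for brevity, not the code of any particular tool:

```python
# Illustrative sketch only: a toy shared-encoder / per-identity-decoder setup.
# Resolution (64x64), latent size and layer widths are arbitrary assumptions.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Rebuilds a 64x64 face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)


# One shared encoder is trained on faces of both people; each decoder learns to
# render one specific person. "Swapping" means encoding a frame of person B and
# decoding it with person A's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_of_b = torch.rand(1, 3, 64, 64)     # stand-in for a video frame of person B
swapped = decoder_a(encoder(frame_of_b))  # person A's face with B's pose/expression
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```

The key point is that the shared encoder learns pose and expression while each decoder learns how to render a particular identity, which is why decoding one person’s frames with another person’s decoder produces a face swap.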


The technology has been around for a while. One viral example was a video of Tom Cruise playing golf, performing magic and eating lollipops, which garnered millions of views before it was removed. The technology has, of course, also been used to insert the faces of celebrities and other people into lewd videos.


The bad news is that the technology is becoming ever more widely available, and it’s maturing to the point where tech novices can use it to pretty convincing effect. That’s why the FBI, among others, is concerned.


How to beat the deepfakers

Once such synthetic content is released, victims can face “significant challenges preventing the continual sharing of the manipulated content or removal from the internet.” This may be harder in the US than in the EU, where the GDPR’s “right to erasure” requires service providers to take down specific content at the individual’s request. Even so, it is a distressing experience for parents and children alike.


In the always-on, must-share digital world, many of us hit publish without a second thought, creating a mountain of personal photos and videos strewn across the internet. Most of this content is innocuous, but much of it is also readily viewable by anyone – and those with malicious intent always seem to find ways to put such visual material, and readily available technology, to ill use. That’s where many deepfakes come in: these days, almost anybody can create synthetic yet convincing content.


It’s better to get ahead of the trend now and limit the potential damage to you and your family. Consider the following steps to reduce the risk of becoming a deepfake victim in the first place, and to minimize the fallout if the worst does happen:


For you:

  • Always think twice before posting images, videos and other personal content. Even the most innocuous content could theoretically be used by bad actors, without your consent, to create a deepfake.

  • Learn about the privacy settings on your social media accounts. It makes sense to make profiles and friend lists private, so images and videos will only be shared with those you know.

  • Always be cautious when accepting friend requests from people you don’t know.

  • Never send content to people you don’t know. Be especially wary of individuals who pressure you to share specific content.

  • Be wary of “friends” who start acting unusually online. Their account may have been hacked and used to elicit content and other information.

  • Always use complex, unique passwords and multi-factor authentication (MFA) to secure your social media accounts.

  • Run regular searches for yourself online to identify any personal information or video/image content that is publicly available.

  • Consider reverse image searches to find any photos or videos that have been published online without your knowledge.

  • Never send any money or graphic content to unknown individuals. They will only ask for more.

  • Report any sextortion activity to the police and the relevant social media platform.

  • Report deepfake content to the platform(s) it was published on.

For parents:

  • Run regular online searches on your kids to identify how much of their personal information and content is publicly available.

  • Monitor your children’s online activity, within reason, and discuss with them the risks associated with sharing personal content.

  • Think twice about posting content of your children in which their faces are visible.

Cheap deepfake technology will continue to improve, democratizing extortion and harassment. Perhaps it’s the price we pay for an open internet. But by acting more cautiously online, we can reduce the chances of something bad happening.



