Users on a popular low-level hacking forum are advertising deepfake services, claiming to offer the ability to swap a person’s face onto existing video footage for a fee. “Deepfake” refers to a developing technology that uses machine learning and artificial intelligence to generate convincing moving images of a person’s likeness. While the forum users gave no indication of what the videos might be used for, the marketing of these services to hacking communities points to growing demand for a capability that can be used to facilitate fraud and harassment.
Deepfake technology is now freely available online through open-source projects on code-sharing sites, which anyone can download to generate their own videos. Although producing a realistic video is not simple, those with the requisite coding skills can create convincing content. Mishcon de Reya explored some of the implications of deepfake technology in relation to fraud and political interference as part of our Now & Next series with The Economist and the International Fraud Group in 2019.
In one case, a hacking forum user with a relatively good reputation rating claimed to offer the service for $20 per minute of video, saying that a job typically took four days with the right “training data”. Deepfakes use artificial intelligence to “learn” convincing facial movements from real videos and transpose them onto fake ones. Given the nature of the forum, which has over four million members and is dedicated to computer hacking, interested customers are likely to use deepfake videos for low-level “trolling” (harassment) of others, but may also use the service to generate unique content for “e-whoring”, a term for manipulating victims into parting with money and gifts in exchange for virtual sexual and romantic encounters. There is also potential for deepfake videos to be used for more sinister purposes, such as extortion attempts, online revenge porn and harassment. At the time of writing, there were no reviews of the service.
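To make the seller’s “training data” claim concrete, the architecture behind classic face-swap deepfakes is a single shared encoder paired with one decoder per identity: each decoder learns to reconstruct its own person’s face, and swapping decoders at inference time transfers one person’s likeness onto the other’s movements. The PyTorch sketch below is a minimal illustration under assumed parameters (64x64 face crops, placeholder batches, illustrative layer sizes); it is not taken from any forum tool.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder behind
# classic face-swap deepfakes. Illustrative only: tensor sizes, training
# budget and variable names are assumptions, not any real tool's API.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Training: each decoder reconstructs its own person's faces through the
# *shared* encoder. faces_a / faces_b stand in for batches of aligned
# 64x64 crops harvested from the "training data" videos.
faces_a = torch.rand(16, 3, 64, 64)  # placeholder batch for person A
faces_b = torch.rand(16, 3, 64, 64)  # placeholder batch for person B
for step in range(1000):  # real tools train far longer, hence "four days"
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode person B's frames but decode with person A's decoder,
# so A's likeness follows B's pose and expression frame by frame.
swapped = decoder_a(encoder(faces_b))
```

The shared encoder is the key design choice: because both decoders read from the same latent space, that space is forced to capture pose and expression independently of identity, which is what makes the decoder swap produce a coherent result. This is also why the quality of the output depends so heavily on the volume and variety of the training footage.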
There are also growing concerns that deepfake technology could be used in political influence campaigns, prompting the US House of Representatives Intelligence Committee to hold hearings in 2019 on the use of deepfakes in elections. The use of automated bots on social media to proliferate messages, and what Facebook terms “coordinated inauthentic behavior”, has also prompted social media platforms to take steps to stem online disinformation. In January 2020, Facebook announced it would remove “misleading manipulated media”, including deepfakes designed to deceive viewers, having launched a “Deepfake Detection Challenge” dataset in 2019 to accelerate the development of deepfake detection models. In September 2020, Microsoft launched its own detection tool.
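Detection tooling follows a complementary pattern: classifiers trained to tell real frames from manipulated ones. As a rough, hedged illustration of the kind of model the Deepfake Detection Challenge dataset is intended to support (not Facebook’s or Microsoft’s actual systems), the sketch below fine-tunes an off-the-shelf image classifier on labelled face crops; the data wiring is a placeholder.

```python
# Hedged sketch of frame-level deepfake detection: fine-tune a pretrained
# CNN to label individual video frames real vs fake. Dataset wiring and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# frames: (batch, 3, 224, 224) face crops sampled from labelled videos;
# labels: 0 = real, 1 = fake. Random placeholders stand in for a loader.
frames = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
opt.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
opt.step()

# At inference, per-frame scores are typically averaged over a clip to
# give a single "likely manipulated" confidence for the whole video.
model.eval()
with torch.no_grad():
    probs = model(frames).softmax(dim=1)[:, 1]  # P(fake) per frame
    video_score = probs.mean().item()
```

Frame-level classification is only one approach; production systems also look at temporal inconsistencies across frames and at blending artefacts around the swapped face region, but the train-on-labelled-fakes pattern above is the core idea the shared datasets are designed to enable.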
We expect to see an increase in cybercriminals attempting to use deepfakes to create material for harassment, fraud and extortion, increasing the need for businesses to implement identity verification procedures, particularly for large financial transactions. The application of deepfake video to fraud is limited at this point, although there have been reports of audio fakes, generated with machine learning to imitate voices, being used in frauds, and concerns about their application to phishing and business email compromise (BEC).