Here we look at what deepfake videos are, why they’re made, and how they might be quickly and easily detected using a variety of deepfake detection and testing tools.

What Are Deepfake Videos? 

Deepfake videos are a kind of synthetic media created using deep learning and artificial intelligence techniques. Making a deepfake video involves manipulating or superimposing existing images, video, or audio onto other people or objects to create highly realistic (but fabricated) content. The term “deepfake” comes from the combination of “deep learning” and “fake.”

Why Make Deepfake Videos? 

People create deepfake videos for various reasons, driven by both benign and malicious intentions. Here are some of the main motivations behind the creation of deepfake videos: 

  • Entertainment and art. Deepfakes can be used as a form of artistic expression or for entertainment purposes. AI may be used, for example, to create humorous videos, mimic famous movie scenes with different actors, or explore creative possibilities. 
  • Special effects and visual media. In the film and visual effects industry, deepfake technology is often used to achieve realistic visual effects, such as de-aging actors or bringing deceased actors back to the screen (a contentious point at the moment, given the actors’ strike over AI fears). That said, some sportspeople, actors and celebrities have embraced the technology and are allowing their deepfake identities to be used by companies. Examples include Lionel Messi’s likeness being used by Lay’s crisps and Singapore celebrity Jamie Yeo agreeing a deal with financial technology firm Hugosave.
  • Education and research. Deepfakes can be used for research and educational purposes, helping researchers, academics, and institutions study and understand the capabilities and limitations of AI technology. 
  • Memes and internet culture. In recent times, deepfakes have become part of internet culture and meme communities, where users create and share entertaining or humorous content featuring manipulated faces and voices. 
  • Face swapping and avatar creation. Some people use deepfakes to swap faces in videos, such as putting their face on a character in a movie or video game or creating avatars for online platforms. 
  • Satire and social commentary. Deepfake videos are also made to satirise public figures or politicians, creating humorous or critical content to comment on current events and societal issues. 
  • Privacy and anonymity. In some cases, people may use deepfakes to protect their privacy or identity by concealing their face and voice in videos. 
  • Spreading misinformation and disinformation. Unfortunately, deepfake technology has been misused to spread misinformation, fake news, and malicious content. Deepfakes can be used to create convincing videos of individuals saying or doing things they never did, including political figures, leading to potential harm, defamation, and the spread of falsehoods. 
  • Fraud and scams. This is a very worrying area as criminals can now use deepfakes for fraudulent activities, e.g. impersonating someone in video calls to deceive or extort others. For example, deepfake testing company Deepware says: “We expect destructive use of deepfakes, particularly as phishing attacks, to materialise very soon”.

What Are Deepfake Testers? 

With AI deepfakes becoming more convincing and easier to produce thanks to rapidly advancing AI developments and the many capable AI video, image, and voice services widely available online (often for free), tools that can quickly detect deepfakes have become important. In short, deepfake testers are online tools that can be used to scan a suspicious video to discover whether it has been synthetically manipulated. In the case of deepfakes made to spread misinformation and disinformation or for fraud and scams, these can be particularly valuable tools.

How Do They Work? 

For the user, deepfake testers typically involve copying and pasting the URL of a suspected deepfake into the online deepfake testing tool and hitting the ‘scan’ button to get a quick opinion about whether it’s likely to be a deepfake video. 
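As a rough illustration of that workflow in script form, the sketch below (Python) submits a video URL to a deepfake-scanning service and prints the verdict. The endpoint, request fields, and response fields are entirely hypothetical; each real service (Deepware, Sentinel, etc.) defines its own API and authentication scheme.

```python
import requests  # pip install requests

# Hypothetical endpoint and field names, for illustration only;
# real deepfake-testing services each define their own APIs.
API_URL = "https://api.example-deepfake-tester.com/v1/scan"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    API_URL,
    json={"video_url": "https://example.com/suspect-clip.mp4"},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
response.raise_for_status()
result = response.json()

# A typical response might carry a verdict plus a confidence score.
print(result.get("verdict"), result.get("confidence"))
```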

Behind the scenes, there are a number of technologies used by deepfake testers, such as:

  • Photoplethysmography (PPG), which detects the subtle changes in skin colour caused by blood flow; deepfake faces don’t give out these signals (a simplified sketch of this idea appears after this list). This type of detection is more difficult if the deepfake video is pixelated.
  • Eye movement analysis. Deepfake eyes tend to be divergent, i.e. they don’t converge on a central point in the way real human eyes do.
  • Lip sync analysis, which can help highlight a lack of synchronisation between audio and mouth movements, a common giveaway in deepfakes.
  • Facial landmark detection and tracking algorithms to assess whether the facial movements and expressions align realistically with the audio and overall context of the video. 
  • Testing for visual irregularities, e.g. unnatural facial movements, inconsistent lighting, or strange artifacts around the face. 
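To make the PPG idea mentioned above a little more concrete, below is a minimal sketch (in Python, using OpenCV and NumPy) of how a simple blood-flow-style signal might be extracted: the face region is located in each frame, its average green-channel intensity is recorded over time, and the resulting signal is checked for energy at heart-rate frequencies. This is a simplified illustration under assumed conditions (one clearly visible face, a local video file with the placeholder name shown), not a production detector, which would use far more robust region selection and signal processing.

```python
import cv2          # pip install opencv-python
import numpy as np

VIDEO_PATH = "suspect_video.mp4"  # placeholder file name

# Built-in Haar cascade for frontal face detection (ships with OpenCV).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
green_signal = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]                      # take the first detected face
    roi = frame[y:y + h, x:x + w]
    green_signal.append(roi[:, :, 1].mean())   # mean green-channel intensity of the face

cap.release()

if not green_signal:
    raise SystemExit("No faces detected; cannot extract a PPG-style signal.")

# A real pulse shows up as spectral energy concentrated roughly in the
# 0.7-4 Hz band (about 42-240 beats per minute); many deepfake faces lack it.
sig = np.array(green_signal) - np.mean(green_signal)
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)
print("Share of spectral energy in the pulse band:",
      spectrum[band].sum() / max(spectrum.sum(), 1e-9))
```

A low share of energy in that band is only one weak hint among many; real detectors combine several of the signals listed above rather than relying on a single threshold.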

Examples 

Some examples of deepfake testers include: 

  • DeepwareAI – This is only for detecting AI-generated face manipulations and can be used via the website, an API key, or in an offline environment via an SDK. There is also an Android app. Each video is limited to a maximum of 10 minutes.
  • Intel’s FakeCatcher – With a reported 96% accuracy rate, Intel’s deepfake detection platform, introduced last year, was billed as “the world’s first real-time deepfake detector that returns results in milliseconds.” Using Intel hardware and software, it runs on a server and interfaces through a web-based platform.
  • Microsoft’s Video Authenticator Tool – Announced 3 years ago, this deepfake detecting tool uses advanced AI algorithms to detect signs of manipulation in media and provides users with a real-time confidence score. This tool was created using a public dataset from FaceForensics++ and was tested on the DeepFake Detection Challenge Dataset, both of which are key datasets for training and testing deepfake detection technologies.
  • Sentinel – This AI-based protection platform is used by governments, defence agencies, and enterprises. Users upload their digital media through the website or API, whereupon Sentinel uses advanced AI algorithms to automatically analyse the media. Users are given a detailed report of its findings.
  • DeepFake-O-Meter – This is an open platform allowing users to upload a video (maximum file size 50MB), input their email address, and get an assessment of whether the video is fake.
  • DuckDuckGoose DeepDetector Software – This is fully automated deepfake detection software that identifies deepfake videos and images in real time and provides explainable, AI-powered outputs to help users understand how the detection was made.
  • WeVerify – This project aims to develop intelligent human-in-the-loop content verification and disinformation analysis methods and tools whereby social media and web content is analysed and contextualised within the broader online ecosystem. The project offers, for example, a chatbot to guide users through the verification process, an open-source browser plugin, and other open source AI tools, as well as proprietary tools owned by the consortium partners. 

What Does This Mean For Your Business? 

Deepfake videos can be fun and satirical; however, there are real concerns that, with AI advancements, deepfake videos are being made to spread misinformation and disinformation. Furthermore, fraud and scam deepfakes can be incredibly convincing and, therefore, dangerous.

Political interference, such as spreading videos of world leaders and politicians saying things they didn’t say, and using videos to impersonate someone in order to deceive or extort, are now very real problems. With it being so difficult to tell for sure just by watching a video whether or not it’s fake, these deepfake testing tools can have real value, both as a safety measure for businesses and for anyone who needs a fast way to check out their suspicions.

Deepfake testers, therefore, can contribute to cybercrime prevention and to countering fake news. The threat posed by deepfakes is only going to grow, so the hope is that, as deepfake videos become ever more sophisticated, detection tools are able to keep up and reliably tell whether a video is fake.
