
What Is a Deepfake and How to Detect a Fake Video

[Image: Human face representing facial manipulation with artificial intelligence. Photo by cottonbro studio on Pexels]

In recent years, deepfakes have gone from being a technological curiosity to a real threat. Fake videos of politicians saying things they never said, video call scams with a family member’s face, non-consensual pornography… Artificial intelligence has made creating a deepfake as easy as downloading an app. In this article I explain what a deepfake is and, most importantly, how to detect a fake video before it fools you.


What is a deepfake and how does it work

A deepfake is an image, video, or audio manipulated with artificial intelligence to make it seem like someone said or did something that never happened. The word comes from “deep learning” and “fake.”

How it’s created:

The process is simpler than it seems. Current tools use neural networks that analyze hundreds or thousands of photos of a person to learn their facial features: eye shape, mouth movements, expressions. Then, that model is applied to another video or image.

  1. Photos/videos of the target person are collected.
  2. An AI model is trained with that data.
  3. The model generates a fake face that mimics movements and expressions.
  4. It’s overlaid on another video or generated from scratch.
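As a toy illustration of the last step, the sketch below alpha-blends a generated face patch into a target frame using a soft mask. Real tools do this per pixel with learned masks and color correction; this simplified version (all names are hypothetical) just shows the compositing idea on tiny grayscale "images" represented as lists of rows.

```python
# Toy sketch of step 4: compositing a generated face onto a target frame.
# Mask values range from 0 (keep the original frame) to 1 (use the fake face).

def overlay_face(target, face, mask):
    """Blend `face` into `target` pixel by pixel, weighted by `mask`."""
    out = []
    for t_row, f_row, m_row in zip(target, face, mask):
        out.append([round(m * f + (1 - m) * t)
                    for t, f, m in zip(t_row, f_row, m_row)])
    return out

target = [[10, 10, 10], [10, 10, 10]]        # dark background frame
face   = [[200, 200, 200], [200, 200, 200]]  # bright generated face patch
mask   = [[0.0, 1.0, 0.5], [0.0, 1.0, 0.5]]  # soft edge in the last column

print(overlay_face(target, face, mask))  # → [[10, 200, 105], [10, 200, 105]]
```

The soft (0.5) mask value is what produces the blurry transition at the face edges mentioned later in this article: detectors and careful viewers look precisely at that blend zone.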

Three years ago, creating a decent deepfake required a powerful computer and technical knowledge. In 2026, there are apps that do it in seconds with a single photo. That’s what makes it so dangerous.

Pro tip: when facing any shocking video, the basic rule is simple: don’t share or react until you verify the source. Viral deepfakes are designed to provoke an immediate emotional reaction, and that reaction is exactly what the creator is counting on.


Real cases of dangerous deepfakes

This isn’t about paranoia. Deepfakes are already causing real harm:

Business scams: In 2024, a company lost $25 million when an employee transferred the money after a video call in which everyone except him was a deepfake of the CFO or another executive. The employee suspected nothing because the faces moved and spoke completely naturally.

Political disinformation: Fake videos of candidates making controversial statements have circulated before elections in multiple countries. Some racked up millions of views before being debunked.

AI sextortion: Scammers use social media photos to create intimate deepfakes and then extort victims by threatening to publish them.

Video call scams: Family members receive video calls from supposed children or parents urgently requesting money. The face and voice are deepfakes created from the victim’s public photos.


How to detect a deepfake video

Although the technology has improved a lot, deepfakes still have flaws you can spot if you know what to look for:

Visual signals

Unnatural blinking: Early deepfakes didn’t blink, or blinked robotically. Newer models have improved, but the blinking is sometimes too fast, too slow, or out of step with a natural rhythm.

Face edges: Check where the face meets the neck and hair. Deepfakes often have a blurry edge or unnatural transition between the face and the rest of the body.

Inconsistent lighting: If the face has one type of lighting and the background another, it’s a sign of manipulation. Observe whether shadows on the face match the light direction in the scene.

Skin texture: Deepfakes can make skin look too smooth, plastic-like, or have an unnatural shine.

Mouth movement: Lip sync is one of the hardest things to get right. Watch whether the lips match each word exactly, especially with consonants like B, P, and M.

Teeth and eyes: Teeth may look too uniform or blurry. Eyes may not reflect light from the environment naturally.
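The blinking signal above can even be checked semi-automatically. The sketch below is a simplified heuristic, not a real detector: it assumes you already have a per-frame eye-aspect-ratio (EAR) series from a face-tracking library, counts dips below a threshold as blinks, and flags rates far outside the roughly 8–30 blinks per minute typical of spontaneous blinking. All function names and thresholds here are illustrative assumptions.

```python
def count_blinks(ear_series, fps, threshold=0.2):
    """Count blinks per minute in a series of eye-aspect-ratio (EAR) values.

    A blink is a dip of the EAR below `threshold` followed by recovery.
    """
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            closed = True
            blinks += 1
        elif ear >= threshold:
            closed = False
    minutes = len(ear_series) / fps / 60
    return blinks / minutes if minutes else 0.0

def looks_suspicious(blinks_per_minute, low=8, high=30):
    # Spontaneous blinking is usually ~8-30 per minute; rates far outside
    # that band are a weak red flag, not proof of manipulation.
    return not (low <= blinks_per_minute <= high)

# 30 seconds of video at 30 fps with a single 3-frame blink:
rate = count_blinks([0.3] * 100 + [0.1] * 3 + [0.3] * 797, fps=30)
print(rate, looks_suspicious(rate))  # → 2.0 True (far too few blinks)
```

A real pipeline would compute the EAR from facial landmarks and combine this with the other signals in this section; on its own, one odd metric proves nothing.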

Audio signals

Robotic or flat voice: Although voice cloning has improved, it sometimes lacks natural emotion, pitch variations, or breaths between phrases.

Inconsistent background noise: If the audio has background noise that changes unnaturally or disappears between phrases, it may be generated audio.
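The "flat voice" signal can also be approximated numerically. The sketch below is a crude heuristic under stated assumptions, not how production detectors work: it splits raw audio samples into frames, computes each frame's RMS energy, and reports the coefficient of variation. Natural speech varies a lot in energy; suspiciously flat audio varies little. All names and frame sizes are illustrative.

```python
import math

def frame_rms(samples, frame_len=400):
    """RMS energy of consecutive fixed-length frames of raw audio samples."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames]

def energy_variation(samples, frame_len=400):
    """Coefficient of variation of frame energy (std / mean)."""
    rms = frame_rms(samples, frame_len)
    mean = sum(rms) / len(rms)
    var = sum((r - mean) ** 2 for r in rms) / len(rms)
    return math.sqrt(var) / mean if mean else 0.0

# Synthetic demo: a constant-amplitude tone vs. one whose loudness ramps up.
flat = [math.sin(i / 10) for i in range(4000)]
ramp = [(1 + i / 2000) * math.sin(i / 10) for i in range(4000)]
print(energy_variation(flat) < energy_variation(ramp))  # → True
```

Real detection systems look at far richer features (pitch contours, spectral artifacts, breath sounds), but the idea is the same: measure variation that natural speech should have.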

Detection tools

There are tools that can help you verify:

Tool                          | What it does                                | Cost
Hive Moderation               | Detects deepfakes in images and video       | Free (web)
Deepware Scanner              | Analyzes videos for signs of manipulation   | Free (limited)
Microsoft Video Authenticator | Analyzes photos and videos frame by frame   | Free
AI or Not                     | Detects AI-generated images                 | Free
FotoForensics                 | Error level analysis (ELA) to detect edits  | Free

What to do if you find a deepfake

If you detect or suspect a video is a deepfake:

Don’t share it. Sharing it, even to debunk it, gives it exposure. The best thing is to report it directly.

Report on the platform. All social media platforms have options to report manipulated content. Use them.

Document before reporting. Take screenshots of the video, the account that posted it, and any associated information before it gets deleted.

Contact the victim. If the deepfake affects someone you know, notify them immediately so they can take action.

Report it to the authorities. In many countries, creating and distributing non-consensual deepfakes is illegal. In the EU, the 2024 AI Act requires deepfakes to be labeled as AI-generated content.


How to protect yourself from being a deepfake victim

The best defense is preventive:

Limit public photos of your face: The more photos of you there are online, the easier it is to create a deepfake. Review your social media privacy settings.

Enable two-factor authentication: If someone tries to impersonate you with a deepfake, at least your accounts will be protected.

Establish family code words: With your close family, agree on a verification word or question you’d use in case of a suspicious video call. Something the impostor behind a deepfake couldn’t know.

Don’t respond to video emergencies without verifying: If you receive a video call from a family member urgently requesting money, hang up and call back directly on the number you already have saved.


FAQ: Frequently asked questions

Can a deepfake of me be made from just one photo?

Yes, with current 2026 tools it’s possible to create basic deepfakes from a single frontal photo. The results are less realistic than with multiple photos, but enough to fool someone who isn’t alert.

Are deepfakes illegal?

It depends on the use and the country. Creating non-consensual pornographic deepfakes is illegal in many countries. Using them for scams or identity impersonation is a crime. The EU requires labeling AI-generated content.

Do social media platforms detect and remove deepfakes?

Major platforms (YouTube, Facebook, Instagram, TikTok) have automatic detection systems, but they’re not foolproof. They rely heavily on user reports.

Can I use AI to detect deepfakes?

Yes, there are tools like Hive Moderation or Deepware Scanner that analyze videos and photos. However, creation technology advances faster than detection technology, so they’re not 100% reliable.


Conclusion

Deepfakes are one of the most dangerous applications of artificial intelligence, and they’re no longer science fiction. They’re here, accessible, and causing real harm. The best defense is education: knowing what they are, how they’re created, and what signs to look for. When facing any shocking video, apply the golden rule: verify before sharing. Your healthy skepticism is your best tool against AI-generated disinformation.

