Video Deepfakes: There’s a New Sheriff in Town


With AI rolling across the tech landscape, we’re entering new territories — a Wild West of content creation where no one has gone before.

In Europe, an AI-generated “interview” with Formula 1 racing legend Michael Schumacher highlights the danger. (Schumacher has not been seen in public since suffering a brain injury in a skiing accident in December 2013.)

The Schumacher family will now take legal action against Die Aktuelle, the German tabloid magazine that failed to mark the interview as an AI creation. The magazine fired the editor responsible, but the damage was already done — and out in public.

In our ProAV jurisdiction, the dangers go beyond AI-generated text. We have to worry about fake videos, of course. And in videoconferencing … fake presence. We’re not talking about avatars or other forms of computer-simulated presence, but outright fakes, created to deceive and usually with criminal intent.

Whether in videos or in a videoconference, deepfakes discombobulate our perception of what is real. They have the potential to cripple our faith in person-to-person video calls. They will certainly jeopardize corporate and enterprise communications; this is a new and urgent item on the cybersecurity list.

All that said, Intel claims to have the solution: FakeCatcher, the world’s first real-time deepfake detector. It can — in milliseconds — detect fake videos with a reported 96% accuracy rate.

“Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,” said Ilke Demir, senior staff research scientist at Intel Labs.

“And then I saw an MIT paper about finding blood flow from videos.”

That paper inspired her to invent the FakeCatcher system (in collaboration with Umur Ciftci from the State University of New York, Binghamton).


While today’s deep learning-based detectors examine raw data for fakes (which requires uploading videos for analysis and then waiting hours for results), FakeCatcher takes a different route: It looks into the pixels of a video in real time for the physical minutiae of “blood flow,” a nearly invisible combination of color and surface changes in our veins caused when the heart pumps blood. FakeCatcher extracts these blood flow signals and translates them into spatiotemporal maps. Then, using deep learning, FakeCatcher can instantly classify a video as real or fake.

How It Really Works

First, the system must be intelligent enough to find the face. From the face, it finds facial landmarks (think noses). From the nose and other facial landmarks, FakeCatcher then digitally extracts a “region of interest,” or map. Using the OpenVINO deep learning toolkit running on Intel hardware, the system analyzes the tiny changes in the mapped region, examining — every 64 or 128 frames — the changes in the skin and the colors of the facial blood vessels that rest just under it.

Welcome to photoplethysmography (PPG), a simple and low-cost optical technique for detecting blood volume changes in microvascular tissue. PPG detection might be THE technology to stop deepfakes. PPG signals are almost impossible to replicate, which makes them a particularly effective way to tell a real human from an AI-generated replica. (Imagine if criminals were forced to gear up with supercomputers to simulate the blood flow unique to each deep-faked face.)
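To make the idea concrete, here is a minimal, hypothetical sketch of PPG-style analysis — not Intel’s actual FakeCatcher pipeline. It averages the green channel over a facial region of interest (the channel where blood-volume changes are most visible), then checks, in windows of 64 frames as described above, whether the signal carries periodic power in the human heart-rate band (roughly 0.7–3 Hz). All function names, the band limits, and the threshold are illustrative assumptions.

```python
import numpy as np

def ppg_signal(frames, roi):
    """Mean green-channel intensity inside a facial region of interest, per frame.

    roi is a hypothetical (y0, y1, x0, x1) crop; a real system would get this
    from face and landmark detection.
    """
    y0, y1, x0, x1 = roi
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def heart_band_power_fraction(signal, fps=30.0):
    """Fraction of (non-DC) spectral power in the 0.7-3 Hz heart-rate band."""
    sig = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    total = spectrum[1:].sum()  # skip the DC bin
    return spectrum[band].sum() / total if total > 0 else 0.0

def looks_real(frames, roi, window=64, threshold=0.5):
    """Toy heuristic: a live face should show pulse-like periodicity in every window."""
    sig = ppg_signal(frames, roi)
    windows = [sig[i:i + window] for i in range(0, len(sig) - window + 1, window)]
    scores = [heart_band_power_fraction(w) for w in windows]
    return float(np.mean(scores)) >= threshold
```

A synthetic clip whose skin brightness pulses at ~1.2 Hz (about 72 bpm) scores high on this heuristic, while a clip with no pulse scores zero. The real system, per the article, feeds far richer spatiotemporal PPG maps into a trained deep-learning classifier rather than a fixed threshold.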

With FakeCatcher, a future videoconferencing kit could issue a warning when it detects a deepfake — and alert you that you’re speaking to a fraudster.

That will bring an interesting refresh to the old slogan: “INTEL INSIDE.”