Facial detection software shown on three people's faces

Deepfakes: Can truth survive if we can't trust our eyes?

Deepfakes are the death of evidence and a call to action

Guest Contributor Holly Brockwell. 

A set of celebrity videos has been pinging its way around the inboxes of adland for the last couple of weeks, to equal amounts of conversation and consternation. But this is no viral campaign from some big-name client: it's a demonstration of Samsung's new technique to realistically animate someone's face from a single photo.

At the same time, a clumsy fake video purporting to show US politician Nancy Pelosi drunk and slurring her words has been doing the rounds on Facebook. Despite its total lack of authenticity, it’s had an enormous number of shares, and a shocking number of people appear to believe it’s real.

The confluence of these two things is disastrous: at the very same time we’ve developed techniques to create incredibly realistic fake videos, we’re seeing how little it actually takes to fool the public.

In the case of the demo videos, Samsung have thankfully used Marilyn Monroe, Salvador Dalí and the Mona Lisa, none of whom are alive to be mistaken for the real thing. The simulations are enormously creepy to watch, partly because there's that strange feeling of reanimating the dead, and partly because it's really impressive considering it's based entirely on a single 2D photo.

Deepfakes (and ‘cheap fakes’, as the Pelosi video was called) are one of the latest developments in a problem going back centuries: how do we know what we can trust, and what we can't? Humans are wired to rely on the evidence of our eyes and ears, but technology long ago reached the point where both can be fooled. How many Photoshopped pictures have made it into our news feeds? How many doctored recordings and cleverly edited videos?

When I was 17, a male 'friend' sent me an image of a female porn star with my head clumsily Photoshopped over the top. Her fake-tanned skin tone didn't match mine, and my hair had been badly smudged to reach the same length, so it wasn't hard to tell it was faked. Now, a teenager trying to threaten someone the same way would have access to far better tools, and someone really skilled could make not just a photo but a whole video – and send it around the school in just a few clicks.

This is where deepfaking starts to become serious. Yes, it's amazingly cool that we can create something from nothing, and manipulate our world to tell new and exciting stories. But every new technology can and will be used for evil, and in this case it's hard to imagine the good applications outnumbering the bad. Will poor Marilyn be dug up to sell women's tights without her approval? Will we see Martin Luther King reanimated to convince people to vote a particular way? Will the dead ever be allowed to rest in peace again?

We're about to enter an era where we can't trust anything we see or hear – no matter how convincing. The only way to survive in such a climate is to fund investigative journalism, teach critical thinking skills, and regulate the social networks – all things we're doing an awful job of right now.

If human history is anything to go by, it'll take a really serious example of a deepfake doing major damage before we'll be forced to act. I for one hope it's more along the lines of Mona Lisa endorsing Dulux than any of the many, much worse possibilities.


Holly Brockwell is a tech journalist, writer and entrepreneur. She’s joining WE at Cannes this month to host our game show, News Makers or News Fakers, an exploration of how brands can operate with purpose in a world of deceptive content.

Join us at Cannes Lions 2019 for News Makers or News Fakers.

For more on the challenges of balancing disruptive tech and ethical corporate leadership, read Joanne Matsusaka's blog about AI and personal data, or Global CEO and Founder Melissa Waggener Zorkin's blog post about the need for ethical leadership in a world in motion.

June 06, 2019

Holly Brockwell