Welcome to Deepfake Hell

The 21st century so far has been one long, bruising bumper-car ride of extravagant public lies, debunked and discredited, but—to varying degrees—somehow still doing damage: Swiftboating, WMDs, catfishing, Benghazi, anti-vax, Brian Williams, Lance Armstrong, alternative facts, Pizzagate, inaugural crowd size, Theranos, “Mexico will pay for the Wall,” on and on.

In the carnival ride of modern technology, dealing with increasingly sophisticated lies and deceptions is a chronic problem that most accept as the price of living in a free society. But there is a foreboding sense that we may all be in for a steep drop into a Twilight Zone of falsehood and fabrication. Thanks to one emerging technology, we seem to have come to a top-of-the-rollercoaster moment when the future suddenly comes into breathtaking view in all its vertiginous and terrifying detail.

Welcome to the age of “deepfakes.”

In recent months, the buzz about deepfake technology has penetrated nearly every realm of the broader culture—media, academia, tech, national security, entertainment—and it’s not difficult to understand why. In the constant push-pull struggle between truth and lies, already a confounding problem of the Internet Age, deepfakes represent that point in the superhero movie when the cackling bad guy reveals his doomsday weapon to the thunderstruck masses.

“Deepfakes” is a term applied to realistically depicted video or audio content that has been technically altered to present a fundamentally false version of real life. It is a deception powerful enough to pass the human mind’s Turing test, a lie on steroids.

In many cases, it’s done for entertainment value and we’re all in on the joke. In Weird Al Yankovic’s face-swap masterpiece video for “Perform This Way,” a parody of Lady Gaga’s “Born This Way,” nobody actually believes that Weird Al has the body of a female supermodel, however convincingly he makes the case. This month, a hilarious deepfake roundtable discussion featuring creepy-real simulations of Tom Cruise, Robert Downey Jr., Jeff Goldblum, and George Lucas garnered more than a million views in a matter of days.

Nor does a historian have to debunk the idea that Forrest Gump once met President John F. Kennedy, because there’s no danger anyone is going to take that idea seriously. But the technology has now advanced to the point where it can potentially be weaponized to inflict lasting damage to individuals, groups—even economic and political systems. A new system called FSGAN has now emerged that makes the creation of deepfakes a lot easier, eliminating the steep technical learning curve. This technology is evolving week to week.

For generations, video and audio have enjoyed almost absolute credibility. Those days are coming to an abrupt and disorienting end. Whether it’s putting scandalous words into the mouth of a politician, or creating a phony emergency or crisis just to sow chaos, the day is fast approaching when deepfakes can be used for exploitation, extortion, malicious attack, even terrorism.

For a small group of otherwise enormously privileged individuals, that day is already here. If you’re part of that tiny elite of female celebrities deemed sexually desirable on the Internet—Emma Watson, Jennifer Lawrence, Gal Gadot, etc.—you wake up every morning knowing you’re a click or two away from seeing yourself in explicit porn in which you never participated. This horrifying Black Mirror experience is not rape exactly, but it’s a psychic cousin. And if the doomsayers are right, Emma Watson’s present may be the future for the rest of us.

Of course, creating fake videos that destroy another person’s reputation, whether to exact revenge or extract a ransom, is only the most individualized nightmare of deepfakes. If you can destroy one person, why not whole groups or categories of people? Think of the effect of a convincing-but-completely-fake video of an American soldier burning a Koran, or a cop choking an unarmed protester, or an undocumented immigrant killing a U.S. citizen at the border. Real violence can follow fake violence. A deepfake video could cripple the financial markets, undermine the credibility of a free election, or impel an impetuous and ill-informed president to reach for the nuclear football.

ESCAPE FROM THE UNCANNY VALLEY

Ultimately, the story of deepfakes is a story of technology reaching a particular threshold. At least since the dawn of television, generations have grown up developing deeply sophisticated skill sets for interpreting audio-visual imagery. When you spend a lifetime looking at visual information on a screen, you get good at “reading” it, much like a lion “reads” the African savanna. Once video technology advanced to the point of creating realistic imagery out of whole cloth, it ran into a problem known as the “uncanny valley effect,” in which the closer the technology got to reality, the more dissonant small differences appeared to a sophisticated viewer. Deepfakes, as they now exist, are still dealing with that problem, but the fear is that they will soon transcend the uncanny valley and allow for fake videos that are indistinguishable from reality. Cue the great leap forward into the media apocalypse.

Deepfakes are the product of machine learning and artificial intelligence. The applications that create them work from dueling sets of algorithms known as “generative adversarial networks,” or GANs. Working from a giant database of video and still images, this technology pits two algorithms—one known as the “generator” and the other the “discriminator”—against each other. Imagine two rival football coaches, or chess masters, developing increasingly complicated and sophisticated offensive and defensive schemes to answer each other, with the goal of creating an offense that can’t be stopped. The GAN process accelerates a kind of technological “natural selection,” to the point that an algorithm can fool the human eye and/or ear.
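
For readers who want to see what that dueling-algorithms loop looks like in practice, here is a minimal sketch in Python using the PyTorch library. The toy network sizes, learning rates and training step are illustrative assumptions for a generic GAN, not the recipe of any particular deepfake tool:

    import torch
    import torch.nn as nn

    latent_dim, image_dim = 64, 28 * 28  # assumed toy sizes, not real video frames

    # The "generator" turns random noise into a fake image.
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )

    # The "discriminator" scores an image: 1 for real, 0 for fake.
    discriminator = nn.Sequential(
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_images: torch.Tensor) -> None:
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1) Teach the discriminator to tell real images from generated ones.
        fakes = generator(torch.randn(batch, latent_dim)).detach()
        d_loss = (loss_fn(discriminator(real_images), real_labels)
                  + loss_fn(discriminator(fakes), fake_labels))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2) Teach the generator to fool the discriminator: its fakes
        #    should be scored as "real."
        g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                         real_labels)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

Run over millions of images, each pass makes the discriminator a slightly better detective and the generator a slightly better forger, which is the technological “natural selection” described above.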

Naturally, the entertainment industry has been at the forefront of this technology, and the current obsession with deepfakes might have begun with the release in December 2016 of Rogue One, the Star Wars spin-off that featured a CGI-created image of the late Carrie Fisher as a young Princess Leia. A year later, an anonymous Reddit user posted some deepfake celebrity porn videos with a tool he created called FakeApp. Shortly after that, tech reporter Samantha Cole wrote a piece for Vice’s Motherboard blog on the phenomenon headlined “AI-Assisted Fake Porn Is Here and We’re All Fucked.” A couple of months later, comedian and filmmaker Jordan Peele created a video in which he put words in the mouth of former President Obama as a way to illustrate the incipient dangers of deepfakes. Reddit banned subreddits having to do with fake celebrity porn, and other platforms, including Pornhub and Twitter, banned deepfakes as well. Since then, everyone from PBS to Samantha Bee has dutifully taken a turn ringing the alarm bells to warn consumers. The deepfake panic had begun.

WILL THE TRUTH SURVIVE?

Two decades ago, the media universe—a Facebook-less, Twitter-less, YouTube-less media universe, we should add—bought into a tech-inspired doomsday narrative known as “Y2K,” which posited that the world’s computer systems would seize up, or otherwise go haywire in a number of unforeseen ways, the minute the clock turned over to Jan. 1, 2000. Y2K turned out to be a giant nothingburger, and now it’s merely a punchline for comically wrong-headed fears.

In this case, Y2K is worth remembering as an illustration of what can happen when the media pile on to a tech-apocalypse narrative. The echo effect can inflate a perceived threat, and even create a monsters-under-the-bed problem. In the case of deepfakes, the media freak-out might also draw attention away from a more nuanced approach to a coming problem.

Riana Pfefferkorn is the associate director of surveillance and cybersecurity at Stanford’s Center for Internet and Society. She’s been at the forefront of thinking through what deepfakes will mean for the legal system. “I don’t think this is going to be as big and widespread a thing as people fear it’s going to be,” she says. “But, at the same time, there’s totally going to be stuff that none of us see coming.”

The ramifications of deepfakes showing up in the legal ecosystem are profound. Video and audio have been used in legal proceedings for decades, and the veracity of such evidence has rarely been challenged. “It’s a fairly low standard to get (video and audio evidence) admitted so far,” says Pfefferkorn. “One of the things I’m interested in exploring is whether deepfake videos will require changing the rules of evidence, because the threshold now is so low.”

But deepfakes won’t only have the potential to wreak havoc in the evidentiary stages of criminal and civil court. They could also have an impact on probate and securities law—to fake a will, for example, or to get away with fraud. Pfefferkorn is calling on the legal system to make its adjustments now, and she’s confident it will. “When (Adobe’s) Photoshop came out in the ’90s,” she says, “a lot of news stories then talked about the doctoring of photos and predicted the downfall of truth. The courts figured that out and adapted, and I think we’ll probably survive this one as well.”

What may be more troubling is the other side of the deepfakes conundrum—not that fake videos will be seen as real, but that real ones will be seen as fake. It’s a concept known as the “Liar’s Dividend,” a term championed by law professors Danielle Citron and Robert Chesney, who’ve been the leading thinkers in academia on the deepfakes issue. “One of the dangers in a world where you can accuse anything of being fake is the things you can get people to disbelieve,” says Pfefferkorn. “If people are already in this suspicious mindset, they’re going to bring that with them in the jury box.”

Andrew Grotto is a research fellow at Stanford’s Hoover Institution and a research scholar at the Center for International Security and Cooperation, also at Stanford. Before that, he served as the senior director for cybersecurity policy at the White House in the Obama and Trump administrations. Grotto’s interest in deepfakes is how they will affect the electoral process and political messaging.

“If 9/11 is a 10, and, let’s say the Target breach (a 2013 data breach at the retailer that affected 40 million credit-card customers) is a 1,” he says, “I would put this at about a 6 or 7.” Grotto has been to Capitol Hill and to Sacramento to talk to federal and state lawmakers about the threats posed by deepfakes. Most of the legislators he talked to had never heard of deepfakes, and were alarmed at what the technology could mean for their electoral prospects.

“I told them, ‘Do you want to live and operate in a world where your opponents can literally put words in your mouth?’ And I argued that they as candidates and leaders of their parties ought to be thinking about whether there’s some common interest to develop some kind of norm of restraint.”

Grotto couches his hope that deepfakes will not have a large influence on electoral politics in the language of the Cold War. “There’s almost a Mutually Assured Destruction logic to this,” he says, applying a term used to explain why the U.S. and the Soviet Union didn’t start a nuclear war against each other. In other words, neither side will use such a powerful political weapon because they’ll be petrified it will then be used against them. But such a notion seems out of tune in the Trump era. And political parties don’t have to use deepfake videos in campaigns when there are countless partisan sources, many of them sketchy, who will do it for them.

One of the politicians that Grotto impressed in Sacramento was Democrat Marc Berman, who represents California’s 24th District (which includes Palo Alto and the southern half of the peninsula) in the state Assembly. Berman chairs the Assembly’s Elections and Redistricting Committee, and he authored a bill that would criminalize the creation or distribution of any video or audio recording that is “likely to deceive any person who views the recording,” or that is likely to “defame, slander or embarrass the subject of the recording.” The new law would create exceptions for satire, parody or anything that is clearly labeled as fake. The bill (AB 602) was passed by the Legislature and signed by Gov. Newsom in October.

“I tell you, people have brought up First Amendment concerns,” says Berman in a phone interview. “But, I have to say: Does this bill really bring up First Amendment concerns? Now, I don’t have an answer to that. But the First Amendment is freedom of speech—‘I can say what I want to say.’ It’s been 11 years since I graduated law school, but I don’t recall freedom of speech meaning you are free to put your speech in my mouth.”

The Electronic Frontier Foundation, which for almost three decades has fought government regulation in the name of online civil liberties, is pushing back against any legislative efforts to deal with deepfakes. In a media statement, the EFF conceded that deepfakes could create mischief and chaos, but contended that existing laws pertaining to extortion, harassment and defamation are up to the task of protecting people from the worst effects.

Berman, however, is having none of that argument: “Rather than being reactive, like during the 2016 (presidential) campaign when nefarious actors did a lot of bad things using social media that we didn’t anticipate—and only now are we reacting to it—let’s try to anticipate what they’re going to do and get ahead of it. This way, we have policy and law that is updated concurrently with technology, instead of always behind technology.”

FAKE FUTURE

Are there potentially positive uses for deepfake technology? In the United States of Entertainment, the horizons are boundless, not only for all future Weird Al videos and Star Wars sequels, but for whole new genres of art yet to be born. Who could doubt that Hollywood’s CGI revolution will continue to evolve in dazzling new directions? Maybe there’s another Marlon Brando movie or Prince video in our collective future.

The Electronic Frontier Foundation touts something called “consensual vanity or novelty pornography.” Deepfakes might allow people to change their physical appearance online as a way of protecting their identity. There could be therapeutic benefits for survivors of sexual abuse or PTSD to have video-conferencing therapy without showing their faces. Some have speculated about educational uses—creating videos of, say, Abraham Lincoln reading his Gettysburg Address and then regaling Ms. Periwinkle’s fifth-grade class with stories from his youth.

Andrew Grotto at Stanford envisions a kind of “benign deception” application that would allow a campaigning politician to essentially be in more than one place at a time, as well as benefits in get-out-the-vote campaigns.

But here at the top of the rollercoaster, the potential downsides look much more vivid and prominent than any speculative positive effect. Deepfakes could add a wrinkle of complication to a variety of legitimate pursuits. For example, in the realm of journalism, imagine how the need to verify some piece of video or audio could slow down or stymie a big investigation. Think of what deepfakes could do to online dating, a scene already rife with every level of fakeness. Do video games, virtual-reality apps and other online participatory worlds need to be any more beguiling?

If the Internet Age has taught us anything, it’s that trolls are inevitable, even indomitable. The last two decades have given us a dispiriting range of scourges, from Alex Jones to revenge porn. Trolling has even proven to be a winning strategy for getting into the White House. Behind all the media attention devoted to deepfakes in recent months is the sneaking suspicion that trolls are getting an effective and devastating new weapon to torment society in ways maybe even they haven’t conceived of yet.

And the Emma Watson Effect might not even be the worst of it.

“Let’s keep walking down the malign path here,” says Grotto in his Stanford office, speculating about how deep the wormhole could go. He brings up the specter of what he calls “deepfake for text,” which he says is now inevitable. What that means is that one day, deepfakes will be interactive. They could create a totally fake two-way conversation. What is known about the process of radicalization leading to involvement with extremist groups is that interactive conversations are the most effective means of recruitment.

“People watch videos, sure,” says Grotto. “But mostly what really gets people over the edge is chatting with someone who is trying to make the case for them to join the cause. Instead of passively watching YouTube or exchanging messages on Facebook, you now have the ability to create a persona to sit in front of somebody for hours and try to persuade them of this or that. Imagine what an interactive deepfake, targeted at individuals based on data collection, could do in the hands of ISIS, or some white supremacist group, or pick your bad guy.”

KEEPING IT REAL

In addressing the threat of deepfakes, most security experts and technologists agree that there is no vaccine, no silver bullet. Watermarking technology could be inserted into the metadata of audio and video material. Even in the absence of legislation, app stores would probably require that such watermarking be included in any deepfake app. But how long would it be before someone figured out a way to fake the watermark? There is some speculation that celebrities and politicians might opt for 24/7 “lifelogging,” digital auto-surveillance of their every move, to give them an alibi against any fake video.
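
To make the watermarking idea concrete, here is a minimal sketch in Python of one way a capture device could sign a media file’s contents so that anyone holding the record could check it later. The key handling, file paths and function names are illustrative assumptions; a real provenance scheme would be far more elaborate, and it is only as trustworthy as the secrecy of the key:

    import hashlib
    import hmac
    from pathlib import Path

    # Assumed: a secret key held by the capture device or a signing authority.
    SECRET_KEY = b"device-specific-signing-key"

    def sign_media(path: str) -> str:
        """Return an HMAC-SHA256 tag over the raw media bytes."""
        data = Path(path).read_bytes()
        return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

    def verify_media(path: str, expected_tag: str) -> bool:
        """True only if the file still matches the tag recorded at capture time."""
        return hmac.compare_digest(sign_media(path), expected_tag)

The weakness flagged above is baked right in: anyone who obtains the key can stamp a fabricated file with a perfectly valid-looking tag.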

Deepfakes are still in the crude stages of development. “It’s still hard to make it work,” says Grotto. “The tools aren’t to the point where someone can just sit down without a ton of experience and make something (that is convincing).”

He says the 2020 presidential election may be plagued by many things, but deepfakes probably won’t be one of them. After that, though? “By 2022, 2024, that’s when the tools get better,” he says. “That’s when the barriers to entry really start to drop. That’s when you’ll see more malicious applications in other domains, where conceivably a 16-year-old kid could do a deepfake of a school shooting.”

Now is not a time to panic, he says. It’s a time to develop policies and norms to contain the worst excesses of the technology, all while we’re still at the top of the roller coaster. Grotto says convincing politicians and their parties to resist the technology, developing legal and voluntary measures for platforms and developers, and labeling and enforcing rules will all have positive effects in slowing down the slide into deepfake hell.

“I think we have a few years to get our heads around it and decide what kind of world we want to live in, and what the right set of policy interventions look like,” he says. “But talk to me in five years, and maybe my hair will be on fire.”
