Hey Agenda readers!

We have a very special announcement today. 

To our great surprise, Kari Lake, the former newscaster turned political candidate and frequent subject of our derision, offered to film a testimonial about how much she likes the Arizona Agenda. 

She has tweeted our work once or twice, but this was unexpected, to say the least. 

So we figured what the hell! Let’s roll with it. 

Before you read on, take a few seconds to watch the video… 

So what did you think? 

At what point did you get it? 

Did you realize it was fake before you even clicked, because the setup was just so implausible?

Or at least, did you spot it before she told you?

Or — like most people we’ve shown this to — did it take a second for your brain to catch up even after our “Deepfake Kari Lake” told you she was fake? 

Welcome to the terrifying new age of artificial intelligence deepfakes. 

It’s only beginning. 

The 2024 election is going to be the first in history where any idiot with a computer can create convincing videos depicting fake events of global importance and publish them to the world in a matter of minutes.

Including us. 

These are the things that keep election officials, cybersecurity experts and national security officials up at night. 

But like our fake Lake said, we’re here to help…

WHY MAKE A DEEPFAKE?

Before we get into the nuts and bolts of how we did this (it’s terrifyingly easy) and how you can spot fake political content (it’s terrifyingly difficult), we should explain why we decided to make a deepfake of Lake. 

When we’ve mentioned this project to people over the past few weeks, they have pretty consistently responded with some variation of a famous line from Jurassic Park.

News organizations generally make an effort to stay away from this kind of stuff.1 For obvious reasons, we called a few lawyers before pulling the trigger on this one.

We believe that if you voters have to wade through this kind of disinformation as part of your civic duty, it’s better to expose you to what’s possible in a contained environment before bad actors bombard you with it. 

They’re already out there:

  • This month, supporters of Donald Trump created and distributed AI-generated photos of him posing with Black supporters in an effort to sway Black voters toward him.

  • Last month, a Democratic political operative admitted to creating a deepfake audio recording of President Joe Biden urging voters to stay home during the New Hampshire Democratic presidential primary.

  • Last year, an account affiliated with the Ron DeSantis campaign posted deepfake images of Donald Trump hugging Anthony Fauci.

We’re out of the age of “fake news” and entering the era of fake reality.

So today, we’re going to teach you how to spot AI deepfakes and explain why the next generation will be even harder to spot. 

And when we say “the next generation,” we mean coming up before November. 

HOW CAN I SPOT IT?

First, it’s helpful to understand a little bit about how this kind of video is made — and the advanced deepfakes that are about to proliferate as new tools become more widely available.

Knowing how it works will help you spot the telltale signs of deepfakes.

Our fake Lake will help explain:

This is pretty simple face-swap technology, combined with some audio cloning and lip-syncing. It’s almost child’s play in the world of AI video. But it’s easy to do with cheap, off-the-shelf tools — and it’s getting incredibly believable.

A talented software engineer friend spent an hour or two making these for us. With a little source video and audio, they can put any face on anyone — to varying degrees of believability — and make them say anything. 

The audio is pretty dead-on.2 Voice-cloning technology is incredibly advanced yet simple to use, making audio one of the hardest deepfakes to spot. Cyber scammers are running wild with this stuff right now.

Video is still a little harder.

The biggest limitation to our video is that we’re putting words in her mouth.

Sometimes, she’s making an “Oh” sound in real life, and we have her making an “Eee” sound. Even if we were to sync up her lips perfectly,3 the facial expression wouldn’t match.

Right now, that’s one of the easiest ways to identify deepfakes.
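The mismatch described above can be boiled down to a toy check. This is purely an illustration of the idea, not a real detector: the viseme labels below are hypothetical, and real systems extract mouth shapes from frames and phonemes from audio with trained models.

```python
# Toy sketch of the lip-sync check described above: compare the mouth
# shapes ("visemes") seen in each video frame against the visemes the
# audio track should produce. All labels here are made up for illustration.

# Visemes the audio should produce (e.g., derived from a transcript).
audio_visemes = ["oh", "oh", "ee", "ah", "oh", "ee"]

# Mouth shapes actually observed in the corresponding video frames.
video_visemes = ["oh", "oh", "oh", "ah", "oh", "oh"]

def mismatch_rate(expected, observed):
    """Fraction of frames where the mouth shape contradicts the audio."""
    pairs = list(zip(expected, observed))
    mismatches = sum(1 for e, o in pairs if e != o)
    return mismatches / len(pairs)

rate = mismatch_rate(audio_visemes, video_visemes)
print(f"mismatch rate: {rate:.0%}")  # a high rate suggests dubbed-on audio
```

In this made-up clip, two of the six frames show an “Oh” mouth while the audio says “Eee” — exactly the giveaway described above.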

We were lucky that Lake films a lot of dead-stare monologues. It’s much harder to lip-sync people in motion. But if we didn’t have that footage, we could always mash up a few shorter videos, which are easier to make convincing anyway.

If you’re watching her talk for a full minute, you can still tell it’s fake. But as a five-second soundbite clip while scrolling, it’s harder to catch.

WHAT ELSE IS COMING?

If we were some unscrupulous candidate, PAC or company with an actual budget, we’d hire an actor who resembles Lake. We could have the actor do or say anything we want, then swap on Lake’s face so seamlessly it would trick even a trained eye. 

That’s basically how Disney remade the young Luke Skywalker in “The Book of Boba Fett.”4 But anybody with a little knowledge and a decent computer can do it nowadays. 

As scary as all that is, it’s still just advanced video editing.

The next generation of AI video won’t require an actor or a set. 

OpenAI, the company behind ChatGPT, recently debuted its new text-to-video tool called Sora. The company was careful to note that it’s “taking important safety steps before this research becomes available in any of our products.” 

But Pandora’s Box has been opened. Less discerning companies are on their way. Soon, anyone with a little tech skill will be able to generate a video of anybody doing anything from scratch nearly flawlessly. 

Let your mind wander for a minute and ponder the possibilities, and the profound implications not just for politicians, celebrities and our legal system, but for our very concept of objective reality.

When you can’t tell what’s fake, how do you know what’s real?

HOW CAN I FIGHT BACK?

Just as a reprogrammed Terminator turned out to be the best defense against Skynet, AI detection software may be one of the best defenses against AI. 

Machine learning algorithms can be taught to distinguish between real and fake content by analyzing footage frame by frame, looking for subtle signs that humans might miss.
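A stripped-down sketch of that frame-by-frame idea, for the curious: real detectors use trained neural networks on actual pixel data, but the core logic is scoring every frame and flagging the ones that deviate sharply from their neighbors. Everything below (the scores, the threshold) is invented for illustration.

```python
# Toy sketch of frame-by-frame analysis, illustrative only: summarize each
# frame as a single number and flag statistical outliers for review.
import statistics

# Pretend each frame is reduced to one score (e.g., average brightness of
# the face region). Frame 4 "flickers" -- a common face-swap artifact.
frame_scores = [0.50, 0.51, 0.50, 0.52, 0.91, 0.51, 0.50]

def flag_outlier_frames(scores, threshold=2.0):
    """Return indices of frames more than `threshold` standard deviations
    from the clip's mean score -- candidates for closer inspection."""
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    return [i for i, s in enumerate(scores) if abs(s - mean) > threshold * stdev]

print(flag_outlier_frames(frame_scores))  # [4]
```

A human scrubbing through the clip might never notice one odd frame out of thirty per second; a program checking every frame can’t miss it.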

But those technologies are still just emerging. 

Of course, lawmakers are also trying to regulate this kind of technology — though few of them understand it.

This election cycle, fighting the new wave of technological disinformation is on all of us.

Your best defense is knowing what’s out there — and using your critical thinking. 

If you know anything about the real Lake, or the Agenda, the premise of the first video should have been a dead giveaway that something was wrong. 

But what would happen if we had made a video of Lake saying, “Let’s storm the Capitol and take back the Governor’s Office that I won”?

Many of you would be so outraged that you wouldn’t question if it was real. 

And some of you would show up at the Governor’s Office, tiki torches in hand. 

We’re in a scary time as a nation. It sometimes feels like we’re one deepfake away from a civil war. 

So please, pause, think and investigate before you get outraged. Check with a reputable news source that you trust. Take a breath before you tweet.

And remember, not everything you see on the internet is real.
