
Deepfake debate: How AI-generated ads could affect elections

ATLANTA — With primary elections for the 2026 midterms just months away, Georgia lawmakers are considering ideas to combat a new generation of AI videos that could manipulate voters and swing elections.

The proposals come as AI-generated video is becoming more convincing and easier to produce than ever.

“Technology is far surpassing regulation in terms of the kinds of experiments we’re running on ourselves,” Georgia Tech professor Brian Magerko told Channel 2’s Richard Elliot. “If you’re willing to spend 10 bucks for a subscription to a service, you can clone anyone’s voice — their likeness. You can make them look and say whatever you want them to say with very, very little training or technical know-how.”

Magerko explained that the technology has grown so fast that it has been difficult for lawmakers to catch up.

While Georgia doesn’t yet have any laws banning the use of this technology for political purposes, a Republican state lawmaker has been trying since last March to find ways to limit its impact on elections.

“We’re not trying to take away people’s free speech and their ability to do something,” said state Sen. John Albers, “but at the same point, you need to make sure people understand that this is not true and accurate.”

AI-generated videos have already appeared in Georgia campaign advertisements.

Amid the government shutdown last November, an AI-generated animation of Sen. Jon Ossoff’s portrait was posted on social media by an account affiliated with Republican Congressman Mike Collins, another candidate running for U.S. Senate.

In the advertisement, Ossoff’s likeness explains, with robotic sarcasm, why he voted against reopening the government.

While there are still signs the video isn’t real, it could be the first in a new wave of campaign tactics.

Data from a Northwestern University research project tracked hundreds of politically motivated deepfakes and AI imagery circulating online. Since the 2024 presidential election, AI videos have increasingly joined static images as the technology to make them has improved.

Many of those posts feature President Donald Trump, including an image of the president riding a cat that racked up 28 million views. The image circulated after a 2024 presidential debate.

Several voters who spoke with Channel 2 Action News said they see AI everywhere, and though they can spot the signs, they worry about relatives who are less familiar with the technology.

One shared the concerns he had for his mother.

“She’s seen Facebook posts and she’ll fall for it for one second,” Ashiq Islam said. “But I’ve taught her to understand that not all of this is real, and this is what is deepfake or is a fake AI-generated video.”

“I imagine that to some people in the older generations who are huge voting blocs, especially in these types of local elections, it would seem real,” said Jonathan Feldman. “I think there should be some responsibility taken by these campaigns that are using these technologies to at least make it clear by watermarking or by disclaimer in comments.”

Many echoed that they saw a need for limits or regulations to make sure the technology can’t swing a future election.

“We can’t be using old regulation on technology that is new and evolving,” said Shreya Iyer. “As we’re using it and putting it into our daily media, it really does have the ability to sway people in a way that might not really be true or authentic.”

In the meantime, there are easy ways to protect yourself.

Magerko said people should be critical of the media they consume. AI is most convincing in quick bursts or when viewers can’t give it their full attention. Key signs of AI-generated video include unnatural movement, strange bobble-head-like motion while speaking, and audio that sounds more rigid than natural human speech.
