A Deepfake Video Of Kari Lake Shows A Possible Issue During Election Season


We hear constantly about progress in artificial intelligence, which can drive advances in fields like medicine and science.

At the same time, concerns about bad actors misusing the technology are rising.

A deepfake video of Senate candidate Kari Lake was released to show how lifelike AI videos can be, making the question of how the technology could affect the election even more pressing.

At the start of the video, it says, “Subscribe to the Arizona Agenda for hard-hitting real news and a sneak peek at the scary AI that will be used in the next election, like this video, which is an AI deepfake…”

An AI-generated video that looks real is called a deepfake.

Stan Barnes, president of Copper State Consulting Group, said, “I think the Arizona Agenda, the news organization that made that video public, did everyone a favor.”

“It’s really a brave new world,” Barnes said. The 2024 election will be contentious, and it could be made even harder by longstanding public skepticism of the government, the election process, and the candidates.

The political consultant also said that people are not ready for what is coming.

“I’m scared for what it means for fair campaigns and smart voters. Being an informed voter will be even harder, because the first question you’ll ask everywhere is, ‘Is this real? Is this fake?’” Barnes said.

Even a trained expert like Subbarao Kambhampati, a professor in Arizona State University’s School of Computing and Augmented Intelligence, has a hard time detecting deepfakes.

“That kind of brings up the question of how can you tell? And interestingly, in the end, you can’t tell. That’s the truth,” Kambhampati said.

He said that inconsistencies in the background can sometimes serve as clues, which is why the backgrounds of many AI-generated videos are blurry, as in Lake’s.

He also said that familiarity with how someone behaves can help distinguish a real video from a fake one. However, Kambhampati said that in a year or two, it might be impossible to tell the difference.

“I tell people that the time has come to stop trusting their eyes and ears. The main reason is that AI can make fake media that can look just like real media,” he said.

Barnes and Kambhampati both agree that a third party will have to verify a video’s authenticity. They said voters and politicians will likely turn to established news outlets such as newspapers, radio, and TV for that verification.
