We Can’t Fix Political Deepfakes

Image created using DALL·E 3 with this prompt: Create an image of a robot on the telephone, making thousands of calls to would-be voters, urging them not to vote in an upcoming election. Aspect ratio: 16×9.


ElevenLabs, an AI startup specializing in voice replication technology, has banned a user account responsible for creating an audio deepfake of President Biden. The deepfake featured a message urging people not to vote in the New Hampshire primary.

Pindrop Security Inc., a voice-fraud detection company, analyzed the deepfake and identified ElevenLabs’ technology as the source. Upon learning of Pindrop’s findings, ElevenLabs suspended the user’s account and began an investigation into the incident.

The deepfake was sophisticated enough to initially convince some listeners, including a New Hampshire voter who recognized Biden’s voice but later realized it was a scam. As you can imagine, the incident has (once again) raised concerns about the potential misuse of AI in politics.

ElevenLabs is the go-to synthetic voiceover and voice cloning site. It recently raised an $80 million round led by Andreessen Horowitz, valuing the company at more than $1.1 billion.

On its safety page, the company states: “A very important rule applies to all uses of voice cloning technology: you cannot clone a voice for abusive purposes such as fraud, discrimination, hate speech or for any form of online abuse without infringing the law.”

This admonition is about as useful as telling a preschooler with a box of crayons not to draw on the furniture. That said, the company has stated its commitment to preventing the misuse of its audio AI tools and has been developing safeguards to curb such misuse.

The problem with this story has nothing to do with technology; people will believe whatever they want to believe. When you offer facts that disprove a story someone is telling you, the most common response is, "Yes, but…" and the belief survives untouched. There are no technological fixes for the willfully ignorant.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.

