Image created using DALL·E 3 with this prompt: Read the text below and create an image that reflects the story. Aspect ratio 16×9.

A coalition of 20 tech companies, including Microsoft, OpenAI, Google, Meta, Amazon, and Adobe, signed an agreement to combat election-related deepfakes ahead of the 2024 elections. The accord, known as the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” aims to prevent and address AI-generated content that could deceive voters, such as deceptive audio, video, and images of political candidates, election officials, and other key stakeholders. The signatories have committed to developing and sharing tools to detect and address the online distribution of deepfakes, and to fostering public awareness, media literacy, and all-of-society resilience.

The agreement is voluntary and includes no binding enforcement mechanism, which has raised concerns about its effectiveness. The companies have agreed to work together to assess the risks posed by their AI models, detect the distribution of deepfakes on their platforms, and address such content appropriately. They also plan to be transparent with users about how they handle deceptive AI content.

Notably, the accord does not commit the signatories to banning or removing deepfakes; instead, it outlines methods to detect and label them. The companies have faced pressure to do more, especially in the U.S., where legislation regulating AI in politics is still lacking. The FCC has ruled that AI-generated voices in robocalls are illegal, but that ruling does not cover audio deepfakes on social media or in campaign advertisements.

Despite the voluntary nature of the agreement, the companies involved have expressed a commitment to protecting the integrity of elections and ensuring that their tools are not weaponized.

In practice, this may not matter because, as we know all too well, people believe what they want to believe. Voluntary policing of deepfakes is a good place to start, but facts usually don’t win arguments. Even when they do, the essence of the deepfake problem lies in the complex question fallacy (aka a loaded question): like “How often do you beat your wife?”, a deepfake implants a premise that no correction can fully erase. Damage done.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media, and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN, and writes a popular daily business blog. He's a bestselling author and the creator of the popular free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
