Shelly Palmer

Tech Giants Unite to Fight Election Deepfakes: Groundbreaking Pact to Combat AI-Generated Deception!

Image created using DALL·E 3 with this prompt: Read the text below and create an image that reflects the story. Aspect ratio 16×9.


A coalition of 20 tech companies – including Microsoft, OpenAI, Google, Meta, Amazon, and Adobe – signed an agreement to combat election-related deepfakes ahead of the 2024 elections. The accord, known as the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” aims to prevent and address AI-generated content that could deceive voters, such as faked audio, video, and images of political candidates, election officials, and other key stakeholders. The signatories have committed to developing and sharing tools to detect and address the online distribution of deepfakes, and to fostering public awareness, media literacy, and all-of-society resilience.

The agreement is voluntary and does not include binding enforcement, which has led to concerns about its effectiveness. The companies have agreed to work together to assess the risks of their AI models, detect the distribution of deepfakes on their platforms, and appropriately address such content. They also plan to provide transparency to users about how they address deceptive AI content.

Notably, the accord does not commit the signatories to banning or removing deepfakes; instead, it outlines methods for detecting and labeling them. The companies have faced pressure to do more, especially in the U.S., where legislation regulating AI in politics is still lacking. The FCC has ruled that AI-generated voices in robocalls are illegal, but that ruling does not cover audio deepfakes on social media or in campaign advertisements.

Despite the voluntary nature of the agreement, the companies involved have expressed a commitment to protecting the integrity of elections and to ensuring that their tools are not weaponized.

In practice this may not matter because – as we know all too well – people believe what they want to believe. Voluntary policing of deepfakes is a good place to start, but facts usually don’t win arguments. Even when they do, the essence of the deepfakes problem lies in the complex question fallacy (aka a loaded question): “How often do you beat your wife?” Damage done.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.