Microsoft, Google, OpenAI and Others Agree to Combat AI Deepfakes in 2024 Elections

A coalition of 20 tech companies has agreed to combat AI deepfakes in the critical 2024 elections across more than 40 countries. The pact, titled “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” includes industry giants like OpenAI, Google, Meta, Amazon, Adobe, and X, among others. The goal is to prevent and counter AI-generated content that could influence voters. However, concerns have been raised about the agreement’s vague language and lack of binding enforcement.

The signatories, a mix of companies that create and distribute AI models and the social platforms where deepfakes are most likely to spread, have committed to eight key actions:

1. Developing and implementing technology to mitigate risks related to deceptive AI election content.
2. Assessing models to understand the risks they may present regarding deceptive AI election content.
3. Seeking to detect the distribution of such content on their platforms.
4. Seeking to address such content appropriately when it is detected on their platforms.
5. Fostering cross-industry resilience to deceptive AI election content.
6. Providing transparency to the public about how each company addresses such content.
7. Continuing to engage with a diverse set of global civil society organizations and academics.
8. Supporting efforts to foster public awareness, media literacy, and all-of-society resilience.

The agreement covers AI-generated audio, video, and images and targets content that deceptively alters the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provides false information to voters about the election process.

The signatories have committed to working together to create and share tools to detect and address the online distribution of deepfakes. They also plan to launch educational campaigns and provide transparency to users.

While the agreement represents a significant step in combating deceptive AI use in elections, some observers have expressed scepticism about its effectiveness, citing its voluntary nature and the lack of binding enforcement mechanisms. Nevertheless, the signatories are optimistic that their collective efforts will help safeguard elections from deceptive AI use.

The pact’s signatories have already taken steps to address the issue. OpenAI, for example, plans to counter election-related misinformation worldwide by marking images generated with its DALL-E 3 tool with a digital watermark that makes their origin clear. The company also intends to prevent its chatbots from impersonating candidates.
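
For readers curious what checking such provenance marks might look like in practice, here is a minimal sketch, not OpenAI's actual implementation: it simply scans a downloaded image's raw bytes for strings associated with C2PA "Content Credentials", the provenance standard that watermarking efforts of this kind typically build on. The file name and marker strings below are assumptions for demonstration; a real check should use a dedicated C2PA verifier.

```python
# Illustrative sketch only: look for hints of embedded provenance metadata
# in a downloaded image by scanning its raw bytes for assumed marker strings.
from pathlib import Path

# Assumed indicator strings; C2PA payloads are typically carried in JUMBF boxes.
PROVENANCE_MARKERS = [b"c2pa", b"contentcredentials", b"jumbf"]

def has_provenance_hint(image_path: str) -> bool:
    """Return True if the raw file bytes contain a known provenance marker."""
    data = Path(image_path).read_bytes().lower()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    path = "dalle3_sample.png"  # hypothetical local file
    if not Path(path).exists():
        print(f"{path} not found; point this at a downloaded image to test")
    elif has_provenance_hint(path):
        print(f"{path}: provenance marker found (confirm with a full C2PA verifier)")
    else:
        print(f"{path}: no marker found; metadata may have been stripped on upload")
```

Note that byte-scanning like this can only hint at provenance; social platforms often strip metadata on upload, which is one reason the accord also emphasizes detection and labeling on the platform side.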

While the accord represents a promising start, its ultimate effectiveness in combating deceptive AI use in elections remains to be seen. As AI technology continues to evolve, it will be crucial for companies, governments, and civil society to work together to ensure that AI is used responsibly and ethically.

See Also: Nikon, Sony and Canon Fight AI-Generated Fake Images With New Camera Technology

Onsa Mustafa

Onsa is a Software Engineer and a tech blogger who focuses on providing the latest information regarding the innovations happening in the IT world. She likes reading, photography, travelling and exploring nature.
