Tech Companies Plan to Combat Use of Fake AI in Elections

by Eric Lendrum

 

With fake images and videos generated by artificial intelligence (AI) threatening to play a role in the 2024 elections and beyond, several tech companies have pledged to use their resources to combat misinformation produced with the technology.

According to Politico, multiple companies plan to cooperate through a so-called “Tech Accord” that lays out key goals and methods for fighting deceptive AI content. The companies intend to expose and debunk “deepfake” images and videos produced by AI through tactics such as watermarking and automatic detection technology.

“We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders,” the draft Accord reads, in part. Participants in the Accord include Google, Microsoft, Facebook, Adobe, TikTok, and OpenAI. The full draft will be presented to international leaders at the Munich Security Conference (MSC) on Friday.

“In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters,” the statement continues. “Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective.”

The drafting of the new Accord follows a report and presentation by the MSC on Monday detailing the rise of AI-generated false content and the resulting concerns about its political impact.

Methods proposed in the plan include “detection technology” and “open standards-based identifiers” for fake content, as well as watermarking based on C2PA and SynthID, initiatives already in place and in use at Microsoft, Google, and other tech companies.

However, the companies noted that even tactics such as “metadata, watermarking, classifiers, or other forms of provenance or detection techniques” will not be enough on their own to eliminate the threat of deceptive AI content, and that they may also need the support of governments through new legislation.

– – –

Eric Lendrum reports for American Greatness.