
The race to regulate deepfakes

With elections right around the corner, we should expect more deepfakes in our future. The question is: what should we do about it?

Introduction

With important elections being held in over 50 countries this year—including the US Presidential election in November—governments are trying to step up regulation of deepfakes to combat extremism, as well as misinformation and disinformation campaigns.
Whether it’s a viral audio clip of President Joe Biden urging people not to vote in the New Hampshire primaries or a video of British Labour Party leader Keir Starmer peddling an investment scheme, deepfakes allow anyone to impersonate anyone else with astonishing realism. And they’re easy to create and deploy—often within minutes.
The lack of quick, aggressive action by the US and EU keeps me up at night. We are relying primarily on the self-regulation of large content platforms like Meta, YouTube, TikTok, and X, which have come up with various labeling methods for AI-manipulated content, but detection at scale remains infeasible and enforcement is even more elusive.
Let’s take a look at proposed regulation in the US and the EU, and how far away we are from effectively combating deepfakes.

The American Landscape

Currently, there are two major initiatives in the United States:
A bill introduced in Congress on March 21, the “Protecting Consumers from Deceptive AI Act,” directs the National Institute of Standards and Technology (NIST) to develop standards for identifying and labeling AI-generated content, and requires generative AI developers and online content platforms to provide disclosures on AI-generated content.
Separately, Biden’s recent AI Executive Order tasks the Department of Commerce with developing guidance for content authentication and watermarking, so that AI-generated content is clearly labeled when Federal agencies communicate with the American public.
While these are good first steps, it’s unlikely that any meaningful implementation or enforcement will arrive in time for one of the most controversial Presidential elections of our time.

A More Proactive EU

The EU has always been quicker to take action, but its regulation of deepfakes doesn’t look very effective so far. The EU AI Act just passed, and deepfakes evaded the “high-risk” category that would have subjected them to serious scrutiny. Still, because deepfakes are classified as “limited risk” AI systems, the AI Act requires basic labeling of AI-manipulated content.
It’s unclear, however, how far the EU AI Act goes in requiring developers and content platforms to monitor and prevent malicious uses of deepfakes. Voluntary enforcement by the larger content platforms, and self-regulation in deciding how far to take labeling practices, are really the only protections the public can rely on.

The Limitations of Labeling and Embedding Metadata

At the recent Munich Security Conference, key content platforms and companies responded to mounting governmental pressure by agreeing to try to detect and label deceptive AI content, including deepfakes that target voters. The list includes Meta, Amazon, Google, Adobe, IBM, Microsoft, OpenAI, TikTok, and X, among many others, all of which committed to “swift and proportionate responses” when deceptive content starts to spread.
Meta, perhaps the largest content platform susceptible to misinformation and disinformation campaigns, said that it will require toolmakers to incorporate encrypted metadata into AI-generated content, following the specifications of an industry standard such as Content Credentials.
Companies like Adobe and Microsoft are already members of the industry group Coalition for Content Provenance and Authenticity (C2PA), and have adopted Content Credentials that embed metadata indicating who made the image and what program was used to create it. People will be able to click on the symbol for Content Credentials to look at that metadata themselves.
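To make that concrete, here is a minimal conceptual sketch of what a Content Credential amounts to: a manifest describing who made an asset and which tool produced it, cryptographically signed so that tampering is detectable. This is an illustration of the idea only, built with Ed25519 signatures from Python’s cryptography library; the real C2PA format, field names, and trust model are more involved, and the manifest fields below are hypothetical.

```python
# Conceptual sketch of signed provenance metadata, in the spirit of
# C2PA Content Credentials. NOT the real C2PA format or toolchain.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical manifest: who made the asset and which tool created it.
manifest = json.dumps({
    "generator": "ExampleImageTool 1.0",  # assumed tool name
    "creator": "alice@example.com",       # assumed creator identity
    "actions": ["created", "ai_generated"],
}).encode()

# The tool signs the manifest with its private key at creation time...
private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(manifest)

# ...and a viewer later verifies it against the tool's public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, manifest)  # raises on any tampering
    print("Credential intact:", json.loads(manifest))
except InvalidSignature:
    print("Credential has been altered.")
```

The signature is what lets a viewer trust the “who” and “how” claims. But nothing forces that metadata to stay attached to the content, which is where the approach starts to fall apart.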
Nonetheless, significant problems remain with this approach:
Embedding metadata or watermarks still puts much of the burden on the public. People generally do not have the habit of checking the sources of what they see online.
Moreover, as this article points out, bad actors who create deepfakes to wreak havoc will use less established, sometimes open-source, tools, making it hard to trace deepfakes back to the tool or the creator. They can also leverage tools that make it easy to disable the addition of metadata or watermarks. Meanwhile, Meta is still in the relatively early stages of developing effective AI classifier models that can detect AI-generated content lacking proper labels.
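To see how fragile embedded metadata is, here is a minimal sketch using Pillow. It assumes a hypothetical labeled_ai_image.jpg whose provenance label rides in EXIF-style metadata; simply re-encoding the file, which many upload pipelines and every screenshot effectively do, silently drops it. Content Credentials bind metadata more robustly than bare EXIF, but the same class of laundering applies.

```python
# Minimal sketch: re-encoding an image silently drops embedded metadata.
# "labeled_ai_image.jpg" is a hypothetical file carrying an EXIF-style
# provenance label.
from PIL import Image

im = Image.open("labeled_ai_image.jpg")
print("EXIF present:", "exif" in im.info)  # True if a label is embedded

# Pillow does not copy EXIF on save unless explicitly asked to,
# so a plain re-save produces a clean-looking file.
im.save("stripped.jpg", quality=95)
print("after re-save:", "exif" in Image.open("stripped.jpg").info)  # False
```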
In the meantime, Meta, YouTube, and the rest rely largely on an honor system among individual users and creators. Under threat of content removal and loss of ad revenue, users and creators are expected to add their own labels when they upload realistic but manipulated AI content, especially when it touches sensitive political topics. It should also be noted that these companies are profit-driven advertising platforms; aggressively enforcing content removal is not even in their best interest.

Final Thoughts

Both the US and EU governments are planning to conduct pre-election wargames to mitigate the dangers of deepfake technology, and they suggest that the large content platforms follow suit by setting up rapid-reaction mechanisms and conducting simulation exercises.
That being said, voluntary enforcement and self-regulation by content platforms, toolmakers, and users only go so far. In the race between deepfakes and government regulation, we’re already far behind. And with elections around the corner, that should be terrifying for anyone who cares about voters making informed choices.
