NO FAKES Act: Safeguarding Against AI Deepfakes or Endangering Internet Freedom?
Thursday, Jun 26, 2025

Critics are concerned that revisions to the NO FAKES Act have turned a focus on AI deepfake prevention into a tool for widespread censorship.
What started as a seemingly sensible approach to curbing AI-generated deepfakes has, according to digital rights advocates, become significantly more alarming. The well-publicized Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, originally designed to prevent unauthorized digital replicas of individuals, could now radically reshape the internet landscape.
The expansion of the bill has sent shockwaves throughout the tech industry. What was initially meant to protect public figures from deceptive videos is now viewed as possibly establishing a widespread censorship framework.
At its inception, the idea was clear-cut: safeguard against AI systems producing fake videos of real individuals without their consent. We've all witnessed those unsettling deepfakes circulating online.
Rather than enacting focused, specific measures, lawmakers have opted for what the Electronic Frontier Foundation has termed a "federalised image-licensing system," which extends far beyond practical protections.
"The revised bill intensifies this initial flawed approach," the EFF points out, "by mandating an entirely new censorship infrastructure encompassing not just images but also the tools and services used to create them."
The most alarming aspect of the NO FAKES Act is its requirement that nearly all internet platforms deploy systems capable not only of removing content after a takedown notice but also of preventing its re-upload. In effect, this mandates content filters, which have historically proven unreliable across a range of contexts.
The AI sector has particular cause for concern because the NO FAKES Act targets the tools themselves. The updated legislation wouldn't just target malicious content; it could potentially shut down entire development platforms and software tools capable of creating unauthorized images.
This strategy resembles trying to ban word processors because someone might use one to draft defamatory content. Although the bill includes constraints (e.g., tools must be "primarily designed" for producing unauthorized replicas or have limited other commercial applications), these nuances are notoriously subjective.
Small UK startups moving into AI image generation could face costly legal challenges based on tenuous claims before they have the chance to grow. Meanwhile, tech giants with legal teams are better positioned to withstand such challenges, potentially solidifying their dominance.
Anyone familiar with YouTube's ContentID system or similar copyright filtering tools knows how often they mistakenly identify and flag legitimate content, such as musicians performing original songs or creators using material under fair use provisions.
The NO FAKES Act would effectively require similar filtering systems across the internet. While it provides exceptions for satire, parody, and commentary, applying these distinctions through algorithms proves nearly impossible.
"These systems routinely flag content that is similar but not identical," the EFF mentions, "like two different individuals performing the same public domain music piece."
Implementing such filtering systems could prove prohibitively costly for smaller platforms that lack Google-scale resources. The likely outcome? Most would simply over-censor to mitigate legal risk.
Interestingly, one might expect the tech giants to oppose such widespread regulation. Yet many have remained surprisingly silent. Some industry observers believe this is no accident—established giants can more easily absorb compliance costs that would crush smaller competitors.
"It's likely no coincidence that some of these major corporations are fine with this revamped version of NO FAKES," the EFF observes.
This is a recurring theme in tech regulatory history: regulations that seem intended to rein in Big Tech often end up entrenching its dominance by erecting barriers too steep for newcomers.
Concealed within the legislation is yet another disconcerting provision, one that could unmask anonymous internet users based merely on allegations. The bill authorizes anyone to secure a subpoena from a court clerk, without judicial oversight or supporting evidence, forcing services to disclose identifying information about users accused of creating unauthorized replicas.
Past experience demonstrates that such processes are prone to misuse. Legitimate critics could be exposed and subjected to harassment if their critiques incorporate screenshots or quotes from those attempting to silence them.
This vulnerability could severely impact legitimate criticism and whistleblowing activities. Consider exposing corporate malfeasance only to have your identity revealed through such subpoena processes.
The push for additional regulation seems peculiar given that Congress recently passed the Take It Down Act, which specifically addresses non-consensual intimate images. That legislation raised privacy concerns of its own, particularly around the monitoring of encrypted communications.
Rather than evaluate the effectiveness of existing policies, lawmakers appear set on advancing broader restrictions that could shape internet governance for years ahead.
The coming weeks are crucial as the NO FAKES Act proceeds through legislative channels. For anyone invested in internet freedom, innovation, and balanced solutions to emerging technological challenges, this warrants close attention indeed.