TAKE IT DOWN Act: What Tech & AI Firms Must Know Now

The TAKE IT DOWN Act (S.146), signed into U.S. federal law in May 2025, marks a significant evolution in digital content regulation. It specifically addresses the rise of non-consensual intimate imagery (NCII), child sexual abuse material (CSAM), and AI-generated deepfakes, particularly those involving minors. This legislation, passed with bipartisan support, places responsibility squarely on digital platforms and AI service providers to act swiftly and decisively.

Under the Act, covered platforms must remove reported intimate visual depictions, including content involving minors or content created or shared without consent, within 48 hours of receiving a valid takedown request. This creates new compliance obligations for AI developers, hosting platforms, and content-sharing networks.

Key Provisions of the Act 

Who is Covered? 

The Act applies to “covered platforms,” defined broadly to include websites, apps, and online services that host user-generated content. This includes image- and video-sharing platforms, messaging services, cloud storage tools, and generative AI tools that produce visual content.

What Must Be Removed? 

Platforms must take down intimate visual depictions (photos or videos) at the request of the depicted individual if any of the following applies (a simplified rule sketch follows this list):

  • The individual did not consent to its distribution. 
  • The content was altered, deepfaked, or generated using AI. 
  • The subject is a minor, regardless of the method of content creation. 
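
To make these trigger conditions concrete, here is a minimal sketch of how an intake system might evaluate a request. The record fields (`subject_consented`, `is_synthetic`, `subject_is_minor`) are illustrative simplifications of the statutory language, not the Act's own definitions, and a real system would rely on verified evidence rather than self-reported flags.

```python
from dataclasses import dataclass


@dataclass
class TakedownRequest:
    """Illustrative fields only; not the statute's definitions."""
    subject_consented: bool  # did the depicted person consent to distribution?
    is_synthetic: bool       # altered, deepfaked, or AI-generated
    subject_is_minor: bool   # depicted person is a minor


def removal_required(req: TakedownRequest) -> bool:
    # Any one of the three conditions above triggers the removal duty.
    return (
        not req.subject_consented
        or req.is_synthetic
        or req.subject_is_minor
    )
```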

Takedown Timeline 

Platforms are legally obligated to comply with verified takedown requests within 48 hours of submission. 
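
As an illustration of that obligation, the following sketch tracks the 48-hour window for each request. The helper names are hypothetical; a production system would also handle request validation, retries, and escalation before the deadline.

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # statutory window after a valid request


def removal_deadline(received_at: datetime) -> datetime:
    """When the reported content must be removed by."""
    return received_at + REMOVAL_WINDOW


def is_overdue(received_at: datetime, now: datetime | None = None) -> bool:
    """True if the removal window has already elapsed."""
    now = now or datetime.now(timezone.utc)
    return now > removal_deadline(received_at)
```

For example, a request received at 09:00 UTC on a Monday must be actioned by 09:00 UTC on Wednesday, regardless of weekends or staffing.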

Enforcement and Penalties 

 Failure to comply with the Act can result in: 

  • Civil penalties and regulatory fines. 
  • Investigations and enforcement actions by the Federal Trade Commission (FTC), which treats violations as unfair or deceptive practices, or by the Department of Justice (DOJ) under the Act's criminal provisions. 
  • Platform restrictions or bans in severe or repeated violation cases. 
  • Public accountability reporting obligations. 

What This Means for AI Developers and Tech Firms 

The TAKE IT DOWN Act introduces direct implications for developers and operators of generative AI tools. Many of these platforms now enable users to create images and videos that closely resemble real people. These features, while innovative, pose legal and ethical risks. 

To comply, companies must adopt proactive safeguards, transparency, and rapid response capabilities. 

Compliance Checklist 

  1. Build AI with Safety by Design

 Ensure all AI models that generate visual content include content filtering, consent controls, and output tracking features. 

  2. Age and Identity Verification

 Integrate age detection and identity validation systems, especially on platforms that allow uploads of media featuring identifiable people. 

  3. Transparent User Reporting Mechanisms

 Create a public-facing, secure portal for individuals to report and request removal of content that violates the Act. 

  4. Maintain Audit Trails

 Log all uploads, reported content, and moderation activity to demonstrate compliance and assist with regulatory audits (a minimal logging sketch follows this checklist). 

  5. Policy Updates and Team Training

 Regularly update content policies and train internal teams on compliance, user protection, and content review protocols. 
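
For item 4 in particular, an audit trail is easiest to defend if it is tamper-evident. Below is a minimal sketch, assuming a simple in-memory store: each entry embeds the hash of the previous entry, so any after-the-fact edit breaks the chain. The `AuditLog` class and action names are illustrative; a real deployment would persist entries durably (for example, to append-only storage).

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    """Append-only moderation log with hash chaining (illustrative)."""

    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, action: str, content_id: str, detail: str = "") -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,          # e.g. "report_received", "content_removed"
            "content_id": content_id,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)
        return entry
```

A removal workflow would then call, say, `log.record("report_received", "img_123")` when a request arrives and `log.record("content_removed", "img_123")` once the content comes down, giving auditors a verifiable timeline against the 48-hour requirement.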

How Violations Are Detected 

Violations may be identified through: 

  • User reports submitted through portals such as NCMEC’s CyberTipline 
  • Automated hash-matching systems and image detection tools (a simplified matching sketch follows this list) 
  • Platform-internal content monitoring technologies 
  • Reports from NGOs, watchdog organizations, and whistleblowers 
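
To illustrate the hash-matching approach, the sketch below checks an upload's SHA-256 digest against a set of known hashes. Exact cryptographic matching only catches byte-identical copies; production systems use perceptual hashes such as PhotoDNA or PDQ, which also match re-encoded or resized variants. `KNOWN_HASHES` here is a placeholder for an industry-maintained hash list.

```python
import hashlib
from pathlib import Path

# Placeholder for hashes of previously confirmed violating files.
KNOWN_HASHES: set[str] = set()


def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large uploads don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_known_content(path: Path) -> bool:
    # Exact match only: a single changed byte defeats this check,
    # which is why perceptual hashing is used in practice.
    return sha256_of(path) in KNOWN_HASHES
```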

Public Guidance 

Do: 

  • Report any explicit or non-consensual content through official channels. 
  • Confirm consent before uploading or sharing content featuring individuals. 
  • Promote awareness of digital safety practices, especially among young people. 

Don’t: 

  • Share or create altered or AI-generated intimate content. 
  • Assume that automation or anonymity shields you from accountability. 
  • Ignore takedown notices or fail to respond within the required timeframe. 

FTF’s Perspective 

The TAKE IT DOWN Act is more than a regulatory measure. It is a directive to prioritize ethical innovation in the AI and tech ecosystem. Generative AI, for all its potential, also brings unprecedented risks, particularly when it intersects with identity, privacy, and public safety. 

Tech companies must now see compliance not as a burden, but as a cornerstone of user trust and operational resilience. The Act reinforces that those who build and enable powerful content creation tools also bear responsibility for how they are used. 

As we enter a new era of digital regulation, one thing is clear: responsibility is no longer optional. AI developers, platform owners, and digital content intermediaries must take tangible steps to protect individuals and communities from digital exploitation.

For further information, please see the original Act on the US Congress website: 

https://www.congress.gov/bill/119th-congress/senate-bill/146/text