A New Federal Law Criminalizing Deepfakes and Digital Exploitation: The TAKE IT DOWN Act

President Trump signed the TAKE IT DOWN Act (the “Act”) into law on May 19, 2025. The Act (full name: the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act) requires online platforms to remove non-consensual intimate images (“NCII”) upon request or face civil liability. This includes what the Act calls “digital forgeries,” commonly known as deepfakes. The Act also criminalizes the publication of NCII. The Act received substantial bipartisan support and the endorsement of several victims’ rights organizations, though its removal mechanisms have drawn First Amendment criticism, which we discuss below.

Platform Liability: Notice & Removal of NCII

The Act obliges covered platforms to remove NCII in response to a valid request from an identifiable individual or someone authorized to act on their behalf. Covered platforms include websites, online services, and applications that primarily provide a forum for user-generated content. The Act excludes from the “covered platform” definition websites or applications that consist mostly of content pre-selected by the provider, as opposed to its users.  

Within one year of the Act’s passage, covered platforms must establish a process for receiving and handling takedown requests and post a “plain language” explanation of this process. Upon receiving a valid takedown request, platforms must remove the NCII, along with any copies, within 48 hours. 

Platforms that fail to reasonably comply with the removal requirements may face an enforcement action by the Federal Trade Commission (“FTC”) under Section 5 of the FTC Act. Importantly, the Act expands the scope of the FTC’s jurisdiction to include nonprofit organizations. 

Covered platforms that facilitate good-faith removal of material reported as NCII are shielded from liability for removing or disabling content, even if it is ultimately determined that the removed content was lawful. 

Several stakeholders have expressed concern that the Act contains no anti-abuse provisions to deter false takedown requests aimed at suppressing lawful speech on social media platforms. Although each removal request must include a brief statement of the submitter’s “good faith belief” that the published depiction is NCII, critics contend this is not enough. A similar concern arose in the late 1990s during the drafting of the Digital Millennium Copyright Act (“DMCA”), which deals primarily with copyrighted content. Unlike the TAKE IT DOWN Act, however, the DMCA contains an anti-abuse mechanism for challenging takedown requests. Here, critics fear that the Act’s removal obligations could sweep in broader categories of lawful intimate or sexual content. Their concerns are amplified by the reality that covered platforms will likely rely on automated filters to handle removal requests, which may not reliably screen out bad-faith submissions.

Criminal Penalties for Publishing NCII

The Act prohibits the intentional online publication of NCII of both minors and adults, covering authentic images as well as computer- or AI-generated images, the latter of which the Act defines as “digital forgeries.”

The Act outlines criteria for NCII that may fall under this prohibition. 

For NCII with adult subjects, there are different standards for authentic images and digital forgeries. If the image is authentic, the Act applies if i) the publication is intended to cause harm, or does cause harm, to the subject, and ii) the depiction was published without the subject’s consent, or was created or obtained under circumstances in which the adult had a reasonable expectation of privacy.

The drafters likely included the last factor to mitigate the shortcomings of the common law tort of “public disclosure of private facts,” under which a private fact must not be publicly available or generally known, and which has sometimes barred plaintiffs from bringing suit if they had shared the information with anyone else, no matter how small the audience.

If the image is a digital forgery of an adult subject, the Act applies if i) the digital forgery was published without consent, ii) the depicted material was not voluntarily disclosed by the individual, iii) the depiction was not a matter of public concern, and iv) the publication of the digital forgery was intended to cause, or actually caused, harm, including reputational harm.

For NCII with minors, the Act applies if the publication is intended to i) abuse or harass the minor or ii) arouse or gratify the sexual desire of any person. This applies to both authentic images and digital forgeries.

The Act also prohibits threatening to publish NCII of an individual “for the purpose of intimidation, coercion, extortion, or to create mental distress.”

Violators of the Act are subject to criminal liability. For NCII publication involving adult subjects, penalties include fines, imprisonment for up to 2 years, or both. For NCII publication involving minors, penalties include fines, imprisonment for up to 3 years, or both. For threats, the Act applies the same penalties as actual publication, except that threats involving digital forgeries carry slightly shorter maximum sentences: 18 months for adult subjects and 30 months for minor subjects.

This blog post was researched and drafted by our law student summer intern Susanna Khachatryan, under the supervision of attorney Yelena Ambartsumian.

