The “TAKE IT DOWN Act”[1] and the “DEFIANCE Act”[2] are harbingers of a waning Section 230 and the dawning of AI tort liability. After receiving a friendly nod from President Trump in his 2025 Joint Session Address,[3] the “TAKE IT DOWN Act” was signed into law on May 19, 2025.[4] The act criminalizes the publication of AI deepfakes (termed “digital forgeries”) that depict intimate visual content of a discernible person, generally of a sexually explicit nature. It empowers victims to notify hosting platforms of the deepfakes and demand the removal of the offending content; failure to remove that content may result in FTC enforcement. The “DEFIANCE Act,” reintroduced in 2025 after passing the Senate in 2024, would go one step further by creating a private right of action for the production, possession, solicitation, or disclosure of nonconsensual AI deepfakes that depict intimate visual content.
Both acts closely track modern applications of traditional common law privacy torts in the digital age. For example, false light is a cause of action for the publication of materials that place one in a highly offensive and misleading context.[5] AI deepfakes have the capacity to mislead and deceive an audience into believing their fabricated depictions, and audiences may attribute the words, actions, and intimate acts depicted to the purported subject of the deepfake. AI deepfakes may also form the basis of claims for negligent or intentional infliction of emotional distress, or common law fraud. Scammers may use AI deepfake voice imitations to deceive their victims, and internet trolls could torment their victims with deepfake videos depicting harm to a loved one. With the TAKE IT DOWN Act and the DEFIANCE Act, Congress is taking steps to shape the AI technology landscape and setting the stage for a future regulatory regime.
In the private sector, the TAKE IT DOWN Act and DEFIANCE Act challenge the broad immunity presently enjoyed by content hosting platforms. Section 230 of the Communications Decency Act[6] is often described as “the twenty-six words that created the internet”[7] due to the broad civil immunity it provides. Under Section 230, hosting platforms (termed “interactive computer services”) like Facebook, X, and other social media websites are not treated as the publisher or speaker of any user-generated content that they host, and they have enjoyed broad immunity for good faith content moderation. This immunity scheme departs from traditional publisher and editor liability for defamatory publications.[8]
In contrast to newspapers and traditional media, Section 230’s immunity has afforded content hosting platforms the opportunity to grow expansively without costly liability insurance or cumbersome legal review of user-generated content.
But the golden age of publisher immunity for content hosting platforms may be coming to an end. In February 2025, members of the Senate Judiciary Committee announced a plan to introduce a bipartisan bill that would sunset Section 230 in two years.[9] If such a bill were signed into law, the social media landscape would be fundamentally redefined. Not only would content hosting platforms need to be responsive to AI deepfakes that depict intimate visual content, they would also need to filter for all defamatory, false light, and otherwise tortious content. In such a scenario, litigation and market pressures would likely push these platforms toward far more restrictive content moderation policies.
To mitigate liability, platforms should implement algorithmic detection, algorithmic auditing, and robust legal compliance policies. Content hosting platforms that flag AI-generated content may mitigate the reputational and emotional harms central to defamation and false light claims. Encouraging and rewarding users who accurately report AI-sourced fraud attempts could help crowdsource content moderation while maintaining platform utility and a cohesive identity. Comprehensive legal policies that reconcile a platform’s expansive First Amendment rights[10] and privacy tort limitations[11] with the burgeoning field of AI regulation in the U.S. and E.U. will help platforms avoid emerging liability pitfalls. And algorithmic auditing, in which social media companies test their detection software against known liability-causing content, should be implemented regularly.
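To make the auditing point concrete, below is a minimal sketch, in Python, of what such an audit loop might look like. Every name in it (Sample, detect_ai_content, run_audit, the benchmark labels) is a hypothetical illustration rather than any real platform’s API; the idea is simply to measure how often a detector misses known deepfakes in a labeled test set.

from dataclasses import dataclass

@dataclass
class Sample:
    content_id: str
    is_ai_generated: bool  # ground-truth label in the audit set

def detect_ai_content(sample: Sample) -> bool:
    # Stand-in for a platform's proprietary deepfake classifier.
    # Placeholder heuristic so the sketch runs end to end.
    return sample.content_id.startswith("gen-")

def run_audit(benchmark: list[Sample]) -> dict:
    # Compare detector output to ground truth; the false-negative rate
    # (deepfakes the platform failed to flag) is the liability-relevant metric.
    misses = [s for s in benchmark if s.is_ai_generated and not detect_ai_content(s)]
    false_flags = [s for s in benchmark if not s.is_ai_generated and detect_ai_content(s)]
    positives = sum(s.is_ai_generated for s in benchmark)
    return {
        "false_negative_rate": len(misses) / positives if positives else 0.0,
        "false_positive_count": len(false_flags),
        "missed_ids": [s.content_id for s in misses],  # escalate to human review
    }

benchmark = [
    Sample("gen-001", True),   # known deepfake the heuristic catches
    Sample("vid-002", True),   # known deepfake the heuristic misses
    Sample("vid-003", False),  # authentic content
]
print(run_audit(benchmark))

In a scheme like this, the missed items would feed human-review queues and detector retraining, and a rising false-negative rate would flag growing liability exposure before a plaintiff finds it first.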
Whether the regulation of sexually explicit AI-generated content is the first step in a marathon or merely a one-off, only time will tell. But regulatory appetites are rarely satiated on a first bite. AI-centered liability is on the horizon, and major players in the social media sector must prepare for a new era of tort liability.
[1] Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, Pub. L. No. 119-12, 139 Stat. 55 (2025).
[2] Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2025 (“DEFIANCE Act”), S. 1837, 119th Cong. (2025).
[3] President Donald Trump, Joint Session Address (2025) (“[w]ith Elliston’s help, the Senate just passed the Take It Down Act, and – this is so important . . . and once it passes the House, I look forward to signing that bill into law.”).
[4] Supra note 1.
[5] Restatement (Second) of Torts § 652E (1977).
[6] 47 U.S.C. § 230 (“Section 230”).
[7] Jeff Kosseff, The Twenty-Six Words That Created the Internet (2019).
[8] Restatement (Second) of Torts § 578 (1977). See, e.g., Cianci v. New Times Pub. Co., 639 F.2d 54, 61 (2d Cir. 1980).
[9] Press Release, U.S. Senate Committee on the Judiciary, Durbin Delivers Opening Statement During Senate Judiciary Committee Hearing on Stopping the Exploitation of Children Online (Feb. 19, 2025), https://www.judiciary.senate.gov/press/dem/releases/durbin-delivers-opening-statement-during-senate-judiciary-committee-hearing-on-stopping-the-exploitation-of-children-online (“Durbin will join U.S. Senators Lindsey Graham (R-SC), Sheldon Whitehouse (D-RI), Josh Hawley (R-MO), Amy Klobuchar (D-MN) and Marsha Blackburn (R-TN) to introduce a bill that would sunset Section 230 of the Communications Decency Act in two years.”).
[10] See, e.g., Moody v. NetChoice, LLC, 603 U.S. 707 (2024).
[11] New York Times Co. v. Sullivan, 376 U.S. 254 (1964) (elevating the defamation standard for public officials).