The House of Representatives has overwhelmingly approved the Take It Down Act by a 409–2 vote, a major step in the fight against nonconsensual deepfake pornography. The legislation targets the publication of explicit AI-generated content depicting real people without their consent, a problem that has grown rapidly alongside advances in generative AI. The bill also compels online platforms to remove flagged material within 48 hours of a valid request, giving victims a more immediate path to relief.
A critical feature of the bill is that it gives victims concrete recourse. It makes knowingly publishing nonconsensual intimate imagery, whether authentic or AI-generated, a federal crime, and it tasks the Federal Trade Commission with enforcing the takedown requirement against platforms that fail to act on removal requests. Lawmakers say these provisions are essential to match the pace of evolving technology, giving victims tools to push back against digital exploitation.
The bill has received rare bipartisan support, alongside backing from President Trump. Advocates emphasize the importance of safeguarding vulnerable groups such as women, children, and public figures from deepfake abuse. Supporters frame the measure as drawing a line to protect human dignity and limit the devastating psychological and social harm caused when victims' likenesses are spread online without their consent.
Despite broad support, two lawmakers opposed the measure, citing concerns over free speech and government overreach. Supporters counter that the act strikes a careful balance between protecting privacy and holding platforms accountable. Having already cleared the Senate, the Take It Down Act now heads to the president's desk. If signed into law, it would mark a turning point in U.S. digital protections and reshape how the nation confronts AI-driven exploitation.