Lawsuit against Meta: the company is accused of using pirated pornographic films to train AI
On 25 July 2025, Strike 3 Holdings and Counterlife Media filed a lawsuit against Meta in a California federal court, claiming that since 2018 the company has downloaded and distributed at least 2,396 of their copyrighted pornographic videos via BitTorrent to train its AI models, including the Movie Gen video generator. The illegally obtained content is estimated to total at least 81.7 terabytes.
The lawsuit alleges direct harm to Strike 3's brands (e.g. Vixen, Tushy, Deeper), including the risk that Meta could launch an AI-powered adult-content generator that reproduces their style for free, undermining their competitiveness.
The plaintiffs identified 47 IP addresses belonging to or associated with Meta, including one employee's address, and claim that Meta tried to conceal the infringement by using VPNs and even employees' home IP addresses. An analysis commissioned by the plaintiffs found that, after six layers of VPN tunnelling, the traffic "landed" in a pool of IP addresses owned by Meta.
The plaintiffs seek damages of up to $359 million in total, an injunction barring further access to their content, and the removal of any content obtained in this way from Meta's AI models.
What it means.
Meta may have used pornographic material to train its AI models, but its main goal was probably to download large volumes of less popular data quickly. The BitTorrent protocol is built around reciprocity: while downloading a file, a user's client also uploads (seeds) the pieces it has already received to other peers. If many people download a file but refuse to seed it back, the file soon becomes unavailable, because there is no one left to download it from. To encourage users to keep seeding, many torrent trackers keep per-account statistics and block people who only download and give nothing back. Meta likely had trouble downloading books, films, audio, and other unpopular files quickly, so the company may have downloaded and seeded popular pornographic films to quickly earn a good share ratio.
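As a rough illustration of the ratio mechanics described above, here is a minimal sketch of how a private tracker might enforce a minimum share ratio. The data structure, threshold, and function names are hypothetical and only meant to show why heavy downloaders have an incentive to seed popular files.

```python
# Hypothetical sketch of share-ratio enforcement on a private BitTorrent tracker.
# The 0.5 threshold and the field names are illustrative, not from any real tracker.

from dataclasses import dataclass

@dataclass
class AccountStats:
    uploaded_bytes: int    # total data the account has seeded to other peers
    downloaded_bytes: int  # total data the account has downloaded

def share_ratio(stats: AccountStats) -> float:
    """Uploaded divided by downloaded; higher means the account gives more back."""
    if stats.downloaded_bytes == 0:
        return float("inf")
    return stats.uploaded_bytes / stats.downloaded_bytes

def is_allowed_to_download(stats: AccountStats, min_ratio: float = 0.5) -> bool:
    """Trackers with ratio rules typically block accounts that fall below a minimum ratio."""
    return share_ratio(stats) >= min_ratio

# Example: an account that only downloads gets blocked; seeding popular,
# frequently requested files is the quickest way to push the ratio back up.
leecher = AccountStats(uploaded_bytes=0, downloaded_bytes=10 * 2**40)          # 10 TiB down, nothing up
seeder = AccountStats(uploaded_bytes=8 * 2**40, downloaded_bytes=10 * 2**40)   # 8 TiB up, 10 TiB down
print(is_allowed_to_download(leecher))  # False
print(is_allowed_to_download(seeder))   # True
```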
Even though Meta's AI is unlikely to generate pornography, this lawsuit sheds light on a bigger issue. A corporation worth hundreds of billions of dollars, instead of licensing content to train its AI models, simply turns to pirate sites. Large rights holders have the tools to track such violations and can sue over them, while most smaller rights holders are left with nothing. If the US passes a law that legalises the use of copyrighted works for AI training without permission, such corporations will be able to do whatever they want with that content.
This case echoes other claims that Meta used pirated books (e.g. from LibGen) to train its systems, with court documents revealing attempts to conceal data sources that reached as high as CEO Mark Zuckerberg.
Source: arstechnica.com