Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal

Meta just lost a major fight in its ongoing legal battle with a group of authors suing the company for copyright infringement over how it trained its artificial intelligence models. Against the company’s wishes, a court unredacted information alleging that Meta used Library Genesis (LibGen), a notorious so-called shadow library of pirated books that originated in Russia, to help train its generative AI language models.

The case, Kadrey et al. v. Meta Platforms, was one of the earliest copyright lawsuits filed against a tech company over its AI training practices. Its outcome, along with those of dozens of similar cases working their way through courts in the United States, will determine whether technology companies can legally use creative works to train AI moving forward and could either entrench AI’s most powerful players or derail them.

Vince Chhabria, a judge for the United States District Court for the Northern District of California, ordered both Meta and the plaintiffs on Wednesday to file full versions of a batch of documents after calling Meta’s approach to redacting them “preposterous,” adding that, for the most part, “there is not a single thing in those briefs that should be sealed.” Chhabria ruled that Meta was not pushing to redact the materials in order to protect its business interests but instead to “avoid negative publicity.” The documents, originally filed late last year, remained publicly unavailable in unredacted form until now.

In his order, Chhabria referenced an internal quote from a Meta employee, included in the documents, in which they speculated, “If there is media coverage suggesting we have used a dataset we know to be pirated, such as LibGen, this may undermine our negotiating position with regulators on these issues.” Meta declined to comment.

Novelists Richard Kadrey and Christopher Golden, along with comedian Sarah Silverman, first filed the class-action lawsuit against Meta in July 2023, alleging the tech giant trained its language models using their copyrighted work without permission. Meta has argued that using publicly available materials to train AI tools is shielded by the “fair use” doctrine, which holds that using copyrighted works without permission is legal in certain cases. One such case, the company’s lawyers wrote in a November 2023 motion to dismiss the authors’ lawsuit, is “using text to statistically model language and generate original expression.” In this particular lawsuit, Meta has also argued that the plaintiffs’ claims are without merit.

“Meta has treated the so-called ‘public availability’ of shadow datasets as a get-out-of-jail-free card, notwithstanding that internal Meta records show every relevant decision-maker at Meta, up to and including its CEO, Mark Zuckerberg, knew LibGen was ‘a dataset we know to be pirated,’” the plaintiffs allege in their motion. (Originally filed in late 2024, the motion is a request to file a third amended complaint.)

In addition to the plaintiffs’ briefs, another filing was unredacted in response to Chhabria’s order—Meta’s opposition to the motion to file an amended complaint. It argues that the authors’ attempts to add additional claims to the case are an “eleventh-hour gambit based on a false and inflammatory premise” and denies that Meta waited to reveal crucial information in discovery. Instead, Meta argues it first revealed to the plaintiffs that it used a LibGen dataset in July 2024. (Because much of the discovery materials remain confidential, it is difficult for WIRED to confirm that claim.)

Meta’s argument hinges on its claim that the plaintiffs already knew about the LibGen use and shouldn’t be granted additional time to file a third amended claim when they had ample time to do so before discovery ended in December 2024. “Plaintiffs knew of Meta’s downloading and use of LibGen and other alleged ‘shadow libraries’ since at least mid-July 2024,” the tech giant’s lawyers argue.

In November 2023, Chhabria granted Meta’s motion to dismiss some of the lawsuit’s claims, including the claim that Meta’s alleged use of the authors’ work to train AI violated the Digital Millennium Copyright Act, a US law introduced in 1998 to stop people from selling or duplicating copyrighted works on the internet. At the time, the judge agreed with Meta’s stance that the plaintiffs had not provided sufficient evidence to prove that the company had removed what’s known as “copyright management information,” like the author’s name and title of the work.

The unredacted documents argue that the plaintiffs should be allowed to amend their complaint, alleging that the information Meta revealed is evidence that the DMCA claim was warranted. They also say the discovery process has unearthed reasons to add new allegations. “Meta, through a corporate representative who testified on November 20, 2024, has now admitted under oath to uploading (aka ‘seeding’) pirated files containing Plaintiffs’ works on ‘torrent’ sites,” the motion alleges. (Seeding refers to continuing to share a torrented file with other peers after it has finished downloading.)

“This torrenting activity turned Meta itself into a distributor of the very same pirated copyrighted material that it was also downloading for use in its commercially available AI models,” one of the newly unredacted documents claims, alleging that Meta, in other words, had not just used copyrighted material without permission but also disseminated it.

LibGen, an archive of books uploaded to the internet that originated in Russia around 2008, is one of the largest and most controversial “shadow libraries” in the world. In 2015, a New York judge ordered a preliminary injunction against the site, a measure designed in theory to temporarily shut the archive down, but its anonymous administrators simply switched its domain. In September 2024, a different New York judge ordered LibGen to pay $30 million to rights holders for infringing on their copyrights, despite not knowing who actually operates the piracy hub.

Meta’s discovery woes for this case aren’t over, either. In the same order, Chhabria warned the tech giant against any overly sweeping redaction requests in the future: “If Meta again submits an unreasonably broad sealing request, all materials will simply be unsealed,” he wrote.

Source: Wired