‘Neo-Nazi Madness’: Meta’s Top AI Lawyer on Why He Fired the Company

He’s not a famous name in the wider world, but copyright lawyer Mark Lemley is equal parts revered and feared within certain tech circles. TechDirt recently described him as a “LeBron James/Michael Jordan”-level legal thinker. A professor at Stanford, counsel at an IP-focused law firm in the Bay Area, and one of the 10 most-cited legal scholars of all time, Lemley is exactly the kind of person Silicon Valley heavyweights want on their side. Meta, however, has officially lost him.

Earlier this month, Lemley announced he was no longer going to defend the tech giant in Kadrey v. Meta, a lawsuit filed by a group of authors who allege the company violated copyright law by training its AI tools on their books without their permission. The fact that he quit is a big deal. I wondered if it had something to do with how the case was going. Then I checked social media.

Lemley said on LinkedIn and Bluesky that he still believes Meta should win the lawsuit, and that he wasn’t bowing out because of the merits of the case. Instead, he’d “fired” Meta because of what he characterized as the company’s and CEO Mark Zuckerberg’s “descent into toxic masculinity and Neo-Nazi madness.” The move came on the heels of major policy shifts at Meta, including changes to its hateful conduct rules that now allow users to call gay and trans people “mentally ill.”

In a phone conversation, Lemley explained what motivated his decision to quit, and where he sees the broader legal landscape on AI and copyright going, including his suspicion that OpenAI may settle with The New York Times.

This interview has been edited for clarity and length.

Especially right now, it’s apparent that Zuckerberg isn’t the only tech mogul aligning himself with Trump. As you mentioned, Elon Musk comes to mind. But there are a lot of very powerful people in Silicon Valley who are pivoting hard towards MAGA policies. Do you have a list, now, of people you’d say no to representing? How are you approaching this?

I do think Zuckerberg and Musk have been particularly egregious in their behavior. But one of the nice things about being in the position I’m in—having a full-time job teaching rather than practicing law—is that I probably have greater freedom than a lot of people to say I don’t need to take that money. Do I have a list? No, absolutely not.

But if you decide that the thing to do with your brand is to associate it with moves towards fascism, that is a decision that ought to have consequences. One of the challenges that a lot of people have is they don’t feel that they can speak up, because it’s going to cost them personally. So I think it’s all the more important for people who can bear that cost to do so.

What has the reaction been like?

When I made this personal decision, I decided I should say something about it on social media, both because I thought it was important to explain why I was doing it, and because I wanted to make clear that it wasn’t a function of anything in the case, or my views about the case. I had no idea what I was in for in terms of the reaction. It’s been quite remarkable and overwhelmingly positive. There are plenty of trolls who think I’m an idiot and a libtard. But so far, no death threats, which is a welcome improvement from the past.

Have you heard from people who might follow in your footsteps?

This struck such a nerve. There are obviously a lot of people who feel that they don’t have the power to tell Meta or anyone else to go away, or to stand up for the things they believe, and that’s unfortunate.

I know your position remains that Meta is in the right in its AI copyright disputes. But are there any cases in which you think the plaintiffs have a stronger argument?

The strongest arguments are the ones where the output ends up being substantially similar to a particular copyrighted input. Most of the time, when that happens, it happens by accident, or because the company didn’t do a good enough job trying to fix the problems that lead to it. But sometimes it might be unavoidable. It turns out it’s hard to purge all references to Mickey Mouse from your AI dataset, for instance. If people want to generate a Mickey Mouse image, it’s often possible to get something that looks like Mickey Mouse. So there is a set of issues that might create copyright problems, but they’re mostly not the ones currently being litigated.

The one exception is the UMG v. Anthropic case, because at least early on, earlier versions of Anthropic’s model would reproduce song lyrics in its output. That’s a problem. The current status of that case is that Anthropic has put safeguards in place to try to prevent that from happening, and the parties have sort of agreed that, pending the resolution of the case, those safeguards are sufficient, so the plaintiffs are no longer seeking a preliminary injunction.

At the end of the day, the harder question for the AI companies is not whether it’s legal to engage in training. It’s what you do when your AI generates output that is too similar to a particular work.

Do you expect the majority of these cases to go to trial, or do you see settlements on the horizon?

There may well be some settlements. Where I expect to see them is with big players who either have large swaths of content or content that’s particularly valuable. The New York Times might end up with a settlement and a licensing deal, perhaps one where OpenAI pays to use New York Times content.

There’s enough money at stake that we’re probably going to get at least some judgments that set the parameters. The class-action plaintiffs, my sense is, have stars in their eyes. There are lots of class actions, and my guess is that the defendants are going to resist them and hope to win on summary judgment. It’s not obvious that those cases go to trial. The Supreme Court in the Google v. Oracle case nudged fair-use law very strongly in the direction of being resolved on summary judgment, not in front of a jury. I think the AI companies are going to try very hard to get these cases decided on summary judgment.

Why would it be better for them to win on summary judgment versus a jury verdict?

It’s quicker and cheaper than going to trial. And AI companies are worried that they won’t be popular with juries: a lot of people are going to think, Oh, you made a copy of the work, that should be illegal, and won’t dig into the details of the fair-use doctrine.

There have been lots of deals between AI companies and media outlets, content providers, and other rights holders. Most of the time, these deals appear to be more about search than about foundation models, or at least that’s how it’s been described to me. In your opinion, is licensing content for use in AI search engines, where answers are sourced via retrieval-augmented generation (RAG), legally obligatory? Why are companies doing it this way?

If you’re using retrieval-augmented generation on targeted, specific content, your fair-use argument gets more challenging. AI-generated search is much more likely to reproduce text taken directly from one particular source in its output, and that’s much less likely to be a fair use. I mean, it could be, but the risk is that the output is much more likely to compete with the original source material. If, instead of directing people to a New York Times story, I give them an AI answer that uses RAG to take the text straight out of that story, that does seem like a substitution that could harm The New York Times. The legal risk is greater for the AI company.

What do you want people to know about the generative AI copyright fights that they might not already know, or they might have been misinformed about?

The thing I hear most often that’s wrong as a technical matter is the idea that these are just plagiarism machines: that all they’re doing is taking my stuff and grinding it back out in the form of text and responses. I hear a lot of artists say that, and I hear a lot of laypeople say that, and it’s just not right as a technical matter. You can decide whether generative AI is good or bad. You can decide whether it’s lawful or unlawful. But it really is a fundamentally new thing we have not experienced before. The fact that it needs to train on a bunch of content to understand how sentences work, how arguments work, and various facts about the world doesn’t mean it’s just copying and pasting things or creating a collage. It really is generating things that nobody could expect or predict, and it’s giving us a lot of new content. I think that’s important and valuable.

Source: Wired