There was a time when Mark Zuckerberg didn’t regard mainstream media as the enemy. He even allowed me, a card-carrying legacy media person, into his home. In April 2018, I ventured there to hear his plans to do the right thing. It was part of my years-long embed into Facebook to write a book. For the past two years, Zuckerberg’s company had been roundly criticized for its failure to rein in disinformation and hate speech. Now the young founder had a plan to address this.
Part of the solution, he told me, was more content moderation. He was going to hire many more humans to vet posts, even if it cost Facebook considerable capital. He would also amp up efforts to use artificial intelligence to proactively remove harmful content. “It is no longer enough to give people tools to say what they want and then just let our community flag them and try to respond after the fact,” he told me as we sat in his sunroom. “We need to get in there more and just take a more active role.” He admitted he had been slow to realize how damaging toxic content was on Facebook, but now he was committed to fixing the problem, even though it might take years. “I think we’re doing the right thing,” he told me. “It’s just that we should’ve done it sooner.”
Seven years later, Zuckerberg no longer thinks more moderation is the right thing. In a five-minute Reel, he characterized his actions to support it as a regretful cave-in to government jawboning about Covid and other subjects. He announced a shift away from content moderation—no more proactive takedowns and downranking of misinformation and hate speech—and the end of a fact-checking program that aimed to refute lies circulating on his platforms. Fact checks by trusted sources would be replaced by “community notes,” a crowdsourcing approach where users provide alternate views on the veracity of posts. That technique is the exact thing that he told me in 2018 was “not enough.” While he admits now his changes will allow “more bad stuff,” he says that in 2025 it is worth it for more “free expression” to thrive.
The policy shift was one of several moves that indicated that, whether or not Zuckerberg wanted to do this all along, Meta is positioning itself in sync with the new Trump administration. You’ve heard the litany, which has become a meme in itself. Meta promoted its top lobbyist, former GOP operative Joel Kaplan, to chief global affairs officer; he immediately appeared on Fox News (and only Fox News) to tout the new policies. Zuckerberg also announced that Meta would move employees who write and review content from California to Texas, to “help remove the concern that biased employees are overly censoring content.” He disbanded Meta’s DEI program. (Where is Sheryl Sandberg, who was so proud of Meta’s diversity effort? Sheryl? Sheryl?) And Meta changed some of its terms of service specifically to allow users to degrade LGBTQ people.
Now that it’s been a week since Meta’s turnaround—and my first take at Zuckerberg’s speech—I am particularly haunted by one aspect: He seems to have downranked the basic practice of classic journalism, characterizing it as no better than the nonreported observations from podcasters, influencers, and countless random people on his platforms. This was hinted at in his Reel when he repeatedly used the term “legacy media” as a pejorative: a force that, in his view, urges censorship and stifles free expression. All this time I thought the opposite!
A hint of his revised version of trustworthiness comes from the shift from fact-checkers to community notes. It’s true that the fact-checking process wasn’t working well—in part because Zuckerberg didn’t defend the checkers when ill-intentioned critics charged them with bias. It’s also reasonable to expect community notes to be a useful signal that a post might be fallacious. But the power of refutation fails when participants in the conversation reject the idea that disagreements can be resolved by convincing evidence. That’s a core difference between fact-checking—which Zuckerberg got rid of—and the community notes he’s implementing. The fact-checking worldview assumes that definitive facts, arrived at via research, talking to people, and sometimes even believing your own eyes, can be conclusive. The trick is recognizing authorities who have earned public confidence by pursuing truth. Community notes welcome alternate views—but judging which ones are reliable is all up to you. There’s something to the canard that an antidote to bad speech is more speech. But if verifiable facts can’t successfully refute easily disproven flapdoodle, we’re stuck in a suicidal quicksand of babel.
That’s the world that Donald Trump, Zuckerberg’s new role model, has consciously set about to realize. 60 Minutes reporter Lesley Stahl once asked Trump why he insulted reporters who were just doing their job. “You know why I do it?” he responded. “I do it to discredit you all and demean you all so when you write negative stories about me, no one will believe you.” In 2021, Trump further revealed his intent to benefit from an attack on truth. “If you say it enough and keep saying it, they’ll start to believe you,” he said during a rally. A corollary to that is if social media promotes falsehoods enough, people will believe those as well. Especially if formerly recognized authorities are discredited and demeaned.
That discrediting is exactly what Zuckerberg and his host Joe Rogan engaged in during a three-hour conversation in Rogan’s Austin, Texas, podcast studio. This was Zuckerberg’s only appearance to explain his actions, another sign that he’s not kowtowing to a media establishment that he no longer feels is trustworthy or worth paying attention to. Zuckerberg and Rogan went on at length about how podcasters and influencers were more popular than mainstream reporters, because no one trusts those institutions anymore, and celebrated statistics that indicate that many people get their news from social media these days. (Though it’s still far from the dominant source.)
Look, I am a fan of podcasts, especially the epic multi-hour interviews like Rogan’s. But as a replacement for reported news? That’s a disaster. Reporters make countless phone calls, dig through mountains of documents, and travel all over the globe to try to make sense of the world. Podcasters glean their knowledge from those reports—and maybe just as much from biased grifters, random anecdotes, and paranoid visions. Also, as interviewers, some podcasters are entertaining but not rigorous in posing tough questions. No wonder Zuckerberg chose Rogan as the single place to defend his decisions. Rogan didn’t challenge him on whether his decisions were a sop to the incoming president—he congratulated him.
In the same week that Rogan’s podcast dropped, Peter Thiel—the billionaire VC who led Facebook’s initial funding—published a piece in the Financial Times that resonated with the Zuckerberg/Rogan theory that the media shrouds things in darkness rather than sunlight. Thiel adopted a friend’s terminology to include the media in a sweeping conspiracy called The Distributed Idea-Suppression Complex that limits public discussion. He chortled that the internet, presumably under Trump’s influence, is helping “our liberation from the DISC prison.” (Somehow the Financial Times, which is as legacy as the media gets, escaped this prison to publish Thiel’s piece.)
Thiel’s examples of media suppression were a few cases where mainstream media (though not all of it) exercised perhaps too much caution, like the JFK assassination or, more recently, Covid. But every single day, the thousands of journalists who still have jobs seek truth by gathering facts, whether it’s to locate a mayor’s nighttime peregrinations or to show how the FBI could have acted faster to stop a notorious swatter targeting high schools. Without facts, we cannot determine whether a cabinet nominee is worthy of a position, whether it’s proper to aid an ally, or why fighting fires in Southern California is so difficult. Of course I’m biased, but this seems rather obvious. The practice of journalism is already threatened by a post-internet failure of its traditional business models. What it didn’t see coming was a horrifyingly effective political attack on its very foundation.
And now the guy who runs the biggest social networks in the world—and boasted on Rogan that because he can’t be fired, he will do anything he damn well wants—has cast his lot with those who seek to tumble the distressingly vulnerable house of facts. Yes, it’s tough to control disinformation at scale. But Zuckerberg has accumulated over $200 billion of personal riches by building a platform that is part of the problem. He owes us something better than forcing his 3 billion users to sift out facts from lies, all while waving a checkered flag to the liars.
Zuckerberg knows better—after all, he let me collect facts on his company for three years. Afterwards, he messaged me congratulations. He didn’t agree with all my conclusions and said I didn’t get everything right (no examples), but he noted that he appreciated the time and effort I spent to produce a nuanced account. You know, what journalists do. On the Rogan show, Zuckerberg talked a lot about restoring masculinity to corporate culture. Maybe he should man up and fight for truth.
Time Travel
In our 2018 interview, Zuckerberg was excited about building up the company’s content moderation forces and using AI to proactively take down content like hate speech and misinformation—a policy that he slammed the brakes on earlier this year. At the time, the company faced a crisis over its hosting of fake news during the 2016 election.
Before we wrap up I ask him—has this crisis made Facebook different?
His answer is both no—the mission is the same—but, in a way, yes. “I really think the biggest shift is around being more proactive, around finding and preventing abuse. The big learning is that we need to take a broader view of our responsibility … It is no longer enough to give people tools to say what they want and then just let our community flag them and try to respond after the fact. We need to take a more active role in making sure that the tools aren’t misused.”
Zuckerberg recognizes the difficulty of remaking his systems to proactively catch harmful content. “I think this is about a three-year transition to really build up the teams, because you can’t just hire thirty thousand people overnight to go do something,” he says. “You have to make sure that they’re executing well and bring in the leadership and train them. And building up AI tools—that’s not something that you could just snap your fingers on either … We’ll never be fully done. But I do really think that this represents a pretty major shift in the overall business model and operating model of the company.”
Ask Me One Thing
Petros asks, “Is AI model collapse similar to inbreeding?”
Thanks for the question, Petros. It’s always tricky to draw analogies between digital phenomena and actual biology. (This from a guy who wrote a book called Artificial Life.) But when you talk about AI model collapse, you are indeed referring to something that in the broadest sense can be understood by referencing a human condition. But let me back up for the uninitiated. We’ve seen the boom in AI large language models enabled by a massive influx of data to train them—basically as much human knowledge as can be input. So much that it looks like we’re running out of human-generated content to train the most recent generations of models.
One potential solution would be for the models themselves to create content—they can do this forever. But there are indications that if you do this a lot, it degrades the output. People have compared it to a snake eating its own tail. That’s why in the last few months I’ve seen the word ouroboros many times more than I have in my entire previous life. So in the same rough way that we talk about artificial neural nets as analogous to the networks in the brain, you can say that a collapse due to an overuse of synthetic data is like inbreeding.
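The degradation described above can be seen in a toy simulation. This is my own illustration, not anything from the column: a crude stand-in for a generative model (a Gaussian fitted to data) is repeatedly retrained on its own samples, and the spread of what it produces tends to shrink over generations, mirroring the loss-of-diversity symptom of model collapse. The `collapse` function and its parameters are hypothetical choices for the sketch.

```python
import random
import statistics

def collapse(generations=2000, n=20, seed=42):
    """Toy model-collapse loop: fit a Gaussian to the data, then replace
    the data with samples drawn from that fit, over and over.
    Returns the fitted standard deviation at each generation."""
    rng = random.Random(seed)
    # Generation 0: "human-generated" data from a standard normal
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]
    stds = []
    for _ in range(generations):
        mu = statistics.fmean(data)       # the "model" is just (mu, sigma)
        sigma = statistics.pstdev(data)
        stds.append(sigma)
        # Next generation trains only on the previous model's own output
        data = [rng.gauss(mu, sigma) for _ in range(n)]
    return stds

stds = collapse()
print(f"generation 0 stdev: {stds[0]:.3f}, final stdev: {stds[-1]:.3g}")
```

Each refit slightly underestimates the true spread and adds sampling noise, so the estimated standard deviation drifts toward zero: the ouroboros eating its tail. Real LLM collapse is far more complex, but the mechanism (compounding error from training on synthetic output) is the same in spirit.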
Some scientists think there’s a way around this. But I am just as happy that these limits exist. It puts a premium on quality human-created content—and incentivizes AI companies to pay for it.
Submit your questions in the comments below, or send an email to mail@wired.com. Write ASK LEVY in the subject line.
End Times Chronicle
More than a week later and LA is still burning.
Last but Not Least
Meet the UAE spymaster turned AI tech funder who controls a $1.5 trillion empire.
A guide to the cryptocurrency connections of the first Bitcoin president.
Inside the dark heart of the solar panel sales world.
Novo Nordisk is raking it in with Ozempic sales. So why is its CEO bent out of shape?
Source: Wired