The X-ification of Meta

It’s a social networking company with close ties to the incoming Trump administration. It deploys a “community notes” system to fight misinformation and lets hateful comments fly under the banner of “free speech.” It’s got a hefty side-gig in AI. It’s setting up shop in Texas. It’s run by a guy worth hundreds of billions of dollars with the fashion sense of someone a couple of decades younger. No, not X. It’s Meta.

It may soon be hard to tell the difference. Meta this week said it would torpedo its fact-checking partners in favor of X’s “community notes” model (a comparison Meta chief global policy officer Joel Kaplan made directly in a blog post yesterday) and used the fig leaf of “free speech” to make alarmingly permissive changes to its Hateful Conduct policy. Last week, it appointed friend of Donald Trump and Ultimate Fighting Championship CEO Dana White to its board, and elevated Kaplan, who is firmly entrenched in Republican circles.

“I’ve been expecting Meta to axe this program for years,” says Alexios Mantzarlis, director of Cornell University’s Security, Trust, and Safety Initiative and founding director of the International Fact-Checking Network, which helped establish the partnership between Facebook and fact-checkers in 2016. “But not in this manner and with this timing, which is so nakedly political.”

X seems like it should be a cautionary tale rather than a North Star for Meta CEO Mark Zuckerberg. Advertisers and users alike have reportedly fled in droves since Musk took over. Timelines are increasingly filled with far-right debate-me edgelord accounts that post a constant churn of misinformation. (And that’s just the owner.) Yet Meta has made clear that this is the future it wants.

“As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable,” Kaplan wrote of the fact-checking program and other moderation tools. “We want to fix that and return to that fundamental commitment to free expression.”

“Meta has perennially been a home for Russian, Chinese, and Iranian disinformation,” claims Gordon Crovitz, co-CEO of NewsGuard, a company that provides a tool to evaluate the trustworthiness of online information. “Now, Meta apparently has decided to open the floodgates completely.”

Again, fact-checking isn’t perfect; Crovitz says that NewsGuard has tracked several “false narratives” on Meta’s platforms already. And the community notes model with which Meta will replace its fact-checking battalions can still be somewhat effective. But research from Mahavedan and others has shown that crowdsourced solutions miss vast swaths of misinformation. And unless Meta commits to maximal transparency in how its version is implemented and used, it will be impossible to know whether the systems are working at all.

It’s also unlikely that the switch to community notes will solve the “bias” problem Meta executives are so outwardly concerned about, given that it seems unlikely to exist in the first place.

“The motivator for all of this changing of Meta’s policies and Musk’s takeover of Twitter is this accusation of social media companies being biased against conservatives,” said David Rand, a behavioral scientist at MIT. “There’s just not good evidence of that.”

In a recently published paper in Nature, Rand and his coauthors found that while Twitter users who used a Trump-related hashtag in 2020 were more than four times likelier to ultimately be suspended than those who used pro-Biden hashtags, they were also much more likely to have shared “low-quality” or misleading news.

“Just because there’s a difference in who’s getting acted on, that doesn’t mean there’s bias,” says Rand. “Crowd ratings can do a pretty good job of reproducing the fact-checker ratings … You’re still going to see more conservatives get sanctioned than liberals.”

And while X gets outsize attention in part because of Musk, remember that it’s an order of magnitude smaller than Facebook, which has 3 billion monthly active users. That scale will present its own challenges when Meta installs its own community notes–style system. “There’s a reason there’s only one Wikipedia in the world,” says Mantzarlis. “It’s very hard to get a crowdsourced anything off the ground at scale.”

As for the loosening of Meta’s Hateful Conduct policy, that in itself is an inherently political choice. It’s still allowing some things and not allowing others; moving those boundaries to accommodate bigotry does not mean they don’t exist. It just means that Meta is more OK with it than it was the day before.

So much depends on exactly how Meta’s system will work in practice. But between the moderation changes and the community guidelines overhaul, Facebook, Instagram, and Threads are careening toward a world where anyone can say that gay and trans people have a “mental illness,” where AI slop will proliferate even more aggressively, where outrageous claims spread unchecked, where truth itself is malleable.

You know: just like X.

Source: Wired