Why the EU’s Vice President Isn’t Worried About Moon-Landing Conspiracies on YouTube

When European Commission vice president Věra Jourová met with YouTube CEO Neal Mohan in California last week, the conversation turned to the long-running conspiracy theory that the moon landings were faked. YouTube has faced calls from some users and advocacy groups to remove videos that question the historic missions. Like other videos denying accepted science, they have been pulled from recommendations and paired with a Wikipedia link that points viewers to debunking context.

But as Mohan spoke about those measures, Jourová made something clear: Fighting lunar lunatics or flat-earthers shouldn’t be a priority. “If the people want to believe it, let them do,” she said. As the official charged with protecting Europe’s democratic values, she thinks it’s more important to make sure YouTube and other big platforms don’t spare a euro that could be invested in fact-checking or product changes to curb false or misleading content that threatens the EU’s security.

“We are focusing on the narratives which have the potential to mislead voters, which could create big harm to society,” Jourová tells WIRED in an interview. Unless conspiracy theories could lead to deaths, violence, or pogroms, she says, don’t expect the EU to be demanding action against them. Content like the recent fake news report announcing that Poland is mobilizing its troops in the middle of an election? That better not catch on as truth online.

In Jourová’s view, her conversation with Mohan and similar discussions she held last week with the CEOs of TikTok, X, and Meta show how the EU is helping companies understand what it takes to counter disinformation, as now required under the bloc’s tough new Digital Services Act. Starting this year, the law requires the internet’s biggest platforms, including YouTube, to take steps to combat disinformation or risk fines of up to 6 percent of their global sales.

Civil liberties activists have warned that the DSA could ultimately enable censorship by the bloc’s more authoritarian regimes. A strong showing by far-right candidates in the EU’s parliamentary elections taking place later this week could also lead to uneven enforcement of the law.

YouTube spokesperson Nicole Bell says the company is aligned with Jourová on preventing egregious real-world harm and on removing content that misleads voters about how to vote or encourages interference in democratic processes. “Our teams will continue to work around the clock,” Bell says of monitoring problematic videos about this week’s EU elections.

Jourová does have some clear preferences, though. “We should do everything to guarantee that lies are not the easiest way to get political positions,” she says. “If politicians are lying, there should be somebody to say immediately, ‘Guy, you are lying.’ Using clear lies, especially of the nature that increases the hostility and proliferates hate, should be stopped.”

Political candidates around the world have continued to turn to new technologies and social media to spread potentially misleading content. Jourová says local researchers identified 70 deepfakes ahead of recent elections in Slovakia. Though their impact on the vote has not been assessed, some audio deepfakes released on the eve of the election targeted a pro-Ukraine candidate, who lost his bid to run the country to a pro-Russian opponent. WIRED has so far cataloged about 50 cases of deepfakes in elections around the world this year.

Western governments and researchers have attributed some of the deepfake surge to Russia. But though Jourová is concerned about the alleged interference, she also takes it as evidence that democracy is working. There aren’t enough fellow autocrats in Europe for Putin to call up to win favor with the EU, she reasons, and so instead, he has to seed lies and hope they sway electorates towards installing leaders who support him. That’s an expensive strategy for a country in economic straits, Jourová says, and she anticipates it becoming more expensive still for the Kremlin if tech platforms successfully crack down on disinformation.

The DSA includes measures aimed at making it clear to officials and the public what action platforms are taking. Companies are required to share data and commentary on their work to limit disinformation such as political deepfakes. So far, the platforms’ compliance with that provision of the DSA and with the EU’s related voluntary Code of Practice on Disinformation has been uneven, making it difficult to draw comparisons or to assemble an overall picture of harmful untruths across the internet.

YouTube has told the EU that in the second half of last year 112 deepfake videos each received over 10,000 views before being taken down. By contrast, Meta offered no comparable data and declined to comment to WIRED on its different approach to reporting.

“It’s a bit still moving in the darkness,” Jourová says of monitoring compliance. But she insists that will change. “Comparable data in a structured way” is the goal, she says. Or else big fines—and the crippling of democracy—could follow.

Additional reporting by Morgan Meaker.

Source: Wired