This is a republication of a press release by the European Fact-Checking Standards Network (EFCSN); you can read the original in English here.
The European Fact-Checking Standards Network (EFCSN) expresses its disappointment at Meta’s decision to end its independent fact-checking program in the United States and condemns statements by its CEO that link fact-checking with censorship. “This appears to be more a politically motivated move in the context of Donald Trump’s presidency in the U.S. than a decision based on evidence that the program does not work,” said Clara Jiménez Cruz, President of the EFCSN. The EFCSN calls on the European Union to stand firm against political pressure and not abandon its efforts to counter the spread of disinformation on major digital platforms.
Fact-checking is not censorship. On the contrary, fact-checking adds information to public debates, providing context and data so that each citizen can make their own decisions. It has been shown time and again that fact-checking is effective in countering misinformation. Equating fact-checking with censorship is a false and malicious claim. Fact-checkers do not “censor” anyone. Our members investigate and publish evidence about potentially false claims. It has always been Meta’s decision what to do with fact-checked content, not ours.
The EFCSN disagrees with the characterization of fact-checkers and journalists made by Meta CEO Mark Zuckerberg. In justifying the decision to end the program, Zuckerberg said: “The fact-checkers have simply been too politically biased and have destroyed more trust than they’ve created.” This is false. Fact-checkers are subject to the highest journalistic standards of impartial reporting, transparency, integrity, and accountability, and organizations such as the EFCSN uphold these standards through independently conducted audits every two years. Linking fact-checking with censorship is particularly harmful, as it is one of the driving forces behind harassment and attacks against fact-checkers. Doubling down on these claims can only exacerbate an already serious problem affecting fact-checkers worldwide.
With several European countries holding elections in 2025, platforms that retreat from the fight against misinformation and disinformation enable—and even invite—interference in their electoral processes, especially by foreign actors. The EU in particular must stand firm in enforcing its own laws, even in the face of pressure from other countries.
What the facts (and Meta) say about the impact of the third-party fact-checking program
In its announcement, Meta also equated labeling verified disinformation with censorship: “A program meant to inform too often became a tool for censorship.” In reality, this is the opposite of how the system works. Labels on misinformation allow users to make informed decisions about what content to engage with and share. In fact, during the 2024 European Parliament elections, Meta highlighted the effectiveness of its labeling system, stating: “Between July and December 2023, for example, more than 68 million pieces of content viewed in the EU on Facebook and Instagram had fact-checking labels. When a verified fact label is applied to a post, 95% of people do not click through to view it.”
Meta had also celebrated the third-party fact-checking program as successful and beneficial for users: “We know this program is working and people find value in the warning screens we apply to content after a fact-checking partner has rated it.” The Meta CEO now refers to “too many mistakes and too much censorship”; however, Meta’s own transparency report shows that content whose visibility was reduced in error accounted for only 3.15% of reports in that category, the lowest share of any category in the report.
The “Community Notes” model proposed as an alternative to the independent fact-checking program also has weaknesses. Community Notes can counter false claims more effectively when they are supported by journalistic and methodological work. In the context of the 2024 U.S. elections, Poynter found that X’s Community Notes had, at best, an extremely marginal effect in combating election disinformation. In another study, EFCSN member organization Science Feedback found that most content on X (formerly Twitter) that fact-checkers deemed false or misleading showed no visible signs of having been moderated.