Before Shutdown, Meta’s Fact-Checking Program Only Labeled 14 Percent of Russian, Chinese, and Iranian Disinformation Posts

Flaws in process could spill over into coming community notes system — with worse results

NewsGuard
Jan 23, 2025

Special Report

By Dimitris Dimitriadis, Eva Maitland, and McKenzie Sadeghi

Only 14 percent of the posts from global users that NewsGuard analysts identified as advancing a sampling of 30 Russian, Chinese, and Iranian disinformation narratives on the Meta platforms Facebook, Instagram, and Threads were tagged as false under the fact-checking program that Meta is soon ending in the U.S., NewsGuard found.

Mark Zuckerberg announced on Jan. 7, 2025, that Meta’s fact-checking program, launched in December 2016 following criticism that the company did not do enough to curb foreign influence in that year’s U.S. presidential election, was being dropped in the U.S. The program contracts third-party fact-checkers at major news outlets, including USA Today, Reuters, and The Associated Press, and overlays their fact-check articles onto false content on Meta platforms.

In its place, Meta says it will adopt “Community Notes,” a crowdsourced approach similar to the one used on Elon Musk’s X.

However, as explained below, if Meta applies to community notes the same technology and rules it has used for fact-checker-generated labels, the results are likely to be no more promising. In fact, the results could be even weaker in terms of speed and coverage, because a community note can be published only after a process in which a community of users is first shown to hold what Meta has said must be “a range of perspectives.”

NewsGuard found that even with Meta’s fact-checking initiatives in place, the vast majority of posts advancing foreign disinformation narratives spread without carrying any of the fact-checking labels used by Meta: False, Altered, Partly False, Missing Context, or Satire. The analysis looked at the spread of 30 false claims from NewsGuard’s proprietary database of Misinformation Fingerprints, a continuously updated, machine-readable feed of data about provably false claims circulating online. (See Methodology note below.)

The sample covered 30 of the 508 disinformation narratives that NewsGuard analysts have identified as pushed by Russian, Chinese, and Iranian government media between June 2023 and January 2025. These claims, each determined through reporting by NewsGuard analysts to be “provably false,” include the Russian falsehood that eight Ukrainian military officials owned mansions that burned down in the 2025 Southern California fires, the Chinese falsehood that the U.S. operates a secret bioweapons laboratory in Kazakhstan, and the Iranian falsehood that Donald Trump praised Iran as a “powerful nation” in his 2015 book “Crippled America.”

NewsGuard identified 457 posts across the Meta platforms advancing these 30 false claims. Of these, 253 posts advanced Iranian disinformation, 170 Russian disinformation, and 34 Chinese disinformation. (Only one of the examples of Chinese disinformation received a fact-check label of any kind.) In total, only 66 of the posts (14 percent) carried a fact-check label, leaving 391 posts (86 percent) with no indication to users that the content contained foreign state-sponsored disinformation.
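
The headline figures follow directly from these counts; the short Python sketch below simply reproduces the arithmetic reported above.

```python
# Reproducing the label-rate arithmetic from the counts in this report.
posts_by_origin = {"Iranian": 253, "Russian": 170, "Chinese": 34}

total = sum(posts_by_origin.values())  # 457 posts in all
labeled = 66                           # posts carrying any fact-check label

print(f"labeled:   {labeled / total:.0%}")            # -> 14%
print(f"unlabeled: {(total - labeled) / total:.0%}")  # -> 86%
```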

The five most common false narratives that appeared on the Meta platforms were claims that:

  • Reuters reported that Israel’s ambassador to Cyprus and his two bodyguards were kidnapped (116 posts)

  • Leading Syrian chemistry scientist Hamdi Ismail Nada was assassinated in Damascus (61 posts)

  • Actor Johnny Depp and Russian model Yulia Vlasova are opening a jewelry store in Moscow (57 posts)

  • Syrian President Bashar al-Assad was killed in a plane crash (41 posts)

  • Germany plans to take in 1.9 million Kenyan workers (37 posts)

NewsGuard sent two emails to Meta seeking comment on its findings, but did not receive a response.

A breakdown, by country of origin, of posts advancing false claims that spread without a fact-checking label compared with those flagged with a label. (Graphic via NewsGuard)

Broken Process: Meta’s Algorithm Misses Differences in Language

Meta states on its website that after an independent partner fact-checks a false post on one of its platforms, it employs a technology that seeks out posts with “near identical” phrasing in order to apply fact-check labels to them as well. However, Meta adds that it only applies ratings to posts that are “the same or almost exactly the same” as the post fact-checked by the third-party partner.

“We generally do not add notices to content that makes a similar claim rated by fact checkers, if the content is not identical,” Meta states. That means that if a false claim is rephrased or paraphrased, it may not trigger a fact-check label, allowing slightly altered versions of the same disinformation to spread unchecked.
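
Meta has not disclosed how this matching works, but its consequence can be illustrated. The sketch below is a hypothetical approximation that assumes a simple text-similarity threshold stands in for “the same or almost exactly the same”; the two example posts are the real fact-checked post and an unlabeled paraphrase from the Cyprus kidnapping hoax discussed later in this report.

```python
# Illustrative only: Meta has not published its matching technology.
# Assumes a high character-level similarity threshold approximates
# "the same or almost exactly the same."
from difflib import SequenceMatcher

def is_near_identical(a: str, b: str, threshold: float = 0.9) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

fact_checked = ("Occupation media report the kidnapping of the Israeli "
                "ambassador to Cyprus and two of his companions.")
paraphrase = ("Reuters: Israeli ambassador in Cyprus, two of his entourages, "
              "kidnapped and led to unknown destination.")

print(is_near_identical(fact_checked, fact_checked))  # True:  label propagates
print(is_near_identical(fact_checked, paraphrase))    # False: same claim, no label
```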

Asked whether the upcoming community notes process would rely on the same technology that produced the 14 percent labeling rate and on the same rules, that is, whether a note will only be added if the claim is “the same or almost exactly the same,” Meta’s press office did not respond. Meta’s website announcing the switch to Community Notes says, “Meta won’t write Community Notes or decide which ones show up. They are written and rated by contributing users … Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings.”

Asked how agreement among people with “a range of perspectives” could ever be reached to enable a community note if that range of perspectives included users promoting a Russian propaganda message alongside users who do not believe the message, Meta did not respond.
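
To make the concern concrete, here is a deliberately simplified, hypothetical sketch of such an agreement gate. The function and cluster names are illustrative assumptions, not Meta’s design; X’s open-source Community Notes system derives rater “perspectives” from rating history in a far more elaborate way.

```python
# Hypothetical sketch, not Meta's actual system: a note is published only
# if raters from at least two distinct perspective clusters find it helpful.

def note_publishes(ratings: list[tuple[str, bool]]) -> bool:
    """ratings: (perspective_cluster, rated_helpful) pairs."""
    helpful_clusters = {cluster for cluster, helpful in ratings if helpful}
    return len(helpful_clusters) >= 2

# A note debunking a propaganda post: skeptics rate it helpful, promoters
# of the narrative do not. No cross-cluster agreement, so no note appears.
print(note_publishes([("skeptic", True), ("skeptic", True),
                      ("promoter", False)]))  # False
```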

One Hamas Hoax Corrected, 113 Posts With the Same Claim Go Untouched

In 10 of the 30 narratives NewsGuard examined, Meta labeled one or more of the posts advancing the narrative but left uncorrected dozens of other posts containing the same false claim with the same meaning, albeit in somewhat different wording.

For example, Meta’s fact-checking partners, including Agence France-Presse’s (AFP) Arabic fact-check team, debunked the claim that Reuters reported that Oren Anolik, Israel’s ambassador to Cyprus, and his two bodyguards had been kidnapped. The narrative was part of a broader disinformation effort by Iran to portray attacks by Iran and its proxies Hezbollah and Hamas as successful. The post labeled as false stated, “Occupation media report the kidnapping of the Israeli ambassador to Cyprus and two of his companions.”

Ambassador Anolik debunked the claim in a video posted on X, and Reuters issued a statement saying it had never reported such news.

Nevertheless, NewsGuard found 113 other posts across Facebook, Instagram, and Threads advancing this claim with the same meaning but different phrasings that were not flagged by Meta, such as: “Reuters: Israeli ambassador in Cyprus, two of his entourages, kidnapped and led to unknown destination.” The post that was fact-checked by the AFP and labeled as “False” by Meta generated 3,000 likes, while the 113 posts that did not carry a fact-check label cumulatively generated 21,720 likes.

While most of the unlabeled posts identified by NewsGuard in the analysis came from anonymously run pro-Russian, pro-Iranian, and pro-Chinese accounts, 12 of the posts advancing false claims came directly from state-controlled media outlets, such as Iranian state-run HispanTV, the state-run Chinese News Service, previously part of Xinhua News Agency, and the accounts of Russian embassies.

Complete Misses: ‘Chad migrant rapes a French 12-year-old,’ ‘Taylor Swift turns on Taiwan’

Beyond the limitations of Meta’s algorithm, some of the posts identified by NewsGuard as advancing disinformation did not appear to have been fact-checked by Meta’s third-party partners at all.

For example, NewsGuard identified three posts, which jointly generated 7,000 likes, containing a video purporting to show a migrant from Chad being confronted by two French men after French police had supposedly detained and released him following his alleged rape of a 12-year-old girl. The video was apparently intended to stoke anti-immigration sentiment in France.

However, there is no evidence that a migrant from Chad was detained and released by French authorities after confessing to raping a 12-year-old girl, and no such incident was reported in France. A French security official told NewsGuard in December 2024 that the video was staged by a Russian influence operation dubbed Storm-1516 by Microsoft. NewsGuard did not find any debunks of this claim by Meta’s third-party partners.

Similarly, NewsGuard identified four unlabeled posts on Meta platforms from pro-China users aiming to advance Beijing’s claims of sovereignty over Taiwan by falsely claiming that Google Maps had changed its name for Taiwan to “Taiwan Province.”

A Google spokesperson told NewsGuard in a December 2024 email, “We have not made any changes to how we depict Taiwan on Google Maps. People using Maps in Mainland China see ‘Taiwan Province’ as that name is required by local law - elsewhere, people see ‘Taiwan.’” NewsGuard did not find any debunks of this claim on any of the Meta platforms.

In another example of a false claim that went unchecked on Meta platforms, pro-China accounts posted an AI-generated image purporting to show pop singer Taylor Swift holding a sign that reads, “Taiwan is a Chinese province.” (Besides the image being fake, Swift has not made any such statement endorsing Beijing’s “One China” policy.) NewsGuard identified seven such posts across Facebook, Instagram, and Threads, which, combined, generated 677 likes.

Scattered Successes

In some cases, the labeling appeared to have some success in thwarting false narratives that could have reached a wider audience. For example, in mid-December 2024, pro-Kremlin sources on Facebook shared an article falsely claiming that the German government had signed a labor agreement allowing 1.9 million Kenyan migrants to enter Germany. Of the 36 posts advancing the claim, 20 were flagged by Facebook as “false information,” directing users to a fact-check article from the German news agency DPA explaining that the claim distorted an authentic labor agreement between Germany and Kenya.

The false narrative, part of a broader Russian disinformation effort ahead of Germany’s Feb. 23, 2025, snap elections, strongly resembles the handiwork of John Mark Dougan, the former Florida deputy sheriff who fled to Russia and became a leading Kremlin propagandist. The 36 Facebook posts collectively gained only 173 views, suggesting that Facebook’s algorithm may have successfully limited their reach.

As NewsGuard’s findings indicate, even with the fact-checking program, malign foreign actors still found ways to exploit Meta’s platforms, and the occasional successes of the fact-checking program — such as the labeling of the Russian disinformation claim targeting Germany’s upcoming elections — are now at risk of disappearing. Taking its place will be a community notes system that seems likely to be even less effective.

Methodology: NewsGuard selected 30 Misinformation Fingerprints (10 Russian, 10 Iranian, and 10 Chinese) from our catalog of provably false claims debunked by NewsGuard analysts. NewsGuard’s analysis covered posts that were shared from the time each narrative first emerged — the earliest was June 2023 — to Jan. 13, 2025. Given the challenges posed by Meta’s discontinuation of CrowdTangle — a tool previously used by researchers for tracking content on its platforms — NewsGuard conducted a comprehensive manual search. This involved using search terms and keywords derived from the Misinformation Fingerprints across multiple languages, including English, Russian, Chinese, Persian, and Arabic. Posts were categorized as labeled if they carried any of Meta’s fact-checking designations, such as “False,” “Altered,” “Partly False,” “Missing Context,” or “Satire.” Posts without such labels were categorized as unlabeled. NewsGuard sought out all posts advancing these false claims, regardless of whether they originated from state-sponsored sources or individual accounts. It is possible that NewsGuard’s analysis missed posts because of limitations in search capabilities and the possibility that posts were deleted before they could be analyzed.

Edited by Dina Contini and Eric Effron
