January 16, 2025


📰 FEATURE STORY

Is Meta’s fact-checking change the right call?

Since “fake news” entered the lexicon, fact-checking hasn’t been far behind; in some ways, the two are inseparable. In an increasingly polarised world where everyone has access to a microphone to opine, fact-checking has become more important than ever. No matter the event, or who or what is involved, misinformation will eventually flood social media feeds.

At this crucial moment, Meta announced it is ending third-party professional fact-checking across all its platforms. CEO Mark Zuckerberg said it will be replaced by a user-driven community notes model similar to X’s. The news sparked a frenzy. Some are happy with the decision, while others see it as backwards and harmful. Did Meta get this right?

Context

Let’s go back to see how Meta’s fact-checking journey began. Following the 2016 Presidential election, Facebook faced backlash for amplifying false posts meant to sway people in favour of Trump. During the campaign, several investigations found that social media was rife with bots pushing and amplifying false news stories. Much of this was the handiwork of Russian intelligence as part of its active measures campaign to help Trump win.

Social media platforms were under the spotlight and were heavily criticised for allowing false information to be disseminated. The US Congress would later investigate to see how social media was exploited by bad actors. Executives from these companies were brought to committees to testify about what happened and how they planned to fix it.

Facebook decided to flag fake news stories with the help of outside experts. The company worked with ABC News, the Associated Press, Snopes, FactCheck.org, and PolitiFact. The process was straightforward. Readers alerted the company about possible fake stories, which were sent to the third-party partners for verification. If a story failed the fact-check, it was publicly flagged as “disputed by 3rd party fact-checkers”.

Another change was identifying stories shared disproportionately by people who had read only the headline rather than clicked through to the article. The company found that when reading an article made people less likely to share it, that was a signal the story might be misleading.

In a press release at the time, Facebook said, “It’s important to us that the stories you see on Facebook are authentic and meaningful.” In his own post, Zuckerberg wrote that the business had a greater responsibility to the public than merely being a technology company.

Over the years, fake news and misinformation in elections have become a global problem. The same could be said for the pandemic. Fact-checking took on greater significance, with newsrooms and media companies taking extra care to ensure authenticity.

With Meta’s announcement last week, things have shifted. Zuckerberg said there was a need to simplify content policies on topics that were “out of touch with the mainstream discourse.” According to him, third-party fact-checkers have been “too politically biased.” Will this U-turn prove a boon or a disaster?

VIEW: It’s the right call

Social media’s information landscape remains poorly understood. While there’s plenty of research on the type and quality of news disseminated by media outlets, little is known about content generated by ordinary users. Those users number in the millions and produce far more content than professional fact-checkers can readily review. Crowdsourcing offers a rapid, far-reaching response, and peer-to-peer correction can be more effective than an official verdict delivered after the fact.

X (formerly Twitter) has adopted the community notes approach. Some studies have shown that recruiting users to add context to potentially misleading tweets can reduce their spread and make readers less likely to believe them. One study by researchers at the University of Illinois Urbana-Champaign found that a displayed community note increased the likelihood that the author would retract the tweet. Another, from the University of Luxembourg, found that exposing users to community notes reduced the spread of misleading posts by 61%.

The fact of the matter is that every post from every random person, brand, or company can’t be nitpicked and treated identically. It should be noted that Meta will still moderate content related to drugs, terrorism, child exploitation, fraud, and scams. There’s still an Oversight Board to adjudicate cases that violate company policy. The company aims to return to its roots: a platform where people can post freely. Zuckerberg sees it as a place where people can exchange ideas across any proverbial boundaries.

COUNTERVIEW: A disaster in the making

The timing of Meta’s announcement isn’t surprising: it comes just before Trump takes the presidential oath for the second time. The 2024 election was seen as something of a cultural shift, and companies took notice. It’s a new era of “free speech” with no guardrails. Meta is also caving to Trump, who has signalled his willingness to take regulatory action against Big Tech for supporting his opponents and supposedly stifling conservative voices, a grievance his allies have repeatedly aired.

The community-based approach proposed by Meta isn’t going to work. A crowdsourced fact-checking system will only be as effective as the platform, owners, and developers behind it. On X, fewer than 10% of proposed notes reach consensus, and precious few of those tackle dangerous political or medical misinformation. Community notes themselves contain plenty of falsehoods; contributors mostly tag opinions or projections and rely on biased sources or other users’ posts.

Meta understands it’s in the attention business. The company’s addictive algorithms have been accused of supercharging posts that encouraged ethnic cleansing in Myanmar, for example. Last week’s announcement applies to the US only, and the company has been vague about whether it will expand. If it does, fact-checking groups and experts warn, it could become a misinformation free-for-all. That includes India, where Meta works with 11 independent fact-checking groups covering content in 15 languages. A flood of unchecked content could overwhelm millions of users.

Reference Links:

  • Facebook partners with fact-checking organizations to begin flagging fake news – The Verge
  • Facebook’s Role in Truth: Understanding the Impact of Fact Checking Facebook – Originality.ai
  • Fact-Checking Was Too Good for Facebook – The Atlantic
  • Meta will attempt crowdsourced fact-checking. Here’s why it won’t work – Poynter
  • Meta’s Decision to End Fact-Checking Could Have Disastrous Consequences – The New York Times
  • Will the EU fight for the truth on Facebook and Instagram? – The Guardian
  • Meta’s pivot on fact-checks in US sparks misinformation fears in India – Business Standard

What is your opinion on this?
(Only subscribers can participate in polls)

a) Meta’s fact-checking change is the right call.
b) Meta’s fact-checking change is the wrong call.

Previous poll’s results:

  • Blinkit’s 10-minute ambulance service is the next step in emergency healthcare: 85.7% 🏆
  • Blinkit’s 10-minute ambulance service isn’t the next step in emergency healthcare: 14.3%

🕵️ BEYOND ECHO CHAMBERS

For the Right:

Has Jammu and Kashmir really ‘prospered’ after 2019? Data suggests otherwise

For the Left:

By Engaging With Taliban, India Has Cornered Pakistan In Afghanistan