Will Our Facebook Feeds Be Flooded with #FakeNews?

By Marie-Antoinette Issa
on 13 January 2025

So, you’re scrolling through Facebook, catching up on your cousin’s holiday snaps or watching yet another cat video, when a post pops up claiming that drinking lemon water cures cancer or that a celebrity you love has passed away. Just as you’re about to hit “share,” you spot it – a tiny banner saying, “False Information: Checked by independent fact-checkers.”

If you’ve ever hesitated before sharing a post because something felt a little off, or if you’ve noticed a warning pop up on a news story that seemed too wild to be true, you’ve already met Facebook’s fact-checking system in action. It’s there to protect us in a digital world where misinformation spreads faster than a meme.

But now, that safety net is being removed.

In a significant shift, Meta, the tech behemoth behind Facebook, Instagram, and Threads, is abandoning its third-party fact-checking program in the United States, replacing it with a community-driven initiative called Community Notes. This change, coupled with broader plans to ease content moderation and personalise political content, has sparked debate about the future of free speech, misinformation, and the role of social media in shaping public discourse.

Fact-checking vs freedom of speech

Meta CEO Mark Zuckerberg has long championed free speech as a cornerstone of social progress. In his 2019 Georgetown University address, he argued that inhibiting expression often entrenches existing power structures rather than empowering individuals. This philosophy underpins Meta’s latest moves. “Platforms where billions can have a voice are inherently messy,” Meta stated in its announcement. “But that’s free expression.”

These changes come as public sentiment and political climates increasingly favour less moderation online. Zuckerberg linked Meta’s shift to a broader cultural moment, highlighting a growing preference for fewer restrictions and more open discourse, particularly in light of political shifts like Donald Trump’s re-election.

From fact-checking to Community Notes

Meta’s independent fact-checking program, launched in 2016, aimed to provide users with expert insights on viral hoaxes and misinformation. It was a familiar presence for users encountering headlines too shocking to believe. However, the program has faced criticism for its perceived biases and overreach. “Experts, like everyone else, have their own biases,” Meta admitted, noting that legitimate political speech was often swept into the fact-checking net.

The new Community Notes system, inspired by X’s (formerly Twitter’s) crowdsourced context model, shifts responsibility to users. Contributions will require agreement from diverse perspectives to prevent bias, with Meta abstaining from writing or prioritising notes. The goal, according to Meta, is to offer users additional information without intrusive warnings or aggressive content demotion.
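For the technically curious, the “agreement from diverse perspectives” idea is worth unpacking. X’s open-source Community Notes ranker infers viewpoints from rating patterns using a matrix-factorisation model; the Python sketch below is a deliberately simplified toy, not Meta’s or X’s actual code, and every name and threshold in it is hypothetical. It shows the core rule: a note surfaces only when raters from different viewpoint clusters independently find it helpful.

```python
from collections import defaultdict

def note_is_helpful(ratings, min_clusters=2, min_rate=0.8):
    """Toy cross-perspective agreement check (illustrative only).

    ratings: list of (perspective_cluster, rated_helpful) pairs,
    where perspective_cluster stands in for an inferred viewpoint
    group. A note qualifies only if raters from at least
    `min_clusters` distinct clusters each rate it helpful at a
    rate of `min_rate` or above.
    """
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)

    # Clusters whose raters, on balance, endorse the note.
    agreeing = [
        cluster for cluster, votes in by_cluster.items()
        if sum(votes) / len(votes) >= min_rate
    ]
    return len(agreeing) >= min_clusters

# A note endorsed across two viewpoint clusters is shown;
# one endorsed by a single cluster, however strongly, is not.
print(note_is_helpful([("left", True), ("left", True),
                       ("right", True), ("right", True)]))  # True
print(note_is_helpful([("left", True), ("left", True),
                       ("left", True)]))                    # False
```

The design intent is that a note popular with only one side of a debate never reaches everyone’s feed, which is how these systems try to keep the crowd from simply outvoting the facts.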

Content moderation made easier

Meta is also rolling back restrictions on several hot-button topics, such as immigration and gender identity, arguing that such discussions are central to political debate. The company acknowledged that its automated moderation systems have been overly aggressive, leading to excessive censorship and user frustration.

To address this, Meta will focus enforcement on illegal and severe violations – like terrorism and child exploitation – while adopting a report-driven approach for less severe infractions. The company plans to refine its systems to reduce errors, share transparency reports on moderation mistakes, and streamline appeals processes.

A personalised approach to politics

Acknowledging user fatigue with political content, Meta initially reduced civic-related posts in feeds. Now, it’s adopting a more tailored approach, allowing users to see as much – or as little – political content as they prefer. This personalisation leverages signals like post interactions to curate content, aiming to strike a balance between user control and engagement.

Meta’s pivot is emblematic of a larger trend across Silicon Valley, where platforms are rethinking their roles as arbiters of truth. Elon Musk’s approach at X, which champions minimal moderation, has similarly challenged traditional norms.

Critics worry these changes could exacerbate the spread of misinformation. Fact-checking initiatives, while imperfect, served as a check against viral falsehoods. Meta’s retreat raises questions about the viability of such efforts, especially as many relied on funding from the company. Between 2016 and 2022, Meta invested $100 million in fact-checking programs, a lifeline for initiatives certified by the International Fact-Checking Network.

John P. Wihbey, a media innovation expert, noted that Meta’s decision reflects both political shifts and business realities. “The move aligns with a global cooling on fact-checking as platforms prioritise user engagement over enforcement,” he said.

Facing facts, and the future of Facebook

Meta’s gamble on Community Notes and reduced moderation underscores a commitment to free expression, but it also tests the limits of user-driven accountability. Will empowering users foster a healthier information ecosystem, or will it open the floodgates to misinformation?

As the changes roll out in the United States and potentially expand globally, the world will be watching to see whether Meta’s vision of free expression truly enables informed discourse – or devolves into chaos.

In an era where the balance between free speech and responsible content management remains elusive, one thing is clear: the digital landscape is about to get even messier.
