The Cost of Fake News in the Context of Meta’s Shift Away from Fact-Checking
The AI Action Summit’s fourth pillar is “Trust in Artificial Intelligence”. Organizers acknowledge the very real risks linked to the integration of AI systems. Participants will discuss the impact of AI on ethics, the fight against discrimination, the “malicious use” of these technologies, and security commitments by the historic AI pioneers. These debates will address major challenges of AI in terms of information manipulation, challenges which HEC Professor David Restrepo has been exploring for several years. Here he reflects on the latest one: the decision by Facebook founder Mark Zuckerberg to remove fact-checkers from Meta. This is an adapted version of an op-ed first published by Forbes in January 2025. Restrepo also discusses the role of regulations designed to ensure that humans do not lose control over automated systems and that AI systems do not diminish humanity and human agency.

In January 2025, Meta announced a controversial shift in its approach to misinformation, replacing independent fact-checkers on Facebook and Instagram with a Community Notes-style system. As the company framed it, the move is designed to support “more speech and fewer mistakes” by leveraging user contributions to contextualize misleading posts. Such claims echo those made by X, which implemented similar policies after Elon Musk’s takeover. But our research on X underlines how speed undermines such policies: falsehoods spread considerably faster than corrections.
Increasingly, we have seen how quickly fake news can upend financial markets and corporate reputations. In 2023, for example, a fabricated image of an explosion near the Pentagon rattled the U.S. stock market, causing a brief but impactful downturn. Then there was the notorious fake tweet promising free insulin from Eli Lilly in November 2022, which cost the pharmaceutical multinational $22 billion in market value. This isn’t a new phenomenon: as far back as 2013, a fake report of explosions at the White House caused the S&P 500 to lose $130 billion in market capitalization within minutes.
Research on X’s Community Notes and Their Limits
These examples demonstrate that fake news is more than an annoyance: it presents a significant social, economic, political and reputational threat. This is one of several conclusions from our years of research, built on a database of around 240,000 notes from X’s (formerly Twitter) Community Notes program, a system in which users collaboratively provide context to potentially misleading posts. We sought to analyze the causal effect that appending such contextual information has on the dissemination of those posts. While the program offered valuable insights into combating misinformation, our findings reveal critical limitations.
In this study, we found that Community Notes double the probability of a tweet being deleted by its creator. However, the note often arrives too late: around 50% of retweets happen within the first six hours of a tweet’s life. And while Community Notes reduce retweets by more than 60% on average, the median note takes over 18 hours to be published, too slow to combat the initial, viral spread of misinformation. This confirms a 2018 MIT study which showed that falsehoods can travel “10 to 20 times faster than facts”.
It also highlights a critical challenge: while community-driven fact-checking is a valuable tool, its current design and speed are insufficient to mitigate the rapid dissemination of fake news. And that dissemination is only getting faster.
The Way Forward: Leadership in the Age of Misinformation
Meta’s decision to replace independent fact-checkers with a Community Notes-style system on Instagram and Facebook highlights the urgency of addressing misinformation at scale. Its announcement sparked a wave of criticism, including an open letter to Mark Zuckerberg from the International Fact-Checking Network (IFCN), which warned of the increased risks of misinformation and its consequences for businesses and society. The letter underscored that this approach undermines accountability and could exacerbate the rapid spread of fake news, leaving businesses particularly vulnerable.
As our research demonstrates, these systems need to evolve to match the speed at which misinformation spreads. We believe that integrating AI-driven tools could significantly enhance human efforts, enabling faster detection and flagging of potentially harmful content. For example, machine learning models trained to identify patterns of misinformation can serve as an early warning system, while large language models (LLMs) can complement these efforts by analyzing the linguistic and thematic patterns of viral posts to provide real-time contextualization, as sketched below. This dual approach would allow platforms and companies to respond to misinformation more effectively and in near real time. Moreover, fostering partnerships between social media platforms, governments, and private entities could lead to more unified standards for combating fake news.
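To make this dual approach concrete, here is a minimal sketch of such a two-stage pipeline in Python. It illustrates the general idea only and is not the system studied in our research: the toy training data, the flag_post helper and the stubbed call_llm function are all assumptions, and a real deployment would train on a large labeled corpus and call an actual LLM endpoint.

```python
# Minimal sketch of a two-stage misinformation triage pipeline.
# Assumptions: toy labeled data stands in for a real corpus; call_llm
# is a placeholder for whatever LLM API a platform actually uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: a lightweight classifier acts as the "early warning system".
posts = [
    "BREAKING: explosion reported near the Pentagon, markets crashing",
    "Insulin is now free for everyone, effective immediately",
    "Quarterly earnings call scheduled for Thursday at 10am",
    "Our new office opens in Lyon next month",
]
labels = [1, 1, 0, 0]  # 1 = potentially misleading, 0 = benign

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(posts, labels)

def flag_post(text: str, threshold: float = 0.5) -> bool:
    """Return True if the post should be routed for contextualization."""
    return detector.predict_proba([text])[0][1] >= threshold

# Stage 2: an LLM drafts a contextual note for human review.
def call_llm(prompt: str) -> str:
    return "[draft note pending human review]"  # stub for a real LLM call

def draft_context_note(post: str) -> str:
    prompt = (
        "The following social media post may be misleading.\n"
        f"Post: {post}\n"
        "Draft a short, neutral note giving readers factual context, "
        "citing verifiable sources where possible."
    )
    return call_llm(prompt)

incoming = "BREAKING: explosion at the White House, president evacuated"
if flag_post(incoming):
    print(draft_context_note(incoming))
```

The point of the split is speed: the classifier can flag a post within seconds of publication, while the LLM drafts context that human reviewers then approve, which is precisely where the 18-hour median delay identified in our research does the damage.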
Is Regulation Effective in Promoting Human-Centric AI?
Parallel to our research on fact-checking, we have been exploring regulatory initiatives, such as Europe’s AI Act and the proposed AI Liability Directive, which aim to promote a human-centric approach to AI. Our recent study explores a dual approach to tackling excessive regulation and ineffective policy in the field of AI. We suggest that for AI regulation to promote a human-centric approach, human rights should be the main regulatory benchmark for assessing AI systems and balancing the purposes they serve in the market. A first practical step is to require an explicit proportionality test. This test acknowledges that AI systems may impact human rights, such as privacy and non-discrimination, and requires developers to explicitly disclose the trade-offs between the optimization strategies designed to achieve the AI systems’ business objectives and their potential negative impact on human rights. Moreover, the proportionality test would also help make explicit the trade-offs between human rights themselves, as in cases where content moderation is performed by algorithms: by determining whether or not to moderate potentially offensive messages, these algorithms ultimately balance the rights to freedom of expression and non-discrimination.
Second, we suggest a co-evolutionary, life-cycle approach that can help ensure accountability beyond the design stage. We propose to achieve this through meaningful human control and human-AI interaction across the entire lifecycle of the system, allowing decision-makers to continually update and adapt AI systems in response to the challenges they identify during each phase.
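As a loose illustration, the proportionality disclosure described above could be made machine-readable so that it follows the system through its lifecycle. The sketch below is purely hypothetical: the schema and field names are our own assumptions, not a format prescribed by the AI Act, the AI Liability Directive, or our study.

```python
# Hypothetical machine-readable proportionality disclosure for a content
# moderation algorithm. The schema is illustrative only; no regulation
# prescribes this format.
from dataclasses import dataclass, field

@dataclass
class ProportionalityDisclosure:
    system: str                # the AI system being assessed
    business_objective: str    # what the optimization strategy pursues
    rights_at_risk: list[str]  # human rights the system may impact
    trade_offs: dict[str, str] # right -> how the objective affects it
    mitigations: list[str] = field(default_factory=list)  # updated over the lifecycle

# Example: a moderation model balancing expression against non-discrimination.
disclosure = ProportionalityDisclosure(
    system="automated moderation of potentially offensive posts",
    business_objective="maximize removal of harmful content at scale",
    rights_at_risk=["freedom of expression", "non-discrimination"],
    trade_offs={
        "freedom of expression": "aggressive thresholds over-remove lawful speech",
        "non-discrimination": "error rates may be higher for minority dialects",
    },
    mitigations=["human review of borderline removals",
                 "periodic per-group error audits"],
)
print(disclosure)
```

Recording the trade-offs explicitly, rather than leaving them implicit in model thresholds, is what allows the life-cycle oversight described above: each phase of deployment can revisit and amend the disclosure as new risks are identified.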
Staying Ahead of the Curve
In today’s fast-moving digital landscape, trust has become as valuable as revenue. The rapid spread of misinformation, amplified by market-driven platforms, presents both a risk and an opportunity for businesses and governments alike. Through research and real-world examples, we see that those who proactively address these challenges can foster both resilience and long-term integrity.
The way forward requires a blend of technological innovation and strategic collaboration. Businesses must integrate AI-driven tools to detect and mitigate misinformation faster than it can spread. However, technology alone is not enough. Leadership is also crucial. By adopting regulatory frameworks and implementing proportionality tests, organizations can ensure that human rights remain central to their AI strategies. This regulatory approach helps make explicit the trade-offs between business objectives and their potential impact on rights such as privacy and non-discrimination. Furthermore, continuous human oversight across the entire AI lifecycle ensures that systems can evolve in response to emerging risks and ethical concerns.
Businesses that stay ahead of the curve by investing in these strategies not only protect their reputations but also contribute to a more informed and resilient society. In doing so, they turn today’s crises into opportunities for innovation and leadership, shaping a future where trust and accountability are the cornerstones of success.