Meta Ends U.S. Fact-Checking Program and Revises Policies

Meta is ending its U.S. third-party fact-checking program for Facebook, Instagram, and Threads, shifting to a "community notes" model similar to X. Additionally, Meta revised its hateful-conduct policies, removing explicit bans on language comparing women to "household objects" and referring to transgender individuals as "it." Critics argue these changes align with President-elect Donald Trump's agenda.

Jan 6, 2025

On January 6, 2025, Meta announced the termination of its third-party fact-checking program in the United States, marking a major shift in the company’s approach to content moderation on Facebook, Instagram, and Threads. The decision comes amid increasing political scrutiny and debates over free speech, as the company moves toward a more hands-off model similar to the “community notes” system used by Elon Musk’s X (formerly Twitter).

Meta’s fact-checking program, launched in 2016, partnered with independent organizations to flag and limit the spread of false or misleading content across its platforms. In its official statement, Meta said the shift reflects its commitment to “empowering users to assess information for themselves.” The company emphasized that it will continue to use artificial intelligence to identify and downrank content it deems misleading, but it will no longer rely on third-party fact-checkers to label posts.

In addition to ending fact-checking partnerships, Meta has also revised its hateful conduct policies. The changes include the removal of specific prohibitions, such as those banning comparisons of women to "household objects" and language referring to transgender individuals as "it." Critics argue that these changes reflect a broader rollback of content moderation policies, while supporters see them as a move toward less restrictive speech regulation on social media platforms.

The timing of Meta’s decision has sparked controversy, as it comes during the transition period following the 2024 U.S. presidential election and shortly before Donald Trump’s inauguration. Some analysts see the changes as an accommodation of the incoming administration, noting that the president-elect and his supporters have long criticized fact-checking efforts as biased. Others suggest that Meta’s shift is driven by business considerations, including mounting regulatory pressure, that have led the company to reassess its content-moderation priorities.

Reactions to the move have been divided. Advocacy groups warn that eliminating third-party fact-checking could lead to a surge in misinformation, particularly on topics related to politics, health, and science. Meanwhile, free speech advocates have praised the decision, arguing that it reduces the risk of censorship and allows for a broader range of viewpoints to be shared online.

Despite the policy changes, Meta insists that it remains committed to combating harmful content and ensuring platform integrity. The company has stated that it will continue to rely on AI-driven moderation tools, user reporting systems, and transparency measures to manage content across its platforms. However, questions remain about the long-term impact of these shifts and how they will affect the digital information landscape in the U.S.

Copyright 2025 USA NEWS all rights reserved