In 2020, I wrote a piece assessing Mark Zuckerberg’s recipe for Facebook’s next decade. Amid the techlash, Zuckerberg declared that he did not think “private companies should be making so many important decisions that touch on fundamental democratic values.” He instead called for “new regulation” and “clearer rules for the internet” and reaffirmed the creation of the Oversight Board to allow users to appeal content decisions.
Despite the failure to articulate a human rights agenda, Meta has, since then, made progress in its corporate commitment to human rights. In 2021, the company adopted a Corporate Human Rights Policy, and it has so far published three Human Rights Reports with the aim of disclosing “how [they]’re addressing human rights impacts, including by sharing relevant insights arising from human rights due diligence and the actions [they] are taking in response.” The Oversight Board, which started its operations at the end of 2020, has taken a robust human rights approach.
A shift backward
While these steps represented a positive, albeit imperfect, move towards alignment with the UN Guiding Principles on Business and Human Rights, Zuckerberg’s “updated” recipe for Meta, announced on January 7, 2025, constitutes a concerning backpedaling and a serious threat to human rights.
Zuckerberg’s announcement is premised on the idea that Meta needs to “go back to its roots around free expression” (although, as aptly explained by Mike Masnick and Ben Whitelaw in their podcast, “Facebook was never a free speech platform”). Therefore, the updated measures are presented as an attempt to “prioritize speech” and “restore free expression” on Meta’s platforms. These measures include (1) replacing fact-checkers with community notes, an approach similar to Elon Musk’s on X; (2) simplifying content policies; (3) raising the threshold for removing prohibited content; (4) bringing back civic (i.e., political) content; (5) moving Trust and Safety and Content Moderation teams from California to Texas, where “there is less concern about the bias of [Meta’s] teams”; and (6) “protect[ing] free expression worldwide” by working with President Trump to push back on regulation.
Despite their stated aim of “prioritizing speech,” these measures severely neglect human rights, including the right to freedom of expression as enshrined in the International Covenant on Civil and Political Rights (Article 19). They also contravene Meta’s own Corporate Human Rights Policy and human rights commitment.
Mainstreaming hate speech
The most glaring contravention of Meta’s human rights commitment is the simplification of content policies. In Zuckerberg’s words, the change aims to “get rid of a bunch of restrictions on topics such as immigration and gender that are just out of touch with mainstream discourse.” These changes have already been implemented in the most recent update to what is now called the “Hateful Conduct Community Standard” (previously “Hate Speech Community Standard”).
The policy now permits the use of insulting language as well as calls for exclusion based on gender, sexual orientation, and national origin. For instance, the policy explicitly allows for “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism [a loaded term often used by anti-trans activists] and homosexuality and common non-serious usage of words like ‘weird.’” While prohibiting calls for social exclusion, it explicitly allows for calls “for sex or gender-based exclusion from spaces commonly limited by sex or gender, such as restrooms, sports and sports leagues, health and support groups, and specific schools.”
While Meta’s content policies are certainly complex, efforts to simplify them cannot take as their yardstick a “mainstream discourse” pervaded by hate speech, at the expense of marginalized communities. Instead, any simplification should be implemented in accordance with the company’s commitment to human rights, which also comprises a commitment to conduct human rights due diligence. This includes, in particular, the identification of human rights risks to “users from groups or populations that may be at heightened risk of becoming vulnerable or marginalized.” Freedom of expression can be legitimately restricted when it is necessary and proportionate to protect the rights of others. More specifically, protecting the right to non-discrimination of women, trans people, and immigrants is, under international human rights law, a legitimate aim for restricting expression.
The move towards such a “simplification” of the hate speech policy is particularly problematic in light of Meta’s history: it was precisely the failure to adequately moderate content that echoed the local “mainstream discourse” that fueled the genocide against the Rohingya in Myanmar. Yet, the new policy update also removed the recognition that hate speech “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.”
Additionally, this measure contravenes the Oversight Board’s findings on Meta’s policies more generally. As I have discussed elsewhere, the Board has found on several occasions that Meta’s content policies failed to comply with the legality standard, with many policies lacking sufficient clarity and specificity. The loosening of restrictions on hate speech, coupled with the raising of enforcement thresholds in content filters, will create fertile ground for a rise in online and offline violence. By failing yet again to integrate human rights standards into its content policies, Meta has chosen to neglect freedom of expression and to prioritize hate speech.
Opposing regulation
Zuckerberg’s mission to “prioritize speech” is not limited to an internal distancing from human rights. Instead, in aiming to “protect free expression worldwide,” he also seeks to challenge the same “clearer rules for the internet” he had called for only five years ago.
Among the governments around the world “guilty” of “going after American companies,” Zuckerberg also targets European ones, which, according to him, have “an ever-increasing number of laws institutionalizing censorship.” In recent years, the European Union has targeted Big Tech through its regulatory efforts. The Digital Services Act (DSA) and the Digital Markets Act are two regulations that can play a fundamental role in protecting human rights online. The DSA has been described as “a digital civil charter that shines through the entire [EU] legal system and radiates minimum rights for individuals.”
Governments can, and do, censor legitimate expression, at times in violation of their own international human rights law obligations. In these instances, social media companies, in accordance with their responsibility to respect human rights, should push back against the censorship of lawful content. However, as underscored by David Kaye, it is “human rights law [that] gives companies a language to articulate their positions worldwide in ways that respect democratic norms and counter authoritarian demands.” As also stressed by Article 19, opposing regulation that aims to promote human rights and ensure platform accountability further prioritizes corporate interests over human rights.
Five years after my previous post, the conclusion is, regrettably, the same: if the company does not shift its internal narratives to align with its commitment to human rights, Meta’s recipe for the future will leave users increasingly vulnerable to both platforms and governments. A recipe that does not include human rights cannot meaningfully protect freedom of expression.