The societal disruptions posed by generative artificial intelligence (AI)—a broad term for the automated production of content such as text, images, and audio—have been extensively cataloged. These disruptions range from effects on workers to the concentration of corporate power to the expansion of surveillance.
It is too early to know how any of these predictions about generative AI will shake out, but there is one additional issue that human rights advocates should monitor: how governments might respond to generative AI in ways that restrict freedom of expression. In particular, advocates should encourage governments to target AI regulations so that they respect existing legal regimes protecting freedom of expression.
Any attempts to regulate the content produced by generative AI, including large language models, run the risk of operating broadly to restrict protected expression. The fact that humans may have used a particular technological tool—whether a large language model, an image generator, or a search engine—to create or shape a message does not mean that such a message constitutes unprotected expression. Rather, technology is often a key driver that enables people to record and disseminate information in novel ways, including through photography, digital video, or the scraping of public data on the internet to gather the information necessary to hold powerful actors accountable.
Courts in the United States have recognized that simply because technology is involved in enabling expression does not mean that people lose their rights to engage in that expression. For example, the right to record police in public would be largely meaningless if it did not protect the ability to use mobile phones, as so many individuals and protestors the world over have done to disseminate evidence of police misconduct. The same concept applies to generative AI, which people may use to support or augment (or do the work of) expressing their messages.
In the context of copyright, the US government is already exploring distinctions between works produced solely by humans as opposed to works produced by AI tools. If this distinction is adopted outside the realm of intellectual property, it might lead governments to regulate AI-generated content differently than human-generated content. And any such laws would have to be scrutinized closely for their effect on individual users’ rights to free speech.
One of the specific pressures governments will face is to address the widespread concern that generative AI will turbocharge the production of content that is false or misleading. Generative AI tools might make it cheaper and easier to produce false information in a sophisticated format, thereby allowing the rapid spread of such information in ways that human fact-checkers will struggle to keep up with.
This concern is not new, as discussions around regulating so-called “deepfake” videos have been ongoing for years, and there have already been proposals at the state and federal levels in the United States to regulate such content. But First Amendment advocates have pointed out that many of these proposals sweep too broadly and would prohibit a wide range of constitutionally protected speech, including computer-generated imagery in movies or obvious parody videos of public officials.
In the United States, falsehoods—including parody—are constitutionally protected speech except in certain narrowly defined circumstances, such as fraud or defamation. That is why the country’s legal regime allows for laws narrowly regulating those categories of speech. International law similarly protects expressions of an erroneous opinion or an incorrect interpretation of past events. Human rights advocates should be wary of any attempt to generate new legal regimes, especially criminal laws, limiting permissible expression in the name of combating the threats posed by generative AI.
Even if governments do not impose new restrictions on permissible content, there will still be questions about how to impose liability for AI-generated content that is unlawful under current legal regimes. In an attempt to address these questions, governments may seek to impose accountability on the use of generative AI tools by requiring identification of authorship online or some traceable provenance of AI-produced speech, videos, and other media.
But doing so could impinge on the right to speak anonymously, as protected by the First Amendment and international human rights law. We have already seen governments attempt to restrict the use of encrypted messaging tools in the name of addressing crime, even though such tools allow people to communicate privately and anonymously, securing human rights in the digital age.
Advocates should resist government attempts to erode the ability to use anonymity tools in the name of fighting back against a proliferation of AI-generated unlawful content. The right to speak anonymously remains fundamental and worth protecting, especially for marginalized communities and those living in repressive environments.
So where does that leave governments that want to address the potentially disruptive effects of generative AI? Regulation of AI is urgently necessary, and no amount of self-policing by industry is a replacement for enforceable, democratically imposed mandates on those who develop and deploy AI tools. Regulation of AI can and should continue to target the use of AI in particular contexts, including its use to make consequential decisions affecting people’s rights to liberty, due process, and freedom from discrimination in employment, housing, and education.
Governments can also work to ensure that AI tools are not concentrated in the hands of a few actors, and that the development and deployment of such tools satisfy certain standards, including data privacy regulations and worker protections. But those tasked with regulating AI must also ensure that broad restrictions on AI-generated content do not end up restricting critical freedom of expression protections.