I open the ChatGPT window and ask it to write a 1,000-word essay for Open Global Rights explaining three opportunities and three challenges that its artificial intelligence poses for human rights. It blinks for a moment, as if hesitating, but soon begins to respond with a resounding sentence: “ChatGPT has already made a significant impact on the world of human rights.”
I am struck by how confident this young creature is and by how it refers to itself in the third person, in the manner of some politicians. But I must acknowledge how accurate the response is. If it were a student’s answer on an exam, I would surely give it a good grade.
Perhaps that ambiguity, a mixture of fascination and strangeness, sprinkled with varying doses of suspicion and dread depending on the topic and the day, is the tone that dominates the emerging debate about the human rights implications of language models like ChatGPT. This is evident in the essays that inaugurate Open Global Rights’ series on the subject: generative artificial intelligence can amplify misinformation and lies online, but it can also be a formidable tool for the legal exercise of freedom of expression; it can protect or undermine the rights of migrants and refugees, depending on whether it is used to surveil them or to detect patterns of abuse against them; and it can be useful for traditionally marginalized groups, but it may also increase the risks of discrimination against the LGBTI+ community, whose fluid identities often do not fit into the algorithmic boxes of AIs.
I sense a similar ambiguity in ChatGPT’s response, which is to be expected, considering it is a language model that synthesizes (some would say steals) ideas shared by humans on the web. With discipline, it highlights three contributions of its technology to human rights: greater access to information, better translations between languages, and more data analysis and prediction of trends on relevant issues, advances that facilitate the investigation and reporting of rights violations around the world. And it closes with three risks: that it may reproduce the biases of the information it devours, that it may violate privacy through its use of personal data, and that it may lend itself to abuse because “ChatGPT can be difficult to understand and analyze, making it challenging to hold those responsible for its use accountable.” Note that it refers to the responsibility of users, not of the corporations that created the technology, which remain reluctant to reveal what they know and don’t know about how it works. Perhaps they have programmed the child not to disown its parents.
While these effects are important, I think analysts, both human and virtual, tend to register the earthquake but lose sight of the tectonic plates shifting beneath the surface. In reality, generative AI not only sharpens known impacts; it also calls into question the basic categories that give human rights their meaning.
One need only scrutinize ChatGPT's response to bring some of those questions to the surface. “ChatGPT has the potential to provide access to information to people who might otherwise not have it,” it tells me as if speaking of someone else. “By providing accurate and timely information, ChatGPT can help people make informed decisions and take action to protect their rights.”
What my new virtual assistant doesn’t mention is that its superpowers serve information and misinformation alike. The immediate risk of unregulated AI is that the same actors and companies that manufacture or disseminate the lies that have shaken democracies and human rights will now do so on an infinitely larger scale, further blurring the line between true and false. If the public sphere ends up flooded with texts as impeccable as they are fallacious, or images and videos as credible as they are mendacious, the time-honored human rights tactic of speaking truth to power will be drowned out as well. Models like ChatGPT are not mere stochastic parrots but potential hackers of the human linguistic codes in which we have conceived rights and all our norms and beliefs.
Which brings us to the other blind spot in the debate: the transformation of the human in human rights. The emphasis of the ChatGPT discussion has been on the what, on the rights affected by the new technology. But the more complex and fascinating question relates to the who: the dividing line between humans and non-humans as subjects of rights.
This conversation is already well established in other fields and currents, from the philosophy of mind to information theory, cybernetics, communications, and transhumanism. One of its most lucid analysts, Meghan O’Gieblyn, summarizes the paradox in which we find ourselves: “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.” I doubt ChatGPT can see as deeply as O’Gieblyn, at least for now.
Hence, the anthropocentrism that inspires modern rights, with their exclusive recognition of humans, is being challenged from very different quarters. While some are beginning to propose that AIs be granted certain rights, others (myself included) have suggested widening the circle of rights to include the natural intelligences of animals, plants, fungi, and ecosystems.
Unless we address these questions, human rights may not have much to say to a world confused and transformed not only by AI but also by the climate emergency, geopolitical shifts, and backsliding democracies. ChatGPT, in contrast, will surely have a lot to say.