Even the most sanguine optimist about ChatGPT and other generative artificial intelligence (AI) systems acknowledges their potential for abuse and misuse. The capabilities of this new generation of AI raise a host of serious challenges, from the distortion of the information sphere to weaponization by trolls, hate groups, terrorists, and authoritarians. Even OpenAI, the developer of ChatGPT, concedes that “AI systems will have… great potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”
For all of the impressive advances in simulating human-level responses to prompts and tasks, AI developers have made much less progress in creating workable safeguards against system misuse. This new generation of AI systems is also more opaque than its predecessors.
We are left with an undesirable situation in which highly capable AI systems have proliferated around the world with little clarity about their risks, the efficacy of their safeguards, or what laws might govern those issues. In an environment driven by market forces and technological advances, human rights risk falling by the wayside.
However, this is far from a simple story of industry greed and government inaction. It is easy to take stock of the dangers posed by generative AI and demand that regulators stop sitting on their hands. It is much harder to come up with a legal framework that can offer a practical path forward. While pinpointing threats to human rights is an important part of this process, the complexity of generative AI systems demands some deep thinking about how best to embed protections for those rights into law.
As an example of this regulatory quagmire, consider the European Union’s proposed AI Act, widely regarded as the world’s most comprehensive AI regulatory framework. While the AI Act was explicitly intended to be future-proof, its very design has been called into question by the rise of ChatGPT and generative AI, sending EU lawmakers frantically back to the drawing board before the legislation has even been passed.
The central challenge is that generative AI systems like ChatGPT are incredibly flexible, able to perform an essentially unlimited range of functions. This malleability poses thorny issues for frameworks like the AI Act that apportion responsibilities based on what a particular AI system is designed to do. For instance, the legislation classifies AI systems used in sensitive areas such as education, border control, and law enforcement as high risk and subjects them to stricter requirements. AI systems designed for nonsensitive uses, on the other hand, need only comply with lighter obligations, such as basic transparency.
That risk-based framework works for specialized algorithms and other AI systems built for particular purposes, such as the eBrain system used to process migrant applications or the COMPAS risk assessment system used widely in the United States’ criminal justice system. However, it is far from obvious how to categorize a general-purpose system like ChatGPT that could be used in migration or justice settings, as well as many others.
A second major challenge is the “downstream developer” problem: systems like ChatGPT increasingly serve as a base platform that other AI developers customize further. Who should be responsible under the law if the customized AI is used in a manner that breaches human rights?
The original developer may have created and honed the AI system, but it will not be privy to how the system is used, adapted, or integrated by the downstream developer. The downstream developer, for its part, did not participate in the creation or training of the system and so may overestimate its capabilities or underestimate its limitations. This complicated value chain muddies the waters around accountability and makes it difficult to assign responsibilities.
Any legal framework aiming to corral generative AI systems, including frameworks grounded in the protection of human rights, will need to reckon with these technological nuances. We may need to rethink common refrains for addressing AI human rights issues in light of this new context. For instance, the Biden Administration’s proposed AI Bill of Rights would require that “automated systems provide explanations that are technically valid, meaningful and useful.”
Such a requirement may be valuable as a vision statement for what we would like in an ideal world, but it is not clear whether generative AI can even technologically comply with it. ChatGPT cannot detail exactly why it answered a question in a particular way, or how it created a piece of content. While you can “ask” the system why it gave a specific answer, the response will be what ChatGPT predicts a human might say rather than any genuine reflection on its own processes. This inability is a function of the system’s complex algorithmic processing rather than any particular design decision by OpenAI.
Despite the inherent difficulties of regulating generative AI systems, the alternative is worse. A wide array of human rights are at stake if generative AI continues to be developed and released in a regulatory vacuum. A laissez-faire approach to regulating advanced AI puts individual users at risk and subjects important issues like misinformation, misuse, and data privacy to whatever minimal safeguards are demanded by the market.
However, efforts to impose a human rights lattice over the growing AI industry will not work if the legal framework is unmoored from the technological context. Lawmakers, AI developers, and human rights groups alike will need to work toward a functional law designed with the capabilities of generative AI in mind.