Artificial intelligence (AI) applications like ChatGPT are normalizing a probabilistic turn in the Western worldview: this turn promotes embracing uncertainty through probabilistic thinking, and it elevates statistics and complex modeling as an approach to knowledge. With the deployment of ChatGPT, for example, language itself is being reframed as a system of mere probabilities: a lottery in which truth and fact are merely participants. This explains why ChatGPT systematically and confidently mixes truth with plausible fictions, hallucinations that we are expected to accept as an inevitable side effect, one that will become less uncomfortable once we fully embrace the probabilistic worldview.
This shift toward a probabilistic worldview is a slow undercurrent that has been gaining traction over the past decades and, as such, is both more pervasive and less tangible than much of the technologically minded framing that dominates the news cycle. It builds on the quantitative shift that has been underway in both government and academia since the 1970s. The probabilistic approach has been present in scoring systems (e.g., credit or insurance scoring) for decades and is increasingly shaping everyday life. In essence, these systems assess the unobserved behavior of an individual by projecting onto them the observed behavior of other people who share characteristics that the program and its designers deem relevant; on that basis, the individual is given the same or similar treatment.
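To make this mechanism concrete, here is a minimal Python sketch of cohort-based scoring; the records, field names, and cohort definition are invented for illustration and are not drawn from any actual scoring product:

```python
# A hypothetical cohort-based score: an individual's "risk" is simply
# the observed default rate of past applicants who happen to share
# the same (arbitrarily chosen) characteristics.
from collections import defaultdict

# Invented historical records: (characteristics, defaulted?)
history = [
    ({"zip": "10001", "age_band": "40-55"}, True),
    ({"zip": "10001", "age_band": "40-55"}, False),
    ({"zip": "10001", "age_band": "40-55"}, True),
    ({"zip": "90210", "age_band": "18-25"}, False),
]

def cohort_key(person):
    # The system designers decide which characteristics "matter".
    return (person["zip"], person["age_band"])

outcomes = defaultdict(list)
for person, defaulted in history:
    outcomes[cohort_key(person)].append(defaulted)

def score(person):
    # The applicant has never been observed; they inherit the group's
    # past behavior as if it were their own.
    group = outcomes[cohort_key(person)]
    return sum(group) / len(group)

applicant = {"zip": "10001", "age_band": "40-55"}
print(score(applicant))  # ~0.67: treated as likely to default
```

The applicant contributes nothing to the calculation except their membership in the group; the decisive design choice, hidden from them, is which characteristics define that group.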
Through the deployment of sensors in personal internet-connected devices and the increase in time spent within heavily surveilled virtual environments, the characteristics used to define the boundaries of these artificially constructed groups have become increasingly granular and abstract. For example, we could imagine predictive models creating grouping X98899T2, which brings together all the people who have at some point visited a website for a certain shampoo, been at an airport once in the past six months, are between the ages of 40 and 55, and own an iPhone, so long as this seemingly random collection of characteristics is statistically shown to be a good predictor of some other behavior, such as buying a perfume from brand X. Categories that have traditionally been central to ordering human life, such as age and sex, become less relevant as these abstract, unlabeled, and tailor-made segmentations, aimed at informing not human but computer decisions, take their place.
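Expressed as code, such a segment is little more than a machine-generated predicate over tracked attributes. The sketch below encodes the hypothetical X98899T2 grouping imagined above; every name and threshold is illustrative, not a real ad-tech schema:

```python
# Hypothetical encoding of the imagined segment X98899T2. No human
# names or reads this rule; it exists only because its conditions
# correlated with a target behavior (buying perfume from brand X).
def in_segment_x98899t2(user: dict) -> bool:
    return (
        "shampoo-brand-site" in user["visited_sites"]
        and user["airport_visits_6mo"] >= 1
        and 40 <= user["age"] <= 55
        and user["device"] == "iPhone"
    )

user = {
    "visited_sites": {"shampoo-brand-site", "news-site"},
    "airport_visits_6mo": 2,
    "age": 47,
    "device": "iPhone",
}
print(in_segment_x98899t2(user))  # True: shown brand X perfume ads
```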
A recent investigation by The Markup revealed a file with 650,000 discrete labels used by advertisers to categorize individuals. To put this number into context, that is more than two labels for each of the 270,000 words defined in the Oxford English Dictionary. As a result, something as consequential as whether we are shown an advertisement for a home or a job becomes subject to a complex function that balances hundreds of previously unavailable variables, often including seemingly irrelevant data.
Based on the combination of such previously unavailable data points, these complex models then place us into a cohort of people deemed similar enough that their behavior is taken to predict our own, often impacting us as if we had indeed acted as they did. This secretive sorting takes place within an ongoing process of privatization of knowledge in general and of AI technologies in particular.
This probabilistic turn shakes the certainty and causality that have been at the center of our Western worldview and that, perhaps since the Enlightenment, have promoted the adoption of the scientific method. As such, the probabilistic turn is destabilizing the pillars of our human rights system, in particular the ideas that human rights are inalienable and universal.
Human rights cannot be inalienable when our ability to exercise them is mediated by probabilistic machines. This is precisely what is happening, for example, when automated content moderation and takedown systems limit what we can express on social media apps. The normalization of such practices has led EU lawmakers to encode them into law, turning freedom of expression into a probabilistic right: one that, given the number of false positives, treats our ability to exercise our freedom of expression as subject to a lottery.
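A back-of-the-envelope sketch shows why even a small error rate turns expression into a lottery at platform scale; both figures below are assumed for illustration, not measured rates of any real system:

```python
# Illustrative arithmetic: an automated filter with a small
# false-positive rate still removes lawful speech at scale.
false_positive_rate = 0.01          # assumed, not a measured figure
lawful_posts_per_day = 100_000_000  # assumed platform-wide volume

wrongful_removals = false_positive_rate * lawful_posts_per_day
print(f"{wrongful_removals:,.0f} lawful posts removed per day")  # 1,000,000

# For any one speaker, each lawful post faces the same fixed odds.
posts_per_year = 200  # assumed posting frequency for one user
p_at_least_one_takedown = 1 - (1 - false_positive_rate) ** posts_per_year
print(f"{p_at_least_one_takedown:.0%} chance of at least one takedown")  # ~87%
```

Whether any given lawful post survives is, from the speaker's perspective, a draw from this distribution rather than a function of what was actually said.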
The pillar of universality, in turn, is threatened by the way in which AI shifts attention from the individual to the group, and from a focus on agency and responsibility for past actions toward a prospective focus on probabilistic harm reduction. The system of social organization through rights that was built with the individual as its core building block is in crisis because the relevance of the collection of characteristics placed under the label of the individual has come into question, including the idea of individual autonomy itself. For example, this probabilistic worldview will give judges technologically infused confidence in their ability to predict future behavior, which will shape their sentencing. This is not merely because of the prospective outlook, but because the idea of personal responsibility will be watered down by the normalization of a system that increasingly derives its authority from its ability to gather population-level insights. This is a shift away from grounding authority in the ability to study and understand the individual, which has characterized most modern judicial systems of the past centuries.
On the one hand, as noted by Katrina Geddes, this transformation might allow us to acknowledge the systemic nature of some of our social problems. It could help us displace moral judgment from the individual onto the overarching set of relationships that defines the social structure as a whole.
On the other hand, the complexity of the tools being deployed to operate at this epistemic scale defies human understanding and thus creates space for the consolidation of power in the hands of an ever smaller and unaccountable elite: an elite that reserves for itself the right to interpret the sayings of what increasingly seems to be marketed as new Silicon gods. The result is a system that forces us to participate in a constant lottery that will define what we see online, what jobs we are offered, what relationships become available to us, and whether or not we go to jail.
If there is space to reform the values underlying these technologies so that they serve the public interest, governments will need to reassert their democratic legitimacy as architects of social relationships and replace the market incentives fueling technological development with public interest goals. The incumbent companies present themselves as the only ones capable of saving the public from the dire risks posed by the complex systems that they have themselves created and that they claim are too complex to audit. Unless we take a stand, this process will reshape our understanding of identity and outsource our human rights system to a secretive lottery.