As the use of AI permeates daily life in various forms, from diagnostic labs to dating apps, it brings a disturbing new frontier—systems that claim to detect human sexuality through algorithmic “classifiers.” This “AI gaydar” may be exciting for some developers in Western climes, but it poses grave risks for Queer people in regions like Sub-Saharan Africa, where the deployment—even the perception—of such technology can exacerbate existing vulnerabilities, enabling invasive surveillance, forced outings, and targeting by state and non-state actors.
The problem with using AI to detect sexuality
The quest to algorithmically detect sexuality has gained momentum in academic circles. In 2017, Stanford researchers claimed their AI could identify sexual orientation with 81% accuracy for men and 74% for women. Using the same neural network architectures the following year, a University of Pretoria study reported lower but still notable rates of 63% for men and 72% for women. Then, in 2023, Swiss researchers reported an 83% accuracy rate for identifying gay men using deep-learning models. These findings raise two chilling concerns: the claimed, and highly dubious, connection between sexuality and neural patterns and, more critically, the looming threat to Queer people living in homophobic states in an age of ubiquitous AI surveillance.
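Headline figures like these are also easy to misread once a classifier is turned on an entire population rather than a balanced research sample. The short calculation below is a purely illustrative sketch: the sensitivity and specificity of 0.81 and the 5% base rate are hypothetical assumptions, not values taken from any of the studies above. It simply applies Bayes' theorem to estimate the positive predictive value, i.e., the chance that someone the system flags actually belongs to the targeted group.

```latex
% Illustrative sketch only: sensitivity = specificity = 0.81 and a 5% base rate
% are hypothetical assumptions, not values reported by the cited studies.
\[
\mathrm{PPV}
  = \frac{P(\text{flag} \mid \text{gay})\, P(\text{gay})}
         {P(\text{flag} \mid \text{gay})\, P(\text{gay})
          + P(\text{flag} \mid \text{straight})\, P(\text{straight})}
  = \frac{0.81 \times 0.05}{0.81 \times 0.05 + 0.19 \times 0.95}
  \approx 0.18
\]
```

Under these assumptions, roughly four out of every five people flagged would be flagged in error, one reason the reported accuracy figures deserve deep skepticism even before considering the harms of deploying such a system at all.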
“AI gaydar” relies on extensive data mining to profile individuals. For instance, the Stanford study analyzed over 35,000 facial images from US dating websites and concluded that gay individuals often exhibit “gender-atypical” facial features, expressions, and grooming styles. The concept of “AI gaydar” has been dismissed as both dangerous and pseudoscientific, with its accuracy deemed highly questionable. Yet, the debate has largely ignored its potential impact on Queer lives in repressive regimes, particularly in sub-Saharan Africa. Digital privacy remains a contested issue in this region, especially for Queer people who decry the heightened and disproportionate risks of tech-enabled surveillance, doxxing, and targeted online harassment.
With the impact of AI systems being deterritorial (i.e., not confined to any single country or geographic location) and most software in Africa being imported (primarily from the West), it’s likely that such problematic deep-learning models will also reach places where governments or homophobic groups may exploit them.
Novelty above, nightmare below: “AI gaydar” as an example of our global digital contradiction
Across Africa, many Queer people live in constant fear of being outed and subjected to harsh laws, facing risks that range from arrest and imprisonment to the devastating reality of the death penalty. The internet and social media may provide some Queer individuals with a rare space for self-expression and visibility, but this has prompted many governments and leaders to intensify their repression and abuses.
For example, in Egypt, police use dating apps to entrap and arrest gay individuals. In Uganda, the president signed a law in 2023 that, in addition to prescribing the death penalty for “aggravated homosexuality,” punishes same-sex relationships with a life sentence. He described gay people as “very dangerous for humanity” and called for regulators across Africa to ban Queer expression online. The president of Burundi also said of gay people, “if we find these kinds of people… it is better to take them to a stadium and stone them.”
In a climate where anti-gay sentiments bordering on the genocidal are widely shared, Queer people in the region may find safety only in online spaces, including dating apps, where they can build friendships, chosen family, or romantic relationships. Some Queer people report that they now also use generative AI tools to access life-saving information about their mental health and wellness. The obvious downside is that such digital sanctuaries risk becoming traps in an era of algorithmic surveillance. The very data that Queer individuals produce (whether through social media interactions, dating apps, or engagement with seemingly innocuous platforms) may become a resource for developers building pseudoscientific tools or snake-oil experiments like “AI gaydar.”
So, without adequate, context-specific data privacy protection, the digital footprints of Queer people may be exploited not only by researchers hungry for the next “cool” AI experiment but also by repressive governments and homophobic actors, with far more dangerous consequences. And because AI systems are prone to bias and error, the potential harm extends beyond Queer individuals to anyone who might be falsely identified or targeted as “atypical,” widening the scope of algorithmic bias and discrimination.
Another pseudoscience in the age of AI
As more of these “AI gaydar” experiments come to light, Queer people may be forced into a harrowing dilemma: remain on dating websites and apps and become likely subjects of such AI products or targets of persecution, or withdraw from them and, worse, risk erasure, a profound violation of their fundamental right to expression and existence.
In an age when Queer identity is increasingly understood as a form of personal expression, with a far looser definition than a decade ago, the very premise of such experiments is outdated and unnecessary. It is one thing to build an AI system on datasets that include an inclusive and diverse representation of sexual orientations and gender identities; it is quite another to build one whose purpose is to classify people by them. The former is broadly desirable; the latter, if not well-founded, may perpetuate stereotypes and reduce complex human qualities to simplistic algorithmic categorizations that serve no constructive purpose.
An urgent need for “group privacy” in this age of AI
In this age of advanced data analytics and AI, group characteristics (of, say, a particular ethnic group, ancestry, sexual orientation, or even a patient group) may be identified and used in ways that affect the group. Therefore, privacy protection for marginalized groups must be prioritized, not just at the individual level but also for collectives, to prevent discrimination and ensure their ability to exercise their fundamental rights.
AI developers and regulators should complement the dominant atomistic view of privacy protection with a communitarian perspective on privacy and data protection. Group-level protections have already been recognized under the African human rights system, and the same approach can extend to the group privacy of vulnerable populations. For example, in several sub-Saharan African countries where in-person Queer gatherings are often unsafe, the internet, apps, and other technologies provide a rare lifeline. Over time, these communities have developed and come to rely on specific vocabulary, signals, and other forms of group language. In this context, protecting their group privacy is critical, because any breach or abuse could undo all of these efforts.
The unchecked development of AI systems capable of predicting sexual orientation reveals a glaring regulatory gap. This lack of oversight in several jurisdictions is what allows some developers to mine sensitive data, often collected without informed consent, to fuel projects like “AI gaydar.” Tech companies and researchers bear an international responsibility to ensure that human rights are respected throughout the training and experimentation lifecycles of their systems. Profit and innovation should never come at the expense of the privacy of individuals and communities and their right to exist free from surveillance and harm.