Imagine calling your bank to resolve an issue, only for the customer service representative to suddenly accuse you of fraud and threaten to suspend your account unless you put the "real" you on the line. Why? Because their artificial intelligence (AI) voice recognition system concluded that your voice was not "male" enough to match their records. This is an experience shared by many transgender individuals, myself included, and it is just one of the significant risks that AI poses to the LGBTQI+ community.
AI, which refers to computer applications that demonstrate human intelligence capabilities, is rapidly infiltrating many aspects of our lives. However, research on the profound risks these technologies pose to marginalized communities, and on ways to mitigate those risks, is not keeping pace with the speed of their development. Although AI can be helpful in some areas, it can also facilitate human rights violations and aggravate discrimination against sexual and gender diverse people. Private businesses and governments must address these risks holistically, and as soon as possible, with measures grounded in human rights and community participation.
Decoding humanity: The limitations of AI’s approach to identity
There is a worrisome growth of systems that claim to identify LGBTQI+ individuals by analyzing their facial features, voice, social connections, group memberships, customer behavior, and even profile picture filters. However, such software cannot accurately determine people's sexual orientation or gender identity: these personal and deeply felt characteristics cannot be discerned from external factors alone, can change over time, and may not conform to the Western constructions and data sets used to train AI.
For example, because the data sets often conflate gender identity and sex characteristics, AI inevitably fails transgender people. The cost of mistakes in gender identification ranges from misgendering transgender individuals and banning their profiles on dating apps to suspending their bank accounts and subjecting them to invasive security checks at the airport.
Furthermore, automated gender classification systems can fail intersex people, who have sex characteristics that do not conform to societal expectations about male or female bodies. For example, AI tools trained on data sets from endosex people, such as menstruation tracking or self-diagnosis apps, AI-based prenatal testing and screening, and AI-powered targeted advertising promoting harmful medical interventions, may give intersex individuals and their parents inappropriate or biased information and contribute to ill-informed medical decisions with irreversible consequences.
In addition, the commercial data sets used to train AI reinforce unrealistic stereotypes of LGBTQI+ people as people who have a particular look, purchase certain products, can safely disclose their sexual orientation online, and want to spend time on social media. Improving the technology will not alleviate these problems, because people can have any appearance or behavior regardless of their sexual orientation or gender identity.
Even when identification succeeds, AI can still cause harm. For example, AI-targeted advertising or public health information can out a child to a homophobic family on a shared computer or single out vulnerable adults as targets for conversion practices.
Similarly, research suggests that AI algorithms struggle to distinguish between dangerous and ordinary speech in the LGBTQI+ context. Instances of harmful social media censorship include restricting transgender people's names, censoring drag queens' "mock impoliteness," removing benign content, banning profiles, and demon(et)izing videos. At the same time, AI algorithms might overlook genuinely dangerous content. These mistakes leave people with the traumatizing feeling that their identity has been erased, and they cause self-censorship and a chilling effect on LGBTQI+ people's self-expression, including digital activism.
AI-powered oppression of sexual and gender diversity
When it is trained on biased data or programmed in a discriminatory way, AI absorbs the prejudices that circulate in society and reproduces them independently. However, the dangers to the LGBTQI+ community are most significant when AI technologies are used with the intent to harm, for example to generate harmful content or to target the community more efficiently.
The advent of AI allows homophobic governments to escalate their tactics for monitoring and punishing LGBTQI+ individuals with unprecedented speed and sophistication. Sooner rather than later, prejudiced governments around the world will be able to apply AI to target LGBTQI+ individuals, activists, and allies for prosecution and smear campaigns by analyzing their online activity, connections and communities, mobile phone contacts, streaming history, hotel and rental bookings, taxi rides, and so on. The Russian government has already launched an AI-driven system aimed at identifying "illegal" content online to enforce the "gay propaganda" law.
Moreover, governmental authorities would be able to curtail the freedom of assembly by immediately detecting public announcements of community events and identifying protesters through facial recognition after those events. Finally, private actors, such as social media platforms and publishing houses, can use AI to censor LGBTQI+ content or discriminate against job or insurance applicants.
Protecting the LGBTQI+ community in the age of AI
Conventional legal tactics, such as prohibitions on discrimination and strategic litigation, might be ill-equipped to deal with these dangers, especially in countries that are not welcoming to the LGBTQI+ community. For example, proving intent for direct discrimination claims might be impossible due to the opacity of AI-based systems. Moreover, traditional human rights legal mechanisms rely on the responsibility of state actors, yet many of the harms described above are caused by machines, private actors, or authoritarian governments that lack the rule of law.
Therefore, a significant part of the responsibility for preventing the unethical use of AI should fall on private businesses. They should involve the LGBTQI+ community and its organizations in designing and evaluating AI systems, withdraw any technologies that attempt to identify gender identity or sexual orientation, and refuse to assist state-sponsored homophobia anywhere in the world.
The ubiquitous ethical guidelines on AI should be replaced with policies based on human rights, since human rights are an internationally recognized set of universal principles, are not constrained by a single school of thought in ethics, and can be better enforced through monitoring and accountability mechanisms. Such policies can be strengthened by authoritative interpretation of existing international human rights obligations, as well as by the adoption of national laws. One possible legal measure, suggested by politicians and civil society, is a ban on AI technologies that claim to identify sexual orientation and gender identity, whether a total ban or one limited to law enforcement purposes. Finally, different solutions may be necessary for unintentional versus intentional discriminatory uses of AI.