A few years ago, an official from an international organization asked me, “Beyond the good vibes of human rights, what is algorithmic transparency for?” I felt unsettled because I believe the connection between algorithmic transparency and human rights is sufficient to justify increased efforts in this area. Moreover, algorithmic transparency will not materialize if it is left to the “market” to decide what should be transparent, nor are non-binding ethical artificial intelligence (AI) frameworks enough to hold the state accountable for using computational systems to guide or automate decision-making processes. We need mandatory rules and new transparency instruments that ensure people are aware when public bodies use these technologies.
Algorithmic transparency has many definitions, but I understand it as a principle that requires individuals and organizations using computational systems to guide or automate decision-making processes to make information accessible regarding these systems, when and how they are used, and what implications arise for those affected by their deployment. The information disclosed may include details about the system’s design, development, acquisition, operation, usage, or evaluation. It may be provided upon request (e.g., in response to a freedom of information request) or disclosed proactively (e.g., by publishing online repositories of public algorithms or releasing model cards for a system).
The nexus of algorithmic transparency with human rights
Algorithmic transparency is closely associated with respecting, protecting, and promoting human rights. Article 19 of the Universal Declaration of Human Rights recognizes the right “to seek, receive and impart information and ideas through any media and regardless of frontiers.” Moreover, international covenants and national constitutional and statutory provisions in 114 countries establish the right to information and access mechanisms, particularly when such information is held by or on behalf of the state.
“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas...” (Article 19, Universal Declaration of Human Rights)
When governments use automated decision-making (ADM) systems in processes that involve the provision of public services, the degree to which people can access relevant information may affect other human rights. For instance, if a public body secretly implements an ADM system to decide who is eligible for government benefits and people are unjustly excluded by the tool, then several human rights would be at stake: access to public information, non-discrimination, due process, and the rights associated with the benefit itself.
Despite legislation on freedom of information and Open Government initiatives, it is common for governments to challenge requests for algorithmic transparency based on concerns about intellectual property, data protection, and cybersecurity. For example, in Colombia and Spain, civil society organizations that sought information on the use of algorithmic systems were denied access and had to turn to the judiciary to pursue their requests. These cases underscore the need for mandatory rules regarding the disclosure of information about the algorithmic systems used by governments, particularly when these systems may affect human rights.
The importance of algorithmic transparency for AI
The principle of algorithmic transparency is usually included in AI ethics frameworks. One study reviewed 200 AI ethics guidelines and reported that 165 included the principles of transparency, explainability, and auditability. In addition, algorithmic transparency is necessary for achieving other objectives that are important in the use of AI systems, such as ensuring the explainability of tools used for decision-making and the accountability of organizations that deploy them.
First, explainability in AI refers to the possibility of understanding how a system works and how it produces its outcomes. For example, if a government uses an algorithm to allocate subsidies, a person denied that subsidy would need to know that such a tool was used, as well as how it was used, in order to understand how to challenge the decision. This is similar to the duties that some data protection laws, such as the European Union’s and Kenya’s data protection regimes, have established regarding the rights of data subjects to know when their data is processed by automated systems, to challenge decisions taken with such systems, and not to be subject to decisions based solely on automated processing.
Second, accountability in AI refers to the duty of an organization that deploys an AI system to inform about and justify its use and its results. For instance, if a government employs a biased algorithm to identify potential subsidy misuse by recipients and to prioritize and trigger investigations, the system may generate false positives, leading the government to discriminate against thousands of individuals. This injustice occurred in the Dutch childcare benefits scandal, and it could have been prevented if the population had been aware of the existence and operation of this algorithmic tool.
We need mandatory rules and collaborative governance
To be clear, the need for algorithmic transparency is not absolute, and other rights can justify limiting it. For example, a government body that acquires proprietary AI tools from a tech company cannot publicly disclose algorithms protected by intellectual property rights. However, balancing algorithmic transparency against other rights through effective, efficient, and equitable transparency instruments requires that public bodies issue clear and mandatory rules governing what type of information about ADM systems should be made available and through which means.
We do not have to start from scratch: recommendations, guidelines, and standards for algorithmic transparency in the public sector have already been issued in Chile, the United Kingdom, and Australia. The Chilean guidelines, for example, recommend that public bodies “keep permanently available to the public, through their websites . . . updated information, at least monthly, on those ADMs that have an impact on the fundamental rights of individuals or on their access to services, social programmes, subsidies, funds and other benefits.” Moreover, with regard to each ADM system, the guidelines establish that the following items, among others, should be disclosed: the name of the system, the processes in which it is used, its purpose and how it works, its version and date, who holds the proprietary rights to the system, the channel for queries or complaints concerning the system and/or its output, and the types of data it uses, as sketched below.
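To make the shape of such a disclosure concrete, here is a minimal sketch of what one register entry could look like, expressed as a Python data structure. The class, its field names, and the example values are hypothetical illustrations that track the items enumerated in the Chilean guidelines; they are not an official schema.

```python
from dataclasses import dataclass

@dataclass
class ADMRegisterEntry:
    """One entry in a hypothetical public register of ADM systems,
    mirroring the disclosure items the Chilean guidelines enumerate."""
    system_name: str        # name of the system
    processes: list[str]    # processes in which the system is used
    purpose: str            # its purpose and how it works, in plain language
    version: str            # version identifier
    updated: str            # date of this version (ISO 8601)
    rights_holder: str      # who holds the proprietary rights to the system
    complaint_channel: str  # channel for queries or complaints about its output
    data_types: list[str]   # types of data the system uses

# A fictional entry, for illustration only.
entry = ADMRegisterEntry(
    system_name="Housing Subsidy Prioritizer",
    processes=["housing subsidy allocation"],
    purpose="Ranks applications against statutory eligibility criteria.",
    version="2.1",
    updated="2024-06-30",
    rights_holder="Ministry of Social Development (fictional)",
    complaint_channel="https://transparency.example.gov/complaints",
    data_types=["income records", "household composition"],
)
```

Publishing entries in a structured, machine-readable form like this, rather than as prose buried in reports, would also let journalists and civil society organizations aggregate and compare ADM systems across agencies.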
Effective algorithmic transparency governance requires coordinated action across society to ensure that the deployment of computational systems respects human rights and provides meaningful public accountability. States, tech companies, civil society organizations, universities, and the public have a role to play.
States should have clear rules for disclosing to the public how their algorithmic systems function, particularly when fundamental rights are at stake, and should implement transparency mechanisms to comply with those rules. They should also put consultation mechanisms in place that engage affected communities throughout the algorithmic lifecycle, from design to deployment and evaluation, so that systems are shaped by the people they are meant to serve.
Tech companies should document their AI models’ limitations, evaluation metrics, potential biases, and foreseeable risks, providing explanations in plain language that affected communities can understand and act upon. Finally, civil society organizations and universities should create algorithmic transparency resources, such as online repositories of public algorithms, develop independent assessment tools, and conduct external audits.
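As a rough illustration of what that documentation could look like, the sketch below renders a minimal, plain-language model card as a Python dictionary. The model name, field names, and values are all invented for illustration; they do not reproduce any company’s or standard’s actual template.

```python
# A minimal, hypothetical model card for a public-sector ADM system,
# written in plain language so affected communities can act on it.
model_card = {
    "model": "Benefit Review Prioritizer (fictional)",
    "intended_use": "Prioritize case files for human review; never an automatic denial.",
    "limitations": [
        "Trained only on historical case data; may lag recent policy changes.",
        "Not validated for applicants under 18.",
    ],
    "evaluation_metrics": {
        "false_positive_rate": "reported per demographic group, reviewed quarterly",
    },
    "potential_biases": [
        "Higher flag rates observed in some regions; mitigation under review.",
    ],
    "foreseeable_risks": [
        "False positives can delay legitimate benefit payments.",
    ],
    "queries_and_appeals": "Plain-language explanations via the agency help desk.",
}
```

A card like this complements, rather than replaces, the register entry sketched above: the register tells the public that a system exists and where to complain, while the card explains how the model behaves and where it can fail.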
Through collaborative governance of algorithmic transparency, we can harness the potential of ADM systems while keeping them accountable to the people they affect and protecting both democratic values and human rights.
This blog is part of OGR's ongoing Technology & Human Rights series.