The transformative potential of artificial intelligence (AI) has been celebrated across industries and sectors. From healthcare to government operations, industry actors have hailed AI as the key to unlocking unprecedented efficiencies, predicting patterns, and solving some of humanity’s most pressing challenges. Peacebuilders and human rights defenders are likewise planning to apply, or already applying, AI to strengthen partnerships and projects across the world, addressing a range of issues that affect vulnerable communities and countries in conflict.
However, behind the enthusiasm for the technology’s potential lies a more concerning narrative—one in which the rapid development and deployment of AI technologies contribute to environmental degradation, exacerbate existing global inequalities, and pose new risks to peace and security. AI presents a dilemma for peacebuilders and human rights defenders: when leveraging the technology to create positive change, they must also address the adverse impacts of AI that could undermine human rights, equality, and peace.
The environmental and human costs of AI development and use
There are environmental costs to the development and use of AI that are not readily apparent to users of the technology. AI’s intense energy and water demands place communities in competition with technology companies for resources and pose challenges to global efforts to combat the effects of climate change.
The physical infrastructure and components behind AI—data centers, advanced chips, and servers—directly impact countries throughout the world. Many of the raw materials required for AI infrastructure, including cobalt, copper, and lithium, are sourced from the Global South, often in countries facing ongoing conflict or near Indigenous lands, with damaging consequences for the local environment. Extracting these resources without regard for local communities fuels violence, human rights abuses, and environmental degradation. In the Democratic Republic of Congo, for instance, which is rich in cobalt and copper, some of the armed groups active in the country’s ongoing conflicts finance themselves by managing access to those natural resources. Workers in these regions face exploitative labor conditions, deepening the inequality between the areas that reap the benefits of AI and those that supply its raw materials.
Further, AI systems are powered by massive amounts of data, much of it harvested from the digital activity of individuals who have no knowledge of or control over how their data is used. Countries in the Global South face exploitative data extraction by technology companies headquartered in the United States: personal data from these regions is collected and used to train AI systems, effectively turning individuals into unwitting and uncompensated contributors to AI progress.
For AI systems to process the vast amounts of data fed into them, workers must first label that data. Data labeling depends on low-wage labor, with individuals spending hours viewing content of many kinds (e.g., images and text). This work carries psychological and emotional harms similar to those caused by social media content moderation: workers in the Global South are often tasked with reviewing and labeling disturbing videos, text, and images so that AI systems can distinguish objectionable content from material fit for users to see.
A sustainable development approach to AI
Peacebuilders and human rights defenders cannot ignore the significant environmental and human costs of AI as they seek to leverage this technology in their work. To reconcile AI’s adverse impacts on countries across the world with the technology’s potential to solve complex problems, they need new approaches to tackling these threats to global peace and security.
A sustainable development framework rooted in the principles of equity, human rights, and environmental responsibility offers a pathway toward reducing AI’s harmful impacts. Peacebuilders and human rights defenders can and should play a prominent role in how AI is designed, developed, deployed, and used. Their work in countries across the world positions them to understand the toll that advances in AI and other technologies take, especially in places susceptible to conflict. The voices of those most affected by AI development—particularly communities in the Global South—must be included in shaping these frameworks. Local and Indigenous knowledge should inform ethical guidelines and regulatory standards.
The conversations and practices around sustainable AI are still in the early stages of formation, as Elisa Orrù has argued. This gives peacebuilders and human rights defenders an opportunity to actively shape both. A key approach is to engage in interdisciplinary coordination and partnerships that incorporate the communities most affected by AI’s environmental harms. Human rights defenders and peacebuilders should leverage their expertise in bringing diverse stakeholders together, fostering open dialogue, and mediating differing opinions to chart a path forward grounded in equity, inclusion, transparency, and accountability.
AI companies must be held accountable for the human and environmental impacts of their technology and supply chains, from mineral extraction to the exploitative labor conditions of data labeling. Corporations that profit from AI should be required to adopt sustainable and ethical practices that prioritize the well-being of workers and the protection of the environment. Peacebuilders and human rights defenders should be actively involved in building worker power through unions and other local forms of organizing that strengthen workers’ positions vis-à-vis corporations.
A shift in how the global community measures progress in AI development is needed. Instead of focusing solely on technological advancement and maximizing profits, success should be defined as reducing the social and environmental impacts of AI while harnessing it for the public good. Metrics of success should include reductions in inequality and exclusion, improvements in working conditions, progress toward the Sustainable Development Goals (SDGs), and contributions to advancing peace and security.
AI holds enormous potential to address global challenges, but its current trajectory threatens to deepen inequality, fuel conflicts, and harm the most vulnerable communities. Peacebuilders and human rights defenders need to be at the forefront—pushing a sustainable development approach to AI to ensure that the global community can harness its power for the benefit of all while minimizing its adverse effects on communities and the environment.