If there is one thing that social justice practitioners should care about, it is the wellbeing of the individuals for whom their work is designed. And for the most part, they do. But we work in big institutions that are often difficult ships to turn, and such cumbersome environments do not mesh well with needs and issues that are changing faster than ever. The proliferation of technologies and the availability of increasingly large and diverse data sets are changing the way we interact with people, where and how we get data (read: personal information), and the vulnerabilities involved in keeping that data safe. What principles do we need in order to be ethical in our present environment?
Certainly, as a human rights community we are already proving that new technology helps us “do better research”, but we are not using it enough to better our advocacy, to make our research outputs stronger, to track our theories of change and to prove our effectiveness. More specifically, we need to address the issues around agency (i.e., what is the role of the people we claim to represent?) and consent (i.e., whose permission do we have, whose data are we using, and what are we using it for?). These issues have always existed in our work, which has often been criticized for resting on power dichotomies: us/them, north/south, or extractive/participatory research methods.
In fact, we often end up forsaking hard-won ethical considerations for the sake of experimenting with new technologies and data. Sometimes, in wanting to do something innovative to change the way systems work, we end up causing harm. Or, at the very least, we create risks of harm that have not existed before, or have not existed at this scale before. For example: an insecure app developed to gather information about a sensitive subject, or a website meant to gather crowdsourced data that ended up revealing the location of communities facing numerous vulnerabilities. The reality is that when we innovate new ways of working, some risk and harm will be inevitable. What is important is that our sector learns from these experiences and thereby constantly improves our collective ethics. The Responsible Data Forum has helpfully published a number of “Reflection Stories”, which explore nine technology- and data-related projects where unforeseen harm materialized, and how the organizations addressed it. These types of resources are critical for fostering more careful consideration of these rapidly evolving issues.
How, then, can the human rights community use new technology and data not just to better our research methods, but to improve how we actually go about doing our work?
Recognizing the agency of the people that we work with means centering their experiences in our work. For so long, the social justice and development sectors have been criticized for failing to truly understand the needs they are addressing, or for appropriating the stories of the individuals for whom they claim to be campaigning. “Agency”, understood here as the recognition that every individual has the interest and ability to make the best decisions for their own life, has been the vehicle through which institutions have tried to learn from these criticisms and re-orient their work. But have we spent enough time thinking about what agency means in our evolving technological and data-centric contexts? When developing new projects that will employ innovative uses of technology, do we assess whether this will bring us closer to, or farther from, understanding the experiences of the people involved? And, most importantly, do we ever try to develop new apps, new data flows or new methods specifically designed to strengthen our institution’s agency practices?
In general, the human rights community also needs to address consent better. The consent discussion is incredibly broad in scope and has sparked some fascinating debates within the technology and feminist movements. We make strong arguments about consent when it comes to the distribution of revenge porn, and the privacy community is making great strides in pushing back against (monopolized) user agreements that in practice give us no choice. But we still do not think critically enough about taking a radical and empowering stance towards consent in our human rights work: in how we, as human rights workers, interact with and handle personal information.
The definition of informed consent varies across fields, though it can generally be understood as the process of getting prior permission from a subject before taking an action that will or could affect their life or wellbeing. Further, consent needs to have four main components: disclosure, voluntariness, comprehension and capacity. For these four elements to be present, there must be some human interaction. Yes, that interaction may happen over the Internet or through some sort of mobile technology, but the interaction does have to be there. If it isn't, then it is not consent.
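To make those four elements concrete in a tech context, here is a minimal sketch, in Python, of how a data-collection tool might record them and refuse to store data when any element is missing. The names and structure (ConsentRecord, store_submission) are hypothetical illustrations, not drawn from any existing library or from the projects discussed here.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ConsentRecord:
    """One subject's consent, captured at the point of human interaction."""
    subject_id: str
    disclosure: bool      # the purpose and risks of collection were explained
    voluntariness: bool   # consent was given freely, without coercion
    comprehension: bool   # the subject confirmed they understood the disclosure
    capacity: bool        # the subject is able to give consent (e.g., an adult)
    recorded_at: Optional[datetime] = None

    def is_valid(self) -> bool:
        """Consent holds only when all four elements are present."""
        return all((self.disclosure, self.voluntariness,
                    self.comprehension, self.capacity))


def store_submission(record: ConsentRecord, data: dict) -> None:
    """Refuse to persist collected data unless consent is complete."""
    if not record.is_valid():
        raise PermissionError(
            f"Incomplete consent for subject {record.subject_id}; data not stored.")
    record.recorded_at = datetime.now(timezone.utc)
    # ... write `data` and the consent record to secure storage here ...


# Example: a submission gathered over a mobile survey tool.
consent = ConsentRecord("subject-042", disclosure=True, voluntariness=True,
                        comprehension=True, capacity=True)
store_submission(consent, {"testimony": "..."})
```

The point of the sketch is the design choice, not the code: consent is recorded as a first-class object alongside the data, and storage fails closed when any of the four elements is absent.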
What does this mean for the work of social change organizations? First, it means that wherever possible we need to build consent into all of our projects, and in particular our tech projects. For example, if the tech is facilitating data collection where face-to-face interaction is not possible, are we building consent mechanisms into the tech that satisfy these four criteria? What about when we can’t get consent, or when the tech doesn’t allow for the kind of interaction necessary? What happens when we are accessing data sets that were collected by third parties, or for different purposes?
It seems that there is a general tendency to shoehorn some version of consent into these situations, perhaps only because consent has been our ethical framework for so long. Rather than forcing it, when there is no possibility of consent we should shift to a duty of care towards that data, because it represents people; we should never assume a free-for-all use of any form of data.
The use of, and experimentation with, new technologies and data hold much promise for creating social change. But we need to move beyond the immediately interesting or flashy. Let’s put the same energies into baking hard-won ethical considerations into each and every technology- and data-related project we do. Let’s explore new technologies and data to get better at institutionalizing agency and consent overall. In general, we need to be more thoughtful and responsive to ethical issues as they, and our working contexts, evolve.
***This piece is based on a speech given at the 2016 MERL Tech conference. The opinions expressed are the author’s own and not attributable to Amnesty International.