When a viral video of shocking violence crops up online—such as footage of security forces in Iraq firing “head-splitting tear gas grenades” at protestors—open source investigators comb through it with techniques like sophisticated social media research and geolocation to verify allegations. Frequently, the videos, photos, and other posts these investigators examine document traumatizing experiences from the poster’s lived reality. Moreover, it is overwhelmingly the investigators who own the analysis and narrative moving forward, not the people at the core of the footage or the incident under examination.
Both content-creators (e.g. witnesses, documenters, activists – excluding perpetrators who film) and content-analyzers (e.g. investigators, lawyers, archivists – often “outside advocates”) share a strong belief in the incredible potential of everyday users of everyday technology to document and seek accountability for terrible acts of abuse, unchecked power, and violence. An immense amount of highly probative information becomes readily available with a solid Wi-Fi connection and the knowledge of where and how to upload or find it.
However, user-generated content and its subsequent analysis as open source information or open source intelligence (OSINT) tend to be treated as two distinct sides of the same coin. On one side, “users” of recording devices such as smartphones create and share content for evidence, often depicting grave harms and violations of fundamental rights. On the other, investigators probe, archive, and report on this material. While there is occasional information sharing and network-building across the sides (or within organizations such as WITNESS and Mnemonic), these are generally conceptualized as distinct stages of working with the exact same documentation. In Digital Witness, Rahman and Ivens articulate the injustices of this divide: “high-profile investigators receiving accolades for their valuable investigative work, while the people who made that work possible remain unacknowledged, perhaps even unaware that the video or image that they recorded was even used.” This dynamic illustrates the fundamental flaw of open source investigative practice: it is fractured by a binary of our own making, which weakens, ignores, and even severs the possibility of solidarity across a fabricated content-creator/content-analyzer divide.
Open source tools make it technically possible, and arguably easier than ever before, to conduct a form of rigorous investigation without a clear methodological need to foster personal relationships between content-creators and content-analyzers—or, in other settings, between witnesses and fact-finders. Such displacing and disempowering practices must be actively resisted.
Open source investigative methodology should not be ignored or underutilized, but it comes with important caveats. Given severe budget and workload issues—and now COVID-related restrictions—it will be tempting for human rights organizations around the world to implement disconnected, digital methodologies. Instead, organizations should integrate these digital methodologies alongside deliberate, solidarity-based practices to avoid co-opting efforts for justice or otherwise minimizing those closest to the documented harms. As Minogue and Makumbe remind us for Yemen: “the use of OSINT does not have to mean driving accountability efforts out of the hands of Yemenis and into those of international investigators. Its analysis is far better carried out by, or involving, those with local knowledge.”
Practicing solidarity in open source investigation could be key to bridging the content-creator/content-analyzer divide. Solidarity in human rights involves deliberate, negotiated, genuine, transnational collaboration between advocates in the Global North and the Global South. It requires creating a common framework—or a “we”—among actors that redistributes and balances burdens and benefits “across vast divides of privilege, geography, language, culture, education, and more.” Global North actors perpetuate serious harms when their methodology fails to sincerely consider racism, power dynamics, colonial attitudes, and other global structures of oppression. Similar behaviors and harms emerge when content-analyzers fail to establish a “we” during so-called discovery and subsequent repurposing of documentation posted online by content-creators.
But what would meaningful solidarity look like in open source human rights investigation? To start, here are five commonly observed features of open source investigative practice that exacerbate divides and should therefore be rejected.
First, translation software cannot replace interpretative translation, and there are known risks to relying on these tools. Slang does not always register in translation software: “zenana” (زنانة), used by some Arabic speakers to mean “drone” (literally, a buzzing), translates to “dungeon” in English via Google Translate. There are also communication patterns and coded language to consider: “the language [of sexual violence, including sex trafficking] changes very, very, very quickly. One week, it may be called ‘get a key.’ The next week, it may be called ‘capturing coffee’” (citation from a forthcoming article by Alexa Koenig and Ulic Egan). Without human interpretation, an open source investigation will miss valuable footage. Outsider investigators should carefully consider what other tools might similarly lead to unintended consequences (a minimal illustration follows these five points).
Second, public, visual narratives are not representative of complex lived experience. Compelling or viral videos making the rounds across social media platforms often appear to be incredibly damning evidence, but these dominant narratives will never present an entire reality. The open source visual narrative alone—often made up of the most visible or visually compelling incidents such as airstrikes or perpetrator-filmed video of extrajudicial killings—could never encompass all aspects of the human experience of a given harm or violent environment. In particular, experiences that are not commonly filmed, such as sexual violence, are largely excluded from open source investigative efforts.
Third, language choice on public platforms matters. When analyzers working with OSINT use casual or even joking language in public comments about investigations on a social media platform, it displays disrespect toward the other users of that same platform whose content is being scraped.
Fourth, the award of credit and funding should reflect the important labor done at each stage of an open source investigation. In human rights work, there is a recognized and severe imbalance in funding. The content-creator/content-analyzer divide in open source investigations often reproduces this imbalance through a ready-made vertical model built on hidden labor. The result: content-analyzers receive almost all credit for outputs and outcomes, wield disproportionate decision-making power compared to content-creators not associated with their organizations, and receive the vast majority of funding.
Fifth, informed consent should be a baseline expectation, as it is with all human rights work. Content-analyzers should not assume the freedom to co-opt digitally-sourced witness statements and documentation—innocuously termed “user-generated content”—without first seeking the informed consent of the content-creator. To model meaningful solidarity with content-creators, however, content-analyzers must push further: to active collaboration and skills-sharing when analyzing documentation for a mutually established purpose.
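To make the first point above concrete, the sketch below is a minimal, hypothetical illustration of how an automated keyword-translation pass can lose local meaning, and how a glossary maintained with content-creators and local partners might take precedence over machine output. The glossary entries and the names LOCAL_GLOSSARY, machine_translate, and interpret are illustrative assumptions, not a real tool or investigation workflow.

```python
# A minimal, self-contained sketch: machine translation alone can miss coded
# or fast-changing slang, so a glossary maintained with local partners takes
# precedence. All terms and functions here are illustrative assumptions.

# Context-specific terms, maintained and updated with content-creators and
# local partners (hypothetical entries).
LOCAL_GLOSSARY = {
    "زنانة": "drone (colloquial; literally 'buzzing')",
}


def machine_translate(term: str) -> str:
    """Stand-in for a generic machine-translation call.

    A real service may return a literal or unrelated sense (e.g. 'dungeon'),
    losing the locally understood meaning.
    """
    literal_senses = {"زنانة": "dungeon"}
    return literal_senses.get(term, term)


def interpret(term: str) -> str:
    """Prefer the locally maintained glossary; fall back to machine output."""
    return LOCAL_GLOSSARY.get(term, machine_translate(term))


if __name__ == "__main__":
    term = "زنانة"
    print("machine only :", machine_translate(term))   # dungeon
    print("with glossary:", interpret(term))           # drone (colloquial; ...)
```

The design point is the precedence, not the code: terms flagged by people with local knowledge override automated output, and the glossary is expected to change as quickly as the coded language it tracks.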
These are only some of the existing tensions in common practice that would benefit from deliberate interventions of meaningful solidarity. Open source investigation is a relatively new methodology primed for a coordinated shift in protocol and, importantly, actual practice. Collaboration is and should be the cornerstone of open source investigation: it is widely accessible and strongly interdisciplinary, only made possible by contributions from activists, documenters, journalists, data scientists, software engineers, lawyers, and archivists (among others). It is a strong example of how digital tools in public spaces might “democratize the process of human rights fact-finding.” To pursue this goal, however, practitioners should mainstream solidarity with content-creators via a transparent and transformative deepening of this spirit of collaboration.