Several years ago I joined Amnesty International as the very first technologist in a newly formed Technology & Human Rights team. Today, our team has evolved into a dedicated program called Amnesty Tech, nearly 20 people strong, almost half of whom are technologists.

Especially at the beginning, I would frequently be asked what a security researcher like me does at Amnesty International. Years later, I realized I never really explained that in detail, and I never took the opportunity to describe what we collectively do now. Despite having so far mostly operated out of the spotlight, I believe our work is remarkable and unique, so I figured I would use this newsletter to explain it.

At the moment, our program is primarily split between a team tackling emerging technologies (such as machine learning, facial recognition, and social media economics) and a team that focuses on the digital security of Human Rights Defenders (HRDs). I will speak to the latter, as that is where I primarily contribute.

This team consists of technologists currently based in Berlin, Tunis, Beirut, Dakar and Nairobi. Along with other researchers and advisors on our team, we try to tackle the growing threat of digital surveillance against HRDs through various means, primarily:

  1. Investigate, Expose and Disrupt Illegitimate Surveillance
  2. Build Networks and Mentor Human Rights Defenders
  3. Create Tools and Services to Support Individuals at Risk

Why Amnesty?

While a large human rights organization can be a heavy and slow machine to work within, doing this work at Amnesty comes with some precious benefits. Firstly, we operate free from any governmental influence (oddly, an uncommon privilege). Secondly, we get to work outside of the framings of “Digital Rights” or “Internet Freedom”, which I find by definition to be extremely insulating, and fundamentally Western. We work through the lens of “Human Rights”, in law and in principle, which is far more empowering and speaks to a much wider audience. Lastly, and most importantly, we can count on many colleagues from all over the world who are connected to and trusted by the communities we often find targeted with digital attacks. Such a diverse set of skills, backgrounds, and locations among my colleagues is a rare asset.

Now, let’s get into some more detail.

Investigate, Expose and Disrupt Illegitimate Surveillance

The Berlin office hosts our Security Lab, which I head. It’s a small team that conducts most of our technical investigations into digital surveillance against civil society. We work closely with colleagues from offices around the world to support them in cases of emergency, and together we discover and investigate threats affecting their respective regions. The majority of our technologists are talented trainers, community organizers, and researchers from the Global South.

Having technologists familiar with the language, the culture, and the relevant threats in their regions is instrumental for identifying global trends, as well as for collectively developing technology and security education material that is pertinent to the local HRDs at risk. Many of the tools and guides available today tend to be very Western-centric, and are not necessarily as applicable in other regions. As a truly global team, we learn from each other and invest our time and resources more effectively.

Sometimes we publish details on campaigns of targeted attacks we come across, especially when we believe we can share useful insights and perhaps raise awareness of particular attacks that might not be commonly understood. (I included at the bottom of this newsletter a list of some of the research we have published.) However, publishing is not our primary objective: we don’t want our research to be exclusively an opportunity for media attention, but to result in tangible benefits to the HRDs and NGOs we work with. Therefore, the knowledge we build on global digital threats against civil society feeds directly into our other activities and projects.

Build Networks and Mentor Human Rights Defenders

In some of the places we operate, networks of technologists are still lacking. Building a resilient civil society requires connecting people, creating networks, and having communities talk to each other. As technologists working for a large Human Rights organisation like Amnesty, we are uniquely well placed to facilitate these connections and foster their development. Some of my colleagues do incredible work on this front: my colleague Sadibou, for example, dedicates much of his time to building a network of techies in West Africa (https://www.amnesty.org/en/latest/research/2019/04/amnesty-tech-secure-squad/).

Security trainings have been a common practice in civil society for a long time. However, because of logistical constraints, trainings tend to be occasional, standardized, and rarely contextualized for local realities. We decided to take a new approach, and mentor rather than train. Some of our technologists nurture long-term mentorships with individuals and organisations in their respective regions, helping them learn not just how to use this or that tool, but how to think about and practice digital security. Our hope is to achieve a long-lasting impact, rather than an immediate excitement that quickly fades away.

Create Tools and Services to Support Individuals at Risk

Over my many years working in this space I have observed (and personally contributed to) an almost exclusive focus on research and publishing, and very little attention to actually building security. Although the media has started paying attention, the difficult state of civil society’s digital security capacity has barely improved.

I don’t intend to talk here about the technological ecosystem of this space, as it would require a much longer newsletter of its own (and if there is an interest, I’d be happy to write one - let me know!), but to boil it down to the core issue: Human Rights Defenders face threats similar to those faced by large corporations and governments, and at a much higher personal cost, yet they are only equipped with consumer-grade technology. This is a fundamental asymmetry that significantly disadvantages civil society, which tends to be relegated to the margins of conversations on cybersecurity.

Unfortunately, tackling this asymmetry is tricky, both because civil society is not a target market for the information security industry and because it lacks financial resources. Additionally, the particular traits of typical civil society groups don’t lend themselves well to conventional security modeling. There is no provisioning of hardware, software or services, so the norm is BYO* (Bring Your Own Everything, as I call it), leaving little room for centralized control and monitoring. Detecting attacks in these conditions is hard, and so far we have mostly managed through conversations rather than with technology.

As inadequate as this situation is, I believe it creates interesting opportunities for thinking about security differently, re-adapting concepts that might seem trivial in the corporate space but that could make a significant difference in civil society. We need to be creative and bold in re-inventing some wheels.

Here are some of the reports we have published so far: