If your company offers a product or a service with a sufficiently large user base, chances are you also have users at risk. Journalists, dissidents, and human rights defenders are, to the technology market, regular consumers, despite their peculiar needs. Security firms, software vendors, social media companies, and other service providers interested in taking particular care of those users might face challenges. Paying attention to their needs, however, is already a great first step, even more so considering they most likely constitute a very small percentage of the whole. Kudos.

From the occasional conversations I have had with such companies, and from cases I have experienced over the years, I decided to offer here some initial thoughts on how to think about security for users at risk. However, this is a complex topic that should be unpacked according to the characteristics of your product. This is a starting point.

Product design makes a difference, negative or positive

I believe design to be an integral part of digital security, particularly in this context. While in the enterprise sector security professionals are tasked with protecting the company’s infrastructure and employees, individuals at risk are on their own. They carry a heavier burden on their shoulders, and much of their security education, whether self-taught or trained, relies on being on the lookout for anomalies in their daily use of products and services. Be it in the webmail or the document processor they use, users count on visual clues of unusual behavior. As a matter of fact, the majority of targeted attack campaigns we discover originate from the suspicions of a “patient zero”.

Users’ suspicion tends to be provoked by disparate factors. Often the most misguided originates from news coverage of a prominent hack: I have lost count of the number of people who contacted us alarmed by missed WhatsApp calls after the infamous NSO Group exploit from April 2019. The more savvy will instead be trained to look for a “green tick”, a “green padlock”, a “domain in the browser’s address bar”, an “Enable this content” button in Microsoft Office, and countless other clues. Users will also identify visual elements of their own and develop patterns: how Internet companies word their email notifications, for example, or how login pages typically look.

Some of these visual clues become obsolete (a great example is the “green padlock”, now that Cloudflare and Let’s Encrypt are popular among phishing sites), while others are occasionally changed by developers. Because of this, I am reluctant to recommend them in trainings, but I know they are relied upon. When you develop a platform, every little detail counts.

When you introduce a new action, try to make it as explicit as possible, and use it as an opportunity to reinforce safe behavior. An example: you need to send an email notification to the user, soliciting some action. Under the glossy HTML button, offer in text the explicit destination they can reach manually. Structure your site so that critical links you expect users to be directed to often are concise and memorable, such as “account.service.com/security” or “security.service.com”, not “https://account-management.customers-portal.service.com/internal/users/qwertyuiop/review.php?id=1234589&z=asdfghjkl&q=zxcvbnm#0987654321”. This way you offer an alternative to blindly clicking buttons, and you add a visual clue users can spontaneously identify.
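To make this concrete, here is a minimal sketch in Python of what such a notification could look like. The sender address and the “account.service.com/security” destination are the hypothetical examples from above, not a real service:

```python
from email.message import EmailMessage

# Hypothetical destination, reusing the example above.
SECURITY_URL = "https://account.service.com/security"

def build_security_notification(recipient: str) -> EmailMessage:
    """Build a notification that spells out the destination in plain text,
    so the user can type the short, memorable URL themselves instead of
    blindly clicking a button."""
    msg = EmailMessage()
    msg["From"] = "notifications@service.com"  # hypothetical sender
    msg["To"] = recipient
    msg["Subject"] = "Review recent activity on your account"

    # Plain-text part: state the explicit destination verbatim.
    msg.set_content(
        "We noticed unusual activity on your account.\n"
        f"To review it, type this address into your browser: {SECURITY_URL}\n"
    )

    # HTML part: the button links to the *same* short URL, which is
    # also printed in text underneath the button.
    msg.add_alternative(
        f"""\
<p>We noticed unusual activity on your account.</p>
<p><a href="{SECURITY_URL}">Review activity</a></p>
<p>Or type this address into your browser: {SECURITY_URL}</p>
""",
        subtype="html",
    )
    return msg
```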

Once you roll out a design change, however small it might be, you should commit to it. You might conclude, for example, that the “green tick” you added a while ago to authenticate an identity wasn’t that useful after all. However, its sudden removal might provoke unwarranted suspicion or, worse, falsely reassure a user at risk, to catastrophic effect. Be conscious of the potential negative effects of your design choices.

Occasionally I notice design changes quietly appear in the services I use. Sometimes new visual elements appear that might prove useful for enhancing users’ safety, but they seem to be taken for granted, and their meaning is never explained. Security education resources are invaluable, and most platform providers offer some, but you need to keep in mind that users do not spontaneously, let alone regularly, visit them. Try to embed as much education as possible directly within your product’s interface, and direct people to your long-form guides from there. Highlight and promote the positive effects of your design choices.

Prevention is not enough, information is key

Besides the obvious technical and structural differences, one fundamental aspect separates enterprise security products from consumer ones: the former tend to be designed to provide (or simplify collecting) as much contextual information as possible about an attack, the attacker, and their motivations, in order to help prioritize which incidents to respond to; the latter are usually designed to block or prevent attacks with as little involvement from the end user as possible.

This design choice quickly becomes inadequate for individuals at risk, such as journalists, dissidents, or activists, who are indeed consumers but who face threats comparable to those faced by enterprises. This asymmetry creates a tension which vendors should creatively address by re-adapting principles typical of corporate security to this new context.

Above all, I believe, information is key. Under the assumption that the broader consumer base faces threats that are untargeted and probably financially motivated, providing details of a common attack might be unnecessary. At-risk individuals instead deal with personalized attacks that come with deeper personal implications. Being alerted that a state-sponsored group is attempting to compromise their devices and online accounts could be an early warning of increased scrutiny and pressure. In my experience, in fact, cyber attacks tend to anticipate or complement other forms of repression.

With these considerations in mind, a malicious email being silently moved to the “Spam” folder, a malware infection getting quietly quarantined, or an account login being blocked might deprive the user of crucial opportunities to recalculate their personal risk.

Some of the big tech companies, such as Google, Facebook, Yahoo, and Microsoft, have started warning their end users of state-sponsored attacks they might have suffered. This is an important first step, which more companies should embrace. Yet there is always room for improvement: these warnings tend to be context-free and offer little advice. Instead, aim to share enough details on the type of attack detected, its timeframe, and appropriate mitigations and resources for support (more on this later). I understand that sharing the suspected origin of an attack isn’t easy, but I would suggest offering some clarifying elements when there is a good chance that the targets would, on their own, come to the wrong conclusions (for example, in cases where you have clear evidence suggesting the attacker is not domestic but a foreign actor).
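As an illustration of what “enough details” might mean in practice, here is a hypothetical sketch of the context such a warning could carry. The schema and the support URL are mine, for illustration only, not any vendor’s actual format:

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class TargetedAttackWarning:
    """Hypothetical structure of a warning shown to an at-risk user."""
    attack_type: str                # e.g. "credential phishing" or "malicious attachment"
    first_seen: str                 # ISO 8601 date the activity was first observed
    last_seen: str                  # ISO 8601 date of the most recent activity
    mitigations: list[str]          # concrete steps the user can take right away
    support_resources: list[str] = field(default_factory=list)
    origin_note: str | None = None  # optional clarification, offered only when the
                                    # evidence is clear (e.g. foreign, not domestic)

# Example instance, with illustrative values.
warning = TargetedAttackWarning(
    attack_type="credential phishing against your email account",
    first_seen="2021-02-03",
    last_seen="2021-02-17",
    mitigations=[
        "Change your password from a device you trust",
        "Enable two-factor authentication",
        "Review recent sign-in activity on your account security page",
    ],
    support_resources=["https://support.service.com/targeted-attacks"],  # hypothetical
)
```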

What to do when you identify targeted individuals at risk

Occasionally threat intelligence companies stumble upon campaigns of targeted attacks extending beyond the corporate space and into civil society. As it goes with such investigations, researchers at times discover the identities of individual targets, usually because the attackers committed some mistakes and left operational details exposed. Obtaining such deep visibility into the activities of state-sponsored threat actors can be precious. When those targeted are individuals or groups at risk, I believe researchers need to carefully consider how to act responsibly.

First things first: don’t withhold that information, as it could be critical to inform targets. Secondly: don’t just publish their identities directly, especially if they are individuals or small groups. At the very least, you need to obtain their consent. Such disclosures might force their adversaries’ hands and accelerate existing pressures. While you might feel you have a clear understanding of the campaign you discovered and the capabilities of the adversary, you most likely can’t say the same about the targets’ circumstances. You need to exercise more care than you normally would in the corporate space.

I have seen security companies address this issue by disclosing the targets’ identities to journalists (while securing coverage for their own research) and essentially dropping the ball. This might not be the right approach either. I must say, hoping my journalist friends will not resent me, that reporters are not best equipped to deal with at-risk individuals under these circumstances. They will most likely establish contact with those affected, inform them of the company’s discoveries, and seek to collect a testimony for their own articles. Nothing wrong with that. However, someone who is already on the lookout because of their exposure will be alarmed by the news and will need support. It might be technical support, but also legal, logistical, or psychological. For example: because of the risk we calculated with targeted individuals we identified during cases we worked on at Amnesty, we determined it necessary to provide legal counsel, or resources to install security cameras, or even to plan the evacuation of a person from their country. Cyber attacks do not happen in a vacuum for human rights defenders, and support needs to be holistic.

Therefore, I recommend reaching out to organizations known to provide support to the targeted community, such as Amnesty International, Front Line Defenders, or the Committee to Protect Journalists, explaining the circumstances and seeking their collaboration (and I underline “collaboration” here, because they’re not your clearing house either) in making sure the situation is addressed appropriately and any potential damage is mitigated. Some are better connected than others in certain communities or geographic regions; in that case, you will likely be pointed to more appropriate partners.