SecureDrop review
post by samuelshadrach (xpostah) · 2025-04-19T04:29:32.270Z · LW · GW
This is a link post for https://samuelshadrach.com/?file=raw/english/./my_research/securedrop_review.md
This is a living document. Crosspost below may not be updated. Click link for the latest version.
2025-04-19
tldr
- SecureDrop provides less source privacy than journalist privacy, even though it is the source, not the journalist, who is more likely to be punished for their actions.
- SecureDrop's opsec guide may be overkill for the median whistleblower but not good enough for the high-profile whistleblower, who faces as much risk as a spy operating on foreign soil.
- SecureDrop dev team in the US could consider working with dev teams in other countries, and citizen journalists in countries outside of US/Europe.
- SecureDrop dev team could consider whether they're ready to accept the political consequences of "anyone can violate anyone's consent and get all their info published", and make a clear statement to this end.
For diplomacy's sake, I'll offer a compliment before criticism.
Compliment
- SecureDrop has clearly pushed forward the security-versus-usability Pareto frontier for some subset of users.
- SecureDrop has been used in many of the highest-profile leaks in the time period 2015-2025, as confirmed by Guardian and others.
- It is rare for a small dev team to single-handedly shape human history in the way SecureDrop has.
- I'm only criticising them because I think they're working on something important. Otherwise I might not be paying as much attention to it as I am.
Technical
- SecureDrop does not offer the maximum security possible.
- User privacy
- Source privacy: No source airgap; source PGP encryption is optional; source use of Tails is optional; no redaction guide for the source.
- Destination privacy: Imperfect destination airgap (plaintext is visible in RAM before re-encryption with PGP).
- Both: No education for the source or destination on how to isolate from their current social circle and manage their psychology while doing so; no test run or practice period for the source.
- Example: I'm confident some of the sources have "how to whistleblow" in their Google search history from the same week they send the documents.
- Example: I'm confident at least some of these media orgs have journalists who tell their family and friends about their work, and someone in that circle will crack under police interrogation.
- Distributed software and development
- The codebase could be made even simpler (so multiple dev teams could maintain it). The dev team has arbitrarily decided to "encourage" some use cases and "discourage" others, such as crime victims and the political opinions of the mentally ill. It's not clear to me how they plan to encourage or discourage these. An app whose primary selling point is censorship resistance should ideally serve everyone, IMO.
- Multiple independent dev teams operating from different countries would ensure the dev teams can't pick political sides or be co-opted by the interests of any political side. Example: Bitcoin Core dev teams operate from multiple countries.
- Distributed hardware
- Only ~75 users onboarded after 10 years. I'm not sure why this is, or whether the dev team "discouraged" some interested users.
- Most of the servers are run by people in the same profession (journalism) and from the US/Europe, which means their decisions are correlated. Spreading the servers across more professions and more countries would provide more censorship resistance.
- User privacy
- Security-versus-usability
- All of the above makes their system more secure but less usable than, say, Signal + Tor + Tails, and less secure but more usable than PGP + airgap + Tor (curl request) + Whonix on both the source and destination sides.
- For whistleblowers who are not sufficiently high-profile, it would not surprise me if even Signal running on a mobile phone (no Tor, no Linux) is sufficient security.
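For concreteness, the higher-security end of that spectrum (PGP + airgap + Tor via curl) reduces to roughly the commands below. This is an illustrative sketch only: the key ID, filenames, and onion address are placeholders I made up, and a real setup would also involve Whonix or Tails, key verification, and metadata redaction.

```shell
# Step 1, on an airgapped machine (no network interface up):
# encrypt the documents to the journalist's public key.
# <journalist-key-id> is a placeholder for a verified key fingerprint.
gpg --encrypt --armor \
    --recipient <journalist-key-id> \
    --output leak.asc documents.tar

# Step 2, after moving leak.asc to a separate networked machine:
# upload through Tor's local SOCKS proxy rather than directly.
# <onion-address> is a placeholder, not a real endpoint.
curl --socks5-hostname 127.0.0.1:9050 \
    --upload-file leak.asc \
    http://<onion-address>.onion/upload
```

Note that `--socks5-hostname` (rather than `--socks5`) matters here: it makes curl resolve the hostname through Tor instead of leaking a DNS query locally.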
Political
- Diversify incentives and culture
- As a rule of thumb, you want to ensure there are multiple different actors writing the software, hosting the hardware and using the software. Additionally, this particular use case requires protecting the user privacy.
- It's important that not all the actors belong to the same incentives and culture.
- Do the actors belong to different countries and professions? Do some of them have a lot of capital or attention, or any formal position of power?
- (I've already mostly answered these questions above.)
- No trial by fire
- As a rule of thumb, if your system isn't being used to store >$100M in bitcoin or to trade drugs and CP on a daily basis, there is no empirical evidence that it provides the security required for the highest-profile leaks.
- Dread opsec guides are better than SecureDrop opsec recommendations IMO, because there's more trial-by-fire going on. Dark web drug vendors get arrested at a higher rate than journalists or whistleblowers.
- The correct reference class for the highest-profile whistleblowers is spies operating on foreign soil, which is a higher risk category than drug vendors. The ideal opsec guide for such whistleblowers should be even stricter than that of drug vendors.
- Journalists are often more protected than whistleblowers. In theory, a journalist can provide bad opsec recommendations to their sources, get a source arrested, fail to face any consequences themselves, and continue providing the same bad opsec recommendations to their next source.
- It's important to take opsec advice from the sources themselves, as they are the ones under more trial-by-fire. I think it's not great that most of the opsec suggested is for the journalists, not the source, when it's the source who is at higher risk of being imprisoned, tortured, or murdered for their actions.
- Power-law distribution
- The median whistleblower is a corporate whistleblower at a random Fortune 500 company and therefore isn't a priority for nation states to pursue. However, someone needs to always be prepared for the rare high-profile whistleblower who will be such a priority. This power-law distribution allows the people involved (sources, journalists, SecureDrop developers) to be lax about security in the median case and then fail badly on the tail case.
- SecureDrop team needs to clearly decide where on the spectrum of security-versus-usability they want to be. (Maybe they have decided and I'm unaware, I just want more clarity on what their decision is.)
- Too low, and they're not an improvement over Signal.
- Too high, and they're going to lose some of the journalists they have already onboarded; they might not be used for the lower-profile leaks, but they might help protect the anonymity of the next Chelsea Manning.
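The power-law point above can be illustrated with a toy simulation. This is a hypothetical sketch, not data about real whistleblowers: risk is drawn from a Pareto distribution with invented parameters, and the takeaway is that the median draw says almost nothing about the tail.

```python
import random

# Toy illustration of a power-law (Pareto) risk distribution.
# The parameters are invented for illustration, not estimated from
# any real data about whistleblowers.
random.seed(42)

# paretovariate(alpha) draws from a Pareto distribution with minimum 1;
# a small alpha gives a heavy tail.
risks = sorted(random.paretovariate(1.2) for _ in range(10_000))

median_risk = risks[len(risks) // 2]
max_risk = risks[-1]

print(f"median risk: {median_risk:.1f}")
print(f"max risk: {max_risk:.1f}")
print(f"tail/median ratio: {max_risk / median_risk:.0f}x")
```

With a tail this heavy, security calibrated to the median case can be orders of magnitude too weak for the worst case, which is exactly the failure mode described above.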
- Outside view, 2025 versus 2010
- Empirically it's not obvious to me that ~75 media orgs running SecureDrop has led to a more transparent world in 2025 compared to 2010, when only WikiLeaks existed. It seems worth investigating whether this is true and, if so, what the reasons are.
- Why has SecureDrop gotten only ~75 users in 10 years, and were there other interested users who were "discouraged" by the SecureDrop dev team because they did not have professional reputations as journalists?
- Is it simply that none of these ~75 users have the courage to go against their local incentives and post the highest-profile material in the way Assange did?
- Many media orgs write a sensationalised story about the leaked documents with a clear political leaning, and avoid actually publishing the documents so the reader can form an independent opinion. If you browse both websites, there are obvious differences between a WikiLeaks post and a New York Times post, even when they're both about the same document.
- Negative consequences
- In general I get the vibe that some of the devs in this broader space (Tor, Tails, Signal, etc.) haven't fully accepted the political consequences of their own work, including the potential negative ones.
- It won't just be "good guys" violating consent of "bad guys" to publish their information in public, it will be anyone violating consent of anyone else (using any ideology that appeals to some greater good) and publishing the information in public.
- SecureDrop dev team could consider making a clear statement about whether their app is solely for those they consider good guys, or for everyone.