Biosecurity Culture, Computer Security Culture

post by jefftk (jkaufman) · 2023-08-30T16:40:03.101Z · LW · GW · 11 comments

Contents

11 comments

While I've only worked in biosecurity for about a year and my computer security background consists of things I picked up while working on other aspects of software engineering, the cultures seem incredibly different. Some examples of good computer security culture that would be bad biosecurity culture:

This is not how computer security has always been, or how it is everywhere, and people in the field are often fiercely protective of these ideals against vendors that try to hide flaws or silence researchers. And overall my impression is that this culture has been tremendously positive in computer security.

Which means that if you come into the effective altruism corner of biosecurity with a computer security background and see all of these discussions of "information hazards", people discouraging trying to find vulnerabilities, and people staying quiet about dangerous things they've discovered it's going to feel very strange, and potentially rotten.

So here's a framing that might help see things from this biosecurity perspective. Imagine that the Morris worm never happened, nor Blaster, nor Samy. A few people independently discovered SQL injection but kept it to themselves. Computer security never developed as a field, even as more and more around us became automated. We have driverless cars, robosurgeons, and simple automated agents acting for us, all with the security of original Sendmail. And it's all been around long enough that the original authors have moved on and no one remembers how any of it works. Someone who put in some serious effort could cause immense distruction, but this doesn't happen because the people who have the expertise to cause havoc have better things to do. Introducing modern computer security culture into this hypothetical world would not go well!

Most of the cultural differences trace back to what happens once a vulnerability is known. With computers:

But with biology there is no vendor, a specific fix can take years, a fully general fix may not be possible, and mitigation could be incredibly expensive. The culture each field needs is downstream from these key differences.

Overall this is sad: we could move faster if we could all just talk about what we're most concerned about, plus cause prioritization would be simpler. I wish we were in a world where we could apply the norms from computer security! But different constraints lead to different solutions, and the level of caution I see in biorisk seems about right given these constraints.

(Note that when I talk about "good biosecurity culture" I'm describing a set of norms that I see as the right ones for the situation we're in, and that are common among effective altruists and other people with a similar view of the world. There's another set of norms within biology, however, that developed when the main threats were natural. Since there's no risk of nature using public knowledge to cause harm, this older approach is even more open than computer security culture, and in my opinion is a very poor fit for the environment we're in now.)

Comment via: facebook, mastodon

11 comments

Comments sorted by top scores.

comment by ChristianKl · 2023-09-02T14:36:16.770Z · LW(p) · GW(p)

The problem with biosecurity is that most people working in the field are aligned with universities that do dangerous research. As a result they are more like the computer security department of a big company than they are like independent hackers in the field of computer security.

It seems that the position of the biosafety EA people is that if someone would actually act like a computer security hacker, they would be completely shunned and not have the reputation to achieve anything in that field. 

There's a EA post that speaks about how a lot of research that's done is gain of function research and that anyone who speaks with biosafety people should not speak about gain of function and instead enhanced potential pandemic pathogens. That because most of the research in the field includes gain of function research.

This basically mean that everyone in the field knows that Fauci perjured himself when claiming that there's no gain-of-function research in Wuhan but everyone in the field is afraid to do so.

The NIH saying that the research in Wuhan somehow wasn't on enhanced potential pandemic pathogens because the coronavirus (and thus it was somehow justified to fund it while the moratorium was in place) on which they worked wasn't a human illness was pretty bullshit. 

It should be obvious to everyone that it's bullshit if you have a transparent conversation about it. A lot of people in power in the field have a lot to lose if there's an open debate and thus do everything not to have that debate. Unfortunately, it might be correct that a biosafety EA would burn to many bridges when speaking honestly and transparently.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-09-04T02:46:04.066Z · LW(p) · GW(p)

What if one day a biosecurity expert in Moscow or Beijing or New Delhi, etc., starts speaking honestly and transparently? Even about potentially deleterious topics?

It seems like that's an even more dangerous scenario since it would naturally inflame existing geopolitical tensions and also spur a sudden rush to release huge amounts of info in order to salvage credibility in the eyes of the public.

comment by faul_sname · 2023-08-31T00:02:19.243Z · LW(p) · GW(p)

One key difference I see is that tremendous amounts of fungible value is locked away behind (hopefully) secure computing infrastructure, so in a world with keep-quiet norms there would be a tremendous financial incentive to defect on those norms.

As far as I know, no corresponding financial incentive exists for biosecurity (unless you count stuff like "antibiotics are an exploit against the biology of bacteria that people will pay lots of money for").

comment by Herb Ingram · 2023-09-01T17:33:33.270Z · LW(p) · GW(p)

I think it makes a huge difference that most cybersecurity desasters only cost money (or cause damage to a company's reputation and loss of confidential information of customers) while a biosecurity desaster can kill a lot of people. This post seems to ignore this?

comment by jbash · 2023-09-03T03:36:47.718Z · LW(p) · GW(p)

Imagine that the Morris worm never happened, nor Blaster, nor Samy. A few people independently discovered SQL injection but kept it to themselves. [...]

That hypothetical world is almost impossible, because it's unstable. As soon as certain people noticed that they could get an advantage, or even a laugh, out of finding and exploiting bugs, they'd do it. They'd also start building on the art, and they'll even find ways to organize. And finding out that somebody had done it would inspire more people to do it.

You could probably have a world without the disclosure norm, but I don't see how you could have a world without the actual exploitation.

We have driverless cars, robosurgeons, and simple automated agents acting for us, all with the security of original Sendmail.

None of those things are exactly bulletproof as it is.

But having the whole world at the level you describe basically sounds like you've somehow managed to climb impossibly high up an incredibly rickety pile of junk, to the point where instead of getting bruised when you inevitably do fall, you're probably going to die.

Introducing the current norms into that would be painful, but not doing so would just let it get keeping worse, at least toward an asymptote.

and the level of caution I see in biorisk seems about right given these constraints.

If that's how you need to approach it, then shouldn't you shut down ALL biology research, and dismantle the infrastructure? Once you understand how something works, it's relatively easy to turn around and hack it, even if that's not how you originally got your understanding.

Of course there'd be defectors, but maybe only for relatively well understood and controlled purposes like military use, and the cost of entry could be pretty high. If you have generally available infrastructure, anybody can run amok.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2023-09-03T11:07:38.448Z · LW(p) · GW(p)

That hypothetical world is almost impossible

I don't think it's a world we could have ended up in, no. It's an example to get people thinking about how norms they currently view as really positive could be a very bad fit in a different situation.

We have driverless cars, robosurgeons, and simple automated agents acting for us, all with the security of original Sendmail.

None of those things are exactly bulletproof as it is.

I'd say if you wanted to exploit these in practice today, as a random bystander, automated agents are by far the easiest, via prompt injection. And we've responded by talking a lot about the issue so people don't over rely on them, not deploying them much yet, working hard to exploit them, and working hard to figure out how to make LLMs more robust against prompt injection. This is computer security norms working well: scrutinize technology heavily, starting as soon as it comes out before we have a heavy dependency on it (which wasn't an option with the emergence of biological life).

But having the whole world at the level you describe basically sounds like you've somehow managed to climb impossibly high up an incredibly rickety pile of junk, to the point where instead of getting bruised when you inevitably do fall, you're probably going to die.

Introducing the current norms into that would be painful, but not doing so would just let it get keeping worse, at least toward an asymptote.

Introducing the current computer security norms into biology without adjustment for the different circumstances means we, very likely, all die. I think you're assuming that those norms are the only option to improve the situation, though, which is why you'd take them over nothing? But instead there are other ways we can make good progress on biosecurity. I think Delay, Detect, Defend (disclosure: I work for Kevin) is a good intro.

If that's how you need to approach it, then shouldn't you shut down ALL biology research, and dismantle the infrastructure?

That sounds like the equivalent of shutting down all computing because you're concerned about AI safety? Shutting down some areas, however, is something I think ranges from "clearly right" to "clearly wrong" depending on the risk/reward of the area. Stop researching how to predict whether a pathogen would be pandemic-class? Stop researching pest resistant food crops?

If the only choices were shutting it all down and doing nothing then I would lean towards the former, but not only aren't those the choices, "shut it all down everywhere" would be (mostly rightly!) an incredibly unpopular approach we wouldn't be able to make progress on.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-09-03T17:01:52.199Z · LW(p) · GW(p)

Introducing the current computer security norms into biology without adjustment for the different circumstances means we, very likely, all die.

Only because of the issue of a mere catastrophe potentially leading us never to grow back our population. If you discount this effect, we're not even sure if it's possible at all for a biological infection to kill us all, and even if it is, I expect it to require way more implementation effort than people think.

I feel like this is either misinformation or very close to it.

https://www.lesswrong.com/posts/8NPFtzPhkeYZXRoh3/perpetually-declining-population [LW · GW]

Replies from: jkaufman, jkaufman
comment by jefftk (jkaufman) · 2024-12-16T13:57:51.263Z · LW(p) · GW(p)

Here is a now-public example of how a biological infection could kill us all: Biological Risk from the Mirror World [LW · GW].

comment by jefftk (jkaufman) · 2023-09-03T17:23:37.477Z · LW(p) · GW(p)

not even sure if it's possible at all for a biological infection to kill us all

Flagging that I think this is false, but probably can't get into why.

comment by Review Bot · 2024-02-18T22:47:48.633Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by gabrielrecc (pseudobison) · 2023-08-30T20:04:18.522Z · LW(p) · GW(p)

Cybersecurity seems in a pretty bad state globally - it's not completely obvious to me that a historical norm of "people who discover things like SQL injection are pretty tight-lipped about them and share them only with governments / critical infrastructure folks / other cybersecurity researchers" would have led to a worse situation than the one we're in cybersecuritywise...