New x-risk organization at Cambridge University

post by lukeprog · 2012-04-24T17:50:19.153Z

CSER at Cambridge University joins the others.

Good people involved so far, but the expected output depends hugely on who they pick to run the thing.

13 comments

comment by JoshuaZ · 2012-04-27T02:00:06.041Z

I'm a little worried to see that Nick Bostrom is involved in this group. Bostrom is very smart, and he's clearly one of the people thinking the most about existential risk, but there's a real danger that having the same few people involved across existential risk organizations will lead to problems like anchoring and availability bias. If one takes the Fermi paradox seriously and adopts a not-too-strong Copernican principle, one concludes that other species would be likely to come up with the notion of the Fermi paradox and that this didn't help them at all. That suggests that if there's any Great Filter in our future, it may be so non-obvious that species which encounter it generally do so without warning, even when they are actively looking for existential risks (and at a meta level, they might even have been aware of this very problem!). If that's the case, it is all the more important that we have creative people thinking about existential risk independently of each other if we are to have any hope of seeing such a threat before it arrives.

Replies from: DanArmak, Oscar_Cunningham, lukeprog
comment by DanArmak · 2012-05-01T22:44:12.005Z

If one takes the Fermi paradox seriously and adopts a not-too-strong Copernican principle, one concludes that other species would be likely to come up with the notion of the Fermi paradox and that this didn't help them at all.

How is this different from reasoning more generally? That is: one concludes that we will come up with roughly the same ideas as other species, and since we infer those ideas didn't help most or all of the other species, nothing we do is likely to help us either. Or in simpler words: we infer the Great Filter is really Great.

Different potentially spacefaring, expansionist lifeforms, arising from completely different evolutionary histories, will on average differ from one another a great deal. Those of them that use observation and rational deduction (a subset) will observe the Fermi paradox and predict a Great Filter just as we do, and on natural-selection grounds at least some would try to avoid it; yet we see none around who have succeeded. That's my reading of your argument.

But if we allow that they use observation and rational deduction to plan actions - that they are intelligent in a way comparable to ours - then it is also likely they are similar to us in the other consequences of such intelligence. Should we conclude that no product of a generalized capacity for intelligence is likely to save us from the Great Filter, and that we should instead try to use uniquely human advantages less likely to evolve twice, such as our social-political behaviors?

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-01T23:03:49.485Z

I'm not sure how to respond. Your comment is potentially the most enlightening and disturbing thing I've seen on LW for a while.

comment by Oscar_Cunningham · 2012-04-28T21:22:52.627Z

It's not clear to me why the Fermi paradox should be evidence of an unexpected Great Filter, as opposed to one that's just hard. Can you explain?

Replies from: JoshuaZ
comment by JoshuaZ · 2012-04-29T00:25:42.139Z

One of the easiest ways for a filter to be hard is for it to be unexpected. But yes, by itself the Fermi paradox is evidence of a hard filter generally, rather than of an unexpected one specifically.

comment by lukeprog · 2012-04-29T06:00:05.701Z

Nick Bostrom's involvement is one of my greatest sources of hope for the department. If someone like him isn't involved, I expect the department to do almost no genuinely useful work, because it will be another standard department whose output can be predicted by, as Eliezer puts it, the simple model of a dumb amoeba attracted toward status and funding and no other considerations.

Replies from: JoshuaZ, John_Maxwell_IV, XiXiDu
comment by JoshuaZ · 2012-04-29T15:34:15.469Z

There's no question that people are attracted by status and funding. But status and funding are really good incentives for getting people to actually do work. In math, for example, status is tied pretty closely to mathematical output, so going for status helps. Similarly, in biomedical engineering, funding is tied pretty closely to the production of functioning medical devices. Just because people have status and funding as goals, rather than pure productivity, doesn't mean they won't be very productive.

comment by John_Maxwell (John_Maxwell_IV) · 2014-01-01T05:29:13.082Z

It seems a little ironic that you cite the support of a very high-status person (who, being high status, likely has status-seeking tendencies to one degree or another) as evidence that an organization will not be corrupted by status-seeking. If you're a supporter of Bostrom's work, it seems worth noting that he has managed to become pretty high-status in the process of doing it.

Also, what evidence is there for Eliezer's "dumb amoeba" model of academia? My impression is that many people go into academia precisely because they want to do research that interests them for its own sake, rather than seek status and compensation in the for-profit sector. For example, this programmer writes about various paths to a "unicorn job", where you get paid to work on things you're curious about, and why he estimated that academia was the best path to such a job. Regardless of whether the "dumb amoeba" model is correct, academia seems to have produced a lot of valuable work in other domains, and I'm curious whether you think x-risk will be an unusual domain for some reason.

comment by XiXiDu · 2012-04-29T10:42:58.129Z

Nick Bostrom's involvement is one of my greatest sources of hope for the department. If someone like him isn't involved, I expect the department to do almost no genuinely useful work, because it will be another standard department whose output can be predicted by, as Eliezer puts it, the simple model of a dumb amoeba attracted toward status and funding and no other considerations.

Your comment can be perceived to state not only that the other people involved in the project are mainly interested in status, but also that they are selfish and incompetent. One reason for that impression is the connotation of the "dumb amoeba" you mention. Your remark that without Nick Bostrom you expect them to do almost no genuinely useful work further adds to the overall negative perception.

You also overlook the importance of status and public relations in raising awareness of existential risks and in arguing with policy makers.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-04-29T11:05:53.249Z

Luke's comment talks about "someone like [Bostrom]", not "Bostrom".

Replies from: XiXiDu
comment by XiXiDu · 2012-04-29T12:55:47.074Z

Luke's comment talks about "someone like [Bostrom]", not "Bostrom".

Right. Given a charitable interpretation, that makes the comment merely superfluous. Otherwise it still implies either that he deems the rest of the current staff to be not like Bostrom, or that most academics are not like him and instead only care for status while getting nothing useful done. This impression is reinforced by the fact that he could instead have simply said that he is happy to see people involved in the project who will probably do useful work.

Replies from: lukeprog, Vladimir_Nesov
comment by lukeprog · 2012-04-29T13:08:05.412Z

By "like Bostrom" I mean: consistently outputs work useful for making decisions affecting global risk.

Most plausible hires are, indeed, not like Bostrom in this respect. My statement does not imply, however, that most academics "care only for status." I only said that their output could be predicted by a simple model of an amoeba seeking status and funding. (One of the major results of the heuristics & biases tradition, and also neuroeconomics, is that we are not Homo Economicus, and thus we cannot infer desires cleanly from behavior or "output".)

comment by Vladimir_Nesov · 2012-04-29T13:01:03.903Z

that most academics are not like [Bostrom] and instead only care for status while getting nothing useful done

This seems to be the assumption (modulo the equivocation of "care").