Selection Effects in estimates of Global Catastrophic Risk

post by bentarm · 2011-11-04T09:14:43.364Z · LW · GW · Legacy · 62 comments

Here's a poser that occurred to us over the summer, and one that we couldn't really come up with any satisfactory solution to. The people who work at the Singularity Institute have a high estimate of the probability that an Unfriendly AI will destroy the world. People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year). 

It seems like there are good reasons to take these numbers seriously, because Eliezer is probably the world expert on AI risk, and Hellman is probably the world expert on nuclear risk. However, there's a problem - Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war. Similarly, Hellman chose to study nuclear risks and not AI risk because he had a higher than average estimate of the threat of nuclear war. 

It seems like it might be a good idea to know what the probability of each of these risks is. Is there a sensible way for these people to correct for the fact that the people studying these risks are those who have high estimates of them in the first place?

62 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2011-11-06T23:46:27.470Z · LW(p) · GW(p)

However, there's a problem - Eliezer is an expert on AI risk because he believes that AI risk is a bigger risk than nuclear war.

This isn't right. Eliezer got into the AI field because he wanted to make a Singularity happen sooner, and only later determined that AI risk is high. Even if Eliezer thought that nuclear war is a bigger risk than AI, he would still be in AI, because he would be thinking that creating a Singularity ASAP is the best way to prevent nuclear war.

Is there a sensible way for these people to correct for the fact that the people studying these risks are those that have high estimate of them in the first place?

I suggest that if you have the ability to evaluate the arguments on an object level, then do that, otherwise try to estimate P(E|H1) and P(E|H2) where E is the evidence you see and H1 is the "low risk" hypothesis (i.e., AI risk is actually low), H2 is the "high risk" hypothesis, and apply Bayes' rule.

Here's a simple argument for high AI risk. "AI is safe" implies that either superintelligence can't be created by humans, or any superintelligence we do create will somehow converge to a "correct" or "human-friendly" morality. Either of these may turn out to be true, but it's hard to see how anyone could (justifiably) have high confidence in either of them at this point in our state of knowledge.

As for P(E|H1) and P(E|H2), I think it's likely that even if AI risk is actually low, there would be someone in the world trying to make a living out of "crying wolf" about AI risk, so that alone (i.e., an apparent expert warning about AI risk) doesn't increase the posterior probability of H2 much. But what would be the likelihood of that person also creating a rationalist community and trying to "raise the sanity waterline"?
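
(A minimal numerical sketch of the suggested update; every prior and likelihood below is a made-up placeholder, not anyone's actual estimate.)

    # Hedged illustration of the Bayes-rule update described above.
    # All numbers are hypothetical placeholders, not anyone's real estimates.

    prior_h1 = 0.5       # prior that AI risk is actually low (H1)
    prior_h2 = 0.5       # prior that AI risk is actually high (H2)

    # E = an apparent expert warns loudly about AI risk *and* also builds a
    # rationalist community trying to "raise the sanity waterline".
    p_e_given_h1 = 0.10  # someone might "cry wolf" even if the risk is low
    p_e_given_h2 = 0.40  # assumed more likely if the risk really is high

    posterior_h2 = (p_e_given_h2 * prior_h2) / (
        p_e_given_h1 * prior_h1 + p_e_given_h2 * prior_h2
    )
    print(f"P(H2 | E) = {posterior_h2:.2f}")  # 0.80 with these placeholders

With these placeholder numbers the posterior on the "high risk" hypothesis comes out at 0.8; the point is only the mechanics of the update, not that particular output.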

comment by timtyler · 2011-11-04T12:14:42.330Z · LW(p) · GW(p)

I think people should discount risk estimates fairly heavily when an organisation is based around doom mongering. For instance, The Singularity Institute, The Future of Humanity Institute and the Bulletin of the Atomic Scientists all seem pretty heavily oriented around doom. Such organisations initially attract those with high risk estimates, and they then actively try to "sell" their estimates to others.

Obtaining less biased estimates seems rather challenging. The end of the world would obviously be an unprecedented event.

The usual way of eliciting probabilities is with bets. However, with an apocalypse this doesn't work too well: attempts to use bets have some serious problems.

Replies from: Wei_Dai, lessdazed
comment by Wei Dai (Wei_Dai) · 2011-11-06T23:53:31.507Z · LW(p) · GW(p)

I think people should discount risk estimates fairly heavily when an organisation is based around doom mongering.

That's why I refuse to join SIAI or FHI. If I did, I'd have to discount my own risk estimates, and I value my opinions too much for that. :)

comment by lessdazed · 2011-11-04T13:23:46.198Z · LW(p) · GW(p)

One should read material written by the people in the organization from before it was formed, and grant it extra credence in proportion to how strongly one suspects the organization has written its bottom line first.

Replies from: timtyler
comment by timtyler · 2011-11-04T14:50:40.453Z · LW(p) · GW(p)

Note, however, that this systematically fails to account for the selection bias whereby doom-mongering organisations arise from groups of individuals with high risk estimates.

In the case of Yudkowsky, he started out all "yay, Singularity" - and was actively working on accelerating it:

Since then, Yudkowsky has become not just someone who predicts the Singularity, but a committed activist trying to speed its arrival. "My first allegiance is to the Singularity, not humanity," he writes in one essay. "I don't know what the Singularity will do with us. I don't know whether Singularities upgrade mortal races, or disassemble us for spare atoms.... If it comes down to Us or Them, I'm with Them."

This was written before he hit on the current doom-mongering scheme. According to your proposal, it appears that we should be assigning such writings extra credence - since they reflect the state of play before the financial motives crept in.

Replies from: lessdazed
comment by lessdazed · 2011-11-04T15:52:09.502Z · LW(p) · GW(p)

Yes, those writings were also free from financial motivation and less subject to the author's feeling the need to justify them than currently produced ones. However, notice that other writings, also from before there was any financial motivation, militate against them rather strongly.

An analogy: if someone wants a pet and begins by thinking that they would be happier with a cat than a dog, and writes why, and then thinks about it more and decides that no, they'd be happier with a dog, and writes why, and then gets a dog, and writes why that was the best decision at the time with the evidence available, and in fact getting a dog was actually the best choice, the first two sets of writings are much more free from this bias than the last set. The last set is valuable because it was written with the most information available and after the most thought. The second set is more valuable than the first set in this way. The first set is in no similar way more valuable than the second set.

As an aside, that article is awful. Most glaringly, he said:

To Asimov, only three laws were necessary

comment by spuckblase · 2011-11-04T14:14:02.666Z · LW(p) · GW(p)

I don't see a special problem...evaluate the arguments, try to correct for biases. Business as usual. Or do you suspect there is a new type of bias at work here?

comment by JoshuaZ · 2011-11-04T21:00:33.450Z · LW(p) · GW(p)

One way of testing this is to see whether people are willing to discuss existential risk threats that cannot be solved by giving them money. Such comments do exist (see for example Stephen Hawking's comments about the danger of aliens). It is, however, interesting to note that he's made similar remarks about the threat of AI. (See e.g. here). I'm not sure whether such evaluations are relevant.

Also, I don't think it follows that people like Yudkowsky and Hellman necessarily decide to study the existential risks they do because they have a higher than average estimate for the threats in question. They may just have internalized the threats more. Most humans simply don't internalize existential risks in a way that alters their actions, even if they are willing to acknowledge high probabilities of problems.

Replies from: timtyler
comment by timtyler · 2011-11-07T02:01:12.327Z · LW(p) · GW(p)

An attitude of "faster" might help a little to deal with the threat from aliens.

Our actions can probably affect the issue - at least a little - so money might help.

Hawking's comments are pretty transparently more about publicity than fundraising, though.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-07T02:10:35.071Z · LW(p) · GW(p)

I'd prefer humanity choose to cooperate with aliens if we are in the stronger position. But I agree that we shouldn't expect them to do the same, and that this does argue for generic importance of developing technology faster. (On the other hand, intelligent life seems to be really rare, so trying to outrace others might be a bad idea if there isn't much else, or if the reason there's so little is because of some future filtration event.)

comment by Logos01 · 2011-11-04T15:30:33.975Z · LW(p) · GW(p)

People who work for http://nuclearrisk.org/ have a very high estimate of the probability that a nuclear war will destroy the world (by their estimates, if you are American and under 40, then nuclear war is the single most likely way in which you might die next year).

Nuclear weapons have been available on the "black market" (thanks to sloppy soviet handling practices) for decades, yet no terrorist or criminal group has ever used a nuclear fission initiation device. Nuclearrisk.org claims "terrorists may soon get their own button on the vest", citing Al-Qaeda's open desire to acquire nuclear weapons.

I am unable, if I assume fully honest and rational assessments, to reconcile these points of fact with one another. They disagree with each other. Furthermore, given that many of these risk assessments seem to carry the implicit assumption that if a single nuke is used, the whole world will start glowing in the dark (see http://news.stanford.edu/news/2009/july22/hellman-nuclear-analysis-071709.html for an example of this, from Martin Hellman himself) -- well, it gets further absurd.

In other words: folks need to be careful, when crafting expert opinions, to avoid déformation professionnelle.

Replies from: gwern
comment by gwern · 2011-11-04T15:39:30.947Z · LW(p) · GW(p)

Nuclear weapons have been available on the "black market" (thanks to sloppy soviet handling practices) for decades, yet no terrorist or criminal group has ever used a nuclear fission initiation device

Cite please. From Pinker's new book:

It’s really only nuclear weapons that deserve the WMD acronym. Mueller and Parachini have fact-checked the various reports that terrorists got “just this close” to obtaining a nuclear bomb and found that all were apocryphal. Reports of “interest” in procuring weapons on a black market grew into accounts of actual negotiations, generic sketches morphed into detailed blueprints, and flimsy clues (like the aluminum tubes purchased in 2001 by Iraq) were overinterpreted as signs of a development program.

Each of the pathways to nuclear terrorism, when examined carefully, turns out to have gantlets of improbabilities. There may have been a window of vulnerability in the safekeeping of nuclear weapons in Russia, but today most experts agree it has been closed, and that no loose nukes are being peddled in a nuclear bazaar. Stephen Younger, the former director of nuclear weapons research at Los Alamos National Laboratory, has said, “Regardless of what is reported in the news, all nuclear nations take the security of their weapons very seriously.”274 Russia has an intense interest in keeping its weapons out of the hands of Chechen and other ethnic separatist groups, and Pakistan is just as worried about its archenemy Al Qaeda. And contrary to rumor, security experts consider the chance that Pakistan’s government and military command will fall under the control of Islamist extremists to be essentially nil.275 Nuclear weapons have complex interlocks designed to prevent unauthorized deployment, and most of them become “radioactive scrap metal” if they are not maintained.276 For these reasons, the forty-seven-nation Nuclear Security Summit convened by Barack Obama in 2010 to prevent nuclear terrorism concentrated on the security of fissile material, such as plutonium and highly enriched uranium, rather than on finished weapons.

The dangers of filched fissile material are real, and the measures recommended at the summit are patently wise, responsible, and overdue. Still, one shouldn’t get so carried away by the image of garage nukes as to think they are inevitable or even extremely probable. The safeguards that are in place or will be soon will make fissile materials hard to steal or smuggle, and if they went missing, it would trigger an international manhunt. Fashioning a workable nuclear weapon requires precision engineering and fabrication techniques well beyond the capabilities of amateurs. The Gilmore commission, which advises the president and Congress on WMD terrorism, called the challenge “Herculean,” and Allison has described the weapons as “large, cumbersome, unsafe, unreliable, unpredictable, and inefficient.”277 Moreover, the path to getting the materials, experts, and facilities in place is mined with hazards of detection, betrayal, stings, blunders, and bad luck. In his book On Nuclear Terrorism, Levi laid out all the things that would have to go right for a terrorist nuclear attack to succeed, noting, “Murphy’s Law of Nuclear Terrorism: What can go wrong might go wrong.”278 Mueller counts twenty obstacles on the path and notes that even if a terrorist group had a fifty-fifty chance of clearing every one, the aggregate odds of its success would be one in a million. Levi brackets the range from the other end by estimating that even if the path were strewn with only ten obstacles, and the probability that each would be cleared was 80 percent, the aggregate odds of success facing a nuclear terrorist group would be one in ten. Those are not our odds of becoming victims. A terrorist group weighing its options, even with these overly optimistic guesstimates, might well conclude from the long odds that it would be better off devoting its resources to projects with a higher chance of success. None of this, to repeat, means that nuclear terrorism is impossible, only that it is not, as so many people insist, imminent, inevitable, or highly probable.
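
(A quick check of the aggregate-odds arithmetic quoted above, assuming, as the passage does, independent obstacles each cleared with a fixed probability.)

    # Reproducing the back-of-the-envelope arithmetic in the quoted passage.
    # Assumes independent obstacles, each cleared with the stated probability.

    mueller = 0.5 ** 20  # 20 obstacles, fifty-fifty chance of clearing each
    levi = 0.8 ** 10     # 10 obstacles, 80 percent chance of clearing each

    print(f"Mueller: {mueller:.1e}")  # ~9.5e-07, i.e. about one in a million
    print(f"Levi:    {levi:.2f}")     # ~0.11, i.e. about one in ten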

Replies from: Logos01
comment by Logos01 · 2011-11-04T16:50:43.204Z · LW(p) · GW(p)

Cite, please.

200 Soviet nukes lost in Ukraine -- article from Sept 13, 2002. There have been reported losses of nuclear submarines at sea since then as well (though those are unlikely to be recoverable). Note: even if that window is closed now, it was open then, and no terrorist groups used that channel to acquire nukes -- nor is there, as your citation notes, even so much as an actually recorded attempt to do so -- in the entirety of that window of opportunity.

When dozens of disparate extremist groups have failed to even attempt to acquire a specific category of weapon, we can safely generalize to a principle governing how 'terrorists' interact with 'nukes' (in this case): they are exceedingly unlikely to want to do so.

In this case, I assert it is because all such groups are inherently political, and as such the knowable political fallout (pun intended) of using a nuclear bomb is sufficient that it in and of itself acts as a deterrent against their use: I am possessed of a strong belief that any terrorist organization that used a nuclear bomb would be eradicated by the governments of every nation on the planet. There is no single event more likely to unify the hatred of all mankind against the perpetrator than the rogue use of a nuclear bomb; we have stigmatized them to that great an extent.

Replies from: gwern
comment by gwern · 2011-11-04T20:06:43.406Z · LW(p) · GW(p)

200 Soviet nukes lost in Ukraine -- article from Sept 13, 2002. There have been reported losses of nuclear submarines at sea since then as well (though those are improbably recoverable).

A Pravda article about an accounting glitch is not terribly convincing. Accounting problems do not even mean that the bombs were accessible at any point (assuming they existed), much less that they have been available 'on the "black market" (thanks to sloppy soviet handling practices) for decades'! Srsly.

(Nor do lost submarines count; the US and Russia have difficulties in recovering them, black-market groups are right out, even the drug cartels can barely build working shallow subs.)

Replies from: Logos01
comment by Logos01 · 2011-11-04T22:21:48.479Z · LW(p) · GW(p)

A Pravda article about an accounting glitch is not terribly convincing. Accounting problems do not even mean that the bombs were accessible at any point (assuming they existed), much less that they have been available 'on the "black market" (thanks to sloppy soviet handling practices) for decades'! Srsly.

You've missed the point of what I was asserting with that article.

I was demonstrating that the Soviets did not keep proper track of their nuclear weapons, to the point where even they did not know how many they had. The rest follows from there with public-knowledge information, not the least of which is the extremity of corruption that existed in the CCCP.

comment by Drahflow · 2011-11-06T12:36:28.607Z · LW(p) · GW(p)

Risk mitigation groups would gain some credibility by publishing concrete probability estimates of "the world will be destroyed by X before 2020" (and similar for other years). As many of the risks are rather short events (think nuclear war / asteroid strike / singularity), the world would be destroyed by a single cause, so the respective probabilities are roughly disjoint and can be summed. I would not be surprised if the total probability comes out well above 1. Has anybody ever compiled a list of separate estimates?

On a related note, how much of the SIAI is financed on credit? Any group that estimates high risks of disastrous events should be willing to pay higher interest rates than the market average (as the expected repayment is reduced by the nontrivial probability of everyone dying before the contract matures).
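
(A rough sketch of the arithmetic behind that last point, assuming a single-period loan, risk-neutral expected-value reasoning, and extinction as the only source of non-repayment; the rates and probabilities are illustrative only.)

    # A borrower who assigns probability p_doom to everyone dying before the
    # loan matures expects to repay only with probability (1 - p_doom), so a
    # promised rate r costs them (1 - p_doom) * (1 + r) in expectation. They
    # should therefore be indifferent up to the rate where that equals the
    # ordinary market repayment, i.e. (1 - p_doom) * (1 + r) = 1 + r_market.

    def doom_adjusted_rate(r_market: float, p_doom: float) -> float:
        """Highest promised rate whose expected cost to the doom-believing
        borrower matches an ordinary market-rate loan."""
        return (1 + r_market) / (1 - p_doom) - 1

    print(doom_adjusted_rate(0.05, 0.00))  # 0.05   -> no doom premium
    print(doom_adjusted_rate(0.05, 0.10))  # ~0.167 -> noticeable premium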

comment by [deleted] · 2011-11-08T05:29:20.883Z · LW(p) · GW(p)

So you more highly value your immediate personal comfort than you do the long-term survival of the human race?

I don't care at all about the long-term survival of the human race. Is there any reason I should? I care about the short-term survival of humanity but only because it affects me and other people that I care about. But going to prison would also affect me and the people I care about so it would be a big deal. At least like 25% as bad as the end of humanity.

I suspect what you lack is imagination and determination.

Certainly that is true in this case. I'm not going to put a lot of work into developing an elaborate plan to do something that I don't think should be done.

Replies from: Logos01
comment by Logos01 · 2011-11-08T07:22:27.025Z · LW(p) · GW(p)

I don't care at all about the long-term survival of the human race. Is there any reason I should?

Define "long-term", then, as "more than a decade from today". I.e.; "long-term" includes your own available lifespan.

But going to prison would also affect me and the people I care about so it would be a big deal. At least like 25% as bad as the end of humanity.

Would you be so kind as to justify this assertion for me? I find my imagination insufficient to the task of assigning equivalent utility metrics to "me in prison" == 0.25x "end of the species".

Certainly that is true in this case. I'm not going to put a lot of work into developing an elaborate plan to do something that I don't think should be done.

... I really hate it when people reject counterfactuals on the basis of their being counterfactuals alone. It's a dishonest conversational tactic.

Replies from: None
comment by [deleted] · 2011-11-08T15:50:11.321Z · LW(p) · GW(p)

Would you be so kind as to justify this assertion for me? I find my imagination insufficient to the task of assigning equivalent utility metrics to "me in prison" == 0.25x "end of the species".

Well, I give equivalent utility to "death of all the people I care about" and "end of the species." Thinking about it harder, I feel like "death of all the people I care about" is more like 10-100X worse than my own death. Me going to prison for murder is about as bad as my own death, so it's more like .01-.1x end of humanity. Can you imagine that?

... I really hate it when people reject counterfactuals on the basis of their being counterfactuals alone. It's a dishonest conversational tactic.

I was considering writing a long thing about your overconfidence in thinking you could carry out such a plan without any (I am presuming) experience doing that kind of thing. I was going to explain how badly you are underestimating the complexity of the world around you and overestimating how far you can stray from your own personal experience and still make reasonable predictions. But this is just a silly conversation that everyone else on LW seems to hate so y bother?

Replies from: Logos01
comment by Logos01 · 2011-11-08T16:03:21.172Z · LW(p) · GW(p)

Me going to prison for murder is about as bad as my own death, so it's more like .01-.1x end of humanity. Can you imagine that?

I'm curious, now, as to what nation or state you live in.

Thinking about it harder I feel like "death of all the people I care about" is more like 10-100X worse than my own death.

Well -- in this scenario you are "going to die" regardless of the outcome. The only question is whether the people you care about will. Would you kill others (who were themselves also going to die if you did nothing) and allow yourself to die, if it would save people you cared about?

(Also, while it can lead to absurd consequences -- Eliezer's response to the Sims games for example -- might I suggest a re-examination of your internal moral consistency? As it stands it seems like you're allowing many of your moral intuitions to fall in line with evolutionary backgrounds. Nothing inherently wrong with that -- our evolutionary history has granted us a decent 'innate' morality. But we who 'reason' can do better.)

I was considering writing a long thing about your overconfidence in thinking you could carry out such a plan without any (I am presuming) experience doing that kind of thing.

I didn't list any plan. This was intentional. I'm not going to give pointers on how to do exactly what this topic entails to others who might be seeking them out for reasons I personally haven't vetted. That, unlike what some others have criticized about this conversation, actually would be irresponsible.

That being said, the fact that you're addressing the element you are demonstrates a further non sequitur. It doesn't matter whether or not you believe the scenario plausible: what would your judgment of the rightfulness of carrying out the action yourself in the absence of democratic systems be?

that everyone else on LW seems to hate so y bother?

  1. Why allow your opinions to be swayed by the emotional responses of others?

  2. In my case, I'm currently sitting at -27 on my 30-day karma score. That's not even the lowest I've been in the last thirty days. I'm not really worried about my popularity here. :)

Replies from: None
comment by [deleted] · 2011-11-08T16:38:23.430Z · LW(p) · GW(p)

I'm curious, now, as to what nation or state you live in.

I live in Illinois. I am curious as to y you are curious.

Would you kill others (who were themselves also going to die if you did nothing) and allow yourself to die, if it would save people you cared about?

Probably. For instance, I would try to defend my wife/child from imminent physical harm even if it put me in a lot of danger. If that meant trying to kill someone then I would do that but in that case it would be justifiable and I probably wouldn't go to prison if I survived.

what would your judgment of the rightfulness of carrying out the action yourself in the absence of democratic systems be?

I feel like we are doomed to talk about different things. I think you are talking about "morally right" which I don't usually think about unless I am trying to convince someone to do something against their own interest. I observe that large democratic governments deliberately kill people all the time without consequence. I also observe that individuals have more trouble doing so. Consequently, I think that individuals trying to kill people is a bad idea. So it's not right in the same sense that exercising a 60 delta call 3 mos from expiration is not right.

Why allow your opinions to be swayed by the emotional responses of others?

My opinions are unaffected but my actions might be. If I am telling jokes and everyone is staring at me stone faced I'm likely to stop.

Replies from: pedanterrific, Logos01
comment by pedanterrific · 2011-11-08T16:47:30.674Z · LW(p) · GW(p)

Me going to prison for murder is about as bad as my own death

I'm curious, now, as to what nation or state you live in.

I live in Illinois. I am curious as to y you are curious.

I imagine if you lived in Norway you would not be of that opinion.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-08T18:57:36.842Z · LW(p) · GW(p)

Me going to prison for murder is about as bad as my own death

I'm curious, now, as to what nation or state you live in.

I live in Illinois. I am curious as to y you are curious.

I imagine if you lived in Norway you would not be of that opinion.

What are Norwegian prisons like?

Replies from: pedanterrific
comment by pedanterrific · 2011-11-08T19:02:14.977Z · LW(p) · GW(p)

Yeah, that's... what I was getting at. (Was this meant as a refutation somehow? I'm confused.)

comment by Logos01 · 2011-11-09T02:33:14.493Z · LW(p) · GW(p)

Probably. For instance, I would try to defend my wife/child from imminent physical harm even if it put me in a lot of danger.

How many people you didn't know would you equate to being "of equal concern" to you as one person you do know when deciding whether or not it's worth it to risk your own life to save them? Please express this as a ratio -- unknowns:knowns -- and then, if you like, knowns:loveds.

comment by Dmytry · 2012-03-28T06:50:37.785Z · LW(p) · GW(p)

"because Eliezer is probably the world expert on AI risk"

There are no experts on AI risk. There's nowhere to get expertise from. He read some SF, got caught up on an idea, did not study (self-study or otherwise) CS or any actually relevant body of knowledge to the point of producing anything useful, and he is a very convincing writer. The experts? You'll get experts in 2050. He's a dilettante.

People follow some sort of distribution in their risk estimates. Eliezer is just the far, far end of the bell curve on the risk estimate for AI, among those with writing skills. He does make some interesting points, but he's not a risk estimator.

comment by [deleted] · 2011-11-04T17:32:19.336Z · LW(p) · GW(p)

It seems like it might be a good idea to know what the probability of each of these risks is.

I think you need to consider this point further. Before you go through the effort of estimating a probability it is good to know if there is any value to such an estimate. For instance, if you did a lot of work and figured out that the probability that the world would be destroyed by UFAI was 5%, would that change your behavior in any way? What if you found it to be 50% or .000005%? Personally, I don't think I would do much differently. Maybe in the 50% case I would vote for the mass murder of all AI researchers but currently I don't know of any major political candidates with that in their platform. Other than that it seems like pretty useless information to me.

Replies from: Logos01
comment by Logos01 · 2011-11-07T05:16:11.841Z · LW(p) · GW(p)

Maybe in the 50% case I would vote for the mass murder of all AI researchers but currently I don't know of any major political candidates with that in their platform. Other than that it seems like pretty useless information to me.

If you were willing to assert that AI researchers should be 'murdered', why would you limit yourself to the political process to that end? Why not start picking them off in various ways through your own direct actions (such as saving up enough money to put out hits on them all simultaneously, etc.)?

What I'm getting at is; why do you restrict what you believe to be "right and necessary" to a democratic process when you could take individual action to that end as well?

Replies from: None
comment by [deleted] · 2011-11-08T03:07:14.766Z · LW(p) · GW(p)

There are several reasons why I wouldn't want to personally murder AI researchers, even if I believed that they were going to destroy the world (which I don't).

  1. I don't want to go to prison.

  2. I generally don't like killing mammals. People are some of the least cute mammals out there but it would still take an emotional toll to kill them. I'd rather outsource the killing of mammals to others.

  3. I think that killing all AI researchers would be fairly effective government policy in terms of reducing the risk of UFAI. I think that my trying to kill AI researchers would do nothing to prevent UFAI.

Replies from: steven0461
comment by steven0461 · 2011-11-08T04:14:04.700Z · LW(p) · GW(p)

I know both of you are speaking hypothetically, but please don't make comments that could be read as advocating murder, or that could be read as creepily cavalier about the possibility.

Replies from: Logos01, None
comment by Logos01 · 2011-11-08T07:29:31.011Z · LW(p) · GW(p)

but please don't make comments that could be read as advocating murder, or that could be read as creepily cavalier about the possibility.

I understand that this topic has a high yuck factor -- but it is the duty of the rigorously disciplined rationalist to maintain that discipline even in the face of uncomfortable thoughts.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2011-11-09T07:16:13.642Z · LW(p) · GW(p)

You're missing Steven's point: "avoid looking needlessly creepy".

Replies from: Logos01
comment by Logos01 · 2011-11-09T07:17:03.131Z · LW(p) · GW(p)

I'm not missing it. I'm rejecting it.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2011-11-09T07:24:35.539Z · LW(p) · GW(p)

"Yuck factor" has nothing to do with it. The "duty of the rigorously disciplined rationalist" does not include ignoring others' reactions to your statements.

Replies from: Logos01
comment by Logos01 · 2011-11-09T07:35:31.523Z · LW(p) · GW(p)

Avoiding unpleasant and "creepy" topics merely because others find them unpleasant is to fail in that duty. It does, in fact, include ignoring others' reactions to your statements in terms of topic.

The topic was already framed; and the reactions have been most vehement to statements well-framed with context, ignoring that context as they espouse those reactions.

To allow an entire topic to be squelched for no better reason than others saying "that is creepy", or something analogous, is in fact a failure mode.

comment by [deleted] · 2011-11-08T05:07:23.488Z · LW(p) · GW(p)

I really don't want to be perceived as advocating murder. Please don't get hung up on my use of the word "murder." I really just meant deliberate killing. What I was talking about would not be murder any more than when the US military killed Osama Bin Laden. Murder is bad and illegal. For the USGov to kill Bin Laden was both legal and good, hence definitely not murder.

Maybe if it turns out that UFAI is a big problem in like the 2030s then pro-AI people will be viewed in that decade somewhat like how pro-Bin Laden people are viewed now.

Replies from: Zed
comment by Zed · 2011-11-08T13:17:58.577Z · LW(p) · GW(p)

You shouldn't do it because it's an invitation for people to get sidetracked. We try to avoid politics for the same reason.

Replies from: None
comment by [deleted] · 2011-11-08T15:04:16.282Z · LW(p) · GW(p)

Sidetracked from what?

Replies from: Zed
comment by Zed · 2011-11-08T15:26:20.823Z · LW(p) · GW(p)

From the topic, in this case "selection effects in estimates of global catastrophic risk". If you casually mention you don't particularly care about humans or that personally killing a bunch of them may be an effective strategy the discussion is effectively hijacked. So it doesn't matter that you don't wish to do anybody harm.

Replies from: None
comment by [deleted] · 2011-11-08T16:17:25.887Z · LW(p) · GW(p)

I can't control what other people say but I didn't at any point say that I don't care about humans, nor did I say that personally killing anyone is a good idea ever.

My main point was that the probabilities of various xRisks don't matter. My side point was that if it turned out that UFAI was a significant risk then politically enforced luddism would be the logical response. I like to make that point once in a while in the hopes that SingInst will realize the wisdom of it.

Replies from: lessdazed
comment by lessdazed · 2011-11-10T16:36:54.459Z · LW(p) · GW(p)

politically enforced luddism would be the logical response.

It would be a response, but you have described it as "logical" instead of with an adjective describing some of its relative virtues.

Also, distinguish the best response for society and the best response for an advocate, even if you think they are nearly the same, just to show you've considered that.

comment by Logos01 · 2011-11-08T05:48:56.578Z · LW(p) · GW(p)

That would require that I had asserted I agreed with the underlying premise that UFAI was a significant risk.

At the moment, I do not.

I also find it rather unsurprising that the comment in question has been as far down-voted as it has been, though once again I am left noting that while I am not surprised, I am disappointed with LW in general. This is happening too often, I fear.

Replies from: ArisKatsaris, betterthanwell, nshepperd, None
comment by ArisKatsaris · 2011-11-08T06:35:48.646Z · LW(p) · GW(p)

I also find it rather unsurprising that the comment in question has been as far down-voted as it has been,

Most of us frown on irresponsible encouragements to criminal acts.

Replies from: Logos01
comment by Logos01 · 2011-11-08T06:45:07.755Z · LW(p) · GW(p)

Most of us frown on irresponsible encouragements to criminal acts.

As well you should. Of course, this carries a number of interesting assumptions:

  • The assumption of irresponsibility.

  • The assumption of encouragement.

  • The assumption of the 'wrongness' of criminal acts.

Let me rephrase this: If you believed -- very strongly (say confidence over 90%) -- there was a strong chance that a specific person was going to destroy the world, and you also knew that only you were willing to acknowledge the material evidence which led you to this conclusion...

Would you find it acceptable to sit still and let the world end, merely because the act that would ensure the survival of the human race was criminal?

In that counterfactual, I do not. I find it reprehensibly irresponsible, in fact.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-11-08T10:57:59.862Z · LW(p) · GW(p)

Logos, you don't need to preach about utilitarian calculations to us. You have it the other way around. We don't condemn your words because we can't make them, we condemn them because we can make them better than you.

It was your posts I condemned and downvoted as irresponsible, it was your posts' utility that I considered negative, not lone heroic actions that saved the world from inventors of doom. You did none of the latter, you did some of the former. So it's the utility of the former that's judged.

Also, if I ever found myself perceiving that "only I was willing to acknowledge the material evidence which led me to this conclusion...", the probabilities would be severely in favour of my own mind having cracked, rather than me being the only rational person in the world. We run on corrupted hardware!

That you don't seem to consider that, nor do you urge others to consider it, is part of the fatal irresponsibility of your words.

Replies from: Logos01
comment by Logos01 · 2011-11-08T11:11:17.286Z · LW(p) · GW(p)

Logos, you don't need to preach about utilitarian calculations to us. You have it the other way around. We don't condemn your words because we can't make them, we condemn them because we can make them better than you.

Then do so.

That you don't seem to consider that, nor do you urge others to consider it, is part of the fatal irresponsibility of your words.

I don't seem to consider it because it is a necessary part of the calculus of determining whether a belief is valid. This would be why I mentioned "material evidence" at all -- an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology. The objection of "but it could be a faulty belief" is irrelevant here. We have already done away with it in the formation of the specific counterfactual. That it is an exceedingly unlikely counterfactual does not change the fact that it is a useful counterfactual.

What I'm elucidating here is a rather ugly version of a topic that Eliezer was discussing with his Sword of Good parable: to be effective in discerning what is morally correct one must be in the practice and habit of throwing away cached moral beliefs and evaluating even the most unpleasant of situations according to their accepted epistemological framework's methodology for such evaluations.

The AI serial-killer scenario is one such example.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-11-08T11:28:38.498Z · LW(p) · GW(p)

I don't seem to consider it because it is a necessary part of the calculus of determining whether a belief is valid. This would be why I mentioned "material evidence" at all -- an indicator that checks and confirmations are necessary to a sufficiently rigorous epistemology.

Don't you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?

If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words, and also says they never noticed you saying anything about "checks and confirmations", wouldn't you feel remotely responsible?

And as a sidenote, the lone nuts you'd be encouraging would be much more likely to murder FAI researchers, than those uFAI researchers that'd be working in military bases with the support of Russia, or China, or North Korea, or America. Therefore, if anything, they'd be more likely to bring about the world's doom, not prevent it.

Replies from: Logos01
comment by Logos01 · 2011-11-08T11:55:03.796Z · LW(p) · GW(p)

Don't you think that a remotely responsible post should have at the very least emphasized that significantly more than you did?

Any person insufficiently familiar with rational skepticism to the point that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a "90%" certainty statement would be immune to the kind of discourse this site focuses on in the first place.

It's not just implicit; it's necessary to reach that state. It's not irresponsible to know your audience.

If tomorrow some lone nut murders an AI researcher, and after being arrested says they found encouragement in your specific words,

Then they are a lunatic who does not know how to reason and would have done it one way or the other. In fact, this is already a real-world problem -- and my words have no impact on that one way or the other on those individuals.

and also says they never noticed you saying anything about "checks and confirmations", wouldn't you feel remotely responsible?

No. Nor should I. Any person who could come to a statement of "I am 90% certain of X" (as I used that 90% as a specific inclusion in the counterfactual) who also could not follow the dialogue-as-it-was to the reasonable conclusion that it was a counterfactual... well, they would have had their conclusion long before they read my words.

And as a sidenote, the lone nuts you'd be encouraging would be much more likely to murder FAI researchers, than those uFAI researchers that'd be working in military bases with the support of Russia, or China, or North Korea, or America.

I'm curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities. The uFAI and the FAI 'crowd' are undifferentiated, today, in terms of their accessibility.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-11-08T12:43:47.701Z · LW(p) · GW(p)

Any person insufficiently familiar with rational skepticism to the point that they would not doubt their own conclusions and go through a rigorous process of validation before reaching a "90%" certainty statement would be immune to the kind of discourse this site focuses on in the first place.

What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive to it?

I'm curious as to what makes you believe this to be the case. As far as I am aware, the fundamental AGI research ongoing in the world is currently being conducted in universities.

I do not presume to know what secret research on the subject is or is not happening sponsored by governments around the world, but if any such government-sponsored work is happening in secret I consider it significantly more likely that it is uFAI, and significantly less likely that its participants would be likely to be convinced of the need for Friendliness than independent (and thus significantly more unprotected) researchers.

Replies from: Logos01
comment by Logos01 · 2011-11-08T12:53:34.338Z · LW(p) · GW(p)

What is your certainty for this conclusion, and what rigorous process of validation did you use to arrive to it?

My certainty is fairly high, though of course not absolute. I base it off of my knowledge of how humans form moral convictions; how very few individuals will abandon cached moral beliefs, and the reasons I have ever encountered for individuals doing so (either through study of psychology, reports of others' studies of psychology -- including the ten years I have spent cohabiting with a student of abnormal psychology), personal observations into the behaviors of extremists and conformists, and a whole plethora of other such items that I just haven't the energy to list right now.

I do not presume to know what secret research on the subject is or is not happening sponsored by governments around the world, but if any such government-sponsored work is happening in secret I consider it significantly more likely that it is uFAI

I'm not particularly given to conspiratorial paranoia. DARPA is the single most likely resource for such, and having been in touch with some individuals from that "area of the world", I know that our military has strong reservations about the idea of advancing weaponized autonomous AI.

Besides, the theoretical groundwork for AGI in general is insufficient to even begin to assign high probability to AI itself coming about anytime within the next generation, Friendly or otherwise. IA is far more likely to occur, frankly -- especially with the work of folks like Theodore Berger.

However, you here have contradicted yourself: you claim to have no special knowledge yet you also assign high probability to uFAI researchers surviving a conscientious pogrom of AI researchers.

This is contradictory.

comment by betterthanwell · 2011-11-08T06:28:07.022Z · LW(p) · GW(p)

That would require that I had asserted I agreed with the underlying premise that UFAI was a significant risk.

At the moment, I do not.

Quoted for posterity.

Replies from: Logos01
comment by Logos01 · 2011-11-08T06:46:26.907Z · LW(p) · GW(p)

In that case, allow me to add that I believe the current likelihood of UFAI to be well below any other known species-level existential risk, and that I also believe that the current crop of AGI researchers are sufficiently fit to address this problem.

comment by nshepperd · 2011-11-08T06:48:51.255Z · LW(p) · GW(p)

That would require that I had asserted I agreed with the underlying premise that UFAI was a significant risk.

At the moment, I do not.

I wouldn't be terribly surprised, though, if this were the sort of consideration likely to be conveniently ignored by those in charge of enforcing the relevant laws in your jurisdiction!

Replies from: Logos01
comment by Logos01 · 2011-11-08T06:52:21.037Z · LW(p) · GW(p)

Anyone interested in "reporting" me to local law enforcement need only message me privately and I will provide them with my full name, address, and contact information for my local law enforcement.

I am that confident that this is a non-issue.

Send to: logos01@TempEmail.net (Address will expire on Nov. 23, 2011)

Replies from: pedanterrific
comment by pedanterrific · 2011-11-09T08:17:59.747Z · LW(p) · GW(p)

What are you trying to prove, here? What's the point of this?

Replies from: Logos01
comment by Logos01 · 2011-11-09T09:44:22.388Z · LW(p) · GW(p)

The demonstration of the invalidity of the concern raised here -- that this dialogue could be treated legally as a death threat -- and furthermore of the insincerity of its being raised as a concern: after a window of more than 24 hours, not one message has arrived at that address (unless it was somehow removed between the intervals at which I checked it).

This, then, is evidence against the legitimacy of the complaint; evidence that what's really motivating these responses isn't concern that this dialogue would be treated as a death threat, but something else. What precisely that something else is, my offer could not differentiate.

Replies from: pedanterrific
comment by pedanterrific · 2011-11-09T16:23:39.706Z · LW(p) · GW(p)

Or maybe, you know, everyone here knows it wasn't actually a death threat and has no desire to get you in legal trouble for no reason, but wanted to warn you it could be perceived that way out of genuine concern?

Replies from: Logos01
comment by Logos01 · 2011-11-09T16:47:34.099Z · LW(p) · GW(p)

No, what's going on here is something significantly "other" than "everyone here knows it wasn't actually a death threat [...] but wanted to warn you it could be perceived that way." -- those are mutually exclusive conditions by the way; either everyone does not know this, or it can't be perceived that way.

The truly ironic thing is that there isn't a legitimate interpretation of my words that could make them a death threat. I responded to an initial counterfactual with a query as to the moral justification of refusing to take individual action in an end-of-the-world-if-you-don't scenario.

In attempting to explore this, I was met with repeated willful refusals to engage the scenario, admonitions to "not be creepy", and bald assertions that "I'm not better at moral calculus but worse".

These responses, I cannot help but conclude, are demonstrative of cached moral beliefs inducing emotional responses overriding clear-headed reasoning. I'm used to this; the overwhelming majority of people are frankly unable to start from the 'sociopathic' (morally agnostic, that is) view and work their way back to a sound moral epistemology. It is no surprise to me that the population of LW is mainly comprised of "neurotypical" individuals. (Please note: this is not an assumption of superiority on my part.)

This is unfortunate, but... short of 'taking the karma beating' there's really no way for me to demonstratively point that out in any effective way.

I don't think I'm going to continue to respond any further in this thread, though. It's ceased being useful to any extent, insofar as I can see.

comment by [deleted] · 2011-11-08T06:27:24.225Z · LW(p) · GW(p)

What's there to be disappointed with?

Replies from: Logos01
comment by Logos01 · 2011-11-08T06:41:27.566Z · LW(p) · GW(p)

In this case? The demonstrated inability to parse counterfactuals from postulates, in emotionally charged contexts.

Replies from: None
comment by [deleted] · 2011-11-08T06:46:41.993Z · LW(p) · GW(p)

A counterfactual situation whose consequent is a death threat may still be a death threat, depending on your jurisdiction. You might want to seek legal advice, which I'm unable to provide.

Replies from: Logos01
comment by Logos01 · 2011-11-08T06:49:55.478Z · LW(p) · GW(p)

A counterfactual situation whose consequent is a death threat may still be a death threat, depending on your jurisdiction.

The facility with which free exercise (free speech) would be applied to this particular dialogue leaves me sufficiently confident that I have absolutely no legal concerns to worry about whatsoever. The entire nature of counterfactual dialogue is such that you are making it clear that you are not associating the topic discussed with any particular reality. I.e., you are not actually advocating it.

And, frankly, if LW isn't prepared to discuss the "harder" questions of how to apply our morality in such murky waters, and is only going to reserve itself to the "low-hanging fruit" -- well... I'm fully justified in being disappointed in the community.

I expect better, you see, of a community that prides itself on "claiming" the term "rationalist".