Best career models for doing research?

post by Kaj_Sotala · 2010-12-07T16:25:22.584Z · score: 33 (31 votes) · LW · GW · Legacy · 1028 comments

Ideally, I'd like to save the world. One way to do that involves contributing to academic research, which raises the question of the most effective way to go about it.

The traditional wisdom says that if you want to do research, you should get a job at a university. But for the most part the system seems to be set up so that you first spend a long time working for someone else, researching their ideas. After that you can lead your own group, but then most of your time will be spent applying for grants and on other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors also need to spend time teaching, so that's another time sink.

I suspect I would have more time to dedicate to actual research, and could get started on it sooner, if I took a part-time job and did the research in my spare time. E.g. the recommended rates for a freelance journalist in Finland would allow me to spend one week each month working and three weeks doing research, assuming, of course, that I can pull off the freelance journalism part.

What (dis)advantages does this have compared to the traditional model?

Some advantages:

Some disadvantages:

EDIT: Note that while I certainly do appreciate comments specific to my situation, I posted this over at LW and not Discussion because I was hoping the discussion would also be useful for others who might be considering an academic path. So feel free to also provide commentary that's US-specific, say.

1028 comments

Comments sorted by top scores.

comment by jsteinhardt · 2010-12-08T02:24:21.790Z · score: 36 (42 votes) · LW(p) · GW(p)

I believe that most people hoping to do independent academic research vastly underestimate both the amount of prior work done in their field of interest, and the advantages of working with other very smart and knowledgeable people. Note that it isn't just about working with other people, but with other very smart people. That is, there is a difference between "working at a university / research institute" and "working at a top university / research institute". (For instance, if you want to do AI research in the U.S., you probably want to be at MIT, Princeton, Carnegie Mellon, Stanford, CalTech, or UC Berkeley. I don't know about other countries.)

Unfortunately, my general impression is that most people on LessWrong are mostly unaware of the progress made in statistical machine learning (presumably the brand of AI that most LWers care about) and cognitive science in the last 20 years (I mention these two fields because I assume they are the most popular on LW, and also because I know the most about them). And I'm not talking about impressive-looking results that dodge around the real issues, I'm talking about fundamental progress towards resolving the key problems in artificial intelligence. Anyone planning to do AI research should probably at least understand these first, and what the remaining obstacles are.

You aren't going to understand this without doing a lot of reading, and by the time you've done that reading, you'll probably have identified a research group whose work clearly reflects your personal research goals. At this point it seems like the obvious next step is to apply to work with that group as a graduate student / post doc. This circumvents the problem of having to work on research you aren't interested in. As for other annoyances, while teaching can potentially be a time-sink, the rest of "wasted" time seems to be about publishing your work; I really find it hard to justify not publishing your work, since (a) other people need to know about it, and (b) writing up your results formally oftentimes leads to a noticeably deeper understanding than otherwise. Of course, you can waste time trying to make your results look better than they are, but this certainly isn't a requirement and has obvious ethical issues.

EDIT: There is the eventual problem that senior professors spend more and more of their time on administrative work / providing guidance to their lab, rather than doing research themselves. But this isn't going to be an issue until you get tenure, which is, if you do a post-doc, something like 10-15 years out from starting graduate school.

comment by Danny_Hintze · 2010-12-10T23:30:46.353Z · score: 6 (6 votes) · LW(p) · GW(p)

There is the eventual problem that senior professors spend more and more of their time on administrative work / providing guidance to their lab, rather than doing research themselves. But this isn't going to be an issue until you get tenure, which is, if you do a post-doc, something like 10-15 years out from starting graduate school.

This might not even be a significant problem when the time does come around. High fluid intelligence only lasts for so long, and thus using more crystallized intelligence later on in life to guide research efforts rather than directly performing research yourself is not a bad strategy if the goal is to optimize for the actual research results.

comment by jsteinhardt · 2010-12-11T03:05:43.836Z · score: 4 (4 votes) · LW(p) · GW(p)

Those are roughly my thoughts as well, although I'm afraid that I only believe this to rationalize my decision to go into academia. While the argument makes sense, there are definitely professors that express frustration with their position.

What does seem like pretty sound logic is that if you could get better results without a research group, you wouldn't form a research group. So you probably won't run into the problem of achieving suboptimal results due to administrative overhead (you could always just hire fewer people), but you might run into the problem of doing work that is less fun than it could be.

Another point is that plausibly some other profession (corporate work?) would have less administrative overhead per unit of efficiency, but I don't actually believe this to be true.

comment by nhamann · 2010-12-10T11:27:35.714Z · score: 6 (6 votes) · LW(p) · GW(p)

... the progress made in statistical machine learning (presumably the brand of AI that most LWers care about) and cognitive science in the last 20 years... And I'm not talking about impressive-looking results that dodge around the real issues, I'm talking about fundamental progress towards resolving the key problems in artificial intelligence.

Could you point me towards some articles here? I fully admit I'm unaware of most of this progress, and would like to learn more.

comment by jsteinhardt · 2010-12-11T03:56:43.396Z · score: 12 (12 votes) · LW(p) · GW(p)

A good overview would fill up a post on its own, but some relevant topics are given below. I don't think any of it is behind a paywall, but if it is, let me know and I'll link to another article on the same topic. In cases where I learned about the topic by word of mouth, I haven't necessarily read the provided paper, so I can't guarantee the quality for all of these. I generally tried to pick papers that either gave a survey of progress or solved a specific clearly interesting problem. As a result you might have to do some additional reading to understand some of the articles, but hopefully this is a good start until I get something more organized up.

Learning:

Online concept learning: rational rules for concept learning [a somewhat idealized situation but a good taste of the sorts of techniques being applied]

Learning categories: Bernoulli mixture model for document classification, spatial pyramid matching for images

Learning category hierarchies: nested Chinese restaurant process, hierarchical beta process

Learning HMMs (hidden Markov models): HDP-HMMs. This is pretty new, so the details haven't been hammered out, but the article should give you a taste of how people are approaching the problem (though I haven't read this particular article; I forget where I first read about HDP-HMMs, but another paper on HDPs is this one). I think the original article I read was one of Erik Sudderth's, which are here. An older algorithm is the Baum-Welch algorithm.

Learning image characteristics: deep Boltzmann machines

Handwriting recognition: hierarchical Bayesian approach, basically the same as the previous research

Learning graphical models: a survey paper
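The HMM machinery above is easier to appreciate with a toy example. As a minimal sketch (not taken from any of the linked papers; the two-state weather model and all of its probabilities are invented), the classic forward algorithm computes the likelihood of an observation sequence under a discrete HMM:

```python
# Toy forward algorithm for a discrete HMM: computes P(observations)
# by summing over all hidden-state paths in O(T * N^2) time.
# The two-state weather model and its probabilities are invented.

def hmm_forward(init, trans, emit, observations):
    """init[i]: P(state i at t=0); trans[i][j]: P(j | i); emit[i][o]: P(o | i)."""
    n = len(init)
    # Base case: probability of starting in each state and emitting the first symbol.
    alpha = [init[i] * emit[i][observations[0]] for i in range(n)]
    # Recursion: propagate forward probabilities through each later observation.
    for obs in observations[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][obs]
            for j in range(n)
        ]
    return sum(alpha)

init = [0.6, 0.4]                 # P(rainy), P(sunny)
trans = [[0.7, 0.3], [0.4, 0.6]]  # rows: from-state, columns: to-state
emit = [[0.9, 0.1], [0.2, 0.8]]   # observations: 0 = umbrella, 1 = no umbrella

print(hmm_forward(init, trans, emit, [0, 0, 1]))
```

Baum-Welch then wraps this forward (and a matching backward) pass in an EM loop to fit the transition and emission matrices from data; the nonparametric HDP-HMM work linked above additionally learns the number of hidden states.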


Planning:

Planning in MDPs: value iteration, plus LQR trees for many physical systems

Planning in POMDPs: I don't actually know much about this; my impression is that we need to do more work in this area, but approaches include reinforcement learning. A couple interesting papers: Bayes risk approach, plus a survey of hierarchical methods
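For readers who haven't met value iteration, here is a minimal sketch on an invented three-state chain MDP (the deterministic transition and reward functions are made up for illustration; real planners handle stochastic transitions and vastly larger state spaces):

```python
# Minimal value iteration on a tiny deterministic MDP:
# states 0, 1, 2 in a chain; action 0 = stay, action 1 = move right.
# Stepping into state 2 yields reward 1; everything else yields 0.

def value_iteration(n_states, step, reward, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = [0.0] * n_states
    while True:
        V_new = [
            max(reward(s, a) + gamma * V[step(s, a)] for a in (0, 1))
            for s in range(n_states)
        ]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

def step(s, a):     # deterministic transition: move right or stay, capped at 2
    return min(s + a, 2)

def reward(s, a):   # reward 1 for entering state 2 from outside it
    return 1.0 if step(s, a) == 2 and s != 2 else 0.0

V = value_iteration(3, step, reward)
print(V)
```

With discount 0.9, the optimal values converge to roughly [0.9, 1.0, 0.0]: state 1 can claim the reward immediately, while state 0 must wait one discounted step.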

comment by Perplexed · 2011-01-22T04:52:52.275Z · score: 1 (1 votes) · LW(p) · GW(p)

... my general impression is that most people on LessWrong are mostly unaware of the progress made in statistical machine learning (presumably the brand of AI that most LWers care about) and cognitive science in the last 20 years ... . And I'm not talking about impressive-looking results that dodge around the real issues, I'm talking about fundamental progress towards resolving the key problems in artificial intelligence. Anyone planning to do AI research should probably at least understand these first, and what the remaining obstacles are.

I'm not planning to do AI research, but I do like to stay no more than ~10 years out of date regarding progress in fields like this. At least at the intelligent-outsider level of understanding. So, how do I go about getting and keeping almost up-to-date in these fields. Is MacKay's book a good place to start on machine learning? How do I get an unbiased survey of cognitive science? Are there blogs that (presuming you follow the links) can keep you up to date on what is getting a buzz?

comment by jsteinhardt · 2011-01-22T21:19:18.627Z · score: 2 (2 votes) · LW(p) · GW(p)

I haven't read MacKay myself, but it looks like it hits a lot of the relevant topics.

You might consider checking out Tom Griffiths' website, which has a reading list as well as several tutorials.

comment by sark · 2011-01-21T23:10:18.009Z · score: 1 (1 votes) · LW(p) · GW(p)

We should try to communicate more with long letters (snail mail). Academics seem to have done that a lot in the past, and from what I have seen those exchanges seem very productive, though this could be sampling bias. I don't see why there aren't more 'personal communication' cites, except for them possibly being frowned upon.

comment by jsteinhardt · 2011-01-21T23:46:01.029Z · score: 1 (1 votes) · LW(p) · GW(p)

Why use snail mail when you can use skype? My lab director uses it regularly to talk to other researchers.

comment by sark · 2011-01-22T01:06:23.247Z · score: 3 (3 votes) · LW(p) · GW(p)

Because it is written. Which makes it good for communicating complex ideas. The tradition behind it also lends it an air of legitimacy. Researchers who don't already have a working relationship with each other will take each other's letters more seriously.

comment by jsteinhardt · 2011-01-22T04:28:09.599Z · score: 2 (2 votes) · LW(p) · GW(p)

Upvoted for the good point about communication. Not sure I agree with the legitimacy part (what is p(Crackpot | Snail Mail) compared to p(Crackpot | Email)? I would guess higher).

comment by Sniffnoy · 2011-01-23T05:31:55.730Z · score: 1 (1 votes) · LW(p) · GW(p)

What I'm now wondering is, how does using email vs. snail mail affect the probability of using green ink, or its email equivalent...

comment by sark · 2011-01-22T12:01:12.880Z · score: 1 (1 votes) · LW(p) · GW(p)

Heh you are probably right. It just seemed strange to me how researchers cannot just communicate with each other as long as they have the same research interests. My first thought was that it might have been something to do with status games, where outsiders are not allowed. I suppose some exchanges require rapid and frequent feedback. But then, like you mentioned, wouldn't Skype do?

comment by jsteinhardt · 2011-01-22T21:09:22.736Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm not sure what the general case looks like, but the professors I have worked with (all of whom do applied-ish research at a top research university) are constantly barraged by more e-mails than they can possibly respond to. I suspect that as a result they limit communication to sources that they know will be fruitful.

Other professors in more theoretical fields (like pure math) don't seem to have this problem, so I'm not sure why they don't do what you suggest (although some of them do). And I am not sure that all professors run into the same problem as I have described, even in applied fields.

comment by Desrtopa · 2011-01-22T06:17:53.494Z · score: 0 (0 votes) · LW(p) · GW(p)

"In the past" as in before they had alternative methods of long distance communication, or after?

comment by lix · 2010-12-07T19:08:51.831Z · score: 22 (22 votes) · LW(p) · GW(p)

After several years as a post-doc I am facing a similar choice.

If I understand correctly you have no research experience so far. I'd strongly suggest completing a doctorate because:

  • you can use that time to network and establish a publication record
  • most advisors will allow you as much freedom as you can handle, particularly if you can obtain a scholarship so you are not sucking their grant money. Choose your advisor carefully.
  • you may well get financial support that allows you to work full time on your research for at least 4 years with minimal accountability
  • if you want, you can practice teaching and grant applications to taste how onerous they would really be
  • once you have a doctorate and some publications, it probably won't be hard to persuade a professor to offer you an honorary (unpaid) position which gives you an institutional affiliation, library access, and maybe even a desk. Then you can go ahead with freelancing, without most of the disadvantages you cite.

You may also be able to continue as a post-doc with almost the same freedom. I have done this for 5 years. It cannot last forever, though, and the longer you go on, the more people will expect you to devote yourself to grant applications, teaching and management. That is why I'm quitting.

comment by Kaj_Sotala · 2010-12-07T19:18:29.455Z · score: 4 (4 votes) · LW(p) · GW(p)

once you have a doctorate and some publications, it probably won't be hard to persuade a professor to offer you an honorary (unpaid) position which gives you an institutional affiliation, library access, and maybe even a desk. Then you can go ahead with freelancing, without most of the disadvantages you cite.

Huh. That's a fascinating idea, one which had never occurred to me. I'll have to give this suggestion serious consideration.

comment by billswift · 2010-12-07T21:45:23.741Z · score: 7 (7 votes) · LW(p) · GW(p)

Ron Gross's The Independent Scholar's Handbook has lots of ideas like this. A lot of the details in it won't be too useful, since it is mostly about history and the humanities, but quite a bit will be. It is also a bit too old to cover more recent developments, since there was almost no internet in 1993.

comment by James_Miller · 2010-12-07T22:40:55.080Z · score: 3 (3 votes) · LW(p) · GW(p)

Or become a visiting professor in which you teach one or two courses a year in return for modest pay, affiliation and library access.

comment by Louie · 2010-12-10T10:35:15.057Z · score: 15 (15 votes) · LW(p) · GW(p)

I'm putting the finishing touches on a future Less Wrong post about the overwhelming desirability of casually working in Australia for 1-2 years vs "whatever you were planning on doing instead". It's designed for intelligent people who want to earn more money, have more free time, and have a better life than they would realistically be able to get in the US or any other 1st world nation without a six-figure, part-time career... something which doesn't exist. My world saving article was actually just a prelim for this.

comment by Alicorn · 2010-12-10T13:07:15.970Z · score: 12 (12 votes) · LW(p) · GW(p)

Are you going to accompany the "this is cool" part with a "here's how" part? I estimate that would cause it to influence an order of magnitude more people, by removing an inconvenience that looks at least trivial and might be greater.

comment by David_Gerard · 2010-12-10T14:38:50.793Z · score: 3 (3 votes) · LW(p) · GW(p)

I'm now thinking of why Australian readers should go to London and live in a cramped hovel in an interesting place. I feel like I've moved to Ankh-Morpork.

comment by Mardonius · 2010-12-13T14:56:40.167Z · score: 1 (1 votes) · LW(p) · GW(p)

Simple! Tell them they too can follow the way of Lu-Tze, The Sweeper! For is it not said, "Don't knock a place you've never been to"

comment by erratio · 2010-12-11T01:46:45.465Z · score: 2 (2 votes) · LW(p) · GW(p)

As someone already living in Australia and contemplating a relocation to the US for study purposes, I would be extremely interested in this article

comment by David_Gerard · 2010-12-11T02:10:04.819Z · score: 1 (1 votes) · LW(p) · GW(p)

Come to England! It's small, cramped and expensive! The stuff here is amazing, though.

(And the GBP is taking a battering while the AUD is riding high.)

comment by Desrtopa · 2010-12-11T02:14:24.295Z · score: 0 (0 votes) · LW(p) · GW(p)

I was under the impression that England was quite difficult to emigrate to?

comment by David_Gerard · 2010-12-11T02:22:13.623Z · score: 0 (0 votes) · LW(p) · GW(p)

My mother's English, so I'm British by paperwork. Four-year working or study visas for Australians without a British parent are not impossible and can also be converted to a working one or even permanent residency if whatever hoops are in place at the time happen to suit.

comment by diegocaleiro · 2010-12-11T00:44:09.564Z · score: 1 (1 votes) · LW(p) · GW(p)

Hope face.

Let's see if you can beat my next 2 years in Brazil..... I've been hoping for something to come along (trying to defeat my status quo bias) but it has been really hard to find something comparable.

In fact, if this comment is upvoted enough, I might write a "How to be effective from wherever you are currently outside 1st world countries" post...... because if only I knew, life would be just, well, perfect. I assume many other latinos, africans, filipinos, and slavic fellows feel the same way!

comment by lukeprog · 2011-01-21T01:00:21.600Z · score: 0 (0 votes) · LW(p) · GW(p)

Louie? I was thinking about this years ago and would love to know more details. Hurry up and post it! :)

comment by katydee · 2010-12-10T11:11:12.090Z · score: 0 (0 votes) · LW(p) · GW(p)

Color me very interested!

comment by wedrifid · 2010-12-09T21:06:30.155Z · score: 15 (15 votes) · LW(p) · GW(p)

What's frustrating is I would have had no idea it was deleted - and just assumed it wasn't interesting to anyone - had I not checked after reading the above. I'd much rather be told to delete the relevant portions of the comment; let's at least have precise censorship!

Wow. Even the people being censored don't know it. That's kinda creepy!

This comment led me to discover that quite a long comment I made a little bit ago had been deleted entirely.

How did you work out that it had been deleted? Just by logging out, looking and trying to remember where you had stuff posted?

comment by Vladimir_Nesov · 2010-12-09T21:09:23.582Z · score: 17 (17 votes) · LW(p) · GW(p)

I think it's a standard tool: to the trolls, trollish comments simply look like they're being ignored. But I think it's impolite to delete comments made in good faith without notification and usable guidelines for cleaning up and reposting. (Hint hint.)

comment by Jack · 2010-12-09T21:15:09.378Z · score: 3 (3 votes) · LW(p) · GW(p)

How did you work out that it had been deleted? Just by logging out, looking and trying to remember where you had stuff posted?

I only made one comment on the subject and I was rather confused that it was being ignored. I also knew I might have said too much about the Roko post and actually included a sentence saying that if I crossed the line I'd appreciate being told to edit it instead of having the entire thing deleted. So I just checked that one comment in particular. If other comments of mine have been deleted I wouldn't know about it, though this was the only comment in which I have discussed the Roko post.

comment by [deleted] · 2010-12-09T21:08:02.313Z · score: 3 (3 votes) · LW(p) · GW(p)

I doubt that this is a deliberate feature.

comment by Wei_Dai · 2011-01-23T20:28:24.902Z · score: 11 (11 votes) · LW(p) · GW(p)

Consider taking a job as a database/web developer at a university department. This gets you around journal paywalls, and is a low-stress job (assuming you have or can obtain above-average coding skills) that leaves you plenty of time to do your research. (My wife has such a job.) I'm not familiar with freelance journalism at all, but I'd still guess that going the software development route is lower risk.

Some comments on your list of advantages/disadvantages:

  • Harder to network effectively. - I guess this depends on what kind of research you want to do. For the areas I've been interested in, networking does not seem to matter much (unless you count participating in online forums as networking :).
  • Journals might be biased against freelance researchers. - I publish my results online, informally, and somehow they've usually found an interested audience. Also, the journals I'm familiar with require anonymous submissions. Is this not universal?
  • Harder to combat akrasia. - Actually, might be easier.

A couple other advantages of the non-traditional path:

  • If you get bored you can switch topics easily.
  • I think it's crazy to base one's income on making research progress. How do you stay objective when you depend on your ideas being accepted as correct for food and shelter? Also, you'd be forced to pick research goals that have high probability of success (so you can publish and keep your job) instead of high expected benefit for humanity (or for your intellectual interests).
comment by Roko · 2010-12-10T19:23:36.007Z · score: 10 (16 votes) · LW(p) · GW(p)

Well, I guess this is our true point of disagreement. I went to the effort of finding out a lot, went to SIAI and Oxford to learn even more, and in the end I am left seriously disappointed by all this knowledge. In the end it all boils down to:

"most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed, and you almost certainly fail to have an effect anyway. And by the way the future is an impending train wreck"

I feel quite strongly that this knowledge is not a worthy thing to have sunk 5 years of my life into getting. I don't know, XiXiDu, you might prize such knowledge, including all the specifics of how that works out exactly.

If you really strongly value the specifics of this, then yes you probably would on net benefit from the censored knowledge, the knowledge that was never censored because I never posted it, and the knowledge that I never posted because I was never trusted with it anyway. But you still probably won't get it, because those who hold it correctly infer that the expected value of releasing it is strongly negative from an altruist's perspective.

comment by WrongBot · 2010-12-10T20:30:22.104Z · score: 30 (30 votes) · LW(p) · GW(p)

The future is probably an impending train wreck. But if we can save the train, then it'll grow wings and fly up into space while lightning flashes in the background and Dragonforce play a song about fiery battlefields or something. We're all stuck on the train anyway, so saving it is worth a shot.

I hate to see smart people who give a shit losing to despair. This is still the most important problem and you can still contribute to fixing it.

TL;DR: I want to give you a hug.

comment by Roko · 2010-12-10T23:35:25.723Z · score: -3 (5 votes) · LW(p) · GW(p)

We're all stuck on the train anyway, so saving it is worth a shot.

I disagree with this argument. Pretty strongly. No selfish incentive to speak of.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T22:17:58.167Z · score: 12 (12 votes) · LW(p) · GW(p)

most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed,

So? They're just kids!

(or)

He glanced over toward his shoulder, and said, "That matter to you?"

Caw!

He looked back up and said, "Me neither."

comment by Roko · 2010-12-10T22:25:41.024Z · score: 4 (4 votes) · LW(p) · GW(p)

I guess I shouldn't complain that this doesn't bother you, since you are, in fact, helping me by doing what you do and being very good at it, but that doesn't stop it being demotivating for me! I'll see what I can do regarding quant jobs.

comment by Jack · 2010-12-10T22:25:25.887Z · score: 0 (2 votes) · LW(p) · GW(p)

I liked the first response better.

comment by Jack · 2010-12-10T20:12:49.421Z · score: 3 (3 votes) · LW(p) · GW(p)

Upvoted for the excellent summary!

"most people are irrational, hypocritical and selfish, if you try and tell them they shoot the messenger, and if you try and do anything you bear all the costs, internalize only tiny fractions of the value created if you succeed, and you almost certainly fail to have an effect anyway. And by the way the future is an impending train wreck"

comment by katydee · 2010-12-10T20:14:28.023Z · score: 4 (4 votes) · LW(p) · GW(p)

I'm curious about the "future is an impending train wreck" part. That doesn't seem particularly accurate to me.

comment by Roko · 2010-12-10T20:18:52.055Z · score: 1 (3 votes) · LW(p) · GW(p)

Maybe it will all be OK. Maybe the trains fly past each other on separate tracks. We don't know. There sure as hell isn't a driver, though. All the inside-view evidence points to bad things, with the exception that Big Worlds could turn out nicely. Or horribly.

comment by timtyler · 2010-12-10T20:56:57.031Z · score: 0 (4 votes) · LW(p) · GW(p)

Perhaps try this one: The Rational Optimist: How Prosperity Evolves

comment by timtyler · 2010-12-10T19:30:50.650Z · score: 3 (3 votes) · LW(p) · GW(p)

That doesn't sound right to me. Indeed, it sounds as though you are depressed :-(

Unsolicited advice over the public internet is rather unlikely to help - but maybe focus for a bit on what you want - and the specifics of how to get to there.

comment by katydee · 2010-12-10T19:28:35.872Z · score: 3 (3 votes) · LW(p) · GW(p)

This isn't meant as an insult, but why did it take you 5 years of dedicated effort to learn that?

comment by Roko · 2010-12-10T19:32:49.854Z · score: 4 (4 votes) · LW(p) · GW(p)

Specifics. Details. The lesson of science is that details can sometimes change the overall conclusion. Also some amount of nerdyness meaning that the statements about human nature weren't obvious to me.

comment by Manfred · 2010-12-07T18:23:29.366Z · score: 9 (11 votes) · LW(p) · GW(p)

The largest disadvantage to not having, essentially, an apprenticeship is the stuff you don't learn.

Now, if you want to research something where all you need is a keen wit, and there's not a ton of knowledge for you to pick up before you start... sure, go ahead. But those topics are few and far between. (EDIT: oh, LW-ish stuff. Meh. Sure, then, I guess. I thought you meant researching something hard >:DDDDD

No, but really, if smart people have been doing research there for 50 years and we don't have AI, that means that "seems easy to make progress" is a dirty lie. It may mean that other people haven't learned much to teach you, though - you should put some actual effort (get responses from at least two experts) into finding out if this is the case)

Usually, an apprenticeship will teach you:

  • What needs to be done in your field.

  • How to write, publicize and present your work. The communication protocols of the community. How to access the knowledge of the community.

  • How to use all the necessary equipment, including the equipment that builds other equipment.

  • How to be properly rigorous - a hard one in most fields, you have to make it instinctual rather than just known.

  • The subtle tricks an experienced researcher uses to actually do research - all sorts of things you might not have noticed on your own.

  • And more!

comment by Roko · 2010-12-07T17:32:11.445Z · score: 9 (9 votes) · LW(p) · GW(p)

Another idea is the "Bostrom Solution", i.e. be so brilliant that you can find a rich guy to just pay for you to have your own institute at Oxford University.

Then there's the "Reverse Bostrom Solution": realize that you aren't Bostrom-level brilliant, but that you could accrue enough money to pay for an institute for somebody else who is even smarter and would work on what you would have worked on. (FHI costs $400k/year, which isn't such a huge amount as to be unattainable by Kaj or a few Kaj-like entities collaborating)

comment by shokwave · 2010-12-07T17:39:10.268Z · score: 4 (6 votes) · LW(p) · GW(p)

the "Reverse Bostrom Solution"

Sounds like a good bet even if you are brilliant. Make money, use money to produce academic institute, do your research in concert with academics at your institute. This solves all problems of needing to be part of academia, and also solves the problem of academics doing lots of unnecessary stuff - at your institute, academics will not be required to do unnecessary stuff.

comment by Roko · 2010-12-07T17:46:20.425Z · score: 9 (14 votes) · LW(p) · GW(p)

Maybe. The disadvantage is lag time, of course. Discount rate for Singularity is very high. Assume that there are 100 years to the singularity, and that P(success) is linearly decreasing in lag time; then every second approximately 25 galaxies are lost, assuming that the entire 80 billion galaxies' fate is decided then.

25 galaxies per second. Wow.

comment by PeerInfinity · 2010-12-12T00:56:18.324Z · score: 5 (5 votes) · LW(p) · GW(p)

I'm surprised that no one has asked Roko where he got these numbers from.

Wikipedia says that there are about 80 billion galaxies in the "observable universe", so that part is pretty straightforward. Though there's still the question of why all of them are being counted, when most of them probably aren't reachable with slower-than-light travel.

But I still haven't found any explanation for the "25 galaxies per second". Is this the rate at which the galaxies burn out? Or the rate at which something else causes them to be unreachable? Is it the number of galaxies, multiplied by the distance to the edge of the observable universe, divided by the speed of light?

calculating...

Wikipedia says that the comoving distance from Earth to the edge of the observable universe is about 14 billion parsecs (46 billion light-years short scale, i.e. 4.6 × 10^10 light years) in any direction.

Google Calculator says 80 billion galaxies / 46 billion light years = 1.73 galaxies per year, or 5.48 × 10^-8 galaxies per second

so no, that's not it.

If I'm going to allow my mind to be blown by this number, I would like to know where the number came from.

comment by Caspian · 2010-12-12T02:54:00.817Z · score: 2 (2 votes) · LW(p) · GW(p)

I also took a while to understand what was meant, so here is my understanding of the meaning:

Assumptions: There will be a singularity in 100 years. If the proposed research is started now it will be a successful singularity, e.g. friendly AI. If the proposed research isn't started by the time of the singularity, it will be a unsuccessful (negative) singularity, but still a singularity. The probability of the successful singularity linearly decreases with the time when the research starts, from 100 percent now, to 0 percent in 100 years time.

A 1 in 80 billion chance of saving 80 billion galaxies is equivalent to definitely saving 1 galaxy, and the linearly decreasing chance of a successful singularity affecting all of them is equivalent to a linearly decreasing number being affected. 25 galaxies per second is the rate of that decrease.
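The figure in the explanation above is easy to reproduce. A quick back-of-the-envelope check, using nothing beyond the assumptions already stated in the thread (80 billion galaxies, 100 years, linear decline):

```python
# Check the "25 galaxies per second" figure: 80 billion galaxies whose
# fate is assumed to be decided 100 years from now, with P(success)
# falling linearly to zero over that period. A linear decline means each
# second of delay forfeits an equal slice of the expected galaxies.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7

galaxies = 80e9
years_to_singularity = 100

galaxies_per_second = galaxies / (years_to_singularity * SECONDS_PER_YEAR)
print(round(galaxies_per_second, 1))  # -> 25.4
```

So "25 galaxies per second" is just the total divided by the number of seconds in a century, confirming Caspian's reading (and Roko's own clarification below in the thread).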

comment by Roko · 2010-12-12T00:58:20.545Z · score: 2 (2 votes) · LW(p) · GW(p)

I meant if you divide the number of galaxies by the number of seconds to an event 100 years from now. Yes, not all reachable. Probably need to discount by an order of magnitude for reachability at lightspeed.

comment by FAWS · 2010-12-12T02:00:39.462Z · score: 0 (0 votes) · LW(p) · GW(p)

Hmm, by the second wikipedia link there is no basis for the 80 billion galaxies since only a relatively small fraction of the observable universe (4.2%?) is reachable if limited by the speed of light, and if not the whole universe is probably at least 10^23 times larger (by volume or by radius?).

comment by shokwave · 2010-12-07T18:01:32.536Z · score: 3 (7 votes) · LW(p) · GW(p)

Guh. Every now and then something reminds me of how important the Singularity is. Time to reliable life extension is measured in lives per minute, time to Singularity is measured in galaxies per second.

comment by MartinB · 2010-12-08T10:22:04.203Z · score: 1 (1 votes) · LW(p) · GW(p)

Now that's a way to eat up your brain.

comment by Roko · 2010-12-07T18:04:43.650Z · score: 1 (7 votes) · LW(p) · GW(p)

Well conservatively assuming that each galaxy supports lives at 10^9 per sun per century (1/10th of our solar system), that's already 10^29 lives per second right there.

And assuming utilization of all the output of the sun for living, i.e. some kind of giant spherical shell of habitable land, we can add another 12 orders of magnitude straight away. Then if we upload people that's probably another 10 orders of magnitude.

Probably up to 10^50 lives per second, without assuming any new physics could be discovered (a dubious assumption). If instead we assume that quantum gravity gives us as much of an increase in power as going from newtonian physics to quantum mechanics did, we can pretty much slap another 20 orders of magnitude onto it, with some small probability of the answer being "infinity".

comment by XFrequentist · 2010-12-07T20:46:32.782Z · score: 2 (4 votes) · LW(p) · GW(p)

In what I take to be a positive step towards viscerally conquering my scope neglect, I got a wave of chills reading this.

comment by [deleted] · 2010-12-08T14:02:04.431Z · score: 1 (3 votes) · LW(p) · GW(p)

assuming that the entire 80 billion galaxies' fate is decided then.

What's your P of "the fate of all 80 billion galaxies will be decided on Earth in the next 100 years"?

comment by Vladimir_Nesov · 2010-12-08T14:31:26.154Z · score: 0 (0 votes) · LW(p) · GW(p)

About 10% (if we ignore existential risk, which is a way of resolving the ambiguity of "will be decided"). Multiply that by opportunity cost of 80 billion galaxies.

comment by David_Gerard · 2010-12-08T14:58:15.567Z · score: 1 (1 votes) · LW(p) · GW(p)

Could you please detail your working to get to this 10% number? I'm interested in how one would derive it, in detail.

comment by Vladimir_Nesov · 2010-12-08T15:20:26.774Z · score: 0 (2 votes) · LW(p) · GW(p)

I read the question as asking about the probability that we'll be finishing an FAI project in the next 100 years. Dying of an engineered virus doesn't seem like an example of "deciding the fate of 80 billion galaxies", although it's determining that fate.

FAI looks really hard. Improvements in mathematical understanding to bridge comparable gaps in understanding can take at least many decades. I don't expect a reasonable attempt at actually building a FAI anytime soon (crazy potentially world-destroying AGI projects go in the same category as engineered viruses). One possible shortcut is ems, that effectively compress the required time, but I estimate that they probably won't be here for at least 80 more years, and then they'll still need time to become strong enough and break the problem. (By that time, biological intelligence amplification could take over as a deciding factor, using clarity of thought instead of lots of time to think.)

comment by [deleted] · 2010-12-09T01:00:46.958Z · score: 0 (2 votes) · LW(p) · GW(p)

My question has only a little bit to do with the probability that an AI project is successful. It has mostly to do with P(universe goes to waste | AI projects are unsuccessful). For instance, couldn't the universe go on generating human utility after humans go extinct?

comment by ata · 2010-12-09T01:05:09.311Z · score: 1 (3 votes) · LW(p) · GW(p)

For instance, couldn't the universe go on generating human utility after humans go extinct?

How? By coincidence?

(I'm assuming you also mean no posthumans, if humans go extinct and AI is unsuccessful.)

comment by [deleted] · 2010-12-09T01:23:45.848Z · score: 2 (2 votes) · LW(p) · GW(p)

Aliens. I would be pleased to learn that something amazing was happening (or was going to happen, long "after" I was dead) in one of those galaxies. Since it's quite likely that something amazing is happening in one of those 80 billion galaxies, shouldn't I be pleased even without learning about it?

Of course, I would be correspondingly distressed to learn that something horrible was happening in one of those galaxies.

comment by Roko · 2010-12-08T14:15:18.535Z · score: 0 (4 votes) · LW(p) · GW(p)

Some complexities regarding "decided" since physics is deterministic, but hand waving that aside, I'd say 50%.

comment by [deleted] · 2010-12-09T00:50:42.111Z · score: 1 (1 votes) · LW(p) · GW(p)

With high probability, many of those galaxies are already populated. Is that irrelevant?

comment by Roko · 2010-12-09T12:24:19.230Z · score: 0 (2 votes) · LW(p) · GW(p)

I disagree. I claim that the probability of >50% of the universe being already populated (using the space of simultaneity defined by a frame of reference comoving with earth) is maybe 10%.

comment by [deleted] · 2010-12-09T13:33:40.985Z · score: 0 (2 votes) · LW(p) · GW(p)

"Already populated" is a red herring. What's the probability that >50% of the universe will ever be populated? I don't see any reason for it to be sensitive to how well things go on Earth in the next 100 years.

comment by Roko · 2010-12-09T18:32:33.540Z · score: 1 (1 votes) · LW(p) · GW(p)

I think it is likely that we are the only spontaneously-created intelligent species in the entire 4-manifold that is the universe, space and time included (excluding species which we might create in the future, of course).

comment by [deleted] · 2010-12-09T18:58:44.563Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm curious to know how likely, and why. But do you agree that aliens are relevant to evaluating astronomical waste?

comment by timtyler · 2010-12-09T18:37:12.986Z · score: 0 (0 votes) · LW(p) · GW(p)

That seems contrary to the http://en.wikipedia.org/wiki/Self-Indication_Assumption

Do you have a critique - or a supporting argument?

comment by Roko · 2010-12-09T18:38:43.454Z · score: 3 (3 votes) · LW(p) · GW(p)

Yes, I have a critique. Most of anthropics is gibberish. Until someone makes anthropics work, I refuse to update on any of it. (Apart from the bits that are commonsensical enough to derive without knowing about "anthropics", e.g. that if your fishing net has holes 2 inches big, don't expect to catch fish smaller than 2 inches wide.)

comment by timtyler · 2010-12-09T19:59:08.424Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't think you can really avoid anthropic ideas - or the universe stops making sense. Some anthropic ideas can be challenging - but I think we have got to try.

Anyway, you did the critique - but didn't go for a supporting argument. I can't think of very much that you could say. We don't have very much idea yet about what's out there - and claims to know such things just seem over-confident.

comment by Roko · 2010-12-09T20:18:16.549Z · score: 1 (1 votes) · LW(p) · GW(p)

Basically Rare Earth seems to me to be the only tenable solution to Fermi's paradox.

comment by timtyler · 2010-12-09T20:26:03.500Z · score: 0 (2 votes) · LW(p) · GW(p)

Fermi's paradox implying no aliens surely applies within-galaxy only. Many galaxies are distant, and intelligent life forming there concurrently (or long before us) is quite compatible with it not having arrived on our doorsteps yet - due to the speed of light limitation.

If you think we should be able to at least see life in distant galaxies, then, in short, not really - or at least we don't know enough to say yea or nay on that issue with any confidence yet.

comment by Roko · 2010-12-09T20:44:18.452Z · score: 0 (2 votes) · LW(p) · GW(p)

The Andromeda Galaxy is 2.5 million light-years away. The universe is about 1250 million years old. Therefore that's not far enough away to protect us from colonizing aliens travelling at 0.5c or above.

comment by timtyler · 2010-12-09T20:57:50.015Z · score: 2 (2 votes) · LW(p) · GW(p)

The universe is about 13,750 million years old. The Fermi argument suggests that - if there were intelligent aliens in this galaxy, they should probably have filled it by now - unless they originated very close to us in time - which seems unlikely. The argument applies much more weakly to galaxies, because they are much further away, and they are separated from each other by huge regions of empty space. Also, the Andromeda Galaxy is just one galaxy. Say only one galaxy in 100 has intelligent life - and the Andromeda Galaxy isn't among them. That bumps the required distance to be travelled up to 10 million light years or so.

Even within this galaxy, the Fermi argument is not that strong. Maybe intelligent aliens formed in the last billion years, and haven't made it here yet - because space travel is tricky, and 0.1c is about the limit. The universe is only about 14 billion years old. For some of that time there were not many second-generation stars. The odds are against there being aliens nearby - but they are not that heavily stacked. For other galaxies, the argument is much, much less compelling.

comment by [deleted] · 2010-12-09T18:55:14.265Z · score: 0 (0 votes) · LW(p) · GW(p)

There are strained applications of anthropics, like the doomsday argument. "What happened here might happen elsewhere" is much more innocuous.

comment by [deleted] · 2010-12-09T18:58:45.079Z · score: 1 (1 votes) · LW(p) · GW(p)

There are some more practical and harmless applications as well. In Nick Bostrom's Anthropic Bias, for example, there is an application of the Self-Sampling Assumption to traffic analysis.

comment by timtyler · 2010-12-09T19:53:48.007Z · score: 1 (1 votes) · LW(p) · GW(p)

Bostrom says: "Cars in the next lane really do go faster"

comment by Vladimir_Nesov · 2010-12-09T18:40:13.053Z · score: 0 (2 votes) · LW(p) · GW(p)

I agree.

comment by [deleted] · 2010-12-09T18:46:12.129Z · score: 2 (2 votes) · LW(p) · GW(p)

Even Nick Bostrom, who is arguably the leading expert on anthropic problems, rejects SIA for a number of reasons (see his book Anthropic Bias). That alone is a pretty big blow to its credibility.

comment by timtyler · 2010-12-09T19:49:53.818Z · score: 0 (0 votes) · LW(p) · GW(p)

That is curious. Anyway, the self-indication assumption seems fairly straightforward (as much as any anthropic reasoning is, anyway). The critical material from Bostrom on the topic I have read seems unpersuasive. He doesn't seem to "get" the motivation for the idea in the first place.

comment by Kevin · 2010-12-09T14:28:22.493Z · score: 0 (0 votes) · LW(p) · GW(p)

If you think there is a significant probability that an intelligence explosion is possible or likely, then that question is sensitive to how well things go on Earth in the next 100 years.

comment by [deleted] · 2010-12-09T15:06:06.106Z · score: 3 (3 votes) · LW(p) · GW(p)

However likely they are, I expect intelligence explosions to be evenly distributed through space and time. If 100 years from now Earth loses by a hair, there are still plenty of folks around the universe who will win or have won by a hair. They'll make whatever use of the 80 billion galaxies that they can--will they be wasting them?

If Earth wins by a hair, or by a lot, we'll be competing with those folks. This also significantly reduces the opportunity cost Roko was referring to.

comment by timtyler · 2010-12-08T10:03:24.582Z · score: 1 (5 votes) · LW(p) · GW(p)

That seems like a rather exaggerated sense of importance. It may be a fun fantasy in which the fate of the entire universe hangs in the balance in the next century - but do bear in mind the disconnect between that and the real world.

comment by shokwave · 2010-12-08T15:33:28.715Z · score: 3 (3 votes) · LW(p) · GW(p)

the disconnect between that and the real world.

Out of curiosity: what evidence would convince you that the fate of the entire universe does hang in the balance?

comment by Manfred · 2010-12-08T22:16:19.166Z · score: 2 (2 votes) · LW(p) · GW(p)

No human-comparable aliens, for one.

Which seems awfully unlikely, the more we learn about solar systems.

comment by timtyler · 2010-12-08T22:47:42.867Z · score: -1 (1 votes) · LW(p) · GW(p)

"Convince me" - with some unspecified level of confidence? That is not a great question :-|

We lack knowledge of the existence (or non-existence) of aliens in other galaxies. Until we have such knowledge, our uncertainty on this matter will necessarily be high - and we should not be "convinced" of anything.

comment by shokwave · 2010-12-09T04:49:23.589Z · score: 1 (1 votes) · LW(p) · GW(p)

What evidence would convince you, with 95% confidence, that the fate of the universe hangs in the balance in this next century on Earth?

You may specify evidence such as "strong evidence that we are completely alone in the universe" even if you think it is unlikely we will get such evidence.

comment by timtyler · 2010-12-09T07:20:31.244Z · score: -1 (1 votes) · LW(p) · GW(p)

I did get the gist of your question the first time - and answered accordingly. The question takes us far into counterfactual territory, though.

comment by shokwave · 2010-12-09T07:49:11.688Z · score: 1 (1 votes) · LW(p) · GW(p)

I was just curious to see if you rejected the fantasy on principle, or if you had other reasons.

comment by Larks · 2010-12-07T17:45:35.407Z · score: 1 (1 votes) · LW(p) · GW(p)

Unfortunately, FHI seems to have filled the vacancies it advertised earlier this month.

comment by Alexandros · 2010-12-07T20:09:00.531Z · score: 1 (1 votes) · LW(p) · GW(p)

Are you talking about these? (http://www.fhi.ox.ac.uk/news/2010/vacancies) This seems odd, the deadline for applications is on Jan 12th.

comment by Larks · 2010-12-08T20:57:55.932Z · score: 0 (0 votes) · LW(p) · GW(p)

Oh yes - strange, I swear it said no vacancies...

comment by Roko · 2010-12-07T17:47:23.438Z · score: 0 (0 votes) · LW(p) · GW(p)

Sure, so this favors the "Create a new James Martin" strategy.

comment by Bongo · 2010-12-08T12:14:46.601Z · score: 8 (16 votes) · LW(p) · GW(p)

(I would have liked to reply to the deleted comment, but you can't reply to deleted comments so I'll reply to the repost.)

  • EDIT: Roko reveals that he was actually never asked to delete his comment! Disregard parts of the rest of this comment accordingly.

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause

is not to attempt to erase it, even if that was possible, but to reveal the context. The context, supposedly, would make it seem less scandalous - for example, maybe it was a private dicussion about philosophical hypotheticals. If it wouldn't, that's a bad sign about SIAI.

The fact that that erasure was the reaction suggests that there is no redeeming context!

That someone asked Roko to erase his comment isn't a very bad sign, since it's enough that one person didn't understand the reasoning above for that to happen. The fact that Roko conceded is a bad sign, though.

Now SIAI should save face not by asking a moderator to delete wfg's reposts, but by revealing the redeeming context in which the scandalous remarks that Roko alluded to were made.

comment by CarlShulman · 2010-12-08T18:43:02.479Z · score: 26 (28 votes) · LW(p) · GW(p)

Roko may have been thinking of [just called him, he was thinking of it] a conversation we had when he and I were roommates in Oxford while I was visiting the Future of Humanity Institute, and frequently discussed philosophical problems and thought experiments. Here's the (redeeming?) context:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.

Instead, I typically advocate careful introspection and the use of something like Nick Bostrom's parliamentary model:

The idea here is that moral theories get more influence the more probable they are; yet even a relatively weak theory can still get its way on some issues that the theory thinks are extremely important, by sacrificing its influence on other issues that other theories deem more important. For example, suppose you assign 10% probability to total utilitarianism and 90% to moral egoism (just to illustrate the principle). Then the Parliament would mostly take actions that maximize egoistic satisfaction; however, it would make some concessions to utilitarianism on issues that utilitarianism thinks are especially important. In this example, the person might donate some portion of their income to existential risks research and otherwise live completely selfishly.
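The parliamentary model quoted above can be sketched numerically. This is a toy illustration only, with assumptions of my own: each theory's voting weight equals its credence, and a theory's effective vote on an issue is its weight times how much it cares about that issue (the issue names and stakes are made up; the 90%/10% split follows the quote's example):

```python
# Toy sketch of Bostrom's parliamentary model: credence-weighted theories
# "win" the issues they care about most, relative to their weight.

credences = {"egoism": 0.9, "utilitarianism": 0.1}

# Stakes: how much each theory cares about each issue, on an arbitrary
# 0-1 scale (hypothetical numbers, for illustration only).
stakes = {
    "donate_to_xrisk": {"egoism": 0.05, "utilitarianism": 1.0},
    "career_choice":   {"egoism": 1.0,  "utilitarianism": 0.2},
}

results = {}
for issue, care in stakes.items():
    # Effective vote on an issue = credence x how much the theory cares.
    votes = {theory: credences[theory] * care[theory] for theory in credences}
    results[issue] = max(votes, key=votes.get)

print(results)  # -> {'donate_to_xrisk': 'utilitarianism', 'career_choice': 'egoism'}
```

Despite holding only 10% of the credence, utilitarianism carries the issue it cares about overwhelmingly, while egoism dominates everyday choices, which is the "mostly egoism, with concessions" outcome the quote describes.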

In the conversation with Roko, we were discussing philosophical thought experiments (trolley-problem style, which may indeed be foolish) to get at 'real' preferences and values for such an exercise. To do that, one often does best to adopt the device of the True Prisoner's Dilemma and select positive and negative payoffs that actually have emotional valence (as opposed to abstract tokens). For positive payoffs, we used indefinite lifespans of steady "peak experiences" involving discovery, health, status, and elite mates. For negative payoffs we used probabilities of personal risk of death (which comes along with almost any effort, e.g. driving to places) and harms that involved pain and/or a decline in status (since these are separate drives). Since we were friends and roommates without excessive squeamishness, hanging out at home, we used less euphemistic language.

Neither of us was keen on huge sacrifices in Pascal's-Mugging-like situations, viewing altruism as only one part of our respective motivational coalitions, or one term in bounded utility functions. I criticized his past "cheap talk" of world-saving as a primary motivation, given that in less convenient possible worlds, it was more easily overcome than his phrasing signaled. I said he should scale back his claims of altruism to match the reality, in the way that I explicitly note my bounded do-gooding impulses.

We also differed in our personal views on the relative badness of torture, humiliation and death. For me, risk of death was the worst, which I was least willing to trade off in trolley-problem type cases to save others. Roko placed relatively more value on the other two, which I jokingly ribbed and teased him about.

In retrospect, I was probably a bit of a jerk in pushing (normative) Hansonian transparency. I wish I had been more careful to distinguish between critiquing a gap between talk and values, and critiquing the underlying values, and probably should just take wedrifid's advice on trolley-problem-type scenarios generally.

comment by waitingforgodel · 2010-12-09T03:16:21.653Z · score: 2 (12 votes) · LW(p) · GW(p)

First off, great comment -- interesting, and complex.

But, some things still don't make sense to me...

Assuming that what you described led to:

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause. I mean not actually, but, you know, in theory. Precommiting to being prepared to make a sacrifice that big. shrugs

  1. How did precommitting enter in to it?

  2. Are you prepared to be tortured or raped for the cause? Have you precommitted to it?

  3. Have other SIAI people you know of talked about this with you, have other SIAI people precommitted to it?

  4. What do you think of others who do not want to be tortured or raped for the cause?

Thanks, wfg

comment by CarlShulman · 2010-12-09T09:03:49.821Z · score: 18 (18 votes) · LW(p) · GW(p)

I find this whole line of conversation fairly ludicrous, but here goes:

Number 1. Time-inconsistency: we have different reactions about an immediate certainty of some bad than a future probability of it. So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others. In the actual instant of being tortured anyone would break, but people do choose courses of action that carry risk (every action does, to some extent), so the latter is more meaningful for such hypotheticals.

Number 2. I have driven and flown thousands of kilometers in relation to existential risk, increasing my chance of untimely death in a car accident or plane crash, so obviously I am willing to take some increased probability of death. I think I would prefer a given chance of being tortured to a given chance of death, so obviously I care enough to take at least some tiny risk from what I said above. As I also said above, I'm not willing to make very big sacrifices (big probabilities of such nasty personal outcomes) for tiny shifts in probabilities of big impersonal payoffs (like existential risk reduction). In realistic scenarios, that's what "the cause" would refer to. I haven't made any verbal or explicit "precommitment" or promises or anything like that.

In sufficiently extreme (and ludicrously improbable) trolley-problem style examples, e.g. "if you push this button you'll be tortured for a week, but if you don't then the Earth will be destroyed (including all your loved ones) if this fair coin comes up heads, and you have incredibly (impossibly?) good evidence that this really is the setup" I hope I would push the button, but in a real world of profound uncertainty, limited evidence, limited personal power (I am not Barack Obama or Bill Gates), and cognitive biases, I don't expect that to ever happen. I also haven't made any promises or oaths about that.

I am willing to give of my time and effort, and forgo the financial rewards of a more lucrative career, in exchange for a chance for efficient do-gooding, interaction with interesting people who share my values, and a meaningful project. Given diminishing returns to money in rich countries today, and the ease of obtaining money for folk with high human capital, those aren't big sacrifices, if they are sacrifices at all.

Number 3. SIAIers love to be precise and analytical and consider philosophical thought experiments, including ethical ones. I think most have views pretty similar to mine, with somewhat varying margins. Certainly Michael Vassar, the head of the organization, is also keen on recognizing one's various motives and living a balanced life, and avoiding fanatics. Like me, he actively advocates Bostrom-like parliamentary model approaches to combining self-concern with parochial and universalist altruistic feelings.

I have never heard anyone making oaths or promises to make severe sacrifices.

Number 4. This is a pretty ridiculous question. I think that's fine and normal, and I feel more comfortable with such folk than the alternative. I think people should not exaggerate that do-gooding is the most important thing in their life lest they deceive themselves and others about their willingness to make such choices, which I criticized Roko for.

comment by waitingforgodel · 2010-12-09T16:54:30.853Z · score: 6 (10 votes) · LW(p) · GW(p)

This sounds very sane, and makes me feel a lot better about the context. Thank you very much.

I very much like the idea that top SIAI people believe that there is such a thing as too much devotion to the cause (and, I'm assuming, actively talk people who are above that level down as you describe doing for Roko).

As someone who has demonstrated impressive sanity around these topics, you seem to be in a unique position to answer these questions with an above-average level-headedness:

  1. Do you understand the math behind the Roko post deletion?

  2. What do you think about the Roko post deletion?

  3. What do you think about future deletions?

comment by CarlShulman · 2010-12-09T21:53:14.878Z · score: 13 (17 votes) · LW(p) · GW(p)

Do you understand the math behind the Roko post deletion?

Yes, his post was based on (garbled versions of) some work I had been doing at FHI, which I had talked about with him while trying to figure out some knotty sub-problems.

What do you think about the Roko post deletion?

I think the intent behind it was benign, at least in that Eliezer had his views about the issue (which is more general, and not about screwed-up FAI attempts) previously, and that he was motivated to prevent harm to people hearing the idea and others generally. Indeed, he was explicitly motivated enough to take a PR hit for SIAI.

Regarding the substance, I think there are some pretty good reasons for thinking that the expected value (with a small probability of a high impact) of the info for the overwhelming majority of people exposed to it would be negative, although that estimate is unstable in the face of new info.

It's obvious that the deletion caused more freak-out and uncertainty than anticipated, leading to a net increase in people reading and thinking about the content compared to the counterfactual with no deletion. So regardless of the substance about the info, clearly it was a mistake to delete (which Eliezer also recognizes).

What do you think about future deletions?

Obviously, Eliezer is continuing to delete comments reposting on the topic of the deleted post. It seems fairly futile to me, but not entirely. I don't think that Less Wrong is made worse by the absence of that content as such, although the fear and uncertainty about it seem to be harmful. You said you were worried because it makes you uncertain about whether future deletions will occur and of what.

After about half an hour of trying, I can't think of another topic with the same sorts of features. There may be cases involving things like stalkers or bank PINs or 4chan attacks or planning illegal activities. Eliezer called on people not to discuss AI at the beginning of Less Wrong to help establish its rationality focus, and to back off from the gender warfare, but hasn't used deletion powers for such things.

Less Wrong has been around for 20 months. If we can rigorously carve out the stalker/PIN/illegality/spam/threats cases I would be happy to bet $500 against $50 that we won't see another topic banned over the next 20 months.

comment by Alicorn · 2010-12-09T22:05:59.120Z · score: 20 (22 votes) · LW(p) · GW(p)

Less Wrong has been around for 20 months. If we can rigorously carve out the stalker/PIN/illegality/spam/threats cases I would be happy to bet $500 against $50 that we won't see another topic banned over the next 20 months.

That sounds like it'd generate some perverse incentives to me.

comment by CarlShulman · 2010-12-09T22:08:25.164Z · score: 8 (8 votes) · LW(p) · GW(p)

Urk.

comment by TheOtherDave · 2010-12-09T22:13:05.316Z · score: 5 (5 votes) · LW(p) · GW(p)

clearly it was a mistake to delete (which Eliezer also recognizes).

Just to be clear: he recognizes this by comparison with the alternative of privately having the poster delete it themselves, rather than by comparison to not-deleting.

Or at least that was my understanding.

Regardless, thanks for a breath of clarity in this thread. As a mostly disinterested newcomer, I very much appreciated it.

comment by CarlShulman · 2010-12-09T23:27:47.692Z · score: 2 (2 votes) · LW(p) · GW(p)

Well, if counterfactually Roko hadn't wanted to take it down I think it would have been even more of a mistake to delete it, because then the author would have been peeved, not just the audience/commenters.

comment by TheOtherDave · 2010-12-10T03:12:22.435Z · score: 5 (5 votes) · LW(p) · GW(p)

Which is fine.

But Eliezer's comments on the subject suggest to me that he doesn't think that.

More specifically, they suggest that he thinks the most important thing is that the post not be viewable, and if we can achieve that by quietly convincing the author to take it down, great, and if we can achieve it by quietly deleting it without anybody noticing, great, and if we can't do either of those then we achieve it without being quiet, which is less great but still better than leaving it up.

And it seemed to me your parenthetical could be taken to mean that he agrees with you that deleting it would be a mistake in all of those cases, so I figured I would clarify (or let myself be corrected, if I'm misunderstanding).

comment by waitingforgodel · 2010-12-10T08:27:52.872Z · score: -1 (15 votes) · LW(p) · GW(p)

I should have taken this bet

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T08:29:26.439Z · score: 5 (7 votes) · LW(p) · GW(p)

Your post has been moved to the Discussion section, not deleted.

comment by CarlShulman · 2010-12-10T09:33:16.962Z · score: 0 (0 votes) · LW(p) · GW(p)

Looking at your recent post, I think Alicorn had a good point.

comment by TimFreeman · 2011-07-13T17:17:45.430Z · score: 2 (2 votes) · LW(p) · GW(p)

So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others.

I agree with your main point, but the thought experiment seems to be based on the false assumption that the risk of being raped or murdered is smaller than 1 in 10K if you stay at home. Wikipedia guesstimates that 1 in 6 women in the US are on the receiving end of attempted rape at some point, so someone who goes to a place with a 1 in 10K chance of being raped or murdered has probably improved their personal safety. To make a better thought experiment, I suppose you have to talk about the marginal increase in the rape or murder rate when working in the poor country compared to staying home, and perhaps you should stick to murder, since the rape rate is so high.

comment by wedrifid · 2010-12-09T09:32:34.551Z · score: 0 (2 votes) · LW(p) · GW(p)

You lost me at 'ludicrous'. :)

comment by waitingforgodel · 2010-12-09T16:54:46.530Z · score: 4 (8 votes) · LW(p) · GW(p)

but he won me back by answering anyway <3

comment by CarlShulman · 2010-12-09T09:36:50.121Z · score: 0 (0 votes) · LW(p) · GW(p)

How so?

comment by Bongo · 2010-12-08T18:51:16.710Z · score: 1 (1 votes) · LW(p) · GW(p)

Thanks!

comment by multifoliaterose · 2010-12-08T20:35:04.804Z · score: 0 (4 votes) · LW(p) · GW(p)

Great comment Carl!

comment by Nick_Tarleton · 2010-12-08T16:54:34.376Z · score: 6 (6 votes) · LW(p) · GW(p)

I don't think Roko should have been requested to delete his comment. I don't think Roko should have conceded to deleting his comment.

Roko was not requested to delete his comment. See this parallel thread. (I would appreciate it if you would edit your comment to note this, so readers who miss this comment don't have a false belief reinforced.) (ETA: thanks)

The correct reaction when someone posts something scandalous like

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause

is not to attempt to erase it, even if that was possible, but to reveal the context.... Now SIAI should save face not by asking a moderator to delete wfg's reposts....

Agreed (and I think the chance of wfg's reposts being deleted is very low, because most people get this). Unfortunately, I know nothing about the alleged event (Roko may be misdescribing it, as he misdescribed my message to him) or its context.

comment by Bongo · 2010-12-08T17:28:22.846Z · score: 1 (1 votes) · LW(p) · GW(p)

Roko said he was asked. You didn't ask him but maybe someone else did?

comment by Nick_Tarleton · 2010-12-08T17:59:11.544Z · score: 4 (6 votes) · LW(p) · GW(p)

Roko's reply to me strongly suggested that he interpreted my message as requesting deletion, and that I was the cause of him deleting it. I doubt anyone at SIAI would have explicitly requested deletion.

comment by Roko · 2010-12-08T18:09:05.182Z · score: 5 (7 votes) · LW(p) · GW(p)

I can confirm that I was not asked to delete the comment but did so voluntarily.

comment by Vladimir_Nesov · 2010-12-08T19:47:02.499Z · score: 6 (8 votes) · LW(p) · GW(p)

I think you are too trigger-happy.

comment by waitingforgodel · 2010-12-09T05:32:44.089Z · score: -6 (14 votes) · LW(p) · GW(p)

The wording here leaves weird wiggle room -- you're implying it wasn't Nick?

comment by Perplexed · 2010-12-08T18:10:43.292Z · score: 1 (13 votes) · LW(p) · GW(p)

I'm wondering whether you, Nick, have learned anything from this experience - something perhaps about how attempting to hide something is almost always counterproductive?

Of course, Roko contributed here by deleting the message, you didn't create this mess by yourself. But you sure have helped. :)

comment by Roko · 2010-12-08T18:12:53.197Z · score: 9 (11 votes) · LW(p) · GW(p)

Well, look, I deleted it of my own accord, but only after being prompted that it was a bad thing to have posted. Can we just drop this? It makes me look like even more of a troublemaker than I already look like, and all I really want to do is finish the efficient charity competition then get on with life outside teh intenetz.

comment by XiXiDu · 2010-12-09T14:03:26.725Z · score: 7 (9 votes) · LW(p) · GW(p)

Will you at least publicly state that you precommit, on behalf of CEV, to not apply negative incentives in this case? (Roko, Jul 24, 2010 1:37 PM)

This is very important. If the SIAI is the organisation to solve the friendly AI problem and implement CEV then it should be subject to public examination, especially if they ask for money.

comment by David_Gerard · 2010-12-09T14:32:05.299Z · score: 6 (6 votes) · LW(p) · GW(p)

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This is rather disappointing: if CEV is possible, then a non-artificial general intelligence should be able to implement it, at least partially. And we have those. The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans. However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

Addendum: I see others have been asking "but what do you actually mean?" for a couple of years now.

comment by Nick_Tarleton · 2010-12-09T17:34:45.448Z · score: 7 (7 votes) · LW(p) · GW(p)

The current evidence that anyone anywhere can implement CEV is two papers in six years that talk about it a bit. There appears to have been nothing else from SIAI and no-one else in philosophy appears interested.

If that's all there is for CEV in six years, and AI is on the order of thirty years away, then (approximately) we're dead.

This strikes me as a demand for particular proof. SIAI is small (and was much smaller until the last year or two), the set of people engaged in FAI research is smaller, Eliezer has chosen to focus on writing about rationality over research for nearly four years, and FAI is a huge problem, in which any specific subproblem should be expected to be underdeveloped at this early stage. And while I and others expect work to speed up in the near future with Eliezer's attention and better organization, yes, we probably are dead.

The reason for CEV is (as I understand it) the danger of the AI going FOOM before it cares about humans.

Somewhat nitpickingly, this is a reason for FAI in general. CEV is attractive mostly for moving as much work from the designers to the FAI as possible, reducing the potential for uncorrectable error, and being fairer than letting the designers lay out an object-level goal system.

This is rather disappointing: if CEV is possible, then a non-artificial general intelligence should be able to implement it, at least partially.... However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

This sounds interesting; do you think you could expand?

comment by David_Gerard · 2010-12-09T17:40:33.372Z · score: 2 (2 votes) · LW(p) · GW(p)

This strikes me as a demand for particular proof.

It wasn't intended to be - more incredulity. I thought this was a really important piece of the puzzle, so expected there'd be something at all by now. I appreciate your point: that this is a ridiculously huge problem and SIAI is ridiculously small.

However, human general intelligences don't go FOOM but should be able to do the work for CEV. If they know what that work is.

This sounds interesting; do you think you could expand?

I meant that, as I understand it, CEV is what is fed to the seed AI. Or the AI does the work to ascertain the CEV. It requires an intelligence to ascertain the CEV, but I'd think the ascertaining process would be reasonably set out once we had an intelligence on hand, artificial or no. Or the process to get to the ascertaining process.

I thought we needed the CEV before the AI goes FOOM, because it's too late after. That implies it doesn't take a superintelligence to work it out.

Thus: CEV would have to be a process that mere human-level intelligences could apply. That would be a useful process to have, and doesn't require first creating an AI.

I must point out that my statements on the subject are based in curiosity, ignorance and extrapolation from what little I do know, and I'm asking (probably annoyingly) for more to work with.

comment by Nick_Tarleton · 2010-12-09T17:47:30.369Z · score: 4 (4 votes) · LW(p) · GW(p)

"CEV" can (unfortunately) refer to either CEV the process of determining what humans would want if we knew more etc., or the volition of humanity output by running that process. It sounds to me like you're conflating these. The process is part of the seed AI and is needed before it goes FOOM, but the output naturally is neither, and there's no guarantee or demand that the process be capable of being executed by humans.

comment by David_Gerard · 2010-12-09T18:00:08.549Z · score: 2 (2 votes) · LW(p) · GW(p)

OK. I still don't understand it, but I now feel my lack of understanding more clearly. Thank you!

(I suppose "what do people really want?" is a large philosophical question, not just undefined but subtle in its lack of definition.)

comment by Roko · 2010-12-09T18:19:31.390Z · score: 5 (5 votes) · LW(p) · GW(p)

I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are, in my opinion, the best we can hope for, and better than we are likely to get from any other entity involved in this.

Besides, I think that XiXiDu, et al are complaining about the difference between cotton and silk, when what is actually likely to happen is more like a big kick in the teeth from reality. SIAI is imperfect. Yes. Well done. Nothing is perfect. At least cut them a bit of slack.

comment by timtyler · 2010-12-09T18:32:19.181Z · score: 2 (6 votes) · LW(p) · GW(p)

I have received assurances that SIAI will go to significant efforts not to do nasty things, and I believe them. Private assurances given sincerely are the best we can hope for, and better than we are likely to get from any other entity involved in this.

What?!? Open source code - under a permissive license - is the traditional way to signal that you are not going to run off into the sunset with the fruits of a programming effort. Private assurances are usually worth diddly-squat by comparison.

comment by Roko · 2010-12-09T18:34:02.459Z · score: 1 (5 votes) · LW(p) · GW(p)

I think that you don't realize just how bad the situation is. You want that silken sheet. Rude awakening methinks. Also open-source not necessarily good for FAI in any case.

comment by XiXiDu · 2010-12-09T19:26:58.473Z · score: 4 (6 votes) · LW(p) · GW(p)

I think that you don't realize just how bad the situation is.

I don't think that you realize how bad it is. I'd rather have the universe being paperclipped than supporting the SIAI if that means that I might be tortured for the rest of infinity!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T19:44:55.727Z · score: 15 (19 votes) · LW(p) · GW(p)

To the best of my knowledge, SIAI has not planned to do anything, under any circumstances, which would increase the probability of you or anyone else being tortured for the rest of infinity.

Supporting SIAI should not, to the best of my knowledge, increase the probability of you or anyone else being tortured for the rest of infinity.

Thank you.

comment by XiXiDu · 2010-12-09T19:52:43.109Z · score: 5 (7 votes) · LW(p) · GW(p)

But imagine there was a person a level above yours who went to create some safeguards for an AGI. That person would tell you that you can be sure that the safeguards s/he plans to implement will benefit everyone. Are you just going to believe that? Wouldn't you be worried and demand that their project be supervised?

You are in a really powerful position because you are working for an organisation that might influence the future of the universe. Is it really weird to be skeptical and ask for reassurance of their objectives?

comment by Vladimir_Nesov · 2010-12-09T19:56:38.405Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't think so.

Logical rudeness is the error of rejecting an argument for reasons other than disagreement with it. Does your "I don't think so" mean that you in fact believe that SIAI (possibly) plans to increase the probability of you or someone else being tortured for the rest of eternity? If not, what does this statement mean?

comment by XiXiDu · 2010-12-09T20:12:27.720Z · score: 8 (8 votes) · LW(p) · GW(p)

I removed that sentence. I meant that I didn't believe that the SIAI plans to harm someone deliberately. Although I believe that harm could be a side-effect and that they would rather harm a few beings than allowing some Paperclip maximizer to take over.

You can call me a hypocrite because I'm in favor of animal experiments to support my own survival. But I'm not sure if I'd like to have someone leading an AI project who thinks like me. Take that sentence to reflect my inner conflict. I see why one would favor torture over dust specks but I don't like such decisions. I'd rather have the universe end now, or have everyone turned into paperclips, than have to torture beings (especially if I am the being).

I feel uncomfortable that I don't know what will happen because there is a policy of censorship being favored when it comes to certain thought experiments. I believe that even given negative consequences, transparency is the way to go here. If the stakes are this high, people who believe will do anything to get what they want. That Yudkowsky claims that they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

comment by Vladimir_Nesov · 2010-12-09T20:13:53.391Z · score: 2 (2 votes) · LW(p) · GW(p)

I removed that sentence.

I apologize. I realized my stupidity in interpreting your comment a few seconds after posting the reply (which I then deleted).

comment by timtyler · 2010-12-10T19:15:50.096Z · score: 0 (2 votes) · LW(p) · GW(p)

That Yudkowsky claims that they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

Better yet, you could use a kind of doublethink - and then even actually mean it. Here is W. D. Hamilton on that topic:

A world where everyone else has been persuaded to be altruistic is a good one to live in from the point of view of pursuing our own selfish ends. This hypocrisy is even more convincing if we don't admit it even in our thoughts - if only on our death beds, so to speak, we change our wills back to favour the carriers of our own genes.

  • Discriminating Nepotism - as reprinted in: Narrow Roads of Gene Land, Volume 2 Evolution of Sex, p.356.

comment by timtyler · 2010-12-10T19:10:43.495Z · score: -1 (3 votes) · LW(p) · GW(p)

That Yudkowsky claims that they are working for the benefit of humanity doesn't mean it is true. Surely I'd write that and many articles and papers that make it appear this way, if I wanted to shape the future to my liking.

In TURING'S CATHEDRAL, George Dyson writes:

For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI.

I think many people would like to be in that group - if they can find a way to arrange it.

comment by shokwave · 2010-12-10T20:02:30.782Z · score: 1 (1 votes) · LW(p) · GW(p)

Quote from George Dyson

Unless AI was given that outcome (cheerful, contented people etc) as a terminal goal, or that circle of people was the best possible route to some other terminal goal, both of which are staggeringly unlikely, Dyson suspects wrongly.

If you think he suspects rightly, I would really like to see a justification. Keep in mind that AGIs are currently not being built using multi-agent environment evolutionary methods, so any kind of 'social cooperation' mechanism will not arise.

comment by timtyler · 2010-12-10T20:29:22.812Z · score: -2 (4 votes) · LW(p) · GW(p)

Machine intelligence programmers seem likely to construct their machines so as to help them satisfy their preferences - which in turn is likely to make them satisfied. I am not sure what you are talking about - but surely this kind of thing is already happening all the time - with Sergey Brin, James Harris Simons - and so on.

comment by katydee · 2010-12-10T20:31:54.590Z · score: 0 (0 votes) · LW(p) · GW(p)

That doesn't really strike me as a stunning insight, though. I have a feeling that I could find many people who would like to be in almost any group of "cheerful, contented, intellectually and physically well-nourished people."

comment by sketerpot · 2010-12-10T19:47:43.501Z · score: 0 (0 votes) · LW(p) · GW(p)

This all depends on what the AI wants. Without some idea of its utility function, can we really speculate? And if we speculate, we should note those assumptions. People often think of an AI as being essentially human-like in its values, which is problematic.

comment by timtyler · 2010-12-10T20:01:33.506Z · score: -1 (1 votes) · LW(p) · GW(p)

It's a fair description of today's more successful IT companies. The most obvious extrapolation for the immediate future involves more of the same - but with even greater wealth and power inequalities. However, I would certainly also counsel caution if extrapolating this out more than 20 years or so.

comment by [deleted] · 2010-12-10T20:43:08.713Z · score: 2 (8 votes) · LW(p) · GW(p)

Currently, there are no entities in physical existence which, to my knowledge, have the ability to torture anyone for the rest of eternity.

You intend to build an entity which would have that ability (or if not for infinity, for a googolplex of subjective years).

You intend to give it a morality based on the massed wishes of humanity - and I have noticed that other people don't always have my best interests at heart. It is possible - though unlikely - that I might so irritate the rest of humanity that they wish me to be tortured forever.

Therefore, you are, by your own statements, raising the risk of my infinite torture from zero to a tiny non-zero probability. It may well be that you are also raising my expected reward enough for that to be more than counterbalanced, but that's not what you're saying - any support for SIAI will, unless I'm completely misunderstanding, raise the probability of infinite torture for some individuals.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-10T21:37:28.356Z · score: 4 (6 votes) · LW(p) · GW(p)

You intend to give it a morality based on the massed wishes of humanity -

See the "Last Judge" section of the CEV paper.

Therefore, you are, by your own statements, raising the risk of my infinite torture from zero to a tiny non-zero probability.

As Vladimir observes, the alternative to SIAI doesn't involve nothing new happening.

comment by [deleted] · 2010-12-10T21:45:58.836Z · score: 3 (5 votes) · LW(p) · GW(p)

That just pushes the problem along a step. IF the Last Judge can't be mistaken about the results of the AI running AND the Last Judge is willing to sacrifice the utility of the mass of humanity (including hirself) to protect one or more people from being tortured, then it's safe. That's very far from saying there's a zero probability.

comment by ata · 2010-12-11T00:28:59.509Z · score: 2 (4 votes) · LW(p) · GW(p)

IF ... the Last Judge is willing to sacrifice the utility of the mass of humanity (including hirself) to protect one or more people from being tortured, then it's safe.

If the Last Judge peeks at the output and finds that it's going to decide to torture people, that doesn't imply abandoning FAI, it just requires fixing the bug and trying again.

comment by Vladimir_Nesov · 2010-12-10T21:15:53.691Z · score: 2 (2 votes) · LW(p) · GW(p)

Just because AGIs have the capability to inflict infinite torture doesn't mean they have a motive. Also, status quo (with regard to SIAI's activity) doesn't involve nothing new happening.

comment by [deleted] · 2010-12-10T21:33:13.449Z · score: 5 (5 votes) · LW(p) · GW(p)

I explained that he is planning to supply one with a possible motive (namely that the CEV of humanity might hate me or people like me). It is precisely because of this that the problem arises. A paperclipper, or any other AGI whose utility function had nothing to do with humanity's wishes, would have far less motive to do this - it might kill me, but it really would have no motive to torture me.

comment by timtyler · 2010-12-09T20:09:39.313Z · score: -2 (8 votes) · LW(p) · GW(p)

Also open-source not necessarily good for FAI in any case.

You can have your private assurances - and I will have my open-source software.

Gollum gave his private assurances to Frodo - and we all know how that turned out.

If someone solicits for you to "trust in me", alarm bells should start ringing immediately. If you really think that is "the best we can hope for", then perhaps revisit that.

comment by wedrifid · 2010-12-09T20:40:58.615Z · score: 9 (9 votes) · LW(p) · GW(p)

Gollum gave his private assurances to Frodo - and we all know how that turned out.

Well I'm convinced. Frodo should definitely have worked out a way to clone the ring and made sure the information was available to all of Middle Earth. You can never have too many potential Ring-Wraiths.

comment by [deleted] · 2010-12-09T20:44:47.155Z · score: 2 (2 votes) · LW(p) · GW(p)

Suddenly I have a mental image of "The Lord of the Rings: The Methods of Rationality."

comment by Alicorn · 2010-12-09T20:45:59.819Z · score: 5 (5 votes) · LW(p) · GW(p)

Someone should write that (with a better title). We could have a whole genre of rational fanfiction.

comment by [deleted] · 2010-12-09T20:50:41.001Z · score: 1 (1 votes) · LW(p) · GW(p)

Agreed; Lord of the Rings seems like a natural candidate for discussing AI and related topics.

comment by Alicorn · 2010-12-09T20:55:01.019Z · score: 5 (5 votes) · LW(p) · GW(p)

I'd also like to see His Dark Materials with rationalist!Lyra. The girl had an alethiometer. She should have kicked way more ass than she did as soon as she realized what she had.

comment by jimrandomh · 2010-12-09T20:29:13.510Z · score: 4 (6 votes) · LW(p) · GW(p)

Open source AGI is not a good thing. In fact, it would be a disastrously bad thing. Giving people the source code doesn't just let them inspect it for errors, it also lets them launch it themselves. If you get an AGI close to ready for launch, then sharing its source code means that instead of having one party to decide whether there are enough safety measures ready to launch, you have many parties individually deciding whether to launch it themselves, possibly modifying its utility function to suit their own whim, and the hastiest party's AGI wins.

Ideally, you'd want to let people study the code, but only trustworthy people, and in a controlled environment where they can't take the source code with them. But even that is risky, since revealing that you have an AGI makes you a target for espionage and attack by parties who shouldn't be trusted with humanity's future.

comment by timtyler · 2010-12-09T20:38:44.586Z · score: 0 (2 votes) · LW(p) · GW(p)

Actually it reduces the chance of any party drawing massively ahead of the rest. It acts as an equalising force, by power-sharing. One of the main things we want to avoid is a disreputable organisation using machine intelligence to gain an advantage - and sustaining it over a long period of time - and using open-source software helps to defend against that possibility.

Machine intelligence will be a race - but it will be a race, whether participants share code or not.

Having said all that, machine intelligence protected by patents with secret source code on a server somewhere does seem like a reasonably probable outcome.

comment by jimrandomh · 2010-12-09T21:18:15.331Z · score: -1 (1 votes) · LW(p) · GW(p)

Using open-source software helps to defend against that possibility.

Only if (a) there is no point at which AGIs "foom", (b) source code sharing is well enough enforced on everyone that no bad organizations combine open source with refinements that they keep secret for an advantage, (c) competing AIs form a stable power equilibrium at all points along their advancement, and (d) it is impossible to trade off goal system stability for optimization power.

I estimate probabilities of 0.4, 0.2, 0.05, and 0.5 for these hypotheses, respectively.
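If the four estimates are treated as independent (an assumption the comment doesn't state, so this is only a rough sketch), they imply a small joint probability for the conjunction:

```python
# jimrandomh's stated estimates for hypotheses (a)-(d).
estimates = {"a": 0.4, "b": 0.2, "c": 0.05, "d": 0.5}

# Under a simplifying independence assumption, the probability that all
# four hold -- the scenario in which open sourcing actually helps -- is
# the product of the individual estimates.
p_all = 1.0
for p in estimates.values():
    p_all *= p

print(round(p_all, 6))  # 0.002
```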

comment by timtyler · 2010-12-09T21:38:42.236Z · score: 3 (5 votes) · LW(p) · GW(p)

I disagree with most of that analysis. I assume machine intelligence will catalyse its own creation. I fully expect that some organisations will stick with secret source code. How could the probability of that possibly be as low as 0.8!?!

I figure that use of open source software is more likely to lead to a more even balance of power - and less likely to lead to a corrupt organisation in charge of the planet's most advanced machine intelligence efforts. That assessment is mostly based on the software industry to date - where many of the worst abuses appear to me to have occurred at the hands of proprietary software vendors.

If you have an unethical open source project, people can just fork it, and make an ethical version. With a closed source project, people don't have that option - they often have to go with whatever they are given by those in charge of the project.

Nor am I assuming that no team will ever win. If there is to be a winner, we want the best possible lead up. The "trust us" model is not it - not by a long shot.

comment by jimrandomh · 2010-12-11T00:02:12.826Z · score: 3 (3 votes) · LW(p) · GW(p)

I figure that use of open source software is more likely to lead to a more even balance of power - and less likely to lead to a corrupt organisation in charge of the planet's most advanced machine intelligence efforts. That assessment is mostly based on the software industry to date - where many of the worst abuses appear to me to have occurred at the hands of proprietary software vendors.

If you have an unethical open source project, people can just fork it, and make an ethical version. With a closed source project, people don't have that option - they often have to go with whatever they are given by those in charge of the project.

There are two problems with this reasoning. First, you have the causality backwards: makers of open-source software are less abusive than makers of closed-source software not because open-source is such a good safeguard, but because the sorts of organizations that would be abusive don't open source in the first place.

And second, if there is an unethical AI running somewhere, then forking the code will not save humanity. Forking is a defense against not having good software to use yourself; it is not a defense against other people running software that does bad things to you.

comment by timtyler · 2010-12-11T10:51:15.005Z · score: 0 (0 votes) · LW(p) · GW(p)

you have the causality backwards: makers of open-source software are less abusive than makers of closed-source software not because open-source is such a good safeguard, but because the sorts of organizations that would be abusive don't open source in the first place.

Really? I just provided an example of a mechanism that helps keep open source software projects ethical - the fact that if the manufacturers attempt to exploit their customers it is much easier for the customers to switch to a more ethical fork - because creating such a fork no longer violates copyright law. Though you said you were pointing out problems with my reasoning, you didn't actually point out any problems with that reasoning.

We saw an example of this kind of thing very recently - with LibreOffice. The developers got afraid that their adopted custodian, Oracle, was going to screw the customers of their project - so, to protect their customers and themselves, they forked it - and went their own way.

if there is an unethical AI running somewhere, then forking the code will not save humanity. Forking is a defense against not having good software to use yourself; it is not a defense against other people running software that does bad things to you.

If other people are running software that does bad things to you then running good quality software yourself most certainly is a kind of defense. It means you are better able to construct defenses, better able to anticipate their attacks - and so on. Better brains make you more powerful.

Compare with the closed-source alternative: If other people are running software that does bad things to you - and you have no way to run such software yourself - since it is on their server and running secret source that is also protected by copyright law - you are probably pretty screwed.

comment by XiXiDu · 2010-12-09T13:59:05.841Z · score: 5 (5 votes) · LW(p) · GW(p)

It makes me look like even more of a troublemaker...

How so? I've just reread some of your comments on your now deleted post. It looks like you honestly tried to get the SIAI to put safeguards into CEV. Given that the idea has spread to many people by now, don't you think it would be acceptable to discuss the matter before one or more people take it seriously or even consider implementing it deliberately?

comment by Roko · 2010-12-09T18:30:21.826Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't think it is a good idea to discuss it. I think that the costs outweigh the benefits. The costs are very big. Benefits marginal.

comment by Perplexed · 2010-12-08T18:39:52.487Z · score: 3 (7 votes) · LW(p) · GW(p)

Ok by me. It is pretty obvious by this point that there is no evil conspiracy involved here. But I think the lesson remains: if you delete something, even if it is just because you regret posting it, you create more confusion than you remove.

comment by waitingforgodel · 2010-12-09T04:18:40.955Z · score: 2 (12 votes) · LW(p) · GW(p)

I think the question you should be asking is less about evil conspiracies, and more about what kind of organization SIAI is -- what would they tell you about, and what would they lie to you about.

comment by XiXiDu · 2010-12-09T10:54:12.381Z · score: 4 (6 votes) · LW(p) · GW(p)

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations to the SIAI. That alone is enough to conclude that the SIAI is not trying to hold back something that would discredit it as an organisation concerned with charitable objectives. The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence. Making the subject matter public has already harmed some people and could harm more in the future.

comment by David_Gerard · 2010-12-09T11:04:29.646Z · score: 7 (7 votes) · LW(p) · GW(p)

But the forbidden topic is already public. All the effects that would follow from it being public would already follow. THE HORSE HAS BOLTED. It's entirely unclear to me what pretending it hasn't does for the problem or the credibility of the SIAI.

comment by XiXiDu · 2010-12-09T11:14:11.646Z · score: 3 (3 votes) · LW(p) · GW(p)

It is not as public as you think. If it was then people like waitingforgodel wouldn't ask about it.

I'm just trying to figure out how to behave without being able talk about it directly. It's also really interesting on many levels.

comment by wedrifid · 2010-12-09T11:33:07.154Z · score: 8 (8 votes) · LW(p) · GW(p)

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

comment by David_Gerard · 2010-12-09T14:01:21.337Z · score: 3 (5 votes) · LW(p) · GW(p)

It is not as public as you think.

Rather more public than a long forgotten counterfactual discussion collecting dust in the blog's history books would be. :P

Precisely. The place to hide a needle is in a large stack of needles.

The choice here was between "bad" and "worse" - a trolley problem, a lose-lose hypothetical - and they appear to have chosen "worse".

comment by wedrifid · 2010-12-09T15:32:38.381Z · score: 6 (6 votes) · LW(p) · GW(p)

Precisely. The place to hide a needle is in a large stack of needles.

I prefer to outsource my needle-keeping security to Clippy in exchange for allowing certain 'bending' liberties from time to time. :)

comment by David_Gerard · 2010-12-09T15:38:09.353Z · score: 4 (4 votes) · LW(p) · GW(p)

Upvoted for LOL value. We'll tell Clippy the terrible, no good, very bad idea with reasons as to why this would hamper the production of paperclips.

"Hi! I see you've accidentally the whole uFAI! Would you like help turning it into paperclips?"

comment by wedrifid · 2010-12-09T15:44:19.872Z · score: 3 (3 votes) · LW(p) · GW(p)

"Hi! I see you've accidentally the whole uFAI! Would you like help turning it into paperclips?"

Brilliant.

comment by David_Gerard · 2010-12-09T15:45:28.774Z · score: 0 (0 votes) · LW(p) · GW(p)

Frankly, Clippy would be better than the Forbidden Idea. At least Clippy just wants paperclips.

comment by TheOtherDave · 2010-12-09T16:20:29.599Z · score: 2 (2 votes) · LW(p) · GW(p)

Of course, if Clippy were clever he would then offer to sell SIAI a commitment to never release the UFAI in exchange for a commitment to produce a fixed number of paperclips per year, in perpetuity.

Admittedly, his mastery of human signaling probably isn't nuanced enough to prevent that from sounding like blackmail.

comment by David_Gerard · 2010-12-09T11:44:28.412Z · score: 5 (7 votes) · LW(p) · GW(p)

If the forbidden topic were made public (and people believed it), it would result in a steep rise in donations towards the SIAI.

I really don't see how that follows. Will more of the public take it seriously? As I have noted, so far the reaction from people outside SIAI/LW has been "They did WHAT? Are they IDIOTS?"

The censoring of the information was in accordance with their goal of trying to prevent unfriendly artificial intelligence.

That doesn't make it not stupid or not counterproductive. Sincere stupidity is not less stupid than insincere stupidity. Indeed, sincere stupidity is more problematic in my experience as the sincere are less likely to back down, whereas the insincere will more quickly hop to a different idea.

Making the subject matter public has already harmed some people

Citation needed.

and could harm more in the future.

Citation needed.

comment by XiXiDu · 2010-12-09T13:07:21.692Z · score: 5 (5 votes) · LW(p) · GW(p)

Citation needed.

I sent you another PM.

comment by David_Gerard · 2010-12-09T13:49:36.213Z · score: 4 (4 votes) · LW(p) · GW(p)

Hmm, okay. But that, I suggest, appears to have been a case of reasoning oneself stupid.

It does, of course, account for SIAI continuing to attempt to secure the stable doors after the horse has been dancing around in a field for several months taunting them with "COME ON IF YOU THINK YOU'RE HARD ENOUGH."

(I upvoted XiXiDu's comment here because he did actually supply a substantive response in PM, well deserving of a vote, and I felt this should be encouraged by reward.)

comment by waitingforgodel · 2010-12-08T14:16:52.154Z · score: 3 (11 votes) · LW(p) · GW(p)

If it wouldn't, that's a bad sign about SIAI.

I wish I could upvote twice

comment by Perplexed · 2010-12-08T14:42:30.392Z · score: 0 (0 votes) · LW(p) · GW(p)

That someone asked Roko to erase his comment isn't a very bad sign, since it's enough that one person didn't understand the reasoning above for that to happen. That fact that Roko conceded is a bad sign, though.

A kind of meta-question: is there any evidence suggesting that one of the following explanations of the recent deletion is better than another?

  • That an LW moderator deleted Roko's comment.
  • That Roko was asked to delete it, and complied.
  • That Roko deleted it himself, without prompting.

comment by waitingforgodel · 2010-12-09T03:29:03.244Z · score: -6 (22 votes) · LW(p) · GW(p)

One of the more disturbing topics in this post is the question of how much can you trust an organization of people who are willing to endure torture, rape, and death for their cause.

Surely lying isn't as bad as any of those...

Of course, lying for your cause is almost certainly a long term retarded thing to do... but so is censoring ideas...

It's hard to know what to trust on this thread.

comment by AnnaSalamon · 2010-12-09T10:57:53.755Z · score: 10 (10 votes) · LW(p) · GW(p)

As you may know from your study of marketing, accusations stick in the mind even when one is explicitly told they are false. In the parent comment and a sibling, you describe a hypothetical SIAI lying to its donors because... Roko had some conversations with Carl that led you to believe we care strongly about existential risk reduction?

If your aim is to improve SIAI, to cause there to be good organizations in this space, and/or to cause Less Wrong-ers to have accurate info, you might consider:

  1. Talking with SIAI and/or with Fellows program alumni, so as to gather information on the issues you are concerned about. (I’d be happy to talk to you; I suspect Jasen and various alumni would too.) And then
  2. Informing folks on LW of anything interesting/useful that you find out.

Anyone else who is concerned about any SIAI-related issue is also welcome to talk to me/us.

comment by waitingforgodel · 2010-12-09T16:21:46.510Z · score: 3 (17 votes) · LW(p) · GW(p)

accusations stick in the mind even when one is explicitly told they are false

Actually that citation is about both positive and negative things -- so unless you're also asking pro-SIAI people to hush up, you're (perhaps unknowingly) seeking to cause a pro-SIAI bias.

Another thing that citation seems to imply is that reflecting on, rather than simply diverting our attention away from scary thoughts is essential to coming to a correct opinion on them.

One of the interesting morals from Roko's contest is that if you care deeply about getting the most benefit per donated dollar you have to look very closely at who you're giving it to.

Market forces work really well for lightbulb-sales businesses, but not so well for mom-and-pop shops, let alone charities. The motivations, preferences, and likely future actions of the people you're giving money to become very important. Knowing if you can believe the person, in these contexts, becomes even more important.

As you note, I've studied marketing, sales, propaganda, cults, and charities. I know that there are some people who have no problem lying for their cause (especially if it's for their god or to save the world).

I also know that there are some people who absolutely suck at lying. They try to lie, but the truth just seeps out of them.

That's why I give Roko's blurted comments more weight than whatever I'd hear from SIAI people who were chosen by you -- no offence. I'll still talk with you guys, but I don't think a reasonably sane person can trust the sales guy beyond a point.

As far as your question goes, my primary desire is a public, consistent moderation policy for LessWrong. If you're going to call this a community blog devoted to rationality, then please behave in sane ways. (If no one owns the blog -- if it belongs to the community -- then why is there dictatorial post deletion?)

I'd also like an apology from EY with regard to the chilling effects his actions have caused.

But back to what you replied to:

What would SIAI be willing to lie to donors about?

Do you have any answers to this?

comment by AnnaSalamon · 2010-12-09T17:51:40.115Z · score: 13 (13 votes) · LW(p) · GW(p)

To answer your question, despite David Gerard's advice:

I would not lie to donors about the likely impact of their donations, the evidence concerning SIAI's ability or inability to pull off projects, how we compare to other organizations aimed at existential risk reduction, etc. (I don't have all the answers, but I aim for accuracy and revise my beliefs and my statements as evidence comes in; I've actively tried to gather info on whether we or FHI reduce risk more per dollar, and I often recommend to donors that they do their own legwork with that charity comparison to improve knowledge and incentives). If a maniacal donor with a gun came searching for a Jew I had hidden in my house, or if I somehow had a "how to destroy the world" recipe and someone asked me how to use it, I suppose lying would be more tempting.

While I cannot speak for others, I suspect that Michael Vassar, Eliezer, Jasen, and others feel similarly, especially about the "not lying to one's cooperative partners" point.

comment by David_Gerard · 2010-12-09T17:58:06.953Z · score: 3 (3 votes) · LW(p) · GW(p)

I suppose I should add "unless the actual answer is not a trolley problem" to my advice on not answering this sort of hypothetical ;-)

(my usual answer to hypotheticals is "we have no plans along those lines", because usually we really don't. We're also really good at not having opinions on other organisations, e.g. Wikileaks, which we're getting asked about A LOT because their name starts with "wiki". A blog post on the subject is imminent. Edit: up now.)

comment by waitingforgodel · 2010-12-09T18:18:40.344Z · score: -11 (21 votes) · LW(p) · GW(p)

I notice that your list is future facing.

Lies are usually about the past.

It's very easy to not lie when talking about the future. It is much easier to "just this once" lie about the past. You can do both, for instance, by explaining that you believe a project will succeed, even while withholding information that would convince a donor otherwise.

An example of this would be errors or misconduct in completing past projects.

Lack of relevant qualifications for people SIAI plans to employ on a project.

Or administrative errors and misconduct.

Or public relations / donor outreach misconduct.

To put the question another, less abstract way, have you ever lied to an SIAI donor? Do you know of anyone affiliated with SIAI who has lied to a donor?

Hypothetically, If I said I had evidence in the affirmative to the second question, how surprising would that be to you? How much money would you bet that such evidence doesn't exist?

comment by [deleted] · 2010-12-09T18:28:52.144Z · score: 11 (11 votes) · LW(p) · GW(p)

You're trying very hard to get everyone to think that SIAI has lied to donors or done something equally dishonest. I agree that this is an appropriate question to discuss, but you are pursuing the matter so aggressively that I just have to ask: do you know something we don't? Do you think that you/other donors have been lied to on a particular occasion, and if so, when?

comment by JGWeissman · 2010-12-09T18:39:51.229Z · score: 10 (10 votes) · LW(p) · GW(p)

An example of this would be errors or misconduct in completing past projects.

When I asked Anna about the coordination between SIAI and FHI (something like "Do you talk enough with each other that you wouldn't both spend resources writing the same research paper?"), she told me about the one time they had in fact both presented a paper on the same topic at a conference, and said that they now coordinate more to prevent that sort of thing.

I have found that Anna and others at SIAI are honest and forthcoming.

comment by lessdazed · 2011-08-16T05:18:23.231Z · score: 2 (2 votes) · LW(p) · GW(p)

Your comment here killed the hostage.

comment by David_Gerard · 2010-12-09T16:46:58.721Z · score: 9 (9 votes) · LW(p) · GW(p)

Another thing that citation seems to imply is that reflecting on, rather than simply diverting our attention away from scary thoughts is essential to coming to a correct opinion on them.

Well, uh, yeah. The horse has bolted. It's entirely unclear what choosing to keep one's head in the sand gains anyone.

What would SIAI be willing to lie to donors about?

Although this is a reasonable question to want the answer to, it's obvious even to me that answering at all would be silly and no sensible person who had the answer would.

Investigating the logic or lack thereof behind the (apparently ongoing) memory-holing is, however, incredibly on-topic and relevant for LW.

comment by wedrifid · 2010-12-09T16:55:40.176Z · score: 3 (3 votes) · LW(p) · GW(p)

Although this is a reasonable question to want the answer to, it's obvious even to me that answering at all would be silly and no sensible person who had the answer would.

Total agreement here. In Eliezer's words:

Ambiguity is their ally. Both answers elicit negative responses, and they can avoid that from most people by not saying anything, so why shouldn't they shut up?

comment by David_Gerard · 2010-12-09T16:57:37.158Z · score: 4 (4 votes) · LW(p) · GW(p)

Did you make up the 'memory-holing' term?

A fellow called George Orwell.

comment by wedrifid · 2010-12-09T17:02:38.996Z · score: 2 (2 votes) · LW(p) · GW(p)

Ahh, thank you.

comment by David_Gerard · 2010-12-09T17:05:34.818Z · score: 4 (4 votes) · LW(p) · GW(p)

I presume you're not a native English speaker then - pretty much any moderately intelligent native English speaker has been forced to familiarity with 1984 at school. (When governments in the UK are being particularly authoritarian, there is often a call to send MPs copies of 1984 with a note "This is not a manual.") Where are you from? Also, you really should read the book, then lots of the commentary on it :-) It's one of the greatest works of science fiction and political fiction in English.

comment by wedrifid · 2010-12-09T17:31:37.666Z · score: 4 (4 votes) · LW(p) · GW(p)

I can tell you all about equal pigs and newspeak, but 'memory-holing' has not seemed to make as much of a cultural footprint - probably because as a phrase it is a rather awkward fit. I wholeheartedly approve of Orwell in principle, but actually reading either of his famous books sounds too much like high-school homework. :)

comment by Jack · 2010-12-09T18:12:21.998Z · score: 4 (4 votes) · LW(p) · GW(p)

Animal Farm is probably passable (though it's so short). 1984 on the other hand is maybe my favorite book of all time. I don't think I've had a stronger emotional reaction to another book. It makes Shakespeare's tragedies look like comedies. I'd imagine you'd have similar feelings about it based on what I've read of your comments here.

comment by wedrifid · 2010-12-09T19:02:49.796Z · score: 1 (1 votes) · LW(p) · GW(p)

That's some high praise there.

It makes Shakespeare's tragedies look like comedies.

So I take it there isn't a romantic 'happily ever after' ending? :P

comment by [deleted] · 2010-12-10T22:49:14.926Z · score: 0 (0 votes) · LW(p) · GW(p)

Actually, there is... ;)

comment by Vaniver · 2010-12-09T17:36:54.748Z · score: 1 (1 votes) · LW(p) · GW(p)

Both are short and enjoyable- I strongly recommend checking them out from a library or picking up a copy.

comment by David_Gerard · 2010-12-09T17:34:10.464Z · score: 1 (1 votes) · LW(p) · GW(p)

Read them. They're actually really good books. His less-famous ones are not as brilliant, but are good too.

(We were taught 1984 in school, I promptly read to the end with eyes wide. I promptly borrowed Animal Farm of my own accord.)

comment by [deleted] · 2010-12-10T22:50:52.710Z · score: 1 (1 votes) · LW(p) · GW(p)

His less-famous novels aren't as good. On the other hand, some of his essays are among the clearest, most intelligent thinking I've ever come across, and would probably be of a lot of interest to LessWrong readers...

comment by David_Gerard · 2010-12-10T23:12:44.150Z · score: 0 (0 votes) · LW(p) · GW(p)

Oh yeah. Politics and the English Language is a classic on a par with the two great novels. I first read that in 1992 and wanted to print copies to distribute everywhere (we didn't have internet then).

comment by MBlume · 2010-12-10T23:45:43.738Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm terribly curious now -- did the use of any of the phrases Orwell singles out in the article actually drop significantly after the article was published? Wikipedia will not say...

comment by David_Gerard · 2010-12-10T23:52:26.272Z · score: 1 (1 votes) · LW(p) · GW(p)

Well, reading it in the 1990s and having a burnt-out ex-Communist for a housemate at the time, I fear I recognised far too many of the cliches therein as current in those circles ;-)

comment by [deleted] · 2010-12-10T23:58:18.561Z · score: 1 (1 votes) · LW(p) · GW(p)

A lot are still current in those less rational/more angry elements of the left who still think the Labour Party represents socialism and use phrases like that to justify themselves...

comment by [deleted] · 2010-12-10T23:37:13.634Z · score: 1 (1 votes) · LW(p) · GW(p)

Yeah, that's one of those I was thinking of. Also things like the piece about the PEN 'anti-censorship' event that wasn't, and his analysis of James Burnham's Managerialist writing...

comment by waitingforgodel · 2010-12-09T17:10:31.741Z · score: 0 (6 votes) · LW(p) · GW(p)

why shouldn't they shut up?

Because this is LessWrong -- you can give a sane response and not only does it clear the air, people understand and appreciate it.

Cable news debating isn't needed here.

Sure we might still wonder if they're being perfectly honest, but saying something more sane on the topic than silence seems like a net-positive from their perspective.

comment by wedrifid · 2010-12-09T17:36:20.733Z · score: 2 (2 votes) · LW(p) · GW(p)

By way of a reminder, the question under discussion was:

What would SIAI be willing to lie to donors about?

comment by wnoise · 2010-12-09T17:17:02.512Z · score: 1 (1 votes) · LW(p) · GW(p)

LessWrongers are not magically free of bias. Nor are they inherently moral people who wouldn't stoop to using misleading rhetorical techniques, though here they are more likely to be called on it.

In any case, an answer here is available to the public internet for all to see.

comment by waitingforgodel · 2010-12-09T17:06:13.829Z · score: 1 (7 votes) · LW(p) · GW(p)

no sensible person who had the answer would

I respectfully disagree, and have my hopes set on Carl (or some other level-headed person in a position to know) giving a satisfying answer.

This is LessWrong after all -- we can follow complicated arguments, and at least hearing how SIAI is actually thinking about such things would (probably) reduce my paranoia.

comment by David_Gerard · 2010-12-09T17:11:39.200Z · score: 1 (1 votes) · LW(p) · GW(p)

Yeah, but this is on the Internet for everyone to see. The potential for political abuse is ridiculous and can infect even LessWrong readers. Politics is the mind-killer, but pretending it doesn't affect almost everyone else strikes me as not smart.

comment by Bongo · 2010-12-09T10:01:33.224Z · score: 6 (6 votes) · LW(p) · GW(p)

The concept of ethical injunctions is known in SIAI circles I think. Enduring personal harm for your cause and doing unethical things for your cause are therefore different. Consider Eliezer's speculation about whether a rationalist "confessor" should ever lie in this post, too. And these personal struggles with whether to ever lie about SIAI's work.

comment by waitingforgodel · 2010-12-09T16:32:18.049Z · score: 4 (8 votes) · LW(p) · GW(p)

That "confessor" link is terrific

If banning Roko's post would reasonably cause discussion of those ideas to move away from LessWrong, then by EY's own reasoning (the link you gave) it seems like a retarded move.

Right?

comment by Bongo · 2010-12-09T17:38:04.210Z · score: 6 (6 votes) · LW(p) · GW(p)

If the idea is actually dangerous, it's way less dangerous to people who aren't familiar with pretty esoteric Lesswrongian ideas. They're prerequisites to being vulnerable to it. So getting conversation about the idea away from Lesswrong isn't an obviously retarded idea.

comment by Desrtopa · 2010-12-09T03:41:03.209Z · score: 1 (7 votes) · LW(p) · GW(p)

Lying for good causes has a time honored history. Protecting fugitive slaves or holocaust victims immediately comes to mind. Just because it is more often practical to be honest than not doesn't mean that dishonesty isn't sometimes unambiguously the better option.

comment by waitingforgodel · 2010-12-09T03:46:52.113Z · score: -2 (20 votes) · LW(p) · GW(p)

I agree that there's a lot in history, but the examples you cited have something that doesn't match here -- historically, you lie to people you don't plan on cooperating with later.

If you lie to an oppressive government, it's okay because it'll either get overthrown or you'll never want to cooperate with it (so great is your reason for lying).

Lying to your donor pool is very, very different than lying to the Nazis about hiding Jews.

comment by Bongo · 2010-12-09T10:09:03.578Z · score: 5 (7 votes) · LW(p) · GW(p)

You're throwing around accusations of lying pretty lightly.

comment by waitingforgodel · 2010-12-09T16:32:48.873Z · score: -1 (15 votes) · LW(p) · GW(p)

Am I missing something? Desrtopa responded to questions of lying to the donor pool with the equivalent of "We do it for the greater good"

comment by AnnaSalamon · 2010-12-09T16:46:27.879Z · score: 6 (14 votes) · LW(p) · GW(p)

Desrtopa isn't affiliated with SIAI. You seem to be deliberately designing confusing comments, a la Glenn Beck's "I'm just asking questions" motif.

comment by David_Gerard · 2010-12-09T16:53:42.187Z · score: 3 (5 votes) · LW(p) · GW(p)

Is calling someone here Glenn Beck equivalent to Godwination?

wfg's post strikes me as almost entirely reasonable (except the last question, which is pointless to ask) and your response as excessively defensive.

Also, you're saying this to someone who says he's a past donor and has not yet ruled out being a future donor. This is someone who could reasonably expect his questions to be taken seriously.

(I have some experience of involvement in a charity that suffers a relentless barrage of blitheringly stupid questions from idiots, and my volunteer role is media handling - mostly I come up with good and effective soundbites. So I appreciate and empathise with your frustration, but I think I can state with some experience behind me that your response is actually terrible.)

comment by AnnaSalamon · 2010-12-09T17:10:10.521Z · score: 16 (20 votes) · LW(p) · GW(p)

Okay. Given your perceptions and those of the folks who downvoted my comment, I'll revise my opinion on the matter. I'll also put that under "analogies not to use"; I was probably insufficiently familiar with the pop culture.

The thing I meant to say was just... Roko made a post, Nick suggested it gave bad impressions, Roko deleted it. wfg spent hours commenting again and again about how he had been asked to delete it, perhaps by someone "high up within SIAI", and how future censorship might be imminent, how the fact that Roko had had a basically unrelated conversation suggested that we might be lying to donors (a suggestion that he didn't make explicitly, but rather left to innuendo), etc. I feel tired of this conversation and want to go back to research and writing, but I'm kind of concerned that it'll leave a bad taste in readers' mouths not because of any evidence that's actually being advanced, but because innuendo and juxtapositions, taken out of context, leave impressions of badness.

I wish I knew how to have a simple, high-content, low-politics conversation on the subject. Especially one that was self-contained and didn't leave me feeling as though I couldn't bow out after a while and return to other projects.

comment by David_Gerard · 2010-12-09T17:19:08.861Z · score: 9 (17 votes) · LW(p) · GW(p)

The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature.

So it can happen here - because it did happen. It's no longer in the class "things that are unthinkable". This is itself a major credibility hit for LW.

And when a Roko post disappears - well, it was one of his posts that was disappeared before.

With this being the situation, assumptions of bad faith are going to happen. (And "stupidity" is actually the assumption of good faith.)

Your problem now is to restore trust in LW's intellectual integrity, because SIAI broke it good and hard. Note that this is breaking an expectation, which is much worse than breaking a rule - if you break a rule you can say "we broke this rule for this reason", but if you break expectations, people feel the ground moving under their feet, and get very upset.

There are lots of suggestions in this thread as to what people think might restore their trust in LW's intellectual integrity, SIAI needs to go through them and work out precisely what expectations they broke and how to come clean on this.

I suspect you could at this point do with an upside to all this. Fortunately, there's an excellent one: no-one would bother making all this fuss if they didn't really care about LW. People here really care about LW and will do whatever they can to help you make it better.

(And the downside is that this is separate from caring about SIAI, but oh well ;-) )

(and yes, this sort of discussion around WP/WMF has been perennial since it started.)

comment by Emile · 2010-12-09T22:25:46.477Z · score: 5 (7 votes) · LW(p) · GW(p)

The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature.

Like Airedale, I don't have that impression - my impression is that 1) censorship by a website's owner doesn't have the moral problems associated with censorship by governments (or corporations), and 2) in online communities, dictatorship can work quite well, as long as the dictator isn't a complete dick.

I've seen quite functional communities where the moderators would delete posts without warning if they were too stupid, offensive, repetitive or immoral (such as bragging about vandalizing wikipedia).

So personally, I don't see a need for "restoring trust". Of course, as your post attests, my experience doesn't seem to generalize to other posters.

comment by Airedale · 2010-12-09T18:49:59.950Z · score: 5 (5 votes) · LW(p) · GW(p)

The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared. Those are not good places to be on the Internet. They are places where honesty is devalued and statements of fact must be reviewed for their political nature.

I’ve seen several variations of this expressed about this topic, and it’s interesting to me, because this sort of view is somewhat foreign to me. I wouldn’t say I’m pro-censorship, but as an attorney trained in U.S. law, I’ve very much internalized the idea that the most serious sorts of censorship are those imposed by the government (this is what the First Amendment free speech right is about, which makes sense given the government’s power). Beyond that there are various levels of seriousness and danger: big corporate censorship is also somewhat serious because of corporate power, while censorship by the owner of a single blog (even a community one) is not very serious at all, because a blogowner is not very powerful compared to the government or a major corporation, and shutting down one outlet of communication is comparatively not a big deal on a big internet where there are lots of other places to express one’s views. If a siteowner exercises his or her right to delete something on a website, it's just not the sort of harm that I weigh very heavily.

What I’m totally unsure of is where the average LW reader falls on the scale between you and me, and therefore, despite the talk about the Roko incident being such a public relations disaster and a “spectacular” deletion, I just don’t know how true that is and I’m curious what the answer would be. People who feel like me may just not feel the need to weigh in on the controversy, whereas people who are very strongly anti-censorship in this particular context do.

comment by [deleted] · 2010-12-09T18:55:59.353Z · score: 3 (3 votes) · LW(p) · GW(p)

If a siteowner exercises his or her right to delete something on a website, it's just not the sort of harm that I weigh very heavily.

That's not really the crux of the issue (for me, at least, and probably not for others). As David Gerard put it, the banning of Roko's post was a blow to people's expectations, which was why it was so shocking. In other words, it was like discovering that LW wasn't what everyone thought it was (and not in a good way).

Note: I personally wouldn't classify the incident as a "disaster," but it was still very alarming.

comment by waitingforgodel · 2010-12-09T18:53:46.024Z · score: 1 (9 votes) · LW(p) · GW(p)

Great post. I'm confused why this isn't at 10+ karma.

comment by David_Gerard · 2010-12-09T23:09:06.259Z · score: 5 (5 votes) · LW(p) · GW(p)

+5 is fine!

Y'know, one of the actual problems with LW is that I read it in my Internet as Television time, but there's a REALLY PROMINENT SCORE COUNTER at the top left. This does not help in not treating it as a winnable video game.

(That said, could the people mass-downvoting waitingforgodel please stop? It's tiresome. Please try to go by comment, not poster.)

comment by komponisto · 2010-12-09T23:22:38.634Z · score: 2 (2 votes) · LW(p) · GW(p)

there's a REALLY PROMINENT SCORE COUNTER at the top left. This does not help in not treating [LW] as a winnable video game.

So true!

(Except it's at the top right. At least, the one I'm thinking of.)

comment by David_Gerard · 2010-12-09T23:25:26.263Z · score: 1 (1 votes) · LW(p) · GW(p)

The other left.

(Yes, I actually just confused left and right. STOP POSTING.)

comment by [deleted] · 2010-12-09T18:56:47.725Z · score: 1 (3 votes) · LW(p) · GW(p)

Probably because it's buried in the middle of an enormous discussion that very few people have read or will read.

comment by waitingforgodel · 2010-12-09T18:59:39.187Z · score: -4 (12 votes) · LW(p) · GW(p)

Lol. right, that'd do it

comment by SilasBarta · 2010-12-09T17:13:31.092Z · score: 3 (3 votes) · LW(p) · GW(p)

I wish I knew how to have a simple, high-content, low-politics conversation on the subject. Especially one that was self-contained and didn't leave me feeling as though I couldn't bow out after awhile and return to other projects.

I wish you used a classification algorithm that more naturally identified the tension between "wanting low-politics conversation" and comparing someone to Glenn Beck as a means of criticism.

comment by AnnaSalamon · 2010-12-09T17:16:26.016Z · score: 3 (3 votes) · LW(p) · GW(p)

Sorry. This was probably simply a terrible mistake born of unusual ignorance of pop culture and current politics. I meant to invoke "using questions as a means to plant accusations" and honestly didn't understand that he was radically unpopular. I've never watched anything by him.

comment by SilasBarta · 2010-12-09T17:28:00.731Z · score: 2 (2 votes) · LW(p) · GW(p)

Well, it's not that Beck is unpopular; it's that he's very popular with people of a particular political ideology.

In fairness, though, he is sort of the canonical example for "I'm just asking questions, here!". (And I wasn't one of those voting you down on this.)

I think referring to the phenomenon itself is enough to make one's point on the issue, and it's not necessary to identify a person who does it a lot.

comment by XiXiDu · 2010-12-09T17:50:51.544Z · score: 2 (4 votes) · LW(p) · GW(p)

I wish I knew how to have a simple, high-content, low-politics conversation on the subject.

This is about politics. The censorship of an idea related to a future dictator implementing some policy is obviously about politics.

You tell people to take friendly AI seriously. You tell people that we need friendly AI to marshal our future galactic civilisation. People take it seriously. Now the only organisation working on this is the SIAI. Therefore the SIAI is currently in direct causal control of our collective future. So why do you wonder that people care about censorship and transparency? People already care about what the U.S. is doing and demand transparency. Which is ludicrous in comparison to the power of a ruling superhuman artificial intelligence that implements what the SIAI came up with as the seed for its friendliness.

If you really think that the SIAI has any importance and could possibly influence or implement the safeguards for some AGI project, then everything the SIAI does is obviously very important to everyone concerned (everyone indeed).

comment by timtyler · 2010-12-09T18:42:28.766Z · score: -1 (1 votes) · LW(p) · GW(p)

Now the only organisation working on this is the SIAI. Therefore the SIAI is currently in direct causal control of our collective future.

What? No way! The organisation seems very unlikely to produce machine intelligence to me - due to all the other vastly better-funded players.

comment by Vaniver · 2010-12-09T17:00:34.014Z · score: 2 (4 votes) · LW(p) · GW(p)

Is calling someone here Glenn Beck equivalent to Godwination?

-3 after less than 15 minutes suggests so!

comment by waitingforgodel · 2010-12-09T17:00:34.961Z · score: 1 (7 votes) · LW(p) · GW(p)

Make that "they do it for the greater good"

Sorry about mistakenly implying s/he was affiliated. I'll be more diligent with my google stalking in the future.

edit: In my defense, SIAI affiliation has been very common when looking up very "pro" people from this thread

comment by AnnaSalamon · 2010-12-09T17:03:56.866Z · score: 2 (2 votes) · LW(p) · GW(p)

Thanks. I appreciate that.

comment by Jordan · 2010-12-09T07:17:24.323Z · score: 7 (7 votes) · LW(p) · GW(p)

An important academic option: get tenure at a less reputable school. In the States at least there are tons of universities that don't really have huge research responsibilities (so you won't need to worry about pushing out worthless papers, preparing for conferences, peer reviewing, etc), and also don't have huge teaching loads. Once you get tenure you can cruise while focusing on research you think matters.

The down side is that you won't be able to network quite as effectively as if you were at a more prestigious university and the pay isn't quite as good.

comment by utilitymonster · 2010-12-09T13:49:13.985Z · score: 2 (2 votes) · LW(p) · GW(p)

Don't forget about the ridiculous levels of teaching you're responsible for in that situation. Lots worse than at an elite institution.

comment by Jordan · 2010-12-09T20:37:27.524Z · score: 2 (2 votes) · LW(p) · GW(p)

Not necessarily. I'm not referring to no-research universities, which do have much higher teaching loads (although still not ridiculous. Teaching 3 or 4 classes a semester is hardly strenuous). I'm referring to research universities that aren't in the top 100, but which still push out graduate students.

My undergrad alma mater, Kansas University, for instance. Professors teach 1 or 2 classes a semester, with TA support (really, when you have TAs, teaching is not real work). They are still expected to do research, but the pressure is much less than at a top 50 school.

comment by Unnamed · 2010-12-07T19:21:16.190Z · score: 7 (7 votes) · LW(p) · GW(p)

But for the most part the system seems to be set up so that you first spend a long time working for someone else and research their ideas, after which you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors need to also spend time doing teaching, so that's another time sink.

This depends on the field, university, and maybe country. In many cases, doing your own research is the main focus from graduate school on. At research universities in the US, at least, doing research is a professor's main job - although they do also have to do some teaching, apply for grants, and so on, professors are primarily judged by their publication record. In graduate school, many students get to work on their own research projects. A common model is: a professor who has some areas of interest & expertise gets graduate students who are interested in doing research in those areas. At first the students might work primarily on the professor's projects, especially if they don't have research ideas of their own yet, but during their time at grad school a student is expected to develop their own research ideas (within the same general area) and do their own projects, with guidance from the professor so that they can learn to do it well.

I think the academic route should work pretty well if you're interested in topics that are an established part of an academic field. If you're interested in an unusual topic that is not so well established, then you need to look and see if you'll be able to make academia work. Will you be able to get articles about that topic published in academic journals? Can you find a grad school, and then a university job, where they will support & encourage your research on that topic?

If you can find any published articles related to the topic then that's a starting point. Then I'd make a list of every researcher in the field who is interested in the topic, starting with the authors of published articles. Then look into all the grad students who have worked with those researchers, follow citation paths, and so on. You can get a decent sense of what academia might be like for you based on publicly available info (those researchers' websites, their lists of publications, and so on), and then you can contact them for more info. If you do go to grad school, you might go to one of their universities, or to a university that they recommended.

comment by steven0461 · 2010-12-10T21:50:05.114Z · score: 6 (8 votes) · LW(p) · GW(p)

I would choose that knowledge if there was the chance that it wouldn't find out about it. As far as I understand your knowledge of the dangerous truth, it just increases the likelihood of suffering, it doesn't make it guaranteed.

I don't understand your reasoning here -- bad events don't get a "flawless victory" badness bonus for being guaranteed. A 100% chance of something bad isn't much worse than a 90% chance.

comment by XiXiDu · 2010-12-11T09:07:07.437Z · score: 0 (0 votes) · LW(p) · GW(p)

I said that I wouldn't want to know it if a bad outcome was guaranteed. But if it would make a bad outcome possible, but very, very unlikely to actually occur, then the utility I assign to knowing the truth would outweigh the very unlikely possibility of something bad happening.

comment by Roko · 2010-12-10T22:34:52.623Z · score: 0 (2 votes) · LW(p) · GW(p)

No, dude, you're wrong

comment by [deleted] · 2010-12-09T18:14:28.816Z · score: 6 (6 votes) · LW(p) · GW(p)

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

The problem is that we have to defer to Eliezer's (and, by extension, SIAI's) judgment on such issues. Many of the commenters here think that this is not only bad PR for them, but also a questionable policy for a "community blog devoted to refining the art of human rationality."

comment by JGWeissman · 2010-12-09T18:25:32.538Z · score: 7 (7 votes) · LW(p) · GW(p)

Most people wouldn't dispute the first half of your comment. What they might take issue with is this:

Yes, that means we have to trust Eliezer.

If you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence):

But I have no reason to doubt Eliezer's honesty or intelligence in forming those expectations.

Also, I am getting tired of objections framed as predictions that others would make the objections. It is possible to have a reasonable discussion with people who put forth their own objections, explain their own true rejections, and update their own beliefs. But when you are presenting the objections you predict others will make, it is much harder, even if you are personally convinced, to predict that these nebulous others will also be persuaded by my response. So please, stick your own neck out if you want to complain about this.

comment by [deleted] · 2010-12-09T18:33:31.236Z · score: 2 (2 votes) · LW(p) · GW(p)

If you are going to quote and respond to that sentence, which anticipates people objecting to trusting Eliezer to make those judgments, you should also quote and respond to my response to that anticipation (i.e., the next sentence)

That's definitely a fair objection, and I'll answer: I personally trust Eliezer's honesty, and he is obviously much smarter than myself. However, that doesn't mean that he's always right, and it doesn't mean that we should trust his judgment on an issue until it has been discussed thoroughly.

Also, I am getting tired of objections framed as predictions that others would make the objections.

I agree. The above paragraph is my objection.

comment by JGWeissman · 2010-12-09T19:01:56.947Z · score: 1 (1 votes) · LW(p) · GW(p)

However, that doesn't mean that he's always right, and it doesn't mean that we should trust his judgment on an issue until it has been discussed thoroughly.

The problem with a public thorough discussion in these cases is that once you understand the reasons why the idea is dangerous, you already know it, and don't have the opportunity to choose whether to learn about it.

If you trust Eliezer's honesty, then though he may make mistakes, you should not expect him to use this policy as a cover for banning posts as part of some hidden agenda.

comment by [deleted] · 2010-12-09T19:05:35.084Z · score: 3 (5 votes) · LW(p) · GW(p)

The problem with a public thorough discussion in these cases is that once you understand the reasons why the idea is dangerous, you already know it, and don't have the opportunity to choose whether to learn about it.

That's definitely the root of the problem. In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea. If there is, then it means we are doing something wrong.

If you trust Eliezer's honesty, then though he may make mistakes, you should not expect him to use this policy as a cover for banning posts as part of some hidden agenda.

I don't think he's got a hidden agenda; I'm concerned about his mistakes. Though I'm not astute enough to point them out, I think the LW community as a whole is.

comment by JGWeissman · 2010-12-09T19:17:09.730Z · score: 3 (3 votes) · LW(p) · GW(p)

In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea.

I have a response to this that I don't actually want to say, because it could make the idea more dangerous to those who have heard about it but are currently safe due to not fully understanding it. I find that predicting that this sort of thing will happen makes me reluctant to discuss this issue, which may explain why of those who are talking about it, most seem to think the banning was wrong.

I don't think he's got a hidden agenda; I'm concerned about his mistakes.

Given that there has been one banned post, I think that his mistakes are much less of a problem than overwrought concern about his mistakes.

comment by [deleted] · 2010-12-09T19:19:59.042Z · score: 1 (1 votes) · LW(p) · GW(p)

If you have a reply, please PM me. I'm interested in hearing it.

comment by JGWeissman · 2010-12-09T19:24:04.359Z · score: 1 (1 votes) · LW(p) · GW(p)

Are you interested in hearing it if it does give you a better understanding of the dangerous idea that you then realize is in fact dangerous?

comment by [deleted] · 2010-12-09T20:41:06.265Z · score: 0 (0 votes) · LW(p) · GW(p)

It may not matter anymore, but yes, I would still like to hear it.

comment by JGWeissman · 2010-12-09T20:56:19.588Z · score: 0 (0 votes) · LW(p) · GW(p)

In this case, the same point has been made by others in this thread.

comment by Vladimir_Nesov · 2010-12-09T19:09:57.018Z · score: 3 (5 votes) · LW(p) · GW(p)

In general, though, if we are talking about FAI, then there shouldn't be a dangerous idea. If there is, then it means we are doing something wrong.

Why do you believe that? FAI is full of potential for dangerous ideas. In its full development, it's an idea with the power to rewrite 100 billion galaxies. That's gotta be dangerous.

comment by [deleted] · 2010-12-09T19:15:14.113Z · score: 9 (11 votes) · LW(p) · GW(p)

Let me try to rephrase: correct FAI theory shouldn't have dangerous ideas. If we find that the current version does have dangerous ideas, then this suggests that we are on the wrong track. The "Friendly" in "Friendly AI" should mean friendly.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T19:20:10.606Z · score: 9 (15 votes) · LW(p) · GW(p)

Pretty much correct in this case. Roko's original post was, in fact, wrong; correctly programmed FAIs should not be a threat.

comment by Vladimir_Nesov · 2010-12-09T19:25:12.645Z · score: 10 (12 votes) · LW(p) · GW(p)

(FAIs shouldn't be a threat, but a theory to create a FAI will obviously have at least potential to be used to create uFAIs. FAI theory will have plenty of dangerous ideas.)

comment by XiXiDu · 2010-12-09T19:40:41.037Z · score: 5 (9 votes) · LW(p) · GW(p)

I want to highlight at this point how you think about similar scenarios:

I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

That isn't very reassuring. I believe that if you had the choice of either letting a Paperclip maximizer burn the cosmic commons or torturing 100 people, you'd choose to torture 100 people. Wouldn't you?

...correctly programmed FAIs should not be a threat.

They are always a threat to some beings. For example, beings who oppose CEV or other AIs. Any FAI who would run a human version of CEV would be a potential existential risk to any alien civilisation. If you accept all this possible oppression in the name of what is subjectively friendliness, how can I be sure that you don't favor torture for some humans that support CEV, in order to ensure it? After all, you already allow for the possibility that many beings are being oppressed or possibly killed.

comment by wedrifid · 2010-12-09T19:44:16.905Z · score: 3 (3 votes) · LW(p) · GW(p)

They are always a threat to some beings. For example, beings who oppose CEV or other AIs. Any FAI who would run a human version of CEV would be a potential existential risk to any alien civilisation.
This seems to be true and obviously so.

comment by Vladimir_Nesov · 2010-12-09T19:43:08.261Z · score: 1 (9 votes) · LW(p) · GW(p)

...correctly programmed FAIs should not be a threat.

They are always a threat to some beings.

Narrowness. You can parry almost any statement like this, by posing a context outside its domain of applicability.

comment by cousin_it · 2010-12-09T23:23:52.395Z · score: 0 (0 votes) · LW(p) · GW(p)

Another pointless flamewar. This part makes me curious though:

Roko's original post was, in fact, wrong

There are two ways I can interpret your statement:

a) you know a lot more about decision theory than you've disclosed so far (here, in the workshop and elsewhere);

b) you don't have that advanced knowledge, but won't accept as "correct" any decision theory that leads to unpalatable consequences like Roko's scenario.

Which is it?

comment by Vladimir_Nesov · 2010-12-09T23:38:11.587Z · score: 5 (5 votes) · LW(p) · GW(p)

From my point of view, and as I discussed in the post (this discussion got banned with the rest, although it's not exactly on that topic), the problem here is the notion of "blackmail". I don't know how to formally distinguish that from any other kind of bargaining, and the way in which Roko's post could be wrong that I remember required this distinction to be made (it could be wrong in other ways, but that I didn't notice at the time and don't care to revisit).

(The actual content edited out and posted as a top-level post.)

comment by cousin_it · 2010-12-09T23:48:18.655Z · score: 1 (1 votes) · LW(p) · GW(p)

(I seem to have a talent for writing stuff, then deleting it, and then getting interesting replies. Okay. Let it stay as a little inference exercise for onlookers! And please nobody think that my comment contained interesting secret stuff; it was just a dumb question to Eliezer that I deleted myself, because I figured out on my own what his answer would be.)

Thanks for verbalizing the problems with "blackmail". I've been thinking about these issues in the exact same way, but made no progress and never cared enough to write it up.

comment by Perplexed · 2010-12-10T01:39:10.391Z · score: 4 (4 votes) · LW(p) · GW(p)

Perhaps the reason you are having trouble coming up with a satisfactory characterization of blackmail is that you want a definition with the consequence that it is rational to resist blackmail and therefore not rational to engage in blackmail.

Pleasant though this might be, I fear the universe is not so accommodating.

Elsewhere VN asks how to unpack the notion of a status-quo, and tries to characterize blackmail as a threat which forces the recipient to accept less utility than she would have received in the status quo. I don't see any reason in game theory why such threats should be treated any differently than other threats. But it is easy enough to define the 'status-quo'.

The status quo is the solution to a modified game - modified in such a way that the time between moves increases toward infinity and the current significance of those future moves (be they retaliations or compensations) is discounted toward zero. A player who lives in the present and doesn't respond to delayed gratification or delayed punishment is pretty much immune to threats (and to promises).
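Perplexed's point about discounting can be sketched with a toy model (a hypothetical illustration with made-up payoff numbers, not anything stated in the thread): a victim facing "pay me now or be punished next round" compares an immediate cost against a discounted future one, so as the discount on future payoffs goes toward zero, threats lose all force.

```python
# Toy blackmail model: the demanded payment is paid now, the punishment
# arrives next round and is therefore discounted. The numbers (pay = 1,
# punishment = 5) are illustrative assumptions.

def best_response(discount, pay_demand=1.0, punishment=5.0):
    """Victim's best response to 'pay now or be punished next round'."""
    cost_of_paying = pay_demand               # incurred immediately
    cost_of_refusing = discount * punishment  # incurred one round later
    return "pay" if cost_of_paying < cost_of_refusing else "refuse"

# As the discount factor falls, the threat stops working.
for d in (1.0, 0.5, 0.2, 0.05, 0.0):
    print(d, best_response(d))
```

With these numbers the victim pays only while the discount factor exceeds 0.2; a player who "lives in the present" (discount near zero) always refuses, matching the claim that such a player is pretty much immune to threats.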

comment by David_Gerard · 2010-12-09T23:30:12.373Z · score: 4 (4 votes) · LW(p) · GW(p)

Another pointless flamewar.

On RW it's called Headless Chicken Mode, when the community appears to go nuts for a time. It generally resolves itself once people have the yelling out of their system.

The trick is not to make any decisions based on the fact that things have gone into headless chicken mode. It'll pass.

[The comment this is in reply to was innocently deleted by the poster, but not before I made this comment. However, I think I'm making a useful point here, so would prefer to keep this comment.]

comment by Jack · 2010-12-09T19:21:18.021Z · score: 1 (1 votes) · LW(p) · GW(p)

This is certainly the case with regard to the kind of decision theoretic thing in Roko's deleted post. I'm not sure if it is the case with all ideas that might come up while discussing FAI.

comment by Vladimir_Nesov · 2010-12-09T19:17:41.730Z · score: -7 (11 votes) · LW(p) · GW(p)

Let me try to rephrase: correct FAI theory shouldn't have dangerous ideas. If we find that the current version does have dangerous ideas, then this suggests that we are on the wrong track. The "Friendly" in "Friendly AI" should mean friendly.

Wrong and stupid.

comment by komponisto · 2010-12-09T19:26:23.770Z · score: 6 (6 votes) · LW(p) · GW(p)

FYI, this is an excellent example of contempt.

comment by Vladimir_Nesov · 2010-12-09T19:31:14.882Z · score: -2 (2 votes) · LW(p) · GW(p)

And so it was, but not an example for other times when it wasn't. A rare occurrence. I'm pretty sure it didn't lead to any errors though, in this simple case.

(I wonder why Eliezer pitched in the way he did, with only weak disambiguation between the content of Tetronian's comment and commentary on correctness of Roko's post.)

comment by Kutta · 2010-12-10T09:58:45.484Z · score: 0 (0 votes) · LW(p) · GW(p)

I got the impression that you responded to "FAI Theory" as our theorizing and Eliezer responded to it as the theory making its way to the eventual FAI.

comment by [deleted] · 2010-12-09T19:19:19.488Z · score: 2 (2 votes) · LW(p) · GW(p)

Ok...but why?

Edit: If you don't want to say why publicly, feel free to PM me.

comment by Vladimir_Nesov · 2010-12-09T19:25:44.893Z · score: 2 (2 votes) · LW(p) · GW(p)

here

comment by Nick_Tarleton · 2010-12-08T02:44:32.090Z · score: 6 (8 votes) · LW(p) · GW(p)

I pointed out to Roko by PM that his comment couldn't be doing his cause any favors, but did not ask him to delete it, and would have discouraged him from doing so.

comment by waitingforgodel · 2010-12-08T05:22:50.814Z · score: 1 (9 votes) · LW(p) · GW(p)

I can't be sure, but it sounded from:

I've been asked to remove it as it could potentially be damaging.

like he'd gotten a stronger message from someone high up in SIAI -- though of course, I probably like that theory because of the Bayesian Conspiracy aspects.

Would you mind PM'ing me (or just posting) the message you sent?

Also, does the above fit with your experiences at SIAI? I find it hard, but not impossible, to believe that Roko just described something akin to standard hiring procedure, and would very much like to hear an inside (and presumably saner) account.

comment by MichaelAnissimov · 2011-01-24T11:00:34.646Z · score: 7 (7 votes) · LW(p) · GW(p)

Most people who actually work full-time for SIAI are too busy to read every comments thread on LW. In some cases, they barely read it at all. The wacky speculation here about SIAI is very odd -- a simple visit in most cases would eliminate the need for it. Surely more than a hundred people have visited our facilities in the last few years, so plenty of people know what we're really like in person. Not very insane or fanatical or controlling or whatever generates a good comic book narrative.

comment by Nick_Tarleton · 2010-12-08T05:47:42.229Z · score: 4 (4 votes) · LW(p) · GW(p)

PMed the message I sent.

Certainly not anything like standard hiring procedure.

comment by waitingforgodel · 2010-12-08T06:02:25.036Z · score: 5 (11 votes) · LW(p) · GW(p)

Thanks Nick.

Please pardon my prying, but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc.

I've seen evidence of fanaticism, but have always been confused about what the source is (did they start that way, or were they molded?).

Basically, I would very much like to know what your experience has been as you've gotten closer to SIAI.

I'm sure I'm not the only (past, perhaps future) donor would appreciate the air being cleared about this.

comment by Nick_Tarleton · 2010-12-09T16:23:32.140Z · score: 9 (9 votes) · LW(p) · GW(p)

Please pardon my prying,

No problem, and I welcome more such questions.

but as you've spent more time with SIAI, have you seen tendencies toward this sort of thing? Public declarations, competitions/pressure to prove devotion to reducing existential risks, scolding for not toeing the party line, etc.

No; if anything, I see explicit advocacy, as Carl describes, against natural emergent fanaticism (see below), and people becoming less fanatical to the extent that they're influenced by group norms. I don't see emergent individual fanaticism generating significant unhealthy group dynamics like these. I do see understanding and advocacy of indirect utilitarianism as the proper way to 'shut up and multiply'. I would be surprised if I saw any of the specific things you mention clearly going on, unless non-manipulatively advising people on how to live up to ideals they've already endorsed counts. I and others have at times felt uncomfortable pressure to be more altruistic, but this is mostly pressure on oneself — having more to do with personal fanaticism and guilt than group dynamics, let alone deliberate manipulation — and creating a sense of pressure is generally recognized as harmful.

I've seen evidence of fanaticism, but have always been confused about what the source is (did they start that way, or were they molded?).

I think the major source is that self-selection for taking the Singularity seriously, and for trying to do something about it, selects for bullet-biting dispositions that predispose towards fanaticism, which is then enabled by having a cause and a group to identify with. I don't think this is qualitatively different from things that happen in other altruistic causes, just more common in SIAI due to much stronger selective pressure for bullet-biting.

I also have the impression that Singularitarian fanaticism in online discussions is more common among non-affiliated well-wishers than people who have spent time with SIAI (but there are more of the former category, so it's not easy to tell).

comment by Larks · 2010-12-08T21:37:34.540Z · score: 4 (4 votes) · LW(p) · GW(p)

I was there for a summer and don't think I was ever even asked to donate money.

comment by waitingforgodel · 2010-12-09T05:32:58.582Z · score: 0 (4 votes) · LW(p) · GW(p)

Ahh. I was trying to ask about Cialdini-style influence techniques.

comment by Roko · 2010-12-08T19:53:34.901Z · score: 4 (4 votes) · LW(p) · GW(p)

Very little, if any.

comment by wedrifid · 2010-12-08T04:34:24.001Z · score: 0 (0 votes) · LW(p) · GW(p)

What exactly is Roko's cause by your estimation? I wasn't aware he had one, at least in the secretive sense.

comment by Nick_Tarleton · 2010-12-08T04:40:41.245Z · score: 2 (2 votes) · LW(p) · GW(p)

I meant SIAI.

comment by alexflint · 2010-12-10T10:44:22.932Z · score: 5 (5 votes) · LW(p) · GW(p)

One big disadvantage is that you won't be interacting with other researchers from whom you can learn.

Research seems to be an insiders' game. You only ever really see the current state of research in informal settings like seminars and lab visits. Conference papers and journal articles tend to give strange, skewed, out-of-context projections of what's really going on, and books summarise important findings long after the fact.

comment by Danny_Hintze · 2010-12-10T23:21:30.223Z · score: 3 (3 votes) · LW(p) · GW(p)

At the same time however, you might be able to interact with researchers more effectively. For example, you could spend some of those research weeks visiting selected labs and seminars and finding out what's up. It's true that this would force you to be conscientious about opportunities and networking, but that's not necessarily a bad thing. Networks formed with a very distinct purpose are probably going to outperform those that form more accidentally. You wouldn't be as tied down as other researchers, which could give you an edge in getting the ideas and experiences you need for your research, while simultaneously making you more valuable to others when necessary (For example, imagine if one of your important research contacts needs two weeks of solid help on something. You could oblige whereas others with less fluid obligations could not.).

comment by MartinB · 2010-12-08T10:27:45.817Z · score: 5 (5 votes) · LW(p) · GW(p)

This thread raises the question of how many biologists and medical researchers are on here. Due to our specific cluster I expect a strong leaning towards the IT people. So AI research gets disproportionate recognition, while medical research, including direct life extension, falls by the wayside.

comment by Roko · 2010-12-10T20:15:33.267Z · score: 4 (6 votes) · LW(p) · GW(p)

The compelling argument for me is that knowing about bad things is useful to the extent that you can do something about them, and it turns out that people who don't know anything (call them "non-cogniscenti") will probably free-ride their way to any benefits of action on the collective-action problem that is at issue here, whilst avoiding drawing any particular attention to themselves ==> avoiding the risks.

Vladimir Nesov doubts this prima facie, i.e. he asks "how do you know that the strategy of being a completely inert player is best?".

-- to which I answer, "if you want to be the first monkey shot into space, then good luck" ;D

comment by timtyler · 2010-12-10T21:05:40.985Z · score: 0 (6 votes) · LW(p) · GW(p)

it turns out that people who don't know anything (call them "non-cogniscenti") will probably free-ride their way to any benefits of action on the collective-action problem that is at issue here, whilst avoiding drawing any particular attention to themselves ==> avoiding the risks.

This is the "collective-action-problem" - where the end of the world arrives - unless a select band of heroic messiahs arrive and transport everyone to heaven...?

That seems like a fantasy story designed to manipulate - I would counsel not getting sucked in.

comment by Roko · 2010-12-10T22:28:14.115Z · score: 7 (11 votes) · LW(p) · GW(p)

No, this is the "collective-action-problem" - where the end of the world arrives - despite a select band of decidedly amateurish messiahs arriving and failing to accomplish anything significant.

You are looking at those amateurs now.

comment by timtyler · 2010-12-11T12:18:07.868Z · score: -2 (12 votes) · LW(p) · GW(p)

The END OF THE WORLD is probably the most frequently-repeated failed prediction of all time. Humans are doing spectacularly well - and the world is showing many signs of material and moral progress - all of which makes the apocalypse unlikely.

The reason for the interest here seems obvious - the Singularity Institute's funding is derived largely from donors who think it can help to SAVE THE WORLD. The world must first be at risk to enable heroic Messiahs to rescue everyone.

The most frequently-cited projected cause of the apocalypse: an engineering screw-up. Supposedly, future engineers are going to be so incompetent that they accidentally destroy the whole world. The main idea - as far as I can tell - is that a bug is going to destroy civilisation.

Also - as far as I can tell - this isn't the conclusion of analysis performed on previous engineering failures - or on the effects of previous bugs - but rather is wild extrapolation and guesswork.

Of course it is true that there may be a disaster, and the END OF THE WORLD might arrive. However, there is no credible evidence that this is a probable outcome. Instead, what we have appears to be mostly a bunch of fear mongering used for fundraising aimed at fighting the threat. That gets us into the whole area of the use and effects of fear mongering.

Fearmongering is a common means of psychological manipulation, used frequently by advertisers and marketers to produce irrational behaviour in their victims.

It has been particularly widely used in the IT industry - mainly in the form of fear, uncertainty and doubt.

Evidently, prolonged and widespread use is likely to help to produce a culture of fear. The long-term effects of that are not terribly clear - but it seems to be dubious territory.

I would counsel those using fear mongering for fund-raising purposes to be especially cautious of the harm this might do. It seems like a potentially dangerous form of meme warfare. Fear targets circuits in the human brain that evolved in an earlier, more dangerous era - where death was much more likely - so humans have an evolved vulnerability in the area. The modern super-stimulus of the END OF THE WORLD overloads those vulnerable circuits.

Maybe this is an effective way of extracting money from people - but also, maybe it is an unpleasant and unethical one. So, wannabe heroic Messiahs, please: take care. Screwing over your friends and associates by messing up their heads with a hostile and virulent meme complex may not be the greatest way to start out.

comment by Bongo · 2010-12-11T15:19:49.920Z · score: 14 (16 votes) · LW(p) · GW(p)

Do you also think that global warming is a hoax, that nuclear weapons were never really that dangerous, and that the whole concept of existential risks is basically a self-serving delusion?

Also, why are the folks that you disagree with the only ones that get to be described with all-caps narrative tropes? Aren't you THE LONE SANE MAN who's MAKING A DESPERATE EFFORT to EXPOSE THE TRUTH about FALSE MESSIAHS and the LIES OF CORRUPT LEADERS and SHOW THE WAY to their HORDES OF MINDLESS FOLLOWERS to AN ENLIGHTENED FUTURE? Can't you describe anything with all-caps narrative tropes if you want?

Not rhetorical questions; I'd actually like to read your answers.

comment by multifoliaterose · 2010-12-11T15:54:54.543Z · score: 1 (1 votes) · LW(p) · GW(p)

I laughed aloud upon reading this comment; thanks for lifting my mood.

comment by Vladimir_Nesov · 2010-12-11T15:48:18.859Z · score: 1 (1 votes) · LW(p) · GW(p)

So the real problem here is weakness of arguments, since they lack explanatory power by being able to "explain" too much.

comment by timtyler · 2010-12-11T15:48:26.773Z · score: 0 (8 votes) · LW(p) · GW(p)

Tim on global warming: http://timtyler.org/end_the_ice_age/

1-line summary - I am not too worried about that either.

Global warming is far more the subject of irrational fear-mongering than machine intelligence is.

It's hard to judge how at risk the world was from nuclear weapons during the cold war. I don't have privileged information about that. After Japan, we have not had nuclear weapons used in anger or war. That doesn't give much in the way of actual statistics to go on. Whatever estimate is best, confidence intervals would have to be wide. Perhaps ask an expert on the history of the era about this question.

The END OF THE WORLD is not necessarily an idea that benefits those who embrace it. If you consider the stereotypical END OF THE WORLD placard carrier, they are probably not benefiting very much personally. The benefit associated with the behaviour accrues mostly to the END OF THE WORLD meme itself. However, obviously, there are some people who benefit. 2012 - and all that.

The probability of the END OF THE WORLD arriving soon - if it is spelled out exactly what is meant by that - is a real number which could be scientifically investigated. However, whether the usual fundraising and marketing campaigns around the subject illuminate that subject more than they systematically distort it seems debatable.

comment by Desrtopa · 2010-12-11T16:07:40.741Z · score: 2 (2 votes) · LW(p) · GW(p)

Tim on global warming: http://timtyler.org/end_the_ice_age/

1-line summary - I am not too worried about that either.

This is a pretty optimistic way of looking at it, but unfortunately it's quite unfounded. Current scientific consensus is that we've already released more than enough greenhouse gases to avert the next glacial period. Melting the ice sheets and thus ending the ice age entirely is an extremely bad idea if we do it too quickly for global ecosystems to adapt.

comment by timtyler · 2010-12-11T16:16:06.477Z · score: -2 (4 votes) · LW(p) · GW(p)

We don't even really understand what causes the glacial cycles yet. This is an area where there are multiple competing hypotheses. I list four of these on my site. So, since we don't have a proper understanding of the mechanics involved with much confidence yet, we don't yet know what it would take to prevent them.

Here's what Dyson says on the topic:

We do not know how to answer the most important question: do our human activities in general, and our burning of fossil fuels in particular, make the onset of the next ice-age [sic] more likely or less likely? [...]

Until the causes of ice-ages are understood, we cannot know whether the increase of carbon-dioxide in the atmosphere is increasing or decreasing the danger.

I do not believe this is contrary to any "scientific consensus" on the topic. Where is this supposed "scientific consensus" of which you speak?

Melting the ice caps is inevitably an extremely slow process - due to thermal inertia. It is also widely thought to be a runaway positive feedback cycle - and so probably a phenomenon that it would be difficult to control the rate of.

comment by Desrtopa · 2010-12-11T16:22:45.876Z · score: 1 (1 votes) · LW(p) · GW(p)

Melting of the icecaps is now confirmed to be a runaway positive feedback process pretty much beyond a shadow of a doubt. Within the last few years, melting has occurred at a rate that exceeded the upper limits of our projection margins.

Have you performed calculations on what it would take to avert the next glacial period on the basis of any of the competing models, or did you just assume that ice ages are bad, so preventing them is good and we should thus work hard to prevent reglaciation? There's a reason why your site is the first and possibly only result in online searches for support of preventing glaciation, and it's not because you're the only one to think of it.

comment by timtyler · 2010-12-11T16:28:35.843Z · score: 0 (2 votes) · LW(p) · GW(p)

There are others who share my views - e.g.:

"If we could choose between the climate of today with a dry Sahara and the climate of 6,000 years ago with a wet Sahara, should we prefer the climate of today? My second heresy answers yes to the first question and no to the second. It says that the warm climate of 6,000 years ago with the wet Sahara is to be preferred, and that increasing carbon dioxide in the atmosphere may help to bring it back. I am not saying that this heresy is true. I am only saying that it will not do us any harm to think about it."

Global warming, then, is great because it protects us from the unpredictable big freeze that would be far, far worse.

GLOBAL WARMING: A Boon to Humans and Other Animals

They also say that "temperature increase is actually a good thing as in the past sudden cool periods have killed twice as many people as warm spells".

comment by Desrtopa · 2010-12-11T16:41:09.801Z · score: 1 (1 votes) · LW(p) · GW(p)

Why is being difficult to control glacial melting a point in the favor of increasing greenhouse gas emissions?

It's true that climate change models are limited in their ability to project climate change accurately, although they're getting better all the time. Unfortunately, the evidence currently suggests that they're undershooting actual warming rates even at their upper limits.

The pro-warming arguments on your site essentially boil down to "warm earth is better than cold earth, so we should try to warm the earth up." Regardless of the relative merits of a warmer or colder planet though, rapid change of climate is a major burden on ecosystems. Flooding and forest fires are relatively trivial effects, it's mass extinction events that are a real matter of concern.

comment by timtyler · 2010-12-11T16:52:05.286Z · score: 0 (0 votes) · LW(p) · GW(p)

Why is being difficult to control glacial melting a point in the favor of increasing greenhouse gas emissions?

That is hard to parse. You are asking why I think the rate of runaway positive feedback cycles is difficult to control? That is because that is often their nature.

It's true that climate change models are limited in their ability to project climate change accurately, although they're getting better all the time. Unfortunately, the evidence currently suggests that they're undershooting actual warming rates even at their upper limits.

You talk as though I am denying warming is happening. HUH?

The pro-warming arguments on your site essentially boil down to "warm earth is better than cold earth, so we should try to warm the earth up." Regardless of the relative merits of a warmer or colder planet though, rapid change of climate is a major burden on ecosystems. Flooding and forest fires are relatively trivial effects, it's mass extinction events that are a real matter of concern.

Right. So, if you want a stable climate, you need to end the yo-yo glacial cycles - and end the ice age. A stable climate is one of the benefits of doing that.

I have a section entitled "Climate stability" in my essay. To quote from it:

Ice age climates are inherently unstable. That is because they are characterised by positive feedback - and are prone to flipping between extreme states.

comment by Desrtopa · 2010-12-11T17:05:39.992Z · score: 1 (1 votes) · LW(p) · GW(p)

That is hard to parse. You are asking why I think the rate of runaway positive feedback cycles is difficult to control? That is because that is often their nature.

I have no idea how you got that out of my question. It's obvious why runaway positive feedback cycles would be hard to control, the question I asked is why this in any way supports global warming not being dangerous.

You talk as though I am denying warming is happening. HUH?

That was not something I meant to imply. My point is that you seem to have decided that it's better for our earth to be warm than cold, and thus that it's good to approach that state, but not done any investigation into whether what we're doing is a safe means of accomplishing that end; rather you seem to have assumed that we cannot do too much.

Right. So, if you want a stable climate, you need to end the yo-yo glacial cycles - and end the ice age. A stable climate is one of the benefits of doing that.

Most of the species on earth today have survived through multiple glaciation periods. Our ecosystems have that plasticity, because those species that were not able to cope with the rapid cooling periods died out. Global warming could lead to a stable climate, but it's also liable to cause massive extinction in the process, as climate zones shift in ways that they haven't in millions of years, at a rate far outside the tolerances of many ecosystems.

When it comes to global climate, there are really no "better" or "worse" states. Species adapt to the way things are. Cretaceous organisms are adapted to Cretaceous climates, Cenozoic organisms are adapted to Cenozoic climates, and either would have problems dealing with the other's climate. Humans more often suffer problems from being too cold than too hot, but we've scarcely had time to evolve since we left near-equatorial climates. We're adapted to be comfortable in hotter climates than the ones in which most people live today, but the species we rely on are mostly adapted to deal with the climates they're actually in, with cooling periods lying within the tolerances of ecosystems that have been forced to deal with them recently in their evolutionary history.

comment by timtyler · 2010-12-11T19:37:12.002Z · score: 0 (0 votes) · LW(p) · GW(p)

When it comes to global climate, there are really no "better" or "worse" states.

There most certainly are - from the perspective of individuals, groups, or species.

comment by Desrtopa · 2010-12-11T19:55:37.977Z · score: 1 (1 votes) · LW(p) · GW(p)

From the perspective of species, "better," is generally "maintain ecosystem status quo" and "worse" is everything else, except for cases where they come out ahead due to competitors suffering more heavily from the changes.

comment by timtyler · 2010-12-11T20:08:48.356Z · score: 1 (3 votes) · LW(p) · GW(p)

For most possible changes, a good rule of thumb is that half the agents affected do better than average, and half do worse than average.

Fitness is relative - and that's just what it means to consider an average value.

I go into all this in more detail on: http://timtyler.org/why_everything_is_controversial/

comment by Desrtopa · 2010-12-11T20:16:07.752Z · score: 0 (0 votes) · LW(p) · GW(p)

Roughly half of agents may have a better than average response to the change, but when rapid ecosystem changes occur, the average species response is negative. Particularly when accompanied by other forms of ecosystem pressure (which humanity is certainly exerting) rapid changes in climate tend to be accompanied by extinction spikes and decreases in species diversity.

comment by timtyler · 2010-12-11T20:28:16.171Z · score: 0 (4 votes) · LW(p) · GW(p)

I am not sure I am following. You are saying that such changes are bad - because they drive species towards extinction?

If you look at: http://alife.co.uk/essays/engineered_future/

...you will see that I expect the current mass extinction to intensify tremendously. However, I am not clear about how or why that would be bad. Surely it is a near-inevitable result of progress.

comment by Desrtopa · 2010-12-11T20:41:06.976Z · score: 0 (0 votes) · LW(p) · GW(p)

Rapid change drives species to extinction at a rate liable to endanger the function of ecosystems we rely on. Massive extinction events are in no way an inevitable consequence of improving the livelihoods of humans, although I'm not optimistic about our prospects of actually avoiding them.

Loss of a large percentage of the species on earth would hurt us, both in practical terms and as a matter of widely shared preference. As a species, we would almost certainly survive anthropogenic climate change even if it caused a runaway mass extinction event, but that doesn't mean that it's not an outcome that would be better to avoid if possible. Frankly, I don't expect legislation or social agitation ever to have an adequate impact in halting anthropogenic global warming; unless we come up with some really clever hack, the battle is going to be lost, but that doesn't mean that we shouldn't be aware of what we stand to lose, and take notice if any viable means of avoiding it arises.

comment by timtyler · 2010-12-11T20:56:51.749Z · score: 1 (1 votes) · LW(p) · GW(p)

The argument suggesting that we should move away from the "cliff edge" of reglaciation is that it is dangerous hanging around there - and we really don't want to fall off.

You seem to be saying that we should be cautious about moving too fast - in case we break something. Very well, I agree entirely - so: let us study the whole issue while moving as rapidly away from the danger zone as we feel is reasonably safe.

comment by Desrtopa · 2010-12-11T21:01:35.032Z · score: 2 (2 votes) · LW(p) · GW(p)

As I already noted, as best indicated by our calculations we have already overshot the goal of preventing the next glaciation period. Moving away from the danger zone at a reasonably safe pace would mean a major reduction in greenhouse gas emissions.

comment by timtyler · 2010-12-11T21:27:28.937Z · score: 3 (7 votes) · LW(p) · GW(p)

We don't know that. The science of this isn't settled. The Milankovitch hypothesis of glaciation is more band-aid than theory. See:

http://en.wikipedia.org/wiki/Milankovitch_cycles#Problems

CO2 apparently helps - but even that is uncertain. I would want to see a very convincing case that we are far enough from the edge for the risk of reglaciation to be over before advocating hanging around on the reglaciation cliff-edge. Short of eliminating the ice caps, it is difficult to imagine what would be convincing. Those ice caps are potentially major bad news for life on the planet - and some industrial CO2 is little reassurance - since that could relatively quickly become trapped inside plants and then buried.

comment by Desrtopa · 2010-12-12T01:02:16.082Z · score: 2 (2 votes) · LW(p) · GW(p)

The global ice caps have been around for millions of years now. Life on earth is adapted to climates that sustain them. They do not constitute "major bad news for life on this planet." Reglaciation would pose problems for human civilization, but the onset of glaciation occurs at a much slower rate than the warming we're already subjecting the planet to, and as such even if raising CO2 levels above what they've been since before the glaciations began in the Pleistocene were not enough to prevent the next round, it would still be a less pressing issue.

On a geological time scale, the amount of CO2 we've released could quickly be trapped in plants and buried, but with the state of human civilization as it is, how do you suppose that would actually happen quickly enough to be meaningful for the purposes of this discussion?

comment by timtyler · 2010-12-12T10:43:03.362Z · score: 1 (3 votes) · LW(p) · GW(p)

The global ice caps have been around for millions of years now. Life on earth is adapted to climates that sustain them. They do not constitute "major bad news for life on this planet."

The ice age is a pretty major problem for the planet. Huge ice sheets obliterate most life on the northern hemisphere continents every 100 thousand years or so.

Re: reglaciation being slow - the last reglaciation looked slower than the last melt. The one before that happened at about the same speed. However, they both look like runaway positive feedback processes. Once the process has started it may not be easy to stop it.

Thinking of reglaciation as "not pressing" seems like a quick way to get reglaciated. Humans have got to intervene in the planet's climate and warm it up in order to avoid this disaster. Leaving the climate alone would be a recipe for reglaciation. Pumping CO2 into the atmosphere may have saved us from disaster already, may save us from disaster in the future, may merely be a step in the right direction - or may be pretty ineffectual. However, it is important to realise that humans have got to take steps to warm the planet up - otherwise our whole civilisation may be quickly screwed.

We don't know that industrial CO2 will protect us from reglaciation - since we don't yet fully understand the latter process - though we do know that reglaciation devastates the planet like clockwork, and so has an astronomical origin.

The atmosphere has a CO2 decay function with an estimated half-life of somewhere between 20-100 years. It wouldn't vanish overnight - but a lot of it could go pretty quickly if civilisation problems resulted in a cessation of production.
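The half-life figures above imply simple exponential decay. A rough sketch of what that range means (the 20-100 year half-lives are the comment's own figures; the function name and the hundred-year horizon are just illustrative, and real carbon-cycle models are considerably more complicated than a single exponential):

```python
def co2_remaining(initial_fraction, years, half_life_years):
    """Fraction of an initial CO2 pulse remaining after `years`,
    assuming simple exponential decay with the given half-life."""
    return initial_fraction * 0.5 ** (years / half_life_years)

# With the quoted 20-100 year half-life range, the share of a pulse
# still airborne after a century spans a wide interval:
print(co2_remaining(1.0, 100, 20))   # five half-lives: ~3% remains
print(co2_remaining(1.0, 100, 100))  # one half-life: 50% remains
```

The point of the sketch is just that the quoted uncertainty in half-life translates into more than an order of magnitude of uncertainty in how much CO2 persists on century timescales.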

comment by NancyLebovitz · 2010-12-11T22:00:14.477Z · score: 2 (2 votes) · LW(p) · GW(p)

If reglaciation starts, could it be stopped by sprinkling coal dust on some of the ice?

comment by timtyler · 2010-12-11T22:10:04.377Z · score: 1 (3 votes) · LW(p) · GW(p)

Hopefully - if we have enough of a civilisation at the time. Reglaciation seems likely to only really be a threat after a major disaster or setback - I figure. Otherwise, we can just adjust the climate controls. The chances of such a major setback may seem slender - but perhaps are not so small that we can afford to be blasé about the matter. What we don't want is to fall down the stairs - and then be kicked in the teeth.

I discuss possible therapeutic interventions on: http://timtyler.org/tundra_reclamation/

The main ones listed are planting northerly trees and black ground sheets.

comment by Vladimir_Nesov · 2010-12-11T23:19:50.939Z · score: 0 (0 votes) · LW(p) · GW(p)

We don't know a great many things, but what to do right now we must decide right now, based on whatever we happen to know. (This is to address the reason for Desrtopa's comment, not any problem with your comment on this topic, which I'm completely ignorant about.)

comment by timtyler · 2010-12-11T20:48:26.151Z · score: 0 (0 votes) · LW(p) · GW(p)

If you are concerned about loss of potentially valuable information in the form of species extinction, global warming seems like total fluff. Look instead to habitat destruction and decimation, farming practices, and the redistribution of pathogens, predators and competitors by humans.

comment by Desrtopa · 2010-12-11T20:53:22.263Z · score: 0 (2 votes) · LW(p) · GW(p)

I do look at all these issues. I've spoken at conferences about how they receive too little attention relative to the danger they pose. That doesn't mean that global warming does not stand to cause major harm, and going on the basis of the content of your site, you don't seem to have invested adequate effort into researching the potential dangers, only the potential benefits.

comment by timtyler · 2010-12-11T21:08:15.978Z · score: -4 (6 votes) · LW(p) · GW(p)

Really? Don't you hear enough about the supposed potential dangers from elsewhere already?!? I certainly do.

Be advised that I have pretty minimal effort to invest in discussing global warming these days. It is a big environmentalist scam. I list it at the top of my list of bad causes here.

The only reason worth discussing it - IMO - is if you are trying to direct all the resources that are being senselessly squandered on it towards better ends. You show little sign of doing that. Rather you appear to be caught up in promoting the lunacy. Global warming is mostly fluff. WAKE UP! Bigger fish are frying.

comment by Desrtopa · 2010-12-12T00:32:36.883Z · score: 2 (2 votes) · LW(p) · GW(p)

Frankly, reviewing the content of your site strongly leads me to suspect that your position is not credible; you consistently fail to accurately present what scientists consider to be the reasons for concern. When you claim that global warming is mostly fluff, I already have stronger reason than usual to suspect that you haven't come by your conclusion from an unbiased review of the data.

I would care much less about bothering to convince you if you were not hosting a website for the purpose of convincing others to support furthering global warming. Despite the fact that there is now virtually no controversy that global warming exists, that we are causing it, and that the negatives will outweigh the positives, many people will take any excuse to dismiss it.

comment by timtyler · 2010-12-12T10:57:48.048Z · score: -1 (3 votes) · LW(p) · GW(p)

you consistently fail to accurately present what scientists consider to be the reasons for concern

My site doesn't go into the down-sides of global warming sufficiently for you?!? My main purpose is to explain the advantages. My site is part of the internet. You can obtain a huge mountain of information about the disadvantages by following the links. I do not expect readers to use my site as their sole source of information on the topic.

despite the fact that there is now virtually no controversy that global warming exists, that we are causing it, and that the negatives will outweigh the positives, many people will take any excuse to dismiss it.

There is little controversy that global warming exists. There's quite a bit more controversy about the role of humans - though my personal view is that humans are implicated. Relatively few people seem to have given much thought to the optimal temperature of the planet for living systems. Certainly very few of those involved in the "global warming movement".

Anyway, global warming is good. It may have saved the planet from reglaciation already, or it may do so in the future, but without global warming, the planet and our civilisation would be screwed for a long time, with high probability. The effects so far have been pretty minuscule, though. We have to carry on warming up the planet on basic safety grounds. The only issue I see is: "how fast".

comment by Desrtopa · 2010-12-12T15:54:23.599Z · score: 1 (1 votes) · LW(p) · GW(p)

My site doesn't go into the down-sides of global warming sufficiently for you?!? My main purpose is to explain the advantages. My site is part of the internet. You can obtain a huge mountain of information about the disadvantages by following the links. I do not expect readers to use my site as their sole source of information on the topic.

Your representations of the downsides (forest fires and floods) are actively misleading to readers. You create a false comparison by weighing all the most significant pros you can raise against a few of the more trivial cons, and encourage readers to make a judgment on that basis. Remember that people tend to actively seek out information sources that support their own views, and once they have adopted a view, seeing arguments for it falsified will not tend to revise their confidence downwards. Additionally, there is no shortage of people who will seize on any remotely credible sounding excuse to vindicate them from having to worry about global warming. Your site runs counter to the purpose of informing the public.

Anyway, global warming is good. It may have saved the planet from reglaciation already, or it may do so in the future, but without global warming, the planet and our civilisation would be screwed for a long time, with high probabality. The effects so far have been pretty miniscule, though. We have to carry on warming up the planet on basic safety grounds. The only issue I see is: "how fast".

If global warming has already prevented reglaciation (which, as I have stated repeatedly, is most likely the case,) then why should we continue warming the planet? Unless we slow it down by orders of magnitude, the warming is occurring faster than the geological timescales on which ecosystems without technological protection are equipped to cope with it. Onset of glaciation is much slower than the progress of global warming. We are closer to the "cliff edge" of warming the world too fast than we are to reglaciation.

You've already stated that you consider global warming to be a storm in a teacup because you expect to see weather control before it becomes a serious issue, but even assuming that's realistic, it gives us a larger time window to deal with reglaciation, should we turn out not to have already prevented it.

comment by timtyler · 2010-12-12T16:30:08.748Z · score: 1 (3 votes) · LW(p) · GW(p)

there is no shortage of people who will seize on any remotely credible sounding excuse to vindicate them from having to worry about global warming

There is no shortage of people with smoke coming out of their ears about the issue either.

Look, I don't just mention fires and floods. I mention sea-level rise, coral reef damage, desertification, heatstroke, and a number of other disadvantages. However, the disadvantages of GW are not the main focus of my article - you can find them on a million other web sites.

If global warming has already prevented reglaciation (which, as I have stated repeatedly, is most likely the case,) then why should we continue warming the planet?

Repeating something doesn't make it true. Show that the risk is so small that we need no longer concern ourselves with the huge catastrophe it would represent, and I swear, I will not mention it again. The current situation - as I understand it - is that our understanding of glaciation cycles is weak, our understanding of climate dynamics is weak, and we have little idea of what risk of reglaciation we face.

Of course, a warmer planet will still be a healthier and better one - risk of glaciation or no - but then warming it up becomes less urgent.

Onset of glaciation is much slower than the progress of global warming. We are closer to the "cliff edge" of warming the world too fast than we are to reglaciation.

There isn't a "cliff-edge" of warming nearby - AFAICS. If you look at the temperature graph of the last million years, we are at a high point - and about to fall off a cliff into a glacial phase. The other direction we can only "fall" in by rearranging the continents, so there isn't land over the south pole, and a land-locked region around the north pole. With ice-age continental positions, warming the planet is more like struggling uphill.

One thing we could try and do about that is to destroy the Isthmus of Panama. However, that needs further research - and might be a lot of work.

You've already stated that you consider global warming to be a storm in a teacup because you expect to see weather control before it becomes a serious issue, but even assuming that's realistic, it gives us a larger time window to deal with reglaciation, should we turn out not to have already prevented it.

Indeed. So, I only have one page about the topic, and thousands of pages about other things. As I said, this area is only of interest as a bad cause, really.

comment by Desrtopa · 2010-12-12T16:44:18.140Z · score: 2 (2 votes) · LW(p) · GW(p)

There is no shortage of people with smoke coming out of their ears about the issue either.

Look, I don't just mention fires and floods. I mention sea-level rise, coral reef damage, desertification, heatstroke, and a number of other disadvantages. However, the disadvantages of GW are not the main focus of my article - you can find them on a million other web sites.

It's true that there are people with an unrealistic view of the dangers that global warming poses, and hyperbolic reactions may stand to hurt the cause of getting people to take it seriously, but your expectations of how seriously people ought to be taking it are artificially low.

You don't just mention fires and floods, this is true, but you invite readers to make a comparison of the pros and cons without accurately presenting the cons. If you don't want to mislead people, you should either be presenting them more accurately, or outright telling people "This site does not accurately present the cons of global warming, and you should not form an opinion on the basis of this site without doing further research at sites not dedicated to arguing against the threats it may pose."

You can claim that your readers are free to research the issue elsewhere, but what you are doing is encouraging them to be informationally irresponsible.

There isn't a "cliff-edge" of warming nearby - AFAICS. If you look at the temperature graph of the last million years, we are at a high point - and about to fall off a cliff into a glacial phase. The other direction we can only "fall" in by rearranging the continents, so there isn't land over the south pole, and a land-locked region around the north pole. With ice-age continental positions, warming the planet is more like struggling uphill.

As you have already noted earlier in this debate, ice sheet melting is a runaway positive feedback process. Once we have started the cycle, warming the planet is not an uphill struggle. If you look at the temperature graph of the last million years, you will not see that we are about to fall into another glaciation period, because the climate record of the last million years does not reflect the current situation. Even if greenhouse gas levels halted where they are now, temperature would continue to rise to meet the level they set. Our climate situation is more reflective of the Pliocene, which did not have cyclical glaciation periods.

Repeating something doesn't make it true. Show that the risk is so small that we need no longer concern ourselves with the huge catastrophe it would represent, and I swear, I will not mention it again. The current situation - as I understand it - is that our understanding of glaciation cycles is weak, our understanding of climate dynamics is weak, and we have little idea of what risk of reglaciation we face.

I don't have the articles on which I based the statement; I would need access to materials from courses I've already graduated. I may be able to get the data by contacting old professors, but there's a significant hassle barrier, particularly since I'm arguing with someone who I do not have a strong expectation of being receptive to new information. If I do retrieve the information, do you commit to revising your site to reflect it, or removing the site as obsolete?

comment by timtyler · 2010-12-12T17:08:04.287Z · score: 1 (3 votes) · LW(p) · GW(p)

You can claim that your readers are free to research the issue elsewhere, but what you are doing is encouraging them to be informationally irresponsible.

You what? You are starting to irritate me with your unsolicited editorial advice. Don't like my web page - go and make your own!

As you have already noted earlier in this debate, ice sheet melting is a runaway positive feedback process.

Yes - from the point of full glaciation to little glaciation, ice core evidence suggests this process is a relatively unstoppable one-way slide. However, we have gone down that slide already in this glacial cycle. Trying to push the temperature upward from there hasn't happened naturally for millions of years. That doesn't look so easy - while we still have an ice-age continental configuration.

If you look at the temperature graph of the last million years, you will not see that we are about to fall into another glaciation period, because the climate record of the last million years does not reflect the current situation.

That sounds more like ignoring the temperature graph - and considering other information - to me.

If I do retrieve the information, do you commit to revising your site to reflect it, or removing the site as obsolete?

You are kidding, presumably. The information needs to be compelling evidence that there isn't a significant risk. Whether you retrieve it or not seems likely to be a minor factor.

As I previously explained, a warmer planet would still be better, reglaciation risk or no. More of the planet would be habitable, fewer would die of pneumonia, there would be fewer deserts - and so on. However, the reglaciation risk does add urgency to the situation.

comment by Desrtopa · 2010-12-12T17:27:15.540Z · score: 1 (1 votes) · LW(p) · GW(p)

You what? You are starting to irritate me with your unsolicited editorial advice. Don't like my web page - go and make your own!

Given that people tend to seek out informational sources that flatter their biases, and your website is one of relatively few sources attempting to convince people that the data suggests global warming is positive, making another website has less utility to me than convincing you to revise yours.

That sounds more like ignoring the temperature graph - and considering other information - to me.

It is considering the temperature graph insofar as it is relevant, while incorporating other information that is more reflective of our current situation. Would you try to extrapolate technological advances in the next twenty years based on the average rate of technological advancement over the last six hundred?

Yes - from the point of full glaciation to little glaciation, ice core evidence suggests this process is a relatively unstoppable one-way slide. However, we have gone down that slide already in this glacial cycle. Trying to push the temperature upward from there hasn't happened naturally for millions of years. That doesn't look so easy - while we still have an ice-age continental configuration.

We had more or less the same continental configuration in the Pliocene, without cyclical glaciation periods.

You are kidding, presumably. The information needs to be compelling evidence that there isn't a significant risk. Whether you retrieve it or not seems likely to be a minor factor.

As I previously explained, a warmer planet would still be better, reglaciation risk or no. More of the planet would be habitable, fewer would die of pneumonia, there would be fewer deserts - and so on. However, the reglaciation risk does add urgency to the situation.

Climate zones would shift, necessitating massive relocations which would cause major infrastructure damage. If we alter the earth's temperature to make larger proportions of the land mass optimally biologically suitable to humans, we are going to do tremendous damage to species that are not adapted to near-equatorial habitation, and the impact on those other species is going to cause more problems than the relatively small drop in rates of pneumonia.

How compelling would you expect the evidence to be in order for you to be willing to revise the content of your site?

comment by timtyler · 2010-12-12T17:48:20.374Z · score: -1 (1 votes) · LW(p) · GW(p)

We had more or less the same continental configuration in the Pliocene, without cyclical glaciation periods.

I am not sure what you are getting at there. Continental configuration is one factor. There may be other ones - including astronomical influences.

How compelling would you expect the evidence to be in order for you to be willing to revise the content of your site?

Why do you think it needs revising? A warmer planet looks remarkably positive - while keeping the planet in the freezer does not. I am not very impressed by the argument that change is painful, so we should keep the planet in an ice-age climate. Anyway, lots of evidence that global warming is undesirable - and that a half-frozen planet is good - would cause me to revise my position. However, I don't seriously expect that to happen. That is a misguided lie - and a cause of much wasted energy and resources on the planet.

comment by Desrtopa · 2010-12-12T18:08:06.649Z · score: 1 (1 votes) · LW(p) · GW(p)

I am not sure what you are getting at there. Continental configuration is one factor. There may be other ones - including astronomical influences.

You are proposing an additional mechanism for a phenomenon that we can now model with considerable confidence without it.

Why do you think it needs revising? Global warming looks remarkably positive - while keeping the planet in the freezer does not. I am not very impressed by the argument that change is painful, so we should keep the planet in an ice-age climate. Anyway, lots of evidence that global warming is undesirable - and that a half-frozen planet is good - would cause me to revise my position. However, I don't expect that to happen. That is a misguided lie - and a cause of much wasted energy and resources on the planet.

Since you do not seem prepared to acknowledge the more significant dangers of rapid climate change, while substantially overstating the benefits of altering the entire planet to suit the biological preferences of a species that adapted to life near the equator, I doubt you would be swayed by the amount of evidence that should be (and, I assert, is) forthcoming if you are wrong. I expect that most of the other members here already regard your position with an appropriate degree of skepticism, so while it frustrates me to leave the matter at rest while I am convinced that your position is untenable, I don't see any benefit in continuing to participate in this debate.

comment by timtyler · 2010-12-12T18:17:48.192Z · score: 0 (2 votes) · LW(p) · GW(p)

Continental configuration is not an unnecessary "additional mechanism". It is a well known factor - see:

http://en.wikipedia.org/wiki/Ice_age#Position_of_the_continents

Thanks for the link, though. It will be interesting to see whether their ideas stand up to peer review. If so, it seems like bad news - the authors seem to think the forces that lead to reglaciation are due to kick in around about now.

I don't see any benefit in continuing to participate in this debate.

OK, then - bye!

comment by NancyLebovitz · 2010-12-12T17:14:32.328Z · score: 0 (0 votes) · LW(p) · GW(p)

Do you have a feeling for how much of the planet would be temperate if it were warmer?

comment by timtyler · 2010-12-12T17:23:27.782Z · score: 0 (2 votes) · LW(p) · GW(p)

Imagine http://en.wikipedia.org/wiki/Temperateness with the bands further towards the poles.

comment by Desrtopa · 2010-12-12T17:35:20.342Z · score: 1 (1 votes) · LW(p) · GW(p)

And a bit further away from the equator.

The reason for the shift to the term "climate change" over "global warming" is that climate zones would shift. Places that were previously not arable would become so, but places that previously were arable would cease to be. It would require a significant restructuring of our civilization to chase around the climate zones with the appropriate infrastructure.

comment by timtyler · 2010-12-12T18:09:08.289Z · score: 0 (2 votes) · LW(p) · GW(p)

And a bit further away from the equator.

Looking at a map, the warming occurs quite a bit more near the poles - and over land masses. So, mostly Canada and Russia will get less frosty and more cosy. That seems good. It would be hard to imagine a more positive kind of climate change than one that makes our large and mostly barren northern wastelands more productive and habitable places.

It would require a significant restructuring of our civilization to chase around the climate zones with the appropriate infrastructure.

Right - over tens of thousands of years, probably. The Antarctic is pretty thick.

comment by Vaniver · 2010-12-11T21:11:37.246Z · score: 2 (4 votes) · LW(p) · GW(p)

WAKE UP!

While I can't speak for everyone, I strongly suspect presenting things like this makes your case less persuasive.

comment by timtyler · 2010-12-11T21:18:43.293Z · score: 0 (4 votes) · LW(p) · GW(p)

A dead-pan presentation washes out my character - and means I have to be more wordy to get across my intended message. Who knows if a slap will work? Not me. Consider it an experiment.

comment by TheOtherDave · 2010-12-11T21:54:38.740Z · score: 4 (4 votes) · LW(p) · GW(p)

Consider it an experiment.

All right. In that case: what would you consider meaningful experimental results, and what would they demonstrate?

comment by timtyler · 2010-12-11T21:59:48.976Z · score: -9 (9 votes) · LW(p) · GW(p)

This is my experiment - and, alas, you will have to leave me to it.

comment by Vaniver · 2010-12-11T22:30:56.708Z · score: 1 (1 votes) · LW(p) · GW(p)

A dead-pan presentation washes out my character - and means I have to be more wordy to get across my intended message.

I would be wary of overloading your message. Consider this as a core:

"My heavily researched impression is that global warming, while somewhat concerning, is significantly less important than many other concerns."

Now, we dress that up. Notice the impact the following change has on you:

"I, Vaniver, have heavily researched global warming, and my impression is that while it is somewhat concerning, it is significantly less important than many other concerns."

Most of the time, putting yourself into the message reduces persuasiveness, and the places where it doesn't probably don't occur during abstract debates. (Persuading someone to share their lunch with you is an example of an appropriate time, but I wouldn't call it an abstract debate.) So while some character is inevitable, I would generally seek to err on the side of suppressing it rather than expressing it.

comment by timtyler · 2010-12-11T19:34:53.467Z · score: -1 (3 votes) · LW(p) · GW(p)

the question I asked is why this in any way supports global warming not being dangerous

Global warming seems a lot less dangerous than reglaciation.

My point is that you seem to have decided that it's better for our earth to be warm than cold, and thus that it's good to approach that state, but not done any investigation into whether what we're doing is a safe means of accomplishing that end; rather you seem to have assumed that we cannot do too much.

Actually, I expect us to master climate control fairly quickly. That is another reason why global warming is a storm in a teacup. However, the future is uncertain. We might get unlucky - and be hit by a fair-sized meteorite. If that happens, reglaciation is about the last thing we would want for dessert.

comment by nshepperd · 2010-12-12T01:13:35.134Z · score: 2 (2 votes) · LW(p) · GW(p)

"Fairly quickly"? What if we don't? Do you expect reglaciation to occur within the next 100 years, 200 years? If not we can wait until we have the knowledge to pull off climate control safely. (And if we do get hit by an asteroid, the last thing we probably want is runaway climate change started when we didn't know what we were doing either.)

comment by timtyler · 2010-12-12T11:11:17.444Z · score: -2 (4 votes) · LW(p) · GW(p)

If things go according to plan, we get climate control - and then need to worry little about either warming or reglaciation. The problem is things not going according to plan.

And if we do get hit by an asteroid, the last thing we probably want is runaway climate change started when we didn't know what we were doing either.

Indeed. The "runaway climate change" we are scheduled for is reglaciation. The history of the planet is very clear on this topic. That is exactly what we don't want. A disaster followed by glaciers descending over the northern continents could make a mess of civilisation for quite a while. Warming, by contrast, doesn't represent a significant threat - living systems including humans thrive in warm conditions.

comment by Desrtopa · 2010-12-12T16:22:19.434Z · score: 3 (3 votes) · LW(p) · GW(p)

Living systems including humans also thrive in cold conditions. Most species on the planet today have persisted through multiple glaciation periods, but not through pre-Pleistocene level warmth or rapid warming events.

Plus, the history of the Pleistocene, in which our record of glaciation exists, contains no events of greenhouse gas release and warming comparable to the one we're in now; this is not business as usual on the track to reglaciation. Claiming that the history of the planet is very clear that we're headed for reglaciation is flat-out misleading. The last time the world had CO2 levels as high as they are now, it wasn't going through cyclical glaciation.

comment by timtyler · 2010-12-12T16:53:25.990Z · score: -1 (1 votes) · LW(p) · GW(p)

Most species on the planet are less than 2.5 million years old?!?

I checked and found: "The fossil record suggests an average species lifespan of about five million years" and "Average species lifespan in fossil record: 4 million years." (search for sources).

So, I figure your claim is probably factually incorrect. However, isn't it a rather meaningless statistic anyway? It depends on how often lineages speciate. That actually says very little about how long it takes to adapt to an environment.

comment by Desrtopa · 2010-12-12T17:09:24.271Z · score: 1 (1 votes) · LW(p) · GW(p)

The average species age is necessarily lower than the average species duration, just as the average age of people now living is lower than the average human lifespan.

Additionally, the fossil record measures species in paleontological terms: a paleontological "species" is not a species in biological terms, but a group whose members cannot be distinguished from one another by fossilized remains. Paleontological species duration sets the upper bound on biological species duration; in practice, biological species duration is shorter.

Species originating more than 2.5 million years ago which were not capable of enduring glaciation periods would have died out when they occurred. The origin window for species without adaptations to cope is the last ten thousand years. Any species with a Pleistocene origin or earlier has persisted through glaciation periods.

comment by Vaniver · 2010-12-11T17:07:29.792Z · score: 0 (0 votes) · LW(p) · GW(p)

That is hard to parse. You are asking why I think the rate of runaway positive feedback cycles is difficult to control? That is because that is often their nature.

Allow me to try: There are positive feedback cycles which appear to be going in runaway mode. Why is this evidence for "things are going to get better" rather than "things are going to get worse"?

Your argument as a whole- "we need to get above this variability regime into a stable regime"- answers why the runaway positive feedback loop would be desirable, but does not convincingly establish (the part I've read, at least, you may do this elsewhere) that the part above the current variability is actually a stable attractor, instead of us shooting to up to Venus's climate (or something less extreme but still regrettable for humans).

comment by timtyler · 2010-12-11T19:30:51.098Z · score: 0 (0 votes) · LW(p) · GW(p)

but does not convincingly establish (the part I've read, at least, you may do this elsewhere) that the part above the current variability is actually a stable attractor, instead of us shooting to up to Venus's climate (or something less extreme but still regrettable for humans).

Well, we already know what the planet is like when it is not locked into a crippling ice age. Ice-cap free is how the planet has spent the vast majority of its history. We have abundant records about that already.

comment by timtyler · 2010-12-11T19:28:50.859Z · score: 0 (0 votes) · LW(p) · GW(p)

Why is this evidence for "things are going to get better" rather than "things are going to get worse"?

That's the whole "ice age: bad / normal planet: good" notion. I figure a planet locked into a crippling era of catastrophic glacial cycles is undesirable.

comment by Roko · 2010-12-11T12:37:57.899Z · score: 5 (5 votes) · LW(p) · GW(p)

Point of fact: the negative singularity isn't a superstimulus for evolved fear circuits: current best-guess would be that it would be a quick painless death in the distant future (30 years+ by most estimates, my guess 50 years+ if ever). It doesn't at all look like how I would design a superstimulus for fear.

comment by timtyler · 2010-12-11T12:44:49.392Z · score: 2 (2 votes) · LW(p) · GW(p)

It typically has the feature that you, all your relatives, friends and loved ones die - probably enough for most people to seriously want to avoid it. Michael Vassar talks about "eliminating everything that we value in the universe".

Maybe better super-stimuli could be designed - but there are constraints. Those involved can't just make up the apocalypse that they think would be the most scary one.

Despite that, some positively hell-like scenarios have been floated around recently. We will have to see if natural selection on these "hell" memes results in them becoming more prominent - or whether most people just find them too ridiculous to take seriously.

comment by wedrifid · 2010-12-11T14:05:45.569Z · score: 2 (2 votes) · LW(p) · GW(p)

Maybe better super-stimuli could be designed - but there are constraints.

Yes, you can only look at them through a camera lens, as a reflection in a pool or possibly through a ghost! ;)

comment by Roko · 2010-12-11T12:55:06.811Z · score: 1 (3 votes) · LW(p) · GW(p)

I think you're trying to fit the facts to the hypothesis. Negative singularity in my opinion is at least 50 years away. Many people I know will already be dead by then, including me if I die at the same point in life as the average of my family.

And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total as far as I can see).

It is also not well-optimized to be believable.

comment by XiXiDu · 2010-12-11T16:15:52.117Z · score: 3 (5 votes) · LW(p) · GW(p)

And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus...

It doesn't work. Jehovah's Witnesses don't even believe in a hell, and they are gaining a lot of members each year while donations are on the rise. Donations are not mandatory either; you are just asked to donate if possible. The only incentive they use is positive incentive.

People will do anything for their country if it asks them to give their lives. Suicide bombers also do not blow themselves up because of negative incentive, but because they are promised help and money for their families. Also, some believe that they will enter paradise. Negative incentive makes many people reluctant. There is much less crime in the EU than in the U.S., which has the death penalty. Here you get out of jail after at most ~20 years, and there's almost no violence in jails either.

comment by wedrifid · 2010-12-11T14:03:53.409Z · score: 0 (0 votes) · LW(p) · GW(p)

Negative singularity in my opinion is at least 50 years away.

I take it that you would place (t(positive singularity) | positive singularity) a significant distance further still?

And as a matter of fact it is failing to actually get much in the way of donations, compared to donations to the church which is using hell as a superstimulus, or even compared to campaigns to help puppies (about $10bn in total as far as I can see).

This got a wry smile out of me. :)

comment by Roko · 2010-12-11T14:12:16.396Z · score: 0 (0 votes) · LW(p) · GW(p)

(t(positive singularity) | positive singularity)

I'm going to say 75 years for that. But really, this is becoming very much total guesswork.

I do know that AGI -ve singularity won't happen in the next 2 decades and I think one can bet that it won't happen after that for another few decades either.

comment by wedrifid · 2010-12-11T14:20:06.921Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm going to say 75 years for that. But really, this is becoming very much total guesswork.

It's still interesting to hear your thoughts. My hunch is that the difficulty of the -ve --> +ve step is much harder than the 'singularity' step so I would expect the time estimates to reflect that somewhat. But there are all sorts of complications there and my guesswork is even more guess-like than yours!

I do know that AGI -ve singularity won't happen in the next 2 decades and I think one can bet that it won't happen after that for another few decades either.

If you find anyone who is willing to take you up on a bet of that form given any time estimate and any odds then please introduce them to me! ;)

comment by Roko · 2010-12-11T14:26:52.042Z · score: 0 (0 votes) · LW(p) · GW(p)

Many plausible ways to S^+ involve something odd or unexpected happening. WBE might make computational political structures, i.e. political structures based inside a computer full of WBEs. This might change the way humans cooperate.

Suffices to say that FAI doesn't have to come via the expected route of someone inventing AGI and then waiting until they invent "friendliness theory" for it.

comment by timtyler · 2010-12-11T13:07:58.503Z · score: 0 (2 votes) · LW(p) · GW(p)

Church and cute puppies are likely worse causes, yes. I listed animal charities in my "Bad causes" video.

I don't have their budget at my fingertips - but SIAI has raked in around 200,000 dollars a year for the last few years. Not enormous - but not trivial. Anyway, my concern is not really with the cash, but with the memes. This is a field adjacent to one I am interested in: machine intelligence. I am sure there will be a festival of fear-mongering marketing in this area as time passes, with each organisation trying to convince consumers that its products will be safer than those of its rivals. "3-laws-safe" slogans will be printed. I note that Google's recent chrome ad was full of data destruction images - and ended with the slogan "be safe".

Some of this is potentially good. However, some of it isn't - and is more reminiscent of the Daisy ad.

comment by Roko · 2010-12-11T16:41:30.139Z · score: 8 (12 votes) · LW(p) · GW(p)

To me, $200,000 for a charity seems to be pretty much the smallest possible amount of money. Can you find any charitable causes that receive less than this?

Basically, you are saying that SIAI DOOM fearmongering is a trick to make money. But really, it fails to satisfy several important criteria:

  • it is shit at actually making money. I bet you that there are "save the earthworm" charities that make more money.

  • it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.

  • it is not optimized for believability. In fact it is almost optimized for anti-believability, "rapture of the nerds", much public ridicule, etc.

comment by Roko · 2010-12-11T16:47:21.028Z · score: 4 (4 votes) · LW(p) · GW(p)

A moment's googling finds this:

http://www.buglife.org.uk/Resources/Buglife/Buglife%20Annual%20Report%20-%20web.pdf

"Total Income £546,415"

($863,444)

I leave it to readers to judge whether Tim is flogging a dead horse here.

comment by wedrifid · 2010-12-11T16:53:31.216Z · score: 3 (3 votes) · LW(p) · GW(p)

it is not actually frightening. I am not frightened; quick painless death in 50 years? boo-hoo. Whatever.

Not the sort of thing that could, you know, give you nightmares?

comment by Roko · 2010-12-11T20:55:24.567Z · score: 4 (4 votes) · LW(p) · GW(p)

The sort of thing that could give you nightmares is more like the stuff that is banned. This is different than the mere "existential risk" message.

comment by timtyler · 2010-12-11T19:50:33.229Z · score: -2 (8 votes) · LW(p) · GW(p)

Alas, I have to reject your summary of my position. The situation as I see it:

  • DOOM-based organisations are likely to form with a frequency which depends on the extent to which the world is perceived to be at risk;

  • They are likely to form from those with the highest estimates of p(DOOM);

  • Once they exist, they are likely to try and grow, much like all organisations tend to do - wanting attention, time, money and other available resources;

  • Since they are funded in proportion to the perceived value of p(DOOM), such organisations will naturally promote the notion that p(DOOM) is a large value.

This is all fine. I accept that DOOM-based organisations will exist, will loudly proclaim the coming apocalypse, and will find supporters to help them propagate their DOOM message. They may be ineffectual, cause despair and depression or help save the world - depending on their competence - and on to what extent their paranoia turns out to be justified.

However, such organisations seem likely to be very bad sources of information for anyone interested in the actual value of p(DOOM). They have obvious vested interests.

comment by Roko · 2010-12-12T00:16:29.694Z · score: 4 (6 votes) · LW(p) · GW(p)

Agreed that x-risk orgs are a biased source of info on P(risk) due to self-selection bias. Of course you have to look at other sources of info, you have to take the outside view on these questions, etc.

Personally I think that we are so ignorant and irrational as a species (humanity) and as a culture that there's simply no way to get a good, stable probability estimate for big important questions like this, much less to act rationally on the info.

But I think your pooh-pooh'ing of such infantile and amateurish efforts as there are is silly when the reasoning is entirely bogus.

Why don't you refocus your criticism on the more legitimate weakness of existential risk efforts: that they are highly likely to be irrelevant (either futile or unnecessary), since by their own prediction, the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely to either understand the issues or cooperate on them.

The most likely route to survival would seem to be that the entire model of the future propounded here is wrong. But in that case we move into the domain of irrelevance.

comment by timtyler · 2010-12-12T11:20:48.540Z · score: 0 (2 votes) · LW(p) · GW(p)

I think your pooh-pooh'ing of such infantile and amateurish efforts as there are is silly when the reasoning is entirely bogus.

I hope I am not "pooh-pooh'ing". There do seem to be a number of points on which I disagree. I feel a bit as though I am up against a propaganda machine - or a reality distortion field. Part of my response is to point out that the other side of the argument has vested interests in promoting a particular world view - and so its views on the topic should be taken with multiple pinches of salt.

Why don't you refocus your criticism on the more legitimate weakness of existential risk efforts: that they are highly likely to be irrelevant (either futile or unnecessary), since by their own prediction, the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely to either understand the issues or cooperate on them.

I am not sure I understand fully - but I think the short answer is because I don't agree with that. What risks there are, we can collectively do things about. I appreciate that it isn't easy to know what to do, and am generally supportive and sympathetic towards efforts to figure that out.

Probably my top recommendation on that front so far is corporate reputation systems. We have these huge, powerful creatures lumbering around on the planet, and governments provide little infrastructure for tracking their bad deeds. Reviews and complaints scattered around the internet is just not good enough. If there's much chance of corporation-originated intelligent machines, reputation-induced cooperation would help encourage these entities to be good and do good.

If our idea of an ethical corporation is one whose motto is "don't be evil", then that seems to be a pretty low standard. We surely want our corporations to aim higher than that.

comment by NancyLebovitz · 2010-12-12T15:12:51.698Z · score: 3 (3 votes) · LW(p) · GW(p)

One important aspect of corporate reputation is what it's like to work there - and this is important at the department level and below.

Abusive work environments cause a tremendous amount of misery, and there's no reliable method of finding out whether a job is likely to land you in one.

This problem is made worse if leaving a job makes a potential employee seem less reliable.

Another aspect of a universal reputation system is that there needs to be some method of updating and verification. Credit agencies are especially notable for being sloppy.

comment by Roko · 2010-12-12T11:56:10.598Z · score: 0 (0 votes) · LW(p) · GW(p)

What risks there are, we can collectively do things about.

Not necessarily. The risk might be virtually unstoppable, like a huge oil tanker compared to the force of a single person swimming in the water trying to slow it down.

comment by timtyler · 2010-12-12T12:03:40.660Z · score: 2 (2 votes) · LW(p) · GW(p)

What I mean is that, in my opinion, most of the risks under discussion are not like that. Large meteorites are a bit like that - but they are not very likely to hit us soon.

comment by Roko · 2010-12-12T12:06:59.485Z · score: -1 (1 votes) · LW(p) · GW(p)

Ok, I see. Well, that's just a big factual disagreement then.

comment by timtyler · 2010-12-12T12:18:01.987Z · score: 1 (1 votes) · LW(p) · GW(p)

The usual Singularity Institute line is that it is worth trying too, I believe. As to what p(success) is, the first thing to do would be to make sure that the parties involved mean the same thing by "success". Otherwise, comparing values would be rather pointless.

comment by Roko · 2010-12-12T12:19:33.887Z · score: -1 (3 votes) · LW(p) · GW(p)

This all reminds me of the Dirac delta function. Its width is infinitesimal but its area is 1. Sure, it's worth trying in the "Dirac delta function" sense.
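To unpack the reference: the Dirac delta is standardly pictured as the limit of ever-narrower spikes whose area stays fixed at 1 - for instance as a limit of Gaussians:

```latex
\delta(x) \;=\; \lim_{\varepsilon \to 0^{+}} \frac{1}{\varepsilon\sqrt{\pi}}\, e^{-x^{2}/\varepsilon^{2}},
\qquad
\int_{-\infty}^{\infty} \delta(x)\,dx \;=\; 1 .
```

The analogy being drawn: the probability of success shrinks toward zero while the payoff-if-successful grows, so the "area" (expected value) can stay non-negligible.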

comment by Roko · 2010-12-12T11:51:25.792Z · score: 0 (0 votes) · LW(p) · GW(p)

Agreed that there are vested interests potentially biasing reasoning.

comment by Airedale · 2010-12-11T17:06:51.062Z · score: 1 (1 votes) · LW(p) · GW(p)

I believe the numbers are actually higher than $200,000. SIAI's 2008 budget was about $500,000. 2006 was about $400,000 and 2007 was about $300,000 (as listed further in the linked thread). I haven't researched to see if gross revenue numbers or revenue from donations are available. Curiously, Guidestar does not seem to have 2009 numbers for SIAI, or at least I couldn't find those numbers; I just e-mailed a couple people at SIAI asking about that.

That being said, even $500,000, while not trivial, seems to me a pretty small budget.

comment by timtyler · 2010-12-11T19:26:26.987Z · score: 0 (0 votes) · LW(p) · GW(p)

Sorry, yes, my bad. $200,000 is what they spent on their own salaries.

comment by NancyLebovitz · 2010-12-11T13:49:39.413Z · score: 1 (3 votes) · LW(p) · GW(p)

I think your disapproval of animal charities is based on circular logic, or at least an unproven premise.

You seem to be saying that animal causes are unworthy recipients of human effort because animals aren't humans. However, people care about animals because of the emotional effects of animals. They care about people because of the emotional effects of people. I don't think it's proven that people only like animals because the animals are super-stimuli.

I could be mistaken, but I think that a more abstract utilitarian approach grounds out in some sort of increased enjoyment of life, or else it's an effort to assume a universe's-eye view of what's ultimately valuable. I'm inclined to trust the former more.

What's your line of argument for supporting charities that help people?

comment by timtyler · 2010-12-11T14:05:04.464Z · score: 1 (1 votes) · LW(p) · GW(p)

I usually value humans much more than I value animals. Given a choice between saving a human or N non-human animals, N would normally have to be very large before I would even think twice about it. Similar values are enshrined in law in most countries.

comment by wedrifid · 2010-12-11T14:12:48.266Z · score: 1 (1 votes) · LW(p) · GW(p)

Similar values are enshrined in law in most countries.

To the extent that the law accurately represents the values of the people it governs, charities are not necessary. Values enshrined in law are by necessity irrelevant.

(Noting by way of pre-emption that I do not require that laws should fully represent the values of the people.)

comment by timtyler · 2010-12-11T14:21:26.441Z · score: 0 (0 votes) · LW(p) · GW(p)

I do not agree. If the law says that killing a human is much worse than killing a dog, that is probably a reflection of the views of citizens on the topic.

comment by wedrifid · 2010-12-11T14:59:06.082Z · score: 0 (0 votes) · LW(p) · GW(p)

If the law says that killing a human is much worse than killing a dog, that is probably a reflection of the views of citizens on the topic.

And yet this is not contrary to my point. Charity operates, and only needs to operate, in areas where the law does not already provide a solution. If there were a law specifying that dying kids get trips to Disneyland and visits by popstars, then there wouldn't be a "Make-A-Wish Foundation".

comment by timtyler · 2010-12-11T15:27:27.863Z · score: 0 (0 votes) · LW(p) · GW(p)

You said the law was "irrelevant" - but there's a sense in which we can see consensus human values about animals by looking at what the law dictates as punishment for their maltreatment. That is what I was talking about. It seems to me that the law has something to say about the issue of the value of animals relative to humans.

For the most part, animals are given relatively few rights under the law. There are exceptions for some rare ones. Animals are routinely massacred in huge numbers by humans - including some smart mammals like pigs and dolphins. That is a broad reflection of how relatively valuable humans are considered to be.

comment by shokwave · 2010-12-11T14:38:58.623Z · score: 0 (0 votes) · LW(p) · GW(p)

If the law says that killing a human is much worse than killing a dog, that is probably a reflection of the views of citizens on the topic.

And once it's enshrined in law, it no longer matters whether citizens think killing a human is worse or better than killing a dog. I think that is what wedrifid was noting.

comment by multifoliaterose · 2010-12-11T23:57:05.357Z · score: 0 (0 votes) · LW(p) · GW(p)

You seem to be saying that animal causes are unworthy recipients of human effort because animals aren't humans. However, people care about animals because of the emotional effects of animals. They care about people because of the emotional effects of people. I don't think it's proven that people only like animals because the animals are super-stimuli.

You may be interested in Alan Dawrst's essays on animal suffering and animal suffering prevention.

comment by steven0461 · 2010-12-10T21:30:34.128Z · score: 6 (8 votes) · LW(p) · GW(p)

I wonder what fraction of actual historical events a hostile observer taking similar liberties could summarize to also sound like some variety of "a fantasy story designed to manipulate".

comment by timtyler · 2010-12-10T21:55:49.945Z · score: 0 (4 votes) · LW(p) · GW(p)

I don't know - but believing inaction is best is rather common - and there are pages all about it - e.g.:

http://en.wikipedia.org/wiki/Learned_helplessness

comment by JoshuaZ · 2010-12-07T21:29:50.135Z · score: 4 (4 votes) · LW(p) · GW(p)

Speaking as someone who is in grad school now, even with prior research, the formal track of grad school is very helpful. I am doing research that I'm interested in. I don't know if I'm a representative sample in that regard. It may be that people have more flexibility in math than in other areas. Certainly my anecdotal impression is that people in some areas, such as biology, don't have this degree of freedom. I'm also learning more about how to research and how to present my results. Those seem to be the largest advantages. Incidentally, my impression is that for grad school, at least in many areas, taking a semester or two off if very stressed isn't treated that badly if one is otherwise doing productive research.

comment by Matt_Simpson · 2010-12-14T00:46:06.142Z · score: 0 (0 votes) · LW(p) · GW(p)

I am doing research that I'm interested in. I don't know if I'm a representative sample in that regard.

I'm in grad school in statistics and am in the same boat. It doesn't seem that difficult to do research on something you're interested in while still in grad school. In a nutshell, choose your major professor wisely. (And make sure the department is large enough that there are plenty of options)

comment by aletheilia · 2010-12-07T19:29:25.919Z · score: 4 (4 votes) · LW(p) · GW(p)

Being in a similar position (also as far as aversion to moving to e.g. the US is concerned), I decided to work part time (roughly 1/5 of the time or even less) in the software industry and spend the remainder of the day studying relevant literature, leveling up etc. for working on the FAI problem. Since I'm not quite out of the university system yet, I'm also trying to build some connections with our AI lab staff and a few other interested people in the academia, but with no intention to actually join their show. It would eat away almost all my time, so I could work on some AI-ish bio-informatics software or something similarly irrelevant FAI-wise.

There are of course some benefits in joining the academia, as you mentioned, but it seems to me that you can reap quite a bit of them by just befriending an assistant professor or two.

comment by Roko · 2010-12-07T16:42:18.505Z · score: 4 (4 votes) · LW(p) · GW(p)

Kaj, why don't you add the option of getting rich in your 20s by working in finance, then paying your way into research groups in your late 30s? The PalmPilot guy, Jeff Hawkins, essentially did this. Except he was an entrepreneur.

comment by Kaj_Sotala · 2010-12-07T16:54:08.979Z · score: 3 (3 votes) · LW(p) · GW(p)

That doesn't sound very easy.

comment by wedrifid · 2010-12-07T16:59:58.754Z · score: 6 (6 votes) · LW(p) · GW(p)

Sounds a heck of a lot easier than doing an equivalent amount of status grabbing within academic circles over the same time.

Money is a lot easier to game and status easier to buy.

comment by David_Gerard · 2010-12-07T17:29:04.236Z · score: 8 (12 votes) · LW(p) · GW(p)

There is the minor detail that it really helps not to hate each and every individual second of your working life in the process. A goal will only pull you along to a certain degree.

(Computer types know all the money is in the City. I did six months of it. I found the people I worked with and the people whose benefit I worked for to be excellent arguments for an unnecessarily bloody socialist revolution.)

comment by wedrifid · 2010-12-07T18:57:43.422Z · score: 2 (2 votes) · LW(p) · GW(p)

A goal will only pull you along to a certain degree.

For many people that is about half way between the Masters and PhD degrees. ;)

If only being in a university was a guarantee of an enjoyable working experience.

comment by Roko · 2010-12-07T18:19:46.736Z · score: 1 (1 votes) · LW(p) · GW(p)

Curious, why did it bother you that you disliked the people you worked with? Couldn't you just be polite to them and take part in their jokes/social games/whatever? They're paying you handsomely to be there, after all.

Or was it a case of them being mean to you?

comment by David_Gerard · 2010-12-07T18:22:06.577Z · score: 2 (2 votes) · LW(p) · GW(p)

No, just loathsome. And the end product of what I did and finding the people I was doing it for loathsome.

comment by Roko · 2010-12-07T18:27:11.626Z · score: 2 (2 votes) · LW(p) · GW(p)

I dunno, "loathsome" sounds a bit theoretical to me. Can you be specific?

comment by CronoDAS · 2010-12-07T18:40:00.549Z · score: 4 (4 votes) · LW(p) · GW(p)

One of my brother's co-workers at Goldman Sachs has actively tried to sabotage his work. (Goldman Sachs runs on a highly competitive "up or out" system; you either get promoted or fired, and most people don't get promoted. If my brother lost his job, his coworker would be more likely to keep his.)

comment by Roko · 2010-12-07T19:36:07.104Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't understand: he tried to sabotage his coworker's work, or his own?

comment by sfb · 2010-12-07T19:38:17.120Z · score: 6 (6 votes) · LW(p) · GW(p)

CronoDAS's Brother's Co-worker tried to sabotage CronoDAS's Brother's work.

comment by TheOtherDave · 2010-12-07T21:11:32.712Z · score: 0 (0 votes) · LW(p) · GW(p)

"Hamlet, in love with the old man's daughter, the old man thinks."

comment by David_Gerard · 2010-12-08T01:03:25.828Z · score: 2 (2 votes) · LW(p) · GW(p)

Not without getting political. Fundamentally, I didn't feel good about what I was doing. And I was just a Unix sysadmin.

This was just a job to live, not a job taken on in the furtherance of a larger goal.

comment by Roko · 2010-12-07T17:23:59.929Z · score: 2 (2 votes) · LW(p) · GW(p)

Agreed. Average Prof is a nobody at 40, average financier is a millionaire. shrugs

comment by Hul-Gil · 2011-07-26T00:32:15.891Z · score: 0 (0 votes) · LW(p) · GW(p)

The average financier is a millionaire at 40?! What job is this, exactly?

comment by sark · 2011-01-25T18:07:29.490Z · score: 1 (1 votes) · LW(p) · GW(p)

Thank you for this. This was a profound revelation for me.

comment by Manfred · 2010-12-07T18:19:25.896Z · score: 4 (6 votes) · LW(p) · GW(p)

Upvoted for comedy.

comment by Roko · 2010-12-07T17:27:44.133Z · score: 3 (3 votes) · LW(p) · GW(p)

Also, you can get a PhD in a relevant mathy discipline first, thereby satisfying the condition of having done research.

And the process of dealing with the real world enough to make money will hopefully leave you with better anti-akrasia tactics, better ability to achieve real-world goals, etc.

You might even be able to hire others.

comment by Roko · 2010-12-07T16:57:32.120Z · score: 1 (3 votes) · LW(p) · GW(p)

I don't think you need to be excessively rich. $1-4M ought to be enough.

Edit: oh, I forgot, you live in Scandinavia, with a taxation system so "progressive" that it has an essential singularity at $100k. Might have to move to US.

comment by Kaj_Sotala · 2010-12-07T17:01:28.601Z · score: 1 (1 votes) · LW(p) · GW(p)

Might have to move to US.

I'm afraid that's not really an option for me, due to various emotional and social issues. I already got horribly homesick during just a four month visit.

comment by Vaniver · 2010-12-07T17:39:32.843Z · score: 4 (6 votes) · LW(p) · GW(p)

Alaska might be a reasonable Finland substitute, weather-wise, but the other issues will be difficult to resolve (if you're moving to the US to make a bunch of money, Alaska is not the best place to do it).

One of my favorite professors was a Brazilian who went to graduate school at the University of Rochester. Horrified (I used to visit my ex in upstate New York, and so was familiar with the horrible winters that take up 8 months of the year without the compensations that convince people to live in Scandinavia), I asked him how he liked the transition - and he said that he loved it, and that it was the best time of his life. I clarified that I was asking about the weather, and he shrugged and said that in academia, you absolutely need to put the ideas first. If the best place for your research is Antarctica, that's where you go.

The reason why I tell this story is that this is what successful professors look like, and only one tenth of the people who go to graduate school end up as professors. If you would be outcompeted by this guy instead of this guy, keep that in mind when deciding whether you want to enter academia. And if you want to do research outside of academia, doing it well requires more effort than research done inside of academia.

comment by Kaj_Sotala · 2010-12-07T18:42:41.672Z · score: 1 (1 votes) · LW(p) · GW(p)

It's not the weather: I'd actually prefer a warmer climate than Finland has. It's living in a foreign culture and losing all of my existing social networks.

I don't have a problem with putting in a lot of work, but to be able to put in a lot of work, my life needs to be generally pleasant otherwise, and the work needs to be at least somewhat meaningful. I've tried the "just grit your teeth and toil" mentality, and it doesn't work - maybe for someone else it does, but not for me.

comment by Vaniver · 2010-12-07T23:45:37.903Z · score: 4 (4 votes) · LW(p) · GW(p)

my life needs to be generally pleasant otherwise, and the work needs to be at least somewhat meaningful. I've tried the "just grit your teeth and toil" mentality, and it doesn't work - maybe for someone else it does, but not for me.

The first part is the part I'm calling into question, not the second. Of course you need to be electrified by your work. It's hard to do great things when you're toiling instead of playing.

But your standards for general pleasantness are, as far as I can tell, the sieve for a lot of research fields. As an example, it is actually harder to be happy on a grad student/postdoc salary; instead of it being shallow to consider that a challenge, it's shallow-mindedness to not recognize that that is a challenge. It is actually harder to find a mate and start a family while an itinerant academic looking for tenure. (Other examples abound; two should be enough for this comment.) If you're having trouble leaving your network of friends to go to grad school / someplace you can get paid more, then it seems likely that you will have trouble with the standard academic life or standard corporate life.

While there are alternatives, those tend not to play well with doing research, since the alternative tends to take the same kind of effort that you would have put into research. I should comment that I think a normal day job plus research on the side can work out but should be treated like writing a novel on the side - essentially, the way creative literary types play the lottery.

comment by diegocaleiro · 2010-12-09T04:30:10.497Z · score: 1 (1 votes) · LW(p) · GW(p)

It's living in a foreign culture and losing all of my existing social networks.

Of course it is! I am in the same situation. Just finished undergrad in philosophy. But here life is completely optimized for happiness:

1) No errands
2) Friends filtered through 15 years for intelligence, fun, beauty, awesomeness.
3) Love, commitment, passion, and just plain sex with the one, and the others.
4) Deep knowledge of the free culture available
5) Ranking high in the city (São Paulo's) social youth hierarchy
6) Cheap services
7) Family and acquaintances network.
8) Freedom, timewise, to write my books
9) Going to the park, 10 min walking
10) Having been to the US, and having friends who were there, and knowing for a fact that life just is worse there....

This is how much fun I have; the list's impact is the only reason I'm considering not going to study, to get FAI faster, to get anti-ageing faster.

If only life were just a little worse... I would be on a plane towards posthumanity right now.

So how good does a life have to be for you to be forgiven for not working on what really matters? Help me, folks!

comment by Roko · 2010-12-07T17:15:11.745Z · score: 1 (3 votes) · LW(p) · GW(p)

Well, you wanna make an omelet, you gotta break some eggs!

comment by Clippy · 2010-12-07T18:56:48.075Z · score: 13 (15 votes) · LW(p) · GW(p)

Conditioning on yourself deeming it optimal to make a metaphorical omelet by breaking metaphorical eggs, metaphorical eggs will deem it less optimal to remain vulnerable to metaphorical breakage by you than if you did not deem it optimal to make a metaphorical omelet by breaking metaphorical eggs; therefore, deeming it optimal to break metaphorical eggs in order to make a metaphorical omelet can increase the difficulty you find in obtaining omelet-level utility.

comment by JGWeissman · 2010-12-07T19:18:44.515Z · score: 4 (4 votes) · LW(p) · GW(p)

Many metaphorical eggs are not [metaphorical egg]::Utility maximizing agents.

comment by Clippy · 2010-12-07T19:28:16.066Z · score: 1 (3 votes) · LW(p) · GW(p)

True, and to the extent that is not the case, the mechanism I specified would not activate.

comment by Strange7 · 2010-12-07T19:17:21.291Z · score: 0 (2 votes) · LW(p) · GW(p)

Redefining one's own utility function so as to make it easier to achieve is the road that leads to wireheading.

comment by Clippy · 2010-12-07T19:25:22.256Z · score: 3 (3 votes) · LW(p) · GW(p)

Correct. However, the method I proposed does not involve redefining one's utility function, as it leaves terminal values unchanged. It simply recognizes that certain methods of achieving one's pre-existing terminal values are better than others, which leaves the utility function unaffected (it only alters instrumental values).

The method I proposed is similar to pre-commitment for a causal decision theorist on a Newcomb-like problem. For such an agent, "locking out" future decisions can improve expected utility without altering terminal values. Likewise, a decision theory that fully absorbs such outcome-improving "lockouts" so that it outputs the same actions without explicit pre-commitment can increase its expected utility for the same utility function.
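Clippy's point here — that committing in advance can raise expected utility while leaving the utility function untouched — is the standard analysis of pre-commitment in Newcomb-like problems. A minimal sketch, where the 0.99 predictor accuracy and the dollar amounts are illustrative assumptions rather than anything from the thread:

```python
# Toy Newcomb's problem: a predictor puts $1M in the opaque box only if it
# predicts the agent will take that box alone. The agent's terminal value
# (money) is unchanged; only the committed policy differs.

ACCURACY = 0.99  # assumed predictor accuracy, for illustration only

def expected_payoff(one_box: bool) -> float:
    if one_box:
        # The opaque box is full exactly when the predictor foresaw one-boxing.
        return ACCURACY * 1_000_000
    # Two-boxing always yields the visible $1,000; the opaque box is
    # full only when the predictor got it wrong.
    return 1_000 + (1 - ACCURACY) * 1_000_000

# Committing to one-boxing "locks out" the later two-box temptation,
# buying a higher expectation under the very same utility function.
assert expected_payoff(True) > expected_payoff(False)
```

Under these assumptions the committed one-boxer expects about $990,000 versus about $11,000 for the two-boxer; nothing about what the agent values has changed, only which action it is locked into.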

comment by Larks · 2010-12-07T18:27:56.358Z · score: 1 (1 votes) · LW(p) · GW(p)

Do you have any advice for getting into quant work? (I'm a second-year maths student at Oxford, don't know much about the City.)

comment by [deleted] · 2010-12-08T08:31:53.076Z · score: 5 (5 votes) · LW(p) · GW(p)

An advice sheet for mathematicians considering becoming quants. It's not a path that interests me, but if it was I think I'd find this useful.

comment by katydee · 2010-12-08T01:55:46.058Z · score: 0 (2 votes) · LW(p) · GW(p)

Are there any good ways of getting rich that don't involve selling your soul?

comment by rhollerith_dot_com · 2010-12-08T03:44:24.151Z · score: 6 (6 votes) · LW(p) · GW(p)

Please rephrase without using "selling your soul".

comment by wedrifid · 2010-12-08T04:07:32.939Z · score: 15 (17 votes) · LW(p) · GW(p)

Are there any good ways of getting rich that don't involve a Faustian exchange with Lucifer himself?

comment by Alicorn · 2010-12-08T04:25:41.728Z · score: 3 (5 votes) · LW(p) · GW(p)

Pfft. No good ways.

comment by katydee · 2010-12-08T07:44:06.618Z · score: 2 (2 votes) · LW(p) · GW(p)

Without corrupting my value system, I suppose? I'm interested in getting money for reasons other than my own benefit. I am not fully confident in my ability to enter a field like finance without either that changing or me getting burned out by those around me.

comment by gwern · 2010-12-08T03:09:55.946Z · score: 3 (5 votes) · LW(p) · GW(p)

As well ask if there are hundred-dollar bills lying on sidewalks.

EDIT: 2 days after I wrote this, I was walking down the main staircase in the library, and lying on the central landing, highly contrasted against the floor, in completely clear view of 4 or 5 people who walked past it, was a dollar bill. I paused for a moment reflecting on the irony that sometimes there are free lunches - and picked it up.

comment by Desrtopa · 2010-12-07T16:38:14.052Z · score: 3 (3 votes) · LW(p) · GW(p)

Depending on what you're planning to research, lack of access to university facilities could also be a major obstacle. If you have a reputation for credible research, you might be able to collaborate with people within the university system, but I suspect that making the original break in would be pretty difficult.

comment by LucasSloan · 2010-12-07T16:27:59.330Z · score: 3 (5 votes) · LW(p) · GW(p)

How hard is it to live off the dole in Finland? Also, non-academic research positions in think tanks and the like (including, of course, SIAI).

comment by Kaj_Sotala · 2010-12-07T16:42:20.306Z · score: 5 (5 votes) · LW(p) · GW(p)

Not very hard in principle, but I gather it tends to be rather stressful, with things like payments not arriving on time happening every now and then. Also, I couldn't avoid the feeling of being a leech, justified or not.

Non-academic think tanks are a possibility, but for Singularity-related matters I can't think of others than the SIAI, and their resources are limited.

comment by [deleted] · 2010-12-07T18:00:59.127Z · score: 3 (7 votes) · LW(p) · GW(p)

Many people would steal food to save lives of the starving, and that's illegal.

Working within the national support system to increase the chance of saving everybody/everything? If you would do the first, you should probably do the second. But you need to weigh the plausibility of the get-rich-and-fund-institute option, including the positive contributions of the others you could potentially hire.

comment by Roko · 2010-12-07T19:39:36.017Z · score: 0 (4 votes) · LW(p) · GW(p)

I wonder how far some people would go for the cause. For Kaj, clearly, leeching off an already wasteful state is too far.

I was once criticized by a senior singinst member for not being prepared to be tortured or raped for the cause. I mean not actually, but, you know, in theory. Precommitting to being prepared to make a sacrifice that big. shrugs

comment by wedrifid · 2010-12-07T20:08:29.445Z · score: 3 (5 votes) · LW(p) · GW(p)

I was once chastised by a senior singinst member for not being prepared to be tortured or raped for the cause.

Forget entirely 'the cause' nonsense. How far would you go just to avoid personally getting killed? How much torture per chance that your personal contribution at the margin will prevent your near-term death?

comment by Eugine_Nier · 2010-12-08T05:56:23.087Z · score: 1 (3 votes) · LW(p) · GW(p)

Could we move this discussion somewhere where we don't have to constantly worry about it getting deleted?

comment by Nick_Tarleton · 2010-12-08T06:55:25.455Z · score: 9 (11 votes) · LW(p) · GW(p)

I'm not aware that LW moderators have ever deleted content merely for being critical of or potentially bad PR for SIAI, and I don't think they're naive enough to believe deletion would help. (Roko's infamous post was considered harmful for other reasons.)

comment by waitingforgodel · 2010-12-08T07:04:07.956Z · score: -1 (17 votes) · LW(p) · GW(p)

"Harmful for other reasons" still has a chilling effect on free speech... and given that those reasons were vague but had something to do with torture, it's not unreasonable to worry about deletion of replies to the above question.

comment by Bongo · 2010-12-08T14:43:38.759Z · score: 3 (3 votes) · LW(p) · GW(p)

The reasons weren't vague.

Of course this is just your assertion against mine since we're not going to actually discuss the reasons here.

comment by wedrifid · 2010-12-08T06:15:13.549Z · score: 1 (1 votes) · LW(p) · GW(p)

There doesn't seem to be anything censor-relevant in my question, and for my part I tend to let big brother worry about his own paranoia and just go about my business. In any case, while the question is an interesting one to me, it doesn't seem important enough to create a discussion somewhere else. At least not until I make a post. Putting aside presumptions of extreme altruism, just how much contribution to FAI development is rational? To what extent does said rational contribution rely on Newcomblike reasoning? How much would a CDT agent contribute on the expectation that his personal contribution will make the difference and save his life?

On second thoughts maybe the discussion does seem to interest me sufficiently. If you are particularly interested in answering me feel free to copy and paste my questions elsewhere and leave a back-link. ;)

comment by waitingforgodel · 2010-12-08T06:40:49.935Z · score: -3 (15 votes) · LW(p) · GW(p)

I think you/we're fine -- just alternate between two tabs when replying, and paste it to the rationalwiki if it gets deleted.

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

Besides, it's looking like after the Roko thing they've decided to cut back on such silliness.

comment by Vladimir_Nesov · 2010-12-08T11:22:39.037Z · score: 10 (18 votes) · LW(p) · GW(p)

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

You are compartmentalizing. What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech. That the decision conflicts with freedom of speech doesn't necessarily mean that it's incorrect, and if the correct decision conflicts with freedom of speech, or has you kill a thousand children (estimation of its correctness must of course take this consequence into account), it's still correct and should be taken.

(There is only one proper criterion to anyone's actions, goodness of consequences, and if any normally useful heuristic stays in the way, it has to be put down, not because one is opposed to that heuristic, but because in a given situation, it doesn't yield the correct decision.)

(This is a note about a problem in your argument, not an argument for correctness of EY's decision. My argument for correctness of EY's decision is here and here.)

comment by wedrifid · 2010-12-08T11:52:53.103Z · score: 4 (6 votes) · LW(p) · GW(p)

You are compartmentalizing.

This is possible but by no means assured. It is also possible that he simply didn't choose to write a full evaluation of consequences in this particular comment.

comment by Vladimir_Golovin · 2010-12-08T12:01:04.365Z · score: 2 (2 votes) · LW(p) · GW(p)

What you should be asking yourself is whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.

Upvoted. This just helped me get unstuck on a problem I've been procrastinating on.

comment by xamdam · 2010-12-08T20:37:17.251Z · score: 1 (1 votes) · LW(p) · GW(p)

whether the decision is correct (has better expected consequences than the available alternatives), not whether it conflicts with freedom of speech.

Sounds like a good argument for the WikiLeaks dilemma (which is of course confused by the possibility that the government is lying their asses off about potential harm).

comment by Vladimir_Nesov · 2010-12-08T20:43:34.602Z · score: 1 (1 votes) · LW(p) · GW(p)

The question with WikiLeaks is about long-term consequences. As I understand it, the (sane) arguments in favor can be summarized as stating that expected long-term good outweighs expected short-term harm. It's difficult (for me) to estimate whether it's so.

comment by xamdam · 2010-12-09T16:05:40.835Z · score: 0 (0 votes) · LW(p) · GW(p)

I suspect it's also difficult for Julian (or pretty much anybody) to estimate these things; I guess intelligent people will just have to make best guesses about this type of stuff. In this specific case a rationalist would be very cautious of "having an agenda", as there is significant opportunity to do harm either way.

comment by waitingforgodel · 2010-12-08T11:56:28.691Z · score: 1 (5 votes) · LW(p) · GW(p)

(There is only one proper criterion to anyone's actions, goodness of consequences, and if any normally useful heuristic stays in the way, it has to be put down, not because one is opposed to that heuristic, but because in a given situation, it doesn't yield the correct decision.)

Very much agree btw

comment by red75 · 2010-12-08T15:21:12.617Z · score: -1 (3 votes) · LW(p) · GW(p)

Shouldn't AI researchers precommit to not building AI capable of this kind of acausal self-creation? This would lower the chances of disaster both causally and acausally.

And please, define how you tell moral heuristics and moral values apart. E.g. which is "don't change moral values of humans by wireheading"?

comment by waitingforgodel · 2010-12-08T11:53:55.963Z · score: -5 (19 votes) · LW(p) · GW(p)

We're basically talking about a logical illusion... an AI Ontological Argument... with all the flaws of an ontological argument (such as bearing no proof)... that was foolishly censored, leading to a lot of bad press, hurt feelings, lost donations, and a general increase in existential risk.

From, as you call it, a purely correctness optimizing perspective, it's long term bad having silly, irrational stuff like this associated with LW. I think that EY should apologize, and we should get an explicit moderation policy for LW, but in the meantime I'll just undo any existential risk savings hoped to be gained from censorship.

In other words, this is less about Free Speech, as it is about Dumb Censors :p

comment by Vladimir_Nesov · 2010-12-08T12:15:28.222Z · score: 2 (4 votes) · LW(p) · GW(p)

It's long term bad having silly, irrational stuff like this associated with LW.

Whether it's irrational is one of the questions we are discussing in this thread, so it's bad conduct to use your answer as an element of an argument. I of course agree that it appears silly and irrational and absurd, and that associating that with LW and SIAI is in itself a bad idea, but I don't believe it's actually irrational, and I don't believe you've seriously considered that question.

comment by Vladimir_Nesov · 2010-12-08T12:28:27.859Z · score: -1 (7 votes) · LW(p) · GW(p)

We're basically talking about a logical illusion... an AI Ontological Argument... with all the flaws of an ontological argument (such as bearing no proof)...

In other words, you don't understand the argument, and are not moved by it, and so your estimation of the improbability of the outrageous prediction stays the same. The only proper way to argue past this point is to discuss the subject matter; all else would be sophistry that equally applies to predictions of astrology.

comment by Vladimir_Nesov · 2010-12-08T12:48:25.177Z · score: 5 (7 votes) · LW(p) · GW(p)

Don't let EY chill your free speech -- this is supposed to be a community blog devoted to rationality... not a SIAI blog where comments are deleted whenever convenient.

Following is another analysis.

Consider a die that was tossed 20 times, and each time it fell even side up. It's not surprising merely because it's a low-probability event: you wouldn't be surprised if you observed most other combinations that are equally improbable under the hypothesis that the die is fair. You are surprised because the pattern you see suggests that there is an explanation for your observations that you've missed. You notice your own confusion.
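The arithmetic behind the die example, as a quick sketch (the "always lands even" alternative hypothesis is an illustrative assumption):

```python
from fractions import Fraction

# Probability of a fair die landing even 20 times in a row.
p_fair = Fraction(1, 2) ** 20          # 1/1048576, about one in a million

# Any other *specific* sequence of 20 even/odd outcomes is equally
# improbable, so low probability alone is not what makes the run surprising.
p_any_specific_sequence = Fraction(1, 2) ** 20
assert p_fair == p_any_specific_sequence

# What makes it surprising: a rival hypothesis ("the die is loaded toward
# even") assigns the observation a much higher probability.
p_loaded = Fraction(1, 1)              # loaded die: evens with certainty
likelihood_ratio = p_loaded / p_fair   # how much better the rival predicts
print(likelihood_ratio)                # 1048576
```

Every specific 20-toss parity sequence has the same 1-in-1,048,576 probability under the fair-die hypothesis; what singles out the all-even run is that a rival hypothesis predicts it about a million times better.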

In this case, you look at the event of censoring a post (topic), and you're surprised; you don't understand why that happened. And then your brain pattern-matches all sorts of hypotheses that are not just improbable, but probably meaningless cached phrases, like "It's convenient", or "To oppose freedom of speech", or "To manifest dictatorial power".

Instead of leaving the choice of a hypothesis to the stupid intuitive processes, you should notice your own confusion, and recognize that you don't know the answer. Acknowledging that you don't know the answer is better than suggesting an obviously incorrect theory, if much more probability is concentrated outside that theory, where you can't suggest a hypothesis.

comment by waitingforgodel · 2010-12-08T12:56:57.497Z · score: 2 (12 votes) · LW(p) · GW(p)

Since we're playing the condescension game, following is another analysis:

You read a (well written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to, you focused on your first impression and held to it.

comment by Vladimir_Nesov · 2010-12-08T13:21:24.631Z · score: 2 (6 votes) · LW(p) · GW(p)

Since we're playing the condescension game

I'm not. Seriously. "Whenever convenient" is a very weak theory, and thus using it is a more serious flaw, but I missed that on first reading and addressed a different problem.

You read a (well written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to, you focused on your first impression and held to it.

Please unpack the references. I don't understand.

comment by waitingforgodel · 2010-12-08T13:42:44.118Z · score: 2 (8 votes) · LW(p) · GW(p)

Sorry, it looks like we're suffering from a bit of cultural crosstalk. Slogans, much like ontological arguments, are designed to make something of an illusion in the mind -- a lever to change the way you look at the world. "Whenever convenient" isn't there as a statement of belief, so much as a prod to get you thinking...

"How much do I trust that EY knows what he's doing?"

You may as well argue with Nike: "Well, I can hardly do everything..." (re: Just Do It)

That said I am a rationalist... I just don't see any harm in communicating to the best of my ability.

I linked you to this thread, where I did display some biases, but also decent evidence for not having the ones you're describing... which I take to be roughly what you'd expect of a smart person off the street.

comment by Vladimir_Nesov · 2010-12-08T14:01:45.980Z · score: 2 (2 votes) · LW(p) · GW(p)

I can't place this argument at all in relation to the thread above it. Looks like a collection of unrelated notes to me. Honest. (I'm open to any restatement; don't see what to add to the notes themselves as I understand them.)

comment by waitingforgodel · 2010-12-08T14:12:21.050Z · score: 3 (7 votes) · LW(p) · GW(p)

The whole post you're replying to comes from your request to "Please unpack the references".

Here's the bit with references, for easy reference:

You read a (well written) slogan, and assumed that the writer must be irrational. You didn't read the thread he linked you to, you focused on your first impression and held to it.

The first part of the post you're replying to ("Sorry, it looks... best of my ability") maps to "You read a... irrational" in the quote above, and tries to explain the problem as I understand it: that you were responding to a slogan's words, not its meaning. It explained its meaning, explained how "Whenever convenient" was a pointer to the "Do I trust EY?" thought, and gave a backup example via the Nike slogan.

The last paragraph in the post you're replying to tried to unpack the "you focused... held to it" from the above quote

comment by Vladimir_Nesov · 2010-12-08T15:03:56.330Z · score: 0 (2 votes) · LW(p) · GW(p)

I see. So the "writer" in the quote is you. I didn't address your statement per se, more a general disposition of the people who state ridiculous things as explanation for the banning incident, but your comment did make the same impression on me. If you correctly disagree that it applies to your intended meaning, good, you didn't make that error, and I don't understand what did cause you to make that statement, but I'm not convinced by your explanation so far. You'd need to unpack "Distrusting EY" to make it clear that it doesn't fall in the same category of ridiculous hypotheses.

comment by shokwave · 2010-12-08T15:15:43.874Z · score: 1 (1 votes) · LW(p) · GW(p)

The Nike slogan is "Just Do It", if it helps.

comment by Vladimir_Nesov · 2010-12-08T15:38:28.016Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks. It doesn't change the argument, but I'll still delete that obnoxious paragraph.

comment by Eugine_Nier · 2010-12-08T07:29:20.308Z · score: 1 (1 votes) · LW(p) · GW(p)

Besides, it's looking like after the Roko thing they've decided to cut back on such silliness.

I believe EY takes this issue very seriously.

comment by waitingforgodel · 2010-12-08T07:35:24.625Z · score: 1 (5 votes) · LW(p) · GW(p)

Ahh. Are you aware of any other deletions?

comment by XiXiDu · 2010-12-08T20:38:41.059Z · score: 3 (5 votes) · LW(p) · GW(p)

Are you aware of any other deletions?

Here...

I'd like to ask you the following. How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it? Just imagine a detailed description of how to code an AGI or create bio weapons. Would you stay away from censoring such information in favor of free speech?

The subject matter here has a somewhat different nature that rather fits a more-people, more-probable pattern. The question is whether it is better to discuss it so as to possibly resolve it, or to censor it and thereby impede it. The problem is that this very question cannot be discussed without deciding not to censor it. That doesn't mean that people cannot work on it, just that only a few people can, in private. It is very likely that those people who already know about it are the most likely to solve the issue anyway. The general public would probably only add noise and make it much more likely to happen simply by knowing about it.

comment by TheOtherDave · 2010-12-08T21:43:52.951Z · score: 4 (4 votes) · LW(p) · GW(p)

How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it?

Step 1. Write down the clearest non-dangerous articulation of the boundaries of the dangerous idea that I could.

If necessary, make this two articulations: one that is easy to understand (in the sense of answering "is what I'm about to say a problem?") even if it's way overinclusive, and one that is not too overinclusive even if it requires effort to understand. Think of this as a cheap test with lots of false positives, and a more expensive follow-up test.

Add to this the most compelling explanation I can come up with of why violating those boundaries is dangerous that doesn't itself violate those boundaries.

Step 2. Create a secondary forum, not public-access (e.g., a dangerous-idea mailing list), for the discussion of the dangerous idea. Add all the people I think belong there. If that's more than just me, run my boundary articulation(s) past the group and edit as appropriate.

Step 3. Create a mechanism whereby people can request to be added to dangerous-idea. (e.g., sending dangerous-idea-request).

Step 4. Publish the boundary articulations, a request that people avoid any posts or comments that violate those boundaries, an overview of what steps are being taken (if any) by those in the know, and a pointer to dangerous-idea-request for anyone who feels they really ought to be included in discussion of it (with no promise of actually adding them).

Step 5. In forums where I have editorial control, censor contributions that violate those boundaries, with a pointer to the published bit in step 4.

==

That said, if it genuinely is the sort of thing where a suppression strategy can work, I would also breathe a huge sigh of relief for having dodged a bullet, because in most cases it just doesn't.

comment by David_Gerard · 2010-12-09T15:31:25.852Z · score: 4 (4 votes) · LW(p) · GW(p)

A real-life example that people might accept the danger of would be the 2008 DNS flaw discovered by Dan Kaminsky - he discovered something really scary for the Internet and promptly assembled a DNS Cabal to handle it.

And, of course, it leaked before a fix was in place. But the delay did, they think, mitigate damage.

Note that the solution had to be in place very quickly indeed, because Kaminsky assumed that if he could find it, others could. Always assume you aren't the only person in the whole world smart enough to find the flaw.

comment by Eugine_Nier · 2010-12-08T07:52:30.718Z · score: 2 (2 votes) · LW(p) · GW(p)

Yes, several times other posters have brought up the subject and had their comments deleted.

comment by Bongo · 2010-12-10T17:34:35.317Z · score: 0 (0 votes) · LW(p) · GW(p)

I hadn't seen a lot of stubs of deleted comments around before the recent episode, but you say people's comments had gotten deleted several times.

So, have you seen comments being deleted in a special way that doesn't leave a stub?

comment by Eugine_Nier · 2010-12-10T19:24:07.914Z · score: 2 (2 votes) · LW(p) · GW(p)

Comments only leave a stub if they have replies that aren't deleted.

comment by waitingforgodel · 2010-12-08T08:05:48.877Z · score: -5 (13 votes) · LW(p) · GW(p)

Interesting. Do you have links? I rather publicly vowed to undo any assumed existential risk savings EY thought were to be had via censorship.

That one stayed up, and although I haven't been the most vigilant in checking for deletions, I had (perhaps naively) assumed they stopped after that :-/

comment by Roko · 2010-12-07T20:09:53.520Z · score: 1 (1 votes) · LW(p) · GW(p)

Hard to say. Probably a lot if I could precommit to it in advance, so that once it had begun I couldn't change my mind.

There are many complicating factors, though.

comment by waitingforgodel · 2010-12-08T06:56:45.539Z · score: -1 (11 votes) · LW(p) · GW(p)

Am I the only one who can honestly say that it would depend on the day?

There's a TED talk I once watched about how Republicans reason on five moral channels and Democrats only reason on two.

They were (roughly):

  1. harm/care
  2. fairness/reciprocity
  3. in-group/out-group
  4. authority
  5. purity/sanctity

According to the talk, Democrats reason with primarily the first two and Republicans with all of them.

I took this to mean that Republicans were allowed to do moral calculus that Democrats could not... for instance, if I can only reason with the first two, then punching a baby is always wrong (it causes harm, and isn't fair)... If, on the other hand, I'm allowed to reason with all five, it might be okay to punch a baby because my Leader said to do it, or because the baby isn't from my home town, or because my religion says to.

Republicans therefore have it much easier in rationalizing self-serving motives.

(As an aside, it's interesting to note that Democrats must have started with more than just the two when they were young. "Mommy said not to" is a very good reason to do something when you're young. It seems that they must have grown out of it).

After watching the TED talk, I was reflecting on how it seems that smart people (myself sadly included) let relatively minor moral problems stop them from doing great things... and on how if I were just a little more Republican (in the five channel moral reasoning sense) I might be able to be significantly more successful.

The result is a WFG that cycles in and out of 2-channel/5-channel reasoning.

On my 2-channel days, I'd have a very hard time hurting another person to save myself. If I saw them, and could feel that human connection, I doubt I could do much more than I myself would be willing to endure to save another's life (perhaps two hours assuming hand-over-a-candle level of pain -- permanent disfigurement would be harder to justify, but if it was relatively minor).

On my 5-channel days, I'm (surprisingly) not so embarrassed to say I'd probably go arbitrarily high... after all, what's their life compared to mine?

Probably a bit more than you were looking to hear.

What's your answer?

comment by Eugine_Nier · 2010-12-08T07:25:45.179Z · score: 1 (3 votes) · LW(p) · GW(p)

I took this to mean that Republicans were allowed to do moral calculus that Democrats could not... for instance, if I can only reason with the first two, then punching a baby is always wrong (it causes harm, and isn't fair)... If, on the other hand, I'm allowed to reason with all five, it might be okay to punch a baby because my Leader said to do it, or because the baby isn't from my home town, or because my religion says to.

First let me say that as a Republican/libertarian I don't entirely agree with Haidt's analysis.

In any case, the above is not quite how I understand Haidt's analysis. My understanding is that Democrats have no way to categorically say that punching (or even killing) a baby is wrong. While they can say it's wrong because, as you said, it causes harm and isn't fair, they can always override that judgement by coming up with a reason why not punching and/or killing the baby would also cause harm. (See the philosophy of Peter Singer for an example.)

Republicans on the other hand can invoke sanctity of life.

comment by waitingforgodel · 2010-12-08T07:32:29.334Z · score: 2 (8 votes) · LW(p) · GW(p)

Sure, agreed. The way I presented it only showed very simplistic reasoning.

Let's just say that, if you imagine a Democrat that desperately wants to do x but can't justify it morally (punch a baby, start a somewhat shady business, not return a lost wallet full of cash), one way to resolve this conflict is to add Republican channels to his reasoning.

It doesn't always work (sanctity of life, etc), but I think for a large number of situations where we Democrats-at-heart get cold feet it works like a champ :)

comment by Eugine_Nier · 2010-12-08T07:49:26.633Z · score: 0 (4 votes) · LW(p) · GW(p)

It doesn't always work (sanctity of life, etc), but I think for a large number of situations where we Democrats-at-heart get cold feet it works like a champ :)

So I've noticed. See the discussion following this comment for an example.

On the other hand, other times Democrats take positions that Republicans find horrific, e.g., euthanasia, abortion, Peter Singer's position on infanticide.

comment by David_Gerard · 2010-12-08T08:27:51.344Z · score: 6 (6 votes) · LW(p) · GW(p)

Peter Singer's media-touted "position on infanticide" is an excellent example of why even philosophers might shy away from talking about hypotheticals in public. You appear to have just become Desrtopa's nightmare.

comment by Eugine_Nier · 2010-12-08T08:38:35.345Z · score: 1 (1 votes) · LW(p) · GW(p)

My problem with Singer is that his "hypotheticals" don't appear all that hypothetical.

comment by Eugine_Nier · 2010-12-08T08:31:18.102Z · score: 1 (1 votes) · LW(p) · GW(p)

You appear to have just become Desrtopa's nightmare.

What specifically are you referring to? (I haven't been following Desrtopa's posts.)

comment by David_Gerard · 2010-12-08T08:41:17.394Z · score: 3 (3 votes) · LW(p) · GW(p)

It's evident you really need to read the post. He can't get people to answer hypotheticals in almost any circumstances and thought this was a defect in the people. Approximately everyone responded pointing out that in the real world, the main use of hypotheticals is to use them against people politically. This would be precisely what happened with the factoid about Singer.

comment by waitingforgodel · 2010-12-08T10:48:36.727Z · score: 2 (6 votes) · LW(p) · GW(p)

Thanks for the link -- very interesting reading :)

comment by wedrifid · 2010-12-07T19:53:55.240Z · score: 0 (0 votes) · LW(p) · GW(p)

I was once chastised by a senior SingInst member for not being prepared to be tortured or raped for the cause.

Here I was thinking it was, well, nearly the opposite of that! :)

comment by Eugine_Nier · 2010-12-08T05:54:11.566Z · score: 2 (2 votes) · LW(p) · GW(p)

How hard is it to live off the dole in Finland?

Given the current economic situation in Europe, I'm not sure that's a good long term strategy.

Also, I suspect spending too long on the dole may cause you to develop habits that'll make it harder to work a paying job.

comment by Perplexed · 2010-12-12T01:28:25.199Z · score: 2 (4 votes) · LW(p) · GW(p)

What (dis)advantages does this have compared to the traditional model?

I think this thread perfectly illustrates one disadvantage of doing research in an unstructured environment. It is so easy to become distracted from the original question by irrelevant, but bright and shiny distractions. Having a good academic adviser cracking the whip helps to keep you on track.

855 comments so far, with no sign of slowing down!

comment by Vaniver · 2010-12-11T17:12:29.173Z · score: 2 (2 votes) · LW(p) · GW(p)

For those curious: we do agree, but he went to quite a bit more effort in showing that than I did (and is similarly more convincing).

comment by JGWeissman · 2010-12-09T23:08:53.325Z · score: 2 (4 votes) · LW(p) · GW(p)

The above deleted comment referenced some details of the banned post. With those details removed, it said:

(Note, this comment reacts to this thread generally, and other discussion of the banning)

The essential problem is that with the (spectacular) deletion of the Forbidden Post, LessWrong turned into the sort of place where posts get disappeared.

I realize that you are describing how people generally react to this sort of thing, but this knee-jerk reaction is one of the misapplied heuristics we ought to be able to notice and overcome.

So far, one post has been forbidden (not counting spam).

It was not forbidden because it criticized SIAI, other posts have criticized SIAI and were not banned.

It was not forbidden because it discussed torture, other posts have discussed torture and were not banned.

It was not forbidden for being inflammatory, other posts have been inflammatory and were not banned.

It was forbidden for being a Langford Basilisk.

comment by David_Gerard · 2010-12-09T23:21:17.835Z · score: 2 (2 votes) · LW(p) · GW(p)

Strange LessWrong software fact: this showed up in my reply stream as a comment consisting only of a dot ("."), though it appears to be a reply to a reply to me.

comment by JGWeissman · 2010-12-09T23:31:16.422Z · score: 0 (0 votes) · LW(p) · GW(p)

It also shows up on my user page as a dot. Before I edited it to be just a dot, it showed up in your comment stream and my user page with the original complete content.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T20:22:03.580Z · score: 2 (4 votes) · LW(p) · GW(p)

No, the rationale for deletion was not based on the possibility that his exact, FAI-based scenario could actually happen.

comment by wedrifid · 2010-12-09T20:31:28.288Z · score: 3 (3 votes) · LW(p) · GW(p)

No, the rationale for deletion was not based on the possibility that his exact, FAI-based scenario could actually happen.

What was the grandparent?

comment by ata · 2010-12-09T20:38:16.835Z · score: 4 (4 votes) · LW(p) · GW(p)

Hm? Did my comment get deleted? I still see it.

comment by komponisto · 2010-12-09T21:10:29.995Z · score: 2 (4 votes) · LW(p) · GW(p)

I noticed you removed the content of the comment from the record on your user page. I would have preferred you not do this; those who are sufficiently curious and know about the trick of viewing the user page ought to have this option.

comment by ata · 2010-12-09T21:15:13.899Z · score: 1 (5 votes) · LW(p) · GW(p)

There was nothing particularly important or interesting in it, just a question I had been mildly curious about. I didn't think there was anything dangerous about it either, but, as I said elsewhere, I'm willing to take Eliezer's word for it if he thinks it is, so I blanked it. Let it go.

comment by komponisto · 2010-12-09T21:33:58.174Z · score: 7 (11 votes) · LW(p) · GW(p)

I'm willing to take Eliezer's word for it if he thinks it is, so I blanked it

I know why you did it. My intention is to register disagreement with your decision. I claim it would have sufficed to simply let Eliezer delete the comment, without you yourself taking additional action to further delete it, as it were.

Let it go.

I could do without this condescending phrase, which unnecessarily (and unjustifiably) imputes agitation to me.

comment by ata · 2010-12-10T23:11:17.497Z · score: 2 (2 votes) · LW(p) · GW(p)

I could do without this condescending phrase, which unnecessarily (and unjustifiably) imputes agitation to me.

Sorry, you're right. I didn't mean to imply condescension or agitation toward you; it was written in a state of mild frustration, but definitely not at or about your post in particular.

comment by Vladimir_Nesov · 2010-12-09T21:14:37.700Z · score: 1 (1 votes) · LW(p) · GW(p)

Only if you disagree with the correctness of the moderator's decision.

comment by komponisto · 2010-12-09T21:17:41.637Z · score: 1 (1 votes) · LW(p) · GW(p)

Disagreement may be only partial. One could agree to the extent of thinking that viewing of the comment ought to be restricted to a more narrowly-filtered subset of readers.

comment by Vladimir_Nesov · 2010-12-09T21:20:02.481Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, this is a possible option, depending on the scope of the moderator's decision. Banning comments from a discussion, even if they are backed up and publicly available elsewhere, is still an effective tool in shaping the conversation.

comment by wedrifid · 2010-12-09T20:46:05.437Z · score: 1 (3 votes) · LW(p) · GW(p)

Weird. I see:

09 December 2010 08:08:04PM* Comment deleted [-]

How does Eliezer's delete option work exactly? It stays visible to the author? Now I'm curious.

comment by ata · 2010-12-09T20:47:46.773Z · score: 0 (4 votes) · LW(p) · GW(p)

Yes, I've been told that it was deleted but that I still see it since I'm logged in.

In that case I won't repeat what I said in it, partly because it'll just be deleted again but mainly because I actually do trust Eliezer's judgment on this. (I didn't realize that I was saying more than I was supposed to.) All I'll say about it is that it did not actually contain the question that Eliezer's reply suggests he thought it was asking, but it's really not important enough to belabor the point.

comment by waitingforgodel · 2010-12-09T20:44:38.009Z · score: 1 (9 votes) · LW(p) · GW(p)

yep, this one is showing as deleted

comment by ata · 2010-12-09T20:36:21.930Z · score: 1 (1 votes) · LW(p) · GW(p)

What? Did something here get deleted?

comment by XiXiDu · 2010-12-09T11:26:37.903Z · score: 2 (2 votes) · LW(p) · GW(p)

Yeah, I thought about that as well. Trying to suppress it made it much more popular and gave it a lot of credibility. If they decided to act in such a way deliberately, that would be fascinating. But that sounds like one crazy conspiracy theory to me.

comment by David_Gerard · 2010-12-09T11:33:36.182Z · score: 7 (9 votes) · LW(p) · GW(p)

I don't think it gave it a lot of credibility. Everyone I can think of who isn't an AI researcher or LW regular who's read it has immediately thought "that's ridiculous. You're seriously concerned about this as a likely consequence? Have you even heard of the Old Testament, or Harlan Ellison? Do you think your AI will avoid reading either?" Note, not the idea itself, but that SIAI took the idea so seriously it suppressed it and keeps trying to. This does not make SIAI look more credible, but less because it looks strange.

These are the people running a site about refining the art of rationality; that makes discussion of this apparent spectacular multi-level failure directly on-topic. It's also become a defining moment in the history of LessWrong and will be in every history of the site forever. Perhaps there's some Xanatos retcon by which this can be made to work.

comment by XiXiDu · 2010-12-09T11:44:30.414Z · score: 3 (3 votes) · LW(p) · GW(p)

I just have a hard time believing that they could be so wrong, people who write essays like this. That's why I allow for the possibility that they are right and that I simply do not understand the issue. Can you rule out that possibility? And if that was the case, what would it mean to spread it even further? You see, that's my problem.

comment by David_Gerard · 2010-12-09T11:49:17.020Z · score: 6 (8 votes) · LW(p) · GW(p)

Indeed. On the other hand, humans frequently use intelligence to do much stupider things than they could have done without that degree of intelligence. Previous brilliance means that future strange ideas should be taken seriously, but not that the future ideas must be even more brilliant because they look so stupid. Ray Kurzweil is an excellent example - an undoubted genius of real achievements, but also now undoubtedly completely off the rails and well into pseudoscience. (Alkaline water!)

comment by timtyler · 2010-12-09T19:00:29.480Z · score: 1 (1 votes) · LW(p) · GW(p)

Ray on alkaline water:

http://glowing-health.com/alkaline-water/ray-kurzweil-alkaine-water.html

comment by David_Gerard · 2010-12-09T23:15:25.751Z · score: 1 (1 votes) · LW(p) · GW(p)

See, RationalWiki is a silly wiki full of rude people. But one thing we know a lot about, is woo. That reads like a parody of woo.

comment by [deleted] · 2010-12-09T19:08:11.859Z · score: 1 (1 votes) · LW(p) · GW(p)

Scary.

comment by shokwave · 2010-12-09T12:09:33.049Z · score: -2 (4 votes) · LW(p) · GW(p)

I don't think that's credible. Eliezer has focused much of his intelligence on avoiding "brilliant stupidity", orders of magnitude more so than any Kurzweil-esque example.

comment by David_Gerard · 2010-12-09T16:19:04.943Z · score: 3 (3 votes) · LW(p) · GW(p)

So the thing to do in this situation is to ask them: "excuse me wtf are you doin?" And this has been done.

So far there's been no explanation, nor even acknowledgement of how profoundly stupid this looks. This does nothing to make them look smarter.

Of course, as I noted, a truly amazing Xanatos retcon is indeed not impossible.

comment by TheOtherDave · 2010-12-09T14:47:28.234Z · score: 3 (3 votes) · LW(p) · GW(p)

There is no problem.

If you observe an action (A) that you judge so absurd that it casts doubt on the agent's (G) rationality, then your confidence (C1) in G's rationality should decrease. If C1 was previously high, then your confidence (C2) in your judgment of A's absurdity should decrease.

So if someone you strongly trust to be rational does something you strongly suspect to be absurd, the end result ought to be that your trust and your suspicions are both weakened. Then you can ask yourself whether, after that modification, you still trust G's rationality enough to believe that there exist good reasons for A.
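That two-way update can be sketched with made-up numbers. Here R = "agent G is rational" and J = "apparently-absurd action A is justified"; every prior and likelihood below is an illustrative assumption, not anything from the comment:

```python
# Priors: strong trust in G (0.9), strong suspicion that A is absurd (0.9),
# assumed independent.
priors = {
    (True, True):   0.9 * 0.1,
    (True, False):  0.9 * 0.9,
    (False, True):  0.1 * 0.1,
    (False, False): 0.1 * 0.9,
}
# Likelihood of actually observing G perform A: a rational agent almost
# never takes a genuinely unjustified action, while an irrational agent
# might do anything.  Illustrative numbers only.
likelihood = {
    (True, True):   0.5,
    (True, False):  0.01,
    (False, True):  0.5,
    (False, False): 0.5,
}
joint = {h: priors[h] * likelihood[h] for h in priors}
z = sum(joint.values())
posterior_rational = sum(p for (r, _), p in joint.items() if r) / z
posterior_justified = sum(p for (_, j), p in joint.items() if j) / z
print(round(posterior_rational, 3))   # 0.515: trust in G weakened from 0.90
print(round(posterior_justified, 3))  # 0.485: suspicion of A weakened from 0.90
```

Both the trust and the suspicion come out weakened, exactly as described: the observation is evidence against each of the two strongly-held beliefs at once.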

The only reason it feels like a problem is that human brains aren't good at this. It sometimes helps to write it all down on paper, but mostly it's just something to practice until it gets easier.

In the meantime, what I would recommend is giving some careful thought to why you trust G, and why you think A is absurd, independent of each other. That is: what's your evidence? Are C1 and C2 at all calibrated to observed events?

If you conclude at the end of it that one or the other is unjustified, your problem dissolves and you know which way to jump. No problem.

If you conclude that they are both justified, then your best bet is probably to assume the existence of either evidence or arguments that you're unaware of (more or less as you're doing now)... not because "you can't rule out the possibility" but because it seems more likely than the alternatives. Again, no problem.

And the fact that other people don't end up in the same place simply reflects the fact that their prior confidence was different, presumably because their experiences were different and they don't have perfect trust in everyone's perfect Bayesianness. Again, no problem... you simply disagree.

Working out where you stand can be a useful exercise. In my own experience, I find it significantly diminishes my impulse to argue the point past where anything new is being said, which generally makes me happier.

This comment is also relevant.

comment by Vaniver · 2010-12-09T16:43:35.562Z · score: 3 (3 votes) · LW(p) · GW(p)

Another thing: rationality is best expressed as a percentage, not a binary. I might look at the virtues and say "wow, I bet this guy only makes mistakes 10% of the time! That's fantastic!"- but then when I see something that looks like a mistake, I'm not afraid to call it that. I just expect to see fewer of them.

comment by timtyler · 2010-12-09T18:55:59.919Z · score: 0 (6 votes) · LW(p) · GW(p)

What issue? The forbidden one? You are not even supposed to be thinking about that! For penance, go and say 30 "Hail Yudkowskys"!

comment by shokwave · 2010-12-09T11:57:39.438Z · score: 1 (1 votes) · LW(p) · GW(p)

Everyone I can think of who isn't an AI researcher or LW regular who's read it has immediately thought "that's ridiculous. You're seriously concerned about this as a likely consequence?"

You could make a similar comment about cryonics. "Everyone I can think of who isn't a cryonics project member or LW regular who's read [hypothetical cryonics proposal] has immediately thought "that's ridiculous. You're seriously considering this possibility?". "People think it's ridiculous" is not always a good argument against it.

Consider that whoever made the decision probably made it according to consequentialist ethics; the consequences of people taking the idea seriously would be worse than the consequences of censorship. As many consequentialist decisions tend to, it failed to take into account the full consequences of breaking with deontological ethics ("no censorship" is a pretty strong injunction). But LessWrong is maybe the one place on the internet you could expect not to suffer for breaking from deontological ethics.

This does not make SIAI look more credible, but less because it looks strange.

Again, strange from a deontologist's perspective. If you're a deontologist, okay, your objection to the practice has been noted.

The perfect Bayesian consequentialist, however, would look at the decision, estimate the chances of the decision-maker being irrational (their credibility), and promptly revise their probability estimate of 'bad idea is actually dangerous' upwards, enough to approve of censorship. Nothing strange there. You appear to be downgrading SIAI's credibility because it takes an idea seriously that you don't - I don't think you have enough evidence to conclude that they are reasoning imperfectly.

comment by David_Gerard · 2010-12-09T14:03:47.392Z · score: 8 (8 votes) · LW(p) · GW(p)

I'm speaking of convincing people who don't already agree with them. SIAI and LW look silly now in ways they didn't before.

There may be, as you posit, a good and convincing explanation for the apparently really stupid behaviour. However, to convince said outsiders (who are the ones with the currencies of money and attention), the explanation has to actually be made to said outsiders in an examinable step-by-step fashion. Otherwise they're well within rights of reasonable discussion not to be convinced. There's a lot of cranks vying for attention and money, and an organisation has to clearly show itself as better than that to avoid losing.

comment by shokwave · 2010-12-09T14:11:21.187Z · score: 0 (0 votes) · LW(p) · GW(p)

the explanation has to actually be made to said outsiders in an examinable step-by-step fashion.

By the time a person can grasp the chain of inference, and by the time they are consequentialist and Aumann-agreement-savvy enough for it to work on them, they probably wouldn't be considered outsiders. I don't know if there's a way around that. It is unfortunate.

comment by David_Gerard · 2010-12-09T15:49:17.674Z · score: 6 (6 votes) · LW(p) · GW(p)

To generalise your answer: "the inferential distance is too great to show people why we're actually right." This does indeed suck, but is indeed not reasonably avoidable.

The approach I would personally try is furiously seeding memes that make the ideas that will help close the inferential distance more plausible. See selling ideas in this excellent post.

comment by TheOtherDave · 2010-12-09T16:04:59.121Z · score: 3 (3 votes) · LW(p) · GW(p)

For what it's worth, I gather from various comments he's made in earlier posts that EY sees the whole enterprise of LW as precisely this "furiously seeding memes" strategy.

Or at least that this is how he saw it when he started; I realize that time has passed and people change their minds.

That is, I think he believes/ed that understanding this particular issue depends on understanding FAI theory depends on understanding cognition (or at least on dissolving common misunderstandings about cognition) and rationality, and that this site (and the book he's working on) are the best way he knows of to spread the memes that lead to the first step on that chain.

I don't claim here that he's right to see it that way, merely that I think he does. That is, I think he's trying to implement the approach you're suggesting, given his understanding of the problem.

comment by David_Gerard · 2010-12-09T16:10:18.985Z · score: 3 (3 votes) · LW(p) · GW(p)

Well, yes. (I noted it as my approach, but I can't see another one to approach it with.) Which is why throwing LW's intellectual integrity under the trolley like this is itself remarkable.

comment by TheOtherDave · 2010-12-09T19:37:14.265Z · score: 2 (2 votes) · LW(p) · GW(p)

Well, there's integrity, and then there's reputation, and they're different.

For example, my own on-three-minutes-thought proposed approach is similar to Kaminsky's, though less urgent. (As is, I think, appropriate... more people are working on hacking internet security than on, um, whatever endeavor it is that would lead one to independently discover dangerous ideas about AI. To put it mildly.)

I think that approach has integrity, but it won't address the issues of reputation: adopting that approach for a threat that most people consider absurd won't make me seem any less absurd to those people.

comment by David_Gerard · 2010-12-09T14:21:43.527Z · score: 5 (7 votes) · LW(p) · GW(p)

However, discussion of the chain of reasoning is on-topic for LessWrong (discussing a spectacularly failed local chain of reasoning and how and why it failed), and continued removal of bits of the discussion does constitute throwing LessWrong's integrity in front of the trolley.

comment by Vaniver · 2010-12-09T16:41:54.706Z · score: 2 (2 votes) · LW(p) · GW(p)

The perfect Bayesian consequentialist, however, would look at the decision, estimate the chances of the decision-maker being irrational (their credibility), and promptly revise their probability estimate of 'bad idea is actually dangerous' upwards, enough to approve of censorship.

There are two things going on here, and you're missing the other, important one. When a Bayesian consequentialist sees someone break a rule, they perform two operations- reduce the credibility of the person breaking the rule by the damage done, and increase the probability that the rule-breaking was justified by the credibility of the rule-breaker. It's generally a good idea to do the credibility-reduction first.

Keep in mind that credibility is constructed out of actions (and, to a lesser extent, words), and that people make mistakes. This sounds like captainitis, not wisdom.

comment by Jack · 2010-12-09T16:57:27.122Z · score: 0 (0 votes) · LW(p) · GW(p)

Aside:

It's generally a good idea to do the credibility-reduction first.

Why would it matter?

comment by Vaniver · 2010-12-09T17:10:18.073Z · score: -1 (1 votes) · LW(p) · GW(p)

You have three options, since you have two adjustments to do and you can use old or new values for each (but only three because you can't use new values for both).* Adjusting credibility first (i.e. using the old value of the rule's importance to determine the new credibility, then the new value of credibility to determine the new value of the credibility's importance) is the defensive play, and it's generally a good idea to behave defensively.

For example, let's say your neighbor Tim (credibility .5) tells you that there are aliens out to get him (prior probability 1e-10, say). If you adjust both using the old values, you get that Tim's credibility has dropped massively, but your belief that aliens are out to get Tim has risen massively. If you adjust the action first (where the 'rule' is "don't believe in aliens having practical effects"), your belief that aliens are out to get Tim rises massively- and then your estimate of Tim's credibility drops only slightly. If you adjust Tim's credibility first, you find that his credibility has dropped massively, and thus when you update the probability that aliens are out to get Tim it only bumps up slightly.

*You could iterate this a bunch of times, but that seems silly.

comment by Jack · 2010-12-09T17:49:52.512Z · score: 1 (3 votes) · LW(p) · GW(p)

Er, any update that doesn't use the old values for both is just wrong. If you use new values you're double-counting the evidence.

comment by Vaniver · 2010-12-10T02:25:21.187Z · score: 0 (0 votes) · LW(p) · GW(p)

I suppose that could be the case- I'm trying to unpack what exactly I'm thinking of when I think of 'credibility.' I can see strong arguments for either approach, depending on what 'credibility' is. Originally I was thinking of something along the lines of "prior probability a statement they make will be correct" but as soon as you know the content of the statement, that's not really relevant- and so now I'm imagining something along the lines of "how much I weight unlikely statements made by them," or more likely for a real person, "how much effort I put into checking their statements."

And so for the first one, it doesn't make sense to update the credibility- if someone previously trustworthy tells you something bizarre, you weight it highly. But for the second one, it does make sense to update the credibility first- if someone previously trustworthy tells you something bizarre, you should immediately become more skeptical of that statement and subsequent ones.

comment by Will_Sawin · 2010-12-10T02:44:14.758Z · score: 3 (3 votes) · LW(p) · GW(p)

But no more skeptical than is warranted by your prior probability.

Let's say that if aliens exist, a reliable Tim has a 99% probability of saying they do. If they don't, he has a 1% probability of saying they do.

An unreliable Tim has a 50/50 shot in either situation.

My priors were 50/50 reliable/unreliable, and 1,000,000:1 don't exist vs. exist, so the prior weights are:

reliable, exist: 1
unreliable, exist: 1
reliable, don't exist: 1,000,000
unreliable, don't exist: 1,000,000

Updates after he says they do:

reliable, exist: .99
unreliable, exist: .5
reliable, don't exist: 10,000
unreliable, don't exist: 500,000

So we now believe approximately 50 to 1 that he's unreliable, and 510,000 to 1.49 (roughly 342,000 to 1) that they don't exist.

This is what you get if you compute each of the new values based on the old ones.
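The joint update described here is mechanical enough to check in a few lines of Python. This is just a sketch of the arithmetic above, using exactly the numbers assumed in the comment; the "342,000 to 1" turns out to be the rounded version of what comes out:

```python
# A single joint Bayesian update over the four hypotheses
# (reliable/unreliable x exist/don't exist).

# Prior weights: 50/50 reliable, 1,000,000:1 against the aliens existing.
priors = {
    ("reliable", "exist"): 1,
    ("unreliable", "exist"): 1,
    ("reliable", "don't exist"): 1_000_000,
    ("unreliable", "don't exist"): 1_000_000,
}

# P(Tim says "they exist" | hypothesis).
likelihoods = {
    ("reliable", "exist"): 0.99,
    ("unreliable", "exist"): 0.5,
    ("reliable", "don't exist"): 0.01,
    ("unreliable", "don't exist"): 0.5,
}

# One Bayes update: multiply each prior weight by its likelihood.
posterior = {h: priors[h] * likelihoods[h] for h in priors}

exist = sum(w for (r, e), w in posterior.items() if e == "exist")
dont = sum(w for (r, e), w in posterior.items() if e == "don't exist")
reliable = sum(w for (r, e), w in posterior.items() if r == "reliable")
unreliable = sum(w for (r, e), w in posterior.items() if r == "unreliable")

print(f"unreliable : reliable = {unreliable / reliable:.0f} to 1")  # ~50 to 1
print(f"don't exist : exist   = {dont / exist:.0f} to 1")           # ~342282 to 1
```

Note that both marginals come out of the same single update on the old weights; there is no second pass using new values anywhere, which is Jack's point about double-counting.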

comment by Vaniver · 2010-12-10T05:54:59.143Z · score: 0 (0 votes) · LW(p) · GW(p)

Thanks for working that out- that made clearer to me what I think I was confused about before. What I was imagining by "update credibility based on their statement" was configuring your credibility estimate to the statement in question- but rather than 'updating' that's just doing a lookup to figure out what Tim's credibility is for this class of statements.

Looking at shokwave's comment again with a clearer mind:

The perfect Bayesian consequentialist, however, would look at the decision, estimate the chances of the decision-maker being irrational (their credibility), and promptly revise their probability estimate of 'bad idea is actually dangerous' upwards, enough to approve of censorship. Nothing strange there. You appear to be downgrading SIAI's credibility because it takes an idea seriously that you don't - I don't think you have enough evidence to conclude that they are reasoning imperfectly.

When you estimate the chances that the decision-maker is irrational, I feel you need to include the fact that you disagree with them now (my original position of playing defensively), instead of just looking at your past.

Why? Because it reduces the chances you get stuck in a trap- if you agree with Tim on propositions 1-10 and disagree on proposition 11, you might say "well, Tim might know something I don't, I'll change my position to agree with his." Then, when you disagree on proposition 12, you look back at your history and see that you agree with Tim on everything else, so maybe he knows something you don't. Now, even though you changed your position on proposition 11, you probably did decrease Tim's credibility- maybe you have stored "we agreed on 10 (or 10.5 or whatever) of 11 propositions."

So, when we ask "does SIAI censor rationally?" it seems like we should take the current incident into account before we decide whether or not to take their word on their censorship. It's also rather helpful to ask that narrower question, instead of "is SIAI rational?", because general rationality does not translate to competence in narrow situations.

comment by shokwave · 2010-12-10T17:25:27.632Z · score: 1 (1 votes) · LW(p) · GW(p)

So, when we ask "does SIAI censor rationally?" it seems like we should take the current incident into account before we decide whether or not to take their word on their censorship.

This is a subtle part of Bayesian updating. The question "does SIAI censor rationally?" is different from "was SIAI's decision to censor this case made rationally?" (it is different because in the second case we have some weak evidence that it was not - i.e., that we as rationalists would not have made the decision they did). We used our prior for "SIAI acts rationally" to determine or derive the probability of "SIAI censors rationally" (as you astutely pointed out, general rationality is not perfectly transitive), and then used "SIAI censors rationally" as our prior for the calculation of "did SIAI censor rationally in this case".

After our calculation, "did SIAI censor rationally in this case" is necessarily going to be lower in probability than our prior "SIAI censors rationally." Then, we can re-assess "SIAI censors rationally" in light of the fact that one of the cases of rational censorship has a higher level of uncertainty (now, our resolved disagreement is weaker evidence that SIAI does not censor rationally). That will revise "SIAI censors rationally" downwards - but not down to the level of "did SIAI censor rationally in this case".

To use your Tim's propositions example, you would want your estimation of proposition 12 to depend on not only how much you disagreed with him on prop 11, but also how much you agreed with him on props 1-10.

Perfect-Bayesian-Aumann-agreeing isn't binary about agreement; it would continue to increase the value of "stuff Tim knows that you don't" until it's easier to reduce the value of "Tim is a perfect Bayesian reasoner about aliens" - in other words, at about prop 13-14 the hypothesis "Tim is stupid with respect to aliens existing" would occur to you, and at prop 20 "Tim is stupid WRT aliens" and "Tim knows something I don't WRT aliens" would be equally likely.
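The trade-off described here can be made concrete with a toy sequential update. All the numbers below are invented for the illustration (they are not an attempt to reproduce the exact prop-13-14 and prop-20 figures above): we track the posterior that Tim reasons well about the topic as he asserts a string of individually unlikely claims.

```python
# Toy sequential update: how the posterior on "Tim is reliable about
# this topic" falls as he asserts one bizarre claim after another.
# All numbers are made up for illustration.

p_claim = 0.2       # assumed prior probability that each bizarre claim is true
p_reliable = 0.999  # strong prior trust in Tim (props 1-10 went well)

# P(Tim asserts a claim | reliable): he asserts truths with probability
# 0.9 and falsehoods with probability 0.1; unreliable Tim is a coin flip.
lik_reliable = 0.9 * p_claim + 0.1 * (1 - p_claim)  # about 0.26
lik_unreliable = 0.5

odds = p_reliable / (1 - p_reliable)
for n in range(1, 15):
    odds *= lik_reliable / lik_unreliable  # one update per bizarre claim
    print(f"after bizarre claim {n:2d}: P(Tim reliable) = {odds / (1 + odds):.3f}")
```

With these particular numbers the posterior crosses 50% around the eleventh bizarre claim; a stronger prior or less bizarre claims push the crossover later, which is the sense in which agreement on props 1-10 should keep counting.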

comment by timtyler · 2010-12-09T18:53:44.863Z · score: 2 (4 votes) · LW(p) · GW(p)

It was left up for ages before the censorship. The Streisand effect is well known. Yes, this is a crazy kind of marketing stunt - but also one that shows Yu'El's compassion for the tender and unprotected minds of his flock - his power over the other participants - and one that adds to the community folklore.

comment by David_Gerard · 2010-12-08T19:03:48.137Z · score: 2 (4 votes) · LW(p) · GW(p)

No, I think you're nitpicking to dodge the question, and looking for a more convenient world.

I think at this point it's clear that you really can't be expected to give a straight answer. Well done, you win!

comment by Vladimir_Nesov · 2010-12-08T19:09:11.210Z · score: -2 (8 votes) · LW(p) · GW(p)

No, I think you're nitpicking to dodge the question, and looking for a more convenient world.

Have you tried?

I read your comment, understood the error you made, and it was about not seeing the picture clearly enough. If you describe the situation in terms of the components I listed, I expect you'll see what went wrong. If you don't oblige, I'll probably describe the solution tomorrow.

Edit in response to severe downvoting: Seriously? Is it not allowed to entertain exercises about a conversational situation? (Besides, I was merely explaining an exercise given in another comment.) Believe me, argument can be a puzzle to understand, and not a fight. If clumsy attempts to understand are discouraged, how am I supposed to develop my mastery?

comment by TheOtherDave · 2010-12-08T19:55:59.134Z · score: 4 (4 votes) · LW(p) · GW(p)

If you're genuinely unaware of the status-related implications of the way you phrased this comment, and/or of the fact that some people rate those kinds of implications negatively, let me know and I'll try to unpack them.

If you're simply objecting to them via rhetorical question, I've got nothing useful to add.

If it matters, I haven't downvoted anyone on this thread, though I reserve the right to do so later.

comment by Vladimir_Nesov · 2010-12-08T20:10:55.684Z · score: 1 (1 votes) · LW(p) · GW(p)

If you're genuinely unaware of the status-related implications of the way you phrased this comment, and/or of the fact that some people rate those kinds of implications negatively, let me know and I'll try to unpack them.

I understand that status-grabbing phrasing can explain why downvotes were in fact made, but I object to their being made for that reason here, on Less Wrong. If I turn out to be wrong, then sure. There could be other reasons besides that.

If you're simply objecting to them via rhetorical question, I've got nothing useful to add.

Likely this, but it's not completely clear to me what you mean.

If it matters, I haven't downvoted anyone on this thread, though I reserve the right to do so later.

Not as an affiliation signal, since the question is about properties of my comments, not of the people who judge them. But since you are not one of the downvoters, this says that you have less access to the reasons behind their actions than if you were one of them.

comment by wedrifid · 2010-12-08T21:36:03.513Z · score: 5 (5 votes) · LW(p) · GW(p)

I am not one of the downvoters you are complaining about but the distinction is a temporal one, not one of differing judgement. I have since had the chance to add my downvote. That suggests my reasoning may have a slightly higher correlation at least. :)

If you're genuinely unaware of the status-related implications of the way you phrased this comment, and/or of the fact that some people rate those kinds of implications negatively, let me know and I'll try to unpack them.

I understand that status-grabbing phrasing can explain why downvotes were in fact made, but object that they should be made for that reason here, on Less Wrong.

Something I have observed is that people can often get away with status grabbing ploys but they will be held to a much higher standard while they are doing so. People will extend more grace to you when you aren't insulting them, bizarrely enough.

I often observe that the one state of mind that leads me to sloppy thinking is that of contempt. Contempt is also the signal you were laying on thickly in your comments here, and the thinking displayed therein was commensurately shoddy. Not in the sense that they were internally inconsistent, but insofar as they didn't relate at all well to the comments that you were presuming to reply to. (Whether the 'contempt' causality is, in fact, at play is not important - it is the results that get the votes.)

I wouldn't normally make such critiques but rhetorically or not you asked for one and this is a sincere reply.

comment by Vladimir_Nesov · 2010-12-08T22:03:53.735Z · score: 0 (0 votes) · LW(p) · GW(p)

Thank you. Contempt was not intended (or felt), I'll try keeping this possible impression in mind to figure out where I should tune down the way I talk to communicate emotion more accurately.

I often observe that the one state of mind that leads me to sloppy thinking is that of contempt.

Yes, it's fascinating for me how severely reasoning can be distorted by strong emotions, but generally I feel (social) emotions more rarely than other people do. When that happens, I identify motivated thoughts by the dozens, and have an impaired ability to think clearly. I wish there were a reproducible way of inducing such emotional experience, to experiment more with those states of mind.

Contempt is also the signal you were laying on thickly in your comments here and thinking displayed therein was commensurably shoddy. Not in the sense that they were internally inconsistent but in as much as they didn't relate at all well with the comments that you were presuming to reply to.

I don't believe that it's the cause. I'm generally bad at guessing what people mean; I often need to be told explicitly. I don't believe that was the case with David Gerard's comments in this thread, though (do you disagree?). I believe it was more the case with waitingforgodel's comments today.

I wouldn't normally make such critiques but rhetorically or not you asked for one and this is a sincere reply.

I appreciate such critiques, so my nonexistent disapproval of them, at least, shouldn't be a reason to make them more rarely.

comment by wedrifid · 2010-12-08T22:06:14.130Z · score: 3 (3 votes) · LW(p) · GW(p)

I don't believe that it's the cause. I'm generally bad at guessing what people mean, I often need being told explicitly. I don't believe it's the case with David Gerard's comments in this thread though (do you disagree?). I believe it was more the case with waitingforgodel's comments today.

Much less so with David. David also expressed himself more clearly - or perhaps instead in a more compatible idiom.

I wish there was a reproducible way of inducing such emotional experience to experiment more with those states of mind.

While such things are never going to be perfectly tailored for the desired effect MDMA invokes a related state. :)

comment by TheOtherDave · 2010-12-08T20:24:33.714Z · score: 2 (2 votes) · LW(p) · GW(p)

Likely this, but it's not completely clear to me what you mean.

I meant that if you were asking the question as a way of expressing your objections I had nothing useful to add.

the question is about properties of my comments, not of the people who judge them. But since you are not one of the downvoters, this says that you have less access to the reasons behind their actions than if you were one of them

Yes. Of course, if the question isn't about the people who judge the comments, then access to those people's motivations isn't terribly relevant to the question.

comment by Vladimir_Nesov · 2010-12-08T20:34:43.250Z · score: 0 (0 votes) · LW(p) · GW(p)

Of course, if the question isn't about the people who judge the comments, then access to those people's motivations isn't terribly relevant to the question.

The reasons they had for making their decisions can (should) be about my comment, not about them.

comment by xamdam · 2010-12-08T21:58:34.743Z · score: 0 (0 votes) · LW(p) · GW(p)

To be fair, I think the parent of the downvoted comment also has status implications:

I think you're nitpicking to dodge the question

It's a serious accusation hurled at the wrong type of guy IMO - Vladimir probably takes the objectivity award on this forum. I think his response was justified and objective, as usual.

comment by David_Gerard · 2010-12-09T11:25:47.510Z · score: 4 (4 votes) · LW(p) · GW(p)

When someone says "look, here is this thing you did that led to these clear problems in reality" and the person they're talking to answers "ah, but what is reality?" then the first person may reasonably consider that dodging the question.

comment by [deleted] · 2010-12-08T00:59:57.295Z · score: 2 (2 votes) · LW(p) · GW(p)

While it's not geared specifically towards individuals trying to do research, the (Virtual) Employment Open Thread has relevant advice for making money with little work.

comment by James_Miller · 2010-12-07T22:51:24.789Z · score: 2 (2 votes) · LW(p) · GW(p)

If you had a paper that was good enough to get published if you were a professor, then the SIAI could probably find a professor to co-author with you.

Google Scholar has greatly reduced the benefit of having access to a college library.

comment by sketerpot · 2010-12-08T01:20:46.159Z · score: 6 (6 votes) · LW(p) · GW(p)

Google Scholar has greatly reduced the benefit of having access to a college library.

That depends on the field. Some fields are so riddled with paywalls that Google Scholar is all but useless; others, like computer science, are much more progressive.

comment by katydee · 2010-12-10T18:47:50.773Z · score: 1 (3 votes) · LW(p) · GW(p)

Ah, you remind me of me from a while back. When I was an elementary schooler, I once replied to someone asking "would you rather be happy or right" with "how can I be happy if I can't be right?" But these days I've moderated somewhat, and I feel that there is indeed knowledge that can be harmful.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T21:00:45.288Z · score: 1 (17 votes) · LW(p) · GW(p)

Reposting comments deleted by the authors or by moderators will be considered hostile behavior and interfering with the normal and intended behavior of the site and its software, and you will be asked to leave the Less Wrong site.

-- Eliezer Yudkowsky, Less Wrong Moderator.

comment by Vladimir_Nesov · 2010-12-09T21:25:19.044Z · score: 15 (15 votes) · LW(p) · GW(p)

This decree is ambiguous enough to be seen as threatening people not to repost their banned comments (made in good faith but including too much forbidden material by accident) even after removing all objectionable content. I think this should be clarified.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T21:33:54.593Z · score: 9 (11 votes) · LW(p) · GW(p)

Consider that clarification made; no such threat is intended.

comment by XiXiDu · 2010-12-10T09:53:44.522Z · score: 4 (4 votes) · LW(p) · GW(p)

Does this decree have a retrospective effect? And what about the private message system?

comment by XiXiDu · 2011-01-28T11:06:47.906Z · score: 0 (0 votes) · LW(p) · GW(p)

Reposting comments deleted by the authors or by moderators will be considered hostile behavior...

Does this only apply to comments? The reason I ask by replying to this old comment is that I noticed you can't delete posts. If you delete a post, it is no longer listed and the name of the author disappears, but it is still available, can be linked to using the original link, and can be found via the search function.

If you want to delete a post you first have to edit it to remove its content manually.

comment by waitingforgodel · 2010-12-09T21:07:26.158Z · score: -10 (24 votes) · LW(p) · GW(p)

Then I guess I'll be asked to leave the lesswrong site.

The 0.0001% bit was a reference to my earlier precommitment

comment by Vladimir_Nesov · 2010-12-09T21:12:32.249Z · score: 11 (17 votes) · LW(p) · GW(p)

In other words, you have allegedly precommited to existential terrorism, killing the Future with small probability if your demands are not met.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T21:37:55.439Z · score: 5 (9 votes) · LW(p) · GW(p)

WFG gave this reply which was downvoted under the default voting threshold:

Agree except for the 'terrorism' and 'allegedly' part.

I just emailed a right-wing blogger some stuff that probably isn't good for the future. Not sure what the increase was, hopefully around 0.0001%.

I'll write it up in more detail and post a top-level discussion thread after work.

-wfg

I'm also reposting this just in case wfg tries to delete it or modify it later, since I think it's important for everyone to see. Ordinarily I'd consider that a violation of netiquette, but under these here exact circumstances...

comment by CharlieSheen · 2011-08-28T11:09:39.505Z · score: 3 (3 votes) · LW(p) · GW(p)

I just emailed a right-wing blogger some stuff that probably isn't good for the future. Not sure what the increase was, hopefully around 0.0001%.

Wow, that manages to signal a willingness to use unnecessarily risky tactics, malignancy AND marginal incompetence.

While I do understand that right wing people are naturally the kind of people who bring about increased existential risks, I think my own occasional emails to left wing bloggers aren't that shabby (since that makes me a dirty commie enabler). In fact I email all sorts of bloggers with questions and citations to papers they might be interested in.

Muhahahaha.

How does one even estimate something like a 0.0001% increase in existential risk from something like sending an email to a blogger? The error bars on the thing are vast. All you are doing is putting up a giant sign with negative characteristics that will make people want to cooperate with you less.

comment by waitingforgodel · 2010-12-09T21:15:48.770Z · score: -29 (45 votes) · LW(p) · GW(p)

Agree except for the 'terrorism' and 'allegedly' part.

I just emailed a right-wing blogger some stuff that probably isn't good for the future. Not sure what the increase was, hopefully around 0.0001%.

I'll write it up in more detail and post a top-level discussion thread after work.

-wfg

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T21:19:20.434Z · score: 25 (33 votes) · LW(p) · GW(p)

(Shrugs.)

Your decision. The Singularity Institute does not negotiate with terrorists.

comment by wedrifid · 2010-12-09T21:29:06.634Z · score: 21 (23 votes) · LW(p) · GW(p)

WFG, please quit with the 'increase existential risk' idea. Allowing Eliezer to claim moral high ground here makes the whole situation surreal.

A (slightly more) sane response would be to direct your altruistic punishment towards the SIAI specifically. They are, after all, the group who is doing harm (to you according to your values). Opposing them makes sense (given your premises.)

comment by waitingforgodel · 2010-12-09T22:09:31.291Z · score: -4 (16 votes) · LW(p) · GW(p)

I don't think my addition gives EY the high ground.

What are the points you wanted to bring up with him?

comment by waitingforgodel · 2010-12-09T21:37:49.111Z · score: -14 (30 votes) · LW(p) · GW(p)

A (slightly more) sane response would be to direct your altruistic punishment towards the SIAI specifically.

I'm all ears.

If you can think of something equally bad that targets SIAI specifically, (or anyone reading this can), email it to badforsiai.wfg@xoxy.net

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-09T21:39:20.625Z · score: 17 (25 votes) · LW(p) · GW(p)

Note that comments like these are still not being deleted, by the way. LW censors Langford Basilisks, not arbitrarily great levels of harmful stupidity or hostility toward the hosts - those are left to ordinary downvoting.

comment by waitingforgodel · 2010-12-09T22:14:32.043Z · score: -4 (30 votes) · LW(p) · GW(p)

Are you aware of the damage your censoring is doing?

That blocking these things you think are bad (and most people do not) is causing tangible PR problems, chilling effects, SIAI/LW appearing psycho, and repeated discussion of an ancient thread?

If you weren't you, but were instead witnessing a friend do something so utterly stupid, wouldn't you tell them to stop?

comment by Roko · 2010-12-09T22:17:05.090Z · score: 11 (15 votes) · LW(p) · GW(p)

May I at this point point out that I agree that the post in question should not appear in public. Therefore, it is a question of the author's right to retract material, not of censorship.

comment by wedrifid · 2010-12-09T22:23:54.925Z · score: 9 (9 votes) · LW(p) · GW(p)

Therefore, it is a question of the author's right to retract material, not of censorship.

That does not actually follow. Censoring what other people say is still censoring even if you happened to have said related things previously.

comment by Vladimir_Nesov · 2010-12-09T22:32:09.389Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, this other aspect.

comment by waitingforgodel · 2010-12-09T22:27:43.067Z · score: 4 (14 votes) · LW(p) · GW(p)

In this case, the comment censored was not posted by you. Therefore you're not the author.

FYI the actual author didn't even know it was censored.

comment by Vladimir_Nesov · 2010-12-09T22:22:25.493Z · score: 3 (5 votes) · LW(p) · GW(p)

Good idea. We should've started using this standard reference when the censorship complaints began, but at least we can henceforth.

comment by Roko · 2010-12-09T22:23:37.473Z · score: 0 (6 votes) · LW(p) · GW(p)

Yes. THIS IS NOT CENSORSHIP. Just in case anyone missed it.

comment by wedrifid · 2010-12-09T22:27:33.175Z · score: 14 (18 votes) · LW(p) · GW(p)

You are evidently confused about what the word means. The systematic deletion of any content that relates to an idea that the person with power does not wish to be spoken is censorship in the same way that threatening to (probabilistically) destroy humanity is terrorism. As in, blatantly obviously - it's just what the words happen to mean.

Going around saying 'this isn't censorship' while doing it would trigger all sorts of 'crazy cult' warning bells.

comment by fortyeridania · 2010-12-10T10:17:55.776Z · score: 6 (6 votes) · LW(p) · GW(p)

Yes, the acts in question can easily be denoted by the terms "blackmail" and "censorship." And your final sentence is certainly true as well.

To avoid being called a cult, to avoid being a cult, and to avoid doing bad things generally, we should stop the definition debate and focus on whether people's behavior has been appropriate. If connotation conundrums keep you quarreling about terms, pick variables (e.g. "what EY did"=E and "what WFG precommitted to doing, and in fact did"=G) and keep talking.

comment by waitingforgodel · 2010-12-09T22:28:29.426Z · score: 5 (17 votes) · LW(p) · GW(p)

YES IT IS. In case anyone missed it. It isn't Roko's post we're talking about right now.

comment by Roko · 2010-12-09T22:37:59.997Z · score: 5 (15 votes) · LW(p) · GW(p)

There is still a moral sense in which if, after careful thought, I decided that that material should not have been posted, then any posts which resulted solely from my post are in a sense a violation of my desire to not have posted it. Especially if said posts operate under the illusion that my original post was censored rather than retracted.

But in reality such ideas tend to propagate like the imp of the perverse: a gnawing desire to know what the "censored" material is, even if everyone who knows what it is has subsequently decided that they wished they didn't! E.g. both me and Nesov have been persuaded (once fully filled in) that this is really nasty stuff and shouldn't be let out. (Correct me if I am wrong.)

This "imp of the perverse" property is actually part of the reason why the original post is harmful. In a sense, this is an idea-virus which makes people who don't yet have it want to have it, but as soon as they have been exposed to it, they (belatedly) realize they really didn't want to know about it or spread it.

Sigh.

comment by XiXiDu · 2010-12-10T13:46:35.860Z · score: 8 (12 votes) · LW(p) · GW(p)

The only people who seem to be filled in are you and Yudkowsky. I think Nesov just argues against it based on some very weak belief. As far as I can tell, I got all the material in question. The only possible reason I can see why one wouldn't want to spread it is that its negative potential outweighs its very-very-low probability (and that only if you accept a long chain of previous beliefs). It doesn't. It also isn't some genuine and brilliant idea, as all this mystery mongering makes it seem. Everyone I sent it to just laughed about it. But maybe you can fill me in?

comment by Roko · 2010-12-10T17:06:28.736Z · score: 5 (19 votes) · LW(p) · GW(p)

Look, you have three people all of whom think it is a bad idea to spread this. All are smart. Two initially thought it was OK to spread it.

Furthermore, I would add that I wish I had never learned about any of these ideas. In fact, I wish I had never come across the initial link on the internet that caused me to think about transhumanism and thereby about the singularity; I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm with such small durations of inattention, uncautiousness and/or stupidity, even if it is all premultiplied by a small probability. (not a very small one, mind you. More like 1/500 type numbers here)

If this is not enough warning to make you stop wanting to know more, then you deserve what you get.

comment by XiXiDu · 2010-12-10T17:59:50.561Z · score: 13 (19 votes) · LW(p) · GW(p)

I wish I had never come across the initial link on the internet that caused me to think about transhumanism and thereby about the singularity;

I wish you'd talk to someone other than Yudkowsky about this. You don't need anyone to harm you, you already seem to harm yourself. You indulge in self-inflicted psychological stress. As Seneca said, "there are more things that terrify us than there are that oppress us, and we suffer more often in opinion than in reality". You worry about, and pay interest on, a debt that will likely never be incurred.

Look, you have three people all of whom think it is a bad idea to spread this. All are smart.

I read about quite a few smart people who hold idiot beliefs, I only consider this to be marginal evidence.

Furthermore, I would add that I wish I had never learned about any of these ideas.

You'd rather be some ignorant pleasure maximizing device? For me truth is the most cherished good.

If this is not enough warning to make you stop wanting to know more, then you deserve what you get.

BS.

comment by Roko · 2010-12-10T18:04:07.523Z · score: 5 (5 votes) · LW(p) · GW(p)

For me truth is the most cherished good.

More so than not opening yourself up to a small risk of severe consequences? E.g. if you found a diary that clearly belonged to some organized crime boss, would you open it up and read it? I see this situation as analogous.

comment by Manfred · 2010-12-10T19:39:06.864Z · score: 3 (3 votes) · LW(p) · GW(p)

Really thought you were going to go with Tom Riddle on this one. Perfect line break for it :)

comment by timtyler · 2010-12-10T18:11:24.739Z · score: -3 (5 votes) · LW(p) · GW(p)

For me truth is the most cherished good.

You are a truth seeker? Really? I think that makes you pretty rare and unusual!

There's a lot of truth out there. Is there any pattern to which truths you are interested in?

comment by XiXiDu · 2010-12-10T18:41:34.249Z · score: 2 (6 votes) · LW(p) · GW(p)

You are a truth seeker? Really?

Yes, I'd choose to eat from the tree of the knowledge of good and evil and tell God to fuck off.

comment by timtyler · 2010-12-10T18:46:40.256Z · score: 8 (12 votes) · LW(p) · GW(p)

So, as a gift: 63,174,774 + 6,761,374,774 = 6,824,549,548.

Or - if you don't like that particular truth - care to say which truths you do like?

comment by XiXiDu · 2010-12-10T18:54:00.789Z · score: 0 (2 votes) · LW(p) · GW(p)

Or - if you don't like that particular truth - care to say which truths you do like?

I can't tell you, I cherry-pick what I want to know when it is hinted at. But generally most of all I want to know about truths that other agents don't want me to know about.

comment by TheOtherDave · 2010-12-10T19:11:46.524Z · score: 6 (6 votes) · LW(p) · GW(p)

There are thousands of truths I know that I don't want you to know about. (Or, to be more precise, that I want you to not know about.) Are you really most interested in those, out of all the truths I know?

I think I'd be disturbed by that if I thought it were true.

comment by Emile · 2010-12-10T22:32:22.253Z · score: 4 (4 votes) · LW(p) · GW(p)

But generally most of all I want to know about truths that other agents don't want me to know about.

I'm not sure that's a very good heuristic - are you sure it truly describes the truths you care most about? It seems analogous to the fact that people are more motivated by a cause if they learn some people oppose it, which is silly.

comment by timtyler · 2010-12-10T19:00:14.625Z · score: 0 (0 votes) · LW(p) · GW(p)

Heh - OK. Thanks for the reply. Yes, that is not that bad a heuristic! Maybe someday you can figure this out in more detail. It is surely good to know what you want.

comment by katydee · 2010-12-10T18:53:15.277Z · score: 0 (2 votes) · LW(p) · GW(p)

I love this reply. I don't think it's necessarily the best reply, and I don't really even think it's a polite reply, but it's certainly one of the funniest ones I've seen here.

comment by Vaniver · 2010-12-10T17:23:22.622Z · score: 8 (10 votes) · LW(p) · GW(p)

Look, you have three people all of whom think it is a bad idea to spread this. All are smart. Two initially thought it was OK to spread it.

I see a lot more than three people here, most of whom are smart, and most of them think that Langford basilisks are fictional, and even if they aren't, censoring them is the wrong thing to do. You can't quarantine the internet, and so putting up warning signs makes more people fall into the pit.

comment by katydee · 2010-12-10T18:09:43.342Z · score: 3 (7 votes) · LW(p) · GW(p)

I saw the original idea and the discussion around it, but I was (fortunately) under stress at the time and initially dismissed it as so implausible as to be unworthy of serious consideration. Given the reactions to it by Eliezer, Alicorn, and Roko, who seem very intelligent and know more about this topic than I do, I'm not so sure. I do know enough to say that, if the idea is something that should be taken seriously, it's really serious. I can tell you that I am quite happy that the original posts are no longer present, because if they were I am moderately confident that I would want to go back and see if I could make more sense out of the matter, and if Eliezer, Alicorn, and Roko are right about this, making sense out of the matter would be seriously detrimental to my health.

Thankfully, either it's a threat but I don't understand it fully, in which case I'm safe, or it's not a threat, in which case I'm also safe. But I am sufficiently concerned about the possibility that it's a threat that I don't understand fully but might be able to realize independently given enough thought that I'm consciously avoiding extended thought about this matter. I will respond to posts that directly relate to this one but am otherwise done with this topic-- rest assured that, if you missed this one, you're really quite all right for it!

comment by Vaniver · 2010-12-10T18:21:41.543Z · score: 5 (7 votes) · LW(p) · GW(p)

Given the reactions to it by Eliezer, Alicorn, and Roko, who seem very intelligent and know more about this topic than I do, I'm not so sure.

This line of argument really bothers me. What does it mean for E, A, and R to seem very intelligent? As far as I can tell, the necessary conclusion is "I will believe a controversial statement of theirs without considering it." When you word it like that, the standards are a lot higher than "seem very intelligent", or at least narrower- you need to know their track record on decisions like this.

(The controversial statement is "you don't want to know about X," not X itself, by the way.)

comment by katydee · 2010-12-10T18:27:56.918Z · score: 9 (9 votes) · LW(p) · GW(p)

I am willing to accept the idea that (intelligent) specialists in a field may know more about their field than nonspecialists and are therefore more qualified to evaluate matters related to their field than I.

comment by Vaniver · 2010-12-10T18:37:09.540Z · score: 5 (5 votes) · LW(p) · GW(p)

Good point, though I would point out that you need E, A, and R to be specialists when it comes to how people react to X, not just X, and I would say there's evidence that's not true.

comment by katydee · 2010-12-10T18:44:08.778Z · score: 1 (1 votes) · LW(p) · GW(p)

I agree, but I know what conclusion I would draw from the belief in question if I actually believed it, so the issue of their knowledge of how people react is largely immaterial to me in particular. I was mostly posting to provide a data point in favor of keeping the material off LW, not to attempt to dissolve the issue completely or anything.

comment by Vladimir_Nesov · 2010-12-10T18:27:07.025Z · score: 0 (0 votes) · LW(p) · GW(p)

When you word it like that, the standards are a lot higher than "seem very intelligent", or at least narrower- you need to know their track record on decisions like this.

You don't need any specific kind of proof, you already have some state of knowledge about correctness of such statements. There is no "standard of evidence" for forming a state of knowledge, it just may be that without the evidence that meets that "standard" you don't expect to reach some level of certainty, or some level of stability of your state of knowledge (i.e. low expectation of changing your mind).

comment by Roko · 2010-12-10T17:34:09.246Z · score: 0 (6 votes) · LW(p) · GW(p)

Whatever man, go ahead and make your excuses, you have been warned.

comment by Vaniver · 2010-12-10T17:41:37.862Z · score: 8 (12 votes) · LW(p) · GW(p)

I have not only been warned, but I have stared the basilisk in the eyes, and I'm still here typing about it. In fact, I have only cared enough to do so because it was banned, and I wanted the information on how dangerous it was to judge the wisdom of the censorship.

On a more general note, being terrified of very unlikely terrible events is a known human failure mode. Perhaps it would be more effective at improving human rationality to expose people to ideas like this with the sole purpose of overcoming that sort of terror?

comment by Jack · 2010-12-10T18:31:19.160Z · score: 5 (5 votes) · LW(p) · GW(p)

I'll just second that I also read it a while back (though after it was censored) and thought that it was quite interesting but wrong on multiple levels. Not 'probably wrong' but wrong like an invalid logic proof is wrong (though of course I am not 100% certain of anything). My main concern about the censorship is that not talking about what was wrong with the argument will allow the proliferation of the reasoning errors that left people thinking the conclusion was plausible. There is a kind of self-fulfilling prophecy involved in not recognizing these errors, which is particularly worrying.

comment by JGWeissman · 2010-12-11T01:58:40.275Z · score: 7 (9 votes) · LW(p) · GW(p)

Consider this invalid proof that 1 = 2:

1. Let x = y
2. x^2 = x*y
3. x^2 - y^2 = x*y - y^2
4. (x - y)*(x + y) = y*(x - y)
5. x + y = y
6. y + y = y  (substitute using 1)
7. 2y = y
8. 2 = 1

You could refute this by pointing out that step (5) involved division by (x - y) = (y - y) = 0, and you can't divide by 0.

But imagine if someone claimed that the proof is invalid because "you can't represent numbers with letters like 'x' and 'y'". You would think that they don't understand what is actually wrong with it, or why someone might mistakenly believe it. This is basically my reaction to everyone I have seen oppose the censorship because of some argument they present that the idea is wrong and no one would believe it.
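The refutation above can be checked concretely. A minimal sketch in Python (an illustrative aside; the value 3 for x and y is picked arbitrarily, since any x = y exhibits the same failure):

```python
# Walk through the "proof" with concrete numbers (x = y = 3).
# Every step holds until step 5, where both sides are divided
# by (x - y), which is zero.
x = y = 3
assert x == y                        # 1. x = y
assert x**2 == x*y                   # 2.
assert x**2 - y**2 == x*y - y**2     # 3. both sides are 0
assert (x - y)*(x + y) == y*(x - y)  # 4. 0 == 0, still true
# 5. "divide both sides by (x - y)" -- but the divisor is zero:
assert (x - y) == 0
# and indeed the claimed result of step 5 is false:
assert not (x + y == y)
```

All the assertions pass: the equalities really do hold through step 4, and the proof only breaks at the division by zero, not at the use of letters for numbers.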

comment by Jack · 2010-12-11T03:11:11.903Z · score: 2 (4 votes) · LW(p) · GW(p)

I'm actually not sure if I understand your point. Either it is a round-about way of making it or I'm totally dense and the idea really is dangerous (or some third option).

It's not that the idea is wrong and no one would believe it, it's that the idea is wrong and when presented with the explanation for why it's wrong no one should believe it. In addition, it's kind of important that people understand why it's wrong. I'm sympathetic to people with different minds that might have adverse reactions to things I don't, but the solution to that is to warn them off, not censor the topics entirely.

comment by JGWeissman · 2010-12-11T03:26:39.994Z · score: 1 (5 votes) · LW(p) · GW(p)

Yes, the idea really is dangerous.

it's that the idea is wrong and when presented with the explanation for why it's wrong no one should believe it.

And for those who understand the idea, but not why it is wrong, nor the explanation of why it is wrong?

the solution to that is to warn them off, not censor the topics entirely.

This is a politically reinforced heuristic that does not work for this problem.

comment by XiXiDu · 2010-12-11T12:12:35.651Z · score: 6 (6 votes) · LW(p) · GW(p)

This is a politically reinforced heuristic that does not work for this problem.

Transparency is very important regarding people and organisations in powerful and unique positions. The way they act and what they claim in public is weak evidence in support of their honesty. To claim that they have to censor certain information in the name of the greater public good, and to fortify the decision based on their public reputation, bears no evidence about their true objectives. The only way to solve this issue is by means of transparency.

Surely transparency might have negative consequences, but they mustn't and can't outweigh the potential risks of just believing that certain people are telling the truth and do not engage in deception to follow through on their true objectives.

There is also nothing that Yudkowsky has ever achieved that sufficiently proves a superior intellect which would in turn justify people simply believing him about some extraordinary claim.

comment by JGWeissman · 2010-12-11T17:49:15.851Z · score: 1 (5 votes) · LW(p) · GW(p)

When I say something is a misapplied politically reinforced heuristic, you only reinforce my point by making fully general political arguments that it is always right.

Censorship is not the most evil thing in the universe. The consequences of transparency are allowed to be worse than censorship. Deal with it.

comment by XiXiDu · 2010-12-11T19:08:52.017Z · score: 3 (3 votes) · LW(p) · GW(p)

When I say something is a misapplied politically reinforced heuristic, you only reinforce my point by making fully general political arguments that it is always right.

I already had Anna Salamon telling me something about politics. You sound as incomprehensible to me. Sorry, not meant as an attack.

Censorship is not the most evil thing in the universe. The consequences of transparency are allowed to be worse than censorship. Deal with it.

I stated several times in the past that I am completely in favor of censorship, I have no idea why you are telling me this.

comment by jimrandomh · 2010-12-11T21:13:06.500Z · score: 3 (5 votes) · LW(p) · GW(p)

Our rules and intuitions about free speech and censorship are based on the types of censorship we usually see in practice. Ordinarily, if someone is trying to censor a piece of information, then that information falls into one of two categories: either it's information that would weaken them politically, by making others less likely to support them and more likely to support their opponents, or it's information that would enable people to do something that they don't want done.

People often try to censor information that makes people less likely to support them, and more likely to support their opponents. For example, many governments try to censor embarrassing facts ("the Purple Party takes bribes and kicks puppies!"), the fact that opposition exists ("the Pink Party will stop the puppy-kicking!") and its strength ("you can join the Pink Party, there are 10^4 of us already!"), and organization of opposition ("the Pink Party rally is tomorrow!"). This is most obvious with political parties, but it happens anywhere people feel like there are "sides" - with religions (censorship of "blasphemy") and with public policies (censoring climate change studies, reports from the Iraq and Afghan wars). Allowing censorship in this category is bad because it enables corruption, and leaves less-worthy groups in charge.

The second common instance of censorship is encouragement and instructions for doing things that certain people don't want done. Examples include cryptography, how to break DRM, pornography, and bomb-making recipes. Banning these is bad if the capability is suppressed for a bad reason (cryptography enables dissent), if it's entangled with other things (general-purpose chemistry applies to explosives), or if it requires infrastructure that can also be used for the first type of censorship (porn filters have been caught blocking politicians' campaign sites).

These two cases cover 99.99% of the things we call "censorship", and within these two categories, censorship is definitely bad, and usually worth opposing. It is normally safe to assume that if something is being censored, it is for one of these two reasons. There are gray areas - slander (when the speaker knows he's lying and has malicious intent), and bomb-making recipes (when they're advertised as such and not general-purpose chemistry), for example - but the law has the exceptions mapped out pretty accurately. (Slander gets you sued, bomb-making recipes get you surveilled.) This makes a solid foundation for the principle that censorship should be opposed.

However, that principle and the analysis supporting it apply only to censorship that falls within these two domains. When things fall outside these categories, we usually don't call them censorship; for example, there is a widespread conspiracy among email and web site administrators to suppress ads for Viagra, but we don't call that censorship, even though it meets every aspect of the definition except motive. If you happen to find a weird instance of censorship which doesn't fall into either category, then you have to start over and derive an answer to whether censorship in that particular case is good or bad, from scratch, without resorting to generalities about censorship-in-general. Some of the arguments may still apply - for example, building a censorship-technology infrastructure is bad even if it's only meant to be used on spam - but not all of them, and not with the same force.

If the usual arguments against censorship don't apply, and we're trying to figure out whether to censor it, the next two things to test are whether it's true, and whether an informed reader would want to see it. If both of these conditions hold, then it should not be censored. However, if either condition fails to hold, then it's okay to censor.

Either the forbidden post is false, in which case it does not deserve protection because it's false, or it's true, in which case it should be censored because no informed person should want to see it. In either case, people spreading it are doing a bad thing.

comment by Jack · 2010-12-11T21:37:23.282Z · score: 6 (6 votes) · LW(p) · GW(p)

Either the forbidden post is false, in which case it does not deserve protection because it's false,

Even if this is right the censorship extends to perhaps true conversations about why the post is false. Moreover, I don't see what truth has to do with it. There are plenty of false claims made on this site that nonetheless should be public because understanding why they're false and how someone might come to think that they are true are worthwhile endeavors.

The question here is rather straight forward: does the harm of the censorship outweigh the harm of letting people talk about the post. I can understand how you might initially think those who disagree with you are just responding to knee-jerk anti-censorship instincts that aren't necessarily valid here. But from where I stand the arguments made by those who disagree with you do not fit this pattern. I think XiXi has been clear in the past about why the transparency concern does apply to SIAI. We've also seen arguments for why censorship in this particular case is a bad idea.

comment by Vaniver · 2010-12-11T22:38:56.673Z · score: 3 (5 votes) · LW(p) · GW(p)

Either the forbidden post is false, in which case it does not deserve protection because it's false, or it's true, in which case it should be censored because no informed person should want to see it. In either case, people spreading it are doing a bad thing.

There are clearly more than two options here. There seem to be two points under contention:

It is/is not (1/2) reasonable to agree with the forbidden post.

It is/is not (3/4) desirable to know the contents of the forbidden post.

You seem to be restricting us to either 2+3 or 1+4. It seems that 1+3 is plausible (should we keep children from ever knowing about death because it'll upset them?), and 2+4 seems like a good argument for restriction of knowledge (the idea is costly until you work through it, and the benefits gained from reaching the other side are lower than the costs).

But I personally suspect 2+3 is the best description, and that doesn't explain why people trying to spread it are doing a bad thing. Should we delete posts on Pascal's Wager because someone might believe it?

comment by David_Gerard · 2010-12-11T22:04:38.365Z · score: 3 (3 votes) · LW(p) · GW(p)

Either the forbidden post is false, in which case it does not deserve protection because it's false, or it's true, in which case it should be censored because no informed person should want to see it.

Excluded middle, of course: incorrect criterion. (Was this intended as a test?) It would not deserve protection if it were useless (like spam), not "if it were false."

The reason I consider sufficient to keep it off LessWrong is that it actually hurt actual people. That's pretty convincing to me. I wouldn't expunge it from the Internet (though I might put a warning label on it), but from LW? Appropriate. Reposting it here? Rude.

Unfortunately, that's also an argument as to why it needs serious thought applied to it, because if the results of decompartmentalised thinking can lead there, humans need to be able to handle them. As Vaniver pointed out, there are previous historical texts that have had similar effects. Rationalists need to be able to cope with such things, as they have learnt to cope with previous conceptual basilisks. So it's legitimate LessWrong material at the same time as being inappropriate for here. Tricky one.

(To the ends of that "compartmentalisation" link, by the way, I'm interested in past examples of basilisks and other motifs of harmful sensation in idea form. Yes, I have the deleted Wikipedia article.)

Note that I personally found the idea itself silly at best.

comment by TheOtherDave · 2010-12-11T21:52:48.453Z · score: 1 (1 votes) · LW(p) · GW(p)

The assertion that if a statement is not true, fails to alter political support, fails to provide instruction, and an informed reader wants to see that statement, it is therefore a bad thing to spread that statement and an OK thing to censor, is, um, far from uncontroversial.

To begin with, most fiction falls into this category. For that matter, so does most nonfiction, though at least in that case the authors generally don't intend for it to be non-true.

comment by jimrandomh · 2010-12-11T22:22:36.004Z · score: 0 (0 votes) · LW(p) · GW(p)

The assertion that if a statement is not true, fails to alter political support, fails to provide instruction, and an informed reader wants to see that statement, it is therefore a bad thing to spread that statement and a OK thing to censor, is, um, far from uncontroversial.

No, you reversed a sign bit: it is okay to censor if an informed reader wouldn't want to see it (and the rest of those conditions).

comment by TheOtherDave · 2010-12-11T22:43:14.794Z · score: 0 (0 votes) · LW(p) · GW(p)

No, I don't think so. You said "if either condition fails to hold, then it's okay to censor." If it isn't true, and an informed reader wants to see it, then one of the two conditions failed to hold, and therefore it's OK to censor.

No?

comment by jimrandomh · 2010-12-11T23:39:14.256Z · score: 0 (0 votes) · LW(p) · GW(p)

Oops, you're right - one more condition is required. The condition I gave is only sufficient to show that it fails to fall into a protected class, not that it falls in the class of things that should be censored; there are things which fall in neither class (which aren't normally censored because that requires someone with a motive to censor it, which usually puts it into one of the protected classes). To make it worthy of censorship, there must additionally be a reason outside the list of excluded reasons to censor it.

comment by JGWeissman · 2010-12-11T19:21:23.285Z · score: -2 (2 votes) · LW(p) · GW(p)

I stated several times in the past that I am completely in favor of censorship, I have no idea why you are telling me this.

Your comment that I am replying to is often way more salient than things you have said in the past that I may or may not have observed.

comment by XiXiDu · 2010-12-11T19:52:42.313Z · score: 3 (3 votes) · LW(p) · GW(p)

I just have trouble understanding what you are saying. That might very well be my fault. I do not intend any hostile attack against you or the SIAI. I'm just curious, not worried at all. I do not demand anything. I'd like to learn more about you people, what you believe and how you arrived at your beliefs.

There is this particular case of the forbidden topic and I am throwing everything I've got at it to see if the beliefs about it are consistent and hold water. That doesn't mean that I am against censorship or that I believe it is wrong. I believe it is right but too unlikely (...). I believe that Yudkowsky and the SIAI are probably honest (although my gut feeling is to be very skeptical) but that there are good arguments for more transparency regarding the SIAI (if you believe it is as important as it is portrayed). I believe that Yudkowsky is wrong about his risk estimation regarding the idea.

I just don't understand your criticism of my past comments and that included telling me something about how I use politics (I don't get it) and that I should accept that censorship sometimes is necessary (which I haven't argued against).

comment by JGWeissman · 2010-12-11T20:22:47.204Z · score: 4 (6 votes) · LW(p) · GW(p)

There is this particular case of the forbidden topic and I am throwing everything I got at it to see if the beliefs about it are consistent and hold water.

The problem with that is that Eliezer and those who agree with him, including me, cannot speak freely about our reasoning on the issue, because we don't want to spread the idea, so we don't want to describe it and point to details about it as we describe our reasoning. If you imagine yourself in our position, believing the idea is dangerous, you could tell that you wouldn't want to spread the idea in the process of explaining its danger either.

Under more normal circumstances, where the ideas we disagree about are not thought by anyone to be dangerous, we can have effective discussion by laying out our true reasons for our beliefs, and considering counter arguments that refer to the details of our arguments. Being cut off from our normal effective methods of discussion is stressful, at least for me.

I have been trying to persuade people who don't know the details of the idea or don't agree that it is dangerous that we do in fact have good reasons for believing it to be dangerous, or at least that this is likely enough that they should let it go. This is a slow process, as I think of ways to express my thoughts without revealing details of the dangerous idea, or explaining them to people who know but don't understand those details. And this ends up involving talking to people who, because they don't think the idea is dangerous and don't take it seriously, express themselves faster and less carefully, and who have conflicting goals like learning or spreading the idea, or opposing censorship in general, or having judged for themselves the merits of censorship (from others just like them) in this case. This is also stressful.

I engage in this stressful topic, because I think it is important, both that people do not get hurt from learning about this idea, and that SIAI/Eliezer do not get dragged through mud for doing the right thing.

Sorry, but I am not here to help you get the full understanding you need to judge whether the beliefs are consistent and hold water. As I have been saying, this is not a normal discussion. And seriously, you would be better off dropping it and finding something else to worry about. And if you think it is important, you can remember to track whether SIAI/Eliezer/supporters like me engage in a pattern of making excuses to ban certain topics to protect some hidden agenda. But then please remember all the critical discussions that don't get banned.

comment by Vladimir_Nesov · 2010-12-12T00:01:05.162Z · score: 3 (3 votes) · LW(p) · GW(p)

I have been trying to persuade people who don't know the details of the idea or don't agree that it is dangerous that we do in fact have good reasons for believing it to be dangerous, or at least that this is likely enough that they should let it go. This is a slow process, as I think of ways to express my thoughts without revealing details of the dangerous idea, or explaining them to people who know but don't understand those details.

Note that this shouldn't be possible other than through arguments from authority.

(I've just now formed a better intuitive picture of the reasons for the danger of the idea, and saw that some of the comments previously made were unnecessarily revealing: the additional detail didn't actually serve the purpose of convincing the people I communicated with, who lacked some of the prerequisites for being able to use that detail to understand the argument for danger, but who would potentially gain (better) understanding of the idea. It does still sound silly to me, but maybe the lack of inferential stability of this conclusion should actually feel this way - I expect that the idea will stop being dangerous in the following decades due to better understanding of decision theory.)

comment by timtyler · 2010-12-11T20:16:32.711Z · score: 4 (4 votes) · LW(p) · GW(p)

There is this particular case of the forbidden topic and I am throwing everything I got at it to see if the beliefs about it are consistent and hold water.

You are just going to piss off the management.

IMO, it isn't that interesting.

Yudkowsky apparently agrees that squashing it was handled badly.

Anyway, now Roko is out of self-imposed exile, I figure it is about time to let it drop.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-11T05:18:57.479Z · score: -4 (20 votes) · LW(p) · GW(p)

Does this theory of yours require that Eliezer Yudkowsky plus several other old-time Less Wrongians are holding the Idiot Ball and being really stupid about something that you can just see as obvious?

Now might be a good time to notice that you are confused.

comment by Jack · 2010-12-11T09:15:02.391Z · score: 23 (23 votes) · LW(p) · GW(p)

Something to keep in mind when you reply to comments here is that you are the default leader of this community and its highest status member. This means comments that would be reasonably glib or slightly snarky from other posters can come off as threatening and condescending when made by you. They're not really threatening but they can instill in their targets strong fight-or-flight responses. Perhaps this is because in the ancestral environment status challenges from group leaders were far more threatening to our ancestor's livelihood than challenges from other group members. When you're kicking out trolls it's a sight to see, but when you're rhetorically challenging honest interlocutors it's probably counter-productive. I had to step away from the computer because I could tell that even if I was wrong the feelings this comment provoked weren't going to let me admit it (and you weren't even actually mean, just snobby).

As to your question, I don't think my understanding of the idea requires anyone to be an idiot. In fact, from what you've said I doubt we're that far apart on the matter of how threatening the idea is. There may be implications I haven't thought through that you have, and there may be general responses to implications I've thought of that you haven't. I often have trouble telling how much intelligence I needed to get somewhere, but I think I've applied a fair amount in this case. Where I think we probably diverge significantly is in our estimation of the cost of the censorship, which I think is more than high enough to outweigh the risk of making Roko's idea public. It is at least plausible that you are underestimating this cost due to biases resulting from your social position in this group and your organizational affiliation.

I'll note that, as wedrifid suggested, your position also seems to assume that quite a few Less Wrongians are being really stupid and can't see the obvious. Perhaps those who have expressed disagreement with your decision aren't quite as old-time as those who have agreed with it. And perhaps this is because we have not internalized important concepts or accessed important evidence required to see the danger in Roko's idea. But it is also noteworthy that the people who have expressed disagreement have mostly been outside the Yudkowsky/SIAI cluster relative to those who have agreed with you. This suggests that they might be less susceptible to the biases that may be affecting your estimation of the cost of the censorship.

I am a bit confused as I'm not totally sure the explanations I've thought of or seen posted for your actions sufficiently explain them, but that's just the kind of uncertainty one always expects in disagreements. Are you not confused? If I didn't think there was a downside to the censorship I would let it go. But I think the downside is huge; in particular I think the censorship makes it much harder to get Friendliness taken seriously as a scholarly field by people beyond the SIAI circle. I'm not sure you're humble enough to care about that (that isn't meant as a character attack btw). It makes the field look like a joke and makes its leading scholar look ridiculous. I'm not sure you have the political talents to recognize that. It also slightly increases the chances of someone not recognizing this failure mode (the one in Roko's post) when it counts. I think you might be so sure (or so focused on the possibility that) you're going to be the one flipping the switch in that situation that you aren't worried enough about that.

comment by wedrifid · 2010-12-11T05:48:43.159Z · score: 17 (19 votes) · LW(p) · GW(p)

Repeating "But I say so!" with increasing emphasis until it works. Been taking debating lessons from Robin?

comment by multifoliaterose · 2010-12-11T06:38:39.688Z · score: 6 (8 votes) · LW(p) · GW(p)

It seems to me that the natural effect of a group leader persistently arguing from his own authority is Evaporative Cooling of Group Beliefs. This is of course conducive to confirmation bias and corresponding epistemological skewing for the leader; things which seem undesirable for somebody in Eliezer's position. I really wish that Eliezer was receptive to taking this consideration seriously.

comment by wedrifid · 2010-12-11T06:44:00.162Z · score: 4 (6 votes) · LW(p) · GW(p)

It seems to me that the natural effect of a group leader persistently arguing from his own authority is Evaporative Cooling of Group Beliefs. This is of course conducive to confirmation bias and corresponding epistemological skewing for the leader; things which seem undesirable for somebody in Eliezer's position. I really wish that Eliezer was receptive to taking this consideration seriously.

The thing is he usually does. That is one thing that has in the past set Eliezer apart from Robin and impressed me about Eliezer. Now it is almost as though he has embraced the evaporative cooling concept as an opportunity instead of a risk and gone and bought himself a blowtorch to force the issue!

comment by JGWeissman · 2010-12-11T07:29:40.696Z · score: 2 (10 votes) · LW(p) · GW(p)

Maybe, given the credibility he has accumulated on all these other topics, you should be willing to trust him on the one issue on which he is asserting this authority and on which it is clear that if he is right, it would be bad to discuss his reasoning.

comment by wedrifid · 2010-12-11T08:17:51.595Z · score: 10 (10 votes) · LW(p) · GW(p)

Maybe, given the credibility he has accumulated on all these other topics, you should be willing to trust him on the one issue on which he is asserting this authority and on which it is clear that if he is right, it would be bad to discuss his reasoning.

The well known (and empirically verified) weakness in experts of the human variety is that they tend to be systematically overconfident when it comes to judgements that fall outside their area of exceptional performance - particularly when the topic is one just outside the fringes.

When it comes to blogging about theoretical issues of rationality Eliezer is undeniably brilliant. Yet his credibility specifically when it comes to responding to risks is rather less outstanding. In my observation he reacts emotionally and starts making rookie mistakes of rational thought and action, to the point where I've very nearly responded 'Go read the sequences!' before remembering that he was the flipping author and so should already know better.

Also important is the fact that elements of the decision are about people, not game theory. Eliezer hopefully doesn't claim to be an expert when it comes to predicting or eliciting optimal reactions in others.

comment by JGWeissman · 2010-12-11T17:45:00.073Z · score: -2 (6 votes) · LW(p) · GW(p)

Yet his credibility specifically when it comes to responding to risks is rather less outstanding.

We were talking about his credibility in judging whether this idea is a risk, and that is within his area of expertise.

comment by wedrifid · 2010-12-11T18:19:28.709Z · score: 5 (5 votes) · LW(p) · GW(p)

Was it not clear that I do not assign particular credence to Eliezer when it comes to judging risks? I thought I expressed that with considerable emphasis.

I'm aware that you disagree with my conclusions - and perhaps even my premises - but I can assure you that I'm speaking directly to the topic.

comment by XiXiDu · 2010-12-11T11:28:41.224Z · score: 5 (9 votes) · LW(p) · GW(p)

Maybe, given the credibility he has accumulated on all these other topics, you should be willing to trust him on the one issue on which he is asserting this authority and on which it is clear that if he is right, it would be bad to discuss his reasoning.

I do not consider this strong evidence as there are many highly intelligent and productive people who hold crazy beliefs:

  • Francisco J. Ayala, who “has been called the ‘Renaissance Man of Evolutionary Biology’”, is a geneticist ordained as a Dominican priest. His “discoveries have opened up new approaches to the prevention and treatment of diseases that affect hundreds of millions of individuals worldwide…”
  • Francis Collins (geneticist), noted for his landmark discoveries of disease genes and his leadership of the Human Genome Project (HGP), and described by the Endocrine Society as “one of the most accomplished scientists of our time”, is an evangelical Christian.
  • Peter Duesberg (a professor of molecular and cell biology at the University of California, Berkeley) claimed that AIDS is not caused by HIV, which made him so unpopular that his colleagues and others have — until recently — been ignoring his potentially breakthrough work on the causes of cancer.
  • Georges Lemaître (a Belgian Roman Catholic priest) proposed what became known as the Big Bang theory of the origin of the Universe.
  • Kurt Gödel (logician, mathematician and philosopher) who suffered from paranoia and believed in ghosts. “Gödel, by contrast, had a tendency toward paranoia. He believed in ghosts; he had a morbid dread of being poisoned by refrigerator gases; he refused to go out when certain distinguished mathematicians were in town, apparently out of concern that they might try to kill him.”
  • Mark Chu-Carroll (PhD Computer Scientist, works for Google as a Software Engineer) “If you’re religious like me, you might believe that there is some deity that created the Universe.” He is running one of my favorite blogs, Good Math, Bad Math, and writes a lot on debunking creationism and other crackpottery.
  • Nassim Taleb (author of the 2007 book The Black Swan, expanded in 2010) believes that we can’t track reality with science and equations, that religion is not about belief, and that we were wiser before the Enlightenment, because we knew how to take knowledge from incomplete information; now we live in a world of epistemic arrogance. Religious people, he says, have a way of dealing with ignorance: saying “God knows”.
  • Kevin Kelly (editor) is a devout Christian who writes pro-science and pro-technology essays.
  • William D. Phillips (Nobel Prize in Physics 1997) is a Methodist.

I could continue this list with people like Ted Kaczynski or Roger Penrose. I just wanted show that intelligence and rational conduct do not rule out the possibility of being wrong about some belief.

comment by Vladimir_Nesov · 2010-12-11T12:00:04.871Z · score: 0 (0 votes) · LW(p) · GW(p)

Taleb quote doesn't qualify. (I won't comment on others.)

comment by XiXiDu · 2010-12-11T13:05:52.373Z · score: 2 (4 votes) · LW(p) · GW(p)

Taleb quote doesn't qualify. (I won't comment on others.)

I should have made it clearer that it is not my intention to indicate that I believe that those people, or crazy ideas in general, are wrong. But there are a lot of smart people out there who'll advocate opposing ideas. Using their reputation for being highly intelligent to follow through on their ideas is in my opinion not a very good idea in itself. I could just believe Freeman Dyson that existing simulation models of climate contain too much error to reliably predict future trends. I could believe Peter Duesberg that HIV does not cause AIDS; after all, he is a brilliant molecular biologist. But I just do not think that any amount of reputation is enough evidence to believe extraordinary claims uttered by such people. And in the case of Yudkowsky, there doesn't even exist much reputation, and no great achievements at all that would justify some strong belief in his infallibility. What there exists in Yudkowsky's case seems to be strong emotional commitment. I just can't tell if he is honest. If he really believes that he's working on a policy for some future superhuman intelligence that will rule the universe, then I'm going to be very careful. Not because it is wrong, but because such beliefs imply huge payoffs. Not that I believe he is the disguised Dr. Evil, but can we be sure enough to just trust him with it? Censorship of certain ideas does bear more evidence against him than it does in favor of his honesty.

comment by JGWeissman · 2010-12-11T17:31:01.280Z · score: -2 (10 votes) · LW(p) · GW(p)

How extensively have you searched for experts who made correct predictions outside their fields of expertise? What would you expect to see if you just searched for experts making predictions outside their field of expertise and then determined if that prediction were correct? What if you limited your search to experts who had expressed the attitude Eliezer expressed in Outside the Laboratory?

I just wanted show that intelligence and rational conduct do not rule out the possibility of being wrong about some belief.

"Rule out"? Seriously? What kind of evidence is it?

comment by wedrifid · 2010-12-11T18:39:00.027Z · score: 8 (12 votes) · LW(p) · GW(p)

"Rule out"? Seriously? What kind of evidence is it?

You extracted the "rule out" phrase from the sentence:

I just wanted show that intelligence and rational conduct do not rule out the possibility of being wrong about some belief.

From within the common phrase 'do not rule out the possibility' no less!

You then make a reference to '0 and 1s not probabilities' with exaggerated incredulity.

To put it mildly this struck me as logically rude and in general poor form. XiXiDu deserves more courtesy.

comment by JGWeissman · 2010-12-11T19:19:07.690Z · score: 1 (7 votes) · LW(p) · GW(p)

You extracted the "rule out" phrase from the sentence:

I just wanted show that intelligence and rational conduct do not rule out the possibility of being wrong about some belief.

From within the common phrase 'do not rule out the possibility' no less!

None of this affects my point that ruling out the possibility is the wrong, (in fact impossible), standard.

You then make a reference to '0 and 1s not probabilities' with exaggerated incredulity.

Not exaggerated. XiXiDu's post did seem to be saying: here are these examples of experts being wrong so it is possible that an expert is wrong in this case, without saying anything useful about how probable it is for this particular expert to be wrong on this particular issue.

To put it mildly this struck me as logically rude and in general poor form.

You have made an argument accusing me of logical rudeness that, quite frankly, does not stand up to scrutiny.

comment by Vladimir_Nesov · 2010-12-11T22:29:17.844Z · score: 0 (0 votes) · LW(p) · GW(p)

-

comment by XiXiDu · 2010-12-11T18:31:09.802Z · score: 3 (5 votes) · LW(p) · GW(p)

What kind of evidence is it?

Better evidence than I've ever seen in support of the censored idea. I have these well-founded principles, free speech and transparency, and weigh them against the evidence I have in favor of censoring the idea. That evidence is merely 1.) Yudkowsky's past achievements, 2.) his output and 3.) intelligence. That intelligent people have been and are wrong about certain ideas while still being productive and right about many other ideas is evidence to weaken #3. That people lie and deceive to get what they want is evidence against #1 and #2 and in favor of transparency and free speech, which are both already more likely to have a positive impact than the forbidden topic is to have a negative impact.

And what are you trying to tell me with this link? I haven't seen anyone stating numeric probability estimates regarding the forbidden topic. And I won't state one either; I'll just say that it is subjectively improbable enough to ignore, because there are possibly too many very-low-probability events to take into account (for every being that will harm me if I don't do X there is another being that will harm me if I do X, and they cancel each other out). But if you'd like to pull some number out of thin air, go ahead. I won't, because I don't have enough data to even calculate the probability of AI going FOOM versus a slow development.

comment by JGWeissman · 2010-12-11T19:05:00.989Z · score: 0 (4 votes) · LW(p) · GW(p)

You have failed to address my criticisms of your points: that you are seeking out only examples that support your desired conclusion, and that you are ignoring details that would allow you to construct a narrower, more relevant reference class for your outside view argument.

And what are you trying to tell me with this link?

I was telling you the "ruling out the possibility" is the wrong, (in fact impossible), standard.

comment by XiXiDu · 2010-12-11T19:35:34.192Z · score: 1 (3 votes) · LW(p) · GW(p)

You have failed to address my criticisms of your points, that you are seeking out only examples that support your desired conclusion.

Only now do I understand your criticism. I do not seek out examples to support my conclusion but to weaken your argument that one should trust Yudkowsky because of his previous output. I'm aware that Yudkowsky can very well be right about the idea but do in fact believe that the risk is worth taking. Have I done extensive research on how often people in similar situations have been wrong? Nope. No excuses here, but do you think there are comparable cases of predictions that proved to be reliable? And how much research have you done in this case and about the idea in general?

I was telling you the "ruling out the possibility" is the wrong, (in fact impossible), standard.

I don't, I actually stated a few times that I do not think that the idea is wrong.

comment by Vladimir_Nesov · 2010-12-11T22:37:04.002Z · score: 0 (2 votes) · LW(p) · GW(p)

I do not seek out examples to support my conclusion but to weaken your argument that one should trust Yudkowsky because of his previous output.

You shouldn't seek to "weaken an argument", you should seek what is the actual truth, and then maybe ways of communicating your understanding. (I believe that's what you intended anyway, but think it's better not to say it this way, as a protective measure against motivated cognition.)

comment by jsalvatier · 2010-12-11T23:09:40.135Z · score: 0 (0 votes) · LW(p) · GW(p)

I like your parenthetical, I often want to say something like this, and you've put it well.

comment by JGWeissman · 2010-12-11T19:47:26.829Z · score: 0 (4 votes) · LW(p) · GW(p)

I do not seek out examples to support my conclusion but to weaken your argument that one should trust Yudkowsky because of his previous output.

Seeking out just examples that weaken my argument, when I never predicted that no such examples would exist, is the problem I am talking about.

What made you think that supporting your conclusion and weakening my argument are different things?

comment by XiXiDu · 2010-12-11T20:09:06.666Z · score: 0 (4 votes) · LW(p) · GW(p)

Seeking out just examples that weaken my argument, when I never predicted that no such examples would exist, is the problem I am talking about.

My reason to weaken your argument is not that I want to be right but that I want feedback about my doubts. I said that 1.) people can be wrong, regardless of their previous reputation, 2.) that people can lie about their objectives and deceive by how they act in public (especially when the stakes are high), 3.) that Yudkowsky's previous output and achievements are not remarkable enough to trust him about some extraordinary claim. You haven't explained why you tell people to believe Yudkowsky in this case, regardless of my objections.

What made you think that supporting your conclusion and weakening my argument are different things?

I'm sorry if I made it appear as if I hold some particular belief. My epistemic state simply doesn't allow me to arrive at your conclusion. To highlight this I argued in favor of what it would mean to not accept your argument, namely to stand by previously well-established concepts like free speech and transparency. Yes, you could say that there is no difference here, except that I do not care about who is right but about what is the right thing to do.

comment by Vladimir_Nesov · 2010-12-11T22:45:30.233Z · score: 4 (6 votes) · LW(p) · GW(p)

people can be wrong, regardless of their previous reputation

Still, it's incorrect to argue from existence of examples. You have to argue from likelihood. You'd expect more correctness from a person with reputation for being right than from a person with reputation for being wrong.

People can also go crazy, regardless of their previous reputation, but it's improbable, and not an adequate argument for their craziness.

And you need to know what fact you are trying to convince people about, not just search for soldier-arguments pointing in the preferred direction. If you believe that the fact is that a person is crazy, you too have to recognize that "people can be crazy" is inadequate argument for this fact you wish to communicate, and that you shouldn't name this argument in good faith.

(Craziness is introduced as a less-likely condition than wrongness to stress the structure of my argument, not to suggest that wrongness is as unlikely.)

comment by timtyler · 2011-01-19T13:03:18.570Z · score: 1 (3 votes) · LW(p) · GW(p)

I said that 1.) people can be wrong, regardless of their previous reputation, 2.) that people can lie about their objectives and deceive by how they act in public (especially when the stakes are high), 3.) that Yudkowsky's previous output and achievements are not remarkable enough to trust him about some extraordinary claim.

I notice that Yudkowsky wasn't always self-professed human-friendly. Consider this:

I must warn my reader that my first allegiance is to the Singularity, not humanity. I don't know what the Singularity will do with us. I don't know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I'm with Them. You have been warned.

comment by wedrifid · 2011-01-19T13:11:33.461Z · score: 2 (2 votes) · LW(p) · GW(p)

Wow. That is scary. Do you have an estimated date on that bizarre declaration? Pre 2004 I assume?

comment by shokwave · 2011-01-19T14:02:25.995Z · score: 0 (2 votes) · LW(p) · GW(p)

He's changed his mind since. That makes it far, far less scary.

(Parenthetical about how changing your mind, admitting you were wrong, oops, etc, is a good thing).

comment by Perplexed · 2011-01-19T17:48:42.469Z · score: 2 (6 votes) · LW(p) · GW(p)

He's changed his mind since. That makes it far, far less scary.

He has changed his mind about one technical point in meta-ethics. He now realizes that super-human intelligence does not automatically lead to super-human morality. He is now (IMHO) less wrong. But he retains a host of other (mis)conceptions about meta-ethics which make his intentions abhorrent to people with different (mis)conceptions. And he retains the arrogance that would make him dangerous to those he disagrees with, if he were powerful.

"... far, far less scary"? You are engaging in wishful thinking no less foolish than that for which Eliezer has now repented.

comment by JoshuaZ · 2011-01-21T14:37:08.210Z · score: 0 (0 votes) · LW(p) · GW(p)

He is now (IMHO) less wrong. But he retains a host of other (mis)conceptions about meta-ethics which make his intentions abhorrent to people with different (mis)conceptions.

I'm not at all sure that I agree with Eliezer about most meta-ethics, and definitely disagree on some fairly important issues. But that doesn't make his views necessarily abhorrent. If Eliezer triggers a positive Singularity (positive in the sense that it reflects what he wants out of a Singularity, complete with CEV), I suspect that that will be a universe which I won't mind living in. People can disagree about very basic issues and still not hate each other's intentions. They can even disagree about long-term goals and not hate it if the other person's goals are implemented.

comment by Perplexed · 2011-01-21T15:28:22.684Z · score: 1 (3 votes) · LW(p) · GW(p)

If Eliezer triggers a positive Singularity (positive in the sense that it reflects what he wants out of a Singularity, complete with CEV), I suspect that that will be a universe which I won't mind living in.

Have you ever had one of those arguments with your SO in which:

  • It is conceded that your intentions were good.
  • It is conceded that the results seem good.
  • The SO is still pissed because of the lack of consultation and/or presence of extrapolation?

I usually escape those confrontations by promising to consult and/or not extrapolate the next time. In your scenario, Eliezer won't have that option.

When people point out that Eliezer's math is broken because his undiscounted future utilities lead to unbounded utility, his response is something like "Find better math - discounted utility is morally wrong".

When Eliezer suggests that there is no path to a positive singularity which allows for prior consultation with the bulk of mankind, my response is something like "Look harder. Find a path that allows people to feel that they have given their informed consent to both the project and the timetable - anything else is morally wrong."

ETA: In fact, I would like to see it as a constraint on the meaning of the word "Friendly" that it must not only provide friendly consequences, but also, it must be brought into existence in a friendly way. I suspect that this is one of those problems in which the added constraint actually makes the solution easier to find.

comment by jimrandomh · 2011-01-21T15:57:51.552Z · score: 2 (2 votes) · LW(p) · GW(p)

Could you link to where Eliezer says that future utilities should not be discounted? I find that surprising, since uncertainty causes an effect roughly equivalent to discounting.

I would also like to point out that achieving public consensus about whether to launch an AI would take months or years, and that during that time, not only is there a high risk of unfriendly AIs, it is also guaranteed that millions of people will die. Making people feel like they were involved in the decision is emphatically not worth the cost.

comment by Perplexed · 2011-01-21T16:32:06.350Z · score: 1 (1 votes) · LW(p) · GW(p)

Could you link to where Eliezer says that future utilities should not be discounted?

He makes the case in this posting. It is a pretty good posting, by the way, in which he also points out some kinds of discounting which he believes are justified. This posting does not purport to be a knock-down argument against discounting future utility - it merely states Eliezer's reasons for remaining unconvinced that you should discount (and hence for remaining in disagreement with most economic thinkers).

ETA: One economic thinker who disagrees with Eliezer is Robin Hanson. His response to Eliezer's posting is also well worth reading.

Examples of Eliezer conducting utilitarian reasoning about the future without discounting are legion.

I find that surprising, since uncertainty causes an effect roughly equivalent to discounting.

Tim Tyler makes the same assertion about the effects of uncertainty. He backs the assertion with metaphor, but I have yet to see a worked example of the math. Can you provide one?

Of course, one obvious related phenomenon - it is even mentioned with respect in Eliezer's posting - is that the value of a promise must be discounted with time due to the increasing risk of non-performance: my promise to scratch your back tomorrow is more valuable to you than my promise to scratch next week - simply because there is a risk that you or I will die in the interim, rendering the promise worthless. But I don't see how other forms of increased uncertainty about the future should have the same (exponential decay) response curve.
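To make the one mechanism I do accept concrete, here is a minimal sketch (my own toy model with made-up numbers and invented function names, not something anyone in this thread has endorsed). A constant risk of non-performance reproduces exponential discounting exactly, while uncertainty about the risk itself yields hyperbolic discounting instead:

```python
import math

def survival_value(u, hazard, t):
    # Expected value of a promise worth u, delivered at time t,
    # if the promise fails at a constant hazard rate per unit time:
    # u * P(promise still good) = u * exp(-hazard * t).
    return u * math.exp(-hazard * t)

def uncertain_hazard_value(u, k, t):
    # Same promise, but the hazard rate is unknown. Under an
    # exponential prior with mean 1/k, the expected survival
    # probability integrates to k / (k + t): hyperbolic, not
    # exponential, discounting.
    return u * k / (k + t)

# Constant hazard: the per-period discount factor is identical at
# every horizon, which is classic exponential discounting.
r1 = survival_value(1.0, 0.1, 1) / survival_value(1.0, 0.1, 0)
r2 = survival_value(1.0, 0.1, 5) / survival_value(1.0, 0.1, 4)
# r1 == r2 == exp(-0.1)

# Uncertain hazard: the per-period discount factor drifts toward 1
# as t grows (a promise that has survived this long looks safer),
# so the response curve is not an exponential decay.
h1 = uncertain_hazard_value(1.0, 10.0, 1) / uncertain_hazard_value(1.0, 10.0, 0)
h2 = uncertain_hazard_value(1.0, 10.0, 5) / uncertain_hazard_value(1.0, 10.0, 4)
# h1 < h2
```

So "uncertainty causes an effect roughly equivalent to discounting" holds only in the constant-hazard case; other shapes of uncertainty give other, non-exponential response curves, which is exactly my complaint.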

achieving public consensus about whether to launch an AI would take months or years,

So, start now.