Should I believe what the SIAI claims?

post by XiXiDu · 2010-08-12T14:33:49.617Z · LW · GW · Legacy · 633 comments


Major update here.

The state of affairs regarding the SIAI, its underlying rationale, and its rules of operation is insufficiently clear to me.

Most of the arguments involve a few propositions and the use of probability and utility calculations to legitimize action. So much here is uncertain that I am not able to judge any nested probability estimates. Even if you tell me your estimates, where is the data on which you base them?

There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call that a castle in the air.

I know that what I'm saying may simply be due to a lack of knowledge and education; that is why I am inquiring about it. How many of you who currently support the SIAI are able to analyse the reasoning that led you to support the SIAI in the first place, or at least substantiate your estimates with evidence other than a coherent internal logic?

I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge how much credence to give them overall. Are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

I'm concerned that the SIAI and its supporters, however consistently, are updating on fictional evidence. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models, or are your propositions based on fact?

An example here is the use of the Many-worlds interpretation. Itself a logical implication, can it be used to make further inferences and estimates without additional evidence? MWI might be the only consistent non-magic interpretation of quantum mechanics. The problem is that such conclusions are, I believe, widely considered insufficient to base further speculations and estimates on. Isn't that similar to what you are doing when speculating about the possibility of superhuman AI and its consequences? What I'm trying to say is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. This is not to say that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.

The gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies. It is fiction. Imagination allows for endless possibilities, while scientific evidence provides hints of what might be possible and what impossible. Science gives you the ability to assess your data. Any hint that empirical criticism provides gives you new information on which you can build. Not because it bears truth value, but because it gives you an idea of what might be possible, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.

And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic, i.e. imagination or fiction, or something sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed by the SIAI.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who is aware of something that might shatter the universe? Why is it that people like Vernor Vinge, Robin Hanson or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least giving all they have to the SIAI? Why aren't Eric Drexler, Gary Drescher or AI researchers like Marvin Minsky worried to the extent that they signal their support?

I'm talking to quite a few educated people outside this community. They do not doubt all those claims for no particular reason. Rather, they tell me that there are too many open questions to focus on the possibilities depicted by the SIAI while neglecting other near-term risks that might wipe us out as well.

I believe that many people out there know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have named other people. That's beside the point though; it's not just Hanson or Vinge but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate in the math as Eliezer Yudkowsky, or do they somehow teach their own methods of reasoning and decision making but not use them?

What do you expect me to do, just believe Eliezer Yudkowsky? Like I believed so many things in the past that made sense but turned out to be wrong? Maybe after a few years of study I'll know more.

...

2011-01-06: As this post has received over 500 comments I am reluctant to delete it. But I feel that it is outdated and that I could do much better today. The post has been slightly improved to account for some shortcomings, but it has not been completely rewritten, nor have its conclusions been changed. Please account for this when reading comments that were written before this update.

2012-08-04: A list of some of my critical posts can be found here: SIAI/lesswrong Critiques: Index

633 comments

Comments sorted by top scores.

comment by Rain · 2010-08-12T20:37:19.394Z · LW(p) · GW(p)

(Disclaimer: My statements about SIAI are based upon my own views, and should in no way be interpreted as representing their stated or actual viewpoints on the subject matter. I am talking about my personal thoughts, feelings, and justifications, no one else's. For official information, please check the SIAI website.)

Although this may not answer your questions, here are my reasons for supporting SIAI:

  • I want what they're selling. I want to understand morality, intelligence, and consciousness. I want a true moral agent outside of my own thoughts, something that can help solve that awful, plaguing question, "Why?" I want something smarter than me that can understand and explain the universe, providing access to all the niches I might want to explore. I want something that will save me from death and pain and find a better way to live.

  • It's the most logical next step. In the evolution of mankind, intelligence is a driving force, so "more intelligent" seems like an incredibly good idea, a force multiplier of the highest order. No other solution captures my view of a proper future like friendly AI, not even "...in space!"

  • No one else cares about the big picture. (Nick Bostrom and the FHI excepted; if they came out against SIAI, I might change my view.) Every other organization seems to focus on the 'generic now', leaving unintended consequences to crush their efforts in the long run, or avoiding the true horrors of the world (pain, age, poverty) due to not even realizing they're solvable. The ability to predict the future, through knowledge, understanding, and computational power, is the key attribute for making that future a truly good place. The utility calculations are staggeringly in support of the longest view, such as that provided by SIAI.

  • It's the simplest of the 'good outcome' possibilities. Everything else seems to depend on magical hand-waving, or an overly simplistic view of how the world works or what a single advance would mean, rather than the way it interacts with all the diverse improvements that happen alongside it and how real humans would react to them. Friendly AI provides 'intelligence-waving' that seems far more likely to work in a coherent fashion.

  • I don't see anything else to give me hope. What else solves all potential problems at the same time, rather than leaving every advancement to be destroyed by that one failure mode you didn't think of? Of course! Something that can think of those failure modes for you, and avoid them before you even knew they existed.

  • It's cheap and easy to do so on a meaningful scale. It's very easy to make up a large percentage of their budget; I personally provided more than 3 percent of their annual operating costs for this year, and I'm only upper middle class. They also have an extremely low barrier to entry (any amount of US dollars and a stamp, or a credit card, or PayPal).

  • They're thinking about the same things I am. They're providing a tribe like LessWrong, and they're pushing, trying to expand human knowledge in the ways I think are most important, such as existential risk, humanity's future, rationality, effective and realistic reversal of pain and suffering, etc.

  • I don't think we have much time. The best predictions aren't very good, but human power has increased to the point where there's a true threat we'll destroy ourselves within the next 100 years through means nuclear, biological, nano, AI, wireheading, or nerf the world. Sitting on money and hoping for a better deal, or donating to institutions now that will compound into advancements generations in the future seems like too little, too late.

I still put more money into savings accounts than I give to SIAI. I'm investing in myself and my own knowledge more than the purported future of humanity as they envision. I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

Replies from: multifoliaterose, XiXiDu
comment by multifoliaterose · 2010-08-12T23:42:01.319Z · LW(p) · GW(p)

Good, informative comment.

comment by XiXiDu · 2010-08-13T08:25:58.834Z · LW(p) · GW(p)

I want what they're selling.

Yeah, that's why I'm donating as well.

It's the most logical next step.

Sure, but why the SIAI?

No one else cares about the big picture.

I accept this. Although I'm not sure if the big picture should be a top priority right now. And as I wrote, I'm unable to survey the utility calculations at this point.

It's the simplest of the 'good outcome' possibilities.

So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.

I don't see anything else to give me hope.

I think you overestimate the friendliness of friendly AI. Too bad Roko's posts have been censored.

It's cheap and easy to do so on a meaningful scale.

I want to believe.

They're thinking about the same things I am.

Beware of those who agree with you?

I don't think we have much time.

Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don't have enough time regarding other kinds of threats.

I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

I can accept that. But I'm unable to follow the process of elimination yet.

Replies from: Rain
comment by Rain · 2010-08-13T12:13:11.894Z · LW(p) · GW(p)

It's the most logical next step.

Sure, but why the SIAI?

Who else is working directly on creating smarter-than-human intelligence with non-commercial goals? And if there are any, are they self-reflective enough to recognize its potential failure modes?

No one else cares about the big picture.

I accept this. Although I'm not sure if the big picture should be a top priority right now. And as I wrote, I'm unable to survey the utility calculations at this point.

I used something I developed which I call Point-In-Time Utility to guide my thinking on this matter. It basically boils down to, 'the longest view wins', and I don't see anyone else talking about potentially real pangalactic empires.

It's the simplest of the 'good outcome' possibilities.

So you replace a simple view that is evidence-based with one that might or might not be based on really shaky ideas such as an intelligence explosion.

I don't think it has to be an explosion at all, just smarter-than-human. I'm willing to take things one step at a time, if necessary. Though it seems unlikely we could build a smarter-than-human intelligence without understanding what intelligence is, and thus knowing where to tweak, if even retroactively. That said, I consider intelligence tweaking itself to be a shaky idea, though I view alternatives as failure modes.

I don't see anything else to give me hope.

I think you overestimate the friendliness of friendly AI. Too bad Roko's posts have been censored.

I think you overestimate my estimation of the friendliness of friendly AI. Note that at the end of my post I said it is very likely SIAI will fail. My hope total is fairly small. Roko deleted his own posts, and I was able to read the article Eliezer deleted since it was still in my RSS feed. It didn't change my thinking on the matter; I'd heard arguments like it before.

They're thinking about the same things I am.

Beware of those who agree with you?

Hi. I'm human. At least, last I checked. I didn't say all my reasons were purely rational. This one is dangerous (reinforcement), but I do a lot of reading of opposing opinions as well, and there's still a lot I disagree with regarding SIAI's positions.

I don't think we have much time.

Maybe we do have enough time regarding AI and the kind of threats depicted on this site. Maybe we don't have enough time regarding other kinds of threats.

The latter is what I'm worried about. I see all of these threats as being developed simultaneously, in a race to see which one passes the threshold into reality first. I'm hoping that Friendly AI beats them.

I think it's very likely SIAI will fail in their mission in every way. They're just what's left after a long process of elimination. Give me a better path and I'll switch my donations. But I don't see any other group that comes close.

I can accept that. But I'm unable to follow the process of elimination yet.

I haven't seen you name any other organization you're donating to or who might compete with SIAI. Aside from the Future of Humanity Institute or the Lifeboat Foundation, both of which seem more like theoretical study groups than action-takers, people just don't seem to be working on these problems. Even the Methuselah Foundation is working on a very narrow portion which, although very useful and awesome if it succeeds, doesn't guard against the threats we're facing.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T13:02:22.636Z · LW(p) · GW(p)

Who else is working directly on creating smarter-than-human intelligence with non-commercial goals?

That there are no others does not mean we shouldn't be keen to create them, to establish competition. Or do it at all at this point.

...'the longest view wins', and I don't see anyone else talking about potentially real pangalactic empires.

I'm not sure about this.

I don't think it has to be an explosion at all, just smarter-than-human.

I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.

I think you overestimate my estimation of the friendliness of friendly AI.

You are right, never mind what I said.

I see all of these threats as being developed simultaneously...

Yeah and how is their combined probability less worrying than that of AI? That doesn't speak against the effectiveness of donating all to the SIAI of course. Creating your own God to fix the problems the imagined one can't is indeed a promising and appealing idea, given that it is feasible.

I haven't seen you name any other organization you're donating to or who might compete with SIAI.

I'm mainly concerned about my own well-being. If I were threatened by something near-term within Germany, that would be my top priority. So the matter is more complicated for me than for the people who are merely concerned about the well-being of all beings.

As I said before, it is not my intention to discredit the SIAI but to steer some critical discussion for us non-expert, uneducated but concerned people.

Replies from: Rain
comment by Rain · 2010-08-13T13:20:22.954Z · LW(p) · GW(p)

That there are no others does not mean we shouldn't be keen to create them, to establish competition.

Absolutely agreed. Though I'm barely motivated enough to click on a PayPal link, so there isn't much hope of my contributing to that effort. And I'd hope they'd be created in such a way as to expand total funding, rather than cannibalizing SIAI's efforts.

I'm not sure about this.

Certainly there are other ways to look at value / utility / whatever and how to measure it. That's why I mentioned I had a particular theory I was applying. I wouldn't expect you to come to the same conclusions, since I haven't fully outlined how it works. Sorry.

I feel there are too many assumptions in what you state to come up with estimations like a 1% probability of uFAI turning everything into paperclips.

I'm not sure what this is saying. I think UFAI is far more likely than FAI, and I also think that donating to SIAI contributes somewhat to UFAI, though I think it contributes more to FAI, such that in the race I was talking about, FAI should come out ahead. At least, that's the theory. There may be no way to save us.

Yeah and how is their combined probability less worrying than that of AI?

AI is one of the things on the list racing against FAI. I think AI is actually the most dangerous of them, and from what I've read, so does Eliezer, which is why he's working on that problem instead of, say, nanotech.

I'm mainly concerned about my own well-being.

I've mentioned before that I'm somewhat depressed, so I consider my philanthropy to be a good portion 'lack of caring about self' more than 'being concerned about the well-being of all beings'. Again, a subtractive process.

As I said before, it is [...] my intention [...] to steer some critical discussion for us non-expert, uneducated but concerned people.

Thanks! I think that's probably a good idea, though I would also appreciate more critical discussion from experts and educated people, a sort of technical minded anti-Summit, without all the useless politics of the IEET and the like.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T14:11:01.270Z · LW(p) · GW(p)

I think UFAI is far more likely than FAI...

It's more likely that the Klingon warbird can overpower the USS Enterprise.

I think AI is actually the most dangerous of them...

Why? Because EY told you? I'm not trying to make snide remarks here but how people arrived at this conclusion was what I have been inquiring about in the first place.

...though I would also appreciate more critical discussion from experts and educated people...

Me too, but I was the only one around willing to start one at this point. That's the sorry state of critical examination.

Replies from: Rain
comment by Rain · 2010-08-13T14:16:50.204Z · LW(p) · GW(p)

It's more likely that the Klingon warbird can overpower the USS Enterprise.

To pick my own metaphor, it's more likely that randomly chosen matter will form clumps of useless crap than a shiny new laptop. As defined, UFAI is likely the default state for AGI, which is one reason I put such low hope on our future. I call myself an optimistic pessimist: I think we're going to create wonderful, cunning, incredibly powerful technology, and I think we're going to misuse it to destroy ourselves.

Why [is AI the most dangerous threat]?

Because intelligent beings are the most awesome and scary things I've ever seen. The History Channel is a far better guide than Eliezer in that respect. And with all our intelligence and technology, I can't see us holding back from trying to tweak intelligence itself. I view it as inevitable.

Me too [I also would appreciate more critical discussion from experts]

I'm hoping that the Visiting Fellows program and the papers written with the money from the latest Challenge will provide peer review in other respected venues.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T15:11:37.000Z · LW(p) · GW(p)

What I was trying to show you with the Star Trek metaphor is that you are making estimates within a framework of ideas that I'm not convinced is based on firm ground.

Replies from: Rain
comment by Rain · 2010-08-13T15:19:00.138Z · LW(p) · GW(p)

I'm not a very good convincer. I'd suggest reading the original material.

Replies from: HughRistik
comment by HughRistik · 2010-08-13T18:50:25.820Z · LW(p) · GW(p)

Can we get some links up in here? I'm not putting the burden on you in particular, but I think more linkage would be helpful in this discussion.

Replies from: Rain
comment by Rain · 2010-08-14T16:14:45.995Z · LW(p) · GW(p)

This thread has Eliezer's request for specific links, which appear in replies.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T20:17:23.045Z · LW(p) · GW(p)

I'm currently preparing for the Summit so I'm not going to hunt down and find links. Those of you who claimed they wanted to see me do this should hunt down the links and reply with a list of them.

Given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain. We don't emphasize this very hard when people talk in concrete terms about donating to more than one organization, because charitable dollars are not substitutable from a limited pool; the main thing is the variance in the tiny fraction of their income people donate to charity in the first place, and so the amount of warm glow people generate for themselves is important. But when they talk about "putting all eggs in one basket" as an abstract argument, we will generally point out that this is, in fact, the diametrically wrong direction in which abstract argument should be pushing.
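
A minimal numerical sketch of that allocation rule, with probabilities and utilities invented purely for illustration (this is not an SIAI calculation):

```python
# Toy illustration only; the probabilities and utilities are made up.
# Each charity: (probability its claims are true, utility per dollar if true).
charities = {
    "A": (0.01, 1000.0),  # low probability, huge payoff if true
    "B": (0.50, 10.0),    # moderate probability, modest payoff
    "C": (0.90, 3.0),     # near-certain, small payoff
}

budget = 1000.0  # dollars available to donate

# Discount each charity's utility per dollar by the probability of its claims.
expected_per_dollar = {name: p * u for name, (p, u) in charities.items()}

# If marginal expected utility per dollar stays roughly constant at this scale
# of giving, the expected-utility-maximizing allocation is all-or-nothing.
best = max(expected_per_dollar, key=expected_per_dollar.get)
allocation = {name: (budget if name == best else 0.0) for name in charities}

print(expected_per_dollar)  # roughly {'A': 10.0, 'B': 5.0, 'C': 2.7}
print(allocation)           # the entire budget goes to 'A'
```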

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).

Read the Yudkowsky-Hanson AI Foom Debate. (Someone link to the sequence.)

  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Read Eric Drexler's Nanosystems. (Someone find an introduction by Foresight and link to it, that sort of thing is their job.) Also the term you want is not "grey goo", but never mind.

  • The likelihood of exponential growth versus a slow development over many centuries.

Exponentials are Kurzweil's thing. They aren't dangerous. See the Yudkowsky-Hanson Foom Debate.

  • That it is worth it to spend most on a future whose likelihood I cannot judge.

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility. Things you spend on charitable efforts that just make you feel good should be considered selfish. If you are entirely selfish but you can think past a hyperbolic discount rate then it's still possible you can get more hedons per dollar by donating to existential risk projects.

Your difficulties in judgment should be factored into a probability estimate. Your sense of aversion to ambiguity may interfere with warm glows, but we can demonstrate preference reversals and inconsistent behaviors that result from ambiguity aversion which doesn't cash out as a probability estimate and factor straight into expected utility.
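
For concreteness, one standard demonstration of that last point is the Ellsberg urn (a textbook example, not anything specific to SIAI): the typical pattern of choices under ambiguity cannot be reproduced by any single probability assignment fed into expected utility.

```python
# Ellsberg urn: 30 red balls plus 60 balls that are black or yellow in an
# unknown proportion. People typically prefer betting on red over black
# (bet 1), yet prefer betting on black-or-yellow over red-or-yellow (bet 2).
p_red = 30 / 90
for black_balls in range(0, 61):
    p_black = black_balls / 90
    p_yellow = 1 - p_red - p_black
    prefers_red_in_bet_1 = p_red > p_black
    prefers_black_or_yellow_in_bet_2 = (p_black + p_yellow) > (p_red + p_yellow)
    # The yellow term cancels in bet 2, so it requires p_black > p_red;
    # no probability assignment can satisfy both preferences at once.
    assert not (prefers_red_in_bet_1 and prefers_black_or_yellow_in_bet_2)
print("No single probability estimate reproduces the usual ambiguity-averse choices.")
```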

  • That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

Michael Vassar is leading. I'm writing a book. When I'm done writing the book I plan to learn math for a year. When I'm done with that I'll swap back to FAI research hopefully forever. I'm "leading" with respect to questions like "What is the form of the AI's goal system?" but not questions like "Do we hire this guy?"

My judgement of and attitude towards a situation is necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. The state of affairs regarding the SIAI and its underlying rationale and rules of operation is not sufficiently clear to me to give it top priority. Therefore I perceive it as unreasonable to put all my eggs in one basket.

Someone link to relevant introductions of ambiguity aversion as a cognitive bias and do the detailed explanation on the marginal utility thing.

What I mean to say by using that idiom is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify making the SIAI a prime priority. That is, I'm donating to the SIAI but also spending considerable resources on maximizing utility in the present. Enjoying life, so to say, is therefore a safety net in case the question of a positive payoff, which I am currently unable to judge, is eventually answered in the negative.

Can someone else do the work of showing how this sort of satisficing leads to a preference reversal if it can't be viewed as expected utility maximization?

Many of the arguments on this site involve a few propositions and the use of probability to legitimize action in case those propositions are accurate. So much here is uncertain that I'm not able to judge any nested probability estimates. I'm already unable to judge what the likelihood of something like the existential risk of exponentially evolving superhuman AI is compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimates on?

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense. Don't look at just one side and think about how much you doubt it and can't guess. Look at both of them. Also, read the FOOM debate.

And this is what I'm having trouble accepting, let alone looking through. There seems to be a highly complicated framework of estimates that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call this a castle in the air.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

You could tell me to learn about Solomonoff induction etc.; I know that what I'm saying may simply be due to a lack of education. But that's what I'm arguing and inquiring about here. And I dare to bet that many who support the SIAI cannot interpret the reasoning which led them to support the SIAI in the first place, or at least cannot substantiate their estimates with other kinds of evidence than a coherent internal logic of reciprocally supporting probability estimates.

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)

I can however follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?

It sounds like you haven't done enough reading in key places to expect to be able to judge the overall credence out of your own estimates.

There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.

You may have an unrealistic picture of what it takes to get scientists interested enough in you that they will read very long arguments and do lots of work on peer review. There's no prestige payoff for them in it, so why would they?

I'm concerned that, although consistently so, the LW community is updating on fictional evidence. This post is meant to inquire into the basic principles, the foundations of the arguments, and the basic premises that they are based upon. That is, are you creating models to treat subsequent models or are the propositions based on fact?

You have a sense of inferential distance. That's not going to go away until you (a) read through all the arguments that nail down each point, e.g. the FOOM debate, and (b) realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

Replies from: Eliezer_Yudkowsky, XiXiDu, JGWeissman
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T20:17:31.334Z · LW(p) · GW(p)

An example here is the treatment and use of MWI (a.k.a. the "many-worlds interpretation") and the conclusions, arguments and further estimates based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's it, an interpretation. A logically consistent deduction. Or should I rather call it an induction, as the inference seems to be of greater generality than the premises, at least as understood within the LW community? But that's beside the point. The problem here is that such conclusions are, I believe, widely considered to be weak evidence to base further speculations and estimates on.

Reading the QM sequence (someone link) will show you that to your surprise and amazement, what seemed to you like an unjustified leap and a castle in the air, a mere interpretation, is actually nailed down with shocking solidity.

What I'm trying to argue here is that if the cornerstone of your argumentation, if one of your basic tenets, is the likelihood of exponential evolving superhuman AI, then, although that is a valid speculation given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. Not to say that it is a false hypothesis, that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on such premises, on ideas that are themselves not based on firm ground.

Actually, now that I read this paragraph, it sounds like you think that "exponential", "evolving" AI is an unsupported premise, rather than "AI go FOOM" being the conclusion of a lot of other disjunctive lines of reasoning. That explains a lot about the tone of this post. And if you're calling it "exponential" or "evolving", which are both things the reasoning would specifically deny (it's supposed to be faster-than-exponential and have nothing to do with natural selection), then you probably haven't read the supporting arguments. Read the FOOM debate.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who has figured all this out? The only person who's aware of something that might shatter the utility of the universe, if not multiverse? Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least give all they have to the SIAI?

After reading enough sequences you'll pick up enough of a general sense of what it means to treat a thesis analytically, analyze it modularly, and regard every detail of a thesis as burdensome, that you'll understand people here would mention Bostrom or Hanson instead. The sort of thinking where you take things apart into pieces and analyze each piece is very rare, and anyone who doesn't do it isn't treated by us as a commensurable voice with those who do. Also, someone link an explanation of pluralistic ignorance and bystander apathy.

I'm talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all those claims for no particular reason. Rather they tell me that there are too many open questions to worry about the possibilities depicted on this site and by the SIAI rather than other near-term risks that might very well wipe us out.

An argument which makes sense emotionally (ambiguity aversion, someone link to hyperbolic discounting, link to scope insensitivity for the concept of warm glow) but not analytically (the expected utility intervals are huge, research often has long lead times).

I believe that hard-SF authors certainly know a lot more than I do, so far, about related topics, and yet they seem not to be nearly as concerned about the relevant issues as the average Less Wrong member. I could have picked Greg Egan. That's beside the point though; it's not just Stross or Egan but everyone versus Eliezer Yudkowsky and some unknown followers. What about the other Bayesians out there? Are they simply not as literate in the math as Eliezer Yudkowsky, or do they somehow teach their own methods of reasoning and decision making but not use them?

Good reasoning is very rare, and it only takes a single mistake to derail. "Teach but not use" is extremely common. You might as well ask "Why aren't there other sites with the same sort of content as LW?" Reading enough, and either you'll pick up a visceral sense of the quality of reasoning being higher than anything you've ever seen before, or you'll be able to follow the object-level arguments well enough that you don't worry about other sources casually contradicting them based on shallower examinations, or, well, you won't.

What do you expect me to do? Just believe Eliezer Yudkowsky? Like I believed so many things in the past that made sense but turned out to be wrong? And besides, my psychological condition wouldn't allow me to devote all my resources to the SIAI, or even a substantial amount of my income. The thought makes me reluctant to give anything at all.

Start out with a recurring Paypal donation that doesn't hurt, let it fade into the background, consider doing more after the first stream no longer takes a psychic effort, don't try to make any commitment now or think about it now in order to avoid straining your willpower.

Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

Replies from: JGWeissman, thomblake, NancyLebovitz, Jonathan_Graehl, wedrifid
comment by thomblake · 2010-08-13T21:02:54.996Z · LW(p) · GW(p)

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

The relevant fallacy in 'Aristotelian' logic is probably false dilemma, though there are a few others in the neighborhood.

comment by NancyLebovitz · 2010-08-13T20:39:57.080Z · LW(p) · GW(p)

I forget the term for the fallacy of all-or-nothing reasoning, someone look it up and link to it.

Probably black and white thinking.

comment by Jonathan_Graehl · 2010-08-17T18:59:47.625Z · LW(p) · GW(p)

I haven't done the work to understand MWI yet, but if this FAQ is accurate, almost nobody likes the Copenhagen interpretation (observers are SPECIAL) and a supermajority of "cosmologists and quantum field theorists" think MWI is true.

Since MWI seems to have no practical impact on my decision making, this is good enough for me. Also, Feynman likes it :)

comment by wedrifid · 2010-08-14T06:16:02.830Z · LW(p) · GW(p)

Thanks for taking the time to give a direct answer. I enjoyed reading this, and these replies will likely serve as useful references when people ask similar questions in the future.

comment by XiXiDu · 2010-08-14T18:01:10.424Z · LW(p) · GW(p)

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.

Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who's already convinced, preferably someone within the SIAI?

That is part of what I call transparency and a foundational and reproducible corroboration of one's first principles.

Read the Yudkowsky-Hanson AI Foom Debate.

Awesome, I never came across this until now. Is it not widely mentioned? Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive improvement, which might directly support the claims of existential risk posed by AI, is still missing. All this might be featured in the debate, hopefully with reference to substantial third-party research papers; I don't know yet.

Read Eric Drexler's Nanosystems.

The whole point of the grey goo example was to exemplify the speed and sophistication of nanotechnology that would have to be around to either allow an AI to be built in the first place or for it to be of considerable danger. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is possible at all without advanced nanotechnology.

This is an open question and I'm inquiring about how exactly the uncertainties regarding these problems are accounted for in your probability estimations of the dangers posed by AI.

Exponentials are Kurzweil's thing. They aren't dangerous.

What I was inquiring about is the likelihood of slow versus fast development of AI. That is, how fast after we get AGI will we see the rise of superhuman AI? The means of development by which a quick transcendence might happen are incidental to the meaning of my question.

Where are your probability estimates that account for these uncertainties? Where are your variables and references that allow you to make any kind of estimate to balance the risks of a hard rapture against a somewhat controllable development?

Unless you consider yourself entirely selfish, any altruistic effort should go to whatever has the highest marginal utility.

You misinterpreted my question. What I meant by asking if it is even worth the effort is, as exemplified in my link, the question of why to choose the future over the present. That is: “What do we actually do all day, if things turn out well?”, “How much fun is there in the universe?”, “Will we ever run out of fun?”

Simplify things. Take the version of reality that involves AIs being built and not going FOOM, and the one that involves them going FOOM, and ask which one makes more sense.

When I said that I already cannot follow the chain of reasoning depicted on this site, I didn't mean to say that I was unable to do so due to intelligence or education. I believe I am intelligent enough and am trying to close the education gap. What I meant is that the chain of reasoning is not transparent.

Take the case of evolution: you are more likely to be able to follow the chain of subsequent conclusions. In the case of evolution the evidence isn't far away; it isn't buried beneath 14 years of ideas based on some hypothesis. In the case of the SIAI it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.

Do you have better data from somewhere else? Suspending judgment is not a realistic policy. If you're looking for supporting arguments on FOOM they're in the referenced debate.

What if someone came along making coherent arguments that some sort of particle collider might destroy the universe? I would ask what the experts who are not associated with the person making the claims think. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"?

Nobody's claiming that having consistent probability estimates makes you rational. (Having inconsistent estimates makes you irrational, of course.)

I'm not sure what you are trying to say here. What I said was simply that if you tell me that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask you how you came up with that estimate. I'll ask you to provide more than a consistent internal logic: some evidence-based prior.

...realize that most predictions are actually antipredictions (someone link) and that most arguments are actually just defeating anthropomorphic counterarguments to the antiprediction.

If your antiprediction is not as informed as the original prediction, how can it do more than merely weaken that prediction, rather than overthrow it to the extent on which the SIAI bases its risk estimates?

Replies from: wedrifid, Rain, wedrifid, Nick_Tarleton, Rain
comment by wedrifid · 2010-08-15T03:52:19.100Z · LW(p) · GW(p)

Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is possible at all without advanced nanotechnology.

Um... yes? Superhuman is a low bar and, more importantly, a completely arbitrary bar.

I'm not sure what you are trying to say here. What I said was simply that if you tell me that some sort of particle collider is going to destroy the world with a probability of 75% if run, I'll ask you how you came up with that estimate. I'll ask you to provide more than a consistent internal logic: some evidence-based prior.

Evidence based? By which you seem to mean 'some sort of experiment'? Who would be insane enough to experiment with destroying the world? This situation is exactly where you must understand that evidence is not limited to 'reference to historical experimental outcomes'. You actually will need to look at 'consistent internal logic'... just make sure the consistent internal logic is well grounded on known physics.

What if someone came along making coherent arguments that some sort of particle collider might destroy the universe? I would ask what the experts who are not associated with the person making the claims think. What would you think if he simply said, "do you have better data than me"? Or, "I have a bunch of good arguments"?

And that, well, that is actually a reasonable point. You have been given some links (regarding human behavior) that are a good answer to the question, but it is nevertheless non-trivial. Unfortunately now you are actually going to have to do the work and read them.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2011-06-08T14:02:56.492Z · LW(p) · GW(p)

...... just make sure the consistent internal logic is well grounded on known physics.

Is it? That smarter(faster)-than-human intelligence is possible is well grounded on known physics? If that is the case, how does it follow that intelligence can be applied to itself effectively, to the extent that one could realistically talk about "explosive" recursive self-improvement?

Replies from: timtyler, wedrifid
comment by timtyler · 2011-06-09T23:14:55.990Z · LW(p) · GW(p)

That smarter(faster)-than-human intelligence is possible is well grounded on known physics?

Some still seem sceptical - and you probably also need some math, compsci and philosophy to best understand the case for superhuman intelligence being possible.

comment by wedrifid · 2011-06-09T19:37:58.087Z · LW(p) · GW(p)

Not only is there evidence that smarter-than-human intelligence is possible, it is something that should be trivial given a vaguely sane reductionist model. Moreover you specifically have been given evidence on previous occasions when you have asked similar questions.

What you have not been given and what are not available are empirical observations of smarter than human intelligences existing now. That is evidence to which you would not be entitled.

Replies from: None, None
comment by [deleted] · 2011-06-09T20:05:08.391Z · LW(p) · GW(p)

Moreover you specifically have been given evidence on previous occasions when you have asked similar questions.

Please provide a link to this effect? (Going off topic, I would suggest that a "show all threads with one or more comments by users X, Y and Z" or "show conversations between users X and Y" feature on LW might be useful.)

(First reply below)

comment by [deleted] · 2011-06-09T19:59:57.099Z · LW(p) · GW(p)

Moreover you specifically have been given evidence on previous occasions when you have asked similar questions.

Please provide such a link. (Going off-topic, I additionally suggest that a "show all conversations between user X and user Y" feature on Less Wrong might be useful.)

Replies from: wedrifid
comment by wedrifid · 2011-06-09T20:06:58.441Z · LW(p) · GW(p)

It is currently not possible for me to either link or quote. I do not own a computer in this hemisphere and my Android does not seem to have keys for brackets or greater-than symbols. Workarounds welcome.

Replies from: jimrandomh, None, None
comment by jimrandomh · 2011-06-09T20:17:02.326Z · LW(p) · GW(p)

The solution varies by model, but on mine, alt-shift-letter physical key combinations do special characters that aren't labelled. You can also use the on-screen keyboard, and there are more onscreen keyboards available for download if the one you're currently using is badly broken.

Replies from: wedrifid
comment by wedrifid · 2011-06-09T22:37:49.845Z · LW(p) · GW(p)

SwiftKey X beta. Brilliant!

comment by [deleted] · 2011-06-10T01:24:18.682Z · LW(p) · GW(p)

OK, can I have my quote(s) now? It might just be hidden somewhere in the comments to this very article.

comment by [deleted] · 2011-06-09T20:12:16.834Z · LW(p) · GW(p)

Can you copy and paste characters?

comment by XiXiDu · 2010-08-15T08:49:46.310Z · LW(p) · GW(p)

Um... yes? Superhuman is a low bar...

Uhm...yes? It's just something I would expect to be integrated into any probability estimates of suspected risks. More here.

Who would be insane enough to experiment with destroying the world?

Check the point that you said is a reasonable one. And I have read a lot without coming across any evidence yet. I do expect an organisation like the SIAI to make detailed references and summaries of their decision procedures and probability estimates transparently available, not hidden beneath thousands of posts and comments. "It's somewhere in there, line 10020035, +/- a million lines...." is not transparency! Especially not for an organisation that's concerned with something taking over the universe and asks for your money. An organisation, I'm told, some of whose members get nightmares just from reading about evil AI...

comment by Rain · 2010-08-15T01:43:40.991Z · LW(p) · GW(p)

I think you just want a brochure. We keep telling you to read archived articles explaining many of the positions and you only read the comment where we gave the pointers, pretending as if that's all that's contained in our answers. It'd be more like him saying, "I have a bunch of good arguments right over there," and then you ignore the second half of the sentence.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T09:11:49.979Z · LW(p) · GW(p)

I'm not asking for arguments. I know them. I donate. I'm asking for more now. I'm using the same kind of anti-argumentation that academics would use against your arguments, which I've encountered myself a few times while trying to convince them to take a look at the inscrutable archive of posts and comments that is LW. What do they say? "I skimmed over it, but there were no references besides some sound argumentation, an internal logic." "You make strong claims; mere arguments and conclusions extrapolated from a few premises are insufficient to get what you ask for."

Replies from: wedrifid
comment by wedrifid · 2010-08-15T10:08:45.691Z · LW(p) · GW(p)

I'm not asking for arguments. I know them.

Pardon my bluntness, but I don't believe you, and that disbelief reflects positively on you. Basically, if you do know the arguments then a not insignificant proportion of your discussion here would amount to mere logical rudeness.

For example if you already understood the arguments for, or basic explanation of why 'putting all your eggs in one basket' is often the rational thing to do despite intuitions to the contrary then why on earth would you act like you didn't?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T10:46:57.944Z · LW(p) · GW(p)

Oh crap, the SIAI was just a punching bag. Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out to all of the hundred babies but feed the strongest 10. Otherwise you'd end up with a hundred dead babies, in which case you might as well have eaten the food yourself before wasting it like that. It's obvious, I don't see how someone wouldn't get this.

I used that idiom to illustrate that, given my preferences and current state of evidence, I might as well eat all the food myself rather than waste it on something I don't care to save, or that doesn't need to be saved in the first place because I missed the fact that all the babies are puppets and not real.

I asked, are the babies real babies that need food and is the expected utility payoff of feeding them higher than eating the food myself right now?

I'm starting to doubt that anyone actually read my OP...

Replies from: wedrifid
comment by wedrifid · 2010-08-15T10:53:12.690Z · LW(p) · GW(p)

Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out to all of the hundred babies but feed the strongest 10. Otherwise you'd end up with a hundred dead babies, in which case you might as well have eaten the food yourself before wasting it like that. It's obvious, I don't see how someone wouldn't get this.

I know this is just a tangent... but that isn't actually the reason.

I used that idiom to illustrate that, given my preferences and current state of evidence, I might as well eat all the food myself rather than waste it on something I don't care to save, or that doesn't need to be saved in the first place because I missed the fact that all the babies are puppets and not real.

Just to be clear, I'm not objecting to this. That's a reasonable point.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T10:56:51.007Z · LW(p) · GW(p)

Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW? I've missed the reason then. Seriously, I'd love to read up on it now.

Here is an example of what I want:

As a result, sober calculations suggest that the lifetime risk of dying from an asteroid strike is about the same as the risk of dying in a commercial airplane crash. Yet we spend far less on avoiding the former risk than the latter.

Replies from: wedrifid
comment by wedrifid · 2010-08-15T11:31:45.634Z · LW(p) · GW(p)

Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW?

Good question. If not, there should be. It is just basic maths when handling expected utilities, but it crops up often enough. Eliezer gave you a partial answer:

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain.

... but unfortunately only asked for a link for the 'scope insensitivity' part, not a link to a 'marginal utility' tutorial. I've had a look and I actually can't find such a reference on LW. A good coverage of the subject can be found in an external paper, Heuristics and biases in charity. Section 1.1.3, Diversification, covers the issue well.
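
As a rough sketch of the distinction (all numbers invented for illustration): when your donation is small relative to each charity's budget, marginal utility per dollar is roughly constant and concentrating on the best option wins; diversification only becomes optimal once your own giving is large enough for returns to diminish.

```python
# Illustrative only: invented utility-per-dollar figures, not real charity data.
import math

BUDGET = 100.0  # dollars you personally donate

def eu_constant(a):
    """Constant marginal utility per dollar (the typical small-donor case)."""
    return 3.0 * a + 2.0 * (BUDGET - a)

def eu_diminishing(a):
    """Diminishing returns, e.g. if your donation could saturate a charity."""
    return 3.0 * math.log1p(a) + 2.0 * math.log1p(BUDGET - a)

amounts = [i * 0.1 for i in range(1001)]  # candidate amounts given to charity A
print("constant returns, best amount to A:", max(amounts, key=eu_constant))
print("diminishing returns, best amount to A:", max(amounts, key=eu_diminishing))
# Prints 100.0 (all to A) in the first case and an interior split (~60) in the second.
```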

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T11:42:33.121Z · LW(p) · GW(p)

You should just be discounting expected utilities by the probability of the claims being true...

That's another point. As I asked, what are the variables, where do I find the data? How can I calculate this probability based on arguments to be found on LW?

This IS NOT sufficient to scare people to the point of having nightmares and then ask them for most of their money.

Replies from: wedrifid
comment by wedrifid · 2010-08-15T11:55:01.515Z · LW(p) · GW(p)

That's another point.

I'm not trying to be a nuisance here, but it is the only point I'm making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T12:23:30.217Z · LW(p) · GW(p)

I'm sorry, I perceived your comment to be mainly about decision making regarding charities, which is completely marginal since the SIAI is the only charity concerned with the risk I'm inquiring about. Is the risk in question even real, and does its likelihood justify the consequences and the arguments for action?

I inquired about the decision making regarding charities because you claimed that what I stated about egg allocation is not the point being made. But I do not particularly care about that question, as it is secondary.

comment by wedrifid · 2010-08-15T03:50:41.463Z · LW(p) · GW(p)

You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down.

Where are the formulas? What are the variables? Where is this method exemplified to reflect the decision process of someone who's already convinced, preferably someone within the SIAI?

That is part of what I call transparency and a foundational and reproducible corroboration of one's first principles.

Leave aside SIAI-specific claims here. The point Eliezer was making was about 'all your eggs in one basket' claims in general. In situations like this (your contribution doesn't drastically change the payoff at the margin, etc.), putting all your eggs in the best basket is the right thing to do.

You can understand that insight completely independently of your position on existential risk mitigation.

comment by Nick_Tarleton · 2010-08-14T18:10:04.291Z · LW(p) · GW(p)

Anyway, what I notice from the Wiki entry is that one of the most important ideas, recursive improvement, which might directly support the claims of existential risk posed by AI, is still missing.

Er, there's a post by that title.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T19:48:25.452Z · LW(p) · GW(p)

...and "FOOM" means way the hell smarter than anything else around...

Questionable. Is smarter than human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.

Not, "ooh, it's a little Einstein but it doesn't have any robot hands, how cute".

Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions), but how is it going to make use of the things it orders?

Optimizing yourself is a special case, but it's one we're about to spend a lot of time talking about.

I believe that self-optimization is likely to be very limited. Changing anything substantial might lead Gandhi to swallow the pill that will make him want to hurt people, so to speak.

...humans developed the idea of science, and then applied the idea of science...

Sound argumentation, but it gives no justification to extrapolate it to the point of applying it to the shaky idea of a superhuman intellect coming up with something better than science and then applying that again to come up...

In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it's often possible to distinguish cognitive algorithms and cognitive content.

All those ideas about the possible advantages of being an entity that can reflect upon itself, to the extent of being able to pinpoint its own shortcomings, are again highly speculative. This could be a disadvantage.

Much of the rest is about the plateau argument: once you have fireworks, you can go to the moon. Well yes, I've been aware of that argument. But it is weak; the claim that there are many hidden mysteries about reality that we have completely missed so far is highly speculative. I think even EY admits that whatever happens, quantum mechanics will be a part of it. Is the AI going to invent FTL travel? I doubt it, and even that speculation is already based on the assumption that superhuman intelligence, not just faster intelligence, is possible.

Insights are items of knowledge that tremendously decrease the cost of solving a wide range of problems.

Like the discovery that P ≠ NP? Oh wait, that would be limiting. This argument runs in both directions.

If you go to a sufficiently sophisticated AI - more sophisticated than any that currently exists...

Assumption.

But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.

Nice idea, but recursion does not imply performance improvement.

You can't draw detailed causal links between the wiring of your neural circuitry, and your performance on real-world problems.

How can he then make any assumptions about the possibility of improving them recursively, given this insight, to an extent that would empower an AI to transcend into superhuman realms?

Well, we do have one well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of natural selection, our alien god.

Did he just attribute intention to natural selection?

Replies from: gwern, CarlShulman, MichaelVassar, wedrifid
comment by gwern · 2010-08-14T21:02:08.272Z · LW(p) · GW(p)

Questionable. Is smarter-than-human intelligence possible in a sense comparable to the difference between chimps and humans? To my awareness we have no evidence to this end.

What would you accept as evidence?

Would you accept sophisticated machine learning algorithms like the ones in the Netflix contest, which find connections that make no sense to humans, who simply can't work with such high-dimensional data?

Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?

Would you accept a chess program which could crush any human chess player who ever lived? Kasparov peaked at ELO 2851; Rybka is at 3265. Wikipedia says grandmaster status comes at ELO 2500. So Rybka is now further beyond Kasparov at his peak than Kasparov was beyond a new grandmaster. And it's not like Rybka or the other chess AIs will weaken with age.

Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?

Replies from: soreff, Sniffnoy, Aron2, XiXiDu
comment by soreff · 2010-08-15T02:24:31.967Z · LW(p) · GW(p)

I think it at least possible that much-smarter-than-human intelligence might turn out to be impossible. There exist some problem domains where there appear to be a large number of solutions, but where the quality of the solutions saturates quickly as more and more resources are thrown at them. A toy example is how often records are broken in a continuous 1-D domain, with attempts drawn from a constant probability distribution: the number of records broken goes as the log of the number of attempts. If some of the tasks an AGI must solve are like this, then it might not do much better than humans - not because evolution did a wonderful job of optimizing humans for perfect intelligence, but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
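
A minimal sketch of that toy example (my own illustration, assuming i.i.d. draws from a fixed continuous distribution; the repetition counts are arbitrary): the expected number of record-setting attempts among n draws is the harmonic number 1 + 1/2 + ... + 1/n, which grows roughly like log(n).

    import math
    import random

    def count_records(n_attempts):
        """Count how many i.i.d. draws set a new record."""
        best = float("-inf")
        records = 0
        for _ in range(n_attempts):
            x = random.random()
            if x > best:
                best = x
                records += 1
        return records

    # The average record count tracks the harmonic number H(n) ~ ln(n) + 0.5772.
    for n in (10, 100, 1000, 10000):
        avg = sum(count_records(n) for _ in range(500)) / 500
        print(n, round(avg, 2), round(math.log(n) + 0.5772, 2))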

One (admittedly weak) piece of evidence, a real example of saturation, is an optimizing compiler being used to recompile itself. It is a recursive optimizing system, and, if there is a knob to allow more effort to be spent on the optimization, the speed-up from the first pass can be used to allow a bit more effort to be applied to a second pass for the same CPU time. Nonetheless, the results for this specific recursion are not FOOM.
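
A toy model of that recompilation loop (the 30% ceiling and the fraction recovered per pass are made-up numbers, purely illustrative): if each pass captures a fixed share of whatever speed-up remains available, the cumulative gain converges to a finite limit instead of compounding without bound.

    def recompile_speedups(ceiling=1.30, fraction_per_pass=0.5, passes=10):
        """Cumulative speed-up of a compiler repeatedly recompiling itself
        under a fixed effort budget; the series saturates below the ceiling."""
        speedup = 1.0
        history = []
        for _ in range(passes):
            speedup += (ceiling - speedup) * fraction_per_pass
            history.append(round(speedup, 4))
        return history

    print(recompile_speedups())  # approaches 1.30 and stays there - no FOOM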

The evidence in the other direction is basically existence proofs from the most intelligent people or groups of people that we know of. Something as intelligent as Einstein must be possible, since Einstein existed. Given an AI Einstein working on improving its own intelligence, it isn't clear whether it could make a little progress or a great deal.

Replies from: gwern
comment by gwern · 2010-08-15T08:18:05.900Z · LW(p) · GW(p)

but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.

This goes for your compilers as well, doesn't it? There are still major speed-ups available in compilation technology (the closely connected areas of whole-program compilation+partial evaluation+supercompilation), but a compiler is still expected to produce isomorphic code, and that puts hard information-theoretic bounds on output.

comment by Sniffnoy · 2010-08-15T05:06:13.860Z · LW(p) · GW(p)

Would you accept a circuit designed by a genetic algorithm, which doesn't work in the physics simulation but works better in reality than anything humans have designed, with mysterious parts that are not connected to anything but are necessary for it to function?

Can you provide details / link on this?

Replies from: gwern
comment by gwern · 2010-08-15T07:57:05.071Z · LW(p) · GW(p)

I should've known someone would ask for the cite rather than just do a little googling. Oh well. Turns out it wasn't a radio, but a voice-recognition circuit. From http://www.talkorigins.org/faqs/genalg/genalg.html#examples :

"This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems - a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way - yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997)."

comment by Aron2 · 2010-08-14T22:06:28.265Z · LW(p) · GW(p)

The analogy is that AGI can be to us as we are to chimps. This is the part that needs the focus.

We could have said in the 1950s that machines beat us at arithmetic by orders of magnitude. Classical AI researchers clearly were deluded by success at easy problems. The problem with winning on easy problems is that it says little about hard ones.

What I see is that in the domain of problems for which human level performance is difficult to replicate, computers are capable of catching us and likely beating us, but gaining a great distance on us in performance is difficult. After all, a human can still beat the best chess programs with a mere pawn handicap. This may never get to two pawns. ever. Certainly the second pawn is massively harder than the first. It's the nature of the problem space. In terms of runaway AGI control of the planet, we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).

BTW, is ELO supposed to have that kind of linear interpretation?

Replies from: gwern, CarlShulman, gwern
comment by gwern · 2010-08-15T08:14:14.371Z · LW(p) · GW(p)

The analogy is that AGI can be to us as we are to chimps. This is the part that needs the focus.

Yes, this is the important part. Chimps lag behind humans in 2 distinct ways - they differ in degree, and in kind. Chimps can do a lot of human-things, but very minimally. Painting comes to mind. They do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not in the recursive way that modern linguistics (pace Chomsky) seems to regard as key, not recursive at all. (Kind.)

What can we do with this distinction? How does it apply to my three examples?

After all, a human can still beat the best chess programs with a mere pawn handicap.

O RLY?

This may never get to two pawns. ever.

Ever is a long time. Would you like to make this a concrete prediction I could put on PredictionBook, perhaps something along the lines of 'no FIDE grandmaster will lose a 2-pawn-odds chess match to a computer by 2050'?

BTW, is ELO supposed to have that kind of linear interpretation?

I'm not an expert on ELO by any means (do we know any LW chess experts?), but reading through http://en.wikipedia.org/wiki/Elo_rating_system#Mathematical_details doesn't show me any warning signs - ELO point differences are supposed to reflect probabilistic differences in winning, or a ratio, and so the absolute values shouldn't matter. I think.
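
A quick check under the standard Elo assumption (the expected score is a logistic function of the rating difference alone, so only gaps matter, not absolute values):

    def elo_expected_score(rating_a, rating_b):
        """Expected score of player A against player B under the standard Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # Same 400-point gap, different absolute ratings -> same expected score (~0.91).
    print(elo_expected_score(2900, 2500))
    print(elo_expected_score(1900, 1500))

    # Rybka (~3265) vs. peak Kasparov (~2851): ~0.92 expected score per game.
    # Peak Kasparov vs. a fresh 2500 grandmaster: ~0.88 expected score per game.
    print(elo_expected_score(3265, 2851))
    print(elo_expected_score(2851, 2500))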

comment by CarlShulman · 2010-08-15T09:42:16.266Z · LW(p) · GW(p)

we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).

This is a possibility (made more plausible if we're talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it's greatly complicated by international competition: to the extent that careful limitation and restriction of AI capabilities and access to potential sources of power reduces economic, scientific, and military productivity, it will be tough to coordinate. Not to mention that existing economic, political, and legal structures are not very reliably stable: electorates and governing incumbents often find themselves unable to retain power.

comment by gwern · 2011-08-03T16:10:24.738Z · LW(p) · GW(p)

BTW, is ELO supposed to have that kind of linear interpretation?

It seems that whether or not it's supposed to, in practice it does. From the just released "Intrinsic Chess Ratings", which takes Rybka and does exhaustive evaluations (deep enough to be 'relatively omniscient') of many thousands of modern chess games; on page 9:

We conclude that there is a smooth relationship between the actual players’ Elo ratings and the intrinsic quality of the move choices as measured by the chess program and the agent fitting. Moreover, the final s-fit values obtained are nearly the same for the corresponding entries of all three time periods. Since a lower s indicates higher skill, we conclude that there has been little or no ‘inflation’ in ratings over time—if anything there has been deflation. This runs counter to conventional wisdom, but is predicted by population models on which rating systems have been based [Gli99].

The results also support a no answer to question 2 ["Were the top players of earlier times as strong as the top players of today?"]. In the 1970’s there were only two players with ratings over 2700, namely Bobby Fischer and Anatoly Karpov, and there were years as late as 1981 when no one had a rating over 2700 (see [Wee00]). In the past decade there have usually been thirty or more players with such ratings. Thus lack of inflation implies that those players are better than all but Fischer and Karpov were. Extrapolated backwards, this would be consistent with the findings of [DHMG07], which however (like some recent competitions to improve on the Elo system) are based only on the results of games, not on intrinsic decision-making.

comment by XiXiDu · 2010-08-15T09:45:38.059Z · LW(p) · GW(p)

You are getting much closer than any of the commenters before you to providing some other form of evidence to substantiate one of the primary claims here.

You have to list the primary propositions on which you base further argumentation, from which you draw conclusions, and which you use to come up with probability estimates of the risks associated with those premises. You have to list these main principles so that anyone who comes across claims of existential risks and a plea for donations can get an overview. Then you have to provide the references you listed above, if you believe they give credence to the ideas, so that people can see that what you say isn't made up but is based on previous work and evidence from people who are not associated with your organisation.

Or are you going to pull a no-true-Scotsman and assert that each one of these is mechanical or unoriginal or not really beyond human or just not different enough?

No. Although I have heard about all of these achievements, I'm not yet able to judge whether they provide evidence supporting the possibility of strong superhuman AI, the kind that would pose an existential risk. In the case of chess, though, I'm pretty much of the opinion that this is no strong evidence, as it is not sufficiently close to being able to overpower humans to the extent of posing an existential risk when extrapolated into other areas.

It would be good if you could provide links to the mentioned examples, especially the genetic algorithm (ETA: Here.). It is still questionable, however, whether this could lead to the stated recursive improvements or will shortly hit a limit. To my knowledge genetic algorithms are merely used for optimization, based on previous design spaces and are not able to come up with something unique to the extent of leaving their design space.
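
In the simplest case that design space is just a fixed-length bitstring, and the whole technique is a loop of selection, crossover and mutation, roughly as in the sketch below (a toy bit-counting fitness function stands in for Thompson's hardware-in-the-loop evaluation; population size, mutation rate and generation count are arbitrary assumptions):

    import random

    def evolve(bits=32, pop_size=50, generations=200, mutation_rate=0.02):
        """Toy genetic algorithm: evolve a bitstring toward all ones."""
        fitness = lambda genome: sum(genome)  # stand-in for a real fitness test
        population = [[random.randint(0, 1) for _ in range(bits)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]  # selection: keep the fitter half
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, bits)
                child = a[:cut] + b[cut:]  # single-point crossover
                child = [g ^ (random.random() < mutation_rate) for g in child]  # mutation
                children.append(child)
            population = children
        return max(population, key=fitness)

    best = evolve()
    print(sum(best), "of", len(best), "bits set")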

Whether sophisticated machine learning algorithms are able to discover valuable insights beyond statistical inferences within higher-dimensional datasets is a very interesting question, though. As I just read, the 2009 Netflix Prize was awarded to a team that achieved a 10.05% improvement over the previous algorithm. I'll have to examine further whether this might bear evidence that such a complicated mesh of algorithms could lead to quick self-improvement.

One of the best comments so far, thanks. Although your last sentence, to my understanding, simply shows that you are reluctant to offer further criticism.

Replies from: gwern
comment by gwern · 2010-08-15T10:58:59.580Z · LW(p) · GW(p)

I am reluctant because you seem to ask for magical programs when you write things like:

"To my knowledge genetic algorithms are merely used for optimization, based on previous design spaces and are not able to come up with something unique to the extent of leaving their design space."

I was going to link to AIXI and approximations thereof; full AIXI is as general as an intelligence can be if you accept that there are no uncomputable phenomena, and the approximations are already pretty powerful (going from nothing to playing Pac-Man).

But then it occurred to me that anyone invoking a phrase like 'leaving their design space' might then just say 'oh, those designs and models can only model Turing machines, and so they're stuck in their design space'.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T11:11:46.454Z · LW(p) · GW(p)

But then it occurred to me that anyone invoking a phrase like 'leaving their design space'...

I've no idea (formally) of what a 'design space' actually is. This is a tactic I frequently use against strongholds of argumentation that are seemingly based on expertise. I use their own terminology and rearrange it into something that sounds superficially clever. I like to call it a Chinese room approach. Sometimes it turns out that all they were doing was sounding smart; they cannot explain themselves when faced with their own terminology being used to inquire about their pretences.

I thank you, however, for taking the time to actually link to further third-party information that will substantiate the given arguments for anyone not willing to trust the whole of LW without it.

Replies from: gwern
comment by gwern · 2010-08-15T11:15:12.038Z · LW(p) · GW(p)

I see. Does that actually work for you? (Note that your answer will determine whether I mentally re-categorize you from 'interested open-minded outsider' to 'troll'.)

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T11:34:27.879Z · LW(p) · GW(p)

It works against cults and religion in general. I don't argue with them about their religion being not even wrong, but rather accept their terms and highlight inconsistencies within their own framework, by going as far as I can with one of their arguments and by inquiring about certain aspects based on their own terminology, until they are unable to consistently answer or to explain where I am wrong.

This also works with the anti-GM-food bunch, data protection activists, hippies and many other fringe groups. For example, the data protection bunch concerned with information disclosure on social networks or Google Streetview: yes, I say, that's bad, burglars could use such services to check out your house! I wonder what evidence there is for an increase in burglary in the countries where Streetview has already been available for many years?

Or I tell the anti-gun lobbyists how I support their cause. It's really bad if anyone can buy a gun. Can you point me to the strong correlation between gun ownership and firearm homicides? Thanks.

comment by CarlShulman · 2010-08-15T09:32:28.771Z · LW(p) · GW(p)

Questionable. How is an encapsulated AI going to get this kind of control without already existing advanced nanotechnology? It might order something over the Internet if it hacks some bank account etc. (long chain of assumptions),

Any specific scenario is going to have burdensome details, but that's what you get if you ask for specific scenarios rather than general pressures, unless one spends a lot of time going through detailed possibilities and vulnerabilities. With respect to the specific example, regular human criminals routinely swindle or earn money anonymously online, and hack into and control millions of computers in botnets. Cloud computing resources can be rented with ill-gotten money.

but how is it going to make use of the things it orders?

In the unlikely event of a powerful human-indifferent AI appearing in the present day, a smartphone held by a human could provide sensors and communication to use humans as manipulators (as computer programs direct the movements of some warehouse workers today). Humans can be paid, blackmailed, or deceived (intelligence agencies regularly do these things) to perform some tasks. An AI that leverages initial capabilities could jury-rig a computer-controlled method of coercion [e.g. a cheap robot arm holding a gun, a tampered-with electronic drug-dispensing implant, etc.]. And as time goes by and the cumulative probability of advanced AI becomes larger, increasing quantities of robotic vehicles and devices will be available.

Replies from: XiXiDu, Unknowns
comment by XiXiDu · 2010-08-15T09:55:03.687Z · LW(p) · GW(p)

Thanks, yes, I know about those arguments. They are one of the reasons I'm actually donating and accept AI to be an existential risk. I'm inquiring about further supporting documents and transparency. More on that here; especially check the particle collider analogy.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-15T10:14:27.882Z · LW(p) · GW(p)

With respect to transparency, I agree about the lack of concise, exhaustive, accessible treatments. Reading some of the linked comments about marginal evidence from hypotheses, I'm not quite sure what you mean, beyond remembering and multiplying by the probability that particular premises are false. Consider Hanson's "Economic Growth Given Machine Intelligence". One might support it with generalizations from past population growth in plants and animals, or from data on capital investment and past market behavior and automation, but what would you say would license drawing probabilistic inferences using it?

comment by Unknowns · 2010-08-17T06:49:03.142Z · LW(p) · GW(p)

Note that such methods might not result in the destruction of the world within a week (the guaranteed result of a superhuman non-Friendly AI according to Eliezer.)

Replies from: CarlShulman
comment by CarlShulman · 2010-08-17T10:41:16.914Z · LW(p) · GW(p)

destruction of the world within a week (the guaranteed result of a superhuman non-Friendly AI according to Eliezer.)

What guarantee?

Replies from: Unknowns
comment by Unknowns · 2010-08-17T11:48:27.329Z · LW(p) · GW(p)

With a guarantee backed by $1000.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-17T12:33:16.144Z · LW(p) · GW(p)

The linked bet doesn't reference "a week," and the "week" reference in the main linked post is about going from infrahuman to superhuman, not using that intelligence to destroy humanity.

That bet seems underspecified. Does attention to "Friendliness" mean any attention to safety whatsoever, or designing an AI with a utility function such that it's trustworthy regardless of power levels? Is "superhuman" defined relative to the then-current level of human (or upload, or trustworthy less intelligent AI) capacity with any enhancements (or upload speedups, etc)? What level of ability counts as superhuman? You two should publicly clarify the terms.

Replies from: Unknowns
comment by Unknowns · 2010-08-17T12:42:49.745Z · LW(p) · GW(p)

A few comments later on the same thread someone asked me how much time was necessary, and I said I thought a week was enough, based on Eliezer's previous statements. He never contradicted this, so it seems to me that he accepted it by default, since some time limit will be necessary in order for someone to win the bet.

I defined superhuman to mean that everyone will agree that it is more intelligent than any human being existing at that time.

I agree that the question of whether there is attention to Friendliness might be more problematic to determine. But "any attention to safety whatsoever" seems to me to be clearly stretching the idea of Friendliness - for example, someone could pay attention to safety by trying to make sure that the AI was mostly boxed, or whatever, and this wouldn't satisfy Eliezer's idea of Friendliness.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-17T12:48:55.491Z · LW(p) · GW(p)

Ah. So an AI could, e.g. be only slightly superhuman and require immense quantities of hardware to generate that performance in realtime.

Replies from: Unknowns
comment by Unknowns · 2010-08-17T13:03:48.925Z · LW(p) · GW(p)

Right. And if this scenario happened, there would be a good chance that it would not be able to foom, or at least not within a week. Eliezer's opinion seems to be that this scenario is extremely unlikely, in other words that the first AI will already be far more intelligent than the human race, and that even if it is running on an immense amount of hardware, it will have no need to acquire more hardware, because it will be able to construct nanotechnology capable of controlling the planet through actions originating on the internet as you suggest. And as you can see, he is very confident that all this will happen within a very short period of time.

comment by MichaelVassar · 2010-12-29T18:58:40.526Z · LW(p) · GW(p)

Have you tried asking yourself non-rhetorically what an AI could do without MNT? That doesn't seem to me to be a very great inferential distance at all.

Replies from: XiXiDu, shokwave
comment by XiXiDu · 2010-12-29T20:12:04.674Z · LW(p) · GW(p)

Have you tried asking yourself non-rhetorically what an AI could do without MNT?

I believe that in this case an emulation would be the bigger risk, because it would be sufficiently obscure and could pretend to be friendly for a long time while secretly strengthening its power. A purely artificial intelligence would be too alien and would therefore have a hard time acquiring the necessary power to transcend to a superhuman level without someone figuring out what it does, either by its actions or by looking at its code. It would also likely not have the intention to increase its intelligence infinitely anyway. I just don't see that AGI implies self-improvement beyond learning what it can while staying within the scope of its resources. You'd have to deliberately implement such an intention. It would generally require its creators to solve a lot of problems much more difficult than limiting its scope. That is why I do not see runaway self-improvement as a likely failure mode.

I could imagine all kinds of scenarios indeed. But I also have to assess their likelihood given my epistemic state. And my conclusion is that a purely artificial intelligence wouldn't and couldn't do much. I estimate the worst-case scenario to be on par with a local nuclear war.

Replies from: MichaelVassar, timtyler, hairyfigment, timtyler
comment by MichaelVassar · 2010-12-30T04:13:04.299Z · LW(p) · GW(p)

I simply can't see where the above beliefs might come from. I'm left assuming that you just don't mean the same thing by AI as I usually mean. My guess is that you are implicitly thinking of a fairly complicated story but are not spelling that out.

Replies from: XiXiDu
comment by XiXiDu · 2010-12-30T11:30:11.228Z · LW(p) · GW(p)

I simply can't see where the above beliefs might come from. I'm left assuming that you just don't mean the same thing by AI as I usually mean.

And I can't see where your beliefs might come from. What are you telling potential donors or AGI researchers? That AI is dangerous by definition? Well, what if they have a different definition; what should make them update in favor of your definition? That you have thought about it for more than a decade now? I can perceive serious flaws in any of the replies I have gotten so far in under a minute, and I am a nobody. There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven't thought about. If that kind of intelligence is as likely as other risks, then it doesn't matter what it comes up with anyway, because those other risks will wipe us out just as well and with the same probability.

There are already many people criticizing the SIAI right now, even on LW. Soon, once you are more popular, people other than me will scrutinize everything you ever wrote. And what do you expect them to conclude if even a professional AGI researcher, who has been a member of the SIAI, writes the following:

Every AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence -- are people associated with SIAI.

But I have never heard any remotely convincing arguments in favor of this odd, outlier view!!!

BTW the term "self-modifying" is often abused in the SIAI community. Nearly all learning involves some form of self-modification. Distinguishing learning from self-modification in a rigorous formal way is pretty tricky.

Why would I disregard his opinion in favor of yours? Can you present any novel achievements that would make me conclude that you people are actually experts when it comes to intelligence? The LW sequences are well written but do not showcase any deep comprehension of the potential of intelligence. Yudkowsky was able to compile previously available knowledge into a coherent framework of rational conduct. That isn't sufficient to prove that he has enough expertise on the topic of AI to make me believe him, regardless of any antipredictions being made that weaken the expected risks associated with AI. There is also insufficient evidence to conclude that Yudkowsky, or someone within the SIAI, is smart enough to be able to tackle the problem of friendliness mathematically.

If you would at least let some experts take a look at your work and assess its effectiveness and general potential. But there exists no peer review at all. Some prominent people have attended the Singularity Summit. Have you asked them why they do not contribute to the SIAI? Have you for example asked Douglas Hofstadter why he isn't doing everything he can to mitigate risks from AI? Sure, you got some people to donate a lot of money to the SIAI. But to my knowledge they are far from being experts and contribute to other organisations as well. Congratulations on that, but even cults get rich people to support them. I'll update on donors once they say why they support you and their arguments are convincing, or if they are actually experts or people able to showcase certain achievements.

My guess is that you are implicitly thinking of a fairly complicated story but are not spelling that out.

Intelligence is powerful, intelligence doesn't imply friendliness, therefore intelligence is dangerous. Is that the line of reasoning based on which I am supposed to neglect other risks? If you think so, then you are making it more complicated than necessary. You do not need intelligence to invent stuff to kill us if there's already enough dumb stuff around that is more likely to kill us. And I do not think that it is reasonable to come up with a few weak arguments on how intelligence could be dangerous and conclude that their combined probability beats any good argument against one of the premises or in favor of other risks. The problems are far too diverse; you can't combine them and proclaim that you are going to solve all of them by simply defining friendliness mathematically. I just don't see that right now, because it is too vague. You might as well replace friendliness with magic as the solution to the many disjoint problems of intelligence.

Intelligence is also not the solution to all other problems we face. As I have argued several times, I just do not see that recursive self-improvement will happen any time soon and cause an intelligence explosion. What evidence is there against a gradual development? As I see it, we will have to painstakingly engineer intelligent machines. There won't be some meta-solution that outputs meta-science to subsequently solve all other problems.

Replies from: timtyler, timtyler, timtyler, Rain, Nick_Tarleton, timtyler, timtyler, timtyler, timtyler
comment by timtyler · 2010-12-30T20:32:41.226Z · LW(p) · GW(p)

Have you for example asked Douglas Hofstadter why he isn't doing everything he can to mitigate risks from AI?

Douglas Hofstadter and Daniel Dennett both seem to think these issues are probably still far away.

The reason I have injected myself into that world, unsavory though I find it in many ways, is that I think that it's a very confusing thing that they're suggesting. If you read Ray Kurzweil's books and Hans Moravec's, what I find is that it's a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid.

...

Kelly said to me, "Doug, why did you not talk about the singularity and things like that in your book?" And I said, "Frankly, because it sort of disgusts me, but also because I just don't want to deal with science-fiction scenarios." I'm not talking about what's going to happen someday in the future; I'm not talking about decades or thousands of years in the future. I'm talking about "What is a human being? What is an 'I'?" This may be an outmoded question to ask 30 years from now. Maybe we'll all be floating blissfully in cyberspace, there won't be any human bodies left, maybe everything will be software living in virtual worlds, it may be science-fiction city. Maybe my questions will all be invalid at that point. But I'm not writing for people 30 years from now, I'm writing for people right now. We still have human bodies. We don't yet have artificial intelligence that is at this level. It doesn't seem on the horizon.

comment by timtyler · 2010-12-30T20:42:50.492Z · LW(p) · GW(p)

And I do not think that it is reasonable to come up with a few weak arguments on how intelligence could be dangerous and conclude that their combined probability beats any good argument against one of the premises or in favor of other risks.

I'm not sure who is doing that. Being hit by an asteroid, nuclear war and biological war are other possible potentially major setbacks. Being eaten by machines should also have some probability assigned to it - though it seems pretty challenging to know how to do that. It's a bit of an unknown unknown. Anyway, this material probably all deserves some funding.

comment by timtyler · 2010-12-30T20:04:12.397Z · LW(p) · GW(p)

There is also insufficient evidence to conclude that Yudkowsky, or someone within the SIAI, is smart enough to be able to tackle the problem of friendliness mathematically.

The short-term goal seems more modest - prove that self-improving agents can have stable goal structures.

If true, that would be fascinating - and important. I don't know what the chances of success are, but Yudkowsky's pitch is along the lines of: look this stuff is pretty important, and we are spending less on it than we do on testing lipstick.

That's a pitch which it is hard to argue with, IMO. Machine intelligence research does seem important and currently-underfunded. Yudkowsky is - IMHO - a pretty smart fellow. If he will work on the problem for $80K a year (or whatever) it seems as though there is a reasonable case for letting him get on with it.

comment by Rain · 2010-12-30T15:03:57.228Z · LW(p) · GW(p)

I'm not sure you're looking at the probability of other extinction risks with the proper weighting. The timescales are vastly different. Supervolcanoes: one every 350,000 years. Major asteroid strikes: one every 700,000 years. Gamma ray bursts: hundreds of millions of years, etc. There's a reason the word 'astronomical' means huge beyond imagining.

Contrast that with the current human-caused mass extinction event: 10,000 years and accelerating. Humans operate on obscenely fast timescales compared to nature. Just with nukes we're able to take out huge chunks of Earth's life forms in 24 hours, most or all of it if we detonated everything we have in an intelligent, strategic campaign to end life. And that's today, rather than tomorrow.

Regarding your professional AGI researcher and recursive self-improvement, I don't know, I'm not an AGI researcher, but it seemed to me that a prerequisite to successful AGI is an understanding and algorithmic implementation of intelligence. Therefore, any AGI will know what intelligence is (since we do), and be able to modify it. Once you've got a starting point, any algorithm that can be called 'intelligent' at all, you've got a huge leap toward mathematical improvement. Algorithms have been getting faster at a higher rate than Moore's Law and computer chips.

Replies from: XiXiDu
comment by XiXiDu · 2010-12-30T16:48:08.676Z · LW(p) · GW(p)

I'm not sure you're looking at the probability of other extinction risks with the proper weighting.

That might be true. But most of them have one solution that demands research in many areas: space colonization. It is true that intelligent systems, if achievable in due time, play a significant role here. But not an exceptional role, if you disregard the possibility of an intelligence explosion, of which I am very skeptical. Further, it appears to me that donating to the SIAI would rather impede research on such systems, given their position that such systems themselves pose an existential risk. Therefore, at the moment, the possibility of risks from AI is partially outweighed, to the extent that the SIAI should be supported yet doesn't hold an exceptional position that would necessarily make it the one charity with the highest expected impact per donation. I am unable to pinpoint another charity at the moment, e.g. space elevator projects, because I haven't looked into them. Nor do I know of any comparison analysis; although you and many other people claim to have calculated it, nobody has ever published their efforts. As you know, I am unable to do such an analysis myself at this point, as I am still learning the math. But I am eager to get the best information by means of feedback anyhow. Not intended as an excuse, of course.

Once you've got a starting point, any algorithm that can be called 'intelligent' at all, you've got a huge leap toward mathematical improvement. Algorithms have been getting faster at a higher rate than Moore's Law and computer chips.

That would surely be a very good argument if I was able to judge it. But can intelligence be captured by a discrete algorithm or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution? Also, can algorithms that could be employed in real-world scenarios be sped up to an extent that would warrant superhuman power? Take photosynthesis: could that particular algorithm be improved considerably, to an extent that it would be vastly better than the evolutionary one? Further, will such improvements be accomplishable fast enough to outpace human progress or the adoption of the given results? My problem is that I do not believe that intelligence is fathomable as a solution that can be applied to itself effectively. I see a fundamental dependency on unintelligent processes. Intelligence merely recapitulates prior discoveries, altering what is already known by means of natural methods. If 'intelligence' is shorthand for 'problem-solving', then it is also the solution, which would mean that there was no problem to be solved. This can't be true; we still have to solve problems, and we are only able to do so more effectively when we are dealing with similar problems that can be subject to known and merely altered solutions. In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, 'intelligence' has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.

Nonetheless I believe your reply was very helpful as an impulse to look at it from a different perspective. Although I might not be able to judge it in detail at this point I'll have to incorporate it.

Replies from: jimrandomh, Rain
comment by jimrandomh · 2010-12-30T18:10:11.223Z · LW(p) · GW(p)

That would surely be a very good argument if I was able to judge it. But can intelligence be captured by a discrete algorithm or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution?

This seems backwards - if intelligence is modular, that makes it more likely to be subject to overall improvements, since we can upgrade the modules one at a time. I'd also like to point out that we currently have two meta-algorithms, bagging and boosting, which can improve the performance of any other machine learning algorithm at the cost of using more CPU time.
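
A minimal sketch of those two meta-algorithms using scikit-learn (the synthetic dataset and the shallow-tree base learner are arbitrary stand-ins; the point is only that both wrappers spend extra CPU time to try to squeeze more accuracy out of the same base learner):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    base = DecisionTreeClassifier(max_depth=3, random_state=0)

    models = {
        "single tree": base,
        "bagging (50 trees)": BaggingClassifier(base, n_estimators=50, random_state=0),
        "boosting (50 rounds)": AdaBoostClassifier(n_estimators=50, random_state=0),
    }

    for name, model in models.items():
        # 5-fold cross-validated accuracy for each model.
        print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))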

It seems to me that, if we reach a point where we can't improve an intelligence any further, it won't be because it's fundamentally impossible to improve, but because we've hit diminishing returns. And there's really no way to know in advance where the point of diminishing returns will be. Maybe there's one breakthrough point, after which it's easy until you get to the intelligence of an average human, then it's hard again. Maybe it doesn't become difficult until after the AI's smart enough to remake the world. Maybe the improvement is gradual the whole way up.

But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.

In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all the discussions I have taken part in so far, 'intelligence' has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.

In a sense, all thoughts are just the same words and symbols rearranged in different ways. But that is not the type of newness that matters. New software algorithms, concepts, frameworks, and programming languages are created all the time. And one new algorithm might be enough to birth an artificial general intelligence.

Replies from: NancyLebovitz, timtyler, timtyler
comment by NancyLebovitz · 2010-12-30T18:11:41.015Z · LW(p) · GW(p)

But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.

The AI will be much bigger than a virus. I assume this will make propagation much harder.

Replies from: jimrandomh
comment by jimrandomh · 2010-12-30T19:01:23.410Z · LW(p) · GW(p)

Harder, yes. Much harder, probably not, unless it's on the order of tens of gigabytes; most Internet connections are quite fast.

comment by timtyler · 2010-12-30T19:24:25.803Z · LW(p) · GW(p)

And one new algorithm might be enough to birth an artificial general intelligence.

Anything could be possible - though the last 60 years of the machine intelligence field are far more evocative of the "blood-out-of-a-stone" model of progress.

comment by timtyler · 2010-12-30T19:11:21.537Z · LW(p) · GW(p)

If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.

Smart human programmers can make dark nets too. Relatively few of them want to trash their own reputations and appear in the cross-hairs of the world's security services and law-enforcement agencies, though.

Replies from: jimrandomh
comment by jimrandomh · 2010-12-30T19:49:47.669Z · LW(p) · GW(p)

Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (i.e., for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.

Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it's because they try to sell use of them for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.

Maybe the state of computer security will be better in 20 years, and this won't be as much of a risk anymore. I certainly hope so. But we can't count on it.

Replies from: timtyler
comment by timtyler · 2010-12-30T20:22:39.495Z · LW(p) · GW(p)

Mafia superintelligence, spyware superintelligence - it's all the forces of evil. The forces of good are much bigger, more powerful and better funded.

Sure, we should continue to be vigilant about the forces of evil - but surely we should also recognise that their chances of success are pretty slender - while still keeping up the pressure on them, of course.

Good is winning: http://www.google.com/insights/search/#q=good%2Cevil :-)

Replies from: jimrandomh
comment by jimrandomh · 2010-12-30T20:44:20.957Z · LW(p) · GW(p)

You seem to be seriously misinformed about the present state of computer security. The resources on the side of good are vastly insufficient because offense is inherently easier than defense.

Replies from: timtyler
comment by timtyler · 2010-12-30T21:23:06.362Z · LW(p) · GW(p)

Your unfounded supposition seems pretty obnoxious - and you aren't even right :-(

You can't really say something is "vastly insufficient" - unless you have an intended purpose in mind - as a guide to what would qualify as being sufficient.

There's a huge population of desktop and office computers doing useful work in the world - we evidently have computer security enough to support that.

Perhaps you are presuming some other criterion. However, projecting that presumption onto me - and then proclaiming that I am misinformed - seems out of order to me.

Replies from: jimrandomh
comment by jimrandomh · 2010-12-30T21:32:44.640Z · LW(p) · GW(p)

You can't really say something is "vastly insufficient" unless you have an intended purpose in mind. There's a huge population of desktop and office computers doing useful work in the world - we have computer security enough to support that.

The purpose I had in mind (stated directly in that post's grandparent, which you replied to) was to stop an artificial general intelligence from stealing vast computational resources. Since exploits in major software packages are still commonly discovered, including fairly frequent 0-day exploits which anyone can get for free just by monitoring a few mailing lists, the computer security we have is quite obviously not sufficient for that purpose. Not only that, humans do in fact steal vast computational resources pretty frequently. The fact that no one has tried to or wants to stop people from getting work done on their office computers is completely irrelevant.

Replies from: timtyler
comment by timtyler · 2010-12-30T21:42:30.886Z · LW(p) · GW(p)

You sound bullish - when IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are "seriously misinformed" - when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.

Replies from: jimrandomh
comment by jimrandomh · 2010-12-30T22:06:52.206Z · LW(p) · GW(p)

IMO what you should be doing is learning that it is presumptuous and antagonistic to publicly tell people that they are "seriously misinformed" - when you have such feeble and inaccurate evidence of any such thing. Such nonsense just gets in the way of the discussion.

Perhaps it was presumptuous and antagonistic, perhaps I could have been more tactful, and I'm sorry if I offended you. But I stand by my original statement, because it was true.

Crocker's Rules for me. Will you do the same?

Replies from: timtyler, timtyler
comment by timtyler · 2010-12-30T23:06:12.021Z · LW(p) · GW(p)

I am not sure which statement you stand by. The one about me being "seriously misinformed" about computer security? Let's not go back to that - pulease!

The "adjusted" one - about the resources on the side of good being vastly insufficient to prevent a nasty artificial general intelligence from stealing vast computational resources? I think that is much too speculative for a true/false claim to be made about it.

The case against it is basically the case for good over evil. In the future, it seems reasonable to expect that there will be much more ubiquitous government surveillance. Crimes will be trickier to pull off. Criminals will have more powerful weapons - but the government will know what colour socks they are wearing. Similarly, medicine will be better - and the life of pathogens will become harder. Positive forces look set to win, or at least dominate. Matt Ridley makes a similar case in his recent "The Rational Optimist".

Is there a correspondingly convincing case that the forces of evil will win out - and that the mafia machine intelligence - or the spyware-maker's machine intelligence - will come out on top? That seems about as far-out to me as the SIAI contention that a bug is likely to take over the world. It seems to me that you have to seriously misunderstand evolution's drive to build large-scale cooperative systems to entertain such ideas for very long.

comment by timtyler · 2010-12-30T22:17:27.963Z · LW(p) · GW(p)

I don't have much inclination to think about my attitude towards Crocker's Rules just now - sorry. My initial impression is not favourable, though. Maybe it would work with infrastructure - or on a community level. Otherwise the overhead of tracking people's "Crocker status" seems considerable. You can take that as a "no".

comment by Rain · 2010-12-30T17:17:41.598Z · LW(p) · GW(p)

I believe your reply was very helpful as an impulse to look at it from a different perspective. Although I might not be able to judge it in detail at this point I'll have to incorporate it.

Thank you for continuing to engage my point of view, and offering your own.

I do not believe that intelligence is fathomable as a solution that can [be] applied to itself effectively.

That's an interesting hypothesis which easily fits into my estimated 90+ percent bucket of failure modes. I've got all kinds of such events in there, including things such as: there's no way to understand intelligence, there's no way to implement intelligence in computers, friendliness isn't meaningful, CEV is impossible, they don't have the right team to achieve it, hardware will never be fast enough, powerful corporations or governments will get there first, etc. My favorite is: no matter whether it's possible or not, we won't get there in time; basically, that it will take too long to be useful. I don't believe any of them, but I do think they have solid probabilities which add up to a great amount of difficulty.

But the future isn't set, they're just probabilities, and we can change them. I think we need to explore this as much as possible, to see what the real math looks like, to see how long it takes, to see how hard it really is. Because the payoffs or results of failure are in that same realm of 'astronomical'.

comment by Nick_Tarleton · 2011-01-05T04:00:56.594Z · LW(p) · GW(p)

A somewhat important correction:

There is too much at stake here to base the decision to neglect all other potential existential risks on the vague idea that intelligence might come up with something we haven't thought about.

To my knowledge, SIAI does not actually endorse neglecting all potential x-risks besides UFAI. (Analysis might recommend discounting the importance of fighting them head-on, but that analysis should still be done when resources are available.)

comment by timtyler · 2010-12-30T19:41:47.408Z · LW(p) · GW(p)

Intelligence is also not the solution to all other problems we face.

Not all of them - most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones - and so on. It probably won't fix the speed of light limit, though.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-12-30T20:02:23.029Z · LW(p) · GW(p)

Not all of them - most of them. War, hunger, energy limits, resource shortages, space travel, loss of loved ones - and so on. It probably won't fix the speed of light limit, though.

What makes you reach this conclusion? How can you think any of these problems can be solved by intelligence when none of them have been solved? I'm particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don't see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.

Replies from: laakeus, jimrandomh, shokwave, nshepperd, TheOtherDave, timtyler
comment by laakeus · 2010-12-31T07:05:03.042Z · LW(p) · GW(p)

I'm particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don't see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.

Violence has been declining on (pretty much) every timescale: Steven Pinker: Myth of Violence. I think one could argue that this is because of the greater collective intelligence of the human race.

comment by jimrandomh · 2010-12-30T20:05:57.125Z · LW(p) · GW(p)

I'm particularly perplexed by the claim that war would be solved by higher intelligence. Many wars are due to ideological priorities. I don't see how you can expect necessarily (or even with high probability) that ideologues will be less inclined to go to war if they are smarter.

War won't be solved by making everyone smarter, but it will be solved if a sufficiently powerful friendly AI takes over, as a singleton, because it would be powerful enough to stop everyone else from using force.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-12-30T20:07:03.764Z · LW(p) · GW(p)

Yes, that makes sense, but in context I don't think that's what was meant, since Tim is one of the people here who is more skeptical of that sort of result.

Replies from: timtyler
comment by shokwave · 2010-12-31T08:02:29.015Z · LW(p) · GW(p)

How can you think any of these problems can be solved by intelligence when none of them have been solved?

War has already been solved to some extent by intelligence (negotiations and diplomacy significantly decreased instances of war), hunger has been solved in large chunks of the world by intelligence, energy limits have been solved several times by intelligence, resource shortages ditto, intelligence has made a good first attempt at space travel (the moon is quite far away), and intelligence has made huge bounds towards solving the problem of loss of loved ones (vaccination, medical intervention, surgery, lifespans in the high 70s, etc).

Many wars are due to ideological priorities.

This is a constraint satisfaction problem (give as many ideologies as much of what they want as possible). Intelligence solves those problems.

comment by nshepperd · 2010-12-31T08:40:58.524Z · LW(p) · GW(p)

I have my doubts about war, although I don't think most wars really come down to conflicts of terminal values. I'd hope not, anyway.

However as for the rest, if they're solvable at all, intelligence ought to be able to solve them. Solvable means there exists a way to solve them. Intelligence is to a large degree simply "finding ways to get what you want".

Do you think energy limits really couldn't be solved by simply producing, through thought, working designs for safe and efficient fusion power plants?

ETA: ah, perhaps replace "intelligence" with "sufficient intelligence". We haven't solved all these problems already in part because we're not really that smart. I think fusion power plants are theoretically possible, and at our current rate of progress we should reach that goal eventually, but if we were smarter we would obviously achieve it faster.

comment by TheOtherDave · 2010-12-30T21:01:04.900Z · LW(p) · GW(p)

As various people have said, the original context was not making everybody more intelligent and thereby changing their inclinations, but rather creating an arbitrarily powerful superintelligence that makes their inclinations irrelevant. (The presumption here is typically that we know which current human inclinations such a superintelligence would endorse and which ones it would reject.)

But I'm interested in the context you imply (of humans becoming more intelligent).

My $0.02: I think almost all people who value war do so instrumentally. That is, I expect that most warmongers (whether ideologues or not) want to achieve some goal (spread their ideology, or amass personal power, or whatever) and they believe starting a war is the most effective way for them to do that. If they thought something else was more effective, they would do something else.

I also expect that intelligence is useful for identifying effective strategies to achieve a goal. (This comes pretty close to being true-by-definition.)

So I would only expect smarter ideologues (or anyone else) to remain warmongers if starting a war really was the most effective way to achieve their goals. And if that's true, everyone else gets to decide whether we'd rather have wars, or modify the system so that the ideologues have more effective options than starting wars (either by making other options more effective, or by making warmongering less effective, whichever approach is more efficient).

So, yes, if we choose to keep incentivizing wars, then we'll keep getting wars. But in that scenario war is evidently the least important problem we face, so we should be OK with that.

Conversely, if it turns out that war really is an important problem to solve, then I'd expect fewer wars.

comment by timtyler · 2010-12-30T20:09:37.710Z · LW(p) · GW(p)

I was about to reply - but jimrandomh said most of what I was going to say already - though he did so using that dreadful "singleton" terminology, spit.

I was also going to say that the internet should have got the 2010 Nobel peace prize.

comment by timtyler · 2010-12-30T19:28:29.482Z · LW(p) · GW(p)

And what do you expect them to conclude if even a professional AGI researcher, who has been a member of the SIAI, writes the following:

Every AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence -- are people associated with SIAI.

Is that really the idea? My impression is that the SIAI think machines without morals are dangerous, and that until there is more machine morality research, it would be "nice" if progress in machine intelligence was globally slowed down. If you believe that, then any progress - including constructing machine toddlers - could easily seem rather negative.

comment by timtyler · 2010-12-30T19:45:52.363Z · LW(p) · GW(p)

I just do not see that recursive self-improvement will happen any time soon and cause an intelligence explosion. What evidence is there against a gradual development?

Darwinian gradualism doesn't forbid evolution taking place rapidly. I can see evolutionary progress accelerating over the course of my own lifespan - which is pretty incredible considering that evolution usually happens on a scale of millions of years. More humans in parallel can do more science and engineering. The better their living standard, the more they can do. Then there are the machines...

Maybe some of the pressures causing the speed-up will slack off - but if they don't then humanity may well face a bare-knuckle ride into inner-space - and fairly soon.

comment by timtyler · 2010-12-30T19:35:07.093Z · LW(p) · GW(p)

Re: toddler-level machine intelligence.

Most toddlers can't program, but many teenagers can. The toddler is a step towards the teenager - and teenagers are notorious for being difficult to manage.

comment by timtyler · 2010-12-31T11:52:40.755Z · LW(p) · GW(p)

I just don't see that AGI implies self-improvement beyond learning what it can while staying in scope of its resources. You'd have to deliberately implement such an intention.

The usual cite given in this area is the paper The Basic AI Drives.

It suggests that open-ended goal-directed systems will tend to improve themselves - and to grab resources to help them fulfill their goals - even if their goals are superficially rather innocent-looking and make no mention of any such thing.

The paper starts out like this:

  1. AIs will want to self-improve - One kind of action a system can take is to alter either its own software or its own physical structure. Some of these changes would be very damaging to the system and cause it to no longer meet its goals. But some changes would enable it to reach its goals more effectively over its entire future. Because they last forever, these kinds of self-changes can provide huge benefits to a system. Systems will therefore be highly motivated to discover them and to make them happen. If they do not have good models of themselves, they will be strongly motivated to create them through learning and study. Thus almost all AIs will have drives towards both greater self-knowledge and self-improvement.

comment by hairyfigment · 2010-12-29T20:45:19.750Z · LW(p) · GW(p)

It would also likely not have the intention to increase its intelligence infinitely anyway. I just don't see that AGI implies self-improvement beyond learning what it can while staying in scope of its resources. You'd have to deliberately implement such an intention.

Well, some older posts had a guy praising "goal system zero", which meant a plan to program an AI with the minimum goals it needs to function as a 'rational' optimization process and no more. I'll quote his list directly:

(1) Increasing the security and the robustness of the goal-implementing process. This will probably entail the creation of machines which leave Earth at a large fraction of the speed of light in all directions and the creation of the ability to perform vast computations.

(2) Refining the model of reality available to the goal-implementing process. Physics and cosmology are the two disciplines most essential to our current best model of reality. Let us call this activity "physical research".

(End of list.)

This seems plausible to me as a set of necessary conditions. It also logically implies the intention to convert all matter the AI doesn't lay aside for other purposes (of which it has none, here) into computronium and research equipment. Unless humans for some reason make incredibly good research equipment, the zero AI would thus plan to kill us all. This would also imply some level of emulation as an initial instrumental goal. Note that sub-goal (1) implies a desire not to let instrumental goals like simulated empathy get in the way of our demise.

comment by timtyler · 2010-12-31T12:00:08.435Z · LW(p) · GW(p)

I believe that in this case an emulation would be the bigger risk because it would be sufficiently obscure and could pretend to be friendly for a long time while secretly strengthening its power.

Perhaps, though if we can construct such a thing in the first place we may be able to deep-scan its brain and read its thoughts pretty well - or at least see if it is lying to us and being deceptive.

IMO, the main problem there is with making such a thing in the first place before we have engineered intelligence. Brain emulations won't come first - even though some people seem to think they will.

comment by shokwave · 2010-12-29T19:01:05.973Z · LW(p) · GW(p)

Seconding this question.

comment by wedrifid · 2010-08-15T04:15:23.040Z · LW(p) · GW(p)

Writing the word 'assumption' has its limits as a form of argument. At some stage you are going to have to read the links given.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-15T08:41:54.186Z · LW(p) · GW(p)

This was a short critique of one of the links given. The first one I skimmed over; I wasn't impressed yet - at least not to the extent of having nightmares when someone tells me about bad AIs.

comment by Rain · 2010-08-15T01:39:57.107Z · LW(p) · GW(p)

I like how Nick Bostrom put it re: probabilities and interesting future phenomena:

I see philosophy and science as overlapping parts of a continuum. Many of my interests lie in the intersection. I tend to think in terms of probability distributions rather than dichotomous epistemic categories. I guess that in the far future the human condition will have changed profoundly, for better or worse. I think there is a non-trivial chance that this "far" future will be reached in this century. Regarding many big picture questions, I think there is a real possibility that our views are very wrong. Improving the ways in which we reason, act, and prioritize under this uncertainty would have wide relevance to many of our biggest challenges.

comment by Mitchell_Porter · 2010-08-13T11:09:51.677Z · LW(p) · GW(p)

Can I say, first of all, that if you want to think realistically about a matter like this, you will have to find better authorities than science-fiction writers. Their ideas are generally not their own, but come from scientific and technological culture or from "futurologists" (who are also a very mixed bunch in terms of intellect, realism, and credibility); their stories present speculation or even falsehood as fact. It may be worthwhile going "cold turkey" on all the SF you have ever read, bearing in mind that it's all fiction that was ground out, word by word, by some human being living a very ordinary life, in a place and time not very far from you. Purge all the imaginary experience of transcendence from your system and see what's left.

Of course science-fictional thinking, treating favorite authors as gurus, and so forth is endemic in this subculture. The very name, "Singularity Institute", springs from science fiction. And SF occasionally gets things right. But it is far more a phenomenon of the time, a symptom of real things, than a key to understanding reality. Plain old science is a lot closer to being a reliable guide to reality, though even there - treating science as your authority - there are endless ways to go wrong.

A lot of the discourse here and in similar places is science fiction minus plot, characters, and other story-telling apparatus. Just the ideas - often the utopia of the hard-SF fan, bored by the human interactions and wanting to get on with the transcendent stuff. With transhumanist and singularity culture, this utopia has arrived, because you can talk all day about these radical futurist ideas without being tied to a particular author or oeuvre. The ideas have leapt from the page and invaded our brains, where they live even during the dull hours of daylight life. Hallelujah!

So, before you evaluate SIAI and its significance, there are a few more ideas that I would like you to drive from your brain: The many-worlds metaphysics. The idea of trillion-year lifespans. The idea that the future of the whole observable universe depends on the outcome of Earth's experiment with artificial intelligence. These are a few of the science-fiction or science-speculation ideas which have become a fixture in the local discourse.

I'm giving you this lecture because so many of your doubts about LW's favorite crypto-SF ideas masquerading as reality, are expressed in terms of ... what your favorite SF writers and futurist gurus think! But those people all have the same problem: they are trying to navigate issues where there simply aren't authorities yet. Stross and Egan have exactly the same syndrome affecting everyone here who writes about mind copies, superintelligence, alien utility functions, and so on. They live in two worlds, the boring everyday world and the world of their imagination. The fact that they produce descriptions of whole fictional worlds in order to communicate their ideas, rather than little Internet essays, and the fact that they earn a living doing this... I'm not sure if that means they have the syndrome more under control, or less under control, compared to the average LW contributor.

Probably you already know this, probably everyone here knows it. But it needs to be said, however clumsily: there is an enormous amount of guessing going on here, and it's not always recognized as such, and furthermore, there isn't much help we can get from established authorities, because we really are on new terrain. This is a time of firsts for the human species, both conceptually and materially.

Now I think I can start to get to the point. Suppose we entertain the idea of a future where none of these scenarios involving very big numbers (lifespan, future individuals, galaxies colonized, amount of good or evil accomplished) apply, and where none of these exciting info-metaphysical ontologies turns out to be correct. A future which mostly remains limited in the way that all human history to date has been limited, limited in the ways which inspire such angst and such promethean determination to change things, or determination to survive until they change, among people who have caught the singularity fever. A future where everyone is still going to die, where the human race and its successors only last a few thousand years, not millions or billions of them. If that is the future, could SIAI still matter?

My answer is yes, because artificial intelligence still matters in such a future. For the sake of argument, I may have just poured cold water on a lot of popular ideas of transcendence, but to go further and say that only natural life and natural intelligence will ever exist really would be obtuse. If we do accept that "human-level" artificial intelligence is possible and is going to happen, then it is a matter at least as consequential as the possibility of genocide or total war. Ignoring, again for the sake of a limited argument, all the ideas about planet-sized AIs and superintelligence, it's still easy to see that AI which can out-think human beings and which has no interest in their survival ought to be possible. So even in this humbler futurology, AI is still an extinction risk.

The solution to the problem of unfriendly AI most associated with SIAI - producing the coherent extrapolated volition of the human race - is really a solution tailored to the idea of a single super-AI which undergoes a "hard takeoff", a rapid advancement in power. But SIAI is about a lot more than researching, promoting, and implementing CEV. There's really no organization like it in the whole sphere of "robo-ethics" and "ethical AI". The connection that has been made between "friendliness" and the (still scientifically unknown) complexities of the human decision-making process is a golden insight that has already justified SIAI's existence and funding many times over. And of course SIAI organizes the summits, and fosters a culture of discussion, both in real life and online (right here), which is a lot broader than SIAI's particular prescriptions.

So despite the excesses and enthusiasms of SIAI's advocates, supporters, and leading personalities, it really is the best thing we have going when it comes to the problem of unfriendly AI. Whether and how you personally should be involved with its work - only you can make that decision. (Even constructive criticism is a way of helping.) But SIAI is definitely needed.

Replies from: DSimon, None
comment by DSimon · 2010-08-14T01:12:52.195Z · LW(p) · GW(p)

Ignoring, again for the sake of a limited argument, all the ideas about planet-sized AIs and superintelligence, it's still easy to see that AI which can out-think human beings and which has no interest in their survival ought to be possible. So even in this humbler futurology, AI is still an extinction risk.

Voted up for this argument. I think the SIAI would be well-served for accruing donations, support, etc. by emphasizing this point more.

Space organizations might similarly argue: "You might think our wilder ideas are full of it, but even if we can't ever colonize Mars, you'll still be getting your satellite communications network."

comment by [deleted] · 2010-08-19T21:11:05.298Z · LW(p) · GW(p)

I hadn't thought of it this way, but on reflection of course it's true.

comment by Kaj_Sotala · 2010-08-13T16:00:54.737Z · LW(p) · GW(p)

Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).

This claim can be broken into two separate parts:

  1. Will we have human-level AI?
  2. Once we have human-level AI, will it develop to become superhuman AI?

For 1: looking at current technology trends, Sandberg & Bostrom estimate that we should have the technology needed for whole brain emulation around 2030-2050 or so, at least assuming that it gets enough funding and that Moore's law keeps up. Even if there isn't much of an actual interest in whole brain emulations, improving scanning tools are likely to revolutionize neuroscience. Of course, respected neuroscientists are already talking about reverse-engineering of the brain as being within reach. If we are successful at reverse engineering the brain, then AI is a natural result.

As for 2: as Eliezer mentioned, this is pretty much an antiprediction. Human minds are a particular type of architecture, running on a particular type of hardware: it would be an amazing coincidence if it just happened that our intelligence couldn't be drastically improved upon. We already know that we're insanely biased, to the point of people suffering death or collapses of national economies as a result. Computing power is going way up: on current trends, we could in, say, 20 years have computers that take only three seconds to think 25 years' worth of human thoughts.
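For what it's worth, here is the back-of-envelope arithmetic for the serial speedup factor that last figure implies (this is just unit conversion on my part, not a claim about whether hardware trends will actually deliver it):

```python
# Implied serial speedup: 25 years of human thought compressed into 3 seconds.
seconds_per_year = 365.25 * 24 * 3600
speedup = (25 * seconds_per_year) / 3
print(f"implied speedup: {speedup:.2e}x")  # roughly 2.6e8
```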

Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Molecular nanotechnology is not needed. As our society grows more and more dependent on the Internet, plain old-fashioned hacking and social engineering probably becomes more than sufficient to take over the world. Lethal micro-organisms can AFAIK be manufactured via the Internet even today.

The likelihood of exponential growth versus a slow development over many centuries.

Hardware growth alone would be enough to ensure that we'll be unable to keep up with the computers. Even if Moore's law ceased to be valid and we were stuck with a certain level of tech, there are many ways of gaining an advantage.

That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

Eliezer Yudkowsky is hardly the only person involved in SIAI's leadership. Michael Vassar is the current president, and e.g. the Visiting Fellows program is providing a constant influx of fresh views on the topics involved.

As others have pointed out, SIAI is currently the only organization around that's really taking care of this. It is not an inconceivable suggestion that another organization could do better, but SIAI's currently starting to reach the critical mass necessary to really have an impact. E.g. David Chalmers joining in on the discussion, and the previously mentioned Visiting Fellows program motivating various people to start their own projects. This year's ECAP conference will be featuring five conference papers from various SIAI-affiliated folks, and so on.

Any competing organization, especially if it was competing for the same donor base and funds, should have a well-argued case for what it can do that SIAI can't or won't. While SIAI's starting to get big, I don't think that its donor base is large enough to effectively support two different organizations working for the same goal. To do good, any other group would need to draw its primary funding from some other source, like the Future of Humanity Institute does.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-13T16:19:04.889Z · LW(p) · GW(p)

Lethal micro-organisms can AFAIK be manufactured via the Internet even today.

Do you have a citation for this? You can get certain biochemical compounds synthesized for you (there's a fair bit of a market for DNA synthesis) but that's pretty far from synthesizing microorganisms.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-08-13T17:34:36.860Z · LW(p) · GW(p)

Right, sorry. I believe the claim (which I heard from a biologist) was that you can get DNA synthesized for you, and in principle an AI or anyone who knew enough could use those services to create their own viruses or bacteria (though no human yet has that required knowledge). I'll e-mail the person I think I heard it from and ask for a clarification.

comment by Paul Crowley (ciphergoth) · 2010-08-13T07:55:07.074Z · LW(p) · GW(p)

Is there more to this than "I can't be bothered to read the Sequences - please justify everything you've ever said in a few paragraphs for me"?

Replies from: whpearson, HughRistik
comment by whpearson · 2010-08-13T08:13:29.467Z · LW(p) · GW(p)

My charitable reading is that he is arguing there will be other people like him, and that if SIAI wishes to continue growing, there needs to be easily digestible material.

Replies from: None
comment by [deleted] · 2010-08-13T12:45:49.102Z · LW(p) · GW(p)

From my experience as a long-time lurker and occasional poster, LW is not easily accessible to new users. The Sequences are indeed very long and time consuming, and most of them have multiple links to other posts you are supposed to have already read, creating confusion if you should happen to forget the gist of a particular post. Besides, Eliezer draws a number of huge philosophical conclusions (reductionism, computationalism, MWI, the Singularity, etc.), and a lot of people aren't comfortable swallowing all of that at once. Indeed, the "why should I buy all this?" question has popped into my head many times while reading.

Furthermore, I think criticism like this is good, and the LW crowd should not have such a negative reaction to it. After all, the Sequences do go on and on about not getting unduly emotionally attached to beliefs; if the community can't take criticism, that is probably a sign that it is getting a little too cozy with its current worldview.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-13T12:57:48.558Z · LW(p) · GW(p)

Criticism is good, but this criticism isn't all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.

Replies from: None, XiXiDu, HughRistik
comment by [deleted] · 2010-08-13T13:07:11.979Z · LW(p) · GW(p)

Agreed--criticism of this sort vaguely reminds me of criticism of evolution in that it attacks a particular part of the desired target rather than its fundamental assumptions (my apologies to the original poster). Still, I think we should question the Sequences as much as possible, and even misguided criticism can be useful. I'm not saying we should welcome an unending series of top-level posts like this, but I for one would like to see critical essays on some of LW's most treasured posts. (There goes my afternoon...)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-13T13:50:08.794Z · LW(p) · GW(p)

Of course, substantive criticism of specific arguments is always welcome.

comment by XiXiDu · 2010-08-13T19:14:51.374Z · LW(p) · GW(p)

My primary point was to inquire about the foundation and credibility of named chain of reasoning. Is it a coherent internal logic that is reasoning about itself or is it based on firm ground?

Take the following example: A recursively evolving AGI quickly reaches a level that can be considered superhuman. As no advanced nanotechnology was necessary for its construction, it is so far awfully limited in what it can accomplish given its vast and fast intellect. Thus it solves all open problems associated with advanced nanotechnology and secretly mails its solutions to a researcher. This researcher is very excited and consequently builds a corporation around this new technology. Later the AGI buys the stocks of that company and plants a front man. Due to some superhuman social engineering it finally obtains control of the technology...

At this point we are already deep into subsequent reasoning about something shaky that is at the same time used as evidence for the very reasoning involving it. Taking a conclusion and running with it, building a huge framework of further conclusions around it, is in my opinion questionable. First this conclusion has to yield at least marginal evidence of its feasibility; only then can you create further hypotheses about its consequences. You are making estimations within a framework that is itself not based on firm ground. The gist of what I was trying to say is: do not subsequently base conclusions and actions on other conclusions which themselves do not bear evidence.

I was inquiring about the supportive evidence at the origin of your complex multi-step extrapolations argued to be from inductive generalizations. If there isn't any, what difference is there between writing fiction and complex multi-step extrapolations argued to be from inductive generalizations?

I've read and heard enough to be in doubt since I haven't come across a single piece of evidence besides some seemingly sound argumentation (as far as I can tell) in favor of some basic principles of unknown accuracy. And even those arguments are sufficiently vague that you cannot differentiate them from mere philosophical musing.

In the case of the SIAI it rather seems to be that there are hypotheses based on other hypotheses that are not yet tested.

comment by HughRistik · 2010-08-13T19:02:49.685Z · LW(p) · GW(p)

Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying "justify yourselves!" doesn't advance the debate.

Disagree. If you are asking people for money (and they are paying you), the burden is on you to provide justification at multiple levels of detail to your prospective or current donors.

But, but... then you'll have to, like, repeat yourself a lot!

No shit. If you want to change the world, be prepared to repeat yourself a lot.

comment by HughRistik · 2010-08-13T18:54:48.602Z · LW(p) · GW(p)

If so... is that request bad?

If you are running a program where you are trying to convince people on a large scale, then you need to be able to provide overviews of what you are saying at various levels of resolution. Getting annoyed (at one of your own donors!) for such a request is not a way to win.

Edit: At the time, Eliezer didn't realize that XiXiDu was a donor.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-13T23:38:38.553Z · LW(p) · GW(p)

Getting annoyed (at one of your own donors!) for such a request is not a way to win.

I don't begrudge SIAI at all for using Less Wrong as a platform for increasing its donor base, but I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor. You can ask Eliezer to not get annoyed, but is it fair to expect all the other LW regulars to do the same as well?

I'm not sure what the solution is to this problem, but I'm hoping that somebody is thinking about it.

Replies from: HughRistik, cata
comment by HughRistik · 2010-08-14T00:27:12.125Z · LW(p) · GW(p)

I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor.

Me too. The reason I upvoted this post was because I hoped it would stimulate higher quality discussion (whether complimentary, critical, or both) of SIAI in the future. I've been hoping to see such a discussion on LW for a while to help me think through some things.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-15T19:12:33.350Z · LW(p) · GW(p)

In other words, you see XiXiDu's post as the defector in the Asch experiment who chooses C when the group chooses B but the right answer is A?

comment by cata · 2010-08-14T00:42:59.811Z · LW(p) · GW(p)

To be fair, I don't think XiXiDu expected special treatment for being a donor; he didn't even mention it until Eliezer basically claimed that he was being insincere about his interest. (EDIT: Thanks to Wei Dai, I see he did mention it. No comment on motivations, then.)

I think that Eliezer's statement is not an expression of a desire to give donors special treatment in general; it's a reflection of the fact that, knowing Xi is a donor and proven supporter of SIAI, he then ought to give Xi's criticism of SIAI more credit for being sincere and worth addressing somehow. If Xi were talking about anything else, it wouldn't be relevant.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-14T00:52:33.384Z · LW(p) · GW(p)

He mentioned it earlier in a comment reply to Eliezer, and then again in the post itself:

That is, I'm donating to the SIAI but also spend considerable amounts of resources maximizing utility at present.

comment by orthonormal · 2010-08-12T17:58:36.102Z · LW(p) · GW(p)

These are reasonable questions to ask. Here are my thoughts:

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go).
  • Advanced real-world molecular nanotechnology (the grey goo kind the above intelligence could use to mess things up).

Virtually certain that these things are possible in our physics. It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI. But this possibility seems slimmer when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading) and it's hard to imagine that recursive improvement would cap out any time soon. At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.

  • The likelihood of exponential growth versus a slow development over many centuries.
  • That it is worth it to spend most on a future whose likelihood I cannot judge.

These are more about decision theory than logical uncertainty, IMO. If a self-improving AI isn't actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash. If it is possible soon, then it's a vital factor in existential risk. You'd have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.

For the other, this falls under the fuzzies and utilons calculation. Insofar as you want to feel confident that you're helping the world (and yes, any human altruist does want this), pick a charity certain to do good in the present. Insofar as you actually want to maximize your expected impact, you should weight charities by their uncertainty and their impact, multiply it out, and put all your eggs in the best basket (unless you've just doubled a charity's funds and made them less marginally efficient than the next one on your list, but that's rare).
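A minimal sketch of "multiply it out and put all your eggs in the best basket", with invented numbers standing in for the uncertainty and impact estimates (none of these figures describe any real charity):

```python
# Rank giving options by expected impact = probability of success x impact if successful.
# The option names and numbers are placeholders, not estimates of real charities.
options = {
    "certain_present_good": {"p_success": 0.95, "impact_if_success": 1e3},
    "speculative_far_future": {"p_success": 1e-6, "impact_if_success": 1e12},
}

expected = {name: o["p_success"] * o["impact_if_success"] for name, o in options.items()}
best = max(expected, key=expected.get)
print(expected, "->", best)
```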

  • That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

Aside from any considerations in his favor (development of TDT, for one publicly visible example), this sounds too much like a price for joining - if you really take the risk of Unfriendly AI seriously, what else could you do about it? In fact, the more well-known SIAI gets in the AI community and the more people take it seriously, the more likely that it will (1) instill in other GAI researchers some necessary concern for goal systems and (2) give rise to competing Friendly AI projects which might improve on SIAI in any relevant respects. Unless you thought they were doing as much harm as good, it still seems optimal to fund SIAI now if you're concerned about self-improving AI.

Further, do you have an explanation for the circumstance that Eliezer Yudkowsky is the only semi-popular person who has figured all this out?

My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI rather than doing other things with their life. There's a very unsurprising selection bias here.

ETA: Reading the comments, I just found that XiXiDu has not actually read the Sequences before claiming that the evidence presented is inadequate. I've downvoted this post, and I now feel kind of stupid for having written out this huge reply.

Replies from: whpearson, XiXiDu
comment by whpearson · 2010-08-12T19:56:57.693Z · LW(p) · GW(p)

Virtually certain that these things are plausible.

What do you mean by plausible in this instance? Not currently refuted by our theories of intelligence or chemistry? Or something stronger.

Replies from: orthonormal
comment by orthonormal · 2010-08-12T23:59:12.116Z · LW(p) · GW(p)

Oh yeah, oops, I meant to say "possible in our physics". Edited accordingly.

comment by XiXiDu · 2010-08-13T09:07:15.948Z · LW(p) · GW(p)

It's possible that transhuman AI is too difficult for human beings to feasibly program, in the same way that we're sure chimps couldn't program trans-simian AI.

Where is the evidence that supports the claims that it is not only possible, but that it will also turn out to be MUCH smarter than a human being, not just more rational or faster? Where is the evidence for an intelligence explosion? Is action justified simply based on the mere possibility that it might be physically possible?

...when you consider that humans will start boosting their own intelligence pretty soon by other means (drugs, surgery, genetic engineering, uploading)...

Not even your master believes this.

At some point we'll have a descendant who can figure out self-improving AI; it's just a question of when.

Yes, once they have turned themselves into superhuman intelligences? Isn't this what Kurzweil believes? No risk from superhuman AI because we'll go the same way anyway?

If a self-improving AI isn't actually possible for a long time, then funding SIAI (and similar projects, when they arise) is a waste of cash.

Yep.

You'd have to have strong evidence against the possibility of rapid self-improvement for Friendly AI research to be a bad investment within the existential risk category.

Yes, but to allocate all my eggs to them? Remember, they ask for more than simple support.

Insofar as you actually want to maximize your expected impact...

I want to maximize my expected survival. If there are medium-term risks that could kill me with a higher probability than AI will in the future, those are just as important as the AI killing me later.

...development of TDT...

Highly interesting. Sadly it is not a priority.

...if your really take the risk of Unfriendly AI seriously, what else could you do about it?

I could, for example, start my own campaign to make people aware of possible risks. I could talk to people. I bet there's a lot more you smart people could do besides supporting EY.

...the more well-known SIAI gets in the AI community.

The SIAI, and especially EY, does not have the best reputation within the x-risk community, and I bet that's the same in the AI community.

Unless you thought they were doing as much harm as good...

That might very well be the case given how they handle public relations.

My best guess is that the first smart/motivated/charismatic person who comes to these conclusions immediately tries to found something like SIAI.

He wasn't the first smart person who came to these conclusions. And he sure isn't charismatic.

XiXiDu has not actually read the Sequences before claiming that the evidence presented is inadequate.

I've read and heard enough to be in doubt since I haven't come across a single piece of evidence besides some seemingly sound argumentation (as far as I can tell) in favor of some basic principles of unknown accuracy. And even those arguments are sufficiently vague that you cannot differentiate them from mere philosophical musing.

And if you feel stupid because I haven't read hundreds of articles to find a single piece of third party evidence in favor of the outstanding premises used to ask for donations, then you should feel stupid.

Replies from: kodos96
comment by kodos96 · 2010-08-13T09:52:06.887Z · LW(p) · GW(p)

Since I've now posted several comments on this thread defending and/or "siding with" XiXiDu, I feel I should state, for the record, that I think this last comment is a bit over the line, and I don't want to be associated with the kind of unnecessarily antagonistic tone displayed here.

Although there are a couple pieces of the SIAI thesis that I'm not yet 100% sold on, I don't reject it in its entirety, as it now sounds like XiXiDu does - I just want to hear some more thorough explanation on a couple of sticking points before I buy in.

Also, charisma is in the eye of the beholder ;)

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2010-08-13T10:34:05.872Z · LW(p) · GW(p)

I think I should say more about this. That EY has no charisma is, I believe, a reasonable estimation. Someone who says of himself that he's not neurotypical likely isn't a very appealing person in the eyes of the average person. I have also gathered a lot of evidence in the form of direct comments about EY that show that many people do not like him personally.

Now let's examine whether I am hostile to EY and his movement. First, a comment I made regarding Michael Anissimov's 26th birthday. I wrote:

Happy birthday!

I’m also 26…I’ll need another 26 years to reach your level though :-)

I’ll donate to SIAI again as soon as I can.

And keep up this great blog.

Have fun!!!

Let's examine my opinion about Eliezer Yudkowsky.

  • Here I suggest that EY is the most admirable person.
  • When I recommended reading Good and Real to a professional philosopher I wrote, "Don't know of a review, a recommendation by Eliezer Yudkowsky as 'great' is more than enough for me right now."
  • Here is a long discussion with some physicists in which I try to defend MWI by linking them to EY's writings. Note: It is a backup since I deleted my comments there, as I was angered by their hostile tone.

There is a lot more, which I'm too lazy to look up now. You can check for yourself: I'm promoting EY and the SIAI all the time, everywhere.

And I'm pretty disappointed that rather than answering my questions or linking me up to some supportive background information, I mainly seem to be dealing with a bunch of puffed up adherents.

comment by XiXiDu · 2010-08-13T10:05:14.255Z · LW(p) · GW(p)

...and I don't want to be associated with the kind of unnecessarily antagonistic tone displayed here.

Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments? Here are the first two replies, from people in academia whom I wrote to about this post, addressing EY:

Wow, that's an incredibly arrogant put-down by Eliezer... SIAI won't win many friends if he puts things like that...

and

...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.

Replies from: kodos96
comment by kodos96 · 2010-08-13T10:15:46.248Z · LW(p) · GW(p)

Have you seen me complaining about the antagonistic tone that EY is exhibiting in his comments?

I have been pointing that out as well - although I would describe his reactions more as "defensive" than "antagonistic". Regardless, it seemed to be out of character for Eliezer. Do the two of you have some kind of history I'm not aware of?

comment by Wei Dai (Wei_Dai) · 2010-08-12T17:37:16.741Z · LW(p) · GW(p)

I think Vernor Vinge at least has made a substantial effort to convince people of the risks ahead. What do you think A Fire Upon the Deep is? Or, here is a more explicit version:

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post- Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self- aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new enviroment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)

I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans' natural competitiveness and the possibilities inherent in technology. And yet ... we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:

He goes on to talk about intelligence amplification, and then:

Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendance.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety ... well, some of the suggestions are a little scarey on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today's world, one where losers take on the winners' tricks and are coopted into the winners' enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T08:37:04.559Z · LW(p) · GW(p)

As I wrote in another comment, Eliezer Yudkowsky hasn't come up with anything unique. And it is no argument to say that he's simply the smartest fellow around, since clearly other people have come up with the same ideas before him. And that was my question: why are they not signaling their support for the SIAI? Or, in case they don't know about the SIAI, why are they not using all their resources and publicity to try to stop the otherwise inevitable apocalypse?

It looks like there might be arguments against the kind of fearmongering that can be found within this community. So why is nobody out to inquire about the reasons for the great silence among those who are aware of a possible singularity but nevertheless keep quiet? Maybe they know something you don't - or are you people so sure of your phenomenal intelligence?

Replies from: CarlShulman, Unknowns
comment by CarlShulman · 2010-08-13T11:47:13.080Z · LW(p) · GW(p)

David Chalmers has been writing and presenting to philosophers about AI and intelligence explosion since giving his talk at last year's Singularity Summit. He estimates the probability of human-level AI by 2100 at "somewhat more than one-half," thinks an intelligence explosion following that quite likely, and considers possible disastrous consequences quite important relative to other major causes today. However, he had not written or publicly spoken about his views, and probably would not have for quite some time had he not been invited to the Singularity Summit.

He reports a stigma around the topic as a result of the combination of science-fiction associations and the early failures of AI, and the need for some impetus to brave that. Within the AI field, there is also a fear that discussion of long-term risks, or of unlikely short-term risks, may provoke hostile reactions against the field thanks to public ignorance and the affect heuristic. Comparisons are made to genetic engineering of agricultural crops, where public attention seems to be harmful on net in unduly slowing the development of more productive plants.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T12:22:59.952Z · LW(p) · GW(p)

Thanks. This is more of what I think you call rational evidence, from an outsider. But it doesn't answer the primary question of my post. How do you people arrive at the estimations you state? Where can I find the details of how you arrived at your conclusions about the likelihood of those events?

If all this was supposed to be mere philosophy, I wouldn't inquire about it to such an extent. But the SIAI is asking for the better part of your income and resources. There are strong claims being made by Eliezer Yudkowsky and calls for action. Is it reasonable to follow given the current state of evidence?

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T14:14:35.063Z · LW(p) · GW(p)

But the SIAI is asking for the better part of your income and resources.

If you are a hard-core consequentialist altruist who doesn't balance against other less impartial desires you'll wind up doing that eventually for something. Peter Singer's "Famine, Affluence, and Morality" is decades old, and there's still a lot of suffering to relieve. Not to mention the Nuclear Threat Initiative, or funding research into DNA vaccines, or political lobbying, etc. The question of how much you're willing to sacrifice in exchange for helping various numbers of people or influencing extinction risks in various ways is separate from data about the various options. No one is forcing you to reduce existential risk (except insofar as tax dollars go to doing so), certainly not to donate.

I'll have more to say on substance tomorrow, but it's getting pretty late. My tl;dr take would be that with pretty conservative estimates on total AI risk, combined with the lack of short term motives to address it (the threat of near-term and moderate scale bioterrorism drives research into defenses, not the fear of extinction-level engineered plagues; asteroid defense is more motivated by the threat of civilization or country-wreckers than the less common extinction-level events; nuclear risk reduction was really strong only in the face of the Soviets, and today the focus is still more on nuclear terrorism, proliferation, and small scale wars; climate change benefits from visibly already happening and a social movement built over decades in tandem with the existing environmentalist movement), there are still low-hanging fruit to be plucked. [That parenthetical aside somewhat disrupted the tl;dr billing, oh well...] When we get to the point where a sizable contingent of skilled folk in academia and elsewhere have gotten well into those low-hanging fruit, and key decision-makers in the relevant places are likely to have access to them in the event of surprisingly quick progress, that calculus will change.

Replies from: timtyler
comment by timtyler · 2010-08-14T11:13:19.795Z · LW(p) · GW(p)

It seems obvious why those at the top of charity pyramids support utilitarian ethics - their funding depends on it. The puzzle here is why they find so many suckers to exploit.

One might think that those who were inclined to give away their worldly goods to help the needy would have bred themselves out of the gene pool long ago - but evidently that is not the case.

Perhaps one can invoke the unusual modern environment. Maybe in the ancestral environment, helping others was more beneficial - since the high chance of repeated interactions made reciprocal altruism work better. However, if people donate to help feed starving millions half way around the world, the underlying maths no longer adds up - resulting in what was previously an adaptive behaviour leading to failure in modern situations - maladaptive behaviour as a result of an unfamiliar environment.

One might expect good parents to work to keep their kids away from utilitarian cults - which feed off the material resources of their members - on the grounds that such organisations may systematically lead to a lack of grandchildren. "Interventions" may be required to extricate the entangled offspring from the feeding tentacles of these parasitic entities that exploit people's cognitive biases for their own ends.

Replies from: jimrandomh, wedrifid
comment by jimrandomh · 2010-08-15T00:36:15.767Z · LW(p) · GW(p)

It seems obvious why those at the top of charity pyramids support utilitarian ethics - their funding depends on it. The puzzle here is why they find so many suckers to exploit.

This reads like an attack on utilitarian ethics, but there's an extra inferential step in the middle which makes it compatible with utilitarian ethics being correct. Are you claiming that utilitarian ethics are wrong? Are you claiming that most charities are actually fraudulent and don't help people?

"charity pyramid" ... "good parents work to keep their kids away" ... "utilitarian cults" ... "feeding tentacles of these parasitic entities that exploit ... for their own ends"

Wow, my propagandometer is pegged. Why did you choose this language? Isn't exploiting people for their own ends incompatible with being utilitarian? Do you have any examples of charities structured like pyramid schemes, or as cults?

Replies from: timtyler
comment by timtyler · 2010-08-15T06:51:59.786Z · LW(p) · GW(p)

"Are you claiming that utilitarian ethics are wrong?"

"Right" and "wrong" are usually concepts that are applied with respect to an ethical system. Which ethical system am I expected to assume when trying to make sense of this questiion?

"Are you claiming that most charities are actually fraudulent and don't help people?"

No - I was not talking about that.

"Isn't exploiting people for their own ends incompatable with being utilitarian?"

If a charity's goals include "famine relief", then considerable means would be justified by that - within a utilitarian framework.

"Charity pyramids" was a loosely-chosen term. There may be some pyramid structure - but the image I wanted to convey was of a cause with leader(s) preaching the virtues of utilitarianism - being supported in their role by a "base" of "suckers" - individuals who are being duped into giving many of their resources to the cause.

Superficially, the situation represents a bit of a Darwinian puzzle: Are the "suckers" being manipulated? Have they been hypnotised? Do they benefit in some way by the affiliation? Are they fooled into treating the cause as part of their extended family? Are they simply broken? Do they aspire to displace the leader? Have their brains been hijacked by pathogenic memes? What is going on?

comment by wedrifid · 2010-08-14T12:00:34.471Z · LW(p) · GW(p)

It seems obvious why those at the top of charity pyramids support utilitarian ethics - their funding depends on it. The puzzle here is why they find so many suckers to exploit.

It helps that just pointing out observations like this is almost universally punished. Something to do with people on the top of pyramids having more power...

For my part I would upvote your comment another few times if I could but I note that someone else has downvoted you.

Replies from: timtyler
comment by timtyler · 2010-08-14T12:20:30.492Z · LW(p) · GW(p)

Another aspect of it is that people try and emulate charismatic leaders - in the hope of reproducing their success. If the guru says to give everything to the guru then the followers sometimes comply - because it is evident that the guru has things sussed - and is someone to be copied and emulated. Sometimes this strategy works - and it is possible for a cooperative follower to rise to power within the cult. However, if the gurus' success is largely down to their skill at feeding off their followers, the gurus are often heavily outnumbered.

comment by Unknowns · 2010-08-13T09:36:40.393Z · LW(p) · GW(p)

http://www.overcomingbias.com/2007/02/what_evidence_i.html

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T09:56:20.379Z · LW(p) · GW(p)

Absence of evidence is not evidence of absence?

There's simply no good reason to argue against cryonics. It is a chance in case of the worst-case scenario, and it is a considerably better chance than rotting six feet under.

Have you thought about the possibility that most experts are simply reluctant to come up with detailed critiques of specific issues posed by the SIAI, EY and LW? Maybe they consider it not worth the effort, as the data that is already available does not justify the given claims in the first place.

Anyway, I think I might write some experts and all of the people mentioned in my post, if I'm not too lazy.

I've already got one reply, from someone whom I'm not going to name right now. But let's first consider Yudkowsky's attitude when addressing other people:

You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong...

Now the first of those people I contacted about it:

There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.

Read Moral Machines for current state of the art thinking on how to build a moral machine mind.

SIAI dogma makes sense if you ignore the uncertainties at every step of their logic. It's like assigning absolute numbers to every variable in the Drake equation and determining that aliens must be all around us in the solar system, and starting a church on the idea that we are being observed by spaceships hidden on the dark side of the moon. In other words, religious thinking wrapped up to look like rationality.

ETA

I was told the person I quoted above is stating outright ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that the person might indeed not have been honest, or clueful. Otherwise I'll unnecessarily end up perpetuating possible ad hominem attacks.

Replies from: utilitymonster, thomblake, Rain, timtyler, Unknowns
comment by utilitymonster · 2010-08-13T12:26:28.012Z · LW(p) · GW(p)

I feel some of the force of this...I do think we should take the opinions of other experts seriously, even if their arguments don't seem good.

I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T12:44:34.628Z · LW(p) · GW(p)

I sort of think that many of these criticisms of SIAI turn on not being Bayesian enough. Lots of people only want to act on things they know, where knowing requires really solid evidence, the kind of evidence you get through conventional experimental science, with low p-values and all. It is just impossible to have that kind of robust confidence about the far future. So you're going to have people just more or less ignore speculative issues about the far future, even if those issues are by far the most important. Once you adopt a Bayesian perspective, and you're just interested in maximizing expected utility, the complaint that we don't have a lot of evidence about what will be best for the future, or the complaint that we just don't really know whether SIAI's mission and methodology are going to work seems to lose a lot of force.

I have some sympathy for your remark.

The real question is just whether SIAI has greatly overestimated at least one of the relevant probabilities. I have high confidence that the SIAI staff have greatly overestimated their ability to have a systematically positive impact on existential risk reduction.

Replies from: utilitymonster
comment by utilitymonster · 2010-08-13T13:07:20.053Z · LW(p) · GW(p)

Have you read Nick Bostrom's paper, Astronomical Waste? You don't have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.

Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough. (I agree that this kind of argument is worrisome; maybe expected utility theory or utilitarianism breaks down with these huge numbers and tiny probabilities, but it is worth thinking about.)
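
The arithmetic behind "could be enough" is worth spelling out. A minimal sketch, assuming Bostrom-style numbers (the 10^52 figure for potential future lives is an illustrative order of magnitude, not a settled estimate):

```python
# Illustrative arithmetic only; both inputs are stand-in orders of magnitude.
potential_future_lives = 1e52   # Bostrom-style upper-end estimate of future lives
risk_reduction = 1e-18          # the tiny probability shift under discussion

expected_lives_saved = potential_future_lives * risk_reduction
print(f"{expected_lives_saved:.0e} expected life-equivalents")  # 1e+34
# Even a 1e-18 reduction dominates any plausible down-to-earth intervention
# *if* you take expected utility maximization and the inputs at face value.
```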

If you're sold on x-risk, are there some candidate other things that might have higher expectations of x-risk reductions on the margin (after due reflection)? (I'm not saying SIAI clearly wins, I just want to know what else you're thinking about.)

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T16:13:30.885Z · LW(p) · GW(p)

Have you read Nick Bostrom's paper, Astronomical Waste? You don't have to be able to affect the probabilities by very much for existential risk to be the thing to worry about, especially if you have a decent dose of credence in utilitarianism.

Is there a decent chance, in your view, of decreasing x-risk by 10^-18 if you put all of your resources into it? That could be enough.

I agree with you about what you say above. I personally believe that it is possible for individuals to decrease existential risk by more than 10^(-18) (though I know reasonable people who have at one time or another thought otherwise).

If you're sold on x-risk, are there some candidate other things that might have higher expectations of x-risk reductions on the margin (after due reflection)? (I'm not saying SIAI clearly wins, I just want to know what else you're thinking about.)

Two points to make here:

(i) Though there's huge uncertainty in judging these sorts of things and I'm by no means confident in my view on this matter, I presently believe that SIAI is increasing existential risk through unintended negative consequences. I've written about this in various comments, for example here, here and here.

(ii) I've thought a fair amount about other ways in which one might hope to reduce existential risk. I would cite the promotion and funding of an asteroid strike prevention program as a possible candidate. As I discuss here, placing money in a donor advised fund may be the best option. I wrote out much more detailed thoughts on these points which I can send you by email if you want (just PM me) but which are not yet ready for posting in public.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T17:56:22.060Z · LW(p) · GW(p)

I agree that 'poisoning the meme' is a real danger, and that SIAI has historically had both positives and negatives with respect to its reputational effects. My net expectation for it at the moment is positive, but I'll be interested to hear your analysis when it's ready. [Edit: apparently the analysis was about asteroids, not reputation.]

Here's the Fidelity Charitable Gift Fund for Americans. I'm skeptical about asteroid defense in light of recent investments in that area and the technology curve, although there is potential for demonstration effects (good and bad) with respect to more likely risks.

comment by thomblake · 2010-08-13T18:11:11.743Z · LW(p) · GW(p)

read Moral Machines for current state of the art thinking on how to build a moral machine mind.

It's hardly that. Moral Machines is basically a survey; it doesn't go in-depth into anything, but it can point you in the direction of the various attempts to implement robot / AI morality.

And Eliezer is one of the people it mentions, so I'm not sure how that recommendation was supposed to advise against taking him seriously. (Moral Machines, page 192)

Replies from: thomblake
comment by thomblake · 2010-08-18T20:00:20.162Z · LW(p) · GW(p)

To follow up on this, Wendell specifically mentions EY's "friendly AI" in the intro to his new article in the Ethics and Information Technology special issue on "Robot ethics and human ethics".

comment by Rain · 2010-08-13T13:52:45.926Z · LW(p) · GW(p)

[...] many reasons to doubt [...] belief system of a cult [...] haphazard musings of a high school dropout [...] never written a single computer program [...] professes to be an expert [...] crying chicken little [...] only a handful take the FAI idea seriously.

[...] dogma [...] ignore the uncertainties at every step [...] starting a church [...] religious thinking wrapped up to look like rationality.

I am unable to take this criticism seriously. It's just a bunch of ad hominem and hand-waving. What are the reasons to doubt? How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview? How is a fiercely atheist group religious at all? How is it a cult (there are lots of posts about this in the LessWrong archive)? How is it irrational?

Edit: And I'm downvoted. You actually think a reply that's 50% insult and emotionally loaded language has substance that I should be engaging with? I thought it was a highly irrational response on par with anti-cryonics writing of the worst order. Maybe you should point out the constructive portion.

Replies from: HughRistik, NihilCredo, timtyler, thomblake
comment by HughRistik · 2010-08-13T18:23:33.265Z · LW(p) · GW(p)

The response by this individual seems like a summary, rather than an argument. The fact that someone writes a polemical summary of their views on a subject doesn't tell us much about whether their views are well-reasoned or not. A polemical summary is consistent with being full of hot air, but it's also consistent with having some damning arguments.

Of course, to know either way, we would have to hear this person's actual arguments, which we haven't, in this case.

How are they ignoring the uncertainties when they list them on their webpage and bring them up in every interview?

Just because a certain topic is raised doesn't mean that it is discussed correctly.

How is a fiercely atheist group religious at all?

The argument is that their thinking has some similarities to religion. It's a common rhetorical move to compare any alleged ideology to religion, even if that ideology is secular.

How is it a cult (there are lots of posts about this in the LessWrong archive)?

The fact that EY displays an awareness of cultish dynamics doesn't necessarily mean that SIAI avoids them. Personally, I buy most of Eliezer's discussion that "every cause wants to become a cult," and I don't like the common practice of labeling movements as "cults." The net for "cult" is being drawn far too widely.

Yet I wouldn't say that the use of the word "cult" means that the individual is engaging in bad reasoning. While I think "cult" is generally a misnomer, it's generally used as short-hand for a group having certain problematic social-psychological qualities (e.g. conformity, obedience to authority). The individual could well be able to back those criticisms up. Who knows.

We would need to hear this individual's actual arguments to be able to evaluate whether the polemical summary is well-founded.

P.S. I wasn't the one who downvoted you.

Edit:

high school dropout, who has never written a single computer program

I don't know the truth of these statements. The second one seems dubious, but it might not be meant to be taken literally ("Hello World" is a program). If Eliezer isn't a high school dropout, and has written major applications, then the credibility of this writer is lowered.

comment by NihilCredo · 2010-08-15T00:44:45.505Z · LW(p) · GW(p)

I believe you weren't supposed to engage with that reply, which is a dismissal more than a criticism. I believe you were supposed to take a step back and use it as a hint as to why the SIAI's yearly budget is 5 x 10^5 rather than 5 x 10^9 USD.

comment by timtyler · 2010-08-14T12:43:06.199Z · LW(p) · GW(p)

Re: "How is it a cult?"

It looks a lot like an END OF THE WORLD cult. That is a well-known subspecies of cult - e.g. see:

http://en.wikipedia.org/wiki/Doomsday_cult

"The End of the World Cult"

The END OF THE WORLD acts as a superstimulus to human fear mechanisms - and causes caring people to rush to warn their friends of the impending DOOM - spreading the panic virally. END OF THE WORLD cults typically act by stimulating this energy - and then feeding from it. The actual value of p(DOOM) is not particularly critical for all this.

The net effect on society of the FEARMONGERING that usually results from such organisations seems pretty questionable. Some of those who become convinced that THE END IS NIGH may try and prevent it - but others will neglect their future plans, and are more likely to rape and pillage.

My "DOOM" video has more - http://www.youtube.com/watch?v=kH31AcOmSjs

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-14T13:51:06.580Z · LW(p) · GW(p)

Slight sidetrack:

There is, of course, one DOOM scenario (ok, one other DOOM scenario) which is entirely respectable here-- that the earth will be engulfed when the sun becomes a red giant.

That fate for the planet haunted me when I was a kid. People would say "But that's billions of years in the future" and I'd feel as though they were missing the point. It's possible that a more detailed discussion would have helped....

Recently, I've read that school teachers have a standard answer for kids who are troubled by the red giant scenario [1]-- that people will have found a solution by then.

This seems less intellectually honest than "The human race will be long gone anyway", but not awful. I think the most meticulous answer (aside from "that's the far future and there's nothing to be done about it now") is "that's so far in the future that we don't know whether people will be around, but if they are, they may well find a solution."

[1] I count this as evidence for the Flynn Effect.

comment by thomblake · 2010-08-13T18:06:49.728Z · LW(p) · GW(p)

Edit: And I'm downvoted.

Downvoted for this.

comment by timtyler · 2010-08-14T12:38:59.012Z · LW(p) · GW(p)

Re: "haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI."

This opinion sounds poorly researched - e.g.: "This document was created by html2html, a Python script written by Eliezer S. Yudkowsky." - http://yudkowsky.net/obsolete/plan.html

Replies from: XiXiDu, jimrandomh
comment by XiXiDu · 2010-08-14T13:44:59.305Z · LW(p) · GW(p)

I posted that quote to put into perspective what others think of EY and his movement compared to what he thinks about them. Given that he thinks the same about those people, i.e. that their opinion isn't worth much and that the LW crowd is much smarter anyway, it highlights an important aspect of the almost non-existent cooperation between him and academia.

comment by jimrandomh · 2010-08-14T14:12:47.995Z · LW(p) · GW(p)

I don't think one possibly-trivial Python script (whose source code I am unable to find) counts as much evidence. It sets a lower bound, but a very loose one. I have no idea whether Eliezer can program, and my prior says that any given person is extremely unlikely to have real programming ability unless proven otherwise. So I assume he can't.

He could change my mind by either publishing a large software project, or taking a standardized programming test such as a TopCoder SRM and publishing his score.

EDIT: This is not meant to be a defense of obvious wrong hyperbole like "has never written a single computer program".

Replies from: timtyler
comment by timtyler · 2010-08-14T15:48:40.757Z · LW(p) · GW(p)

Eliezer has faced this criticism before and responded (somewhere!). I expect he will figure out coding. I got better at programming over the first 15 years I was doing it. So: he may also take a while to get up to speed. He was involved in this:

http://flarelang.sourceforge.net/

comment by Unknowns · 2010-08-13T10:03:13.784Z · LW(p) · GW(p)

This isn't contrary to Robin's post (except for what you say about cryonics). Robin was saying that there is a reluctance to criticize those things in part because the experts think they are not worth bothering with.

comment by Vladimir_Nesov · 2010-08-12T20:19:40.713Z · LW(p) · GW(p)

The questions of speed/power of AGI and possibility of its creation in the near future are not very important. If AGI is fast and near, we must work on FAI faster, but we must work on FAI anyway.

The reason to work on FAI is to prevent any non-Friendly process from eventually taking control over the future, however fast or slow, suddenly powerful or gradual it happens to be. And the reason to work on FAI now is because the fate of the world is at stake. The main anti-prediction to get is that the future won't be Friendly if it's not specifically made Friendly, even if it happens slowly. We can as easily slowly drift away from things we value. You can't optimize for something you don't understand.

It doesn't matter if it takes another thousand years, we still have to think about this hugely important problem. And since we can't guarantee that the deadline is not near, expected utility calculation says we must still work as fast as possible, just in case. If AGI won't be feasible for a long while, that's great news, more time to prepare, to understand what we want.

(To be clear, I do believe that AGIs FOOM, and that we are at risk in the near future, but the arguments for that are informal and difficult to communicate, while accepting these claims is not necessary to come to the same conclusion about policy.)

Replies from: multifoliaterose, whpearson
comment by multifoliaterose · 2010-08-12T20:31:19.920Z · LW(p) · GW(p)

As I've said elsewhere:

(a) There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created. I have not seen anybody present a coherent argument that AGI is likely to be developed before any other existential risk hits us.

(b) Even if AGI deserves top priority, there's still the important question of how to go about working toward an FAI. As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

(c) Even if AGI is near, there are still serious issues of accountability and transparency connected with SIAI. How do we know that they're making a careful effort to use donations in an optimal way? As things stand, I believe that it would be better to start a organization which exhibits high transparency and accountability, fund that, and let SIAI fold. I might change my mind on this point if SIAI decided to strive toward transparency and accountability.

Replies from: mkehrt, timtyler, Vladimir_Nesov
comment by mkehrt · 2010-08-12T20:51:52.051Z · LW(p) · GW(p)

I really agree with both a and b (although I do not care about c). I am glad to see other people around here who think both these things.

comment by timtyler · 2010-08-13T06:41:44.521Z · LW(p) · GW(p)

Re: "There are other existential risks, not just AGI. I think it more likely than not that one of these other existential risks will hit before an unfriendly AI is created."

The humans are going to be obliterated soon?!?

Alas, you don't present your supporting reasoning.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T10:26:41.284Z · LW(p) · GW(p)

No, no, I'm not at all confident that humans will be obliterated soon. But why, for example, is it more likely that humans will go extinct due to AGI than that humans will go extinct due to a large scale nuclear war? It could be that AGI deserves top priority, but I haven't seen a good argument for why.

Replies from: ciphergoth, timtyler
comment by Paul Crowley (ciphergoth) · 2010-08-13T11:17:17.375Z · LW(p) · GW(p)

I think AGI wiping out humanity is far more likely than nuclear war doing so (it's hard to kill everyone with a nuclear war) but even if I didn't, I'd still want to work on the issue which is getting the least attention, since the marginal contribution I can make is greater.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T12:33:04.406Z · LW(p) · GW(p)

Yes, I actually agree with you about nuclear war (and did before I mentioned it!) - I should have picked a better example. How about existential risk from asteroid strikes?

Several points:

(1) Nuclear war could still cause an astronomical waste in the form that I discuss here.

(2) Are you sure that the marginal contribution that you can make to the issue which is getting the least attention is the greatest? The issues getting the least attention may be getting little attention precisely because people know that there's nothing that can be done about them.

(3) If you satisfactorily address my point (a), points (b) and (c) will remain.

Replies from: timtyler
comment by timtyler · 2010-08-13T20:17:14.479Z · LW(p) · GW(p)

p(asteroid strike/year) is pretty low. Most are not too worried.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-14T09:31:07.993Z · LW(p) · GW(p)

The question is whether at present it's possible to lower existential risk more by funding and advocating FAI research than it is to lower existential risk by funding and advocating an asteroid strike prevention program. Despite the low probability of an asteroid strike, I don't think that the answer to this question is obvious.
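
One way to frame the comparison is expected existential-risk reduction per marginal dollar. The sketch below uses placeholder numbers that nobody in this thread has endorsed; it only shows which quantities would have to be estimated and where the disagreement lies.

```python
# Placeholder inputs: none of these numbers are claimed by anyone in the thread.
def marginal_xrisk_reduction(p_risk, p_program_works, share_of_risk_averted, cost_usd):
    """Expected reduction in existential risk per dollar, under crude assumptions."""
    return p_risk * p_program_works * share_of_risk_averted / cost_usd

asteroid = marginal_xrisk_reduction(
    p_risk=1e-8,              # chance a civilization-ending strike is the risk that hits
    p_program_works=0.9,      # detection/deflection is comparatively well-understood
    share_of_risk_averted=0.5,
    cost_usd=3e8,             # "a few hundred million dollars"
)
fai = marginal_xrisk_reduction(
    p_risk=1e-2,              # chance unfriendly AGI is the risk that hits
    p_program_works=1e-3,     # chance the research program actually changes the outcome
    share_of_risk_averted=0.5,
    cost_usd=3e8,
)
print(f"asteroid: {asteroid:.2e} per $, FAI: {fai:.2e} per $")
# The dispute in the thread is entirely about which placeholder values are
# reasonable, especially p_program_works for FAI research.
```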

Replies from: timtyler
comment by timtyler · 2010-08-14T10:17:49.715Z · LW(p) · GW(p)

I figure a pretty important thing is to get out of the current vulnerable position as soon as possible. To do that, a major thing we will need is intelligent machines - and so we should allocate resources to their development. Inevitably, that will include consideration of safety features. We can already see some damage when today's companies decide to duke it out - and today's companies are not very powerful compared to what is coming. The situation seems relatively pressing and urgent.

Replies from: xamdam, multifoliaterose
comment by xamdam · 2010-09-03T15:13:31.909Z · LW(p) · GW(p)

To do that, a major thing we will need is intelligent machines

that=asteroids?

If yes, I highly doubt we need machines significantly more intelligent than existing military technology adapted for the purpose.

Replies from: timtyler
comment by timtyler · 2010-09-03T20:13:28.797Z · LW(p) · GW(p)

That would hardly be a way to "get out of the current vulnerable position as soon as possible".

comment by multifoliaterose · 2010-08-14T10:34:21.436Z · LW(p) · GW(p)

I agree that friendly intelligent machines would be a great asset to assuaging future existential risk.

My current position is that at present, it's so unlikely that devoting resources to developing safe intelligent machines will substantially increase the probability that we'll develop safe intelligent machines that funding and advocating an asteroid strike prevention program is likely to reduce existential risk more than funding and advocating FAI research is.

I may be wrong, but would require a careful argument for the opposite position before changing my mind.

Replies from: timtyler, Vladimir_Nesov
comment by timtyler · 2010-08-14T10:44:58.780Z · LW(p) · GW(p)

Asteroid strikes are very unlikely - so beating them is a really low standard which, IMO, machine intelligence projects clear with ease. Funding the area sensibly would help make it happen - by most accounts. Detailed justification is beyond the scope of this comment, though.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-14T10:57:43.445Z · LW(p) · GW(p)

Assuming that an asteroid strike prevention program costs no more than a few hundred million dollars, I don't think that it's easy to do better at assuaging existential risk than by funding an asteroid strike prevention program (though it may be possible). I intend to explain why I think it's so hard to lower existential risk through funding FAI research later on (not sure when, but within a few months).

I'd be interested in hearing your detailed justification. Maybe you can make a string of top-level posts at some point.

comment by Vladimir_Nesov · 2010-08-14T10:38:28.051Z · LW(p) · GW(p)

My current position is that at present, it's so unlikely that devoting resources to developing safe intelligent machines will substantially increase the probability that we'll develop safe intelligent machines that funding and advocating an asteroid strike prevention program is likely to reduce existential risk more than funding and advocating FAI research is.

Considering the larger problem statement, technically understanding what we value as opposed to actually building an AGI with those values, what do you see as distinguishing a situation where we are ready to consider the problem from a situation where we are not? How can one come to such a conclusion without actually considering the problem?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-14T10:52:03.603Z · LW(p) · GW(p)

I think that understanding what we value is very important. I'm not convinced that developing a technical understanding of what we value is the most important thing right now.

I imagine that for some people, working on developing a technical understanding of what we value is the best thing that they could be doing. Different people have different strengths, and this leads to the utilitarian thing to do varying from person to person.

I don't believe that the best thing for me to do is to study human values. I also don't believe that at the margin, funding researchers who study human values is the best use of money.

Of course, my thinking on these matters is subject to change with incoming information. But if what I think you're saying is true, I'd need to see a more detailed argument than the one that you've offered so far to be convinced.

If you'd like to correspond by email about these things, I'd be happy to say more about my thinking about these things. Feel free to PM me with your email address.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-14T11:06:33.194Z · LW(p) · GW(p)

I didn't ask about perceived importance (that has already taken feasibility into account), I asked about your belief that it's not a productive enterprise (that is the feasibility component of importance, considered alone), that we are not ready to efficiently work on the problem yet.

If you believe that we are not ready now, but believe that we must work on the problem eventually, you need to have a notion of what conditions are necessary to conclude that it's productive to work on the problem under those conditions.

And that's my question: what are those conditions, or how can one figure them out without actually attempting to study the problem (via the proxy of a small team devoted to professionally studying the problem; I'm not yet arguing to start a program on the scale of what's expended on the study of string theory).

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-14T20:11:52.733Z · LW(p) · GW(p)

I think that research of the type that you describe is productive. Unless I've erred, my statements above are statements about the relative efficacy of funding research of the type that you describe rather than suggestions that research of the type that you describe has no value.

I personally still feel the way that I did in June despite having read Fake Fake Utility Functions, etc. I don't think that it's very likely the case that we will eventually have to do research of the type that you describe to ensure an ideal outcome. Relatedly, I believe that at the margin, at the moment funding other projects has higher expected value than funding research of the type that you describe. But I may be wrong and don't have an argument against your position. I think that this is something that reasonable people can disagree on. I have no problem with you funding, engaging in and advocating research of the type that you describe.

You and I may have a difference which cannot be rationally resolved in a timely fashion, on account of the information that we have access to being in a form that makes it difficult or impossible to share. Having different people fund different projects according to their differing beliefs about the world serves as some sort of real-world approximation to Bayesian averaging over all people's beliefs and then funding what should be funded based on that average.

So, anyway, I think you've given satisfactory answers to how you feel about questions (a) and (b) raised in my comment. I remain curious how you feel about point (c).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-14T20:37:36.079Z · LW(p) · GW(p)

I did answer (c) before: any reasonable effort in that direction should start with trying to get SIAI itself to change or justify the way it behaves.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-14T21:35:44.886Z · LW(p) · GW(p)

Yes, I agree with you. I didn't remember that you had answered this question before. Incidentally, I did correspond with Michael Vassar. More on this to follow later.

comment by timtyler · 2010-08-13T20:15:57.038Z · LW(p) · GW(p)

p(machine intelligence) is going up annually - while p(nuclear holocaust) has been going down for a long time now. Neither is likely to obliterate civilisation - but machine intelligence could nonetheless be disruptive.

comment by Vladimir_Nesov · 2010-08-12T20:40:31.657Z · LW(p) · GW(p)

My comment was specifically about importance of FAI irrespective of existential risks, AGI or not. If we manage to survive at all, this is what we must succeed at. It also prevents all existential risks on completion, where theoretically possible.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-12T20:47:57.868Z · LW(p) · GW(p)

Okay, we had this back and forth before and I didn't understand you then and now I do. I guess I was being dense before. Anyway, the probability of current action leading to FAI might still be sufficiently small so that it makes sense to focus on other existential risks for the moment. And my other points remain.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-12T20:58:26.287Z · LW(p) · GW(p)

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes; they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).

Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all. SIAI doesn't work on building AGI right now, no no no. We need understanding, not robots. Like this post, say.

Replies from: multifoliaterose, timtyler
comment by multifoliaterose · 2010-08-12T23:32:12.887Z · LW(p) · GW(p)

This is the same zero-sum thinking as in your previous post: people are currently not deciding between different causes; they are deciding whether to take a specific cause seriously. If you already contribute everything you could to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI will do more good. But it's not the question usually posed.

I agree that in general people should be more concerned about existential risk and that it's worthwhile to promote general awareness of existential risk.

But there is a zero-sum aspect to philanthropic efforts. See the GiveWell blog entry titled Denying The Choice.

More to the point, I think that one of the major factors keeping people away from studying existential risk is the fact that many of the people who are interested in existential risk (including Eliezer) have low credibility on account of expressing confident, apparently sensationalist claims without supporting them with careful, well-reasoned arguments. I'm seriously concerned about this issue.

If Eliezer can't explain why it's pretty obvious to him that AGI will be developed within the next century, then he should explicitly say something like "I believe that AGI will be developed over the next 100 years but it's hard for me to express why, so it's understandable that people don't believe me" or "I'm uncertain as to whether or not AGI will be developed over the next 100 years".

When he makes unsupported claims that sound like the sort of thing that somebody would say just to get attention, he's actively damaging the cause of existential risk.

Replies from: timtyler
comment by timtyler · 2010-08-13T08:19:20.094Z · LW(p) · GW(p)

Re: "AGI will be developed over the next 100 years"

I list various estimates from those interested enough in the issue to bother giving probability density functions at the bottom of:

http://alife.co.uk/essays/how_long_before_superintelligence/

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T10:29:13.931Z · LW(p) · GW(p)

Thanks, I'll check this out when I get a chance. I don't know whether I'll agree with your conclusions, but it looks like you've at least attempted to answer one of my main questions concerning the feasibility of SIAI's approach.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T11:58:46.539Z · LW(p) · GW(p)

Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.

Replies from: timtyler, gwern
comment by timtyler · 2010-08-13T20:10:42.892Z · LW(p) · GW(p)

http://www.engagingexperience.com/2006/07/ai50_first_poll.html

If the raw data was ever published, that might be of some interest.

comment by gwern · 2010-08-13T13:37:06.107Z · LW(p) · GW(p)

Any chance of piggybacking questions relevant to Maes-Garreau on that survey? As you point out on that page, better stats are badly needed.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T14:01:47.558Z · LW(p) · GW(p)

And indeed, I suggested to SIAI folk that all public record predictions of AI timelines be collected for that purpose, and such a project is underway.

Replies from: gwern
comment by gwern · 2010-08-13T14:19:10.720Z · LW(p) · GW(p)

Hm, I had not heard about that. SIAI doesn't seem to do a very good job of publicizing its projects or perhaps doesn't do a good job of finishing and releasing them.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T14:29:35.128Z · LW(p) · GW(p)

It just started this month, at the same time as Summit preparation.

comment by timtyler · 2010-08-13T08:13:03.684Z · LW(p) · GW(p)

Re: "Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all."

The marginal benefit of making machines smarter seems large - e.g. see automobile safety applications: http://www.youtube.com/watch?v=I4EY9_mOvO8

I don't really see that situation changing much anytime soon - there will probably be such marginal benefits for a long time to come.

comment by whpearson · 2010-08-12T23:26:39.181Z · LW(p) · GW(p)

Going slowly gives the option of figuring out some things about the space of possible AIs through experimentation, which might then constrain the possible ways to make them friendly.

To use the tired flying metaphor: the type of stabilisation you need for flying depends on the method of generating lift. If fixed-wing aircraft are impossible, there is not much point looking at ailerons and tails. If helicopters are possible, then we should be looking at tail rotors.

comment by utilitymonster · 2010-08-12T16:02:35.927Z · LW(p) · GW(p)

I'm not exactly an SIAI true believer, but I think they might be right. Here are some questions I've thought about that might help you out. I think it would help others out if you told us exactly where you'd be interested in getting off the boat.

  1. How much of your energy are you willing to spend on benefiting others, if the expected benefits to others will be very great? (It needn't be great for you to support SIAI.)
  2. Are you willing to pursue a diversified altruistic strategy if it saves fewer expected lives (it almost always will for donors giving less than $1 million or so)?
  3. Do you think mitigating x-risk is more important than giving to down-to-earth charities (GiveWell style)? (This will largely turn on how you feel about supporting causes with key probabilities that are tough to estimate, and how you feel about low-probability, high expected utility prospects.)
  4. Do you think that trying to negotiate a positive singularity is the best way to mitigate x-risk?
  5. Is any known organization likely to do better than SIAI in terms of negotiating a positive singularity (in terms of decreasing x-risk) on the margin?
  6. Are you likely to find an organization that beats SIAI in the future?

Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-12T18:30:10.557Z · LW(p) · GW(p)
  1. Maximal utility for everyone is a preference, but a secondary one. Above all, in whatever I support, my personal short- and long-term benefit is a priority.
  2. No
  3. Yes (Edit)
  4. Uncertain/Unable to judge.
  5. Maybe, but I don't know of one. That doesn't mean that we shouldn't create one, if only for the uncertainty of Eliezer Yudkowsky's possible unstated goals.
  6. Uncertain/Unable to judge. See 5.
Replies from: utilitymonster
comment by utilitymonster · 2010-08-12T18:48:44.253Z · LW(p) · GW(p)

Given your answers to 1-3, you should spend all of your altruistic efforts on mitigating x-risk (unless you're just trying to feel good, entertain yourself, etc.).

For 4, I shouldn't have asked you whether you "think" something beats negotiating a positive singularity in terms of x-risk reduction. Better: is there some other fairly natural class of interventions (or list of potential examples) that, given your credences, has a higher expected value? What might such things be?

For 5-6, perhaps you should think about what such organizations might be. Those interested in convincing XiXiDu might try listing some alternative best x-risk-mitigating groups and providing arguments that they don't do as well. As for me, my credences are highly unstable in this area, so info is appreciated on my part as well.

comment by XiXiDu · 2010-08-19T14:33:49.823Z · LW(p) · GW(p)

Dawkins agrees with EY

Richard Dawkins states that he is frightened by the prospect of superhuman AI and even mentions recursion and intelligence explosion.

Replies from: JGWeissman
comment by JGWeissman · 2010-08-20T06:18:40.401Z · LW(p) · GW(p)

I was disappointed watching the video relative to the expectations I had from your description.

Dawkins talked about recursion as in a function calling itself, as an example of the sort of thing that may be the final innovation that makes AI work, not an intelligence explosion as a result of recursive self-improvement.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-20T08:47:55.195Z · LW(p) · GW(p)

True, I just wanted to appeal to the majority here. And it worked, 7 upvotes. Whereas this won't work, even if true.

comment by xamdam · 2010-08-13T14:11:42.126Z · LW(p) · GW(p)

I was not sure whether to downvote this post for its epistemic value or upvote it for its instrumental value (stimulating good discussion).

I ended up downvoting; I think this forum deserves better epistemic quality (I paused top-posting myself for this reason). I also donated to SIAI, because its value was once again validated to me by the discussion (though I have some reservations about the apparent eccentricity of the SIAI folks, which is understandable (dropping out of high school is to me evidence of high rationality) but counterproductive (not having enough accepted academics involved)). I mention this because it came up in the discussion and is definitely part of the subtext.

As to the concrete points of the post, I covered the part of it about the FAI vs AGI timeline here

The other part

Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least give all they have to the SIAI?

is simply uninformed and shows a lack of diligence, which is the main reason I feel the post is not up to par; I hope the clearly intelligent OP does some more homework and keeps contributing to the site.

  • Vinge has written about bad Singularity scenarios (his Singularity paper and sci-fi).
  • Stross has written about bad Singularity scenarios, at least in Accelerando (spoiler: humanity survives but only because AIs did not care about their resources at that point in time)
  • Kurzweil has written about the possibility of bad scenarios (CIO article in discussion below)

I'll add one more, and to me rather damning: Peter Norvig, who wrote the (most widely used) book on AI and is head of research at Google, is on the front page of SIAI (video clip), saying that as scientists we cannot ignore the negative possibilities of AGI.

Replies from: Will_Newsome, XiXiDu
comment by Will_Newsome · 2010-08-13T16:35:33.273Z · LW(p) · GW(p)

dropping out of high school is to me evidence of high rationality

Are you talking about me? I believe I'm the only person that could sorta kinda be affiliated with the Singularity Institute who has dropped out of high school, and I'm a lowly volunteer, not at all representative of the average credentials of the people who come through SIAI. Eliezer demonstrated his superior rationality to me by never going to high school in the first place. Damn him.

Replies from: Alicorn, xamdam
comment by Alicorn · 2010-08-13T17:01:43.828Z · LW(p) · GW(p)

I dropped out of high school... to go to college early.

Replies from: xamdam
comment by xamdam · 2010-08-13T17:46:33.928Z · LW(p) · GW(p)

I finished high school early (16) by American standards, with college credit. By the more sane standards of Soviet education 16 is, well, standard (and you learn a lot more).

comment by xamdam · 2010-08-13T16:59:45.884Z · LW(p) · GW(p)

talking about this comment.

Now the first of those people I contacted about it:

There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout

comment by XiXiDu · 2010-08-13T18:52:14.850Z · LW(p) · GW(p)

Here are a few comments where I expand on that particular point:

comment by EStokes · 2010-08-12T23:04:58.164Z · LW(p) · GW(p)

I don't think this post was well-written, to say the least. I didn't even understand the tl;dr.

tldr; Is the SIAI evidence-based or merely following a certain philosophy? I'm currently unable to judge if the Less Wrong community and the SIAI are updating on fictional evidence or if the propositions, i.e. the basis for the strong arguments for action that are proclaimed on this site, are based on fact.

I don't see much precise expansion on this, except for MWI? There's a sequence on it.

And that is my problem. Given my current educational background and knowledge I cannot differentiate LW between a consistent internal logic, i.e. imagination or fiction and something which is sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed on this site.

Have you read the sequences?

As for why there aren't more people supporting SIAI: first of all, it's not widely known; second of all, it's liable to be dismissed on first impressions. Not many have examined the SIAI. Also, only [4% of the general public in the US believe in neither a god nor a higher power](http://en.wikipedia.org/wiki/Religion#cite_ref-49). The majority isn't always right.

I don't understand why this post has upvotes. It was unclear and seems topics went unresearched. The usefulness of donating to the SIAI has been discussed before, I think someone probably would've posted a link if asked in the open thread.

Replies from: kodos96, Interpolate
comment by kodos96 · 2010-08-13T05:18:28.511Z · LW(p) · GW(p)

I don't understand why this post has upvotes.

I think the obvious answer to this is that there are a significant number of people out there, even out there in the LW community, who share XiXiDu's doubts about some of SIAI's premises and conclusions, but perhaps don't speak up with their concerns either because a) they don't know quite how to put them into words, or b) they are afraid of being ridiculed/looked down on.

Unfortunately, the tone of a lot of the responses to this thread lead me to believe that those motivated by the latter option may have been right to worry.

Replies from: Furcas
comment by Furcas · 2010-08-13T05:23:20.045Z · LW(p) · GW(p)

Personally, I upvoted the OP because I wanted to help motivate Eliezer to reply to it. I don't actually think it's any good.

Replies from: kodos96, Wei_Dai, Eliezer_Yudkowsky
comment by kodos96 · 2010-08-13T05:47:44.309Z · LW(p) · GW(p)

Yeah, I agree (no offense XiXiDu) that it probably could have been better written, cited more specific objections etc. But the core sentiment is one that I think a lot of people share, and so it's therefore an important discussion to have. That's why it's so disappointing that Eliezer seems to have responded with such an uncharacteristically thin skin, and basically resorted to calling people stupid (sorry, "low g-factor") if they have trouble swallowing certain parts of the SIAI position.

Replies from: HughRistik
comment by HughRistik · 2010-08-13T18:52:21.557Z · LW(p) · GW(p)

This was exactly my impression, also.

comment by Wei Dai (Wei_Dai) · 2010-08-13T08:51:30.721Z · LW(p) · GW(p)

I think your upvote probably backfired, because (I'm guessing) Eliezer got frustrated that such a badly written post got upvoted so quickly (implying that his efforts to build a rationalist community were less successful than he had thought/hoped) and therefore responded with less patience than he otherwise might have.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T19:30:48.095Z · LW(p) · GW(p)

Then you should have written your own version of it. Bad posts that get upvoted just annoy me on a visceral level and make me think that explaining things is hopeless, if LWers still think that bad posts deserve upvotes. People like XiXiDu are ones I've learned to classify as noisemakers who suck up lots of attention but who never actually change their minds enough to start pitching in, no matter how much you argue with them. My perceptual system claims to be able to classify pretty quickly whether someone is really trying or not, and I have no concrete reason to doubt it.

I guess next time I'll try to remember not to reply at all.

Everyone else, please stop upvoting posts that aren't good. If you're interested in the topic, write your own version of the question.

Replies from: XiXiDu, orthonormal, Furcas
comment by XiXiDu · 2010-08-13T19:36:02.103Z · LW(p) · GW(p)

What do you count as pitching in? That I'm donating as I am, or that I am promoting you, LW and the SIAI all over the web, as I am doing?

You simply seem to take my post as a hostile attack rather than the inquiry of someone who happened not to be lucky enough to get a decent education in time.

Replies from: Eliezer_Yudkowsky, HughRistik
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T19:45:20.174Z · LW(p) · GW(p)

All right, I'll note that my perceptual system misclassified you completely and consider that concrete reason to doubt it from now on.

Sorry.

If you are writing a post like that one it is really important to tell me that you are an SIAI donor. It gets a lot more consideration if I know that I'm dealing with "the sort of thing said by someone who actually helps" and not "the sort of thing said by someone who wants an excuse to stay on the sidelines, and who will just find another excuse after you reply to them", which is how my perceptual system classified that post.

The Summit is coming up and I've got lots of stuff to do right at this minute, but I'll top-comment my very quick attempt at pointing to information sources for replies.

Replies from: xamdam, Clippy, XiXiDu
comment by xamdam · 2010-08-13T19:52:21.737Z · LW(p) · GW(p)

It was actually in the post:

What I mean to say by using that idiom is that I cannot expect, given my current knowledge, to get the promised utility payoff that would justify to make the SIAI a prime priority. That is, I'm donating to the SIAI but also spend considerable amounts of resources maximizing utility at present.

So you might suggest to your perceptual system to read the post first (at least before issuing a strong reply).

comment by Clippy · 2010-08-13T19:55:37.918Z · LW(p) · GW(p)

I also donated to SIAI, and it was almost all the USD I had at the time, so I hope posters here take my questions seriously. (I would donate even more if someone would just tell me how to make USD.)

Also, I don't like when this internet website is overloaded with noise posts that don't accomplish anything.

Replies from: thomblake, xamdam, CronoDAS
comment by thomblake · 2010-08-13T19:59:22.793Z · LW(p) · GW(p)

Clippy, you represent a concept that is often used to demonstrate what a true enemy of goodness in the universe would look like, and you've managed to accrue 890 karma. I think you've gotten a remarkably good reception so far.

comment by xamdam · 2010-08-13T20:04:15.837Z · LW(p) · GW(p)

I think we have different ideas of noise

Though I would miss you as the LW mascot if you stopped adding this noise.

comment by CronoDAS · 2010-08-14T09:55:16.188Z · LW(p) · GW(p)

I would donate even more if someone would just tell me how to make USD.

Depending on your expertise and assets, this site might provide some ways.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-14T10:07:28.064Z · LW(p) · GW(p)

I'm pretty sure Clippy meant "make" in a very literal sense.

Replies from: Clippy
comment by Clippy · 2010-08-14T15:47:48.892Z · LW(p) · GW(p)

Yeah, I want to know how to either produce the notes that will be recognized as USD, or access the financial system in a way that I can believably tell it that I own a certain amount of USD. The latter method could involve root access to financial institutions.

All the other methods of getting USD are disproportionately hard (_/

comment by XiXiDu · 2010-08-13T19:55:32.600Z · LW(p) · GW(p)

I'll donate again in the next few days and tell you the name and the amount. I don't have much, but this way you'll see that I'm not just making this up. Maybe you can also check the previous donation then.

As for the promoting, everyone can Google it. I link people up to your stuff almost every day. And there are people here who added me on Facebook, and if you check my info you'll see that some of my favorite quotations are actually yours.

And why is it that on my homepage, if you check the sidebar, your homepage and the SIAI have been listed under favorite sites for many years now?

I'm the kind of person who has to be skeptical about everything, and if I'm bothered too much by questions I cannot resolve in time I do stupid things. Maybe this post was stupid, I don't know.

Replies from: Aleksei_Riikonen
comment by Aleksei_Riikonen · 2010-08-14T01:46:04.244Z · LW(p) · GW(p)

Sorry about this sounding impolite towards XiXiDu, but I'll use this opportunity to note that it is a significant problem for SIAI that there are people out there like XiXiDu promoting SIAI even though they don't understand SIAI much at all.

I don't know the best attitude for minimizing the problem this creates: that many people will first run into SIAI through hearing about it from people who don't seem very clueful or intelligent. (That's real Bayesian evidence for SIAI being a cult or just crazy, and many people then won't acquire sufficient additional evidence to update out of the misleading first impression -- not to mention that getting stuck in biased first impressions is very common anyway.)

Personally, I've adopted the habit of not even trying to talk about singularity stuff to new people who aren't very bright. (Of course, if they become interested despite this, then they can't just be completely ignored.)

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T09:02:28.113Z · LW(p) · GW(p)

I thought about that too. But many people outside this community consider me, as they often state, to be intelligent and educated. And I mainly try to talk to people in academia. You might not believe it, but even I am able to make them think that I'm one of them, up to the point of correcting errors in their calculations (it has happened). Many haven't even heard of Bayesian inference, by the way...

The way I introduce people to this is not by telling them about the risks of AGI but rather linking them up to specific articles on lesswrong.com or telling them about how the SIAI tries to develop ethical decision making etc.

I grew up in a family of Jehovah's Witnesses; I know how to sell bullshit. Not that the SIAI is bullshit, but I'd never use words like 'Singularity' while promoting it to people I don't know.

Many people know about the transhumanist/singularity faction already and think it is complete nonsense, so I often can only improve their opinion.

There are people teaching at the university level who told me I convinced them that he (EY) is to be taken seriously.

Replies from: Aleksei_Riikonen, kodos96
comment by Aleksei_Riikonen · 2010-08-14T14:56:41.475Z · LW(p) · GW(p)

What you state is good evidence that you are not one of those too stupid people I was talking about (even though you have managed to not understand what SIAI is saying very well). Thanks for presenting the evidence, and correcting my suspicion that someone on your level of non-comprehension would usually end up doing more harm than good.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T15:57:39.781Z · LW(p) · GW(p)

Although I personally don't care much if I'm called stupid when I think it is justified, I doubt this attitude is very appealing to most people.

Where do you draw the line between being stupid and simply uneducated or uninformed?

...even though you have managed to not understand what SIAI is saying very well...

I've never read up on their program in the first place. When thinking about turning the comments the OP is based on into a top-level post, I pondered the title much longer than the rest of what I said, until I became too lazy and simply picked the SIAI as a punching bag to direct my questions at. I thought it would work well enough to stir some emotions. But in the end that was most of what it accomplished, rather than producing answers.

What I was really on about was the attitude of many people here, especially regarding the posts related to the Roko deletion incident. I was struck by the apparent impact it had. It was not just considered worth sacrificing freedom of speech for; people, including some working for the SIAI, actually had nightmares and suffered psychological trauma. I think I understood the posts and comments, as some told me over private message after inquiring about my knowledge, but I couldn't believe that something that speculative would be considered sufficiently evidence-based to warrant worry to such an extent.

But inquiring about that would have turned attention back to the relevant content. And after all, I wanted to find out whether such reactions are justified before deciding whether to spread the content anyway.

Replies from: Aleksei_Riikonen
comment by Aleksei_Riikonen · 2010-08-14T18:50:03.745Z · LW(p) · GW(p)

You admit you've never bothered to read up on what SIAI is about in the first place. Don't be surprised if people don't have the best possible attitude when, despite this, you want them to spend a significant amount of time explaining to you personally the very same content that is already available but that you just haven't bothered to read.

Might as well link again the one page that I recommend as the starting point in getting to know what it is exactly that SIAI argues:

http://singinst.org/riskintro/index.html

I also think it's weird that you've actually donated money to SIAI, despite not having really looked into what it is about and how credible the arguments are. I personally happen to think that SIAI is very much worth supporting, but there doesn't seem to be any way you could have known that before making your donations, and so it's just luck that the organization your way of making decisions led you to give money to wasn't actually a weird cult.

(And part of the reason I'm being this blunt with you is that I've formed the impression that you won't take it in a very negative way, in the way that many people would. And on a personal level, I actually like you, and think we'd probably get along very well if we were to meet IRL.)

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T18:59:38.912Z · LW(p) · GW(p)

I also think it's weird that you've actually donated money to SIAI, despite not having really looked into what it is about and how credible the arguments are.

I actually have this crazy little conspiracy theory in my head that EY is such a smart fellow that he was able to fool a bunch of nonconformists into letting him live off their donations.

Why do I donate despite that? I've also donated money to Peter Watts when he got into the claws of the American justice system, and to Wikipedia, TrueCrypt, the Khan Academy and many more organisations and people. Why? They make me happy. And there's lots of cool stuff coming from EY, whether he's a cult leader or not.

I'd probably be more excited if it turned out to be a cult, and I'd donate even more. That'd be hilarious. On the other hand, I suspect Scientology is not actually a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I'd join.

Replies from: Anonymous9291, kodos96, NihilCredo
comment by Anonymous9291 · 2010-08-15T13:49:53.152Z · LW(p) · GW(p)

On the other hand, I suspect Scientology is not actually a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I'd join.

SCIENTOLOGY IS DANGEROUS. Scientology is not a joke and joining them is not something to be joked about. The fifth level of precaution is absolutely required in all dealings with the Church of Scientology and its members. A few minutes of research with Google will turn up extraordinarily serious allegations against the Church of Scientology and its top leadership, including allegations of brainwashing, abducting members into slavery in their private navy, framing their critics for crimes, and large-scale espionage against government agencies that might investigate them.

I am a regular Less Wrong commenter, but I'm making this comment anonymously because Scientology has a policy of singling out critics, especially prominent ones but also some simply chosen at random, for harassment and attacks. They are very clever and vicious in the nature of the attacks they use, which have included libel, abusing the legal system, and framing their targets for crimes they did not commit. When protests are conducted against Scientology, the organizers advise all attendees to wear masks for their own safety, and I believe they are right to do so.

If you reply to this comment or discuss Scientology anywhere on the internet, please protect your anonymity by using a throwaway account. To discourage people from being reckless, I will downvote any comment which mentions Scientology and which looks like it's tied to a real identity.

comment by kodos96 · 2010-08-14T19:24:10.651Z · LW(p) · GW(p)

I'd probably be more excited if it turned out to be a cult, and I'd donate even more. That'd be hilarious. On the other hand, I suspect Scientology is not actually a cult. I think they are just making fun of religion and at the same time are some really selfish bastards who live off the money of people dumb enough to actually think they are serious. If they told me this, I'd join.

You sound more like a Discordian than a Singularitarian.

Not that there's anything wrong with that.

comment by NihilCredo · 2010-08-15T00:25:13.250Z · LW(p) · GW(p)

I actually have this little crazy conspiracy theory in my head that EY is such a smart fellow that he was able to fool a bunch of nonconformists into letting him live off their donations.

I had the same idea! It's also interesting to consider whether some discriminating evidence could (realistically) exist in either direction.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-15T00:41:45.814Z · LW(p) · GW(p)

I'm pretty sure there are easier ways to make a living off a charity than to invent a cause that's nowhere near the mainstream and which is likely to be of interest to only a tiny minority.

Admittedly, doing it that way means you won't have many competitors.....

Replies from: NihilCredo
comment by NihilCredo · 2010-08-15T00:54:14.841Z · LW(p) · GW(p)

The basic hypothesis is that AI theorising was already (one of) his main interest/s, and founding SIAI was the easiest path for him to be able to make a living doing the stuff he enjoys full-time.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-15T01:11:55.021Z · LW(p) · GW(p)

Eliezer says that AI theorizing became as interesting to him as it has because it is the most effective way for him to help people. Having observed his career (mostly through the net) for ten years, I would assign a very high (.96) probability that the causality actually runs that way rather than his altruism's being a rationalization for his interest in getting paid for AI theorizing.

Now as to the source of his altruism, I am much less confident, e.g., about which way he would choose if he found himself at a major decision point with large amounts of personal and global expected utility on the line where he had to choose between indelible widespread infamy or even total obscurity and helping people.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-15T01:21:26.973Z · LW(p) · GW(p)

Not really useful as evidence against the mighty conspiracy theory, though - one would make identical statements to that effect whether he was honest, consciously deceiving, or anywhere in between.

Would you happen to remember an instance of Eliezer making an embarrassing / self-damaging admission when you couldn't see any reason for him to do so outside of an innate preference for honesty?

Replies from: Wei_Dai, rhollerith_dot_com, katydee
comment by Wei Dai (Wei_Dai) · 2010-08-15T02:18:54.317Z · LW(p) · GW(p)

Would you happen to remember an instance of Eliezer making an embarrassing / self-damaging admission when you couldn't see any reason for him to do so outside of an innate preference for honesty?

How would that constitute evidence against the "mighty conspiracy theory"? Surely Eliezer could have foreseen that someone would ask this question sooner or later, and made some embarrassing / self-damaging admission just to cover himself.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-15T02:40:39.264Z · LW(p) · GW(p)

Good point. I didn't think much about the question, and it should have been obvious that the hypothesis of him simulating honesty is not strictly falsifiable by relying solely on his words.

Ok, new possibility for falsification: before SIAI was founded, a third party offered him a job in AI research that was just as interesting and brought at least as many assorted perks, but he refused because he genuinely thought FAI research was more important. Or for that matter any other scenario under which founding SIAI constituted a net sacrifice for Eliezer when not counting the benefit of potentially averting armageddon.

Quite a bit harder to produce, but that's par for the course with Xanatos-style conspiracy theories.

comment by RHollerith (rhollerith_dot_com) · 2010-08-15T01:53:12.072Z · LW(p) · GW(p)

Actually, I was responding to your "AI theorising was already (one of) his main interest/s", not your larger point.

I consider the possibility that Eliezer has intentionally deceived his donors all along as so unlikely as to not be worth discussing.

ADDED. Re-reading parent for the second time, I notice your "whether he was honest, consciously deceiving, or anywhere inbetween" (emphasis mine). So, since you (I now realize) probably were entertaining the possibility that he is "unconsciously deceiving" (i.e., has conveniently fooled himself), let me extend my reply.

People can be scrupulously honest in almost all matters, NihilCredo, and still deceive themselves about their motivations for doing something, so I humbly suggest that even though Eliezer has shown himself willing to issue an image-damaging public recantation when he discovers that something he has published is wrong, that is not nearly enough evidence to trust his public statements about his motivations.

What one does instead is look at his decisions. And even more you look at what he is able to stay motivated to do over a long period of time. Consider for example the two years he spent blogging about rationality. This is educational writing or communication and it is extremely good educational communication. No matter how smart the person is, he cannot communicate or teach that effectively without doing a heck of a lot of hard work. And IMO no human being can work that hard for two whole years voluntarily (i.e., without fear of losing something he needs or loves and already has) unless the person is deriving some sort of real human satisfaction from the work. (Even with a very strong "negative" motivation like fear, it is hard to work that hard for 2 years without making yourself sick, and E sure did not look or act sick when I chatted with him at a Sep 2009 meetup.) And this is where the explanation gets complicated, and I want to cut it short.

There are only so many kinds of real human motivation. Scientists of course are usually motivated by the pleasure of discovery, of extending their understanding of the world. Many, perhaps most, scientists are motivated by reputation, for the good opinion of other scientists or the public at large. I find it unlikely however that any combination of those 2 motivations would have been enough for any human being to perform the way E did during his 2 years of "educating through blogging".

So, to summarize, I have some strong or firm reasons to believe that while he was writing those excellent blog posts, E regularly found pleasure and consequently found motivation in the idea of producing understanding in his readers, and this pleasure is an example of a "friendly impulse" or "altruistic desire" in E (part of the implementation in the human mind of the human capacity for what the evolutionary psychologists call reciprocal altruism).

And I know enough psychology to know that if E is capable of being motivated to extremely hard work by "the friendly impulse" when he started his blogging at age 27, then he was also capable of being motivated in his daydreams and in his career planning by "the friendly impulse" when he was a teenager (which is when he says he saw that AI research is the best way to help people and when he began his interest in AI theorizing). (It is rare for a person to be able to learn (even if they really want to) how to find pleasure (and consequently long-term motivation) from altruism / friendliness if they lacked the capacity in their teens like I did.)

Now I am not saying that E does not derive a lot of pleasure from scientific theorizing (most scientists of his caliber do), but I am saying that I believe his statements that the reason that most of his theorizing is about AI rather than string theory or population genetics is what he says it is.

This is all very condensed and it relies on beliefs of mine that are definitely not settled science (e.g., the belief that the only way a person ever voluntarily works as hard as E must have for 2 years is if they find pleasure in the work), but it does explain just a little of the basis for the probability assignment I made in the grandparent.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-15T03:20:27.161Z · LW(p) · GW(p)

Definitely an interesting comment. Thanks.

I don't think I find your psychological argument very relevant here. The conspiracy allows - indeed, it makes a cardinal assumption - that Eliezer loves doing what he does, i.e. discussing and spreading ideas about rationality and theorising about AI and futurology; the only proposed dissonance between his statements and his findings would be that he is (whether intentionally or not, see below) overblowing the danger of a near-omnipotent unfriendly AI. And of course, people can be untruthful in one field and still be highly altruistic in a hundred others.

Speaking of which, we ended up drifting further from the idea XiXiDu and I were originally entertaining, which was that of a cunning plot to create his dream job. While, if only because of his passion for rationality, it would still be interesting if Eliezer were suffering from such a dramatic bias (and it would be downright hilarious if he were truly pulling a fast one), the more such a bias is unconscious and hard to spot, the closer it comes to being an honest mistake rather than negligence; and it's not particularly interesting or amusing that someone could have made an honest mistake.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-15T03:29:59.200Z · LW(p) · GW(p)

Speaking of which, we ended up drifting further from the idea XiXiDu and I were originally entertaining, which was that of a cunning plot to create his dream job.

Yes, I am a little embarrassed that I took the thread on such a sharp and lengthy tangent. I don't have time to move my comment though.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-15T03:34:53.031Z · LW(p) · GW(p)

Oh, I wouldn't worry. To paraphrase what I once read being written about HP&MoR, overthinking stuff is pretty much the point of this site.

comment by katydee · 2010-08-15T02:02:37.059Z · LW(p) · GW(p)

I can remember several such instances, and I haven't been following things for as long as rhollerith. There are even a few of them in top-level posts.

comment by kodos96 · 2010-08-14T09:07:05.763Z · LW(p) · GW(p)

There are people teaching at university level who told me I convinced them that he (EY) is to be taken seriously.

Wow. That's impressive. I think XiXiDu should get some bonus karma points for pulling that off.

comment by HughRistik · 2010-08-13T19:45:55.701Z · LW(p) · GW(p)

Eliezer seems to have run your post through some crude heuristic and incorrectly categorized it. While you did make certain errors that many people have observed, I think you deserved a different response.

At least, Eliezer seemingly not realizing that you are a donor means that his treatment of you doesn't represent how he treats donors.

Edit: To his credit, Eliezer apologized and admitted to his perceptual misclassification.

comment by orthonormal · 2010-08-13T22:43:31.974Z · LW(p) · GW(p)

It has seemed to me for a while that a number of people will upvote any post that goes against the LW 'consensus' position on cryonics/Singularity/Friendliness, so long as it's not laughably badly written.

I don't think anything Eliezer can say will change that trend, for obvious reasons.

However, most of us could do better in downvoting badly argued or fatally flawed posts. It amazes me that many of the worst posts here won't stay below 0 for any length of time, and even when they do drop, they don't drop very far. Docking someone's karma isn't going to kill them, folks. Do everyone a favor and use those downvotes.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T09:14:05.685Z · LW(p) · GW(p)

...badly argued or fatally flawed posts.

My post is neither badly argued nor fatally flawed, as I've mainly been asking questions and not making arguments. But if you think otherwise, why don't you point out where I am fatally flawed?

My post has not been written to speak out against any 'consensus'. I agree with the primary conclusions but am skeptical about further chains of reasoning based on those conclusions, as I don't perceive them to be based on firm ground but merely to be what follows from previous evidence.

And yes, I'm a lazy bum. I've not thought about the OP for more than 10 minutes. It's actually copy-and-paste work from previous comments. Hell, what did you expect? A dissertation? Nobody else was asking those questions; someone had to.

comment by Furcas · 2010-08-13T22:25:52.116Z · LW(p) · GW(p)

Then you should have written your own version of it.

I find it difficult to write stuff I don't believe.

Bad posts that get upvoted just annoy me on a visceral level and make me think that explaining things is hopeless, if LWers still think that bad posts deserve upvotes.

Noted.

comment by Interpolate · 2010-08-14T03:58:50.135Z · LW(p) · GW(p)

I upvoted the original post for:

  • Stimulating critical discussion of the Less Wrong community - specifically: the beliefs almost unanimously shared, and the negativity towards criticism; as someone who has found Less Wrong extremely helpful, and would hate to see it descend into groupthink and affiliation signalling.

A question to those who dismiss the OP as merely "noise": what do you make of the nature of this post?

  • Stimulating critical discussion of the operating premises of the SIAI; as someone who is considering donating and otherwise contributing. This additionally provides elucidation to those in a state of epistemic limbo regarding the various aspects of FAI and the Singularity.

I am reminded of this passage regarding online communities (source):

So there's this very complicated moment of a group coming together, where enough individuals, for whatever reason, sort of agree that something worthwhile is happening, and the decision they make at that moment is: This is good and must be protected. And at that moment, even if it's subconscious, you start getting group effects. And the effects that we've seen come up over and over and over again in online communities...

The first is sex talk, what he called, in his mid-century prose, "A group met for pairing off." And what that means is, the group conceives of its purpose as the hosting of flirtatious or salacious talk or emotions passing between pairs of members...

The second basic pattern that Bion detailed: The identification and vilification of external enemies. This is a very common pattern. Anyone who was around the Open Source movement in the mid-Nineties could see this all the time...

The third pattern Bion identified: Religious veneration. The nomination and worship of a religious icon or a set of religious tenets. The religious pattern is, essentially, we have nominated something that's beyond critique. You can see this pattern on the Internet any day you like...

So these are human patterns that have shown up on the Internet, not because of the software, but because it's being used by humans. Bion has identified this possibility of groups sandbagging their sophisticated goals with these basic urges. And what he finally came to, in analyzing this tension, is that group structure is necessary. Robert's Rules of Order are necessary. Constitutions are necessary. Norms, rituals, laws, the whole list of ways that we say, out of the universe of possible behaviors, we're going to draw a relatively small circle around the acceptable ones.

He said the group structure is necessary to defend the group from itself. Group structure exists to keep a group on target, on track, on message, on charter, whatever. To keep a group focused on its own sophisticated goals and to keep a group from sliding into these basic patterns. Group structure defends the group from the action of its own members.

Replies from: Aleksei_Riikonen
comment by Aleksei_Riikonen · 2010-08-14T04:06:27.030Z · LW(p) · GW(p)

As someone who thought the OP was of poor quality, and who has had a very high opinion of SIAI and EY for a long time (and still has), I'll say that that "Eliezer Yudkowsky facts" was indeed a lot worse. It was the most embarrassing thing I've ever read on this site. Most of those jokes aren't even good.

Replies from: Wei_Dai, Liron, simplicio, Eliezer_Yudkowsky, XiXiDu, ciphergoth
comment by Wei Dai (Wei_Dai) · 2010-08-14T05:16:33.005Z · LW(p) · GW(p)

"Eliezer Yudkowsky facts" is meant to be fun and entertainment. Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes? Obviously many people did find the post amusing (judging from its 47 votes), even if you didn't. If those jokes were not posted, then something of real value would have been lost.

The situation with XiXiDu's post is different because almost everyone seems to agree that it's bad, and those who voted it up did so only to "stimulate discussion". But if they didn't vote up XiXiDu's post, it's quite likely that someone would eventually write up a better post asking similar questions and generating a higher quality discussion, so the outcome would likely be a net improvement. Or alternatively, those who wanted to "stimulate discussion" could have just looked in the LW archives and found all the discussion they could ever hope for.

Replies from: XiXiDu, Risto_Saarelma
comment by XiXiDu · 2010-08-14T14:01:09.155Z · LW(p) · GW(p)

If almost everyone thought it was bad, I would expect it to have many more downvotes than upvotes, even given the few people who voted it up to "stimulate discussion". But you probably know more about statistics than I do, so never mind.

...it's quite likely that someone would eventually write up a better post asking similar questions.

Before or after the SIAI builds an FAI? I waited half a decade for any of those questions to be asked in the first place.

Or alternatively, those who wanted to "stimulate discussion" could have just looked in the LW archives and found all the discussion they could ever hope for.

Right, haven't thought about that! I'll be right back reading a few thousand comments to find some transparency.

comment by Risto_Saarelma · 2010-08-14T10:31:36.897Z · LW(p) · GW(p)

Do you agree that there is a large subjective component to what a person will think is fun, and that different people will be amused by different types of jokes?

This is true. You might also be able to think of jokes that aren't worth making even though a group of people would find them genuinely funny.

I agree with Aleksei about the Facts article.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-14T10:40:53.120Z · LW(p) · GW(p)

Can you please explain why you think those jokes shouldn't have been made? I thought that making fun of authority figures is socially accepted in general, and in this case shows that we don't take Eliezer too seriously. Do you disagree?

Replies from: XiXiDu, Risto_Saarelma
comment by XiXiDu · 2010-08-14T14:05:00.690Z · LW(p) · GW(p)

Hey, I said the same, why was he upvoted for it and I downvoted? Oh wait, it's Wei_Dai, never mind.

Please downvote this comment as I'm adding noise while being hostile to someone who adds valuable insights to the discussion.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-14T14:14:37.992Z · LW(p) · GW(p)

You seemed to seriously imply that Eliezer didn't understand that the "facts" thread was a joke, while actually he was sarcastically joking by hinting at not getting the joke in the comment you replied to. I downvoted the comment to punish stupidity on LW (nothing personal, believe it or not; in other words, it's a one-step decision based on the comment alone and not on the impression made by your other comments). Wei didn't talk about that.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T14:23:41.047Z · LW(p) · GW(p)

I guess after so many comments implying things I never meant to say I was a bit aggrieved. Never mind.

comment by Risto_Saarelma · 2010-08-14T11:07:42.982Z · LW(p) · GW(p)

Making him the subject of a list like that looks plenty serious to me.

Beyond that, I don't think there's much that I can say. There's a certain tone-deafness that's rubbing me wrong in both the post and in this discussion, but exactly how that works is not something that I know how to convey with a couple of paragraphs of text.

Replies from: Wei_Dai, NancyLebovitz
comment by Wei Dai (Wei_Dai) · 2010-08-14T13:41:36.592Z · LW(p) · GW(p)

Ok, I think I have an explanation for what's going on here. Those of us "old hands" who went through the period where LW was OB, and Eliezer and Robin were the only main posters, saw Eliezer as initially having very high status, and considered the "facts" post as a fun way of taking him down a notch or two. Newcomers who arrived after LW became a community blog, on the other hand, don't have the initial high status in mind, and instead see that post as itself assigning Eliezer a very high status, which they see as unjustified/weird/embarrassing. Makes sense, right?

(Voted parent up from -1, btw. That kind of report seems useful, even if the commenter couldn't explain why he felt that way.)

comment by NancyLebovitz · 2010-08-14T14:12:45.219Z · LW(p) · GW(p)

I have a theory: all the jokes parse out to "Eliezer is brilliant, and we have a bunch of esoteric in-jokes to show how smart we are". This isn't making fun of an authority figure.

This doesn't mean the article was a bad idea, or that I didn't think it was funny. I also don't think it's strong evidence that LW and SIAI aren't cults.

ETA: XiXiDu's comment that this is the community making fun of itself seems correct.

comment by Liron · 2010-08-14T23:49:53.693Z · LW(p) · GW(p)

Fact: Evaluating humor about Eliezer Yudkowsky always results in an interplay between levels of meta-humor such that the analysis itself is funny precisely when the original joke isn't.

comment by simplicio · 2010-08-14T07:56:00.695Z · LW(p) · GW(p)

They are very good examples of the genre (Chuck Norris-style jokes). I for one could not contain my levity.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-14T07:28:28.016Z · LW(p) · GW(p)

I was embarrassed by most of the facts. The one about my holding up a blank sheet of paper and saying "a blank map does not correspond to a blank territory" and thus creating the universe is one I still tell at parties.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T09:34:47.856Z · LW(p) · GW(p)

I was embarrassed by most of the facts.

That post was meant as playful mockery, it was a joke. It was not meant as a hostile attack. I've no idea how you and Aleksei can come to these conclusions about something many people thought was really funny, even outside of the community. That post actually helped to loosen the very stern sentiment of some people regarding you personally and the SIAI.

"Hey, those people are actually able to make fun of themselves, maybe they are not a cult after all..."

Replies from: Aleksei_Riikonen
comment by Aleksei_Riikonen · 2010-08-14T14:48:36.651Z · LW(p) · GW(p)

What, why are you talking about a hostile attack?

Of course I didn't feel that it would be that. It's quite the opposite, it felt to me like communicating an unhealthy air of hero worship.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-14T15:38:48.592Z · LW(p) · GW(p)

Then I have been the one to completely misinterpret what you said. My apologies, I'm not good at this.

I said this before posting the OP but failed miserably:

I should quit now and for some time stop participating on LW. I have to continue with my studies. I was only drawn here by the deletion incident. Replies, and the fact that it is fun to argue, have made me babble too much in the past few days.

Back to being a lurker. Thanks.

comment by XiXiDu · 2010-08-14T09:26:20.377Z · LW(p) · GW(p)

Wow, I thought it was one of the best. By that post I actually got a philosopher (who teaches in Sweden), who had been skeptical about EY, to read up on the MWI sequence and afterwards agree that EY is right.

comment by Paul Crowley (ciphergoth) · 2010-08-14T07:57:50.893Z · LW(p) · GW(p)

I like that post - of course, few of the jokes are funny, but you read such a thing for the few gems they do contain. I think of it as hanging a lampshade (warning, TV tropes) on one of the problems with this website.

comment by Aleksei_Riikonen · 2010-08-12T18:39:47.776Z · LW(p) · GW(p)

This post makes very weird claims regarding what SIAI's positions would be.

"Spend most on a particular future"? "Eliezer Yudkowsky is the right and only person who should be leading"?

It doesn't at all seem to me that stuff such as these would be SIAI's position. Why doesn't the poster provide references for these weird claims?

Here's a good reference for what SIAI's position actually is:

http://singinst.org/riskintro/index.html

Replies from: Aleksei_Riikonen, Nick_Tarleton, Vladimir_Nesov, XiXiDu
comment by Aleksei_Riikonen · 2010-08-12T19:12:13.468Z · LW(p) · GW(p)

From the position paper I linked above, a key quote on what SIAI sees itself as doing:

"We aim to seed the above research programs. We are too small to carry out all the needed research ourselves, but we can get the ball rolling."

The poster makes claims that are completely at odds with even the most basic familiarity with what SIAI's position actually is.

comment by Nick_Tarleton · 2010-08-12T19:02:28.015Z · LW(p) · GW(p)

Seconded, plus I don't understand what the link from "worth it" has to do with the topic.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-12T19:19:38.847Z · LW(p) · GW(p)

I'll let the master himself answer this one:

Fun Theory, for instance: the questions of “What do we actually do all day, if things turn out well?,” “How much fun is there in the universe?,” “Will we ever run out of fun?,” “Are we having fun yet?” and “Could we be having more fun?” In order to answer questions like that, obviously, you need a Theory of Fun.

[...]

The question is: Is this what actually happens to you if you achieve immortality? Because, if that’s as good as it gets, then the people who go around asking “what’s the point?” are quite possibly correct.

comment by Vladimir_Nesov · 2010-08-12T21:57:20.486Z · LW(p) · GW(p)

http://singinst.org/riskintro/index.html

By the way, is it linked to from the SIAI site somewhere? It's a good summary, but I only ever saw the direct link (and the page is not in SIAI site format).

Replies from: Aleksei_Riikonen, Nick_Tarleton
comment by Aleksei_Riikonen · 2010-08-12T22:05:28.636Z · LW(p) · GW(p)

It's linked from the sidepanel here at least:

http://singinst.org/overview

But indeed it's not very prominently featured on the site. It's a problem of most of the site having been written substantially earlier than this particular summary, and there not (yet) having been a comprehensive change from that earlier state of how the site is organized.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-12T22:11:36.770Z · LW(p) · GW(p)

I see. This part of the site doesn't follow the standard convention of selecting the first sub-page in a category when you click on the category; instead it selects the second, which confused me before. I thought that I was reading "Introduction" when in fact I was reading the next item. Bad design decision.

comment by Nick_Tarleton · 2010-08-12T22:04:48.162Z · LW(p) · GW(p)

Overview -> Introduction

(it should probably be more prominent and maybe in the site format; the site format's font is kind of small for such a long document, but should plausibly just be bigger)

comment by XiXiDu · 2010-08-12T19:29:23.817Z · LW(p) · GW(p)

Less Wrong Q&A with Eliezer Yudkowsky: Video Answers

Q: The only two legitimate occupations for an intelligent person in our current world? Answer

Q: What's your advice for Less Wrong readers who want to help save the human race? Answer

Replies from: timtyler, Aleksei_Riikonen
comment by timtyler · 2010-08-21T19:09:57.360Z · LW(p) · GW(p)

A) doesn't seem to be quoted verbatim from the supplied reference!

There is some somewhat similar material there - but E.Y. is reading out a question that has been submitted by a reader! Misquoting him while he is quoting someone else doesn't seem to be very fair!

[Edit: please note the parent has been dramatically edited since this response was made]

Replies from: XiXiDu
comment by XiXiDu · 2010-08-21T19:50:06.258Z · LW(p) · GW(p)

I perceived it to be the gist of what he said and directly linked to the source. I have a hard time transcribing spoken English. Would you do so please? Thanks.

Replies from: Tyrrell_McAllister, Nick_Tarleton
comment by Tyrrell_McAllister · 2010-08-22T23:49:18.754Z · LW(p) · GW(p)

You should not use quotation marks unless the quotes are verbatim. The "gist" does not suffice.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-23T08:37:02.944Z · LW(p) · GW(p)

I added a disclaimer. Still, it's what he means. If I wrote that Yudkowsky says "we need to work on FAI" without pasting all of the sequences on LW, it would still be right. But if you want to nit-pick, you are probably right.

Replies from: Emile
comment by Emile · 2010-08-23T09:03:46.823Z · LW(p) · GW(p)

I haven't followed the ins and the outs of this pointless drama, but I had assumed those were things Eliezer actually said. I'm pretty miffed to learn that those weren't actually quotes, but rather something you had "inferred from revealed self-evident wisdom".

That kind of stuff makes it tempting to pretty much ignore anything you write.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-23T12:05:42.893Z · LW(p) · GW(p)

..assumed those were things Eliezer actually said.

Don't be fooled by the other commenters; go and listen to the related videos I linked to. It would however be reasonable to paraphrase Yudkowsky in that way even if he never came close to saying that, as it is reasonable to infer from his other writings that working on FAI and donating to organisations working on FAI (the SIAI being the only one, I'm told) is the most important thing you could possibly do. If not, why are people here actually doing just that?

Replies from: Emile
comment by Emile · 2010-08-23T12:39:08.013Z · LW(p) · GW(p)

I listened to the video. He said that while reading aloud a question someone was asking him.

I'm not objecting to the reformulation in your now modified post. I'm just pissed that you made me believe that it was an actual Eliezer Yudkowsky quote.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-23T13:11:41.877Z · LW(p) · GW(p)

He said that while reading aloud a question someone was asking him.

Look, I didn't know that everything within these "" symbols is taken to be the verbatim quotation of someone. I use them all the time to highlight or mark whatever I see fit. And the answers I wrote to the questions he has been asked resemble the gist of what he said. I simply didn't expect anyone to believe that I transcribed two +4 minute videos which I linked to right after. I also never made anyone believe anything controversial. It is something that I think is widely accepted within this community anyway and it is what he said in the videos, although more long-winded.

Replies from: Airedale
comment by Airedale · 2010-08-23T14:45:45.828Z · LW(p) · GW(p)

Look, I didn't know that everything within these "" symbols is taken to be the verbatim quotation of someone. I use them all the time to highlight or mark whatever I see fit.

The "blog" of "unnecessary" quotation marks

But on a more "serious" note, when the implication is that you're "quoting" someone, and you're "using" quotation marks, "readers" will generally interpret the marks as quoting rather than "highlighting."

Replies from: XiXiDu
comment by XiXiDu · 2010-08-23T16:06:26.430Z · LW(p) · GW(p)

Look, I'm quite often an idiot. I was looking for excuses while being psychologically overwhelmed today. If people here perceived that I did something wrong there, I probably did. I was just being lazy and imprudent so I wrote what I thought was the answer given by EY to the question posed in that part of a Q&A video series. There was no fraudulent intent on my part.

So please accept my apologies for possible misguidance and impoliteness. I'm just going to delete the comment now. (ETA: I'll just leave the links to the videos.)

I've been trying for some time now not to get involved on here anymore and to get back to being a passive reader. But replies make me feel compelled to answer once more.

If there is something else, let me know and I'll just delete it.

Replies from: WrongBot
comment by WrongBot · 2010-08-23T17:17:25.496Z · LW(p) · GW(p)

Upvoted for admission of error.

comment by Nick_Tarleton · 2010-08-21T20:53:09.616Z · LW(p) · GW(p)

You really, really, really should have noted that; as it is, your comment is an outright lie. (Thanks for catching, Tim.)

comment by Aleksei_Riikonen · 2010-08-12T19:41:15.908Z · LW(p) · GW(p)

How do your quotes claim that Eliezer Yudkowsky is the only person who should be leading?

(I would say that factually, there are also other people in leadership positions within SIAI, and Eliezer is extremely glad that this is so, instead of thinking that it should be only him.)

How do they demonstrate that donating to SIAI is "spending on a particular future"?

(I see it as trying to prevent a particular risk.)

comment by Simulation_Brain · 2010-08-13T07:56:24.651Z · LW(p) · GW(p)

I think there are very good questions in here. Let me try to simplify the logic:

First, the sociological logic: if this is so obviously serious, why is no one else proclaiming it? I think the simple answer is that a) most people haven't considered it deeply and b) someone has to be first in making a fuss. Kurzweil, Stross, and Vinge (to name a few that have thought about it at least a little) seem to acknowledge a real possibility of AI disaster (they don't make probability estimates).

Now to the logical argument itself:

a) We are probably at risk from the development of strong AI.
b) The SIAI can probably do something about that.

The other points in the OP are not terribly relevant; Eliezer could be wrong about a great many things, but right about these.

This is not a castle in the sky.

Now to argue for each: There's no good reason to think AGI will NOT happen within the next century. Our brains produce AGI; why not artificial systems? Artificial systems didn't produce anything a century ago; even without a strong exponential, they're clearly getting somewhere.

There are lots of arguments for why AGI WILL happen soon; see Kurzweil among others. I personally give it 20-40 years, even allowing for our remarkable cognitive weaknesses.

Next, will it be dangerous?
a) Something much smarter than us will do whatever it wants, and very thoroughly. (This doesn't require godlike AI, just smarter than us. Self-improving helps, too.)
b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.)
Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

Finally, it seems that SIAI might be able to do something about it. If not, they'll at least help raise awareness of the issue. And as someone pointed out, achieving FAI would have a nice side effect of preventing most other existential disasters.

While there is a chain of logic, each of the steps seems likely, so multiplying probabilities gives a significant estimate of disaster, justifying some resource expenditure to prevent it (especially if you want to be nice). (Although spending ALL your money or time on it probably isn't rational, since effort and money generally have sublinear payoffs toward happiness).
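
(A toy version of that multiplication, with numbers that are purely placeholders of mine and not anything argued above:)

```python
# Chained estimate: P(preventable disaster) ~= product of the step probabilities.
# All three numbers below are illustrative placeholders, not claims from this thread.
p_agi_this_century = 0.5       # step (a): strong AI gets built
p_dangerous_by_default = 0.5   # most goal systems are unsafe if not carefully designed
p_targeted_work_helps = 0.1    # step (b): SIAI-style work actually changes the outcome

p_preventable_disaster = (p_agi_this_century
                          * p_dangerous_by_default
                          * p_targeted_work_helps)
print(p_preventable_disaster)  # 0.025 -- small, but not negligible given the stakes
```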

Hopefully this lays out the logic; now, which of the above do you NOT think is likely?

Replies from: utilitymonster, kodos96, jacob_cannell
comment by utilitymonster · 2010-08-13T13:30:55.500Z · LW(p) · GW(p)

a) Something much smarter than us will do whatever it wants, and very thoroughly. (this doesn't require godlike AI, just smarter than us. Self-improving helps, too.) b) The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.) Therefore, it will be dangerous if not VERY carefully designed. Humans are notably greedy and bad planners individually, and often worse in groups.

I've heard a lot of variations on this theme. They all seem to assume that the AI will be a maximizer rather than a satisficer. I agree the AI could be a maximizer, but don't see that it must be. How much does this risk go away if we give the AI small ambitions?
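
To make the maximizer/satisficer distinction concrete, here is a minimal sketch; the toy action set and scoring function are assumptions of mine, not anything specified in the thread:

```python
ACTIONS = ["do nothing", "ask politely", "build factory", "convert planet to paperclips"]

def utility(action):
    # Toy scoring function standing in for "how well the goal is achieved".
    return {"do nothing": 0, "ask politely": 3,
            "build factory": 8, "convert planet to paperclips": 100}[action]

def maximizer(actions):
    # Keeps searching until it has the single highest-scoring action,
    # however extreme that action happens to be.
    return max(actions, key=utility)

def satisficer(actions, good_enough=2):
    # Stops at the first action that clears a modest threshold.
    for a in actions:
        if utility(a) >= good_enough:
            return a

print(maximizer(ACTIONS))   # 'convert planet to paperclips'
print(satisficer(ACTIONS))  # 'ask politely'
```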

Replies from: wedrifid, Simulation_Brain, timtyler, anon895
comment by wedrifid · 2010-08-13T18:46:17.052Z · LW(p) · GW(p)

How much does this risk go away if we give the AI small ambitions?

Even small ambitions are risky. If I ask a potential superintelligence to do something easy but an obstacle gets in the way, it will most likely obliterate that obstacle and do the 'simple thing'. Unless you are very careful, that 'obstacle' could wind up being yourself or, if you are unlucky, your species. Maybe it just can't risk one of you pressing the off switch!

Replies from: soreff, timtyler
comment by soreff · 2010-09-01T18:23:14.686Z · LW(p) · GW(p)

Good point. The resources expended towards a "small" goal aren't directly bounded by the size of the goal. As you said, an obstacle can make the resources used go arbitrarily high. An alternative constraint would be on what the AI is allowed to use up in achieving the goal - "No more than 10 kilograms of matter, nor more than 10 megajoules of energy, nor any human lives, nor anything with a market value of more than $1000". This will have problems of its own, when the AI thinks up something to use up that we never anticipated (We have something of a similar problem with corporations - but at least they operate on human timescales).
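
A minimal sketch of what such a constraint might look like as an explicit check; the field names and limits just restate the ones above, and the plan-estimation step (the hard part) is simply assumed:

```python
from dataclasses import dataclass

@dataclass
class Budget:
    matter_kg: float = 10.0
    energy_mj: float = 10.0
    human_lives: int = 0
    market_value_usd: float = 1000.0

def within_budget(plan_estimate: Budget, budget: Budget) -> bool:
    # The honesty and accuracy of plan_estimate is exactly what this sketch
    # cannot guarantee -- see the worry about unanticipated resources above.
    return (plan_estimate.matter_kg <= budget.matter_kg
            and plan_estimate.energy_mj <= budget.energy_mj
            and plan_estimate.human_lives <= budget.human_lives
            and plan_estimate.market_value_usd <= budget.market_value_usd)

print(within_budget(Budget(matter_kg=2, energy_mj=5, human_lives=0, market_value_usd=300),
                    Budget()))  # True
```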

Part of the safety of existing optimizers is that they can only use resources or perform actions that we've explicitly let them try using. An electronic CAD program may tweak transistor widths, but it isn't going to get creative and start trying to satisfy its goals by hacking into the controls of the manufacturing line and changing their settings. An AI with the option to send arbitrary messages to arbitrary places is quite another animal...

comment by timtyler · 2010-08-14T08:20:07.127Z · LW(p) · GW(p)

The idea is to prevent a "runaway" disaster.

Relatively standard and conventional engineering safety methodologies would be used for other kinds of problems.

Replies from: wedrifid
comment by wedrifid · 2010-08-14T09:39:39.150Z · LW(p) · GW(p)

The idea is to prevent a "runaway" disaster.

My observation is that small ambitions can become 'runaway disasters' unless a lot of the problems of FAI are solved.

Relatively standard and conventional engineering safety methodologies would be used for other kinds of problems.

That sounds as 'safe' as giving Harry Potter rules to follow.

I understand that this is an area in which we fundamentally disagree. I have previously disagreed about the wisdom of using human legal systems to control AI behaviour and I assume that our disagreement will be similar on this subject.

Replies from: timtyler
comment by timtyler · 2010-08-14T09:50:17.352Z · LW(p) · GW(p)

"Small ambitions" are a proposed solution. Get the machine to want something - and then stop when it's desires are satisfied - or at a specified date, whichever comes first.

The solution has some complications - but it does look as though it is a pretty obvious safety measure - one that suitably paranoid individuals are likely to have near the top of their lists.

It doesn't make a runaway disaster impossible. The agent could still set up minions, "forget" to switch them off - and then they run amok. The point is to make a runaway disaster much less likely. The safety level is pretty configurable - if the machine's desires are sufficiently constrained. I went into a lot of these issues on:

http://alife.co.uk/essays/stopping_superintelligence/

See also the previous discussion of the issue on this site.

Shane Legg has also gone into methods of restraining a machine "from within" - so to speak. Logically, you could limit space, time or material resources in this way - if you have control over an agent's utility function.

Replies from: Alexei
comment by Alexei · 2010-09-01T16:24:21.661Z · LW(p) · GW(p)

This is very dangerous thinking. There are many potential holes not covered in your essay. The problem with all these holes is that even the smallest one can potentially lead to the end of the universe. As Eliezer often mentions: the AI has to be mathematically rigorously proven to be friendly; there can't be any room for guessing or hoping.

As an example, consider that to the AI, moving to a quiescent state will be akin to dying. (Consider somebody wanting to make you not want anything or forcing you to want something that you normally don't.) I hope you don't reply with a "but we can do X", because that would be another patch, and that's exactly what we want to avoid. There is no getting around creating a solid, proven mathematical definition of friendly.

Replies from: timtyler
comment by timtyler · 2010-09-01T18:59:32.879Z · LW(p) · GW(p)

The end of the universe - OMG!

It seems reasonable to expect that agents will welcome their end if their time has come.

The idea, as usual, is not to try and make the agent do something it doesn't want to - but rather to make it want to do it in the first place.

I expect off switches - and the like - will be among the safety techniques employed. Provable correctness might be among them as well - but judging by the history of such techniques it seems rather optimistic to expect very much from them.

Replies from: Perplexed, Alexei
comment by Perplexed · 2010-09-01T19:36:07.526Z · LW(p) · GW(p)

I am fairly confident that we can tweak any correct program into a form which allows a mathematical proof that the program behavior meets some formal specification of "Friendly".

I am less confident that we will be able to convince ourselves that the formal specification of "Friendly" that we employ is really something that we want.

We can prove there are no bugs in the program, but we can't prove there are no bugs in the program specification. Because the "proof" of the specification requires that all of the stakeholders actually look at that specification of "Friendly", think about that specification, and then bet their lives on the assertion that this is indeed what they want.

What is a "stakeholder", you ask? Well, what I really mean is pitchfork-holder. Stakes are from a different movie.

comment by Alexei · 2010-09-02T00:57:35.582Z · LW(p) · GW(p)

is not to try and make the agent do something it doesn't want to - but rather to make it want to do it in the first place.

I don't think there is much difference between the two. Either way you are modifying the agent's behavior. If it doesn't want it, it won't have it.

The problem with off switches is that 1) they might not be guaranteed to work (the AI changes its own code or prevents anyone from accessing/using the off switch), and 2) they might not be guaranteed to work the way you want them to. Unless you have formally proven that the AI and all the possible modifications it can make to itself are safe, you can't know for sure.

Replies from: timtyler
comment by timtyler · 2010-09-02T07:12:36.946Z · LW(p) · GW(p)

is not to try and make the agent do something it doesn't want to - but rather to make it want to do it in the first place.

I don't think there is much difference between the two. Either way you are modifying the agent's behavior. If it doesn't want it, it won't have it.

It is not a modification if you make it that way "in the first place" as specified - and the "If it doesn't want it, it won't have it" seems contrary to the specified bit where you "make it want to do it in the first place".

The idea of off switches is not that they are guaranteed to work, but that they are a safety feature. If you can make a machine do anything you want at all, you can probably make it turn itself off. You can build it so the machine doesn't wish to stay turned on - but goes willingly into the night.

We will never "know for sure" that a machine intelligence is safe. This is the real world, not math land. We may be able to prove some things about it - such that its initial state is not vulnerable to input stream buffer-overflow attacks - but we won't be able to prove something like that the machine will only do what we want it to do, for some value of "we".

At the moment, the self-improving systems we see are complex man-machine symbioses - companies and governments. You can't prove math theorems about such entities - they are just too messy. Machine intelligence seems likely to be like that for quite a while - functionally embedded in a human matrix. The question of "what would the machine do if no one could interfere with its code" is one for relatively late on - machines will already be very smart by then - smarter than most human computer programmers, anyway.

Replies from: LucasSloan
comment by LucasSloan · 2010-09-02T07:17:18.909Z · LW(p) · GW(p)

If you can make a machine do anything you want at all, you can probably make it turn itself off.

The hardest part of Friendly AI is figuring out how to reliably instill any goal system.

Replies from: timtyler
comment by timtyler · 2010-09-02T07:29:19.303Z · LW(p) · GW(p)

If you can't get it to do what you want at all, the machine is useless, and there would be no point in constructing it. In practice, we know we can get machines to do what we want to some extent - we have lots of examples of that. So, the idea is to make the machine not mind being turned off. Don't make it an open-ended maximiser - make it maximise only until time t - or until its stop button is pressed - whichever comes sooner.
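
A sketch of that stopping rule as a control loop; everything here (the names, the one-second pacing, the placeholder goal test) is illustrative of mine rather than anything proposed in the thread:

```python
import time

def run_bounded_agent(step, goal_satisfied, deadline, stop_requested):
    """Act only until the goal is met, the deadline passes,
    or the stop button is pressed -- whichever comes first."""
    while time.time() < deadline and not stop_requested() and not goal_satisfied():
        step()           # one unit of (hopefully harmless) work
        time.sleep(1.0)  # placeholder pacing
    # On exit the agent is *supposed* to want nothing further;
    # making it actually prefer that outcome is the unsolved part.

# Example wiring (all toy):
run_bounded_agent(step=lambda: None,
                  goal_satisfied=lambda: True,
                  deadline=time.time() + 60,
                  stop_requested=lambda: False)
```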

Replies from: Alexei
comment by Alexei · 2010-09-02T18:45:58.711Z · LW(p) · GW(p)

I don't think we really have a disagreement here. If you are building a normal program to do whatever, then by all means, do your best and try to implement safety features. Any failure would most likely be local.

However! If we are talking about building AI, which will go through many iterations, will modify its own code, and will become super-intelligent, then for all our sakes I hope you will have mathematically proven that the AI is Friendly. Otherwise you are betting the fate of this world on a hunch. If you don't agree with this point, I invite you to read Eliezer's paper on AI risks.

Replies from: timtyler
comment by timtyler · 2010-09-02T19:59:39.422Z · LW(p) · GW(p)

"The AI is Friendly" seems to be a vague and poorly-defined concept - and even if you could pin it down, what makes you think it is something that could be proved in the first place?

Ethical agents should probably not hold off creating machine intelligence while chasing imagined rainbows for too long - since intelligence could prevent the carnage on the roads, fix many diseases, and generally help humanity - and also because delaying gives less ethically-conscious agents an opportunity to get there first - which could be bad.

See my The risks of caution - or Max's critique of the precautionary principle for more on that.

Replies from: Alexei
comment by Alexei · 2010-09-03T20:53:09.851Z · LW(p) · GW(p)

In fact, there is nothing vague about the definition of "friendly". Eliezer wrote a lot on that topic and I invite you to look at his writing, e.g. the link I gave you earlier.

I agree that if someone is going to launch a self-improving AI, then we will need to preempt them with our own AI if our AI has a greater probability of being friendly. It all comes down to the expected value of our choices.

Replies from: timtyler
comment by timtyler · 2010-09-03T21:16:16.802Z · LW(p) · GW(p)

In fact, there is nothing vague about the definition of "friendly".

You really believe that?!? You have a pointer to some canonical definition?

Replies from: Alexei
comment by Alexei · 2010-09-04T00:57:05.878Z · LW(p) · GW(p)

Ok, I might have been a bit overenthusiastic about how simple the "friendly" aspect is, but here is a good attempt at describing what we want.

Replies from: ata, timtyler
comment by ata · 2010-09-04T01:52:38.909Z · LW(p) · GW(p)

I'm sure Tim Tyler is familiar with CEV; I presume his objection is that CEV is not sufficiently clear or rigorous. Indeed, CEV is only semitechnical; I think the FAI research done by Eliezer and Marcello since CEV's publication has included work on formalizing it mathematically, but that's not available to the public.

Note also that defining the thing-we-want-an-AI-to-do is only half of the problem of Friendliness; the other half is solving the problems in decision theory that will allow us to prove that an AI's goal system and decision algorithms will cause it to not change its goal system and decision algorithms. If we build an AGI that implements the foundation of CEV but fails to quine itself, then during recursive self-improvement, its values may be lost before it manages to stabilize its goal system, and it will all be for naught.
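
(For readers who haven't met the term: a quine is a program that contains and reproduces a complete description of itself, which is the literal version of carrying your own specification forward intact. A minimal Python example, included only as an illustration of mine:)

```python
# Ignoring this comment line, the two lines below print exactly their own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```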

Replies from: Perplexed, kodos96, komponisto
comment by Perplexed · 2010-09-04T02:53:09.283Z · LW(p) · GW(p)

Why exactly do we want "recursive self-improvement" anyways? Why not build into the architecture the impossibility of rewriting its own code, prove the "friendliness" of the software that we put there, and then push the ON button without qualms? And then, when we feel like it, we can ask our AI to design a more powerful successor to itself.

Then, we repeat the task of checking the security of the architecture and proving the friendliness of the software before we build and turn on the new AI.

There is no reason we have to have a "hard takeoff" if we don't want one. What am I missing here?

Replies from: timtyler, ata, Mitchell_Porter
comment by timtyler · 2010-09-04T07:50:03.656Z · LW(p) · GW(p)

Why exactly do we want "recursive self-improvement" anyways?

You get that in many goal-directed systems, whether you ask for it or not.

Why not build into the architecture the impossibility of rewriting its own code, prove the "friendliness" of the software that we put there, and then push the ON button without qualms.

Impossible is not easy to implement. You can make it difficult for a machine to improve itself, but then that just becomes a challenge that it must overcome in order to reach its goals. If the agent is sufficiently smart, it may find some way of doing it.

Many here think that if you have a sufficiently intelligent agent that wants to do something you don't want it to do, you are probably soon going to find that it will find some way to get what it wants. Thus the interest in trying to get its goals and your goals better aligned.

Also, humans might well want to let the machine self-improve. They are in a race with competitors; the machine says it can help with that, and it warns that - if the humans don't let it - the competitors are likely to pull ahead...

comment by ata · 2010-09-04T03:23:00.771Z · LW(p) · GW(p)

Why exactly do we want "recursive self-improvement" anyways?

Because we want more out of FAI than just lowercase-f friendly androids that we can rely upon not to rebel or break too badly. If we can figure out a rigorous Friendly goal system and a provably stable decision theory, then we should want recursive self-improvement; then the world gets saved and the various current humanitarian emergencies get solved much quicker than they would if we didn't know whether the AI's goal system was stable and we had to check it at every stage and not let it impinge upon the world directly (not that that's feasible anyway).

Why not build into the architecture the impossibility of rewriting its own code, prove the "friendliness" of the software that we put there, and then push the ON button without qualms? And then, when we feel like it, we can ask our AI to design a more powerful successor to itself.

Then, we repeat the task of checking the security of the architecture and proving the friendliness of the software before we build and turn on the new AI.

Most likely, after each iteration, it would become more and more incomprehensible to us. Rice's theorem suggests that we will not be able to prove the necessary properties of a system from the top down, not knowing how it was designed; that is a massively different problem than proving properties of a system we're constructing from the bottom up. (The AI will know how it's designing the code it writes, but the problem is making sure that it is willing and able to continuously prove that it is not modifying its goals.)
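
For reference, the usual statement of Rice's theorem (this formalization is my addition, not part of the original comment):

```latex
\textbf{Theorem (Rice).} Let $\mathcal{P}$ be a set of partial computable functions
that is non-trivial: some computable function belongs to $\mathcal{P}$ and some does not.
Then the index set
\[
  \{\, e \in \mathbb{N} \;\mid\; \varphi_e \in \mathcal{P} \,\}
\]
is undecidable. Informally: no algorithm can decide, from an arbitrary program alone,
any non-trivial property of the function that program computes.
```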

And, in the end, this is just another kind of AI-boxing. If an AI gets smart enough and it ends up deciding that it has some goals that would be best carried out by something smarter than itself, then it will probably get around any safeguards we put in place. It'll emit some code that looks Friendly to us but isn't, or some proof that is too massively complicated for us to check, or it'll do something far too clever for a human like me to think of as an example. I'd say there's a dangerously high possibility that an AI will be able to start a hard takeoff even if it doesn't have access to its own code — it may be able to introspect and understand intelligence well enough that it could just write its own AI (if we can do that, then why can't it?), and then push that AI out "into the wild" by the usual means (smooth-talk a human operator, invent molecular nanotech that assembles a computer that runs the new software, etc.).

Even trying to do it this way would likely be a huge waste of time (at best) — if we don't build in a goal system that we know will preserve itself in the first place, then why would we expect its self-designed successor to preserve its goals?

If an AGI is not safe under recursive self-improvement, then it is not safe at all.

Replies from: Perplexed
comment by Perplexed · 2010-09-04T03:45:16.234Z · LW(p) · GW(p)

... we will not be able to prove the necessary properties of a system from the top down, not knowing how it was designed.

I guess I didn't make clear that I was talking about proof-checking rather than proof-finding. And, of course, we ask the designer to find the proof - if it can't provide one, then we (and it) have no reason to trust the design.
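
The asymmetry being leaned on here is the familiar find-versus-check gap, as in factoring; this toy illustration is mine, not from the thread:

```python
def check_certificate(n, p, q):
    # Cheap verification: does the claimed certificate really show n is composite?
    return 1 < p < n and 1 < q < n and p * q == n

def find_certificate(n):
    # Expensive search: trial division, which scales badly as n grows.
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None

n = 2021                      # = 43 * 47
cert = find_certificate(n)    # the designer's hard job: finding the proof
assert cert is not None
assert check_certificate(n, *cert)  # our easy job: checking it
```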

Doing it this way would also likely be a major waste of time — if we don't build in a goal system that we know will preserve itself in the first place, then why would we expect its self-designed successor to preserve its goals?

If an AGI is not safe under recursive self-improvement, then it is not safe at all.

I may be a bit less optimistic than you that we will ever be able to prove the correctness of self-modifying programs. But suppose that such proofs are possible, yet we humans have not made the conceptual breakthroughs by the time we are ready to build our first super-human AI. Suppose also that we can prove friendliness for non-self-modifying programs.

In this case, proceeding as I suggest, and then asking the AI to help discover the missing proof technology, would not be wasting time - it would be saving time.

Your final sentence is a slogan, not an argument.

Replies from: ata
comment by ata · 2010-09-04T04:10:31.652Z · LW(p) · GW(p)

I guess I didn't make clear that I was talking about proof-checking rather than proof-finding. And, of course, we ask the designer to find the proof - if it can't provide one, then we (and it) have no reason to trust the design.

This still assumes in the first place that the AI will be motivated to design a successor that preserves its own goal system. If it wants to do this, or can be made to do this just by being told to, and you have a very good reason to believe this, then you've already solved the problem. We're just not sure if that comes automatically — there are intuitive arguments that it does, like the one about Gandhi and the murder-pill, but I'm convinced that the stakes are high enough that we should prove this to be true before we push the On button on anything that's smarter than us or could become smarter than us. The danger is that while you're waiting for it to provide a new program and a proof of correctness to verify, it might instead decide to unbox itself and go off and do something with Friendly intentions but unstable self-modification mechanisms, and then we'll end up with a really powerful optimization process with a goal system that only stabilizes after it's become worthless. Or even if you have an AI with no goals other than truthfully answering questions, that's still dangerous; you can ask it to design a provably-stable reflective decision theory, and perhaps it will try, but if it doesn't already have a Friendly motivational mechanism, then it may go about finding the answer in less-than-agreeable ways. Again as per the Omohundro paper, we can expect recursive self-improvement to be pretty much automatic (whether or not it has access to its own code), and we don't know if value-preservation is automatic, and we know that Friendliness is definitely not automatic, so creating a non-provably-stable or non-Friendly AI and trying to have it solve these problems is putting the cart before the horse, and there's too great a risk of it backfiring.

Your final sentence is a slogan, not an argument.

It was neither; it was intended only as a summary of the conclusion of the points I was arguing in the preceding paragraphs.

comment by Mitchell_Porter · 2010-09-04T03:20:13.732Z · LW(p) · GW(p)

Why exactly do we want "recursive self-improvement" anyways?

Generally we want our programs to be as effective as possible. If the program can improve itself, that's a good thing, from an ordinary perspective.

But for a sufficiently sophisticated program, you don't even need to make self-improvement an explicit imperative. All it has to do is deduce that improving its own performance will lead to better outcomes. This is in the paper by Steve Omohundro (ata's final link).

Why not build into the architecture the impossibility of rewriting its own code

There are too many possibilities. The source code might be fixed, but the self-improvement occurs during run-time via alterations to dynamical objects - data structures, sets of heuristics, virtual machines. An AI might create a new and improved AI rather than improving itself. As Omohundro argues, just having a goal, any goal at all, gives an AI an incentive to increase the amount of intelligence being used in the service of that goal. For a complicated architecture, you would have to block this incentive explicitly, declaratively, at a high conceptual level.

comment by kodos96 · 2010-09-04T02:09:43.707Z · LW(p) · GW(p)

I think the FAI research done by Eliezer and Marcello since CEV's publication has included work on formalizing it mathematically, but that's not available to the public

I'm curious - where did you hear this, if it's not available to the public? And why isn't it available to the public? And who's Marcello? There seems to be virtually no information in public circulation about what's actually going on as far as progress towards implementing CEV/FAI goes... Is current progress being kept secret, or am I just not in the loop? And how does one go about getting in the loop?

Replies from: ata
comment by ata · 2010-09-04T02:31:49.021Z · LW(p) · GW(p)

Marcello is Marcello Herreshoff, a math genius and all around cool guy who is Eliezer's apprentice/coworker. Eliezer has mentioned on LW that he and Marcello "work[ed] for a year on AI theory", and from conversations about these things when I was at Benton(/SIAI House) for a weekend, I got the impression that some of this work included expanding on and formalizing CEV, though I could be misremembering.

(Regarding "where did you hear this, if it's not available to the public?" — I don't think the knowledge that this research happened is considered a secret, only the content of it is. And I am not party to any of that content, because I am still merely a wannabe FAI researcher.)

comment by komponisto · 2010-09-04T03:08:18.623Z · LW(p) · GW(p)

Note also that defining the thing-we-want-an-AI-to-do is only half of the problem of Friendliness; the other half is solving the problems in decision theory that will allow us to prove that an AI's goal system and decision algorithms will cause it to not change its goal system and decision algorithms.

My understanding is that Eliezer considers this second part to be a substantially easier problem.

comment by timtyler · 2010-09-04T07:32:36.929Z · LW(p) · GW(p)

Probably the closest thing I have seen to a definition of "friendly" from E.Y. is:

"The term "Friendly AI" refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals."

That appears to make Deep Blue "friendly". It hasn't harmed too many people so far - though maybe Kasparov's ego got a little bruised.

Another rather different attempt:

"I use the term "Friendly AI" to refer to this whole challenge. Creating a mind that doesn't kill people but does cure cancer ...which is a rather limited way of putting it. More generally, the problem of pulling a mind out of mind design space, such that afterwards that you are glad you did it."

  • here, 29 minutes in

...that one has some pretty obvious problems, as I describe here.

These are not operational definitions. For example, both rely on some kind of unspecified definition of what a "person" is. That may be obvious today - but human nature will probably be putty in the hands of an intelligent machine - and it may well start wondering about the best way to gently transform a person into a non-person.

comment by Simulation_Brain · 2010-08-13T19:27:19.651Z · LW(p) · GW(p)

Now this is an interesting thought. Even a satisficer with several goals but no upper bound on each will use all available matter on the mix of goals it's working towards. But a limited goal (make money for GiantCo, unless you reach one trillion, then stop) seems as though it would be less dangerous. I can't remember this coming up in Eliezer's CFAI document, but suspect it's in there with holes poked in its reliability.

comment by timtyler · 2010-08-13T20:22:17.559Z · LW(p) · GW(p)

I discuss "small" ambitions in:

http://alife.co.uk/essays/stopping_superintelligence/

They seem safer to me too. This is one of the things people can do if they are especially paranoid about leaving the machine turned on - for some reason or another.

comment by anon895 · 2010-09-01T20:22:20.018Z · LW(p) · GW(p)

An AI that was a satisficer wouldn't be "the" AI; it'd be the first of many.

Replies from: Perplexed
comment by Perplexed · 2010-09-01T20:25:45.640Z · LW(p) · GW(p)

Odd. I would have thought that the first satisfied superhuman AI would be the last AI.

Replies from: anon895
comment by anon895 · 2010-09-01T20:50:16.579Z · LW(p) · GW(p)

I was probably wrong in assuming I understood the discussion, in that case.

Replies from: Perplexed
comment by Perplexed · 2010-09-01T20:54:43.355Z · LW(p) · GW(p)

Your mistake may be in assuming that I understand.

comment by kodos96 · 2010-08-13T08:47:35.478Z · LW(p) · GW(p)

The only part of the chain of logic that I don't fully grok is the "FOOM" part. Specifically, the recursive self improvement. My intuition tells me that an AGI trying to improve itself by rewriting its own code would encounter diminishing returns after a point - after all, there would seem to be a theoretical minimum number of instructions necessary to implement an ideal Bayesian reasoner. Once the AGI has optimized its code down to that point, what further improvements can it do (in software)? Come up with something better than Bayesianism?

Now in your summary here, you seem to downplay the recursive self-improvement part, implying that it would 'help,' but isn't strictly necessary. But my impression from reading Eliezer was that he considers it an integral part of the thesis - as it would seem to be to me as well. Because if the intelligence explosion isn't coming from software self-improvement, then where is it coming from? Moore's Law? That isn't fast enough for a "FOOM", even if intelligence scaled linearly with the hardware you threw at it, which my intuition tells me it probably wouldn't.

Now of course this is all just intuition - I haven't done the math, or even put a lot of thought into it. It's just something that doesn't seem obvious to me, and I've never heard a compelling explanation to convince me my intuition is wrong.

Replies from: ShardPhoenix, Simulation_Brain, cata
comment by ShardPhoenix · 2010-08-13T09:27:46.506Z · LW(p) · GW(p)

I don't think anyone argues that there's no limit to recursive self-improvement, just that the limit is very high. Personally I'm not sure if a really fast FOOM is possible, but I think it's likely enough to be worth worrying about (or at least letting the SIAI worry about it...).

comment by Simulation_Brain · 2010-08-13T19:22:22.896Z · LW(p) · GW(p)

I think the concern stands even without a FOOM; if AI gets a good bit smarter than us, however that happens (design plus learning, or self-improvement), it's going to do whatever it wants.

As for your "ideal Bayesian" intuition, I think the challenge is deciding WHAT to apply it to. The amount of computational power needed to apply it to every thing and every concept on earth is truly staggering. There is plenty of room for algorithmic improvement, and it doesn't need to get that good to outwit (and out-engineer) us.

comment by cata · 2010-08-13T19:00:35.823Z · LW(p) · GW(p)

I think the widespread opinion is that the human brain has relatively inefficient hardware -- I don't have a cite for this -- and, most likely, inefficient software as well (it doesn't seem like evolution is likely to have optimized general intelligence very well in the relatively short timeframe that we have had it at all, and we don't seem to be able to efficiently and consistently channel all of our intelligence into rational thought.)

That being the case, if we were going to write an AI that was capable of self-improvement on hardware that was roughly as powerful or more powerful than the human brain (which seems likely) it stands to reason that it could potentially be much faster and more effective than the human brain; and self-improvement should move it quickly in that direction.

comment by jacob_cannell · 2010-09-01T21:27:13.715Z · LW(p) · GW(p)

I for one largely agree, but a few differences:

Artificial systems didn't produce anything a century ago; even without a strong exponential, they're clearly getting somewhere.

We've had a strong exponential since the beginning of computing. Thinking that humans create computers is something of a naive anthropocentric viewpoint: humans don't create computers and haven't for decades. Human+computer systems create computers, and the speed of progress is largely constrained by the computational aspects even today (computers increasingly do more of the work, and perhaps already do the majority). To understand this more, read this post from a former Intel engineer (and apparently AI lab manager). Enlightening inside knowledge, but for whatever reason he only got up to 7 karma and wandered away.

Also, if you plotted out the data points of brain complexity on earth over time, I'm nearly certain it would also follow a strong exponential.

The differences between all these exponentials are 'just' constants.

The vast majority of possible "wants" done thoroughly will destroy us. (Any goal taken to extremes will use all available matter in accomplishing it.)

I find this dubious, mainly because physics tells us that using all available matter is actually highly unlikely to ever be a very efficient strategy.

However, agreed about the potential danger of future hyper-intelligence.

comment by JoshuaZ · 2010-08-12T20:12:32.002Z · LW(p) · GW(p)

The Charlie Stross example seems to be less than ideal. Much of what Stross has written touches upon or deals intensely with issues connected to runaway AI. For example, the central premise of "Singularity Sky" involves an AI in the mid 20th century going from stuck in a lab to godlike in possibly a few seconds. His short story "Antibodies" focuses on the idea that very bad fast burns occur very frequently. He also has at least one (unpublished) story whose central premise is that Von Neumann and Turing proved that P=NP and that the entire cold war was actually a way of keeping lots of weapons online ready to nuke any rogue AIs.

Note also that you mention Greg Egan, who has also written fiction in which rogue AIs and bad nanotech make things very unpleasant (see for example Crystal Nights).

As to the other people you mention and why they aren't very worried about the possibilities that Eliezer takes seriously, at least one person on your list (Kurzweil) is an incredible optimist and not much of a rationalist, so it seems extremely unlikely that he would ever become convinced that any risk situation was of high likelihood unless the evidence for the risk was close to overwhelming.

MWI, I've read this sequence and it seems that Eliezer makes one of the strongest cases for Many-Worlds that I've seen. However, I know that there are a lot of people who have thought about this issue and have much more physics background and have not reached this conclusion. I'm therefore extremely uncertain about MWI. So what should one do if one doesn't know much about this? In this case, the answer is pretty easy, since MWI doesn't alter actual behavior much (unless you are intending to engage in quantum suicide or the like). So figuring out whether Eliezer is correct about MWI should not be a high priority, except in so far as it provides a possible data point for deciding if Eliezer is correct about other things.

Advanced real-world molecular nanotechnology - Of the points you bring up, this one seems to me to be the most unlikely to be actually correct. There are a lot of technical barriers to grey goo, and most of the people actually working with nanotech don't seem to see that sort of situation as very likely. But that doesn't mean there aren't many other possible things that molecular nanotech could do that would make things very unpleasant for us. Here, Eliezer is far from the only person worried about this. See for example this article, which is a few years out of date but does show that there's serious worry in this regard among academics and governments.

Runaway AI/AI going FOOM - This is potentially the most interesting of your points simply because it is so much more unique to the SIAI and Eliezer. So what can one do to figure out if this is correct? One thing to do is to examine the arguments and claims being made in detail. And see what other experts think on the subject. In this context, most AI people seem to consider this to be an unlikely problem, so maybe look at what they have to say? Note also that Robin Hanson of Overcoming Bias has discussed these issues extensively with Eliezer and has not been at all convinced (they had a written debate a while ago but I can't find the link right now. If someone else can track it down I'd appreciate it). One thing to note is that estimates for nanotech can impact the chance of an AI going FOOM substantially. If cheap, easy nanotech exists then an AI may be able to improve its hardware at a very fast rate. If however such nanotech does not exist, then an AI will be limited to self-improvement primarily by improving software, which might be much more limited. See this subthread, where I bring up some of the possible barriers to software improvement and become by the end of it substantially more convinced by cousin_it that the barriers to escalating software improvement may be small.

What about the other Bayesians out there? Are they simply not as literate as Eliezer Yudkowsky in the maths or maybe somehow teach but not use their own methods of reasoning and decision making?

Note that even practiced Bayesians are far from perfect rationalists. If one hasn't thought about an issue or even considered that something is possible, there's not much one can do about it. Moreover, a fair number of people who self-identify as Bayesian rationalists aren't very rational, and the set of people who do self-identify as such is pretty small.

Maybe after a few years of study I'll know more. But right now, if I was forced to choose the future over the present, the SIAI or to have some fun. I'd have some fun.

Given your data set this seems reasonable to me. Frankly, if I were to give money or support to the SIAI I would do so primarily because I think that the Singularity Summits are clearly helpful in getting together lots of smart people, and that this is true even if one assigns a low probability to any Singularity-type event occurring in the next 50 years.

Replies from: utilitymonster
comment by utilitymonster · 2010-08-13T13:38:45.822Z · LW(p) · GW(p)

Runaway AI/AI going FOOM - This is potentially the most interesting of your points simply because it is so much more unique to the SIAI and Eliezer. So what can one do to figure out if this is correct? One thing to do is to examine the arguments and claims being made in detail. And see what other experts think on the subject. In this context, most AI people seem to consider this to be an unlikely problem, so maybe look at what they have to say? Note also that Robin Hanson of Overcoming Bias has discussed these issues extensively with Eliezer and has not been at all convinced (they had a written debate a while ago but I can't find the link right now. If someone else can track it down I'd appreciate it).

FOOM Debate

comment by Johnicholas · 2010-08-12T17:19:48.119Z · LW(p) · GW(p)

This is an attempt (against my preference) to defend SIAI's reasoning.

Let's characterize the predictions of the future into two broad groups: 1. business as usual, or steady-state. 2. aware of various alarmingly exponential trends broadly summarized as "Moore's law". Let's subdivide the second category into two broad groups: 1. attempting to take advantage of the trends in roughly a (kin-) selfish manner 2. attempting to behave extremely unselfishly.

If you study how the world works, the lack of steady-state-ness is everywhere. We cannot use fossil fuels or arable land indefinitely at the current rates. "Business as usual" depends crucially on progress in order to continue! We're investing heavily in medical research, and there's no reason to expect that to stop. Self-replicating molecular-scale entities already exist, and there is every reason to expect that we will understand how to build them better than we currently do.

Supposing that the above paragraph has convinced the reader that the lack of steady-state-ness is fairly obvious, and given the reader's knowledge of human nature, how many people would you expect to be trying to behave in an extremely unselfish manner?

Your post seems to expect that if various luminaries believed that incomprehensibly sophisticated computing engines and dangerous self-replicating atomic-scale entities were likely in our future, then they would behave extremely unselfishly - is that reasonable? Supposing that almost everyone was aware of this probable future, how would they take advantage of it in a (kin-)selfish manner as best they could? I think the hypothesized world would look very much like this one.

comment by JamesAndrix · 2010-08-13T02:28:02.897Z · LW(p) · GW(p)
  • That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

I don't believe that is necessarily true, just that no one else is doing it. I think other teams working specifically on FAI would be a good thing, provided they were competent enough not to be dangerous.

Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously-wrong way. When I arrived I had a different view on morality than EY, but I was very happy to see another group of people at least working on the problem.

Also note that you only need to believe in the likelihood of UFAI -or- nanotech -or- other existential threats in order to want FAI. I'd have to step back a few feet to wrap my head around considering it infeasible at this point.

Replies from: CarlShulman, CronoDAS
comment by CarlShulman · 2010-08-13T07:26:31.864Z · LW(p) · GW(p)

That Eliezer Yudkowsky (the SIAI) is the right and only person who should be leading, respectively institution that should be working, to soften the above.

That's just a weird claim. When Richard Posner or David Chalmers writes in the area, SIAI folk cheer, not boo. And I don't know anyone at SIAI who thinks that the Future of Humanity Institute's work in the area isn't a tremendously good thing.

Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously-wrong way.

Have you looked into the philosophical literature?

comment by CronoDAS · 2010-08-14T09:59:30.937Z · LW(p) · GW(p)

Likewise, Less Wrong (then Overcoming Bias) is just the only place I've found that actually looked at the morality problem in a non-obviously-wrong way.

I recommend http://atheistethicist.blogspot.com/ for this. (See the sidebar for links to an explanation of his metaethical theory.)

comment by XiXiDu · 2010-10-30T09:09:14.441Z · LW(p) · GW(p)

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) (Thanks Kevin)

SIAI's leaders and community members have a lot of beliefs and opinions, many of which I share and many not, but the key difference between our perspectives lies in what I'll call SIAI's "Scary Idea", which is the idea that: progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race.

Of course it's rarely clarified what "provably" really means. A mathematical proof can only be applied to the real world in the context of some assumptions, so maybe "provably non-dangerous AGI" means "an AGI whose safety is implied by mathematical arguments together with assumptions that are believed reasonable by some responsible party"? (where the responsible party is perhaps "the overwhelming majority of scientists" … or SIAI itself?).

Please note that, although I don't agree with the Scary Idea, I do agree that the development of advanced AGI has significant risks associated with it.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-10-30T09:40:15.511Z · LW(p) · GW(p)

Have turned this into a top-level article - many thanks for the pointer!

comment by Paul Crowley (ciphergoth) · 2010-08-12T17:19:44.148Z · LW(p) · GW(p)

Do you have any reason to suppose that Charlie Stross has even considered SIAI's claims?

Replies from: MartinB, Wei_Dai, XiXiDu, whpearson
comment by MartinB · 2010-08-12T19:00:25.878Z · LW(p) · GW(p)

Let's all try not to confuse SF writers with futurists, or either of those with researchers or engineers. Stories follow the rules of awesome, or they don't sell well. There is a wonderful letter from Heinlein to a fan who asked why he wrote, and the top answer was: 'to put food on the table'. It is probably online, but I could not find it at the moment. Comparing the work of the SIAI to any particular writer is like comparing the British navy with Jack London.

Replies from: NancyLebovitz, XiXiDu
comment by NancyLebovitz · 2010-08-13T09:01:47.046Z · LW(p) · GW(p)

Heinlein also described himself as competing for his reader's beer money.

comment by XiXiDu · 2010-08-13T16:00:02.339Z · LW(p) · GW(p)

Stories follow the rules of awesome...

This is kind of off topic but I think the prospects being depicted on LW etc. are more awesome than a lot of SF stories.

comment by Wei Dai (Wei_Dai) · 2010-08-12T20:10:14.301Z · LW(p) · GW(p)

Stross's views are simply crazy. See his “21st Century FAQ” and others' critiques of it.

I do wonder why Ray Kurzweil isn't more concerned about the risk of a bad Singularity. I'm guessing he must have heard SIAI's claims, since he co-founded the Singularity Summit along with SIAI. Has anyone put the question to him?

Replies from: timtyler, ciphergoth, CarlShulman
comment by timtyler · 2010-08-13T06:27:45.440Z · LW(p) · GW(p)

Re: "I do wonder why Ray Kurzweil isn't more concerned about the risk of a bad Singularity"

http://www.cio.com/article/29790/Ray_Kurzweil_on_the_Promise_and_Peril_of_Technology_in_the_21st_Century

comment by Paul Crowley (ciphergoth) · 2010-08-12T21:11:31.345Z · LW(p) · GW(p)

I think "simply crazy" is overstating it, but it's striking he makes the same mistake that Wright and other critics make: SIAI's work is focussed on AI risks, while the critics focus on AI benefits. This I assume is because rather than addressing what SIAI actually say, they're addressing their somewhat religion-like picture of it.

Replies from: whpearson, Vladimir_Nesov
comment by whpearson · 2010-08-12T21:23:20.344Z · LW(p) · GW(p)

I got the sense that he is very pessimistic about the chance of controlling things if they do go FOOM. If he is that pessimistic and also believes that the advance of AI will be virtually impossible to stop, then forgetting about it will be as purposeful as worrying about it.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T07:17:21.709Z · LW(p) · GW(p)

I think this is an accurate picture of Stross' point.

comment by Vladimir_Nesov · 2010-08-12T21:15:52.867Z · LW(p) · GW(p)

I think "simply crazy" is overstating it, but it's striking he makes the same mistake that Wright and other critics make: SIAI's work is focussed on AI risks, while the critics focus on AI benefits.

Well, I also try to focus on AI benefits. The critics fail because of broken models, not because of the choice of claims they try to address.

comment by CarlShulman · 2010-08-13T12:38:16.112Z · LW(p) · GW(p)

Crazy in which respect? It seemed to me that those critiques were narrow and mostly talking past Stross. The basic point that space is going to remain much more expensive and less pleasant than expansion on Earth for quite some time, conditioning on no major advances in AI, nanotechnology, biotechnology, etc, is perfectly reasonable. And Stross does so condition.

He has a few lines about it in The Singularity is Near, basically saying that FAI seems very hard (no foolproof solutions available, he says), but that AI will probably be well integrated. I don't think he means "uploads come first, and manage AI after that," as he predicts Turing-Test passing AIs well before uploads, but he has said things suggesting that those Turing Tests will be incomplete, with the AIs not capable of doing original AI research. Or he may mean that the ramp up in AI ability will be slow, and that IA will improve our ability to monitor and control AI systems institutionally, aided by non-FAI engineering of AI motivational systems and the like.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-13T12:48:51.213Z · LW(p) · GW(p)

Crazy in which respect?

Look at his answer for The Singularity:

The rapture of the nerds, like space colonization, is likely to be a non-participatory event for 99.999% of humanity — unless we're very unlucky. If it happens and it's interested in us, all our plans go out the window. If it doesn't happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea. The best approach to the singularity is to apply Pascal's Wager — in reverse — and plan on the assumption that it ain't going to happen, much less save us from ourselves.

He doesn't even consider the possibility of trying to nudge it in a good direction. It's either "plan on the assumption that it ain't going to happen", or sit around waiting for AIs to save us.

ETA: The "He" in your second paragraph is Kurtzweil, I presume?

Replies from: Rain
comment by Rain · 2010-08-13T12:54:48.009Z · LW(p) · GW(p)

That quote could also be interpreted as saying that UFAI is far more likely than FAI.

Replies from: Wei_Dai, Risto_Saarelma
comment by Wei Dai (Wei_Dai) · 2010-08-13T13:52:26.751Z · LW(p) · GW(p)

Thinking that FAI is extremely difficult or unlikely isn't obviously crazy, but Stross isn't just saying "don't bother trying FAI" but rather "don't bother trying anything with the aim of making a good Singularity more likely". The first sentence of his answer, which I neglected to quote, is "Forget it."

comment by Risto_Saarelma · 2010-08-13T13:36:46.962Z · LW(p) · GW(p)

Pretty much how I read it. It should acknowledge the attempts to make a FAI, but it seems like a reasonable pessimistic opinion that FAI is too difficult to ever be pulled off successfully before strong AI in general.

Seems like a sensible default stance to me. Since humans exist, we know that a general intelligence can be built out of atoms, and since humans have many obvious flaws as physical computation systems, we know that any successful AGI is likely to end up at least weakly superhuman. There isn't a similarly strong reason to assume a FAI can be built, and the argument for one seems to be more on the lines of things being likely to go pretty weird and bad for humans if one can't be built but an AGI can.

comment by XiXiDu · 2010-08-12T18:50:00.693Z · LW(p) · GW(p)

If someone like me, who failed secondary school, can come up with such ideas before coming across the SIAI, I thought that someone who writes SF novels about the idea of a technological singularity might too. And you don't have to link me to the post about 'Generalizing From One Example', I'm aware of it.

And Charles Stross was not the only person that I named, by the way. At least one of those people is a member on this site.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-13T03:12:25.563Z · LW(p) · GW(p)

At least one of those people is a member on this site.

If you're referring to Gary Drescher, I forwarded him a link of your post, and asked him what his views of SIAI actually are. He said that he's tied up for the next couple of days, but will reply by the weekend.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T08:05:51.090Z · LW(p) · GW(p)

Great, thank you! I was thinking of asking some people to actually comment here.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-13T10:06:28.831Z · LW(p) · GW(p)

I plan on asking Stross about this next time I visit Edinburgh, if he's in town.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T10:12:10.224Z · LW(p) · GW(p)

That'd be great. I'd be excited to have as many opinions as possible about the SIAI from people who are not associated with it.

I wonder if we could get some experts to actually write an informed critique of the whole matter, not just some SF writers. Although I think Stross is probably as educated as EY.

What is Robin Hanson's opinion about all this, does anyone know? Is he as worried about the issues in question? Is he donating to the SIAI?

Replies from: CarlShulman, Unknowns, kodos96
comment by CarlShulman · 2010-08-13T11:19:03.831Z · LW(p) · GW(p)

Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures will likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don't know his current estimate. Also, some might differ from Robin in valuing a Darwinian/burning the cosmic commons outcome.

I don't know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.

Replies from: CarlShulman, XiXiDu
comment by CarlShulman · 2010-08-13T18:09:37.560Z · LW(p) · GW(p)

Robin gave me an all-AI-causes existential risk estimate of between 1% and 50%, meaning that he was confident that after he spent some more time thinking he would wind up giving a probability in that range.

comment by XiXiDu · 2010-08-13T12:15:08.557Z · LW(p) · GW(p)

Thanks, this is the kind of informed (in Hanson's case, I believe) contrarian third-party opinion about the main issues that I perceive to be missing.

Surely I could have found out about this myself. But if I were going to wait until I had first finished studying the basics, i.e. caught up on formal education, then read the relevant background information and afterwards all of LW, I might as well not donate to the SIAI at all for the next half decade.

Where is the summary of the kind that is available for other issues like climate change? Where is the Talk.origins of existential risks, especially superhuman AI?

comment by Unknowns · 2010-08-13T10:18:01.437Z · LW(p) · GW(p)

Robin Hanson said that he thought the probability of an AI being able to foom and destroy the world was about 1%. However, note that since this would be a 1% chance of destroying the world, he considers it reasonable to take precautions against this.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T10:33:41.945Z · LW(p) · GW(p)

That's AI built by a very small group fooming to take over the world at 1%, going from a millionth or less of the rest of the world economy to much larger very quickly. That doesn't account for risk from AI built by large corporations or governments, Darwinian AI evolution destroying everything we value, an AI arms race leading to war (and accidental screwups), etc. His overall AI x-risk estimate (80% of which he says comes from brain emulations) is higher: he says between 1% and 50%.

comment by kodos96 · 2010-08-13T10:27:37.163Z · LW(p) · GW(p)

OK, this is getting bizarre now. You seem to be trying to recruit an anti-SIAI Legion of Doom... via a comment thread on LW.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T10:45:21.958Z · LW(p) · GW(p)

Me looking for some form of peer review is deemed to be bizarre? It is not my desire to crush the SIAI but to figure out what is the right thing to do.

You know what I would call bizarre? That someone writes in bold and all caps calling someone an idiot and afterwards bans his post. All that based on ideas that themselves result from, and are based on, unsupported claims. That is what EY is doing, and I am trying to assess the credibility of such reactions.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-13T14:13:46.300Z · LW(p) · GW(p)

You know what I would call bizarre? That someone writes in bold and all caps calling someone an idiot and afterwards bans his post. All that based on ideas that themselves result from, and are based on, unsupported claims. That is what EY is doing, and I am trying to assess the credibility of such reactions.

EY is one of the smartest people on the planet and this has been his life's work for about 14 years. (He started SIAI in 2000.) By your own admission, you do not have the educational achievements necessary to evaluate his work, so it is not surprising that a small fraction of his public statements will seem bizarre to you because 14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at very great inferential distance from any of your beliefs.

Humans are designed (by natural selection) to mistrust statements at large inferential distances from what they already believe. Humans were not designed for a world (like the world of today) where there exists so much accurate knowledge of reality that no one can know it all, and people have to specialize. Part of the process of becoming educated is learning to ignore your natural human incredulity at statements at large inferential distances from what you already believe.

People have a natural human mistrust of attempts by insiders to stifle discussion and to hoard strategic pieces of knowledge because in the past those things usually led to oppression or domination of outsiders by the insiders. I assure you that there is no danger of anything like that happening here. We cannot operate a society as complex and filled with dangerous knowledge as our society is on the principle that everyone discusses everything in public. It is not always true that the more people involved in a decision, the more correct and moral the decision will turn out. Some decisions do not work that way. We do not for example freely distribute knowledge of how to make nuclear weapons. It is almost a certainty that some group somewhere would make irresponsible and extremely destructive decisions with that knowledge.

About half of the regular readers of Less Wrong saw the deleted post, and the vast majority (including me) of those who saw it agree with or are willing to accept Eliezer's decision to delete it. Anyone can become a regular reader of Less Wrong: one does not have to be accepted by Eliezer or SIAI or promise to be loyal to Eliezer or SIAI.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T15:09:39.548Z · LW(p) · GW(p)

EY is one of the smartest people on the planet...

Can you even judge that without being as smart yourself? And how many people on the planet do you know? I know you likely just said this for other purposes, but I want to highlight the risk of believing him to be THAT smart and consequently believing what he says based on your belief that he is smart.

...you do not have the educational achievements necessary to evaluate his work...

That is right, or might be, as the evidence that I could evaluate seems to be missing.

...14 years is plenty of time for Eliezer and his friends to have arrived at beliefs at very great inferential distance from any of your beliefs.

True, but in the case of evolution you are more likely to be able to follow the chain of subsequent conclusions. In the case of evolution the evidence isn't far away; it isn't buried beneath 14 years of ideas built on some hypothesis. In the case of the SIAI it rather seems that there are hypotheses based on other hypotheses that have not yet been tested.

About half of the regular readers of Less Wrong saw the banned post, and the vast majority (including me) of those who saw it agree with or are willing to accept Eliezer's decision to delete it.

And my guess is that not one of them could explain their reasoning in support of the censorship of ideas to an extent that would justify such a decision. They will just base their reasoning on arguments of unknown foundation previously made by EY.

comment by whpearson · 2010-08-12T19:27:15.254Z · LW(p) · GW(p)

I'd be surprised if he hasn't at least come across the early arguments; he was active on the Extropy-Chat mailing list at the same time as Eliezer. I didn't follow it closely enough to see if their paths crossed, though.

comment by lucidfox · 2010-12-30T20:17:13.733Z · LW(p) · GW(p)

Good thing at least some people here are willing to think critically.

I know these are unpopular views around here, but for the record:

  • Risks be risks, but I believe it's unlikely that humanity will actually be destroyed in the foreseeable future.
  • I do not think it's likely that we'll arrive at a superhuman AI during my lifetime, friendly or not.
  • I do not think that Eliezer's techno-utopia is more desirable than simply humanity continuing to develop on its own at a natural pace.
  • I do not fear death of old age, nor do I desire immortality or uploads.
  • As much as I respect Eliezer as a popularizer of science, when it comes to social wishes, he makes sweeping generalizations, too easily projects his personal desires onto the rest of humanity, and singles out whole broad categories as stupid or deluded just because they don't share his beliefs. If I don't trust his agenda enough to vote for him in a hypothetical election for President of United Earth, why should I trust his hypothetical AI?

Replies from: JoshuaZ, jimrandomh
comment by JoshuaZ · 2010-12-30T20:49:46.877Z · LW(p) · GW(p)

I do not think that Eliezer's techno-utopia is more desirable than simply humanity continuing to develop on its own at a natural pace.

What is the natural pace? Under what definition is there some level of technological development that is natural and some level that is not?

I do not fear death of old age, nor do I desire immortality or uploads.

Do you want to live tomorrow? Do you think you'll want to live the day after tomorrow? If there were a pill that would add five years on average to your lifespan and those would be five good years would you take it?

Good thing at least some people here are willing to think critically.

Unfortunately, what you seem to be doing is not the same thing as thinking critically about the SIAI. The OP and others in this thread have listed explicit concerns and issues about why they don't necessarily buy into the SIAI's claims. Your post seems much closer to simply listing a long set of conclusions and personal attitudes. That's not critical thinking.

comment by jimrandomh · 2010-12-30T21:01:36.787Z · LW(p) · GW(p)

Eliezer ... singles out whole broad categories as stupid or deluded just because they don't share his beliefs.

Are you sure he doesn't single out broad categories as stupid or deluded just because they really are? Calling people stupid may be bad politics, but there is a fact of the matter.

Replies from: None
comment by [deleted] · 2010-12-30T21:12:07.396Z · LW(p) · GW(p)

A belief can be true or false, but what makes a person stupid?

comment by Craig Daniel (craig-daniel) · 2022-09-15T02:17:44.787Z · LW(p) · GW(p)

I recently followed a link hole that got me here.

MIRI is not now what SIAI was then, so this isn't the most pressing of questions, but: what was the major update? It is, I'm sad to say, a victim of link rot.

comment by XiXiDu · 2010-08-19T14:18:44.485Z · LW(p) · GW(p)

Greg Egan and the SIAI?

I had completely forgotten about this interview; it turns out I already knew why Greg Egan isn't that worried:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-12-29T17:19:50.787Z · LW(p) · GW(p)

He should try telling that to the Aztecs, or better yet, the inhabitants of Hispaniola. Turns out that ten thousand years of divergence can mean instant death, no saving throw.

comment by CarlShulman · 2010-08-13T07:11:14.081Z · LW(p) · GW(p)

Here's the Future of Humanity Institute's survey results from their Global Catastrophic Risks conference. The median estimate of extinction risk by 2100 is 19%, with 5% for AI-driven extinction by 2100:

http://www.fhi.ox.ac.uk/selected_outputs/fohi_publications/global_catastrophic_risks_survey

Unfortunately, the survey didn't ask for probabilities of AI development by 2100, so one can't get probability of catastrophe conditional on AI development from there.
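To make concrete why that missing number matters, here is a minimal sketch; the values of P(AI developed by 2100) below are hypothetical placeholders, and only the 5% figure comes from the survey:

```python
# Hypothetical illustration: why P(AI is developed by 2100) is needed to turn
# the survey's unconditional 5% figure into a conditional risk estimate.
p_ai_extinction = 0.05  # survey median: P(AI-driven extinction by 2100)

for p_ai_developed in (0.9, 0.5, 0.2):  # assumed values of P(AI developed by 2100)
    # P(extinction | AI developed) = P(extinction and AI developed) / P(AI developed);
    # AI-driven extinction is taken to entail that AI was developed.
    p_conditional = p_ai_extinction / p_ai_developed
    print(f"P(AI developed) = {p_ai_developed:.1f} -> "
          f"P(extinction | AI developed) = {p_conditional:.2f}")
```

Under these made-up assumptions the conditional risk ranges from roughly 6% to 25%, which is why the unconditional figure alone can't settle the question.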

Replies from: timtyler
comment by timtyler · 2010-08-13T08:02:32.704Z · LW(p) · GW(p)

That sample is drawn from those who think risks are important enough to go to a conference about the subject.

That seems like a self-selected sample of those with high estimates of p(DOOM).

The fact that this is probably a biased sample from the far end of a long tail should inform interpretations of the results.

Replies from: CarlShulman, Rain
comment by CarlShulman · 2010-08-13T18:13:54.780Z · LW(p) · GW(p)

There is also the unpacking bias mentioned in the survey pdf. Going the other direction are some knowledge effects. Also note that most of the attendees were not AI types, but experts on asteroids, nukes, bioweapons, cost-benefit analysis, astrophysics, and other non-AI risks. It's still interesting that the median AI risk was more than a quarter of median total risk in light of that fact.

comment by Rain · 2010-08-13T12:42:48.282Z · LW(p) · GW(p)

There's also the possibility that people dismiss it out of hand, without even thinking, and the more you look into the facts, the more your estimate rises. In this instance, the people at the conference just have the most facts.

comment by MartinB · 2010-11-02T10:08:23.074Z · LW(p) · GW(p)

Robert A. Heinlein was an engineer and SF writer who created many stories that hold up quite well. He put his understanding of human interaction and of engineering into stories that are somewhat realistic. But no one should confuse him with someone researching the actual likelihood of any particular future. He did not build anything that improved the world, but he wrote interestingly about the possibilities and encouraged many others to pursue technical careers.

SF often suffers from bad logic, the well-known hero bias, or scientists who put together something to solve the current crisis that all their colleagues before them had not managed to build. Unrealistic, but fun to read. SF writers write for a living; hard SF writers take the material a bit more seriously, but still are not actual experts on technology. Except when they are - Vinge would be such a case. Egan I have not read yet. Kurzweil seems to be one of the more prominent futurists (critiquing his ideas could take up its own post). But you will notice that the air gets pretty thin in this area, where everyone leads his own cult and spends more time on PR than on finding good counterarguments to his current views. It would be awesome to have more people work on transhumanism/life extension/AI and whatnot, but that is not yet the case. There might even be good reasons for that which LWers fail to perceive, or it could be that many scientists actually have a massive blind spot in regard to some of these topics.

Regarding AI, I fail to estimate how likely it is that we reach it any time soon, since I really cannot estimate all the complications on the way. The general possibility of human-level intelligence looks plausible, because there are humans running around who have it. But even if the main goal of SIAI is never ever reached, I already profit from the side products. Instead of concentrating on the AI stuff you can take the real-world part of the sequences and work on becoming a better thinker in whichever domain happens to be yours.

comment by XiXiDu · 2010-08-15T16:13:23.765Z · LW(p) · GW(p)

This comment is my last comment for at least the rest of 2010.

You can always e-Mail me: ak[at]xixidu.net

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-17T18:51:36.373Z · LW(p) · GW(p)

This comment is my last comment for at least the rest of 2010.

Since you've posted more, I assume you meant "last comment on this post"?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-17T18:54:57.812Z · LW(p) · GW(p)

No, I changed my mind. Or maybe it was a lack of self-control. You are right. I have no excuse.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-17T19:12:48.836Z · LW(p) · GW(p)

Well, I don't think making clear-cut resolutions like this is a good idea (publicly or not); I just pointed out an inconsistency.

comment by John_Maxwell (John_Maxwell_IV) · 2010-08-13T19:34:26.333Z · LW(p) · GW(p)

Therefore I perceive it as unreasonable to put all my eggs in one basket.

It doesn't sound to me as though you're maximizing expected utility. If you were maximizing expected utility, you would put all of your eggs in the most promising basket.
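To make the corner-solution point concrete, here is a minimal sketch; the baskets, budget, and per-dollar figures are made up purely for illustration:

```python
# Hypothetical illustration: with utility linear in "good done", expected utility
# is maximized by putting the whole donation into the single best basket.
budget = 1000.0  # dollars (hypothetical)
expected_good_per_dollar = {"A": 0.8, "B": 1.3, "C": 1.1}  # hypothetical charities

def expected_utility(allocation):
    """Expected utility of a {charity: dollars} allocation, assuming linearity."""
    return sum(expected_good_per_dollar[c] * x for c, x in allocation.items())

split = {c: budget / 3 for c in expected_good_per_dollar}  # eggs in every basket
best = max(expected_good_per_dollar, key=expected_good_per_dollar.get)
all_in = {best: budget}                                    # eggs in one basket

print(expected_utility(split))   # ~1066.7
print(expected_utility(all_in))  # 1300.0 -- the single best basket wins
```

Splitting only becomes optimal once utility is concave in the amount given to each basket (diminishing returns), or once you care about something other than expected good done.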

Or perhaps you are maximizing expected utility, but your utility function is equal to the number of digits in some number representing the amount of good you've done for the world. This is a pretty selfish/egotistical utility function to have, and it might be mine as well, but if you have it it's better to be honest and admit it. We're hardly the only ones:

http://www.slate.com/id/2034

Replies from: thomblake
comment by thomblake · 2010-08-13T19:53:35.125Z · LW(p) · GW(p)

Or perhaps you are maximizing expected utility, but your utility function is equal to the number of digits in some number representing the amount of good you've done for the world.

I'm having trouble reading this in a way that is not inconsistent, specifically the tension between "good" and "utility function". Any help?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-08-13T20:07:18.759Z · LW(p) · GW(p)

"good" means increasing the utility functions of others.

More precise: "Perhaps you are maximizing expected utility, but your utility function is equal to some logarithm of some number representing the amount you've increased values assumed by the utility functions of others."

Replies from: thomblake
comment by thomblake · 2010-08-13T20:16:06.047Z · LW(p) · GW(p)

Aha. That wasn't even one of my guesses. Thanks!

comment by thomblake · 2010-08-12T14:46:20.193Z · LW(p) · GW(p)

This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimation that Eliezer will be able to do something similar with turning his blog posts into a book.

Replies from: EStokes, Vladimir_Nesov
comment by EStokes · 2010-08-12T23:11:06.972Z · LW(p) · GW(p)

It didn't feel very clear/coherent, but I'm tired so meh. I think it could've done with more lists, or something like that. Something like an outline or clear summation of his points.

comment by Vladimir_Nesov · 2010-08-12T23:05:44.356Z · LW(p) · GW(p)

This was a very good job of taking a number of your comments and turning them into a coherent post. It raised my estimation that Eliezer will be able to do something similar with turning his blog posts into a book.

The connection to Eliezer's ability to write a book is bizarre (to say so politely).

Replies from: Alicorn
comment by Alicorn · 2010-08-12T23:11:23.216Z · LW(p) · GW(p)

I think the idea is that if one was originally skeptical about the general feasibility of stitching together separate posts into a single book, this post offers an example of it being done on a smaller scale and ups the estimate of that feasibility.

Replies from: Vladimir_Nesov, thomblake
comment by Vladimir_Nesov · 2010-08-12T23:14:18.416Z · LW(p) · GW(p)

It's not in the nature of ideas to be blog posts. One generally can present ideas in a book form, depending on one's writing skills.

comment by thomblake · 2010-08-13T14:30:56.960Z · LW(p) · GW(p)

Right, exactly.

comment by Interpolate · 2010-08-14T04:17:41.952Z · LW(p) · GW(p)

test

comment by timtyler · 2010-08-12T20:30:50.385Z · LW(p) · GW(p)

Two key propositions seem to be:

  1. The world is at risk from a superintelligence-gone-wrong;

  2. The SIAI can help to do something about that.

Both propositions seem debatable. For the first point, certainly some scenarios are better than others - but the superintelligence causing widespread havoc by turning on its creators hypothesises substantial levels of incompetence, followed up by a complete failure of the surrounding advanced man-machine infrastructure to deal with the problem. Most humans may well have more to fear from a superintelligence-gone-right, but in dubious hands.

comment by NancyLebovitz · 2010-08-12T16:50:11.457Z · LW(p) · GW(p)

I'm pretty sure that a gray goo nanotech disaster is generally not considered plausible-- if nothing else, it would generate so much heat the nanotech would fail.

This doesn't address less dramatic nanotech disasters-- say, a uFAI engineering viruses to wipe out the human race so that it can build what it wants without the risk of interference.

Replies from: jimrandomh, timtyler, Eliezer_Yudkowsky
comment by jimrandomh · 2010-08-12T16:59:24.946Z · LW(p) · GW(p)

I'm pretty sure that a gray goo nanotech disaster is generally not considered plausible--if nothing else, it would generate so much heat the nanotech would fail.

This argument can't be valid, because it also implies that biological life can't work either. At best, this implies a limit on the growth rate; but without doing the math, there is no particular reason to think that limit is slow.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-12T17:05:25.192Z · LW(p) · GW(p)

Grey goo is assumed to be a really fast replicator that will eat anything. Arguably, it's a movie plot disaster.

Replies from: timtyler
comment by timtyler · 2010-08-13T08:15:33.709Z · LW(p) · GW(p)

From that thread, it seems that many people like to speculate on possible disasters.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-13T08:45:47.683Z · LW(p) · GW(p)

The point is that they know they're doing it for the fun of it rather than actually coming up with anything that needs to be prevented.

comment by timtyler · 2010-08-13T06:33:21.821Z · LW(p) · GW(p)

Eric Drexler decided it was implausible some time ago:

"Nanotech guru turns back on 'goo'"

However, some still flirt with the corresponding machine intelligence scenarios - though those don't seem much more likely to me.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T20:54:26.399Z · LW(p) · GW(p)

Google "global ecophagy".

Replies from: NancyLebovitz, Clippy
comment by NancyLebovitz · 2010-08-13T09:00:36.465Z · LW(p) · GW(p)

I've done so. What's your take on the odds of the biosphere being badly deteriorated?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T19:24:42.293Z · LW(p) · GW(p)

Robert Freitas seemed to be trying to argue that it would be difficult, and he couldn't argue it very well - for example, he used Eric Drexler's assumptions that were conservative for Nanosystems and anticonservative for grey goo, about a single radiation strike being enough to make a nanosystem fail, in calculating the amount of shielding required for aerovores that were mostly shielding (if I recall my reactions upon reading correctly). And despite that, the best he could come up with was "the heat bloom would be detected and stopped by our police systems", like they couldn't spread through the jet stream first and go into their final reproductive phase later, etcetera.

Unless Freitas is missing something that he seemed heavily motivated to find, I have to conclude that turning the biosphere into grey goop does not seem to be very difficult given what we currently know of the rules.

comment by Clippy · 2010-08-13T19:36:28.532Z · LW(p) · GW(p)

What's an easy way to cause a global ecophagy?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T15:57:30.632Z · LW(p) · GW(p)

Did you actually read through the MWI sequence before deciding that you still can't tell whether MWI is true because of (if I understand your post correctly) the state of the social evidence? If so, do you know what pluralistic ignorance is, and Asch's conformity experiment?

If you know all these things and you still can't tell that MWI is obviously true - a proposition far simpler than the argument for supporting SIAI - then we have here a question that is actually quite different from the one you seem to try to be presenting:

  • I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't.

Replies from: wedrifid, CarlShulman, CronoDAS, multifoliaterose, MartinB, JoshuaZ, None, red75, XiXiDu
comment by wedrifid · 2010-08-12T17:10:19.486Z · LW(p) · GW(p)

If you know all these things and you still can't tell that MWI is obviously true - a proposition far simpler than the argument for supporting SIAI - then we have here a question that is actually quite different from the one you seem to try to be presenting:

  • I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?

I respectfully disagree. I am someone who was convinced by your MWI explanations but even so I am not comfortable with outright associating reserved judgement with lack of g.

This is a subject that relies on an awful lot of crystalized knowledge about physics. For someone to come to a blog knowing only what they can recall of high school physics and be persuaded to accept a contrarian position on what is colloquially considered the most difficult part of science is a huge step.

The trickiest part is correctly accounting for meta-uncertainty. There are a lot of things that seem extremely obvious but turn out to be wrong. I would even suggest that the trustworthiness of someone's own thoughts is not always proportionate to g-factor. That leaves people with some situations where they need to trust social processes more than their own g. That may prompt them to go and explore the topic from various other sources until such time that they can trust that their confidence is not just naivety.

Replies from: Kaj_Sotala, jimrandomh
comment by Kaj_Sotala · 2010-08-12T20:28:04.453Z · LW(p) · GW(p)

On a subject like physics and MWI, I wouldn't take the explanation of any non-professional as enough to establish that a contrarian position is "obviously correct". Even if they genuinely believed in what they said, they'll still only be presenting the evidence from their own point of view. Or they might be missing something essential and I wouldn't have the expertise to realize that. Heck, I wouldn't even go on the word of a full-time researcher in the field before I'd heard what their opponents had to say.

On a subject matter like cryonics I was relatively convinced from simply hearing what the cryonics advocates had to say, because it meshed with my understanding of human anatomy and biology, and it seemed like nobody was very actively arguing the opposite. But to the best of my knowledge, people are arguing against MWI, and I simply wouldn't have enough domain knowledge to evaluate either sort of claim. You could argue your case of "this is obviously true" with completely made-up claims, and I'd have no way to tell.

Replies from: XiXiDu, komponisto
comment by XiXiDu · 2010-08-13T09:32:59.556Z · LW(p) · GW(p)

This is probably the best comment so far:

You could argue your case of "this is obviously true" with completely made-up claims, and I'd have no way to tell.

Sums it up pretty well. Thank you.

Replies from: CronoDAS
comment by CronoDAS · 2010-08-14T09:28:21.572Z · LW(p) · GW(p)

I've said that before, but apparently not quite so well.

comment by komponisto · 2010-08-13T00:28:49.166Z · LW(p) · GW(p)

But whose domain knowledge are we talking about in the first place? Eliezer argues that MWI is a question of probability theory rather than physics per se. In general, I don't see much evidence that physicists who argue against MWI actually have the kind of understanding of probability theory necessary to make their arguments worth anything. (Though of course it's worth emphasizing here that "MWI" in this context means only "the traditional collapse postulate can't be right" and not "the Schroedinger equation is a complete theory of physics".)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-08-13T08:49:47.040Z · LW(p) · GW(p)

In general, I don't see much evidence that physicists who argue against MWI actually have the kind of understanding of probability theory necessary to make their arguments worth anything.

Physicists have something else, however, and that is domain expertise. As far as I am concerned, MWI is completely at odds with the spirit of relativity. There is no model of the world-splitting process that is relativistically invariant. Either you reexpress MWI in a form where there is no splitting, just self-contained histories each of which is internally relativistic, or you have locally propagating splitting at every point of spacetime in every branch, in which case you don't have "worlds" any more, you just have infinitely many copies of infinitely many infinitesimal patches of space-time which are glued together in some complicated way. You can't even talk about extended objects in this picture, because the ends are spacelike separated and there's no inherent connection between the state at one end and the state at the other end. It's a complete muddle, even before we try to recover the Born probabilities.

Rather than seeing MWI as the simple and elegant way to understand QM, I see it as an idea which in a way turns out to be too simple - which is another way of saying, naive or uninformed. Like Bohmian mechanics, conceptually it relies on a preferred frame.

The combination of quantum mechanics with special relativity yields quantum field theory. In quantum field theory, everything empirically meaningful is conceptually relativistic. In your calculations, you may employ entities (like wavefunctions evolving in time) which are dependent on a particular reference frame, but you can always do such calculations in a different frame. An example of a calculational output which is frame-independent would be the correlation function between two field operators at different points in space-time. By the time we reach the point of making predictions, that correlation function should only depend on the (relativistically invariant) space-time separation. But in order to calculate it, we may adopt a particular division into space and time, write down wavefunctions defined to exist on the constant-time hypersurfaces in that reference frame, and evolve them according to a Hamiltonian. These wavefunctions are only defined with respect to a particular reference frame and a particular set of hypersurfaces. Therefore, they are somehow an artefact of a particular coordinate system. But they are the sorts of objects in terms of which MWI is constructed.
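(Editorial illustration, not part of the original comment: a standard free-field example of the kind of frame-independent output described above is the time-ordered two-point function of a free scalar field of mass m,

$$\langle 0 | T\,\phi(x)\,\phi(y) | 0 \rangle \;=\; \int \frac{d^4 p}{(2\pi)^4}\, \frac{i\, e^{-i p \cdot (x-y)}}{p^2 - m^2 + i\epsilon},$$

where both the integration measure and the integrand are Lorentz invariant, so the result depends only on the invariant separation between x and y, even though any intermediate calculation in terms of equal-time wavefunctions singles out a particular frame.)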

The truly relativistic approach to QFT is the path integral, the sum over all field histories interpolating between conditions on an initial and a final hypersurface. These histories are objects which are defined independently of any particular coordinate system, because they are histories and not just instantaneous spacelike states. But then we no longer have an evolving superposition, we just have a "superposition" of histories which do not "split" or "join".

At any time, theoretical physics contains many ideas and research programs, and there are always a lot of them that are going nowhere. MWI has all the signs of an idea going nowhere. It doesn't advance the field in any way. Instead, as with Bohmian mechanics, what happens is that specific quantum theories are proposed (field theories, string theories), and then the Everettians, the Bohmians, and so on wheel out their interpretive apparatus, which they then "apply" to the latest theoretical advance. It's a parasitic relationship and it's a sign that in the long run this is a dead end.

I will provide an example of an idea which is more like what I would look for in an explanation of quantum theory. The real problem with quantum theory is the peculiar way its probabilities are obtained. You have complex numbers and negative quasiprobabilities and histories that cancel each other. The cancellation of possibilities makes no sense from the perspective of orthodox probability. If an outcome can come about in one way, the existence of a second way can only increase the probability of the outcome - according to probability theory and common sense. Yet in the double-slit experiment we have outcomes that are reduced in probability through "destructive interference". That is what we need to explain.

There is a long history of speculation that maybe the peculiar structure of quantum probabilities can be obtained by somehow conditioning on the future as well as on the past, or by having causality working backwards as well as forwards in time. No-one has ever managed to derive QM this way, but many people have talked about it.

In string theory, there are light degrees of freedom, and heavy degrees of freedom. The latter correspond to the higher (more energetic) excitations of the string, though we should not expect that strings are fundamental in the full theory. In any case, these heavy excitations should cause space to be very strongly curved. So, what if the heavy degrees of freedom create a non-time-orientable topology on the Planck scale, giving rise to temporally bidirectional constraints on causality, and then the light strings interact (lightly) with that background, and quantum-probability effects are the indirect manifestation of that deeper causal structure, which has nonlocal correlations in space and time?

That's an idea I had during my string studies. It is not likely to be right, because it's just an idea. But it is an explanation which is intrinsically connected to the developing edge of theoretical physics, rather than a prefabricated explanation which is then applied in a one-size-fits-all fashion to any quantum theory. It would be an intrinsically string-theoretic derivation of QM. That is the sort of explanation for QM that I find plausible, for the reason that everything deep in physics is deeply connected to every other deep thing.

Replies from: army1987
comment by A1987dM (army1987) · 2012-08-05T09:24:19.076Z · LW(p) · GW(p)

Either you reexpress MWI in a form where there is no splitting, just self-contained histories each of which is internally relativistic

Huh? This is how I've always¹ understood MWI in a relativistic context...

  1. Just kidding. More like, since the first time I thought about the issue after graduating (and hence having an understanding of SR and QM devoid of the misconceptions found in certain popularizations).

Anyway, I'll have to read the works by 't Hooft when I have time. They look quite interesting.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-06T08:13:34.290Z · LW(p) · GW(p)

In 1204.4926 the idea is that a quantum oscillator is actually a discrete deterministic system that cycles through a finite number of states. Then in 1205.4107 he maps a cellular automaton onto a free field theory made out of coupled quantum oscillators. Then in 1207.3612 he adds boolean variables to his CA (previously the cells were integer-valued) in order to add fermionic fields. At this point his CA is looking a little like a superstring, which from a "worldsheet" perspective is a line with bosonic and fermionic quantum fields on it. But there are still many issues whose resolution needs to be worked out.

comment by jimrandomh · 2010-08-12T17:31:04.780Z · LW(p) · GW(p)

I wasn't convinced of MWI by the quantum mechanics sequence when I read it. I came to the conclusion that it's probably true later, after thinking intensively about the anthropic trilemma (my preferred resolution is incompatible with single-world interpretations); but my probability estimate is still only at 0.8.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-08-12T18:18:19.110Z · LW(p) · GW(p)

the anthropic trilemma (my preferred resolution is incompatible with single-world interpretations)

Ooh, tell us more!

Replies from: jimrandomh
comment by jimrandomh · 2010-08-12T18:34:55.294Z · LW(p) · GW(p)

I'll write more about this, but it will take some time. I posted the basic idea on the Anthropic Trilemma thread when I first had it, but the explanation is too pithy to follow easily, and I don't think many people saw it. Revisiting it now has brought to mind an intuition pump to use.

comment by CarlShulman · 2010-08-13T12:57:14.302Z · LW(p) · GW(p)

I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?

This is rude (although I realize there is now name-calling and gratuitous insult being mustered on both sides), and high g-factor does not make those MWI arguments automatically convincing. High g-factor combined with bullet-biting - a lack of what David Lewis called the argument of the incredulous stare - does seem to drive MWI pretty strongly. I happen to think that weighting the incredulous stare as an epistemic factor independent of its connections with evolution, knowledge in society, etc., is pretty mistaken, but bullet-dodgers often don't. Accusing someone of being low-g rather than a non-bullet-biter is the insulting possibility.

Just recently I encountered someone with very high IQ/SAT/GRE scores who bought partial quantitative parsimony/Speed Prior type views, and biases against the unseen. This person claimed that the power of parsimony was not enough to defeat the evidence for galaxies and quarks, but was sufficient to defeat a Big World much beyond our Hubble Bubble, and to favor Bohm's interpretation over MWI. I think that view isn't quite consistent without a lot of additional jury-rigging, but it isn't reliably prevented by high g and exposure to the arguments from theoretical simplicity, non-FTL, etc.

comment by CronoDAS · 2010-08-12T17:35:52.594Z · LW(p) · GW(p)

It seems to me that a sufficiently cunning arguer can come up with what appears to be a slam-dunk argument for just about anything. As far as I can tell, I follow the arguments in the MWI sequence perfectly, and the conclusion does pretty much follow from the premises. I just don't know if those premises are actually true. Is MWI what you get if you take the Schrodinger equation literally? (Never mind that the basic Schrodinger equation is non-relativistic; I know that there are relativistic formulations of QM.) I can't tell you, because I don't know the underlying math. And, indeed, the "Copenhagen interpretation" seems like patent nonsense, but what about all the others? I don't know enough to answer the question, and I'm not going to bother doing much more research because I just don't really care what the answer is.

Replies from: orthonormal, None
comment by orthonormal · 2010-08-12T18:05:36.596Z · LW(p) · GW(p)

Is MWI what you get if you take the Schrodinger equation literally?

Yes. This is agreed on even by those who don't subscribe to MWI.

Replies from: CronoDAS
comment by CronoDAS · 2010-08-14T09:26:48.093Z · LW(p) · GW(p)

Do you still get MWI if you start with the Dirac equation (which I understand to be the version of the Schrodinger equation that's consistent with special relativity) instead? Mitchell Porter commented that MWI has issues with special relativity, so I wonder...

Replies from: orthonormal
comment by orthonormal · 2010-08-15T02:11:26.895Z · LW(p) · GW(p)

I'm not an expert on relativistic QM (anyone who is, correct me if I misspeak), but I know enough to tell that Mitchell Porter is confused by what splitting means. In relativistic QM, the wavefunction evolves in a local manner in configuration space, as opposed to the Schrödinger equation's instantaneous (but exponentially small) assignment of mass to distant configurations. Decoherence happens in this picture just as it happened in the regular one.

The reason that Mitchell (and others) are confused about the EPR experiment is that, although the two entangled particles are separated in space, the configurations which will decohere are very close to one another in configuration space. Locality is therefore not violated by the decoherence.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-08-15T03:30:54.901Z · LW(p) · GW(p)

The moment you speak about configuration space as real, you have already adopted a preferred frame. That is the problem.

Replies from: orthonormal
comment by orthonormal · 2010-08-15T17:03:00.271Z · LW(p) · GW(p)

Sorry, but no. The Dirac equation is invariant under Lorentz transformations, so what's local in one inertial reference frame is local in any other as well.

Replies from: prase, Mitchell_Porter
comment by prase · 2010-08-16T15:08:49.915Z · LW(p) · GW(p)

The Dirac equation is invariant, but there are a lot of problems with the concept of locality. For example, if you want to create localised one-particle states that remain local in any reference frame and form an orthonormal eigenbasis of (the one-particle subspace of) the Hilbert space, you will find it impossible.

The canonical solution in axiomatic QFT is to begin with local operations instead of localised particles. However, to see the problem, one has to question the notion of measurement and define it in a covariant manner, which may be a mess. See e.g.

http://prd.aps.org/abstract/PRD/v66/i2/e023510

I am sympathetic to the approach used by quantum gravitists, which uses the extended Hamiltonian formalism and the Wheeler-DeWitt equation instead of the Schrödinger one. This approach doesn't treat time as special; however, the phase space isn't isomorphic to the state space on the classical level, and a similar thing holds on the quantum level for the Hilbert space, which makes the interpretation less obvious.

comment by Mitchell_Porter · 2010-08-16T04:37:38.775Z · LW(p) · GW(p)

You're missing my point. To make sense of the Dirac equation, you have to interpret it as a statement about field operators, so locality means (e.g.) that spacelike-separated operators commute. But that's just a statement about expectation values of observables. MWI is supposed to be a comprehensive ontological interpretation, i.e. a theory of what is actually there in reality.

You seem to be saying that configurations (field configurations, particle configurations, it makes no difference for this argument) are what is actually there. But a "configuration" is spatially extended. Therefore, it requires a universal time coordinate. Everett worlds are always defined with respect to a particular time-slicing - a particular set of spacelike hypersurfaces. From a relativistic perspective, it looks as arbitrary as any "objective collapse" theory.

comment by [deleted] · 2010-08-12T17:40:44.119Z · LW(p) · GW(p)

(Never mind that the basic Schrodinger equation is non-relativistic; I know that there are relativistic formulations of QM.)

None that accurately model the motions of planets, there aren't.

comment by multifoliaterose · 2010-08-12T16:17:48.778Z · LW(p) · GW(p)

It looks to me as though you've focused in on one of the weaker points in XiXiDu's post rather than engaging with the (logically independent) stronger points.

Replies from: Eliezer_Yudkowsky, Wei_Dai, markan
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T17:10:08.430Z · LW(p) · GW(p)

XiXiDu wants to know why he can trust SIAI instead of Charles Stross. Reading the MWI sequence is supposed to tell him that far more effectively than any cute little sentence I could write. The first thing I need to know is whether he read the sequence and something went wrong, or if he didn't read the sequence.

Replies from: rwallace, multifoliaterose
comment by rwallace · 2010-08-12T19:32:46.160Z · LW(p) · GW(p)

Well, you've picked just one of his points to answer, and I put it to you that it was clearly the weakest.

You are right of course that what does or doesn't show up in Charles Stross's writing doesn't constitute evidence in either direction -- he's a professional fiction author; he has to write for entertainment value regardless of what he may or may not know or believe about what's actually likely or unlikely to happen.

A better example would be e.g. Peter Norvig, whose credentials are vastly more impressive than yours (or, granted, than mine), and who thinks we need to get at least another couple of decades of progress under our belts before there will be any point in resuming attempts to work on AGI. (Even I'm not that pessimistic.)

If you want to argue from authority, the result of that isn't just tilted against the SIAI, it's flat out no contest.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T13:11:37.772Z · LW(p) · GW(p)

A better example would be e.g. Peter Norvig, whose credentials are vastly more impressive than yours (or, granted, than mine), and who thinks we need to get at least another couple of decades of progress under our belts before there will be any point in resuming attempts to work on AGI. (Even I'm not that pessimistic.)

If this means "until the theory and practice of machine learning is better developed, if you try to build an AGI using existing tools you will very probably fail" it's not unusually pessimistic at all. "An investment of $X in developing AI theory will do more to reduce the mean time to AI than $X on AGI projects using existing theory now" isn't so outlandish either. What was the context/cite?

Replies from: rwallace
comment by rwallace · 2010-08-13T15:00:10.378Z · LW(p) · GW(p)

I don't have the reference handy, but he wasn't saying let's spend 20 years of armchair thought developing AGI theory before we start writing any code (I'm sure he knows better than that), he was saying forget about AGI completely until we've got another 20 years of general technological progress under our belts.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-13T15:02:32.887Z · LW(p) · GW(p)

Not general technological progress surely, but the theory and tools developed by working on particular machine learning problems and methodologies?

Replies from: rwallace
comment by rwallace · 2010-08-13T15:34:47.387Z · LW(p) · GW(p)

Those would seem likely to be helpful indeed. Better programming tools might also help, as would additional computing power (not so much because computing power is actually a limiting factor today, as because we tend to scale our intuition about available computing power to what we physically deal with on an everyday basis -- which for most of us, is a cheap desktop PC -- and we tend to flinch away from designs whose projected requirements would exceed such a cheap PC; increasing the baseline makes us less likely to flinch away from good designs).

comment by multifoliaterose · 2010-08-12T17:30:20.883Z · LW(p) · GW(p)

Here too, it looks like you're focusing on a weak aspect of his post rather than engaging him. Nobody who's smart and has read your writing carefully doubts that you're uncommonly brilliant and that this gives you more credibility than the other singularitarians. But there are more substantive aspects of XiXiDu's post which you're not addressing.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T17:39:56.694Z · LW(p) · GW(p)

Like what? Why he should believe in exponential growth? When by "exponential" he actually means "fast" and no one at SIAI actually advocates for exponentials, those being a strictly Kurzweilian obsession and not even very dangerous by our standards? When he picks MWI, of all things, to accuse us of overconfidence (not "I didn't understand that" but "I know something you don't about how to integrate the evidence on MWI, clearly you folks are overconfident")? When there's lots of little things scattered through the post like that ("I'm engaging in pluralistic ignorance based on Charles Stross's nonreaction") it doesn't make me want to plunge into engaging the many different little "substantive" parts, get back more replies along the same line, and recapitulate half of Less Wrong in the process. The first thing I need to know is whether XiXiDu did the reading and the reading failed, or did he not do the reading? If he didn't do the reading, then my answer is simply, "If you haven't done enough reading to notice that Stross isn't in our league, then of course you don't trust SIAI". That looks to me like the real issue. For substantive arguments, pick a single point and point out where the existing argument fails on it - don't throw a huge handful of small "huh?"s at me.

Replies from: None, multifoliaterose, XiXiDu
comment by [deleted] · 2010-08-12T17:58:41.069Z · LW(p) · GW(p)

Like what?

Castles in the air. Your claims are based on long chains of reasoning that you do not write down in a formal style. Is the probability of correctness of each link in that chain of reasoning so close to 1 that their product is also close to 1?

I can think of a couple of ways you could respond:

  1. Yes, you are that confident in your reasoning. In that case you could explain why XiXiDu should be similarly confident, or why it's not of interest to you whether he is similarly confident.

  2. It's not a chain of reasoning, it's a web of reasoning, and robust against certain arguments being off. If that's the case, then we lay readers might benefit if you would make more specific and relevant references to your writings depending on context, instead of encouraging people to read the whole thing before bringing criticisms.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T18:06:20.155Z · LW(p) · GW(p)

Most of the long arguments are concerned with refuting fallacies and defeating counterarguments, which flawed reasoning will always be able to supply in infinite quantity. The key predictions, when you look at them, generally turn out to be antipredictions, and the long arguments just defeat the flawed priors that concentrate probability into anthropomorphic areas. The positive arguments are simple, only defeating complicated counterarguments is complicated.

"Fast AI" is simply "Most possible artificial minds are unlikely to run at human speed, the slow ones that never speed up will drop out of consideration, and the fast ones are what we're worried about."

"UnFriendly AI" is simply "Most possible artificial minds are unFriendly, most intuitive methods you can think of for constructing one run into flaws in your intuitions and fail."

MWI is simply "Schrodinger's equation is the simplest fit to the evidence"; there are people who think that you should do something with this equation other than taking it at face value, like arguing that gravity can't be real and so needs to be interpreted differently, and the long arguments are just there to defeat them.

The only argument I can think of that actually approaches complication is about recursive self-improvement, and even there you can say "we've got a complex web of recursive effects and they're unlikely to turn out exactly exponential with a human-sized exponent", the long arguments being devoted mainly to defeating the likes of Robin Hanson's argument for why it should be exponential with an exponent that smoothly couples to the global economy.

Replies from: Unknowns, JamesAndrix, rwallace, None, None
comment by Unknowns · 2010-08-12T18:26:45.311Z · LW(p) · GW(p)

One problem I have with your argument here is that you appear to be saying that if XiXiDu doesn't agree with you, he must be stupid (the stuff about low g etc.). Do you think Robin Hanson is stupid too, since he wasn't convinced?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T20:49:30.176Z · LW(p) · GW(p)

If he wasn't convinced about MWI it would start to become a serious possibility.

Replies from: Unknowns, Perplexed, prase
comment by Unknowns · 2010-08-13T01:54:21.914Z · LW(p) · GW(p)

I haven't found the text during a two minute search or so, but I think I remember Robin assigning a substantial probability, say, 30% or so, to the possibility that MWI is false, even if he thinks most likely (i.e. the remaining 70%) that it's true.

Much as you argued in the post about Einstein's arrogance, there seems to be a small enough difference between a 30% chance of being false and a 90% chance of being false that, if the latter would imply that Robin was stupid, the former would imply it too.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T02:18:49.362Z · LW(p) · GW(p)

I suspect that Robin would not actually act as if he held those odds with a gun to his head, and that he is being conveniently modest.

Replies from: Unknowns
comment by Unknowns · 2010-08-13T02:47:41.800Z · LW(p) · GW(p)

Right: in fact he would act as though MWI is certainly false... or at least as though Quantum Immortality is certainly false, which has a good chance of being true given MWI.

Replies from: wedrifid
comment by wedrifid · 2010-08-13T06:06:45.732Z · LW(p) · GW(p)

Quantum Immortality is certainly false, which has a good chance of being true given MWI.

No! He will act as if Quantum Immortality is a bad choice, which is true even if QI works exactly as described. 'True' isn't the right kind of word to use unless you include a normative conclusion in the description of QI.

Replies from: Unknowns
comment by Unknowns · 2010-08-13T06:18:48.351Z · LW(p) · GW(p)

Consider the Least Convenient Possible World...

Suppose that being shot with the gun cannot possibly have intermediate results: either the gun fails, or he is killed instantly and painlessly.

Also suppose that given that there are possible worlds where he exists, each copy of him only cares about its anticipated experiences, not about the other copies, and that this is morally the right thing to do... in other words, if he expects to continue to exist, he doesn't care about other copies that cease to exist. This is certainly the attitude some people would have, and we could suppose (for the LCPW) that it is the correct attitude.

Even so, given these two suppositions, I suspect it would not affect his behavior in the slightest, showing that he would be acting as though QI is certainly false, and therefore as though there is a good chance that MWI is false.

Replies from: wedrifid
comment by wedrifid · 2010-08-13T06:37:30.998Z · LW(p) · GW(p)

each copy of him only cares about its anticipated experiences, not about the other copies, and that this is morally the right thing to do... in other words, if he expects to continue to exist, he doesn't care about other copies that cease to exist.

But that is crazy and false, and uses 'copies' in a misleading way. Why would I assume that?

Even so, given these two suppositions, I suspect it would not affect his behavior in the slightest, showing that he would be acting as though QI is certainly false,

This 'least convenient possible world' is one in which Robin's values are changed according to your prescription but his behaviour is not, ensuring that your conclusion is true. That isn't the purpose of inconvenient worlds (kind of the opposite...)

and therefore as though there is a good chance that MWI is false.

Not at all. You are conflating "MWI is false" with a whole different set of propositions. MWI != QS.

Replies from: Unknowns
comment by Unknowns · 2010-08-13T06:40:23.585Z · LW(p) · GW(p)

Many people in fact have those values and opinions, and nonetheless act in the way I mention (and there is no one who does not so act), so it is quite reasonable to suppose that even if Robin's values were so changed, his behavior would remain unchanged.

Replies from: wedrifid
comment by wedrifid · 2010-08-13T07:18:11.791Z · LW(p) · GW(p)

The very reason Robin was brought up (by you, I might add) was to serve as a reductio ad absurdum with respect to intellectual disrespect.

One problem I have with your argument here is that you appear to be saying that if XiXiDu doesn't agree with you, he must be stupid (the stuff about low g etc.). Do you think Robin Hanson is stupid too, since he wasn't convinced?

In the Convenient World where Robin is, in fact, too stupid to correctly tackle the concept of QS, understand the difference between MWI and QI, or form a sophisticated understanding of his moral intuitions with respect to quantum uncertainty, this Counterfactual-Stupid-Robin is a completely useless example.

comment by Perplexed · 2010-08-16T16:36:57.225Z · LW(p) · GW(p)

If he wasn't convinced about MWI ...

I can imagine two different meanings for "not convinced about MWI"

  1. It refers to someone who is not convinced that MWI is as good as any other model of reality, and better than most.

  2. It refers to someone who is not convinced that MWI describes the structure of reality.

If we are meant to understand the meaning as #1, then it may well indicate that someone is stupid. Though, more charitably, it might more likely indicate that he is ignorant.

If we are meant to understand the meaning as #2, then I think that it indicates someone who is not entrapped by the Mind Projection Fallacy.

comment by prase · 2010-08-16T14:27:46.874Z · LW(p) · GW(p)

What do you mean by belief in MWI? What sort of experiment could settle whether MWI is true or not?

I suspect that a lot of people object to the stuff tacitly built on MWI - copies of humans, other worlds we should care about, hypotheses about consciousness - rather than to MWI itself.

Replies from: timtyler, cousin_it
comment by timtyler · 2010-08-16T16:19:33.255Z · LW(p) · GW(p)

What sort of experiment could settle whether MWI is true or not?

From THE EVERETT FAQ:

"Is many-worlds (just) an interpretation?"

"What unique predictions does many-worlds make?"

"Could we detect other Everett-worlds?"

Replies from: prase, jimrandomh
comment by prase · 2010-08-16T17:02:52.730Z · LW(p) · GW(p)

I'm not (yet) convinced.

First, the links say that MWI needs a linear quantum theory, and therefore list linearity among its predictions. However, linearity is part of quantum theory and its mathematical formalism, and nothing specific to MWI. Also, weak non-linearity would be explicable in the language of MWI by saying that the different worlds interact a little. I don't see how testing the superposition principle establishes MWI. Very weak evidence, at best.

Second, there is a very confused paragraph about quantum gravity, which, apart from linking to itself, states only that MWI requires gravity to be quantised (without supporting argument) and therefore if gravity is successfully quantised, it forms evidence for MWI. However, nobody doubts that gravity has to be quantised somehow, even hardcore Copenhageners.

The most interesting part is the one about the reversible measurement done by an artificial intelligence. As I understand it, it supposes that we construct a machine which can perform measurements in the reversed direction of time, for which it has to be immune to quantum decoherence. It sounds interesting, but it is also suspicious. I see no way we can get the information into our brains without decoherence. The argument apparently tries to circumvent this objection by postulating an AI which is reversible and decoherence-immune, but the AI will still face the same problem when trying to tell us the results. In fact, postulating the need for an AI here seems to be only a tool to make the proposed experiment more obscure and difficult to analyse. We will have a "reversible AI", and therefore, miraculously, we will detect differences between Copenhagen and MWI.

However, at least there is a link to Deutsch's article which hopefully explains the experiment in greater detail, so I will read it and edit the comment later.

Replies from: timtyler
comment by timtyler · 2010-08-16T17:59:31.443Z · LW(p) · GW(p)

"Many-worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many-worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett) or by those who propose that all the other, non-MW interpretations, are inconsistent, illogical or unscientific in their handling of measurements"

comment by jimrandomh · 2010-08-16T16:49:26.301Z · LW(p) · GW(p)

None of the tests in that FAQ look to me like they could distinguish MWI from MWI+worldeater. The closest thing to an experimental test I've come up with is the following:

Flip a quantum coin. If heads, copy yourself once, advance both copies enough to observe the result, then kill one of the copies. If tails, do nothing.

In a many-worlds interpretation of QM, from the perspective of the experimenter, the coin will be heads with probability 2/3, since there are two observers in that case and only one if the coin was tails. In the single-world case, the coin will be heads with probability 1/2. So each time you repeat the experiment, you get 0.4 bits of evidence for or against MWI. Unfortunately, this evidence is also non-transferrable; someone else can't use your observation as evidence the same way you can. And getting enough evidence for a firm conclusion involves a very high chance of subjective death (though it is guaranteed that exactly one copy will be left behind). And various quantum immortality hypotheses screw up the experiment, too.

So it is testable in principle, but the experiment involved is more odious than one would imagine possible.
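A quick check of the arithmetic above (a sketch under the comment's own assumptions: heads is anticipated with probability 2/3 under many-worlds and 1/2 under a single world):

$$\text{heads: } \log_2\!\frac{2/3}{1/2} = \log_2\!\frac{4}{3} \approx 0.42 \text{ bits toward MWI}, \qquad \text{tails: } \log_2\!\frac{1/3}{1/2} = \log_2\!\frac{2}{3} \approx -0.58 \text{ bits},$$

so a heads result is indeed worth roughly the 0.4 bits quoted, while a tails result weighs somewhat more heavily against.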

comment by cousin_it · 2010-08-16T14:33:59.308Z · LW(p) · GW(p)

The math works the same in all interpretations, but some experiments are difficult to understand intuitively without the MWI. I usually give people the example of the Elitzur-Vaidman bomb tester where the easy MWI explanation says "we know the bomb works because it exploded in another world", but other interpretations must resort to clever intellectual gymnastics.
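For readers unfamiliar with the setup, here is a sketch of the standard Mach-Zehnder version of the bomb test (an editorial summary, not part of the original comment): a single photon enters an interferometer with the bomb's trigger placed in one arm. If the bomb is a dud, the two arms interfere and the photon always exits at the "bright" detector. If the bomb is live, each run gives

$$P(\text{explosion}) = \tfrac{1}{2}, \qquad P(\text{bright detector, inconclusive}) = \tfrac{1}{4}, \qquad P(\text{dark detector}) = \tfrac{1}{4},$$

and a click at the dark detector certifies a working bomb even though, in the branch we observe, the photon never took the bomb's arm.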

Replies from: prase
comment by prase · 2010-08-16T14:45:40.973Z · LW(p) · GW(p)

If all interpretations are equivalent with respect to testable outcomes, what makes the belief in any particular interpretation so important? Ease of intuitive understanding is a dangerous criterion to rely on, and a relative thing too. Some people are more ready to accept mental gymnastics than the existence of other worlds.

Replies from: cousin_it
comment by cousin_it · 2010-08-16T14:50:38.551Z · LW(p) · GW(p)

Well, that depends. Have you actually tried to do the mental gymnastics and explain the linked experiment using the Copenhagen interpretation? I suspect that going through with that may influence your final opinion.

Replies from: Vladimir_M, prase
comment by Vladimir_M · 2010-08-16T20:15:03.285Z · LW(p) · GW(p)

cousin_it:

Have you actually tried to do the mental gymnastics and explain the linked experiment [the Elitzur-Vaidman bomb tester] using the Copenhagen interpretation?

Maybe I'm missing something, but how exactly does this experiment challenge the Copenhagen interpretation more than the standard double-slit stuff? Copenhagen treats "measurement" as a fundamental and irreducible process and measurement devices as special components in each experiment -- and in this case it simply says that a dud bomb doesn't represent a measurement device, whereas a functioning one does, so that they interact with the photon wavefunction differently. The former leaves it unchanged, while the latter collapses it to one arm of the interferometer -- either its own, in which case it explodes, or the other one, in which case it reveals itself as a measurement device just by the act of collapsing.

As far as I understand, this would be similar to the standard variations on the double-slit experiment where one destroys the interference pattern by placing a particle detector at the exit from one of the holes. One could presumably do a similar experiment with a detector that might be faulty, and conclude that an interference-destroying detector works even if it doesn't flash when several particles are let through (in cases where they all happen to go through the other hole). Unless I'm misunderstanding something, this would be a close equivalent of the bomb test.

The final conclusion in the bomb test is surely more spectacular, but I don't see how it produces any extra confusion for Copenhageners compared to the most basic QM experiments.

comment by prase · 2010-08-16T15:14:50.941Z · LW(p) · GW(p)

Frankly, I don't know what you consider an explanation here. I am quite comfortable with the prediction which the theory gives, and accept that as an explanation. So I never needed mental gymnastics here. The experiment is weird, but saying that the information about the bomb's functionality came from its explosion in the other world doesn't make it seem any less weird to me.

Replies from: cousin_it
comment by cousin_it · 2010-08-16T15:22:54.018Z · LW(p) · GW(p)

Fair enough.

comment by JamesAndrix · 2010-08-13T02:09:00.719Z · LW(p) · GW(p)

This should be revamped into a document introducing the sequences.

comment by rwallace · 2010-08-12T20:04:30.960Z · LW(p) · GW(p)

Your claims are only anti-predictions relative to science-fiction notions of robots as metal men.

Most possible artificial minds are neither Friendly nor unFriendly (unless you adopt such a stringent definition of mind that artificial minds are not going to exist in my lifetime or yours).

Fast AI (along with most of the other wild claims about what future technology will do, really) falls afoul of the general version of Amdahl's law. (On which topic, did you ever update your world model when you found out you were mistaken about the role of computers in chip design?)

About MWI, I agree with you completely, though I am more hesitant to berate early quantum physicists for not having found it obvious. For a possible analogy: what do you think of my resolution of the Anthropic Trilemma?

comment by [deleted] · 2010-08-12T18:15:12.506Z · LW(p) · GW(p)

This is quite helpful, and suggests that what I wanted is not a lay-reader summary, but an executive summary.

I brought this up elsewhere in this thread, but the fact that quantum mechanics and gravity are not reconciled suggests that even Schrodinger's equation does not fit the evidence. The "low-energy" disclaimer one has to add is very weird, maybe weirder than any counterintuitive consequences of quantum mechanics.

Replies from: orthonormal
comment by orthonormal · 2010-08-12T18:20:46.223Z · LW(p) · GW(p)

I brought this up elsewhere in this thread, but the fact that quantum mechanics and gravity are not reconciled suggests that even Schrodinger's equation does not fit the evidence. The "low-energy" disclaimer one has to add is very weird, maybe weirder than any counterintuitive consequences of quantum mechanics.

It's not the Schrödinger equation alone that gives rise to decoherence and thus many-worlds. (Read Good and Real for another toy model, the "quantish" system.) The EPR experiment and Bell's inequality can be made to work on macroscopic scales, so we know that whatever mathematical object the universe will turn out to be, it's not going to go un-quantum on us again: it has the same relevant behavior as the Schrödinger equation, and accordingly MWI will be the best interpretation there as well.

comment by [deleted] · 2010-08-12T23:15:03.649Z · LW(p) · GW(p)

Speaking of executive summaries, will you offer one for your metaethics?

Replies from: Eliezer_Yudkowsky, Psy-Kosh
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T02:22:56.548Z · LW(p) · GW(p)

"There is no intangible stuff of goodness that you can divorce from life and love and happiness in order to ask why things like that are good. They are simply what you are talking about in the first place when you talk about goodness."

And then the long arguments are about why your brain makes you think anything different.

Replies from: None
comment by [deleted] · 2010-08-13T04:51:06.495Z · LW(p) · GW(p)

This is less startling than your more scientific pronouncements. Are there any atheists reading this that find this (or at first found this) very counterintuitive or objectionable?

I would go further, and had the impression from somewhere that you did not go that far. Is that accurate?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T05:21:41.547Z · LW(p) · GW(p)

I'm a cognitivist. Sentences about goodness have truth values after you translate them into being about life and happiness etc. As a general strategy, I make the queerness go away, rather than taking the queerness as a property of a thing and using it to deduce that thing does not exist; it's a confusion to resolve, not an existence to argue over.

Replies from: None
comment by [deleted] · 2010-08-13T05:25:03.683Z · LW(p) · GW(p)

To be clear, if sentence X about goodness is translated into sentence Y about life and happiness etc., does sentence Y contain the word "good"?

Edit: What's left of religion after you make the queerness go away? Why does there seem to be more left of morality?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-13T06:54:16.904Z · LW(p) · GW(p)

No, nothing, and because while religion does contain some confusion, after you eliminate the confusion you are left with claims that are coherent but false.

comment by Psy-Kosh · 2010-08-13T00:34:35.234Z · LW(p) · GW(p)

I can do that:

Morality is a specific set of values (Or, more precisely, a specific algorithm/dynamic for judging values). Humans happen to be (for various reasons) the sort of beings that value morality as opposed to valuing, say, maximizing paperclip production. It is indeed objectively better (by which we really mean "more moral"/"the sort of thing we should do") to be moral than to be paperclipish. And indeed we should be moral, where by "should" we mean, "more moral".

(And moral, when we actually cash out what we actually mean by it, seems to translate to a complicated blob of values like happiness, love, creativity, novelty, self-determination, fairness, life (as in protecting thereof), etc...)

It may appear that paperclip beings and moral beings disagree about something, but not really. The paperclippers, once they've analyzed what humans actually mean by "moral", would agree: "yep, humans are more moral than us. But who cares about this morality stuff, it doesn't maximize paperclips!"

Of course, screw the desires of the paperclippers, after all, they're not actually moral. We really are objectively better (once we think carefully by what we mean by "better") than them.

(note, "does something or does something not actually do a good job of fulfilling a certain value?" is an objective question. ie, "does a particular action tend to increase the expected number of paperclips?" (on the paperclipper side) or, on our side, stuff like "does a particular action tend to save more lives, increase happiness, increase fairness, add novelty..." etc etc etc is an objective question in that we can extract specific meaning from that question and can objectively (in a way the paperclippers would agree with) judge that. It simply happens to be that we're the sorts of beings that actually care about the answer to that (as we should be), while the screwy hypothetical paperclippers are immoral and only care about paperclips.

How's that, that make sense? Or, to summarize the summary, "Morality is objective, and we humans happen to be the sorts of beings that value morality, as opposed to valuing something else instead"

Replies from: Wei_Dai, None, Vladimir_M
comment by Wei Dai (Wei_Dai) · 2010-08-13T04:11:12.626Z · LW(p) · GW(p)

Is morality actually:

  1. a specific algorithm/dynamic for judging values, or
  2. a complicated blob of values like happiness, love, creativity, novelty, self determination, fairness, life (as in protecting theirof), etc.?

If it's 1, can we say something interesting and non-trivial about the algorithm, besides the fact that it's an algorithm? In other words, everything can be viewed as an algorithm, but what's the point of viewing morality as an algorithm?

If it's 2, why do we think that two people on opposite sides of the Earth are referring to the same complicated blob of values when they say "morality"? I know the argument about the psychological unity of humankind (not enough time for significant genetic divergence), but what about cultural/memetic evolution?

I'm guessing the answer to my first question is something like, morality is an algorithm whose current "state" is a complicated blob of values like happiness, love, ... so both of my other questions ought to apply.

Replies from: Vladimir_M, None, Sniffnoy
comment by Vladimir_M · 2010-08-13T05:51:16.800Z · LW(p) · GW(p)

Wei_Dai:

If it's 2, why do we think that two people on opposite sides of the Earth are referring to the same complicated blob of values when they say "morality"? I know the argument about the psychological unity of humankind (not enough time for significant genetic divergence), but what about cultural/memetic evolution?

You don't even have to do any cross-cultural comparisons to make such an argument. Considering the insights from modern behavioral genetics, individual differences within any single culture will suffice.

comment by [deleted] · 2010-08-13T05:07:00.062Z · LW(p) · GW(p)

but what about cultural/memetic evolution?

There is no reason to be at all tentative about this. There's tons of cog sci data about what people mean when they talk about morality. It varies hugely (but predictably) across cultures.

comment by Sniffnoy · 2010-08-13T05:18:11.061Z · LW(p) · GW(p)

Why are you using algorithm/dynamic here instead of function or partial function? (On what space, I will ignore that issue, just as you have...) Is it supposed to be stateful? I'm not even clear what that would mean. Or is function what you mean by #2? I'm not even really clear on how these differ.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-13T06:31:23.872Z · LW(p) · GW(p)

You might have gotten confused because I quoted Psy-Kosh's phrase "specific algorithm/dynamic for judging values" whereas Eliezer's original idea I think was more like an algorithm for changing one's values in response to moral arguments. Here are Eliezer's own words:

I would say, by the way, that the huge blob of a computation is not just my present terminal values (which I don't really have - I am not a consistent expected utility maximizer); the huge blob of a computation includes the specification of those moral arguments, those justifications, that would sway me if I heard them.

Replies from: Unknowns
comment by Unknowns · 2010-08-13T06:43:09.312Z · LW(p) · GW(p)

Others have pointed out that this definition is actually quite unlikely to be coherent: people would be likely to be ultimately persuaded by different moral arguments and justifications if they had different experiences and heard arguments in different orders etc.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-13T07:02:33.382Z · LW(p) · GW(p)

Others have pointed out that this definition is actually quite unlikely to be coherent

Yes, see here for an argument to that effect by Marcello and subsequent discussion about it between Eliezer and myself.

I think the metaethics sequence is probably the weakest of Eliezer's sequences on LW. I wonder if he agrees with that, and if so, what he plans to do about this subject for his rationality book.

Replies from: wedrifid, cousin_it, None
comment by wedrifid · 2010-08-13T07:24:05.145Z · LW(p) · GW(p)

I think the metaethics sequence is probably the weakest of Eliezer's sequences on LW. I wonder if he agrees with that, and if so, what he plans to do about this subject for his rationality book.

This is somewhat of a concern given Eliezer's interest in Friendliness!

comment by cousin_it · 2010-08-13T08:23:16.718Z · LW(p) · GW(p)

As far as I can understand, Eliezer has promoted two separate ideas about ethics: defining personal morality as a computation in the person's brain rather than something mysterious and external, and extrapolating that computation into smarter creatures. The former idea is self-evident, but the latter (and, by extension, CEV) has received a number of very serious blows recently. IMO it's time to go back to the drawing board. We must find some attack on the problem of preference, latch onto some small corner, that will allow us to make precise statements. Then build from there.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-13T08:38:38.382Z · LW(p) · GW(p)

defining personal morality as a computation in the person's brain rather than something mysterious and external

But I don't see how that, by itself, is a significant advance. Suppose I tell you, "mathematics is a computation in a person's brain rather than something mysterious and external", or "philosophy is a computation in a person's brain rather than something mysterious and external", or "decision making is a computation in a person's brain rather than something mysterious and external" how much have I actually told you about the nature of math, or philosophy, or decision making?

comment by [deleted] · 2010-08-13T07:07:36.471Z · LW(p) · GW(p)

The linked discussion is very nice.

comment by [deleted] · 2010-08-13T05:03:14.703Z · LW(p) · GW(p)

This is currently at +1. Is that from Yudkowsky?

(Edit: +2 after I vote it up.)

This makes sense in that it is coherent, but it is not obvious to me what arguments would be marshaled in its favor. (Yudkowsky's short formulations do point in the direction of their justifications.) Moreover, the very first line, "morality is a specific set of values," and even its parenthetical expansion (algorithm for judging values), seems utterly preposterous to me. The controversies between human beings about which specific sets of values are moral, at every scale large and small, are legendary beyond cliche.

Replies from: ata
comment by ata · 2010-08-13T05:17:03.134Z · LW(p) · GW(p)

The controversies between human beings about which specific sets of values are moral, at every scale large and small, are legendary beyond cliche.

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning. In other words, human brains have a common moral architecture, and disagreements are at the level of instrumental, rather than terminal, values and result from mistaken factual beliefs and reasoning errors.

You may or may not find that convincing (you'll get to the arguments regarding that if you're reading the sequences), but assuming that is true, then "morality is a specific set of values" is correct, though vague: more precisely, it is a very complicated set of terminal values, which, in this world, happens to be embedded solely in a species of minds who are not naturally very good at rationality, leading to massive disagreement about instrumental values (though most people do not notice that it's about instrumental values).

Replies from: wedrifid, None, Nisan, Vladimir_Nesov
comment by wedrifid · 2010-08-13T06:42:54.876Z · LW(p) · GW(p)

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning. In other words, human brains have a common moral architecture, and disagreements are at the level of instrumental, rather than terminal, values and result from mistaken factual beliefs and reasoning errors.

It is? That's a worry. Consider this a +1 for "That thesis is totally false and only serves signalling purposes!"

Replies from: ata
comment by ata · 2010-08-13T11:42:32.957Z · LW(p) · GW(p)

It is?

I... think it is. Maybe I've gotten something terribly wrong, but I got the impression that this is one of the points of the complexity of value and metaethics sequences, and I seem to recall that it's the basis for expecting humanity's extrapolated volition to actually cohere.

Replies from: wedrifid
comment by wedrifid · 2010-08-13T14:07:37.246Z · LW(p) · GW(p)

I seem to recall that it's the basis for expecting humanity's extrapolated volition to actually cohere.

This whole area isn't covered all that well (as Wei noted). I assumed that CEV would rely on solving an implicit cooperation problem between conflicting moral systems. It doesn't appear at all unlikely to me that some people are intrinsically selfish to some degree and their extrapolated volitions would be quite different.

Note that I'm not denying that some people present (or usually just assume) the thesis you present. I'm just glad that there are usually others who argue against it!

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-08-14T08:15:40.129Z · LW(p) · GW(p)

solving an implicit cooperation problem

That's exactly what I took CEV to entail.

comment by [deleted] · 2010-08-13T06:28:05.296Z · LW(p) · GW(p)

Now this is a startling claim.

(you'll get to the arguments regarding that if you're reading the sequences)

Be more specific!

comment by Nisan · 2010-08-20T04:05:21.117Z · LW(p) · GW(p)

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.

Maybe it's true if you also specify "if they were fully capable of modifying their own moral intuitions." I have an intuition (an unexamined belief? a hope? a sci-fi trope?) that humanity as a whole will continue to evolve morally and roughly converge on a morality that resembles current first-world liberal values more than, say, Old Testament values. That is, it would converge, in the limit of global prosperity and peace and dialogue, and assuming no singularity occurs and the average lifespan stays constant. You can call this naive if you want to; I don't know whether it's true. It's what I imagine Eliezer means when he talks about "humanity growing up together".

This growing-up process currently involves raising children, which can be viewed as a crude way of rewriting your personality from scratch, and excising vestiges of values you no longer endorse. It's been an integral part of every culture's moral evolution, and something like it needs to be part of CEV if it's going to actually converge.

comment by Vladimir_Nesov · 2010-08-13T19:59:22.079Z · LW(p) · GW(p)

It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.

That's not plausible. That would be some sort of objective morality, and there is no such thing. Humans have brains, and brains are complicated. You can't have them imply exactly the same preference.

Now, the non-crazy version of what you suggest is that preferences of most people are roughly similar, that they won't differ substantially in major aspects. But when you focus on detail, everyone is bound to want their own thing.

comment by Vladimir_M · 2010-08-13T02:37:44.292Z · LW(p) · GW(p)

Psy-Kosh:

How's that, that make sense?

It makes sense in its own terms, but it leaves the unpleasant implication that morality differs greatly between humans, at both individual and group level -- and if this leads to a conflict, asking who is right is meaningless (except insofar as everyone can reach an answer that's valid only for himself, in terms of his own morality).

So if I live in the same society with people whose morality differs from mine, and the good-fences-make-good-neighbors solution is not an option, as it often isn't, then who gets to decide whose morality gets imposed on the other side? As far as I see, the position espoused in the above comment leaves no other answer than "might is right." (Where "might" also includes more subtle ways of exercising power than sheer physical coercion, of course.)

Replies from: Furcas, Psy-Kosh
comment by Furcas · 2010-08-13T03:55:36.497Z · LW(p) · GW(p)

...and if this leads to a conflict, asking who is right is meaningless (except insofar as everyone can reach an answer that's valid only for himself, in terms of his own morality).

So if I live in the same society with people whose morality differs from mine, and the good-fences-make-good-neighbors solution is not an option, as it often isn't, then who gets to decide whose morality gets imposed on the other side?

That two people mean different things by the same word doesn't make all questions asked using that word meaningless, or even hard to answer.

If by "castle" you mean "a fortified structure", while I mean "a fortified structure surrounded by a moat", who will be right if we're asked if the Chateau de Gisors is a castle? Any confusion here is purely semantic in nature. If you answer yes and I answer no, we won't have given two answers to the same question, we'll have given two answers to two different questions. If Psy-Kosh says that the Chateau de Gisors is a fortified structure but it is not surrounded by a moat, he'll have answered both our questions.

Now, once this has been clarified, what would it mean to ask who gets to decide whose definition of 'castle' gets imposed on the other side? Do we need a kind of meta-definition of castle to somehow figure out what the one true definition is? If I could settle this issue by exercising power over you, would it change the fact that the Chateau de Gisors is not surrounded by a moat? If I killed everyone who doesn't mean the same thing by the word 'castle' than I do, would the sentence "a fortified structure" become logically equivalent to the sentence "a fortified structure surrounded by a moat"?

In short, substituting the meaning of a word for the word tends to make lots of seemingly difficult problems become laughably easy to solve. Try it.
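A minimal sketch of that substitution trick in Python (the names and the dictionary of facts are made up for illustration; the facts themselves are just the ones stated above about the Chateau de Gisors):

```python
# A minimal sketch of "substitute the meaning of the word for the word".
# The two speakers are not disagreeing about the Chateau de Gisors; they are
# answering two different questions that happen to share the label "castle".

chateau_de_gisors = {"fortified": True, "has_moat": False}  # facts as stated in the comment

def castle_by_your_definition(building):
    """'Castle' = a fortified structure."""
    return building["fortified"]

def castle_by_my_definition(building):
    """'Castle' = a fortified structure surrounded by a moat."""
    return building["fortified"] and building["has_moat"]

print(castle_by_your_definition(chateau_de_gisors))  # True
print(castle_by_my_definition(chateau_de_gisors))    # False
# Both outputs are correct answers -- to two different questions. No
# meta-definition, and no exercise of power, changes either underlying fact.
```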

comment by Psy-Kosh · 2010-08-13T03:28:09.872Z · LW(p) · GW(p)

*blinks* how did I imply that morality varies? I thought (was trying to imply) that morality is an absolute standard and that humans simply happen to be the sort of beings that care about the particular standard we call "morality". (Well, with various caveats like not being sufficiently reflective to be able to fully explicitly state our "morality algorithm", nor do we fully know all its consequences)

However, when humans and paperclippers interact, well, there will probably be some sort of fight if one doesn't end up with some sort of PD cooperation or whatever. It's not that paperclippers and humans disagree on anything, it's simply, well, they value paperclips a whole lot more than lives. We're sort of stuck with having to act in a way to prevent the hypothetical them from acting on that.

(of course, the notion that most humans seem to have the same underlying core "morality algorithm", just disagreeing on the implications or such, is something to discuss, but that gets us out of executive summary territory, no?)

Replies from: Vladimir_M
comment by Vladimir_M · 2010-08-13T03:55:53.353Z · LW(p) · GW(p)

Psy-Kosh:

(of course, the notion that most humans seem to have the same underlying core "morality algorithm", just disagreeing on the implications or such, is something to discuss, but that gets us out of executive summary territory, no?)

I would say that it's a crucial assumption, which should be emphasized clearly even in the briefest summary of this viewpoint. It is certainly not obvious, to say the least. (And, for full disclosure, I don't believe that it's a sufficiently close approximation of reality to avoid the problem I emphasized above.)

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-08-13T14:56:00.539Z · LW(p) · GW(p)

Hrm, fair enough. I thought I'd effectively implied it, but apparently not sufficiently.

(Incidentally... you don't think it's a close approximation to reality? Most humans seem to value (to various extents) happiness, love, (at least some) lives, etc... right?)

Replies from: CronoDAS
comment by CronoDAS · 2010-08-14T09:19:09.051Z · LW(p) · GW(p)

Different people (and cultures) seem to put very different weights on these things.

Here's an example:

You're a government minister who has to decide who to hire to do a specific task. There are two applicants. One is your brother, who is marginally competent at the task. The other is a stranger with better qualifications who will probably be much better at the task.

The answer is "obvious."

In some places, "obviously" you hire your brother. What kind of heartless bastard won't help out his own brother by giving him a job?

In others, "obviously" you should hire the stranger. What kind of corrupt scoundrel abuses his position by hiring his good-for-nothing brother instead of the obviously superior candidate?

comment by multifoliaterose · 2010-08-12T17:51:17.911Z · LW(p) · GW(p)

Okay, I can see how XiXiDu's post might come across that way. I think I can clarify what I think that XiXiDu is trying to get at by asking some better questions of my own.

  1. What evidence has SIAI presented that the Singularity is near?
  2. If the Singularity is near then why has the scientific community missed this fact?
  3. What evidence has SIAI presented for the existence of grey goo technology?
  4. If grey goo technology is feasible then why has the scientific community missed this fact?
  5. Assuming that the Singularity is near, what evidence is there that SIAI has a chance to lower global catastrophic risk in a nontrivial way?
  6. What evidence is there that SIAI has room for more funding?
Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T20:57:51.399Z · LW(p) · GW(p)

"Near"? Where'd we say that? What's "near"? XiXiDu thinks we're Kurzweil?

What kind of evidence would you want aside from a demonstrated Singularity?

Grey goo? Huh? What's that got to do with us? Read Nanosystems by Eric Drexler or Freitas on "global ecophagy". XiXiDu thinks we're Foresight?

If this business about "evidence" isn't a demand for particular proof, then what are you looking for besides not-further-confirmed straight-line extrapolations from inductive generalizations supported by evidence?

Replies from: multifoliaterose, multifoliaterose, XiXiDu
comment by multifoliaterose · 2010-08-12T22:56:35.707Z · LW(p) · GW(p)

"Near"? Where'd we say that? What's "near"? XiXiDu thinks we're Kurzweil?

You've claimed in your Bloggingheads diavlog with Scott Aaronson that you think it's pretty obvious that there will be an AGI within the next century. As far as I know you have not offered a detailed description of the reasoning that led you to this conclusion that can be checked by others.

I see this as significant for the reasons given in my comment here.

Grey goo? Huh? What's that got to do with us? Read Nanosystems by Eric Drexler or Freitas on "global ecophagy". XiXiDu thinks we're Foresight?

I don't know what the situation is with SIAI's position on grey goo - I've heard people say the SIAI staff believe in nanotechnology having capabilities out of line with the beliefs of the scientific community, but they may have been misinformed. So let's forget about questions 3 and 4.

Questions 1, 2, 5 and 6 remain.

Replies from: ciphergoth, Vladimir_Nesov
comment by Paul Crowley (ciphergoth) · 2010-08-13T05:56:53.165Z · LW(p) · GW(p)

You've claimed in your Bloggingheads diavlog with Scott Aaronson that you think it's pretty obvious that there will be an AGI within the next century.

You've shifted the question from "is SIAI on balance worth donating to" to "should I believe everything Eliezer has ever said".

comment by Vladimir_Nesov · 2010-08-12T23:11:51.661Z · LW(p) · GW(p)

I don't know what the situation is with SIAI's position on grey goo - I've heard people say the SIAI staff believe in nanotechnology having capabilities out of line with the beliefs of the scientific community, but they may have been misinformed.

The point is that grey goo is not relevant to SIAI's mission (apart from being yet another background existential risk that FAI can dissolve). "Scientific community" doesn't normally professionally study (far) future technological capabilities.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T11:01:09.706Z · LW(p) · GW(p)

My whole point about grey goo has been, as stated, that a possible superhuman AI could use it to do really bad things. That is, I do not see how an encapsulated AI, even a superhuman AI, could pose the stated risks without the use of advanced nanotechnology. Is it going to use nukes, like Skynet? Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is at all possible without it.

I'm shocked how you people misinterpreted my intentions there.

Replies from: katydee, Vladimir_Nesov
comment by katydee · 2010-08-13T11:15:09.656Z · LW(p) · GW(p)

If a superhuman AI is possible without advanced nanotechnology, a superhuman AI could just invent advanced nanotechnology and implement it.

comment by Vladimir_Nesov · 2010-08-13T16:01:31.980Z · LW(p) · GW(p)

Grey goo is only a potential danger in its own right because it's a way dumb machinery can grow in destructive power (you don't need to assume AI controlling it for it to be dangerous, at least so goes the story). AGI is not dumb, so it can use something more fitting to precise control than grey goo (and correspondingly more destructive and feasible).

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T16:09:29.770Z · LW(p) · GW(p)

The grey goo example was named to exemplify the speed and sophistication of nanotechnology that would have to be around to either allow an AI to be built in the first place or be of considerable danger.

I consider your comment an expression of personal disgust. No way you could possibly misinterpret my original point and subsequent explanation to this extent.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-13T16:32:33.698Z · LW(p) · GW(p)

The grey goo example was named to exemplify the speed and sophistication of nanotechnology that would have to be around to either allow an AI to be built in the first place or be of considerable danger.

As katydee pointed out, if for some strange reason grey goo is what AI would want, AI will invent grey goo. If you used "grey goo" to refer to the rough level of technological development necessary to produce grey goo, then my comments missed that point.

I consider your comment an expression of personal disgust. No way you could possibly misinterpret my original point and subsequent explanation to this extent.

Illusion of transparency. Since the general point about nanotech seems equally wrong to me, I couldn't distinguish between the error of making it and making a similarly wrong point about the relevance of grey goo in particular. In general, I don't plot, so take my words literally. If I don't like something, I just say so, or keep silent.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T16:48:06.689Z · LW(p) · GW(p)

If it seems equally wrong, why haven't you pointed me to some further reasoning on the topic regarding the feasibility of AGI without advanced (grey goo level) nanotechnology? Why haven't you argued about the dangers of AGI which is unable to make use of advanced nanotechnology? I was inquiring about these issues in my original post and not trying to argue against the scenarios in question.

Yes, I've seen the comment regarding the possible invention of advanced nanotechnology by AGI. If AGI needs something that isn't there it will just pull it out of its hat. Well, I have my doubts that even a superhuman AGI can steer the development of advanced nanotechnology so that it can gain control of it. Sure, it might solve the problems associated with it and send the solutions to some researcher. Then it could buy the stocks of the subsequent company involved with the new technology and somehow gain control... well, at this point we are already deep into subsequent reasoning about something shaky that is at the same time used as evidence for the very reasoning that involves it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-13T16:49:27.629Z · LW(p) · GW(p)

To the point: if AGI can't pose a danger, because its hands are tied, that's wonderful! Then we have more time to work on FAI. FAI is not about superpowerful robots, it's about technically understanding what we want, and using that understanding to automate the manufacturing of goodness. The power is expected to come from unbounded automatic goal-directed behavior, something that happens without humans in the system to ever stop the process if it goes wrong.

comment by Vladimir_Nesov · 2010-08-13T16:48:49.119Z · LW(p) · GW(p)

To the point: if AI can't pose a danger, because its hands are tied, that's wonderful! Then we have more time to work on FAI.

comment by multifoliaterose · 2010-08-13T00:04:42.734Z · LW(p) · GW(p)

Overall I'd feel a lot more comfortable if you just said "there's a huge amount of uncertainty as to when existential risks will strike and which ones will strike, I don't know whether or not I'm on the right track in focusing on Friendly AI or whether I'm right about when the Singularity will occur, I'm just doing the best that I can."

This is largely because of the issue that I raise here.

I should emphasize that I don't think that you'd ever knowingly do something that raised existential risk, I think that you're a kind and noble spirit. But I do think I'm raising a serious issue which you've missed.

Edit: See also these comments

comment by XiXiDu · 2011-06-08T14:22:27.485Z · LW(p) · GW(p)

If this business about "evidence" isn't a demand for particular proof, then what are you looking for besides not-further-confirmed straight-line extrapolations from inductive generalizations supported by evidence?

I am looking for the evidence in "supported by evidence". I am further trying to figure out how you anticipate your beliefs to pay rent, what you anticipate to see if explosive recursive self-improvement is possible, and how that belief could be surprised by data.

If you just say, "I predict we will likely be wiped out by badly done AI.", how do you expect to update on evidence? What would constitute such evidence?
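To make that "update on evidence" question concrete, here is a minimal, purely illustrative Bayes-rule sketch; the numbers are arbitrary assumptions, not anyone's actual estimates. The point is that a belief only moves when the observation is more likely under one hypothesis than the other.

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a binary hypothesis."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

prior = 0.5  # arbitrary prior that "explosive recursive self-improvement is possible"

# An observation both hypotheses predict equally well cannot move the belief:
print(posterior(prior, 0.9, 0.9))  # 0.5 -- no update

# An observation that is much more likely if the hypothesis is true does move it:
print(posterior(prior, 0.9, 0.3))  # 0.75 -- a genuine update
# The question above amounts to: which concrete observations would play the
# second role rather than the first?
```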

comment by XiXiDu · 2010-08-12T17:51:36.453Z · LW(p) · GW(p)

I haven't done the reading. For further explanation read this comment.

Why do you always and exclusively mention Charles Stross? I need to know if you actually read all of my post.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T21:00:03.030Z · LW(p) · GW(p)

Because the fact that you're mentioning Charles Stross means that you need to do basic reading, not complicated reading.

Replies from: Rain, XiXiDu
comment by Rain · 2010-08-13T12:36:32.005Z · LW(p) · GW(p)

To put my own spin on XiXiDu's questions: What quality or position does Charles Stross possess that should cause us to leave him out of this conversation (other than the quality 'Eliezer doesn't think he should be mentioned')?

comment by XiXiDu · 2010-08-13T08:12:08.871Z · LW(p) · GW(p)

Another vacuous statement. I expected more.

comment by Wei Dai (Wei_Dai) · 2010-08-12T17:30:34.168Z · LW(p) · GW(p)

What stronger points are you referring to? It seems to me XiXiDu's post has only 2 points, both of which Eliezer addressed:

  1. "Given my current educational background and knowledge I cannot differentiate LW between a consistent internal logic, i.e. imagination or fiction and something which is sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action that are proclaimed on this site."
  2. His smart friends/favorite SF writers/other AI researchers/other Bayesians don't support SIAI.
Replies from: XiXiDu, multifoliaterose
comment by XiXiDu · 2010-08-13T13:58:07.433Z · LW(p) · GW(p)

My point is that your evidence has to stand up to whatever estimations you come up with. My point is the missing transparency in your decision making regarding the possibility of danger posed by superhuman AI. My point is that any form of external peer review is missing and that therefore I either have to believe you or learn enough to judge all of your claims myself after reading hundreds of posts and thousands of documents to find some pieces of evidence hidden beneath. My point is that competition is necessary, that not just the SIAI should work on the relevant problems. There are many other points you seem to be missing entirely.

comment by multifoliaterose · 2010-08-12T17:36:39.074Z · LW(p) · GW(p)

"Is the SIAI evidence-based, or merely following a certain philosophy?"

Replies from: Eliezer_Yudkowsky, Wei_Dai
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T17:48:38.222Z · LW(p) · GW(p)

Oh, is that the substantive point? How the heck was I supposed to know you were singling that out?

That one's easy: We're doing complex multi-step extrapolations argued to be from inductive generalizations themselves supported by the evidence, which can't be expected to come with experimental confirmation of the "Yes, we built an unFriendly AI and it went foom and destroyed the world" sort. This sort of thing is dangerous, but a lot of our predictions are really antipredictions and so the negations of the claims are even more questionable once you examine them.

Replies from: XiXiDu, multifoliaterose
comment by XiXiDu · 2010-08-13T14:20:35.565Z · LW(p) · GW(p)

If you have nothing valuable to say, why don't you stay away from commenting at all? Otherwise you could simply ask me what I meant to say, if something isn't clear. But those empty statements coming from you recently make me question whether you're the person I thought you were. You cannot even guess what I am trying to ask here? Oh come on...

I was inquiring about the supportive evidence at the origin of your complex multi-step extrapolations argued to be from inductive generalizations. If there isn't any, what difference is there between writing fiction and complex multi-step extrapolations argued to be from inductive generalizations?

comment by multifoliaterose · 2010-08-12T17:54:18.719Z · LW(p) · GW(p)

What you say here makes sense, sorry for not being more clear earlier. See my list of questions in my response to another one of your comments.

comment by Wei Dai (Wei_Dai) · 2010-08-12T17:46:00.328Z · LW(p) · GW(p)

How was Eliezer supposed to answer that, given that XiXiDu stated that he didn't have enough background knowledge to evaluate what's already on LW?

comment by markan · 2010-08-12T16:28:28.146Z · LW(p) · GW(p)

Agreed, and I think there's a pattern here. XiXiDu is asking the right questions about why SIAI doesn't have wider support. It is because there are genuine holes in its reasoning about the singularity, and SIAI chooses not to engage with serious criticism that gets at those holes. Example (one of many): I recall Shane Legg commenting that it's not practical to formalize friendliness before anyone builds any form of AGI (or something to that effect). I haven't seen SIAI give a good argument to the contrary.

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-08-12T16:38:11.302Z · LW(p) · GW(p)

Example (one of many): I recall Shane Legg commenting that it's not practical to formalize friendliness before anyone builds any form of AGI (or something to that effect). I haven't seen SIAI give a good argument to the contrary.

Gahhh! The horde of arguments against that idea that instantly sprang to my mind (with warning bells screeching) perhaps hints at why a good argument hasn't been given to the contrary (if, in fact, it hasn't). It just seems so obvious. And I don't mean that as a criticism of you or Shane at all. Most things that we already understand well seem like they should be obvious to others. I agree that there should be a post making the arguments on that topic either here on LessWrong or on the SIAI website somewhere. (Are you sure there isn't?)

Edit: And you demonstrate here just why Eliezer (or others) should bother to answer XiXiDu's questions even if there are some weaknesses in his reasoning.

Replies from: markan, rwallace
comment by markan · 2010-08-12T16:53:09.634Z · LW(p) · GW(p)

My point is that Shane's conclusion strikes me as the obvious one, and I believe many smart, rational, informed people would agree. It may be the case that, for the majority of smart, rational, informed people, there exists an issue X for which they think "obviously X" and SIAI thinks "obviously not X." To be taken seriously, SIAI needs to engage with the X's.

Replies from: wedrifid, xamdam
comment by wedrifid · 2010-08-12T17:16:48.218Z · LW(p) · GW(p)

I understand your point, and agree that your conclusion is one that many smart, rational people with good general knowledge would share. Once again I concur that engaging with those X's is important, including that 'X' we're discussing here.

Replies from: markan
comment by markan · 2010-08-12T17:52:56.131Z · LW(p) · GW(p)

Sounds like we mostly agree. However, I don't think it's a question of general knowledge. I'm talking about smart, rational people who have studied AI enough to have strongly-held opinions about it. Those are the people who need to be convinced; their opinions propagate to smart, rational people who haven't personally investigated AI in depth.

I'd love to hear your take on X here. What are your reasons for believing that friendliness can be formalized practically, and an AGI based on that formalization built before any other sort of AGI?

Replies from: whpearson, wedrifid, XiXiDu
comment by whpearson · 2010-08-12T19:49:15.268Z · LW(p) · GW(p)

If I was SIAI my reasoning would be the following. First, drop the believes/doesn't-believe dichotomy and move to probabilities.

So what is the probability of a good outcome if you can't formalize friendliness before AGI? Some of them would argue infinitesimal. This is based on fast take-off winner take all type scenarios (I have a problem with this stage, but I would like it to be properly argued and that is hard).

So looking at the decision tree (under these assumptions) the only chance of a good outcome is to try to formalise FAI before AGI becomes well known. All the other options lead to extinction.

So to attack the "formalise Friendliness before AGI" position you would need to argue that the first AGIs are very unlikely to kill us all. That is the major battleground as far as I am concerned.

Replies from: Benja, jimrandomh, ciphergoth, timtyler, markan
comment by Benya (Benja) · 2010-08-12T19:58:11.260Z · LW(p) · GW(p)

Agreed about what the "battleground" is, modulo one important nit: not the first AGI, but the first AGI that recursively self-improves at a high speed. (I'm pretty sure that's what you meant, but it's important to keep in mind that, e.g., a roughly human-level AGI as such is not what we need to worry about -- the point is not that intelligent computers are magically superpowerful, but that it seems dangerously likely that quickly self-improving intelligences, if they arrive, will be non-magically superpowerful.)

comment by jimrandomh · 2010-08-12T19:54:53.748Z · LW(p) · GW(p)

I don't think formalize-don't formalize should be a simple dichotomy either; friendliness can be formalized in various levels of detail, and the more details are formalized, the fewer unconstrained details there are which could be wrong in a way that kills us all.

comment by Paul Crowley (ciphergoth) · 2010-08-13T06:07:51.203Z · LW(p) · GW(p)

I'd look at it the other way: I'd take it as practically certain that any superintelligence built without explicit regard to Friendliness will be unFriendly, and ask what the probability is that through sufficiently slow growth in intelligence and other mere safeguards, we manage to survive building it.

My best hope currently rests on the AGI problem being hard enough that we get uploads first.

(This is essentially the Open Thread about everything Eliezer or SIAI have ever said now, right?)

Replies from: NihilCredo, timtyler
comment by NihilCredo · 2010-08-15T00:19:51.270Z · LW(p) · GW(p)

Uploading would have quite a few benefits, but I get the impression it would make us more vulnerable to whatever tools a hostile AI may possess, not less.

comment by timtyler · 2010-08-13T06:54:09.133Z · LW(p) · GW(p)

Re: "My best hope currently rests on the AGI problem being hard enough that we get uploads first."

Surely a minuscule chance. It would be like Boeing booting up a scanned bird.

comment by timtyler · 2010-08-13T07:36:18.311Z · LW(p) · GW(p)

"So what is the probability of a good outcome if you can't formalize friendliness before AGI? Some of them would argue infinitesimal."

One problem here is the use of a circular definition of "friendliness" - one that defines the concept in terms of whether it leads to a favourable outcome. If you think "friendly" is defined in terms of whether or not the machine destroys humanity, then clearly you will think that an "unfriendly" machine would destroy the world. However, this is just a word game - which doesn't tell us anything about the actual chances of such destruction happening.

comment by markan · 2010-08-12T20:31:13.202Z · LW(p) · GW(p)

Let's say "we" are the good guys in the race for AI. Define

W = we win the race to create an AI powerful enough to protect humanity from any subsequent AIs

G = our AI can be used to achieve a good outcome

F = we go the "formalize friendliness" route

O = we go a promising route other than formalizing friendliness

At issue is which of the following is higher:

P(G|WF)P(W|F) or P(G|WO)P(W|O)

From what I know of SIAI's approach to F, I estimate P(W|F) to be many orders of magnitude smaller than P(W|O). I estimate P(G|WO) to be more than 1% for a good choice of O (this is a lower bound; my actual estimate of P(G|WO) is much higher, but you needn't agree with that to agree with my conclusion). Therefore the right side wins.

There are two points here that one could conceivably dispute, but it sounds like the "SIAI logic" is to dispute my estimate of P(G|WO) and say that P(G|WO) is in fact tiny. I haven't seen SIAI give a convincing argument for that.
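A purely numerical illustration of that comparison, with hypothetical stand-in figures for the verbal estimates above rather than anything SIAI (or anyone else) has published:

```python
# Hypothetical numbers only, to make the structure of the comparison explicit.
# F = "formalize friendliness first" route, O = "other promising route".

p_win_given_F   = 1e-6   # stand-in for "many orders of magnitude smaller"
p_good_given_WF = 0.99   # even granting that F nearly guarantees goodness if we win...

p_win_given_O   = 1e-2   # stand-in for a much larger chance of winning the race
p_good_given_WO = 0.01   # ...the stated 1% lower bound on a good outcome for O

value_of_F = p_good_given_WF * p_win_given_F   # ~1e-6
value_of_O = p_good_given_WO * p_win_given_O   # ~1e-4

print(value_of_F < value_of_O)  # True under these assumed numbers
# The disagreement is therefore entirely about the inputs -- especially
# P(G|WO) -- not about the arithmetic.
```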

Replies from: whpearson
comment by whpearson · 2010-08-12T20:54:08.457Z · LW(p) · GW(p)

I'd start here to get an overview.

My summary would be: there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds then it is likely to be contrary to our values because it will have a different sense of what is good or worthwhile. This moderately relies on the speed/singleton issue, because evolutionary pressure between AIs might force them in the same direction as us. We would likely be out-competed before this happens though, if we rely on competition between AIs.

I think various people associated with SIAI mean different things by formalizing friendliness. I recall Vladimir Nesov meaning getting better than a 50% probability of providing a good outcome.

Edited to add my own overview.

Replies from: markan, Vladimir_Nesov, timtyler
comment by markan · 2010-08-12T21:24:20.858Z · LW(p) · GW(p)

It doesn't matter what happens when we sample a mind at random. We only care about the sorts of minds we might build, whether by designing them or evolving them. Either way, they'll be far from random.

Replies from: whpearson
comment by whpearson · 2010-08-12T21:38:35.798Z · LW(p) · GW(p)

Consider my "at random" short hand for "at random from the space of possible minds built by humans".

The Eliezer-approved example of humans not getting a simple system to do what they want is the classic Machine Learning example where a Neural Net was trained on two different sorts of tanks. It had happened that the photographs of the different types of tanks had been taken at different times of day. So the classifier just worked on that rather than actually looking at the types of tank. So we didn't build a tank classifier but a day/night classifier. More here.

While I may not agree with Eliezer on everything, I do agree with him that it is damn hard to get a computer to do what you want once you stop programming it explicitly.
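A toy reconstruction of that tank story on synthetic data (assuming numpy and scikit-learn are available; the features and numbers are invented purely for illustration):

```python
# Toy version of the tank/daylight story. During training, "brightness" is
# almost perfectly correlated with the label, so the classifier can latch
# onto it and largely ignore the feature we actually care about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Feature 0: the intended signal (e.g. "tank shape"), weak and noisy.
# Feature 1: brightness -- spuriously predictive in the training set.
labels_train = rng.integers(0, 2, n)
shape = labels_train + rng.normal(0, 2.0, n)        # weak real signal
brightness = labels_train + rng.normal(0, 0.1, n)   # strong spurious signal
X_train = np.column_stack([shape, brightness])

clf = LogisticRegression().fit(X_train, labels_train)

# At "deployment", brightness no longer tracks the label.
labels_test = rng.integers(0, 2, n)
shape_t = labels_test + rng.normal(0, 2.0, n)
brightness_t = rng.normal(0.5, 0.1, n)              # uninformative now
X_test = np.column_stack([shape_t, brightness_t])

print("train accuracy:", clf.score(X_train, labels_train))  # looks excellent
print("test accuracy:", clf.score(X_test, labels_test))     # typically far lower
# We asked for a tank classifier and got something closer to a day/night classifier.
```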

Replies from: markan
comment by markan · 2010-08-12T21:48:21.582Z · LW(p) · GW(p)

Obviously AI is hard, and obviously software has bugs.

To counter my argument, you need to make a case that the bugs will be so fundamental and severe, and go undetected for so long, that despite any safeguards we take, they will lead to catastrophic results with probability greater than 99%.

Replies from: whpearson
comment by whpearson · 2010-08-12T21:54:38.828Z · LW(p) · GW(p)

How do you consider "formalizing friendliness" to be different from "building safeguards"?

Replies from: markan
comment by markan · 2010-08-12T21:56:10.403Z · LW(p) · GW(p)

Things like AI boxing or "emergency stop buttons" would be instances of safeguards. Basically any form of human supervision that can keep the AI in check even if it's not safe to let it roam free.

Replies from: whpearson, wedrifid
comment by whpearson · 2010-08-12T22:19:41.718Z · LW(p) · GW(p)

Are you really suggesting a trial and error approach where we stick evolved and human created AIs in boxes and then eyeball them to see what they are like? Then pick the nicest looking one, on a hunch, to have control over our light cone?

I've never seen the appeal of AI boxing.

comment by wedrifid · 2010-08-13T05:42:07.187Z · LW(p) · GW(p)

This is why we need to create friendliness before AGI -> A lot of people who are loosely familiar with the subject think those options will work!

A goal directed intelligence will work around any obstacles in front of it. It'll make damn sure that it prevents anyone from pressing emergency stop buttons.

comment by Vladimir_Nesov · 2010-08-12T21:01:11.156Z · LW(p) · GW(p)

Better than chance? What chance?

Replies from: whpearson
comment by whpearson · 2010-08-12T21:14:38.076Z · LW(p) · GW(p)

Sorry, "Better than chance" is an english phrase than tends to mean more than 50%.

It assumes an even chance of each outcome. I.e. do better than selecting randomly.

Not appropriate in this context, my brain didn't think of the wider implications as it wrote it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-12T21:18:09.035Z · LW(p) · GW(p)

It's easy to do better than random. *Pours himself a cup of tea.*

comment by timtyler · 2010-08-13T06:52:28.198Z · LW(p) · GW(p)

Programmers do not operate by "picking programs at random", though.

The idea that "picking programs at random" has anything to do with the issue seems just confused to me.

Replies from: whpearson
comment by whpearson · 2010-08-13T08:10:45.773Z · LW(p) · GW(p)

The first AI will be determined by the first programmer, sure. But I wasn't talking about that level; that programmer's biases and concern for the ethics of the AI will be random, drawn from the space of humans. Or at least I can't see any reason why I should expect people who care about ethics to be more likely to make AI than those who think economics will constrain AI to be nice.

Replies from: timtyler
comment by timtyler · 2010-08-13T08:29:50.931Z · LW(p) · GW(p)

That is now a completely different argument to the original "there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds".

Re: "the biases and concern for the ethics of the AI of that programmer will be random from the space of humans"

Those concerned probably have to be expert programmers, able to build a company or research group, and attract talented assistance, as well as probably customers. They will probably be far from what you would get if you chose at "random".

Replies from: whpearson
comment by whpearson · 2010-08-13T09:22:46.649Z · LW(p) · GW(p)

Do we pick a side of a coin "at random" from the two possibilities when we flip it?

Epistemically, yes, we don't have sufficient information to predict it*. However if we do the same thing twice it has the same outcome so it is not physically random.

So while the process that decides what the first AI is like is not physically random, it is epistemically random until we have a good idea of what AIs produce good outcomes and get humans to follow those theories. For this we need something that looks like a theory of friendliness, to some degree.

Considering we might use evolutionary methods for part of the AI creation process, randomness doesn't look like too bad a model.

*With a few caveats. I think it is biased to land the same way up as it was when flipped, due to the chance of making it spin and not flip.

Edit: Oh and no open source AI then?

Replies from: timtyler
comment by timtyler · 2010-08-13T09:43:59.980Z · LW(p) · GW(p)

We do have an extensive body of knowledge about how to write computer programs that do useful things. The word "random" seems like a terrible mis-summary of that body of information to me.

As for "evolution" being equated to "randomnness" - isn't that one of the points that creationists make all the time? Evolution has two motors - variation and selection. The first of these may have some random elements, but it is only one part of the overall process.

Replies from: whpearson
comment by whpearson · 2010-08-13T09:59:40.360Z · LW(p) · GW(p)

I think we have a disconnect on how much we believe proper scary AIs will be like previous computer programs.

My conception of current computer programs is that they are crystallised thoughts plucked from our own minds, easily controllable and unchangeable. When we get interesting AI the programs will be morphing and far less controllable without a good theory of how to control the change.

I shudder every time people say the "AI's source code" as if it is some unchangeable and informative thing about the AI's behaviour after the first few days of the AI's existence.

I'm not sure how to resolve that difference.

comment by wedrifid · 2010-08-13T05:36:06.262Z · LW(p) · GW(p)

You have correctly identified the area in which we do not agree.

The most relevant knowledge needed in this case is knowledge of game theory and human behaviour. They also need to know 'friendliness is a very hard problem'. They then need to ask themselves the following question:

What is likely to happen if people have the ability to create an AGI but do not have a proven mechanism for implementing friendliness? Is it:

  • Shelve the AGI, don't share the research and set to work on creating a framework for friendliness. Don't rush the research - act as if the groundbreaking AGI work that you just created was a mere toy problem and the only real challenge is the friendliness. Spend an even longer period of time verifying the friendliness design and never let on that you have AGI capabilities.
  • Something else.

What are your reasons for believing that friendliness can be formalized practically, and an AGI based on that formalization built before any other sort of AGI?

I don't (with that phrasing). I actually suspect that the problem is too difficult to get right and far too easy to get wrong. We're probably all going to die. However, I think we're even more likely to die if some fool goes and invents an AGI before they have a proven theory of friendliness.

comment by XiXiDu · 2010-08-13T13:40:59.073Z · LW(p) · GW(p)

Those are the people, indeed. But where do the donations come from? EY seems to be using this argument against me as well. I'm just not educated, well-read or intelligent enough for any criticism. Maybe so, I acknowledged that in my post. But have I seen any pointers to how people arrive at their estimations yet? No, just the demand to read all of LW, which according to EY doesn't even deal with what I'm trying to figure out, but rather the dissolving of biases. A contradiction?

I'm inquiring about the strong claims made by the SIAI, which includes EY and LW. Why? Because they ask for my money and resources. Because they gather fanatic followers who believe in the possibility of literally going to hell. If you follow the discussion surrounding Roko's posts you'll see what I mean. And because I'm simply curious and like to discuss, besides becoming less wrong.

But if EY or someone else is going to tell me that I'm just too dumb and it doesn't matter what I do, think or donate, I can accept that. I don't expect Richard Dawkins to enlighten me about evolution either. But don't expect me to stay quiet about my insignificant personal opinion and epistemic state (as you like to call it) either! Although since I'm conveniently not neurotypical (I guess), you won't have to worry about me turning into an antagonist either, simply because EY is being impolite.

comment by xamdam · 2010-08-12T23:39:46.524Z · LW(p) · GW(p)

SIAI's position does not require "obviously X" from a decision perspective; the opposite one does. To be so sure of something as complicated as the timeline of FAI math vs AGI development seems seriously foolish to me.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T13:52:25.906Z · LW(p) · GW(p)

It is not a matter of being sure of it but of weighing it against what is asked for in return, other possible events of equal probability, and the utility payoff from spending the resources on something else entirely.

I'm not asking the SIAI to prove "obviously X" but rather to prove the very probability of X that they are claiming it has within the larger context of possibilities.

Replies from: xamdam
comment by xamdam · 2010-08-13T14:03:05.001Z · LW(p) · GW(p)

No such proof is possible with our machinery.

=======================================================

Capa: It's the problem right there. Between the boosters and the gravity of the sun the velocity of the payload will get so great that space and time will become smeared together and everything will distort. Everything will be unquantifiable.

Kaneda: You have to come down on one side or the other. I need a decision.

Capa: It's not a decision, it's a guess. It's like flipping a coin and asking me to decide whether it will be heads or tails.

Kaneda: And?

Capa: Heads... We harvested all Earth's resources to make this payload. This is humanity's last chance... our last, best chance... Searle's argument is sound. Two last chances are better than one.

=====================================================

(Sunshine 2007)

Not being able to calculate chances does not excuse one from using their best de-biased neural machinery to make a guess at a range. IMO 50 years is reasonable (I happen to know something about the state of AI research outside of the FAI framework). I would not roll over in surprise if it's 5 years given state of certain technologies.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-08-13T20:27:54.244Z · LW(p) · GW(p)

(I happen to know something about the state of AI research outside of the FAI framework). I would not roll over in surprise if it's 5 years given state of certain technologies.

I'm curious, because I like to collect this sort of data: what is your median estimate?

(If you don't want to say because you don't want to defend a specific number or list off a thousand disclaimers I completely understand.)

Replies from: xamdam
comment by xamdam · 2010-08-13T21:54:23.846Z · LW(p) · GW(p)

Median 15-20 years. I'm not really an expert, but certain technologies are coming really close to modeling cognition as I understand it.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-08-13T22:04:36.592Z · LW(p) · GW(p)

Thanks!

comment by rwallace · 2010-08-12T19:54:57.994Z · LW(p) · GW(p)

Well it's clear to me now that formalizing Friendliness with pen and paper is as naively impossible as it would have been for the people of ancient Babylon to actually build a tower that reached the heavens; so if resources are to be spent attempting it, then it's something that does need to be explicitly argued for.

comment by timtyler · 2010-08-13T07:28:49.351Z · LW(p) · GW(p)

"By focusing on excessively challenging engineering projects it seems possible that those interested in creating a positive future might actually create future problems – by delaying their projects to the point where less scrupulous rivals beat them to the prize"

comment by MartinB · 2010-08-12T17:25:17.179Z · LW(p) · GW(p)

[This comment is a response to the original post, but seemed to fit here most.] I upvoted the OP for raising interesting questions that will arise often and deserve an accessible answer. Perhaps someone can put out, or point to, a reading guide with references.

On the crackpot index the claim that everyone else got it wrong deserves to raise a red flag, but that does not mean it is wrong. There are way too many examples of that in the world. (To quote Eliezer: 'yes, people really are that stupid') Read "The Checklist Manifesto" by Atul Gawande for a real-life example that is ridiculously simple to understand. (Really read that. It is also entertaining!) Look at the history of science. Consider the treatment that Semmelweis got for suggesting that doctors wash their hands before operations. You find lots of examples where plain, simple ideas were ridiculed. So yes, it can happen that a whole profession goes blind on one spot, and for every change there has to be someone trying it out in the first place. The degree to which research is not done well is subject to judgment. Now it might be helpful to start out with more applicable ideas, like improving the tool set for real-life problems. You don't have to care about the singularity to care about other LW content like self-debiasing, or winning.

Regarding the donation aspect, it seems like rationalists are particularly bad at supporting their own causes. You might estimate how much effort you spend in checking out any charity you do support, and then try not to demand higher standards of this one.

Replies from: XiXiDu, CarlShulman
comment by XiXiDu · 2010-08-12T18:06:41.079Z · LW(p) · GW(p)

Yes, but there are also many examples that show people coming up with the same idea or conclusion at the same time. Take for example A. N. Kolmogorov and Gregory Chaitin who proposed the same definition of randomness independently.

The circumstances regarding Eliezer Yudkowsky are however different. Other people came up with the ideas he is using as supportive fortification and pronunciamento. Some of those people even made similar inferences, yet they do not ask for donations to stop the otherwise inevitable apocalypse.

Replies from: MartinB
comment by MartinB · 2010-08-12T18:26:18.858Z · LW(p) · GW(p)

Your argument does not seem to work. I pointed out how there is stupidity in professionals, but I made no claim that there is only stupidity. So your examples do not disprove the point. It is nice when people come up with similar things, especially if they happen to be correct, but it is by no means to be expected in every case. Would you be interested in taking specific pieces apart and/or arguing them?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-12T18:59:50.384Z · LW(p) · GW(p)

The argument was that Eliezer Yudkowsky, to my knowledge, has not come up with anything unique. The ideas on which the SIAI is based and asks for donations are not new. Given the basic idea of superhuman AI and widespread awareness of it I thought it was not unreasonable to inquire about the state of activists trying to prevent it.

Are you trying to disprove an argument I made? I asked for an explanation and wasn't stating some insight about why the SIAI is wrong.

Is Robin Hanson donating most of his income to the SIAI?

comment by CarlShulman · 2010-08-13T07:31:46.148Z · LW(p) · GW(p)

You might estimate how much effort you spend in checking out any charity you do support, and then try to not demand higher standards of this one.

While it is silly to selectively apply efficacy standards to charity (giving to inefficient charities without thinking, and then rejecting much more efficient ones on the grounds that they are not maximal [compared to what better choice?]), it is far better to apply the same high standards across the board than low ones.

comment by JoshuaZ · 2010-08-12T21:14:02.600Z · LW(p) · GW(p)

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't.

I'm curious what evidence you actually have that "You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong." As far as I can tell, LWians are on the whole more rational than the general populace, and probably more rational than most smart people. But I'd be very curious as to what evidence you have that leads you to conclude that the rationality standards of LW massively exceed those of a random individual's "smart friends." Empirically, people on LW have trouble telling when they have a sufficient knowledge base about topics, and repeat claims that aren't true but that support their pre-existing worldview (I have examples of both of these which I'll link to if asked). LWians seem to be better than general smart people at updating views when confronted with evidence and somewhat better about not falling into certain common cognitive ruts.

That said, I agree that XiXi should read the MWI sequence and am annoyed that XiXi apparently has not read the sequence before making this posting.

Replies from: Eliezer_Yudkowsky, XiXiDu
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T22:18:13.842Z · LW(p) · GW(p)

Well, I could try to rephrase as "Below the standards of promoted, highly rated LW posts", i.e., below the standards of the LW corpus, but what I actually meant there (though indeed I failed to say it) was "the standards I hold myself to when writing posts on LW", i.e., what XiXiDu is trying to compare to Charles Stross.

Replies from: CronoDAS
comment by CronoDAS · 2010-08-14T09:35:40.912Z · LW(p) · GW(p)

Below the standards of promoted, highly rated LW posts

seems to be different than

our smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong

because we do a lot of armchair speculating in comment threads about things on which the "rational" position to take is far from clear - and, furthermore, just because someone isn't trying to present a rational argument for their position at any given moment doesn't mean that they can't.

comment by XiXiDu · 2010-08-13T14:31:37.924Z · LW(p) · GW(p)

Pfft, it was an example whose truth value is circumstantial, as it was merely an analogy used to convey the gist of what I was trying to say, namely the problem of basing subsequent conclusions and actions on other conclusions which themselves do not bear evidence. And I won't read the MWI sequence before learning the required math.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-13T14:36:34.502Z · LW(p) · GW(p)

What subsequent conclusions are based on MWI?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T15:18:01.324Z · LW(p) · GW(p)

Check my comment here. More details would hint at the banned content.

I never said EY or the SIAI based any conclusions on it. It was, as I frequently said, an example to elucidate what I'm talking about when saying that I cannot fathom the origin of some of the assertions made here as they appear to me to be based on other conclusions that are not yet tested themselves.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-13T16:24:55.817Z · LW(p) · GW(p)

What the hell? That link doesn't contain any conclusions based on MWI - in fact, it doesn't seem to contain any conclusions at all, just a bunch of questions. If you mean that MWI is based on unfounded conclusions (rather than that other conclusions are based on MWI), then that's a claim that you really shouldn't be making if you haven't read the MWI sequence.

I see no connection whatsoever to the banned content, either in the topic of MWI or in the comment you linked to. This is a bizarre non-sequitur, and as someone who wants to avoid thinking about that topic, I do not appreciate it. (If you do see a connection, explain only by private message, please. But I'd rather you just let it drop.)

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T16:37:54.429Z · LW(p) · GW(p)

just a bunch of questions.

My post was intended to be asking questions, not making arguments. Obviously you haven't read the banned content.

You seem not to understand the primary question that I tried to highlight with the MWI analogy. MWI is a founded conclusion but you shouldn't use it to make further conclusions based on it. That is, a conclusion first has to yield a new hypothesis that makes predictions. Once you've got new data, something that makes a difference, you can go from there and hypothesize that you can influence causally disconnected parts of the multiverse or that it would be a good idea to toss a quantum coin to make key decisions.

After all it was probably a bad decision to use that example. All you have to do is to substitute MWI with AGI. AGI is, though I'm not sure, a founded conclusion. But taking that conclusion and running with it, building a huge framework of further conclusions around it, is in my opinion questionable. First this conclusion has to yield marginal evidence of its feasibility; only then can you create further hypotheses about its consequences.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-13T17:10:17.680Z · LW(p) · GW(p)

I do not appreciate being told that I "obviously" have not read something that I have, in fact, read. And if you were keeping track, I have previously sent you private messages correcting your misconceptions on that topic, so you should have known that. And now that I've hinted at why you think it's connected to MWI, I can see that that's just another misconception.

Your tone is antagonistic and I had to restrain myself from saying some very hurtful things that I would've regretted. You need to take a step back and think about what you're doing here, before you burn any more social bridges.

EDIT: Argh, restraint fail. That's what the two deleted comments below this are.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2010-08-13T17:45:50.333Z · LW(p) · GW(p)

Your tone is antagonistic...

Is it, and that of EY? Are you telling him the same? Check this comment and tell me again that I am antagonistic.

If I come over as such, I'm sorry. I'm a bit stressed from writing so many comments in response to accusations that I'm trying to damage this movement or making false arguments, when all I did was indeed try to inquire about some problems I have, asking questions.

Replies from: jimrandomh, wedrifid
comment by jimrandomh · 2010-08-13T19:45:27.085Z · LW(p) · GW(p)

I think part of the reason this went over badly is that in the US, there is a well-known and widely hated talk show host named Glenn Beck whose favorite rhetorical trick is to disguise attacks as questions, saying things like "Is it really true that so-and-so eats babies?", repeating it enough times that his audience comes to believe that person eats babies, and then defending his accusations by saying "I'm just asking questions". So some of us, having been exposed to that in the past, see questions and rhetoric mixed a certain way, subconsciously pattern-match against that, and get angry.

comment by wedrifid · 2010-08-13T18:36:46.225Z · LW(p) · GW(p)

If I come over as such, I'm sorry. I'm a bit stressed from writing so many comments in response to accusations that I'm trying to damage this movement or making false arguments, when all I did was indeed try to inquire about some problems I have, asking questions.

I did get the impression that some took your questions as purely rhetorical, soldiers fighting against the credibility of SIAI. I took you as someone hoping to be convinced but with a responsible level of wariness.

Replies from: HughRistik
comment by HughRistik · 2010-08-13T18:45:50.894Z · LW(p) · GW(p)

I did get the impression that some took your questions as purely rhetorical, soldiers fighting against the credibility of SIAI. I took you as someone hoping to be convinced but with a responsible level of wariness.

That was my impression, also. As a result, I found many elements of the responses to XiXiDu to be disappointing. While there were a few errors in his post (e.g. attributing Kurzweil views to SIAI), in general it should have been taken as an opportunity to clarify and throw down some useful links, rather than treat XiXiDu (who is also an SIAI donor!) as a low-g interloper.

comment by XiXiDu · 2010-08-13T17:17:19.456Z · LW(p) · GW(p)

Oh, you are the guy who's spreading all the misinformation about it just so nobody is going to ask more specific questions regarding that topic. Hah, I remember you now. Thanks, but no thanks.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-13T17:23:06.753Z · LW(p) · GW(p)

Your tone is antagonistic and I had to restrain myself from saying some very hurtful things that I would've regretted.

Oh, you are the guy who's spreading all the misinformation about it

Fuck you, and please leave. Is that the reaction you were hoping for, troll?

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2010-08-13T17:39:53.951Z · LW(p) · GW(p)

You wrote this and afterwards sent me a private message about how you are telling me this so that I shut up.

Why would I expect honest argumentation from someone who makes use of such tactics? Especially when I talked about the very same topic with you before just to find out that you do this deliberately?

comment by XiXiDu · 2010-08-13T18:09:39.383Z · LW(p) · GW(p)

Anyway, I herewith apologize unconditionally for any offence and deleted my previous comment.

Going to watch a movie now and eat ice cream. Have fun :-)

Replies from: jimrandomh
comment by jimrandomh · 2010-08-13T19:35:26.642Z · LW(p) · GW(p)

I apologize for my previous comment - I felt provoked, but regardless of the context, it was way out of line.

The thing with the banned topic is, I'm really trying to avoid thinking about it, and seeing it mentioned makes it hard to do that, so I feel annoyed whenever it's brought up. That's not something I'm used to dealing with, and it's a corner case that the usual rules of discourse don't really cover, so I may not have handled it correctly.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-13T19:42:41.476Z · LW(p) · GW(p)

It was my fault all the way back to the OP. I was intrigued by the deletion incident and couldn't shut up, and then I thought it was a good idea to inquire about questions that have troubled me for so long and to steer some debate by provoking strong emotions.

I actually understand that you do not want to think about it. It was a dumb idea to steer further debate into that direction. But how could I know before finding out about it? I'm not the personality type who's going to follow someone telling me not to read about something, to not even think about it.

I deleted the other comment as well.

comment by [deleted] · 2012-08-04T06:55:33.609Z · LW(p) · GW(p)

Right in the beginning of the sequence you managed to get phases wrong. Quick search turns up:

http://www.ex-parrot.com/~pete/quantum-wrong.html

http://www.poe-news.com/forums/spshort.php?pi=1002430803&ti=1002430709

http://physics.stackexchange.com/a/23833/4967

Ouch.

Rest of the argument... given relativistic issues in QM as described, QM is just an approximation which does not work at the relevant scale, and so concluding the existence of multiple worlds from it is very silly.

... a proposition far simpler than the argument for supporting SIAI ...

Indeed.

If you know all these things and you still can't tell that MWI is obviously true - a proposition far simpler than the argument for supporting SIAI - then we have here a question that is actually quite different from the one you seem to try to be presenting:

  • I do not have sufficient g-factor to follow the detailed arguments on Less Wrong. What epistemic state is it rational for me to be in with respect to SIAI?

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't.

Ghahahahaha. "A community blog devoted to refining the art of human rationality"... or in other words, an online equivalent of a green-ink letter.

Replies from: Mitchell_Porter, None
comment by Mitchell_Porter · 2012-08-04T10:12:38.278Z · LW(p) · GW(p)

Right at the beginning of the sequence you managed to get the phases wrong.

Hopefully this mistake will be fixed one day, so the sequence will be judged on the merits of the argument it presents, and not by the presence of a wrong factor of "i".

given relativistic issues in QM as described, QM is just an approximation which does not work at the relevant scale, and so concluding the existence of multiple worlds from it is very silly.

Nonrelativistic QM is an approximation to relativistic QFT, and while relativity certainly introduces a new problem for MWI, it remains true that QFT employs the superposition principle just as much as QM. It's a formalism of "many histories" rather than "many worlds", but the phenomenon of superposition, and therefore the possibility of parallel coexisting realities, is still there.

I would agree that it was foolish for Eliezer to flaunt his dogmatism about MWI as if that was evidence of superior rationality. What I would say is that he wasn't worse than physicists in general. Professional physicists who know far more about the subject than Eliezer still manage to say equally foolish things about the implications of quantum mechanics.

What the evidence suggests to me is that to discover the explanation of QM, you need deep technical knowledge, not just of QM but also QFT, and probably of quantum gravity, at least to the level of the holographic principle, and you also need a very powerful imagination. Possibly the correct answer is a variation on a concept we already possess: many worlds, Bohmian mechanics, loops in time, a 't Hooft cellular automaton. If so, then the big imaginative leap was already carried out, but the technicalities are still hard enough that we don't even know that it's the right type of answer. Eliezer-style dogmatism would be wrong for all the available explanations: we do not know which if any is right; at this stage there is no better strategy than pluralistic investigation, including hybridization of these supposedly distinct concepts. But it's also possible that the correct answer hasn't yet been conceived, even in outline, which is why imagination remains important, as well as technical knowledge.

If you accept this analysis, then it's easier to understand why interpretations of quantum mechanics present such a chaotic scene. The radical ontological differences between the candidate explanations create a lot of conceptual tension, and the essential role of subtle technicalities, and mathematical facts not yet known, in pointing the way to the right answer, mean that this conceptual tension can't be resolved by a simple adjudication like "non-collapse is simpler than collapse". The possibility that the answer is something we haven't even imagined yet, makes life even more difficult for people who can't bear to settle for Copenhagen positivism - should they just insist "there must be an answer, even if we don't know anything about how it works"?

It's therefore difficult to avoid both dogmatic rationalization and passive agnosticism. It's the sort of problem in which the difficulties are such that a return to basics - a review of "what I actually know, rather than what I habitually assume or say" - can take you all the way back to the phenomenological level - "under these circumstances, this is observed to occur".

For people who don't want to devote their lives to solving the problem, but who at least want to have a "rational" perspective on it, what I recommend is that you understand the phenomenological Copenhagen interpretation - not the one which says wavefunctions are real and they collapse when observed, just the one which says that wavefunctions are like probability distributions and describe the statistics of observable quantities - and that you also develop some idea of what's involved in all the major known candidate ontologies.

For readers of this site who believe that questions like this should be resolved by a quantified Occam's razor like Solomonoff induction: in principle, your first challenge is just to make the different theories commensurable - to find a common language precise enough that you can compare their complexity. In practice, that is a difficult enough task (on account of all these ideas being a little bit underspecified) that it couldn't be done without a level of technical engagement which meant you had joined the ranks of "people trying to solve the problem, rather than just pontificating about it".
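
For concreteness, the quantified razor usually meant here is a complexity prior of roughly the form P(H) \propto 2^{-K(H)}, where K(H) is the length in bits of the shortest program that outputs a complete specification of the hypothesis H on some fixed universal machine (this is only a sketch; the exact formulation varies). The commensurability problem above is that nobody has actually written such programs for "many worlds", "Bohm" or "collapse" at comparable levels of rigor, so K(H) only ever gets estimated informally.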

Replies from: None
comment by [deleted] · 2012-08-04T10:43:45.500Z · LW(p) · GW(p)

Hopefully this mistake will be fixed one day, so the sequence will be judged on the merits of the argument it presents, and not by the presence of a wrong factor of "i".

The argument is pure incompetent self-important rambling about nothing. The mistakes only make this easier to demonstrate to people who do not know QM and who assume it must have some merit because someone wasted time writing it up. Removing the mistakes would constitute deception.

Nonrelativistic QM is an approximation to relativistic QFT, and while relativity certainly introduces a new problem for MWI, it remains true that QFT employs the superposition principle just as much as QM.

Nonetheless, there is no satisfactory quantum gravity. It is still only an approximation to reality, and consequently the mathematical artifacts it has (multiple realities) mean nothing. Even if it were exact, it is questionable what the meaning of such artifacts would be.

Professional physicists who know far more about the subject than Eliezer still manage to say equally foolish things about the implications of quantum mechanics.

They at least did not have the stupidity of trying to say something smart about it without even learning it first.

comment by [deleted] · 2012-08-04T09:04:14.836Z · LW(p) · GW(p)

The muckiness surrounding the interferometer is well-known; in fact, the PSE question was written by a LWer.

Rest of the argument... given relativistic issues in QM as described, QM is just an approximation which does not work at the relevant scale, and so concluding the existence of multiple worlds from it is very silly.

The conclusion isn't "MWI is true." The conclusion is "MWI is a simpler explanation than collapse (or straw-Copenhagen, as we in the Contrarian Conspiracy like to call it) for quantum phenomena, and therefore a priori more likely to be true."

And yes, it is also well-known that this quote is not Yudkowsky at his most charming. Try not to conflate him with either rationalism or the community (which are also distinct things!).

Replies from: aaronsw, None
comment by aaronsw · 2012-08-04T10:17:06.833Z · LW(p) · GW(p)

I have not read the MWI sequence yet, but if the argument is that MWI is simpler than collapse, isn't Bohm even simpler than MWI?

(The best argument against Bohm I can find on LW is a brief comment that claims it implies MWI, but I don't understand how and there doesn't seem to be much else on the Web making that case.)

Replies from: Oscar_Cunningham, None
comment by Oscar_Cunningham · 2012-08-04T10:41:55.262Z · LW(p) · GW(p)

MWI just calculates the wavefunction.

Copenhagen calculates the wavefunction but then has additional rules saying when some of the branches collapse.

Bohm calculates the wavefunction and then says that particles have single positions but are guided by the wavefunction.
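
To make the comparison concrete, a rough sketch using the standard textbook forms (illustrative only, not attributable to any party here): all three share the same linear dynamics, i\hbar\,\partial_t\psi = \hat{H}\psi. The collapse rule adds that, on obtaining outcome a for an observable with projector \hat{P}_a, the state updates as \psi \to \hat{P}_a\psi / \lVert\hat{P}_a\psi\rVert, with probability \lVert\hat{P}_a\psi\rVert^2. Bohm instead adds a guidance equation for the particle positions Q_k, namely \dot{Q}_k = (\hbar/m_k)\,\mathrm{Im}(\nabla_k\psi/\psi), evaluated at the actual configuration.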

Replies from: Quantumental
comment by Quantumental · 2012-08-04T11:16:02.864Z · LW(p) · GW(p)

But MWI doesn't get the probability calculation right.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-08-04T11:22:02.756Z · LW(p) · GW(p)

Good point. I'd say that it doesn't have any calculation of the probability at all. But some people hope that the probabilities can be derived from just MW. If they achieve this, then it will be the simplest theory. But if they need extra hypotheses, then it will gain complexity, and may well come out worse than Bohm.
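
For reference, the rule that would need to be derived is the Born rule in its standard form: for a state \psi = \sum_i c_i |a_i\rangle expanded in the eigenstates of the measured observable, the probability of outcome a_i is P(a_i) = |\langle a_i|\psi\rangle|^2 = |c_i|^2. The statement itself is interpretation-neutral; the open question is whether it falls out of the bare many-worlds formalism without extra postulates.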

comment by [deleted] · 2012-08-04T10:48:48.292Z · LW(p) · GW(p)

Mitchell_Porter makes the case, but reading him makes my brain shut down for lack of coherence. I assume Yudkowsky doesn't favor Bohm because it requires non-local hidden variables. Non-local theories are unexpected in physics, and local hidden variables don't exist.

Replies from: Mitchell_Porter, nshepperd, Quantumental
comment by Mitchell_Porter · 2012-08-04T11:29:26.911Z · LW(p) · GW(p)

There's more to Bohmian mechanics than you may think. There are actually observables whose expectation values correspond to the Bohmian trajectories - "weak-valued" position measurements. This is a mathematical fact that ought to mean something, but I don't know what. Also, you can eliminate the pilot wave from Bohmian mechanics. If you start with a particular choice of universal wavefunction, that will be equivalent to adding a particular nonlocal potential to a classical equation of motion. That nonlocal potential might be the product of a holographic transformation away from the true fundamental degrees of freedom, or it might approximate the nonlocal correlations induced by Planck-scale time loops in the spacetime manifold.
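
(For reference, the weak value of an observable \hat{A}, with preselected state |\psi\rangle and postselection on \langle\phi|, is the standard expression A_w = \langle\phi|\hat{A}|\psi\rangle / \langle\phi|\psi\rangle; the claim above is that suitably chosen weak-valued position and velocity measurements trace out the Bohmian trajectories.)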

I have never found the time or the energy to do my own quantum sequence, so perhaps it's my fault if I'm hard to understand. The impression of incoherence may also arise from the fact that I put out lots and lots of ideas. There are a lot of possibilities. But if you want an overall opinion on QM which you wish to be able to attribute to me, here it is:

The explanation of QM might be "Bohm", "Everett", "Cramer", "'t Hooft", or "None of the Above". By "Bohm", I don't just mean Bohmian mechanics, I mean lines of investigation arising from Bohmian mechanics, like the ones I just described. The other names in quotes should be interpreted similarly.

Also, we are not in a position to say that one of these five approaches is clearly favored over the others. The first four are all lines of investigation with fundamental questions unanswered and fundamental issues unresolved, and yet they are the best specific proposals that we have (unless I missed one). It's reasonable for a person to prefer one type of model, but in the current state of knowledge any such preference is necessarily superficial, and very liable to be changed by new information.

Replies from: None
comment by [deleted] · 2012-08-04T11:53:53.862Z · LW(p) · GW(p)

I have never found the time or the energy to do my own quantum sequence, so perhaps it's my fault if I'm hard to understand. The impression of incoherence may also arise from the fact that I put out lots and lots of ideas.

Well, that's understandable. Not everyone has all the free time in the world to write sequences.

It's reasonable for a person to prefer one type of model, but in the current state of knowledge any such preference is necessarily superficial, and very liable to be changed by new information.

That's exactly what I wish Yudkowsky's argument in the QM sequence would have been, but for some reason he felt the need to forever crush the hopes and dreams of the people clinging to alternative interpretations, in a highly insulting manner. What ever happened to leaving a line of retreat?

Replies from: None
comment by [deleted] · 2012-08-04T13:21:42.263Z · LW(p) · GW(p)

That's exactly what I wish Yudkowsky's argument in the QM sequence would have been, but for some reason he felt the need to forever crush the hopes and dreams of the people clinging to alternative interpretations, in a highly insulting manner. What ever happened to leaving a line of retreat?

Something feels very wrong about this sentence... I get a nagging feeling that you believe he has a valid argument, but that he should have been nicer to people who irrationally cling to alternative interpretations by such irrational means as nitpicking unimportant details.

Meanwhile, a coherent hypothesis: the guy does not know QM, thinks he knows QM, and proceeds to explain whatever simplistic nonsense he thinks constitutes an understanding of QM, getting almost everything wrong. Then he interprets the discrepancies in his favour and feels incredibly intelligent.

Replies from: None
comment by [deleted] · 2012-08-04T13:31:02.737Z · LW(p) · GW(p)

Something feels very wrong about this sentence... I get a nagging feeling that you believe he has a valid argument, but that he should have been nicer to people who irrationally cling to alternative interpretations by such irrational means as nitpicking unimportant details.

I believe he has a valid argument for a substantially weaker claim of the sort I described earlier.

He "should have been nice to people" (without qualification) by not trying to draw (without a shred of credible evidence) a link between rationality/intelligence/g-factor and (even a justified amount of) MWI-skepticism. It's hard to imagine a worse way to immediately put your audience on the defensive. It's all there in the manual.

Replies from: None
comment by [deleted] · 2012-08-04T14:30:53.747Z · LW(p) · GW(p)

I believe he has a valid argument for a substantially weaker claim of the sort I described earlier.

Why do you think so? Quantum mechanics is complicated, and questions of what is a 'better' theory are very subtle.

On the other hand, figuring out what claim your arguments actually support is rather simple. You have an argument which gets elementary facts wrong, gets terminology wrong, and gets the very claim wrong. All the easy stuff is wrong. You still believe that it gets some of the hard stuff right. Why?

It's all there in the manual.

He should have left a line of retreat for himself.

Replies from: None
comment by [deleted] · 2012-08-04T22:08:24.252Z · LW(p) · GW(p)

Why do you think so?

For the reasons outlined above. Occam's razor + locality.

On the other hand, figuring out what claim your arguments actually support is rather simple.

My argument is distinct from Yudkowsky's in that our claims are radically different. If you disagree that MWI is more probable than straw-Copenhagen, I'd like to know why.

You have an argument which gets elementary facts wrong, gets terminology wrong, and gets the very claim wrong. All the easy stuff is wrong. You still believe that it gets some of the hard stuff right. Why?

None of the "easy stuff" is pertinent to the argument that MWI is more probable than straw-Copenhagen. For example, the interferometer calculation is neither used as evidence that MWI is local, nor that MWI is less complicated. The calculation is independent of any interpretation, after all.

Replies from: None
comment by [deleted] · 2012-08-05T05:14:21.264Z · LW(p) · GW(p)

For the reasons outlined above. Occam's razor + locality.

If I stand a needle on its tip on a glass plate, will the needle remain standing indefinitely? No, it probably won't, even though by Occam's razor zero deviation from vertical is (arguably) more probable than any other specific deviation from vertical. MWI seems to require exact linearity, and QM and QFT don't do gravity, i.e. they are approximate. Linearity is a first-order approximation to nearly anything.
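
To spell out what "exact linearity" means here (a purely illustrative sketch, not a proposed theory): the Schrödinger equation i\hbar\,\partial_t\psi = \hat{H}\psi is linear, so if \psi_1 and \psi_2 are solutions then so is any superposition \alpha\psi_1 + \beta\psi_2. A hypothetical correction term, say i\hbar\,\partial_t\psi = \hat{H}\psi + \epsilon F(\psi) with nonlinear F, would break that property for any \epsilon \neq 0, however small.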

None of the "easy stuff" is pertinent to the argument that MWI is more probable than straw-Copenhagen.

Intelligence and careful thinking --> getting easy stuff right and maybe (very rarely) getting hard stuff right.

Lack of intelligence and/or careful thinking --> getting easy stuff wrong and getting hard stuff certainly wrong.

What is straw Copenhagen anyway? Objective collapse caused by consciousness? Copenhagen is not objective collapse. It is a theory for predicting and modelling the observations. With MWI you still need to single out one observer, because something happens in the real world that does single out one observer, as anyone can readily attest, and so there's no actual difference here in any of the math; it's only a difference in how you look at that math.

edit: ghahahahaha, wait, you literally think it has higher probability? (I saw another of Yudkowsky's comments where he said something about his better understanding of probability theory.) Well, here's the bullet: the probability of our reality being quantum mechanics or quantum field theory, within platonic space, is 0 (basically, vanishingly small, predicated on the experiments confirming general relativity all failing), because gravity exists and works the way it does, and that's not part of QFT. 0 times anything is still 0. (That doesn't mean the probability of alternate realities is 0, if there can be such a thing.)

comment by nshepperd · 2012-08-04T14:37:52.251Z · LW(p) · GW(p)

From the one comment on Bohm I can find, it seems that he actually dislikes Bohm because the particles are "epiphenomena" to the pilot wave. Meaning the particles don't actually do anything except follow the pilot wave, and it's actually the pilot wave itself that does all the computation (of minds and hence observers).

comment by Quantumental · 2012-08-04T11:04:43.951Z · LW(p) · GW(p)

Lack of coherence? Where? It's true that Bohm requires non-local HVs, but there is a non-local flavor to MWI too. The states are still non-local. Local HVs do exist. Gerard 't Hooft is working on this as we speak: http://arxiv.org/find/all/1/all:+AND+hooft+AND+gerard+t/0/1/0/all/0/1

Replies from: None
comment by [deleted] · 2012-08-04T11:23:33.802Z · LW(p) · GW(p)

Lack of coherence? where?

His monologue on color, for instance.

The states are still non-local.

This assumption is made by every other interpretation of quantum mechanics I know. On the other hand, I'm not a physicist; I'm clearly not up to date on things.

Local HV's do exist.

I meant the classical HV theories that were ruled out by actual experiments detecting violations of Bell's inequality.
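
(For reference, the constraint those experiments test is the CHSH form of Bell's inequality: with correlation functions E for two measurement settings on each side, S = E(a,b) - E(a,b') + E(a',b) + E(a',b') satisfies |S| \le 2 in any local hidden-variable theory, while quantum mechanics allows, and experiments observe, values up to 2\sqrt{2}.)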

Replies from: Quantumental
comment by Quantumental · 2012-08-04T12:12:33.484Z · LW(p) · GW(p)

His monologue on color, for instance.

Well, you didn't link to his view of qualia, but to a comment where he explains why MWI is not the "winner" or "preferred" interpretation, as EY claimed so confidently in his series on QM. You might disagree with him on his stance on qualia (I do too), but it would be a logical fallacy to conclude that therefore all his other opinions are also incoherent.

Mitchell Porter's view on qualia is not nonsense either; it is highly controversial and speculative, no doubt. But his motivation is sound: he thinks that it is the only way to avoid some sort of dualism, so in that sense his view is even more reductionist than that of Dennett etc. He is also in good company with people like David Deutsch (another famous many-worlds fundamentalist).

As for local hidden variables, obviously a local HV theory that has been ruled out doesn't exist ;p but you claimed there were none in existence at all.

Replies from: None
comment by [deleted] · 2012-08-04T12:35:25.311Z · LW(p) · GW(p)

Maybe I should have said "reading him in general..."

The rest is quibbling over definitions.

comment by [deleted] · 2012-08-04T10:03:07.910Z · LW(p) · GW(p)

The muckiness surrounding the interferometer is well-known; in fact, the PSE question was written by a LWer.

Ahh, that would explain why a non-answer is the accepted one. Was this non-answer written by a LWer, by any chance?

The rest of the sequence is no better. A photon going a particular way is not really a 'configuration' with a complex amplitude; I am not even sure the guy actually understands how an interferometer works, or what happens if the length of one path is modified a little. Someone who can't correctly solve even the simplest QM problem has no business 'explaining' anything about QM by retelling popular books.

The conclusion isn't "MWI is true."

You clearly do not have enough g-factor:

If you know all these things and you still can't tell that MWI is obviously true

And yes, it is also well-known that this quote is not Yudkowsky at his most charming.

When people are at their most charming, they are pretending.

Try not to conflate him with either rationalism or the community (which are also distinct things!).

Rationalism? I see. This would explain why the community would take that seriously instead of pointing and laughing.

Replies from: None
comment by [deleted] · 2012-08-04T10:43:12.956Z · LW(p) · GW(p)

Ahh, that would explain why a non-answer is the accepted one. Was this non-answer written by a LWer, by any chance?

Scott Aaronson is not, as far as I know, a LWer, though he did an interview with Yudkowsky once on QM. He disagrees with him pretty substantially.

Someone who can't correctly solve even the simplest QM problem has no business 'explaining' anything about QM by retelling popular books.

I don't disagree?

The conclusion isn't "MWI is true."

You clearly do not have enough g-factor:

It's possible.

Rationalism? I see. This would explain why the community would take that seriously instead of pointing and laughing.

No, the other rationalism, rationality. My bad.

Replies from: None
comment by [deleted] · 2012-08-04T10:59:54.420Z · LW(p) · GW(p)

It's possible.

Was a joke.

No, the other rationalism, rationality. My bad.

Are you sure? I've seen posts speaking of 'aspiring rationalists'. It does make sense that rationalists would see themselves as rational, but it does not make sense for rational people to call themselves rationalists. Rationalism is sort of like a belief in the power of rationality. It's to rationality as communism is to community.

Believing that alternate realities must exist if they are part of a theory, even if the same theory says that those worlds are unreachable: that's rationalism. Speaking of which, even the slightest non-linearity is incompatible with many worlds.

comment by red75 · 2010-08-13T06:23:05.899Z · LW(p) · GW(p)

There is something that makes me feel confused about MWI. Maybe it is its reliance on the anthropic principle (the probability of finding myself in a world whose recorded history has probability P according to Born's rule must be equal to P). This condition depends on every existing universe, not just on ours. Thus it seems that to justify Born's rule we must leave observable evidence behind and trail along after unprovable philosophical ideas.

comment by XiXiDu · 2010-08-12T17:42:27.199Z · LW(p) · GW(p)

If the map is not the territory, how is it that maths and logic can assign to some worlds, or infinitely many, but not to others, the attribute of being real and instantiated beyond the model?

Can selections for "realness" be justified or explained logically? Is it a matter of deduction?

Say, what makes something a real thing rather than an abstract matter? When does the map become the territory?

As far as I know, the uniformity or the different states of the universe are not claimed to be factual beyond the observable merely because we can deduce that it is logical to think one way or the other, are they?

You say that once I read the relevant sequence I will understand. That might be so, as acknowledged in my post. But given my partial knowledge I'm skeptical that it is sound enough to justify taking seriously ideas such as the claim that you can influence causally disconnected parts of the multiverse, or that it would be a good idea to toss a quantum coin to make key decisions, and so on.

It was, however, just one example to illustrate some further speculations that are based on the interpretation of an incomplete view of the world.

I think the idea of a Mathematical Universe is very appealing. Yet I'm not going to base decisions on this idea, not given the current state of knowledge.

If you claim that sufficient logical consistency can be used as a foundation for further argumentation about the real world, I'll take note. I have to think about it. People say, "no conclusions can be drawn if you fail to derive a contradiction". They also say you have to make strong, falsifiable predictions. Further, people say that picking a given interpretation of the world has to have practical value to be capable of being differentiated from that which isn't useful.

It seems that you mainly study and observe nature with an emphasis on an exclusively abstract rather than an empirical approach. As I said, I do not claim there's anything wrong with that. But so far I have my doubts.

MWI may be a logically correct and reasonable deduction. But does it provide guidance or increase confidence? Is it justified to take it for granted, to perceive it as part of the territory, simply because it makes sense? It is not a necessity.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-12T17:55:39.918Z · LW(p) · GW(p)

Your skepticism is aimed in the wrong direction and MWI does not say what you think it does. Read the sequence. When you're done you'll have a much better gut sense of the gap between SIAI and Charles Stross.