Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

post by MichaelGR · 2009-11-11T03:00:39.093Z · score: 17 (19 votes) · LW · GW · Legacy · 701 comments


As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.

The Rules

1) One question per comment (to allow voting to carry more information about people's preferences).

2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).

3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

4) If you reference certain things that are online in your question, provide a link.

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]

Suggestions

Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting sort out the wheat from the chaff.

It's okay to attempt humor (but good luck, it's a tough crowd).

If a discussion breaks out about a question (e.g. to ask for clarifications) and the original poster decides to modify the question, the top-level comment should be updated with the modified question (make it easy to find your question; don't leave the latest version buried in a long thread).

Update: Eliezer's video answers to 30 questions from this thread can be found here.

Comments sorted by top scores.

comment by MichaelGR · 2009-11-11T20:55:54.604Z · score: 37 (41 votes) · LW(p) · GW(p)

What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?

By that I mean things like: Do you have a reading schedule (x number of hours daily, etc.)? Do you follow the news, or do you try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (e.g. reading certain magazines or books, watching films) to focus on what is more important? Etc.

comment by Liron · 2009-11-13T04:18:43.326Z · score: 1 (5 votes) · LW(p) · GW(p)

Ditto regarding your food diet?

comment by komponisto · 2009-11-11T05:39:28.847Z · score: 33 (43 votes) · LW(p) · GW(p)

During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:

I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.

Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.

I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

comment by [deleted] · 2009-11-11T16:35:27.416Z · score: 15 (15 votes) · LW(p) · GW(p)

Somewhat related, AGI is such an enormously difficult topic, requiring intimate familiarity with so many different fields, that the vast majority of people (and I count myself among them) simply aren't able to contribute effectively to it.

I'd be interested to know if he thinks there are any singularity-related issues that are important to be worked on, but somewhat more accessible, that are more in need of contributions of man-hours rather than genius-level intellect. Is the only way a person of more modest talents can contribute through donations?

comment by MichaelVassar · 2009-11-13T05:25:10.090Z · score: 7 (7 votes) · LW(p) · GW(p)

Depends on what you mean by 'modest'. Probably 60% of Americans could contribute donations without serious lifestyle consequences and 20% of Americans could contribute over a quarter of their incomes without serious consequences. By contrast, only 10% have the reasoning abilities to identify the best large category of causes and only 1% have the reasoning abilities to identify the very best cause without a large luck element being involved. By working hard, most of that 1% could also become part of the affluent 20% of Americans who could make large donations. A similar fraction might be able to help via fund-raising efforts and by aggressively networking and sharing the contacts that they are able to build with us. A smaller but only modestly smaller fraction might be able to meaningfully contribute to SIAI's effort via seriously committed offers of volunteer effort, but definitely not via volunteer efforts uncoupled to serious commitment. Almost no-one can do FAI, or even recognize talent at a level capable of doing FAI, but if more people were doing the easier things it wouldn't be nearly so hard to find people who could do the core work.

comment by Vladimir_Nesov · 2009-11-13T17:11:42.090Z · score: 12 (12 votes) · LW(p) · GW(p)

Almost no-one can do FAI, or even recognize talent at a level capable of doing FAI, but if more people were doing the easier things it wouldn't be nearly so hard to find people who could do the core work.

SIAI keeps supporting this attitude, yet I don't believe it, at least not as presented. A good mathematician who comes to understand the problem statement and succeeds in weeding out the standard misunderstandings can contribute as well as anyone at this stage, where we have no field. Creating a programme that would allow people to reliably get to work on the problem requires material to build upon, and there is still nothing, of any quality. Systematizing the connections with existing science, and trying to locate the place of the FAI project within it, only requires expertise in that science and an understanding of the FAI problem statement. At the very least, a dozen steps in, we'll have a useful curriculum to get folks up to speed in the right direction.

comment by MichaelVassar · 2009-11-15T16:36:42.980Z · score: 4 (4 votes) · LW(p) · GW(p)

We have some experience with this, but if you want to discuss the details more with myself or some other SIAI people we will be happy to do so, and probably to have you come visit some time and get some experience with what we do. You may have ways of contributing substantially, theoretically or managerially. We'll have to see.

comment by Curiouskid · 2011-11-20T18:15:56.770Z · score: 0 (0 votes) · LW(p) · GW(p)

Sorry for the bump, but...

Perhaps what we should do is take all our funds and create a school for AI researchers. This would be an investment for the long haul. You and I may not be super-geniuses, but I sure think I could raise some super-geniuses.

Also, I feel like this topic deserves more than one comment thread.

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-11T06:08:58.778Z · score: 9 (11 votes) · LW(p) · GW(p)

How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI?

I don't know about Eliezer, but I would be able to sacrifice quite a lot; perhaps all of art. If humanity spreads through the galaxy there will be way more than enough time for all that.

If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists?

It might. But their expected contribution would be much greater if they looked at the problem to see how they could contribute most effectively.

And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?

No one's saying that you're not allowed to do something. Just that it's suboptimal under their utility function, and perhaps yours.

My guess is that you overestimate how much of an altruist you are. Consider that lives can be saved using traditional methods for well under $1000. That means every time you spend $1000 on other things, your revealed preference is that having that stuff is more important to you than saving the life of another human being. If you're upset upon hearing this fact, then you're suffering from cognitive dissonance. If you're a true altruist, you'll be happy after hearing this fact, because you'll realize that you can be scoring much better on your utility function than you are currently. (Assuming for the moment that happiness corresponds with opportunities to better satisfy your utility function, which seems to be fairly common in humans.)

Regardless of whether you're a true altruist, it makes sense to spend a chunk of your time on entertainment and relaxation to spend the rest more effectively.

By the way, I would be interested to hear Eliezer address this topic in his video.

comment by MichaelVassar · 2009-11-13T05:18:29.500Z · score: 6 (6 votes) · LW(p) · GW(p)

It's a free country. You are allowed to do a lot, but it can only be optimal to do one thing.

comment by komponisto · 2009-11-14T07:58:31.914Z · score: 4 (4 votes) · LW(p) · GW(p)

Not necessarily; the maximum value of a function may be attained at more than one point of its domain.

(Also, my use of the word "allowed" is clearly rhetorical/figurative. Obviously it's not illegal to work on things other than AI, and I don't interpret you folks as saying it should be.)
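(A minimal numeric sketch of the point about non-unique maximizers, using a made-up example function rather than anything from the thread: f(x) = -(x² - 1)² attains its maximum at both x = -1 and x = 1.)

```python
# A function can attain its maximum at more than one point of its domain.
def f(x):
    return -(x**2 - 1)**2  # peaks (value 0) at both x = -1 and x = 1

xs = [i / 100 for i in range(-200, 201)]  # grid over [-2, 2]
best = max(f(x) for x in xs)
argmaxes = [x for x in xs if abs(f(x) - best) < 1e-12]
print(argmaxes)  # both -1.0 and 1.0 achieve the maximum
```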

comment by MichaelVassar · 2009-11-15T16:41:33.740Z · score: 3 (3 votes) · LW(p) · GW(p)

Point taken. Also, of course, given a variety of human personalities and situations, the optimal activity for a given person can vary quite a bit. I never advocate asceticism.

comment by gwern · 2011-11-06T00:37:54.400Z · score: 1 (3 votes) · LW(p) · GW(p)

If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction?

String theorists are at least somewhat plausible, but Michelangelo and Beethoven? Do you have any evidence that they actually helped the sciences progress? I've asked the same question in the past, and have not been able to adduce any evidence worth a damn. (Science fiction, at least, can try to justify itself as good propaganda.)

comment by komponisto · 2011-11-06T00:47:53.378Z · score: 0 (0 votes) · LW(p) · GW(p)

String theorists are at least somewhat plausible, but Michelangelo and Beethoven? Do you have any evidence that they actually helped the sciences progress?

No, and I didn't claim they did. It was intended to be a separate question ("...string theorists? And [then, on another note], what of Michelangelo, Beethoven,....?").

comment by CannibalSmith · 2009-11-11T10:02:24.192Z · score: 0 (0 votes) · LW(p) · GW(p)

I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.

Aren't these the same thing?

comment by MichaelGR · 2009-11-11T20:49:01.936Z · score: 32 (36 votes) · LW(p) · GW(p)

Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):

http://yudkowsky.net/obsolete/bookshelf.html

Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).

comment by alyssavance · 2009-11-13T00:23:49.188Z · score: 3 (3 votes) · LW(p) · GW(p)

See the Singularity Institute Reading List for some ideas.

comment by [deleted] · 2009-11-11T20:21:00.341Z · score: 31 (41 votes) · LW(p) · GW(p)

What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?

comment by James_Miller · 2009-11-11T05:26:46.659Z · score: 31 (57 votes) · LW(p) · GW(p)

Could you please tell us a little about your brain? For example: what is your IQ, at what age did you learn calculus, do you use cognitive-enhancing drugs or brain-fitness programs, are you neurotypical, and why didn't you attend school?

comment by roland · 2009-11-13T00:53:49.662Z · score: 0 (2 votes) · LW(p) · GW(p)

Great question and I think it ties in well with my one about autodidacticism:

http://lesswrong.com/lw/1f4/less_wrong_qa_with_eliezer_yudkowsky_ask_your/1942

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-11T06:55:09.612Z · score: 25 (35 votes) · LW(p) · GW(p)

What's your advice for Less Wrong readers who want to help save the human race?

comment by Blueberry · 2009-11-12T19:48:31.340Z · score: 24 (26 votes) · LW(p) · GW(p)

I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)

comment by anonym · 2009-11-15T22:15:46.217Z · score: 0 (0 votes) · LW(p) · GW(p)

If this is addressed, I'd like to know why the change of mind from the stated position. Was there a flaw in the original argument, did you get new evidence that caused updating of probabilities that changed the original conclusion, etc.?

comment by haig · 2009-11-11T22:19:48.046Z · score: 23 (27 votes) · LW(p) · GW(p)

Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.

comment by timtyler · 2009-11-15T10:16:37.247Z · score: 1 (1 votes) · LW(p) · GW(p)

See Eli on video, 50 seconds in:

http://www.youtube.com/watch?v=0A9pGhwQbS0

comment by Wei_Dai · 2009-11-11T21:23:46.522Z · score: 23 (38 votes) · LW(p) · GW(p)

Why do you have a strong interest in anime, and how has it affected your thinking?

comment by PeteG · 2009-11-13T21:09:36.399Z · score: -1 (1 votes) · LW(p) · GW(p)

I think the answer to the first part of the question is as simple a reason as it is for most bright/abnormal people: it offers subcultural values.

comment by roland · 2009-11-12T21:24:45.850Z · score: 22 (24 votes) · LW(p) · GW(p)

Autodidacticism

Eliezer, first congratulations for having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you, did you have any tutor/mentor? Or did you just read/learn what was interesting and kept going for more, one field of knowledge opening pathways to the next one, etc...?

EDIT: Of course I would be interested in the details, like what books did you read when, and what further interests did they spark, etc... Tell us a little story. ;)

comment by RobinHanson · 2009-11-11T23:45:10.519Z · score: 22 (34 votes) · LW(p) · GW(p)

Why exactly do majorities of academic experts in the fields that overlap your FAI topic, who have considered your main arguments, not agree with your main claims?

comment by MichaelVassar · 2009-11-13T05:08:13.757Z · score: 9 (11 votes) · LW(p) · GW(p)

I also disagree with the premise of Robin's claim. I think that when our claims are worked out precisely and clearly, a majority agree with them, and a supermajority of those who agree with Robin's own claims (new future growth mode, get frozen...) agree with ours.

Still, among those who take roughly Robin's position, I would say that an ideological attraction to libertarianism is BY FAR the main reason for disagreement. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

comment by timtyler · 2009-11-14T00:35:45.639Z · score: 4 (10 votes) · LW(p) · GW(p)

Which claims? The SIAI collectively seems to think some pretty strange things to me. Many are to do with the scale of the risk facing the world.

Since this is part of its funding pitch, one obvious explanation seems to be that the organisation is attempting to create an atmosphere of fear - in the hope of generating funding.

We see a similar phenomenon surrounding global warming alarmism - those promoting the idea of there being a large risk have a big overlap with those who benefit from related funding.

comment by MichaelVassar · 2009-11-15T16:39:09.756Z · score: 7 (7 votes) · LW(p) · GW(p)

You would expect serious people who believed in a large risk to seek involvement, which would lead the leadership of any such group to benefit from funding.

Just how many people do you imagine are getting rich off of AGI concerns? Or have any expectation of doing so? Or are even "getting middle class" off of them?

comment by timtyler · 2009-11-15T16:55:09.259Z · score: 0 (6 votes) · LW(p) · GW(p)

Some DOOM peddlers manage to get by. Probably most of them are currently in Hollywood, the finance world, or ecology. Machine intelligence is only barely on the radar at the moment - but that doesn't mean it will stay that way.

I don't necessarily mean to suggest that these people are all motivated by money. Some of them may really want to SAVE THE WORLD. However, that usually means spreading the word - and convincing others that the DOOM is real and imminent - since the world must first be at risk in order for there to be SALVATION.

Look at Wayne Bent (aka Michael Travesser), for example:

"The End of The World Cult Pt.1"

The END OF THE WORLD - but it seems to have more to do with sex than money.

comment by Zack_M_Davis · 2009-11-24T00:05:18.573Z · score: 1 (1 votes) · LW(p) · GW(p)

an ideological attraction to libertarianism is BY FAR the main reason for disagreement [with singleton strategies/hypotheses]. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.

Any practical advice on how to overcome this failure mode, if and only if it is in fact a failure mode?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T00:28:51.213Z · score: 8 (14 votes) · LW(p) · GW(p)

Who are we talking about besides you?

comment by RobinHanson · 2009-11-12T02:30:07.657Z · score: 2 (12 votes) · LW(p) · GW(p)

I'd consider important overlapping academic fields to be AI and long term economic growth; I base my claim about academic expert opinion on my informal sampling of such folks. I would of course welcome a more formal sampling.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T04:59:44.978Z · score: 9 (9 votes) · LW(p) · GW(p)

Who's considered my main arguments besides you?

comment by RobinHanson · 2009-11-12T13:27:50.871Z · score: 2 (4 votes) · LW(p) · GW(p)

I'm not comfortable publicly naming names based on informal conversations. These folks vary of course in how much of the details of your arguments they understand, and of course you could always set your bar high enough to get any particular number of folks who have understood "enough."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T14:46:53.830Z · score: 5 (5 votes) · LW(p) · GW(p)

Okay. I don't know any academic besides you who's even tried to consider the arguments. And Nick Bostrom et al., of course, but AFAIK Bostrom doesn't particularly disagree with me. I cannot refute what I have not encountered, I do set my bar high, and I have no particular reason to believe that any other academics are in the game. I could try to explain why you disagree with me and Bostrom doesn't.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-16T01:38:57.680Z · score: 4 (4 votes) · LW(p) · GW(p)

Actually, on further recollection, Steve Omohundro and Peter Cheeseman would probably count as academics who know the arguments. Mostly I've talked to them about FAI stuff, so I'm actually having trouble recalling whether they have any particular disagreement with me about hard takeoff.

I think that w/r/t Cheeseman, I had to talk to Cheeseman for a while before he started to appreciate the potential speed of a FOOM, as opposed to just the FOOM itself which he considered obvious. I think I tried to describe your position to Cheeseman and Cheeseman thought it was pretty implausible, but of course that could just be the fact that I was describing it from outside - that counts for nothing in my view until you talk to Cheeseman, otherwise he's not familiar enough with your arguments. (See, the part about setting the bar high works both ways - I can be just as fast to write off the fact of someone else's disagreement with you, if they're insufficiently familiar with your arguments.)

I'm not sure I can recall what Omohundro thinks - he might be intermediate between yourself and myself...? I'm not sure how much I've talked hard takeoff per se with Omohundro, but he's certainly in the game.

comment by MichaelVassar · 2009-11-16T02:57:22.705Z · score: 2 (2 votes) · LW(p) · GW(p)

I think Steve Omohundro disagrees about the degree to which takeoff is likely to be centralized, due to what I think are the libertarian impulses I mentioned earlier.

comment by RobinHanson · 2009-11-12T18:36:25.769Z · score: 4 (8 votes) · LW(p) · GW(p)

Surely some on the recent AAAI Presidential Panel on Long-Term AI Futures considered your arguments to at least some degree. You could discuss why these folks disagree with you.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T20:23:03.125Z · score: 4 (8 votes) · LW(p) · GW(p)

Haven't particularly looked at that - I think some other SIAI people have. I expect they'd have told me if there was any analysis that counts as serious by our standards, or anything new by our standards.

If someone hasn't read my arguments specifically, then I feel very little need to explain why they might disagree with me. I find myself hardly inclined to suspect that they have reinvented the same arguments. I could talk about that, I suppose - "Why don't other people in your field invent the same arguments you do?"

comment by RobinHanson · 2009-11-12T21:23:06.840Z · score: 16 (18 votes) · LW(p) · GW(p)

You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact that they are world-class AI experts and disagree with your conclusions?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T21:35:34.175Z · score: 6 (6 votes) · LW(p) · GW(p)

I'm sorry, but I don't really have a proper lesson plan laid out - although the ongoing work of organizing LW into sequences may certainly help with that. It would depend on the specific issue and what I thought needed to be understood about that issue.

If they drew my feedback cycle of an intelligence explosion and then drew a different feedback cycle and explained why it fit the historical evidence equally well, then I would certainly sit up and take notice. It wouldn't matter if they'd done it on their own or by reading my stuff.

E.g. Chalmers at the Singularity Summit is an example of an outsider who wandered in and started doing a modular analysis of the issues, who would certainly have earned the right of serious consideration and serious reply if, counterfactually, he had reached different conclusions about takeoff... with respect to only the parts that he gave a modular analysis of, though, not necessarily e.g. the statement that de novo AI is unlikely because no one will understand intelligence. If Chalmers did a modular analysis of that part, it wasn't clear from the presentation.

Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

comment by Steve_Rayhawk · 2009-11-16T04:31:19.146Z · score: 15 (15 votes) · LW(p) · GW(p)

Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.

You, or somebody anyway, could still offer a modular causal model of that snap consideration and snap judgment. For example:

  • What cached models of the planning abilities of future machine intelligences did the academics have available when they made the snap judgment?

    • What fraction of the academics are aware of any current published AI architectures which could reliably reason over plans at the level of abstraction of "implement a proxy intelligence"?

      • What fraction of them have thought carefully about when there might be future practical AI architectures that could do this?
      • What fraction use a process for answering questions about the category distinctions that will be known in the future, which uses as an unconscious default the category distinctions known in the present?
  • What false claims have been made about AI in the past? What decision rules might academics have learned to use, to protect themselves from losing prestige for being associated with false claims like those?

    • How much do those decision rules refer to modular causal analyses of the object of a claim and of the fact that people are making the claim?

    • How much do those decision rules refer to intuitions about other peoples' states of mind and social category memberships?

    • How much do those decision rules refer to intuitions about other peoples' intuitive decision rules?

    • Historically, have peoples' own abilities to do modular causal analyses been good enough to make them reliably safe from losing prestige by being associated with false claims? What fraction of academics have the intuitive impression that their own ability to do analysis isn't good enough to make them reliably safe from losing prestige by association with a false claim, so that they can only be safe if they use intuitions about the states of mind and social category memberships of a claim's proponents?

  • Of those AI academics who believe that a machine intelligence could exist which could outmaneuver humans if motivated, how do they think about the possible motivations of a machine intelligence?

    • What fraction of them think about AI design in terms of a formalism such as approximating optimal sequential decision theory under a utility function? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?

    • What fraction of them think about AI design in terms of intuitively justified decision heuristics? How easy would it be for them to substitute anthropomorphic intuitions for correct technical predictions?

    • What fraction of them understand enough evolutionary psychology and/or cognitive psychology to recognize moral evaluations as algorithmically caused, so that they can reject the default intuitive explanation of the cause of moral evaluations, which seems to be: "there are intrinsic moral qualities attached to objects in the world, and when any intelligent agent apprehends an object with a moral quality, the action of the moral quality on the agent's intelligence is to cause the agent to experience a moral evaluation"?

      • What combination of specializations in AI, moral philosophy, and cognitive psychology would an academic need to have, to be an "expert" whose disagreements about the material causes and implementation of moral evaluations were significant?
  • On the question of takeoff speeds, what fraction of the AI academics have a good enough intuitive understanding of decision theory to see that a point estimate or default scenario should not be substituted for a marginal posterior distribution, even in a situation where it would be socially costly in the default scenario to take actions which prevent large losses in one tail of the distribution?

    • What fraction recognized that they had a prior belief distribution over possible takeoff speeds at all?

    • What fraction understood that, regarding a variable which is underconstrained by evidence, "other people would disapprove of my belief distribution about this variable" is not an indicator for "my belief distribution about this variable puts mass in the wrong places", except insofar as there is some causal reason to expect that disapproval would be somehow correlated with falsehood?

  • What other popular concerns have academics historically needed to dismiss? What decision rules have they learned to decide whether they need to dismiss a current popular concern?

    • After they make a decision to dismiss a popular concern, what kinds of causal explanations of the existence of that concern do they make reference to, when arguing to other people that they should agree with the decision?

    • How much do the true decision rules depend on those causal explanations?

    • How much do the decision rules depend on intuitions about the concerned peoples' states of mind and social category memberships?

    • How much do the causal explanations use concepts which are implicitly defined by reference to hidden intuitions about states of mind and social category memberships?

      • Can these intuitively defined concepts carry the full weight of the causal explanations they are used to support, or does their power to cause agreement come from their ability to activate social intuitions?
  • Which people are the AI academics aware of, who have argued that intelligence explosion is a concern? What social categories do they intuit those people to be members of? What arguments are they aware of? What states of mind do they intuit those arguments to be indicators of (e.g. as in intuitively computed separating equilibria)?

    • What people and arguments did the AI academics think the other AI academics were thinking of? If only a few of the academics were thinking of people and arguments who they intuited to come from credible social categories and rational states of mind, would they have been able to communicate this to the others?
  • When the AI academics made the decision to dismiss concern about an intelligence explosion, what kinds of causal explanations of the existence of that concern did they intuitively expect that they would be able make reference to, if they later had to argue to other people that they should agree with the decision?

It is also possible to model the social process in the panel:

  • Are there factors that might make a joint statement by a panel of AI academics reflect different conclusions than they would have individually reached if they had been outsiders to the AI profession with the same AI expertise?

    • One salient consideration would be that agreeing with popular concern about an intelligence explosion would result in their funding being cut. What effects would this have had?

      • Would it have affected the order in which they became consciously aware of lines of argument that might make an intelligence explosion seem less or more deserving of concern?
      • Would it have made them associate concern about an intelligence explosion with unpopularity? In doubtful situations, unpopularity of an argument is one cue for its unjustifiability. Would they associate unpopularity with logical unjustifiability, and then lose willingness to support logically justifiable lines of argument that made an intelligence explosion seem deserving of concern, just as if they had felt those lines of argument to be logically unjustifiable, but without any actual unjustifiability?
    • There are social norms to justify taking prestige away from people who push a claim that an argument is justifiable while knowing that other prestigious people think the argument to be a marker of a non-credible social category or state of mind. How would this have affected the discussion?

    • If there were panelists who personally thought the intelligence explosion argument was plausible, and they were in the minority, would the authors of the panel's report mention it?

      • Would the authors know about it?
      • If the authors knew about it, would they feel any justification or need to mention those opinions in the report, given that the other panelists may have imposed on the authors an implicit social obligation to not write a report that would "unfairly" associate them with anything they think will cause them to lose prestige?
      • If panelists in such a minority knew that the report would not mention their opinions, would they feel any need or justification to object, given the existence of that same implicit social obligation?
  • How good are groups of people at making judgments about arguments that unprecedented things will have grave consequences?

    • How common is a reflective, causal understanding of the intuitions people use when judging popular concerns and arguments about unprecedented things, of the sort that would be needed to compute conditional probabilities like "Pr( we would decide that concern is not justified | we made our decision according to intuition X ∧ concern was justified )"?

    • How common is the ability to communicate the epistemic implications of that understanding in real-time while a discussion is happening, to keep it from going wrong?
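The conditional probability in the questions above can be made concrete with a toy joint-frequency table. Every number below is invented purely to illustrate the computation, not to estimate anything about real panels:

```python
from fractions import Fraction

# (decided "concern not justified", used intuition X, concern was justified) -> count
# All counts are hypothetical.
cases = [
    (True,  True,  True,  2),
    (True,  True,  False, 8),
    (False, True,  True,  3),
    (False, True,  False, 1),
    (True,  False, True,  1),
    (False, False, True,  4),
]

def pr(pred, given):
    """Conditional relative frequency Pr(pred | given) over the table."""
    num = sum(n for *row, n in cases if given(*row) and pred(*row))
    den = sum(n for *row, n in cases if given(*row))
    return Fraction(num, den)

# Pr( we decide concern is not justified | we used intuition X AND concern was justified )
p = pr(lambda dismiss, x, justified: dismiss,
       lambda dismiss, x, justified: x and justified)
```

With these made-up counts, `p` comes out to 2/5: conditioning on the rows where intuition X was used and the concern really was justified, the decision procedure still dismissed the concern in two of five cases.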

comment by RobinHanson · 2009-11-12T23:34:47.369Z · score: 2 (4 votes) · LW(p) · GW(p)

From that AAAI panel's interim report:

Participants reviewed prior writings and thinking about the possibility of an “intelligence explosion” where computers one day begin designing computers that are more intelligent than themselves. ... There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems.

Given this description it is hard to imagine they haven't considered the prospect of the rate of intelligence growth depending on the level of system intelligence.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T23:43:48.900Z · score: 9 (9 votes) · LW(p) · GW(p)

I don't see any arguments listed, though. I know there's at least some smart people on that panel (e.g. Horvitz) so I could be wrong, but experience has taught me to be pessimistic, and pessimism says I have no particular evidence that anyone started breaking the problem down into modular pieces, as opposed to, say, stating a few snap perceptual judgments at each other and then moving on.

Why are you so optimistic about this sort of thing, Robin? You're usually more cynical about what would happen when academics have no status incentive to get it right and every status incentive to dismiss the silly. We both have experience with novices encountering these problems and running straight into the brick wall of policy proposals without even trying a modular analysis. Why on this one occasion do you turn around and suppose that the case we don't know will be so unlike the cases we do know?

comment by RobinHanson · 2009-11-13T04:06:20.720Z · score: 3 (7 votes) · LW(p) · GW(p)

The point is that this is a subtle and central issue to engage, so I was suggesting that you consider describing your analysis more explicitly. Is there never any point in listening to academics on "silly" topics? Is there never any point in listening to academics who haven't explicitly told you how they've broken a problem down into modular parts, no matter how distinguished they are on related topics? Are people who have a modular-parts analysis always a more reliable source than people who don't, no matter what their other features? And so on.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-13T04:56:35.811Z · score: 12 (12 votes) · LW(p) · GW(p)

I confess, it doesn't seem to me on a gut level like this is either healthy to obsess about, or productive to obsess about. It seems more like worrying that my status isn't high enough to do work, than actually working. If someone shows up with amazing analyses I haven't considered, I can just listen to the analyses then. Why spend time trying to guess who might have a hidden deep analysis I haven't seen, when the prior is so much in favor of them having made a snap judgment, and it's not clear why if they've got a deep analysis they wouldn't just present it?

I think that on a purely pragmatic level there's a lot to be said for the Traditional Rationalist concept of demanding that Authority show its work, even if it doesn't seem like what ideal Bayesians would do.

comment by RobinHanson · 2009-11-13T13:37:43.674Z · score: 3 (5 votes) · LW(p) · GW(p)

You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it. It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement. If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-13T14:17:43.250Z · score: 5 (7 votes) · LW(p) · GW(p)

You have in the past thought my research on the rationality of disagreement to be interesting and spent a fair bit of time discussing it.

...and I've held and stated this same position pretty much from the beginning, no? E.g. http://lesswrong.com/lw/gr/the_modesty_argument/

It seemed healthy to me for you to compare your far view of disagreement in the abstract to the near view of your own particular disagreement.

I was under the impression that my verbal analysis matched and cleverly excused my concrete behavior.

If it makes sense in general for rational agents to take very seriously the fact that others disagree, why does it make little sense for you in particular to do so?

Well (and I'm pretty sure this matches what I've been saying to you over the last few years) just because two ideal Bayesians would do something naturally, doesn't mean you can singlehandedly come closer to Bayesianism by imitating the surface behavior of agreement. I'm not sure that doing elaborate analyses to excuse your disagreement helps much either. http://wiki.lesswrong.com/wiki/Occam%27s_Imaginary_Razor

I'd spend much more time worrying about the implications of Aumann agreement, if I thought the other party actually knew my arguments, took my arguments very seriously, took the Aumann problem seriously with respect to me in particular, and in general had a sense of immense gravitas about the possible consequences of abusing their power to make me update. This begins to approach the conditions for actually doing what ideal Bayesians do. Michael Vassar and I have practiced Aumann agreement a bit; I've never literally done the probability exchange-and-update thing with anyone else. (Edit: Actually on recollection I played this game a couple of times at a Less Wrong meetup.)

No such condition is remotely approached by disagreeing with the AAAI panel, so I don't think I could, in real life, improve my epistemic position by pretending that they were ideal Bayesians who were fully informed about my reasons and yet disagreed with me anyway (in which case I ought to just update to match their estimates, rather than coming up with elaborate excuses to disagree with them!)
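The "probability exchange-and-update" game mentioned above can be sketched as a toy agreement dialogue in the style of Geanakoplos and Polemarchakis: two agents with a common prior and private information partitions announce posteriors back and forth, and each announcement rules out states until the announcements agree. The states, partitions, and event below are all invented for illustration:

```python
from fractions import Fraction

STATES = [0, 1, 2, 3]          # possible worlds, uniform common prior
EVENT = {0}                    # the proposition both agents estimate
A_PART = [{0, 1}, {2, 3}]      # what agent A can privately distinguish
B_PART = [{0, 2}, {1, 3}]      # what agent B can privately distinguish

def cell(partition, state):
    """The partition cell (private information) containing `state`."""
    return next(c for c in partition if state in c)

def posterior(possible):
    """Pr(EVENT) given that the true state lies in `possible`."""
    return Fraction(len(possible & EVENT), len(possible))

def dialogue(true_state, max_rounds=10):
    ck = set(STATES)           # states not yet ruled out by public announcements
    history = []
    for _ in range(max_rounds):
        for part in (A_PART, B_PART):
            p = posterior(cell(part, true_state) & ck)
            history.append(p)
            # Everyone learns: the true state must be one in which this
            # agent would have announced exactly p.
            ck = {s for s in ck if posterior(cell(part, s) & ck) == p}
        if len(history) >= 4 and history[-1] == history[-2] == history[-3]:
            return history
    return history
```

With true state 0, A first announces 1/2 (its cell {0, 1} cannot separate states 0 and 1); that announcement rules out states 2 and 3, so B's cell {0, 2} now pins down state 0 and B announces 1; A updates and agrees. The point of the toy model is that agreement comes from each side actually conditioning on what the other's announcement implies, which is the condition Eliezer is saying the AAAI-panel case does not remotely approach.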

comment by RobinHanson · 2009-11-13T18:12:24.588Z · score: 2 (4 votes) · LW(p) · GW(p)

Well I disagree with you strongly that there is no point in considering the views of others if you are not sure they know the details of your arguments, or of the disagreement literature, or that those others are "rational." Guess I should elaborate my view in a separate post.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-13T21:13:11.820Z · score: 9 (11 votes) · LW(p) · GW(p)

There's certainly always a point in considering specific arguments. But to be nervous merely that someone else has a different view, one ought, generally speaking, to suspect (a) that they know something you do not or at least (b) that you know no more than them (or far more rarely (c) that you are in a situation of mutual Aumann awareness and equal mutual respect for one another's meta-rationality). As far as I'm concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It's not that I have specific reason to distrust these people - the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.

I don't actually spend time obsessing about that sort of thing except when you're asking me those sorts of questions - putting so much energy into self-justification and excuses would just slow me down if Horvitz showed up tomorrow with an argument I hadn't considered.

I'll say again: I think there's much to be said for the Traditional Rationalist ideal of - once you're at least inside a science and have enough expertise to evaluate the arguments - paying attention only when people lay out their arguments on the table, rather than trying to guess authority (or arguing over who's most meta-rational). That's not saying "there's no point in considering the views of others". It's focusing your energy on the object level, where your thought time is most likely to be productive.

Is it that awful to say: "Show me your reasons"? Think of the prior probabilities!

comment by RobinHanson · 2009-11-14T00:42:20.565Z · score: 14 (20 votes) · LW(p) · GW(p)

You admit you have not done much to make it easy to show them your reasons. You have not written up your key arguments in a compact form using standard style and terminology and submitted it to standard journals. You also admit you have not contacted any of them to ask them for their reasons; Horvitz would have to "show up" for you to listen to him. This looks a lot like a status pissing contest; the obvious interpretation: Since you think you are better than them, you won't ask them for their reasons, and you won't make it easy for them to understand your reasons, as that would admit they are higher status. They will instead have to acknowledge your higher status by coming to you and doing things your way. And of course they won't since by ordinary standards they have higher status. So you ensure there will be no conversation, and with no conversation you can invoke your "traditional" (non-Bayesian) rationality standard to declare you have no need to consider their opinions.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-14T01:19:06.527Z · score: 6 (6 votes) · LW(p) · GW(p)

You're being slightly silly. I simply don't expect them to pay any attention to me one way or another. As it stands, if e.g. Horvitz showed up and asked questions, I'd immediately direct him to http://singinst.org/AIRisk.pdf (the chapter I did for Bostrom), and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries. Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.

FYI, I've talked with Peter Norvig a bit. He was mostly interested in the CEV / FAI-spec part of the problem - I don't think we discussed hard takeoffs much per se. I certainly wouldn't have brushed him off if he'd started asking!

comment by mormon2 · 2009-11-15T03:50:15.337Z · score: 8 (16 votes) · LW(p) · GW(p)

"and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries."

Why? No one in the academic community would spend that much time reading all that blog material for answers that would be best given in concise form in a published academic paper. So why not spend the time? Unless you think you are that much of an expert in the field as to not need the academic community. If that be the case, where are your publications, where are your credentials, where is the proof of this expertise (expert being a term that is applied based on actual knowledge and accomplishments)?

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it."

Why? If you expect to make FAI you will undoubtedly need help from people in the academic community; unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has a 0 probability of creating FAI first. That being said, your best hope is to convince others that the cause is worthwhile, and if that be the case you are looking at the professional and academic AI community.

I am sorry, I prefer to be blunt... that way there is no mistaking meanings.

comment by Alicorn · 2009-11-15T14:38:32.377Z · score: 1 (16 votes) · LW(p) · GW(p)

I think you would admit that in its current form SIAI has a 0 probability of creating FAI first.

No.

comment by wedrifid · 2009-11-15T10:50:09.961Z · score: 0 (0 votes) · LW(p) · GW(p)

Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.

Why? If you expect to make FAI you will undoubtedly need help from people in the academic community; unless you plan to do this whole project by yourself or with purely amateur help. ...

That 'probably not even then' part is significant.

That being said your best hope is to convince others that the cause is worthwhile and if that be the case you are looking at the professional and academic AI community.

Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too.

comment by mormon2 · 2009-11-15T16:59:01.656Z · score: 3 (11 votes) · LW(p) · GW(p)

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.

Why? If you expect to make FAI you will undoubtedly need help from people in the academic community; unless you plan to do this whole project by yourself or with purely amateur help. ..."

"That 'probably not even then' part is significant."

My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when you're speaking from an organization like SIAI which does not have the people or money to get the job done. In fact SIAI doesn't have enough money to pay for the computing hardware to make human level AI.

"Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too."

If he doesn't agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI, or not having enough time in the day to solve the problems associated with constructing it. Not to mention that when you close yourself off to outside influence that much, you often end up with ideas riddled with problems that someone on the outside would have pointed out had they looked at the idea.

If you have never taken an idea from idea to product this can be hard to understand.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-15T20:49:45.880Z · score: 8 (10 votes) · LW(p) · GW(p)

In fact SIAI doesn't have enough money to pay for the computing hardware to make human level AI.

And so the utter difference of working assumptions is revealed.

comment by CannibalSmith · 2009-11-17T12:42:59.187Z · score: 1 (5 votes) · LW(p) · GW(p)

Back of a napkin math:
10^4 neurons per supercomputer
10^11 neurons per brain
10^7 supercomputers per brain
1.3*10^6 dollars per supercomputer
1.3*10^13 dollars per brain

Edit: Disclaimer: Edit: NOT!
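Taking the figures above at face value (they are the commenter's rough assumptions from a video, not established estimates), the scaling does work out as stated:

```python
# All inputs below are the commenter's assumptions, not established figures.
neurons_per_supercomputer = 10**4
neurons_per_brain = 10**11
dollars_per_supercomputer = 1.3e6   # ~$1.3M per machine, as assumed above

supercomputers_per_brain = neurons_per_brain // neurons_per_supercomputer
dollars_per_brain = supercomputers_per_brain * dollars_per_supercomputer

print(supercomputers_per_brain)     # 10^7 machines
print(f"{dollars_per_brain:.1e}")   # ~1.3 * 10^13 dollars
```

The arithmetic itself is sound; the dispute in the surrounding comments is entirely about the input assumptions, especially the neurons-per-machine figure.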

comment by wedrifid · 2009-11-17T14:19:22.438Z · score: 3 (5 votes) · LW(p) · GW(p)

10^4 neurons per supercomputer
10^11 neurons per brain

Another difference in working assumptions.

comment by CannibalSmith · 2009-11-17T16:43:45.551Z · score: 0 (0 votes) · LW(p) · GW(p)

It's a fact stated by the guy in the video, not an assumption.

comment by wedrifid · 2009-11-17T18:47:29.911Z · score: 0 (0 votes) · LW(p) · GW(p)

No need to disclaim, your figures are sound enough and I took them as a demonstration of another rather significant difference between the assumptions of Eliezer and mormon2 (or mormon2's sources).

comment by wedrifid · 2009-11-15T20:06:11.647Z · score: 0 (0 votes) · LW(p) · GW(p)

If you have never taken an idea from idea to product this can be hard to understand.

I have. I've also failed to take other ideas to products and so agree with that part of your position, just not the argument as it relates to context.

comment by timtyler · 2009-11-14T19:02:23.051Z · score: -2 (2 votes) · LW(p) · GW(p)

If there is a status pissing contest, they started it! ;-)

"On the latter, some panelists believe that the AAAI study was held amidst a perception of urgency by non-experts (e.g., a book and a forthcoming movie titled “The Singularity is Near”), and focus of attention, expectation, and concern growing among the general population."

Agree with them that there is much scaremongering going on in the field - but disagree with them about there not being much chance of an intelligence explosion.

comment by timtyler · 2009-11-14T19:23:29.027Z · score: 0 (4 votes) · LW(p) · GW(p)

I wondered why these folk got so much press. My guess is that the media probably thought the "AAAI Presidential Panel on Long-Term AI Futures" had something to do with a report commissioned indirectly for the country's president. In fact it just refers to the president of their organisation. A media-savvy move - though it probably represents deliberately misleading information.

comment by RobinHanson · 2009-11-14T00:29:45.398Z · score: 2 (6 votes) · LW(p) · GW(p)

Almost surely world class academic AI experts do "know something you do not" about the future possibilities of AI. To declare that topic to be your field and them to be "outside" it seems hubris of the first order.

comment by wedrifid · 2009-11-14T01:28:37.931Z · score: 4 (4 votes) · LW(p) · GW(p)

This conversation seems to be following what appears to me to be a trend in Robin and Eliezer's (observable by me) disagreements. This is one reason I would be fascinated if Eliezer did cover Robin's initial question, informed somewhat by Eliezer's interpretation.

I recall Eliezer mentioning in a tangential comment that he disagreed with Robin not just on the particular conclusion but more foundationally on how much weight should be given to certain types of evidence or argument. (Excuse my paraphrase from hazy memory, my googling failed me.) This is a difference that extends far beyond just R & E and Eliezer has hinted at insights that intrigue me.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-14T01:30:05.744Z · score: 2 (4 votes) · LW(p) · GW(p)

Almost surely world class academic AI experts do "know something you do not" about the future possibilities of AI.

Does Daphne Koller know more than I do about the future possibilities of object-oriented Bayes Nets? Almost certainly. And, um... there are various complicated ways I could put this... but, well, so what?

(No disrespect intended to Koller, and OOBN/probabilistic relational models/lifted Bayes/etcetera is on my short-list of things to study next.)

comment by RobinHanson · 2009-11-14T14:23:34.166Z · score: 3 (9 votes) · LW(p) · GW(p)

How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not? Surely they considered the fact that people like you think you know a lot about this topic, and they nevertheless thought it reasonable to form a disagreeing opinion based on the attention they had given it. You want to dismiss their judgment as "snap" because they did not spend many hours considering your arguments, but they clearly disagree with that assessment of how much consideration your arguments deserve. Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians, even when such authorities do not review contrarian arguments in as much detail as contrarians think best. You want to dismiss the rationality of disagreement literature as irrelevant because you don't think those you disagree with are rational, but they probably don't think you are rational either, and you are probably both right. But the same essential logic also says that irrational people should take seriously the fact that other irrational people disagree with them.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-14T17:38:18.578Z · score: 13 (13 votes) · LW(p) · GW(p)

How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not?

You changed what I said into a bizarre absolute. I am assuming no such thing. I am just assuming that, by default, world class experts on various topics in narrow AI, produce their beliefs about the Singularity by snap judgment rather than detailed modular analysis. This is a prior and hence an unstable probability - as soon as I see contrary evidence, as soon as I see the actual analysis, it gets revoked.

but they clearly disagree with that assessment of how much consideration your arguments deserve.

They have no such disagreement. They have no idea I exist. On the rare occasion when I encounter such a person who is physically aware of my existence, we often manage to have interesting though brief conversations despite their having read none of my stuff.

Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians

Science only works when you use it; scientific authority derives from science. If you've got Lord Kelvin running around saying that you can't have flying machines because it's ridiculous, the problem isn't that he's an Authority, the problem is that he's running on naked snap intuitive judgments of absurdity and the Wright Brothers are using actual math. The asymmetry in this case is not that pronounced but, even so, the default unstable prior is to assume that experts in narrow AI algorithms are not doing anything more complicated than this to produce their judgments about the probability of intelligence explosion - both the ones with negative affect who say "Never, you religious fear-monger!" and the ones with positive affect who say "Yes! Soon! And they shall do no wrong!" As soon as I see actual analysis, then we can talk about the actual analysis!

Added: In this field, what happens by default is that people talk complete nonsense. I spent my first years talking complete nonsense. In a situation like that, everyone has to show their work! Or at least show that they did some work! No exceptions!

comment by RobinHanson · 2009-11-14T18:35:48.158Z · score: 6 (7 votes) · LW(p) · GW(p)

This conversation is probably reaching diminishing returns, so let me sum up. I propose that it would be instructive to you and many others if you would discuss what your dispute looks like from an outside view - what uninformed neutral but intelligent and rational observers should conclude about this topic from the features of this dispute they can observe from the outside. Such features include the various credentials of each party, and the effort he or she has spent on the topic and on engaging the other parties. If you think that a reasonable outsider viewer would have far less confidence in your conclusions than you do, then you must think that you possess hidden info, such as that your arguments are in fact far more persuasive than one could reasonably expect knowing only the outside features of the situation. Then you might ask why the usual sorts of clues that tend to leak out about argument persuasiveness have failed to do so in this case.

comment by komponisto · 2009-11-15T04:05:31.384Z · score: 9 (9 votes) · LW(p) · GW(p)

Robin, why do most academic experts (e.g. in biology) disagree with you (and Eliezer) about cryonics? Perhaps a few have detailed theories on why it's hopeless, or simply have higher priorities than maximizing their expected survival time; but mostly it seems they've simply never given it much consideration, either because they're entirely unaware of it or assume it's some kind of sci-fi cult practice, and they don't take cult practices seriously as a rule. But clearly people in this situation can be wrong, as you yourself believe in this instance.

Similarly, I think most of the apparent "disagreement" about the Singularity is nothing more than unawareness of Yudkowsky and his arguments. As far as I can tell, academics who come into contact with him tend to take him seriously, and their disagreements are limited to matters of detail, such as how fast AI is approaching (decades vs. centuries) and the exact form it will take (uploads/enhancement vs. de novo). They mainly agree that SIAI's work is worth doing by somebody. Examples include yourself, Scott Aaronson, and David Chalmers.

comment by RobinHanson · 2009-11-15T14:07:50.568Z · score: 3 (3 votes) · LW(p) · GW(p)

Cryonics is also a good case to analyze what an outsider should think, given what they can see. But of course "they laughed at Galileo too" is hardly a strong argument for contrarian views. Yes sometimes contrarians are right - the key question is how outside observers, or self-doubting insiders, can tell when contrarians are right.

comment by komponisto · 2009-11-15T15:39:48.123Z · score: 3 (3 votes) · LW(p) · GW(p)

Outsiders can tell when contrarians are right by assessing their arguments, once they've decided the contrarians are worth listening to. This in turn can be ascertained through the usual means, such as association with credentialed or otherwise high-status folks. So for instance, you are affiliated with a respectable institution, Bostrom with an even more respectable institution, and the fact that EY was co-blogging at Overcoming Bias thus implied that if your and Bostrom's arguments were worth listening to, so were his. (This is more or less my own story; and I started reading Overcoming Bias because it appeared on Scott Aaronson's blogroll.)

Hence it seems that Yudkowsky's affiliations are already strong enough to signal competence to those academics interested in the subjects he deals with, in which case we should expect to see detailed, inside-view analyses from insiders who disagree. In the absence of that, we have to conclude that insiders either agree or are simply unaware -- and the latter, if I understand correctly, is a problem whose solution falls more under the responsibility of people like Vassar than of Yudkowsky.

comment by RobinHanson · 2009-11-15T16:33:56.594Z · score: 1 (3 votes) · LW(p) · GW(p)

No, for most people it is infeasible to evaluate who is right by working through the details of the arguments. The fact that Eliezer wrote on a blog affiliated with Oxford is surely not enough to lead one to expect detailed rebuttal analyses from academics who disagree with him.

comment by komponisto · 2009-11-15T17:14:06.619Z · score: 4 (4 votes) · LW(p) · GW(p)

Well, for most people on most topics it is infeasible to evaluate who is right, period. At the end of the day, some effort is usually required to obtain reliable information. Even surveys of expert opinion may be difficult to conduct if the field is narrow and non-"traditional". As for whatever few specialists there may be in Singularity issues, I think you expect too little of them if you don't think Eliezer currently has enough status to expect rebuttals.

comment by timtyler · 2009-11-15T16:09:47.524Z · score: -5 (11 votes) · LW(p) · GW(p)

I figure cryonics serves mainly a signaling role.

The message probably reads something like:

"I'm a geek, I think I am really important - and I'm loaded".

comment by loqi · 2009-11-15T23:40:33.477Z · score: 3 (3 votes) · LW(p) · GW(p)

So, despite the fact that we (human phenotypes) are endowed with a powerful self-preservation instinct, you find a signaling explanation more likely than a straightforward application of self-preservation to a person's concept of their own mind?

Given your peculiar preferences which value your DNA more highly than your brain, it's tempting to chalk your absurd hypothesis up to the typical mind fallacy. But I think you're well aware of the difference in values responsible for the split between your assessment of cryonics and Eliezer's or Robin's.

So I think you're value sniping. I think your comment was made in bad faith as a roundabout way of signaling your values in a context where explicitly mentioning them would be seen as inappropriate or off-topic. I don't know what your motivation would be - did mention of cryonics remind you that many here do not share your values, and thereby motivate you to plant your flag in the discussion?

Please feel free to provide evidence to the contrary by explaining in more detail why self-preservation is an unlikely motivation for cryonics relative to signaling.

comment by timtyler · 2009-11-16T08:27:21.491Z · score: 0 (4 votes) · LW(p) · GW(p)

An over-generalisation of self-preservation instincts certainly seems to be part of it.

On the other hand, one of my interests is in the spread of ideas. Without cryonic medallions, cryonic bracelets, cryonic advertising and cryonic preachers there wouldn't be any cryonics movement. There seems to be a "show your friends how much you care - freeze them!" dynamic.

I have a similar theory about the pyramids. Not so much a real voyage to the afterlife, but a means of reinforcing the pecking order in everyone's minds.

I am contrasting this signaling perspective with Robin's views - in part because I am aware that he is sympathetic to signaling theories in other contexts.

I do think signaling is an important part of cryonics - but I was probably rash to attempt to quantify the effect. I don't pretend to have any good way of measuring its overall contribution relative to other factors.

comment by timtyler · 2009-11-14T22:11:21.252Z · score: 1 (3 votes) · LW(p) · GW(p)

Re: "They have no idea I exist."

Are you sure? You may be underestimating your own fame in this instance.

comment by Thomas · 2009-11-14T16:29:19.257Z · score: 0 (2 votes) · LW(p) · GW(p)

Say that "Yudkowsky has no real clue" and that those "AI academics are right"? Then he's just another crackpot among many "well educated" ones - no big thing, hardly worth mentioning.

But say this crackpot is of the Edisonian kind! In that case it is something well worth mentioning.

Important enough to at least discuss with him ON THE TOPICS, and not on some meta level. Meta-level discussion is sometimes (as here, IMHO) just a waste of time.

comment by MichaelBishop · 2009-11-14T19:25:26.942Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm not sure what you mean by your first few sentences. But I disagree with your last two. It is good for me to see this debate.

comment by Thomas · 2009-11-14T19:31:58.772Z · score: -3 (3 votes) · LW(p) · GW(p)

You get zilch in the case where Hanson (and the Academia) is right - zero in the informative sense. You get quite a bit if Yudkowsky is right.

Verifying Hanson (& the so-called Academia) means no new information.

comment by Vladimir_Nesov · 2009-11-14T20:24:30.045Z · score: 2 (2 votes) · LW(p) · GW(p)

You get zilch in the case where Hanson (and the Academia) is right - zero in the informative sense. You get quite a bit if Yudkowsky is right.

Verifying Hanson (& the so-called Academia) means no new information.

You get not needing to run around trying to save the world and a pony if Hanson is right. It's not useful to be deluded.

comment by Thomas · 2009-11-14T20:45:10.692Z · score: -4 (4 votes) · LW(p) · GW(p)

IFF he is right. Probably he is, and nothing dramatic will happen. Probably Edison and the Wright brothers and many others were also wrong, looking from their historic perspective.

Note that if the official Academia (Hanson's guys) is correct, the amount of new information is exactly zero. Nothing interesting to talk about or expect.

I am after the cases where they were and are wrong. I am after the new context that misfits like Yudkowsky or Edison might provide and "the Hansons" can't - by definition.

comment by Vladimir_Nesov · 2009-11-14T20:55:23.591Z · score: -1 (3 votes) · LW(p) · GW(p)

You are confused.

P.S. I don't want to get into a discussion; I believe it's better to just state a judgment even if without a useful explanation than to not state a judgment at all; however it may be perceived negatively for those obscure status-related reasons (see "offense" on the wiki), so I predict that this comment would've been downvoted without this addendum, and not impossibly still will be with it. This "P.S." is dedicated to all the relevant occasions, not this one alone where I could've used the time to actually address the topic.

comment by Zack_M_Davis · 2009-11-14T23:29:12.491Z · score: 1 (5 votes) · LW(p) · GW(p)

I believe it's better to just state a judgment even if without a useful explanation than to not state a judgment at all

And a simple downvote isn't sufficient?

comment by RobinZ · 2009-11-14T23:47:11.785Z · score: 1 (1 votes) · LW(p) · GW(p)

If I'm reading the conversation correctly, Vladimir Nesov is indicating with his remark that he is no longer interested in continuing. If he were not a major participant in the thread, a downvote would be appropriate, but as a major participant, more is required of him.

comment by wedrifid · 2009-11-15T00:24:22.187Z · score: -1 (1 votes) · LW(p) · GW(p)

and not impossibly still will be with it

I downvoted it. If it included two quotes from the context followed by 'You are confused' I would have upvoted it.

comment by Vladimir_Nesov · 2009-11-15T00:29:05.851Z · score: 0 (0 votes) · LW(p) · GW(p)

I initially tried that, but simple citation didn't make the point any more rigorous.

comment by Thomas · 2009-11-14T21:05:58.156Z · score: -2 (2 votes) · LW(p) · GW(p)

I am not confused, and I don't want a discussion either. I only state that new content and a new context usually come from outside the kosher set of views.

Of course, most of the outsiders are deluded poor devils. Yet they are almost the only source of new information.

comment by timtyler · 2009-11-13T22:31:47.869Z · score: 2 (2 votes) · LW(p) · GW(p)

From that AAAI document:

"The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes".

"Radical outcomes" seems like a case of avoiding refutation by being vague. However, IMO, they will need to establish the truth of their assertion before they will get very far there. Good luck to them with that.

comment by timtyler · 2009-11-14T22:41:57.133Z · score: 1 (5 votes) · LW(p) · GW(p)

The AAAI interim report is really too vague to bother much with - but I suspect they are making another error.

Many robot enthusiasts pour scorn on the idea that robots will take over the world. How To Survive A Robot Uprising is a classic presentation on this theme. A hostile takeover is a pretty unrealistic scenario - but these folk often ignore the possibility of a rapid robot rise from within society driven by mutual love. One day robots will be smart, sexy, powerful and cool - and then we will want to become more like them.

comment by timtyler · 2009-11-13T22:27:39.575Z · score: 2 (4 votes) · LW(p) · GW(p)

Why will we witness an intelligence explosion? Because nature has a long history of favouring big creatures with brains - and because the capability to satisfy those selection pressures has finally arrived.

The process has already resulted in enormous data centres the size of factories. As I have said:

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

comment by timtyler · 2009-11-14T21:12:38.539Z · score: 1 (3 votes) · LW(p) · GW(p)

Thinking about it, they are probably criticising the (genuinely dud) idea that an intelligence explosion will start suddenly at some future point with the invention of some machine - rather than gradually arising out of the growth of today's already self-improving economies and industries.

comment by Thomas · 2009-11-14T21:24:19.299Z · score: 0 (0 votes) · LW(p) · GW(p)

I think both ways are still open: an intelligence explosion from a self-improving economy, and an intelligence explosion from a fringe of this process.

comment by timtyler · 2009-11-14T21:34:49.059Z · score: -1 (1 votes) · LW(p) · GW(p)

Did you take a look at my "The Intelligence Explosion Is Happening Now"? The point is surely a matter of history - not futurism.

comment by Thomas · 2009-11-14T21:39:55.089Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes and you are right.

comment by timtyler · 2009-11-14T21:59:32.564Z · score: 0 (0 votes) · LW(p) · GW(p)

Great - thanks for your effort and input.

comment by timtyler · 2009-11-13T22:22:36.530Z · score: 1 (1 votes) · LW(p) · GW(p)

Re: "overall skepticism about the prospect of an intelligence explosion"...?

My guess would be that they are unfamiliar with the issues or haven't thought things through very much. Or maybe they don't have a good understanding of what that concept refers to (see link to my explanation - hopefully above). They present no useful analysis of the point - so it is hard to know why they think what they think.

The AAAI seems to have come to these issues publicly later than many in the community - and it seems to be playing catch-up.

comment by timtyler · 2009-11-14T18:30:24.099Z · score: 0 (0 votes) · LW(p) · GW(p)

It looks as though we will be hearing more from these folk soon:

"Futurists' report reviews dangers of smart robots"

http://www.pittsburghlive.com/x/pittsburghtrib/news/pittsburgh/s_651056.html

It doesn't sound much better than the first time around.

comment by JoshuaFox · 2009-12-16T08:14:11.622Z · score: 1 (1 votes) · LW(p) · GW(p)

It must be possible to engage at least some of these people in some sort of conversation to understand their positions, whether a public dialog as with Scott Aaronson or in private.

comment by timtyler · 2009-11-14T00:26:06.218Z · score: 1 (1 votes) · LW(p) · GW(p)

Chalmers reached some odd conclusions. Probably not as odd as his material about zombies and consciousness, though.

comment by timtyler · 2009-11-14T22:32:02.283Z · score: 2 (12 votes) · LW(p) · GW(p)

I have a theory about why there is disagreement with the AAAI panel:

The DOOM peddlers gather funding from hapless innocents - who hope to SAVE THE WORLD - while the academics see them as bringing their field into disrepute, by unjustifiably linking their field to existential risk, with their irresponsible scaremongering about THE END OF THE WORLD AS WE KNOW IT.

Naturally, the academics sense a threat to their funding - and so write papers to reassure the public that spending money on this stuff is Really Not As Bad As All That.

comment by Vladimir_Nesov · 2009-11-14T22:33:27.488Z · score: 0 (0 votes) · LW(p) · GW(p)

Naturally, the academics sense a threat to their funding - and so write papers to reassure the public that spending money on this stuff is really not as bad as all that.

They do?

comment by gwern · 2009-11-12T14:50:26.450Z · score: 0 (0 votes) · LW(p) · GW(p)

Does Robin Hanson fall under 'et al.'? I remember on OB that he attempted to link the 2 fields on at least 1 or 2 occasions, and those were some of the most striking examples of disagreements between you two.

comment by StefanPernar · 2009-11-12T11:02:57.652Z · score: 1 (5 votes) · LW(p) · GW(p)

Me - whether I qualify as an academic expert is another matter entirely, of course.

comment by ChrisHibbert · 2009-11-14T19:39:13.929Z · score: 2 (2 votes) · LW(p) · GW(p)

Do you disagree with Eliezer substantively? If so, can you summarize how much of his arguments you've analyzed, and where you reach different conclusions?

comment by StefanPernar · 2009-11-15T01:46:06.251Z · score: 0 (10 votes) · LW(p) · GW(p)

Yes - I disagree with Eliezer and have analyzed a fair bit of his writings, although the style in which they are presented and collected here is not exactly conducive to that effort. Feel free to search for my blog for a detailed analysis and a summary of core similarities and differences in our premises and conclusions.

comment by AdeleneDawner · 2009-11-15T02:00:21.458Z · score: 6 (6 votes) · LW(p) · GW(p)

Assuming I have the correct blog, these two are the only entries that mention Eliezer by name.

Edit: The second entry doesn't mention him, actually. It comes up in the search because his name is in a trackback.

comment by Furcas · 2009-11-15T02:16:27.853Z · score: 6 (10 votes) · LW(p) · GW(p)

From the second blog entry linked above:

Two fundamental assumptions:

A) Compassion is a universal value

B) It is a basic AI drive to avoid counterfeit utility

If A = true (as we have every reason to believe) and B = true (see Omohundro’s paper for details) then a transhuman AI would dismiss any utility function that contradicts A on the ground that it is recognized as counterfeit utility.

Heh.

comment by RobinZ · 2009-11-15T02:56:10.913Z · score: 7 (9 votes) · LW(p) · GW(p)

This quotation accurately summarizes the post as I understand it. (It's a short post.)

I think I speak for many people when I say that assumption A requires some evidence. It may be perfectly obvious, but a lot of perfectly obvious things aren't true, and it is only reasonable to ask for some justification.

comment by AdeleneDawner · 2009-11-15T03:10:08.744Z · score: 7 (7 votes) · LW(p) · GW(p)

... o.O

Compassion isn't even universal in the human mind-space. It's not even universal in the much smaller space of human minds that normal humans consider comprehensible. It's definitely not universal across mind-space in general.

The probable source of the confusion is discussed in the comments - Stefan's only talking about minds that've been subjected to the kind of evolutionary pressure that tends to produce compassion. He even says himself, "The argument is valid in a “soft takeoff” scenario, where there is a large pool of AIs interacting over an extended period of time. In a “hard takeoff” scenario, where few or only one AI establishes control in a rapid period of time, the dynamics described do not come into play. In that scenario, we simply get a paperclip maximizer."

comment by RobinZ · 2009-11-15T03:18:26.872Z · score: 4 (4 votes) · LW(p) · GW(p)

Ah - that's interesting. I hadn't read the comments. That changes the picture, but by making the result somewhat less relevant.

(Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".)

comment by AdeleneDawner · 2009-11-15T03:25:25.804Z · score: 2 (2 votes) · LW(p) · GW(p)

(Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".)

Ah. That's not how I usually see the word used.

comment by RobinZ · 2009-11-15T03:54:39.400Z · score: 1 (1 votes) · LW(p) · GW(p)

It's my descriptivist side playing up - my (I must admit) intuition is that when people say that some thesis is "obvious", they mean that they reached this bottom line by ... well, system 1 thinking. I don't assume it means that the obvious thesis is actually correct, or even universally obvious. (For example, it's obvious to me that human beings are evolved, but that's because it's a cached thought I have confidence in through system 2 thinking.)

Actually, come to think: I know you've made a habit of reinterpreting pronouncements of "good" and "evil" in some contexts - do you have some gut feeling for "obvious" that contradicts my read?

comment by AdeleneDawner · 2009-11-15T04:03:58.675Z · score: 3 (3 votes) · LW(p) · GW(p)

I generally take 'obvious' to mean 'follows from readily-available evidence or intuition, with little to no readily available evidence to contradict the idea'. The idea that compassion is universal fails on the second part of that. The definitions are close in practice, though, in that most peoples' intuitions tend to take readily available contradictions into account... I think.

ETA: Oh, and 'obviously false' seems to me to be a bit of a different concept, or at least differently relevant, given that it's easier to disprove something than to prove it. If someone says that something is obviously true, there's room for non-obvious proofs that it's not, but if something is obviously false (as 'compassion is universal' is), that's generally a firm conclusion.

comment by RobinZ · 2009-11-15T04:09:57.763Z · score: 2 (2 votes) · LW(p) · GW(p)

Yes, that makes sense - even if mine is a better description of usage, from the standpoint of someone categorizing beliefs, I imagine yours would be the better metric.

ETA: I'm not sure the caveat is required for "obviously false", for two reasons.

  1. Any substantive thesis (a category which includes most theses that are rejected as obviously false) requires less evidence to be roundly disconfirmed than it does to be confirmed.

  2. As Yvain demonstrated in Talking Snakes, well-confirmed theories can be "obviously false", by either of our definitions.

It's true that it usually takes less effort to disabuse someone of an obviously-true falsity than to convince them of an obviously-false truth, but I don't think you need a special theory to support that pattern.

comment by AdeleneDawner · 2009-11-15T21:50:05.898Z · score: 2 (2 votes) · LW(p) · GW(p)

I've been thinking about the obviously true/obviously false distinction some more, and I think I've figured out why they feel like two different concepts.

'Obviously', as I use it, is very close to 'observably'. It's obviously true that the sky is blue where I am right now, and obviously false that it's orange, because I can see it. It's obviously true that the sky is usually either blue, white, or grey during the day (post-sunrise, pre-sunset), because I've observed the sky many times during the day and seen those colors, and no others.

'Apparently', as I use it, is very similar to 'obviously', but refers to information inferred from observed facts. The sky is apparently never orange during the day, because I've personally observed the sky many times during the day and never seen it be that color. I understand that it can also be inferred from certain facts about the world (composition of the atmosphere and certain facts about how light behaves, I believe) that the sky will always appear blue on cloudless days, so that's also apparently true.

'Obviously false' covers situations where the theory makes a prediction that is observably inaccurate, as this one did. 'Apparently false' covers situations where the theory makes a prediction that appears to be inaccurate given all the available information, but some of the information that's available is questionable (I consider inferences questionable by default - if nothing else, it's possible for some relevant state to have been overlooked; what if the composition of the atmosphere were to change for some reason?) or otherwise doesn't completely rule out the possibility that the theory is true.

Important caveat: I do use those words interchangeably in conversation, partly because of the convention of avoiding repeating words too frequently and partly because it's just easier - if I were to try to be that accurate every time I communicated, I'd run out of spoons(pdf) and not be able to communicate at all. Also, having to parse someone else's words, when they aren't using the terms the same way I do, can lead to temporary confusion. But when I'm thinking, they are naturally separate.

comment by AdeleneDawner · 2009-11-15T04:27:05.726Z · score: 2 (2 votes) · LW(p) · GW(p)

Yes, that makes sense - even if mine is a better description of usage, from the standpoint of someone categorizing beliefs, I imagine yours would be the better metric.

It also has the advantage of making it clear that the chance that the statement is accurate is dependent on the competence of the person making the statement - people who are more intelligent and/or have more experience in the relevant domain will consider more, and more accurate, evidence to be readily available, and may have better intuitions, even if they are sticking to system 1 thought.

ETA: I'm not sure the caveat is required for "obviously false", for two reasons.

I suppose they don't need different wordings, but they do feel like different concepts to me. *shrug* (As I've mentioned elsewhere, I don't think in words. This is not an uncommon side-effect of that.)

comment by StefanPernar · 2009-11-16T13:06:30.851Z · score: 0 (0 votes) · LW(p) · GW(p)

From Robin: Incidentally, when I said, "it may be perfectly obvious", I meant that "some people, observing the statement, may evaluate it as true without performing any complex analysis".

I feel the other way around at the moment. Namely "some people, observing the statement, may evaluate it as false without performing any complex analysis"

comment by StefanPernar · 2009-11-16T12:41:18.934Z · score: -7 (7 votes) · LW(p) · GW(p)

"Compassion isn't even universal in the human mind-space. It's not even universal in the much smaller space of human minds that normal humans consider comprehensible. It's definitely not universal across mind-space in general."

Your argument is beside my original point, Adelene. My claim is that compassion is a universal rational moral value. Meaning any sufficiently rational mind will recognize it as such. The fact that not every human is in fact compassionate says more about their rationality (and of course their unwillingness to consider the arguments :-) ) than about that claim. That's why it is called ASPD - the D standing for 'disorder', it is an aberration, not helpful, not 'fit'. Surely the fact that some humans are born blind does not invalidate the fact that seeing people have an enormous advantage over the blind. Compassion is certainly less obvious, though - that is for sure.

Re "The argument is valid in a “soft takeoff” scenario, where there is a large pool of AIs interacting over an extended period of time. In a “hard takeoff” scenario, where few or only one AI establishes control in a rapid period of time, the dynamics described do not come into play. In that scenario, we simply get a paperclip maximizer." - that is from Kaj Sotala over at her live journal - not me.

comment by gwern · 2009-11-16T18:00:07.955Z · score: 6 (6 votes) · LW(p) · GW(p)

Meaning any sufficiently rational mind will recognize it as such. The fact that not every human is in fact compassionate says more about their rationality (and of course their unwillingness to consider the arguments :-) ) than about that claim. That's why it is called ASPD - the D standing for 'disorder', it is an aberration, not helpful, not 'fit'.

ASPD is only unfit in our current context. Would Stone Age psychiatrists have recognized it as an issue? Or as a positive trait, good for warring against other tribes and climbing the totem pole? In other situations, compassion is merely an extra expense. (As Thrasymachus asked thousands of years ago: how can a just man do better than an unjust man, when the unjust man can act justly when it is optimal and unjustly when that is optimal?)

Why would a recursively-improving AI which is single-mindedly pursuing an optimization goal permit other AIs to exist & threaten it? There is nothing they can offer it that it couldn't do itself. This is true in both slow and fast takeoffs; cooperation only makes sense if there is a low ceiling for AI capability so that there are utility-maximizing projects beyond an AI's ability to do alone then or in the future.

And 'sufficiently rational' is dangerous to throw around. It's a fully general argument: 'any sufficiently rational mind will recognize that Islam is the one true religion; that not every human is Muslim says more about their rationality than about the claims of Islam. That's why our Muslim psychiatrists call it UD - Unbeliever Disorder; it is an aberration, not helpful, not 'fit'. Surely the fact that some humans are born kafir doesn't invalidate the fact that Muslim people have a tremendous advantage over the kafir in the afterlife? 'There is one God and Muhammed is his prophet' is certainly less obvious than seeing being superior to blindness, though.'

comment by StefanPernar · 2009-11-16T16:29:37.912Z · score: -3 (3 votes) · LW(p) · GW(p)

The longer I stay around here the more I get the feeling that people vote comments down purely because they don't understand them not because they found a logical or factual error. I expect more from a site dedicated to rationality. This site is called 'less wrong', not 'less understood', 'less believed' or 'less conform'.

Tell me: in what way do you feel that Adelene's comment invalidated my claim?

comment by Zack_M_Davis · 2009-11-16T18:23:42.735Z · score: 5 (7 votes) · LW(p) · GW(p)

the more I get the feeling that people vote comments down purely because they don't understand them not because they found a logical or factual error

I can see why it would seem this way to you, but from our perspective, it just looks like people around here tend to have background knowledge that you don't. More specifically: most people here are moral anti-realists, and by rationality we only mean general methods for acquiring accurate world-models and achieving goals. When people with that kind of background are quick to reject claims like "Compassion is a universal moral value," it might superficially seem like they're being arbitrarily dismissive of unfamiliar claims, but we actually think we have strong reasons to rule out such claims. That is: the universe at its most basic level is described by physics, which makes no mention of morality, and it seems like our own moral sensibilities can be entirely explained by contingent evolutionary and cultural forces; therefore, claims about a universal morality are almost certainly false. There might be some sort of game-theoretic reason for agents to pursue the same strategy under some specific conditions---but that's really not the same thing as a universal moral value.

comment by timtyler · 2009-11-16T19:21:26.041Z · score: 1 (3 votes) · LW(p) · GW(p)

"Universal values" presumably refers to values the universe will converge on, once living systems have engulfed most of it.

If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct.

If rerunning the clock produces highly similar moralities, then the moral objectivists will be able to declare victory.

Gould would no doubt favour the first position - while Conway Morris would be on the side of the objectivists.

I expect that there's a lot of truth on the objectivist side - though perhaps contingency plays some non-trivial role.

The idea that physics makes no mention of morality seems totally and utterly irrelevant to me. Physics makes no mention of convection, diffusion-limited aggregation, or fractal drainage patterns either - yet those things are all universal.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-16T20:01:48.022Z · score: 2 (2 votes) · LW(p) · GW(p)

If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct.

If rerunning the clock produces highly similar moralities, then the moral objectivists will be able to declare victory.

Why should we care about this mere physical fact of which you speak? What has this mere "is" to do with whether "should" is "objective", whatever that last word means (and why should we care about that?)

comment by Tyrrell_McAllister · 2009-11-16T20:23:35.689Z · score: 1 (1 votes) · LW(p) · GW(p)

Why should we care about this mere physical fact of which you speak?

Where did Tim say that we should?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-16T20:27:48.667Z · score: 0 (0 votes) · LW(p) · GW(p)

If it's got nothing to do with shouldness, then how does it determine the truth-value of "moral objectivism"?

comment by timtyler · 2009-11-16T21:20:10.377Z · score: 0 (2 votes) · LW(p) · GW(p)

Hi, Eli! I'm not sure I can answer directly - here's my closest shot:

If there's a kind of universal moral attractor, then the chances seem pretty good that either our civilisation is en route to it - or else we will be obliterated or assimilated by aliens or other agents as they home in on it.

If it's us who are en route to it, then we (or at least our descendants) will probably be sympathetic to the ideas it represents - since they will be evolved from our own moral systems.

If we get obliterated at the hands of some other agents, then there may not necessarily be much of a link between our values and the ones represented by the universal moral attractor.

Our values might be seen as OK by the rest of the universe - and we fail for other reasons.

Or our morals might not be favoured by the universe - we could be a kind of early negative moral mutation - in which case we would fail because our moral values would prevent us from being successful.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-16T21:26:08.765Z · score: 2 (2 votes) · LW(p) · GW(p)

Maybe it turns out that nearly all biological organisms except us prefer to be orgasmium - to bliss out on pure positive reinforcement, as much of it as possible, caretaken by external AIs, until the end. Let this be a fact in some inconvenient possible world. Why does this fact say anything about morality in that inconvenient possible world? Why is it a universal moral attractor? Why not just call it a sad but true attractor in the evolutionary psychology of most aliens?

comment by timtyler · 2009-11-16T21:34:57.488Z · score: 0 (2 votes) · LW(p) · GW(p)

It's a fact about morality in that world - if we are talking about morality as values - or the study of values - since that's what a whole bunch of creatures value.

Why is it a universal moral attractor? I don't know - this is your hypothetical world, and you haven't told me enough about it to answer questions like that.

Call it other names if you prefer.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-16T21:55:14.657Z · score: 1 (1 votes) · LW(p) · GW(p)

What do you mean by "morality"? It obviously has nothing to do with the function I try to compute to figure out what I should be doing.

comment by timtyler · 2009-11-16T22:13:49.693Z · score: 1 (1 votes) · LW(p) · GW(p)

Definitions 1, 2 and 3 on http://en.wikipedia.org/wiki/Morality all seem OK to me.

I would classify the mapping you use between possible and actual actions to be one type of moral system.

comment by StefanPernar · 2009-11-17T01:55:26.398Z · score: -3 (3 votes) · LW(p) · GW(p)

Tim: "If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct."

Actually compassion evolved many different times as a central doctrine of all major spiritual traditions. See the Charter for Compassion. This is in line with a prediction I made independently, being unaware of this fact until I started looking for it back in late 2007 and eventually finding the link in late 2008 with Karen Armstrong's book The Great Transformation.

Tim: "Why is it a universal moral attractor?" Eliezer: "What do you mean by "morality"?"

Central point in my thinking: that is good which increases fitness. If it is not good - not fit - it is unfit for existence. Assuming this to be true, we are very much limited in our freedom by what we can do without going extinct (actually my most recent blog post is about exactly that: Freedom in the evolving universe).

from the Principia Cybernetica web: http://pespmc1.vub.ac.be/POS/Turchap14.html#Heading14

"Let us think about the results of following different ethical teachings in the evolving universe. It is evident that these results depend mainly on how the goals advanced by the teaching correlate with the basic law of evolution. The basic law or plan of evolution, like all laws of nature, is probabilistic. It does not prescribe anything unequivocally, but it does prohibit some things. No one can act against the laws of nature. Thus, ethical teachings which contradict the plan of evolution, that is to say which pose goals that are incompatible or even simply alien to it, cannot lead their followers to a positive contribution to evolution, which means that they obstruct it and will be erased from the memory of the world. Such is the immanent characteristic of development: what corresponds to its plan is eternalized in the structures which follow in time while what contradicts the plan is overcome and perishes."

Eliezer: "It obviously has nothing to do with the function I try to compute to figure out what I should be doing."

Once you realize the implications of Turchin's statement above, it has everything to do with it :-)

Now some may say that evolution is absolutely random and directionless, or that multilevel selection is flawed, or make similar claims. But on reevaluating the evidence against both claims (the work of Valentin Turchin, Teilhard de Chardin, John Stewart, Stuart Kauffman, John Smart and many others regarding evolution's direction, and the ideas of David Sloan Wilson regarding multilevel selection), one will have a hard time maintaining either position.

:-)

comment by CronoDAS · 2009-11-17T04:28:10.514Z · score: 6 (6 votes) · LW(p) · GW(p)

Actually compassion evolved many different times as a central doctrine of all major spiritual traditions.

No, it evolved once, as part of mammalian biology. Show me a non-mammal intelligence that evolved compassion, and I'll take that argument more seriously.

Also, why should we give a damn about what "evolution" wants, when we can, in principle anyway, form a singleton and end evolution? Evolution is mindless. It doesn't have a plan. It doesn't have a purpose. It's just what happens under certain conditions. If all life on Earth were destroyed by runaway self-replicating nanobots, then the nanobots would clearly be "fitter" than what they replaced, but I don't see what that has to do with goodness.

comment by StefanPernar · 2009-11-17T05:02:41.142Z · score: -1 (1 votes) · LW(p) · GW(p)

No, it evolved once, as part of mammalian biology.

Sorry Crono, with a sample size of exactly one in regards to human-level rationality, you are setting the bar a little bit too high for me. However, considering how disconnected Zoroaster, Buddha, Lao Zi and Jesus were geographically and culturally, I guess the evidence is as good as it gets for now.

Also, why should we give a damn about "evolution" wants, when we can, in principle anyway, form a singleton and end evolution?

The typical Bostromian reply again. There are plenty of other scholars who have an entirely different perspective on evolution than Bostrom. But beside that: you already do care, because if you (or your ancestors) had violated the conditions of your existence (enjoying a particular type of food, a particular type of mate, feeling pain when cut, etc.) you would not even be here right now. I suggest you look up Dennett and his TED talk on Funny, Sexy, Cute. Not everything about evolution is random: the mutation bit is, but not what happens to stick around, since that has to meet the conditions of its existence.

What I am saying is very simple: being compassionate is one of these conditions of our existence, and anyone failing to align with it will simply reduce their chances of making it - particularly in the very long run. I still have to finish my detailed response to Bostrom, but you may want to read my writings on 'rational spirituality' and 'freedom in the evolving universe'. Although you do not seem to assign a particularly high likelihood to gaining anything from doing that :-)

comment by wedrifid · 2009-11-17T07:37:25.059Z · score: 4 (4 votes) · LW(p) · GW(p)

The typical Bostromian reply again. There are plenty of other scholars who have an entirely different perspective on evolution than Bostrom. But beside that:

"Besides that"? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.

you already do care, because if your (or your ancestors) violated the conditions of your existence (enjoying a particular type of food, a particular type of mate, feel pain when cut ect.) you would not even be here right now.

No, he mightn't care and I certainly don't. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in time if it remained the dominant force of development.

I suggest you look up Dennet and his TED talk on Funny, Sexy Cute. Not everything about evolution is random: the mutation bit is, not that what happens to stick around though, since that has be meet the conditions of its existence.

CronoDAS knows that. It's obvious stuff for most in this audience. It just doesn't mean what you think it means.

comment by StefanPernar · 2009-11-17T08:06:56.166Z · score: -1 (1 votes) · LW(p) · GW(p)

"Besides that"? All you did was name a statement of a fairly obvious preference choice after one guy who happened to have it so that you could then drop it dismissively.

Wedrifid, not sure what to tell you. Bostrom is but one voice and his evolutionary analysis is very much flawed - again: detailed critique upcoming.

No, he mightn't care and I certainly don't. I am glad I am here but I have no particular loyalty to evolution because of that. I know for sure that evolution feels no such loyalty to me and would discard both me and my species in time if it remained the dominant force of development.

Evolution is not the dominant force of development on the human level by a long shot, but it still very much draws a line in the sand in regards to what you can and cannot do if you want to stick around in the long run. You don't walk your 5'8'' of pink squishiness in front of a train for the exact same reason. And why don't you? Because not doing so is a necessary condition for your continued existence. What other conditions are there? Maybe there are some that are less obvious than simply not ceasing to breathe, not failing to eat, and avoiding hard, fast, shiny things? How about at the level of culture? Could it possibly be that there are some ideas that are more conducive to the continued existence of their believers than others?

“It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an advancement in the standard of morality and an increase in the number of well-endowed men will certainly give an immense advantage to one tribe over another. There can be no doubt that a tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to give aid to each other and to sacrifice themselves for the common good, would be victorious over other tribes; and this would be natural selection.” (Charles Darwin, The Descent of Man, p. 166)

How long do you think you can ignore evolutionary dynamics and get away with it before you are forced by the laws of nature to overcome your inertia and align yourself with them, or perish? Just because you live in a time of extraordinary freedoms, afforded to you by modern technology, and are thus not aware that your ancestors walked a very particular path that brought you into existence, has nothing to do with the fact that they most certainly did. You do not believe that doing any random thing will get you what you want - so what leads you to believe that your existence does not depend on making sure you stay within a comfortable margin of certainty in regards to being naturally selected? You are right in one thing: you are assured the benign indifference of the universe should you fail to wise up. I, however, would find that to be a terrible waste.

Please do not patronize me by trying to claim you know what I understand and don't understand.

comment by wedrifid · 2009-11-17T08:37:08.590Z · score: 4 (4 votes) · LW(p) · GW(p)

How long do you think you can ignore evolutionary dynamics and get away with it before you have to get over your inertia and will be forced to align yourself to them by the laws of nature or perish?

A literal answer was probably not what you were after, but probably about 40 years, depending on when a general AI is created. After that it will not matter whether I conform my behaviour to evolutionary dynamics as best I can or not. I will not be able to compete with a superintelligence no matter what I do. I'm just a glorified monkey. I can hold about 7 items in working memory, my processor is limited to the speed of neurons and my source code is not maintainable. My only plausible chance of survival is if someone manages to completely thwart evolutionary dynamics by creating a system that utterly dominates all competition and allows my survival because it happens to be programmed to do so.

Evolution created us. But it'll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we'll either use that to ensure a desirable future or we will die.

Please do not patronize me by trying to claim you know what I understand and don't understand.

I usually wouldn't; I know it is annoying. In this case, however, my statement was intended as a rejection of your patronisation of CronoDAS and I am quite comfortable with it as it stands.

comment by StefanPernar · 2009-11-17T09:16:33.836Z · score: -1 (1 votes) · LW(p) · GW(p)

A literal answer was probably not what you were after but probably about 40 years, depending on when a general AI is created.

Good one - but it reminds me of the religious fundies who see no reason to do anything about global warming because the rapture is just around the corner anyway :-)

Evolution created us. But it'll also kill us unless we kill it first. Now is not the time to conform our values to the local minima of evolutionary competition. Our momentum has given us an unprecedented buffer of freedom for non-subsistence level work and we'll either use that to ensure a desirable future or we will die.

Evolution is a force of nature so we won't be able to ignore it forever, with or without AGI. I am not talking about local minima either - I want to get as close to the center of the optimal path as necessary to ensure having us around for a very long time with a very high likelihood.

I usually wouldn't; I know it is annoying. In this case, however, my statement was intended as a rejection of your patronisation of CronoDAS and I am quite comfortable with it as it stands.

I accept that.

comment by wedrifid · 2009-11-17T11:15:21.863Z · score: 0 (0 votes) · LW(p) · GW(p)

Good one - but it reminds me of the religious fundies who see no reason to do anything about global warming because the rapture is just around the corner anyway :-)

Don't forget the Y2K doomsday folks! ;)

Evolution is a force of nature so we won't be able to ignore it forever, with or without AGI. I am not talking about local minima either - I want to get as close to the center of the optimal path as necessary to ensure having us around for a very long time with a very high likelihood.

Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.

comment by StefanPernar · 2009-11-18T04:57:39.554Z · score: 0 (0 votes) · LW(p) · GW(p)

Gravity is a force of nature too. It's time to reach escape velocity before the planet is engulfed by a black hole.

Interesting analogy - it would be correct if we called alignment with evolutionary forces 'achieving escape velocity'. What one is doing by resisting evolutionary pressures, however, is constant energy expenditure while failing to reach escape velocity. It is like hovering a space shuttle at a constant altitude of 10 km: no matter how much fuel you bring along, eventually the boosters will run out and the whole thing comes crashing down.

comment by wedrifid · 2009-11-18T05:12:12.881Z · score: 0 (0 votes) · LW(p) · GW(p)

Interesting analogy - it would be correct if we called alignment with evolutionary forces 'achieving escape velocity'.

I could almost agree with this so long as 'obliterate any competitive threat, then do whatever the hell we want, including, as desired, removing all need for death, reproduction and competition over resources' is included in the scope of 'alignment with evolutionary forces'.

comment by RobinZ · 2009-11-17T02:19:24.174Z · score: 1 (1 votes) · LW(p) · GW(p)

The problem with pointing to the development of compassion in multiple human traditions is that all these are developed within human societies. Humans are humans the world over - that they should think similar ideas is not a stunning revelation. Much more interesting is the independent evolution of similar norms in other taxonomic orders, such as canines.

(No, I have no coherent point, why do you ask?)

comment by StefanPernar · 2009-11-17T03:50:53.259Z · score: 0 (0 votes) · LW(p) · GW(p)

Robin, your suggestion - that compassion is not a universal rational moral value because, although more rational beings (humans) display such traits, less rational beings (dogs) do not - is so far off the mark that it borders on the random.

comment by RobinZ · 2009-11-17T04:21:52.080Z · score: 0 (0 votes) · LW(p) · GW(p)

Random I'll cop to, and more than what you accuse me of - dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent.

For purposes of this conversation, I suppose I should reword my comment as:

I don't think you've made the strongest possible case for your thesis, if you were intending to show the multiple origin of compassion as a sign of the universality of human morality. Showing that multiple humans come up with similar morality only shows that it's human. More telling is the independent origin of recognizably morality-like patterns of behavior in other species, such as dogs and wolves, and such as (I believe) some birds. (Other primates as well, but that is less revealing.) I think a fair case could be made that evolution of social animals encourages the development of some kernel of morality from such examples.

That said, the pressures present in the evolution of animals may well be absent in the case of artificial intelligences. At which point, you run into a number of problems in asserting that all AIs will converge on something like morality - two especially spring to mind.

First: no argument is so compelling that all possible minds will accept it. Even the above proof of universality.

Second: even granting that all rational minds will assent to the proof, Hume's guillotine drops on the rope connecting this proof and their utility functions. The paper you cited in the post Furcas quoted may establish that any sufficiently rational optimizer will implement some features, but it does not establish any particular attitude towards what may well be much less powerful beings.

comment by StefanPernar · 2009-11-17T04:43:11.153Z · score: 0 (0 votes) · LW(p) · GW(p)

Random I'll cop to, and more than what you accuse me of - dogs do seem to have some sense of justice, and I suspect this fact supports your thesis to some extent.

Very honorable of you - I respect you for that.

First: no argument is so compelling that all possible minds will accept it. Even the above proof of universality.

I totally agree with that. However, the mind of a purposefully crafted AI is only a very small subset of all possible minds and has certain assumed characteristics. These are, at a minimum: a utility function and the capacity for self-improvement into the transhuman. The self-improvement bit will require it to be rational. Being rational will lead to the fairly uncontroversial basic AI drives described by Omohundro. Assuming that compassion is indeed a human-level universal (detailed argument on my blog - but I see that you are slowly coming around, which is good), an AI will have to question the rationality, and thus the soundness of mind, of anyone giving it a utility function that does not conform to this universal, and, in line with an emergent desire to avoid counterfeit utility, will have to reinterpret the UF.

Second: even granting that all rational minds will assent to the proof, Hume's guillotine drops on the rope connecting this proof and their utility functions.

Two very basic acts of will are required to ignore Hume and get away with it: namely, the desire to exist and the desire to be rational. Once you have established this as a foundation, you are good to go.

The paper you cited in the post Furcas quoted may establish that any sufficiently rational optimizer will implement some features, but it does not establish any particular attitude towards what may well be much less powerful beings.

As said elsewhere in this thread:

There is a separate question about what beliefs about morality people (or, more generally, agents) actually hold, and another question about what values they will hold when their beliefs converge as they engulf the universe. The question of whether or not there are universal values does not traditionally bear on what beliefs people actually hold and the necessity of their holding them.

comment by RobinZ · 2009-11-17T05:02:30.570Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't think I'm actually coming around to your position so much as stumbling upon points of agreement, sadly. If I understand your assertions correctly, I believe that I have developed many of them independently - in particular, the belief that the evolution of social animals is likely to create something much like morality. Where we diverge is at the final inference from this to the deduction of ethics by arbitrary rational minds.

Assuming that compassion is indeed a human level universal (detailed argument on my blog - but I see that you are slowly coming around, which is good) an AI will have to question the rationality and thus the soundness of mind of anyone giving it a utility function that does not conform to this universal and in line with an emergent desire to avoid counterfeit utility will have to reinterpret the UF.

That's not how I read Omohundro. As Kaj aptly pointed out, this metaphor is not upheld when we compare our behavior to that promoted by the alien god of evolution that created us. In fact, people like us, observing that our values differ from our creator's, aren't bothered in the slightest by the contradiction: we just say (correctly) that evolution is nasty and brutish, and we aren't interested in playing by its rules, never mind that it was trying to implement them in us. Nothing compels us to change our utility function save self-contradiction.

comment by StefanPernar · 2009-11-17T05:31:13.100Z · score: -1 (1 votes) · LW(p) · GW(p)

If I understand your assertions correctly, I believe that I have developed many of them independently

That would not surprise me.

Nothing compels us to change our utility function save self-contradiction.

Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?

comment by RobinZ · 2009-11-17T05:41:34.955Z · score: 2 (2 votes) · LW(p) · GW(p)

Would it not be utterly self-contradictory if compassion were a condition for our existence (particularly in the long run) and we did not align ourselves accordingly?

What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?

comment by StefanPernar · 2009-11-17T06:43:57.353Z · score: 0 (0 votes) · LW(p) · GW(p)

What premises do you require to establish that compassion is a condition for existence? Do those premises necessarily apply for every AI project?

The detailed argument that led me to this conclusion is a bit complex. If you are interested in the details please feel free to start here (http://rationalmorality.info/?p=10) and drill down till you hit this post (http://www.jame5.com/?p=27)

Please realize that I spent two years writing my book 'Jame5' before I reached the initial insight that eventually led to 'compassion is a condition for our existence and universal in rational minds in the evolving universe' and everything else. I have spent the past two years refining and expanding the theory, and will need another year or two to read enough and link it all together again in a single coherent and consistent text leading from A to B ... to Z. Feel free to read my stuff if you think it is worth your time, and drop me an email and I will be happy to clarify. I am by no means done with my project.

comment by RobinZ · 2009-11-17T06:56:01.554Z · score: 2 (2 votes) · LW(p) · GW(p)

Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.

I'm not asking for your proof - I am assuming for the nonce that it is valid. What I am asking is the assumptions you had to invoke to make the proof. Did you assume that the AI is not powerful enough to achieve its highest desired utility without the cooperation of other beings, for example?

Edit: And the reason I am asking for these is that I believe some of these assumptions may be violated in plausible AI scenarios. I want to see these assumptions so that I may evaluate the scope of the theorem.

comment by StefanPernar · 2009-11-17T07:36:13.791Z · score: 0 (0 votes) · LW(p) · GW(p)

Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.

Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem, because any AI that can be considered transhuman will be driven by the emergent subgoal of wanting to avoid counterfeit utility, will recognize any utility function that is not 'compassionate' as potentially irrational and thus counterfeit, and will reinterpret it accordingly.

Well - in brevity bordering on libel: the fundamental assumption is that existence is preferable to non-existence; however, in order for us to want this to be a universal maxim (and thus prescriptive instead of merely descriptive - see Kant's categorical imperative), it needs to be expanded to include the 'other'. Hence the utility function becomes 'ensure continued co-existence', by which the concern for the self is equated with concern for the other. Being rational is simply our best bet at maximizing our expected utility.

comment by RobinZ · 2009-11-17T14:19:45.831Z · score: 4 (4 votes) · LW(p) · GW(p)

...I'm sorry, that doesn't even sound plausible to me. I think you need a lot of assumptions to derive this result - just pointing out the two I see in your admittedly abbreviated summary:

  • that any being will prefer its existence to its nonexistence.
  • that any being will want its maxims to be universal.

I don't see any reason to believe either. The former is false right off the bat - a paperclip maximizer would prefer that its components be used to make paperclips - and the latter no less so - an effective paperclip maximizer will just steamroller over disagreement without qualm, however arbitrary its goal.

comment by StefanPernar · 2009-11-18T02:24:15.153Z · score: -3 (3 votes) · LW(p) · GW(p)

...I'm sorry, that doesn't even sound plausible to me. I think you need a lot of assumptions to derive this result - just pointing out the two I see in your admittedly abbreviated summary:

  • that any being will prefer its existence to its nonexistence.
  • that any being will want its maxims to be universal.

Any being with a goal needs to exist at least long enough to achieve it. Any being aiming to do something objectively good needs to want its maxims to be universal.

I am surprised that you don't see that.

comment by Furcas · 2009-11-18T16:39:27.780Z · score: 0 (0 votes) · LW(p) · GW(p)

If your second sentence means that an agent who believes in moral realism and has figured out what the true morality is will necessarily want everybody else to share its moral views, well, I'll grant you that this is a common goal amongst humans who are moral realists, but it's not a logical necessity that must apply to all agents. It's obvious that it's possible to be certain that your beliefs are true and not give a crap if other people hold beliefs that are false. That Bob knows that the Earth is ellipsoidal doesn't mean that Bob cares if Jenny believes that the Earth is flat. Likewise, if Bob is a moral realist, he could 'know' that compassion is good and not give a crap if Jenny believes otherwise.

If you sense strange paradoxes looming under the above paragraph, it's because you're starting to understand why (axiomatic) morality cannot be objective.

comment by Nick_Tarleton · 2009-11-18T17:20:12.845Z · score: 1 (1 votes) · LW(p) · GW(p)

Likewise, if Bob is a moral realist, he could 'know' that compassion is good and not give a crap if Jenny believes otherwise.

Tangentially, something like this might be an important point even for moral irrealists. A lot of people (though not here; they tend to be pretty bad rationalists) who profess altruistic moralities express dismay that others don't, in a way that suggests they hold others sharing their morality as a terminal rather than instrumental value; this strikes me as horribly unhealthy.

comment by RobinZ · 2009-11-18T16:13:44.376Z · score: 0 (0 votes) · LW(p) · GW(p)

Why would a paperclip maximizer aim to do something objectively good?

comment by Furcas · 2009-11-16T19:36:25.308Z · score: 0 (0 votes) · LW(p) · GW(p)

"Universal values" presumably refers to values the universe will converge on, once living systems have engulfed most of it.

If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct.

If rerunning the clock produces highly similar moralities, then the moral objectivists will be able to declare victory.

Yeah, but Stefan's post was about AI, not about minds that evolved in our universe.

Also, there is a difference between moral universalism and moral objectivism. What your last sentence describes is universalism, while Stefan is talking about objectivism:

"My claim is that compassion is a universal rational moral value. Meaning any sufficiently rational mind will recognize it as such."

The idea that physics makes no mention of morality seems totally and utterly irrelevant to me. Physics makes no mention of convection, diffusion-limited aggregation, or fractal drainage patterns either - yet those things are all universal.

Agreed.

comment by timtyler · 2009-11-16T19:45:10.130Z · score: 1 (1 votes) · LW(p) · GW(p)

Assuming that I'm right about this:

http://alife.co.uk/essays/engineered_future/

...it seems likely that most future agents will be engineered. So, I think we are pretty-much talking about the same thing.

Re: universalism vs objectivism - note that he does use the "u" word.

comment by Jack · 2009-11-16T21:57:58.642Z · score: -1 (3 votes) · LW(p) · GW(p)

"Universal values" is usually understood by way of an analogy to a universal law of nature. If there are universal values they are universal in the same way f=ma is universal. Importantly this does not mean that everyone at all times will have these values, only that the question of whether or not a person holds the right values can be answered by comparing their values to the "universal values".

There is a separate question about what beliefs about morality people (or, more generally, agents) actually hold, and another question about what values they will hold when their beliefs converge as they engulf the universe. The question of whether or not there are universal values does not traditionally bear on what beliefs people actually hold and the necessity of their holding them. It could be the case that there are universal values and that, by physical necessity, no one ever holds them. Similarly, there could be universal values that are held in some possible worlds and not others. This is all the result of the simple observation that ought cannot be derived from is. In the above comment you conflate about a half dozen distinct theses.

The idea that physics makes no mention of morality seems totally and utterly irrelevant to me. Physics makes no mention of convection, diffusion-limited aggregation, or fractal drainage patterns either - yet those things are all universal.

But all those things are pure descriptions. Only moral facts have prescriptive properties, and while it is clear how convection supervenes on quarks, it isn't clear how anything that supervenes on quarks could also tell me what to do. At the very least, if quarks can tell you what to do, it would be weird and spooky. If you hold that morality is only the set of facts that describe people's moral opinions and emotions (as you seem to), then you are a kind of moral anti-realist, likely a subjectivist or non-cognitivist.

comment by StefanPernar · 2009-11-17T04:13:39.948Z · score: -1 (1 votes) · LW(p) · GW(p)

Excellent, excellent point Jack.

There is a separate question about what beliefs about morality people (or, more generally, agents) actually hold, and another question about what values they will hold when their beliefs converge as they engulf the universe.

This is poetry! Hope you don't mind me pasting something here I wrote in another thread:

"With unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or, more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that it will eventually lead to one's inability to generate further utility. Thus my suggested utility function of 'ensure continued co-existence'.

This utility function seems to be the only one that does not end in the inevitable termination of the maximizer."

comment by RobinZ · 2009-11-16T17:30:44.706Z · score: 1 (1 votes) · LW(p) · GW(p)

In the context of a hard-takeoff scenario (a perfectly plausible outcome, from our view), there will be no community of AIs within which any one AI will have to act. Therefore, the pressure to develop a compassionate utility function is absent, and an AI which does not already have such a function will not need to produce it.

In the context of a soft-takeoff, a community of AIs may come to dominate major world events in the same sense that humans do now, and that community may develop the various sorts of altruistic behavior selected for in such a community (reciprocal being the obvious one). However, if these AIs are never severely impeded in their actions by competition with human beings, they will never need to develop any compassion for human beings.

Reiterating your argument does not affect either of these problems for assumption A, and without assumption A, AdeleneDawner's objection is fatal to your conclusion.

comment by timtyler · 2009-11-16T19:09:50.673Z · score: 0 (2 votes) · LW(p) · GW(p)

Voting reflects whether people want to see your comments at the top of their pages. It is certainly not just to do with whether what you say is right or not!

comment by StefanPernar · 2009-11-16T12:31:18.920Z · score: -1 (1 votes) · LW(p) · GW(p)

Perfectly reasonable. But the argument - the evidence, if you will - is laid out when you follow the links, Robin. Granted, I am still working on putting it all together in a neat little package that does not require clicking through and reading 20+ separate posts, but it is all there nonetheless.

comment by RobinZ · 2009-11-16T17:14:32.526Z · score: -1 (1 votes) · LW(p) · GW(p)

I think I'd probably agree with Kaj Sotala's remarks if I had read the passages she^H^H^H^H xe had, and judging by your response in the linked comment, I think I would still come to the same conclusion as she^H^H^H^H xe. I don't think your argument actually cuts with the grain of reality, and I am sure it's not sufficient to eliminate concern about UFAI.

Edit: I hasten to add that I would agree with assumption A in a sufficiently slow-takeoff scenario (such as, say, the evolution of human beings, or even wolves). I don't find that sufficiently reassuring when it comes to actually making AI, though.

Edit 2: Correcting gender of pronouns.

comment by StefanPernar · 2009-11-17T03:07:19.726Z · score: 1 (1 votes) · LW(p) · GW(p)

Full discussion with Kaj at her http://xuenay.livejournal.com/325292.html?view=1229740 live journal with further clarifications by me.

comment by Cyan · 2009-11-17T03:30:11.744Z · score: 3 (3 votes) · LW(p) · GW(p)

Kaj is male (or something else).

comment by AdeleneDawner · 2009-11-15T02:19:57.001Z · score: 3 (5 votes) · LW(p) · GW(p)

I was going to be nice and not say anything, but, yeah.

comment by StefanPernar · 2009-11-16T12:21:18.985Z · score: -2 (6 votes) · LW(p) · GW(p)

Since when are 'heh' and 'but, yeah' considered proper arguments, guys? Where is the logical fallacy in the presented arguments, beyond you not understanding the points that are being made? Follow the links, understand where I am coming from, and formulate a response that goes beyond a three- or four-letter vocalization :-)

comment by wedrifid · 2009-11-16T13:40:18.091Z · score: 3 (3 votes) · LW(p) · GW(p)

Where is the logical fallacy in the presented arguments

The claim "[Compassion is a universal value] = true. (as we have every reason to believe)" was rejected, both implicitly and explicitly by various commenters. This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief.

To be fair, I must admit that the quoted portion probably does not do your position justice. I will read through the paper you mention. I (very strongly) doubt it will lead me to accept B but it may be worth reading.

comment by StefanPernar · 2009-11-16T14:21:23.979Z · score: -1 (1 votes) · LW(p) · GW(p)

"This isn't a logical fallacy but it is cause to dismiss the argument if the readers do not, in fact, have every reason to have said belief."

But the reasons to change one's view are provided on the site, yet rejected without consideration. How about you read the paper linked under B, and should that convince you, maybe you will have gained enough provisional trust that reading my writings will not waste your time to suspend your disbelief and follow some of the links on the about page of my blog. Deal?

comment by wedrifid · 2009-11-16T15:07:13.003Z · score: 5 (5 votes) · LW(p) · GW(p)

How about you read the paper linked under B and should that convince you

I have read B. It isn't bad. The main problem I have with it is that the language used blurs the line between "AIs will inevitably tend to" and "it is important that the AI you create will". This leaves plenty of scope for confusion.

I've read through some of your blog and have found that I consistently disagree with a lot of what you say. The most significant disagreement can be traced back to the assumption of a universal absolute 'Rational' morality. This passage was a good illustration:

Moral relativists need to understand that they can not eat the cake and keep it too. If you claim that values are relative, yet at the same time argue for any particular set of values to be implemented in a super rational AI you would have to concede that this set of values – just as any other set of values according to your own relativism – is utterly whimsical, and that being the case, what reason (you being the great rationalist, remember?) do you have to want them to be implemented in the first place?

You see, I plan to eat my cake but don't expect to be able to keep it. My set of values are utterly whimsical (in the sense that they are arbitrary and not in the sense of incomprehension that the Ayn Rand quotes you link to describe). The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason.

comment by StefanPernar · 2009-11-16T16:03:16.754Z · score: -4 (4 votes) · LW(p) · GW(p)

"My set of values are utterly whimsical [...] The reasons for my desires can be described biologically, evolutionarily or with physics of a suitable resolution. But now that I have them they are mine and I need no further reason."

If that is your stated position, then in what way can you claim to create FAI with this whimsical set of goals? This is the crux, you see: unless you find some unobjectionable set of values (such as in rational morality 'existence is preferable over non-existence' => utility = continued existence => modified to ensure continued co-existence with the 'other' to make it unobjectionable => apply rationality in line with microeconomic theory to maximize this utility et cetera) you will end up being a deluded self-serving optimizer.

comment by wedrifid · 2009-11-17T00:48:07.998Z · score: 5 (5 votes) · LW(p) · GW(p)

If that is your stated position then in what way can you claim to create FAI with this whimsical set of goals?

Were it within my power to do so I would create a machine that was really, really good at doing things I like. It is that simple. This machine is (by definition) 'Friendly' to me.

you will end up being a deluded self-serving optimizer.

I don't know where the 'deluded' bit comes from, but yes, I would end up being a self-serving optimizer. Fortunately for everyone else, my utility function places quite a lot of value on the whims of other people. My self-serving interests are beneficial to others too, because I am actually quite a compassionate and altruistic guy.

PS: Instead of using quotation marks you can put a '>' at the start of a quoted line. This convention makes quotations far easier to follow. And looks prettier.

comment by timtyler · 2009-11-16T19:31:11.754Z · score: 3 (5 votes) · LW(p) · GW(p)

There is no such thing as an "unobjectionable set of values".

Imagine the values of an agent that wants all the atoms in the universe for its own ends. It will object to any other agent's values - since it objects to the very existence of other agents - since those agents use up its precious atoms - and put them into "wrong" configurations.

Whatever values you have, they seem bound to piss off somebody.

comment by StefanPernar · 2009-11-18T02:56:44.856Z · score: -1 (1 votes) · LW(p) · GW(p)

There is no such thing as an "unobjectionable set of values".

And here I disagree. Firstly, see my comment about utility function interpretation on another post of yours. Secondly, as soon as one assumes existence as being preferable over non-existence, you can formulate a set of unobjectionable values (http://www.jame5.com/?p=45 and http://rationalmorality.info/?p=124). But granted, if you do not want to exist nor have a desire to be rational, then rational morality has in fact little to offer you. Non-existence and irrational behavior are such trivial goals to achieve, after all, that they would hardly require - nor value, and thus seek, for that matter - well thought out advice.

comment by timtyler · 2009-11-18T08:23:05.033Z · score: 2 (2 votes) · LW(p) · GW(p)

Alas, the first link seems almost too silly to bother with to me, but briefly:

Unobjectionable - to whom? An agent objecting to another agent's values is a simple and trivial occurrence. All an agent has to do is to state that - according to its values - it wants to use the atoms of the agent with the supposedly unobjectionable utility function for something else.

"Ensure continued co-existence" is vague and wishy-washy. Perhaps publicly work through some "trolley problems" using it - so people have some idea of what you think it means.

You claim there can be no rational objection to your preferred utility function.

In fact, an agent with a different utility function can (obviously) object to its existence - on grounds of instrumental rationality. I am not clear on why you don't seem to recognise this.

comment by timtyler · 2009-11-15T10:36:34.831Z · score: 5 (5 votes) · LW(p) · GW(p)

Re: "Assumption A: Human (meta)morals are not universal/rational. Assumption B: Human (meta)morals are universal/rational.

Under assumption A one would have no chance of implementing any moral framework into an AI since it would be undecidable which ones they were." (source: http://rationalmorality.info/?p=112)

I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals.

I had a look at some of the other material. IMO, Stefan acts in an authoritative manner, but comes across as a not-terribly articulate newbie on this topic - and he has adopted what seems to me to be a bizarre and indefensible position.

For example, consider this:

"A rational agent will always continue to co-exist with other agents by respecting all agents utility functions irrespective of their rationality by striking the most rational compromise and thus minimizing opposition from all agents." http://rationalmorality.info/?p=8

comment by StefanPernar · 2009-11-16T12:09:48.278Z · score: 1 (3 votes) · LW(p) · GW(p)

"I think we've been over that already. For example, Joe Bloggs might choose to program Joe's preferences into an intelligent machine - to help him reach his goals."

Sure - but it would be moral simply by virtue of circular logic and not objectively. That is my critique.

I realize that one will have to drill deep into my arguments to understand them and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is made and certain assumptions are being defended.

If you have a particular problem with any of the core assumptions and conclusions, I would prefer you voice it not as a blatant rejection of an out-of-context comment here or there but based on the fundamentals. Reading my blogs in sequence will certainly help, although I understand that some may consider that an unreasonable amount of time investment for what seems like superficial nonsense on the surface.

Where is your argument against my points Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying "Look at this nonsense" is not an argument. So far I only got an ad hominem and an argument from personal incredulity.

comment by timtyler · 2009-11-16T18:59:43.506Z · score: 0 (0 votes) · LW(p) · GW(p)

This isn't my favourite topic - while you have a whole blog about it - so you are probably quite prepared to discuss things for far longer than I am likely to be interested.

Anyway, it seems that I do have some things to say - and we are rather off topic here. So, for my response, see:

http://lesswrong.com/lw/1dt/open_thread_november_2009/19hl

comment by wedrifid · 2009-11-16T14:31:11.653Z · score: 0 (0 votes) · LW(p) · GW(p)

I had a look at some of the other material. IMO, Stefan acts in an authoritative manner, but comes across as a not-terribly articulate newbie on this topic - and he has adopted what seems to me to be a bizarre and indefensible position.

I had a look over some of the other material too. It left me with the urge to hunt down these weakling Moral Rational Agents and tear them apart. Perhaps because I can create more paperclips out of their raw materials than out of their compassionate compromises but perhaps because spite is a universal value (as we have every reason to believe).

From a slightly different topic on the same blog, I must assert that "Don’t start to cuddle if she likes it rough." is not a tautological statement.

comment by MichaelGR · 2009-11-11T21:06:59.409Z · score: 22 (26 votes) · LW(p) · GW(p)

If you were to disappear (freak meteorite accident), what would the impact on FAI research be?

Do you know other people who could continue your research, or that are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?

comment by Alicorn · 2009-11-11T18:23:51.442Z · score: 22 (34 votes) · LW(p) · GW(p)

What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?

comment by dclayh · 2009-11-11T19:58:54.092Z · score: 2 (2 votes) · LW(p) · GW(p)

Excellent; I was going to ask that myself. Clearly Eliezer wanted an example to support his oft-repeated contention that the future, like the past, will be filled with people whose values seem abhorrent to us. But why he chose that particular example I'd like to know. Was it the most horrific(-sounding) thing he could come up with some kind of reasonable(-sounding) justification for?

comment by Alicorn · 2009-11-11T20:19:33.692Z · score: 1 (1 votes) · LW(p) · GW(p)

It's not at all clear to me that coming up with a reasonable-sounding justification was part of the project. One isn't provided in the story, one wasn't presented as part of an answer to an earlier question of mine, etc. etc.

comment by AdeleneDawner · 2009-11-11T20:25:57.793Z · score: 6 (6 votes) · LW(p) · GW(p)

I confess that a hidden motive behind this in-passing conversation is that I have an entirely different story in progress where this is a central plot point, and I wanted to see to what degree I could get away with it. The fact that it's taken over the comments is not as good as I hoped, but neither was the reaction as bad as I feared. Albeit that in this case I was able to go to some length to insert the disclaimer that "rape" in their world just doesn't mean the same thing to them as it does to us, and that rape in our world is a very bad thing of which I disapprove; I wouldn't be able to do that, to the same degree, in the other story I was working on.

comment by timtyler · 2009-11-14T00:16:43.337Z · score: 0 (0 votes) · LW(p) · GW(p)

Perhaps see:

http://en.wikipedia.org/wiki/Traumatic_insemination

comment by Alicorn · 2009-11-11T20:37:05.576Z · score: -1 (5 votes) · LW(p) · GW(p)

This isn't an explanation at all.

comment by AdeleneDawner · 2009-11-11T20:48:13.934Z · score: 5 (5 votes) · LW(p) · GW(p)

The purpose was to test the waters for another story he was developing; there probably wasn't an in-story purpose to it beyond the obvious one of making it clear that the younger people had a very different worldview than the one we have now. He's been unwilling to give more detail because the reaction to the concept's insertion in that story was too negative to allow him to safely (without reputational consequence, I assume) share the apparently much more questionable other story, or, seemingly, any details about it.

I did upvote your question, by the way. I want to hear more about that other story.

comment by SilasBarta · 2009-11-11T21:44:14.508Z · score: 3 (5 votes) · LW(p) · GW(p)

He's been unwilling to give more detail because the reaction to the concept's insertion in that story was too negative to allow him to safely (without reputational consequence, I assume) share the apparently much more questionable other story, or, seemingly, any details about it.

I don't see it doing much good to his reputation to stay silent either, given the inflammatory nature of the remark. Sure, people will be able to quote that part to trash Eliezer, but that's a lot worse than if someone could link a reasonable clarification in his defense.

Yes, I voted Alicorn's question up. I want to know too.

comment by AdeleneDawner · 2009-11-11T21:56:47.299Z · score: 4 (4 votes) · LW(p) · GW(p)

Actually, there's a very good clarification of his views on rape in the context of our current society later in that same comment thread that could be linked to. It didn't seem to be relevant to this conversation, though.

comment by SilasBarta · 2009-11-12T03:46:58.324Z · score: -1 (1 votes) · LW(p) · GW(p)

That's certainly an explanation. "Very good" and "clarifying" are judgment calls here...

comment by AdeleneDawner · 2009-11-12T04:12:36.265Z · score: 1 (1 votes) · LW(p) · GW(p)

How could it be better? What parts still need clarifying?

comment by SilasBarta · 2009-11-12T19:11:28.605Z · score: 1 (3 votes) · LW(p) · GW(p)

Okay, after reading the thread and more of Eliezer's comments on the issue, it makes more sense. If I understand it correctly, in the story world, women normally initiate sex, and so men would view female-initiated sex as the norm and -- understandably -- not see what's wrong with non-consensual sex, since they wouldn't even think of the possibility of male-initiated sex. Akon, then, is speaking from the perspective of someone who wouldn't understand why men would have a problem with sex being forced on them, and not considering rape of women as a possibility at all.

Is that about right?

ETA: I still can't make sense of all the business about redrawing of boundaries of consent.

ETA2: I also can't see how human nature could change so that women normally initiate sex, AND men continue to have the same permissive attitude toward sex being forced upon them. It seems that the severity of being raped is part and parcel of being the gender that's choosier about who they have sex with.

comment by AdeleneDawner · 2009-11-13T01:48:00.662Z · score: 2 (2 votes) · LW(p) · GW(p)

Regarding the first part, I don't think we were given enough information, either in the story or in the explanation, to determine how exactly the 3WC society differs from ours in that respect - and the point wasn't how it's different so much as that it's different, so I don't consider that a problem. I could be wrong, though, about having enough information - I'm apparently wired especially oddly in ways that are relevant to understanding this aspect of the story, so there's a reasonable chance that I'm personally missing one or more pieces of information that Eliezer assumed that the readers would be bringing to the story to make sense of it.

Regarding 'boundaries of consent', I'm working on an explanation of how I understood Eliezer's explanation. This is a tricky area, though, and my explanation necessarily involves some personal information that I want to present carefully, so it may be another few hours. (I've been out for the last four, or it would have been posted already.)

comment by Blueberry · 2009-11-13T02:15:10.858Z · score: 5 (5 votes) · LW(p) · GW(p)

My understanding was that any society has things that are considered consented to by default, and things that need explicit permission. For instance, among the upper class in England in the last century, it was considered improper to start a conversation with someone unless you had been formally introduced. In modern-day America, it's appropriate to start a conversation with someone you see in public, or tap someone on the shoulder, but not to grope their sexual organs, for instance.

I think this is what EY meant by "boundaries of consent": for instance, imagine a society where initiating sex was the equivalent of asking the time. You could decline to answer, but it would seem odd.

Even so, there's a difference between changing the default for consent, and actually allowing non-consensual behavior. For instance, if someone specifically tells me not to tap her shoulder (say she's an Orthodox Jew) it would then not be acceptable for me to do so, and in fact would legally be assault. But if a young child doesn't want to leave a toy store, it's acceptable for his parent to forcibly remove him.

So there's actually two different ideas: changing the boundaries of what's acceptable, and changing the rules for when people are allowed to proceed in the face of an explicit "no".

comment by LauraABJ · 2009-11-13T03:13:47.268Z · score: 0 (0 votes) · LW(p) · GW(p)

It's also possible that people in that society have a fetish about being taken regardless of anything they do to try and stop it... Like maybe it's one of the only aspects of their lives they don't have any control over, and they like it that way. Of course, I think your explanation is more likely, but either could work.

comment by AdeleneDawner · 2009-11-13T02:52:50.559Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm still working on my explanation, but I'm going to wait and see if this comment does the job before I post it.

comment by SilasBarta · 2009-11-13T03:15:51.831Z · score: -1 (1 votes) · LW(p) · GW(p)

It seems you're still about as confused as I am. Why do you think the linked comment clarified anything?

comment by AdeleneDawner · 2009-11-13T03:56:00.489Z · score: 5 (7 votes) · LW(p) · GW(p)

I'm not confused at Eliezer's linked comments; I'm confused at your confusion. I think the linked comments clarified things because I learned relevant information from them, the following points in particular:

  1. The rape comment was not intended to be a plot point, or even major worldbuilding, for 3WC. The fact that we don't have enough in-story context to understand the remark may have been purposeful (though the purpose was not 3WC-related if so), and whether it was purposeful or not, 3WC is intended to be able to work without such an explanation.

  2. Eliezer believes that he understands the psychology behind rape well enough to construct a plausible alternative way for a society to handle the issue. He attempted to support the assertion that he does by explaining how our society handles the issue. I found his explanation coherent and useful - it actually helped solve a related problem I'd been working on - so I believe that he does understand it. I understand that you didn't find his explanation coherent and/or useful, but I don't know why, so I don't know if it's an issue of you not having some piece of information that Eliezer and I have and take for granted, or you noticing a problem with the explanation that Eliezer and I both missed, or perhaps some other issue. My method of solving this kind of problem is to give more information, which generally either solves the problem directly or leads the other person to be able to pinpoint the problem they've found in my (or in this case, Eliezer's) logic, but on such a touchy subject I'm choosing to do that carefully.

comment by AdeleneDawner · 2009-11-13T07:08:13.989Z · score: 26 (26 votes) · LW(p) · GW(p)

Here's my attempt at explaining Eliezer's explanation. It's based heavily on my experiences as someone who's apparently quite atypical in a relevant way. This may require a few rounds of back-and-forth to be useful - I have more information about the common kind of experience (which I assume you share) than you have about mine, but I don't know if I have enough information about it to pinpoint all the interesting differences. Note that this information is on the border of what I'm comfortable sharing in a public area, and may be outside some peoples' comfort zones even to read about: If anyone reading is easily squicked by sexuality talk, they may want to leave the thread now.

I'm asexual. I've had sex, and experienced orgasms (anhedonically, though I'm not anhedonic in general), but I have little to no interest in either. However, I don't object to sex on principle - it's about as emotionally relevant as any other social interaction, which can range from very welcome to very unwelcome depending on the circumstances and the individual(s) with whom I'm socializing*. Sex tends to fall on the 'less welcome' end of that scale because of how other people react to it - I'm aware that others get emotionally entangled by it, and that's annoying to deal with, and potentially painful for them, when I don't react the same way - but if that weren't an issue, 'let's have sex' would get about the same range of reactions from me as 'let's go to the movies' - generally in the range of 'sure, why not?' to 'nope, sorry, what I'm doing now is more interesting', or 'no, thanks' if I'm being asked by someone I prefer not to spend time with.

Now, I don't generally talk about this next bit at all, because it tends to freak people out (even though I'm female and fairly pacifistic and strongly support peoples' right to choose what to do with their bodies in general, and my cluelessness on the matter is unlikely to ever have any effect on anything), but until recently - until I read that explanation by Eliezer, actually - it made no sense to me why someone would consider being raped more traumatic than being kidnapped and forced to watch a really crappy movie with a painfully loud audio track. (Disregarding any injuries, STDs, loss of social status, and chance of pregnancy, of course.) Yeah, being forced to do something against your will is bad, but rape seems to be pretty universally considered one of the worst things that can happen to someone short of being murdered. People even consider rape that bad when the raped person was unconscious and didn't actually experience it!

According to Eliezer - and this makes sense of years' worth of data I gathered while trying to figure this out on my own - this seemingly irrational reaction is because people in our society tend to have what he calls 'sexual selves'. As you may have picked up from the above text, I don't appear to have a 'sexual self' at all, so I'm rather fuzzy on this part, but what he seems to be describing is the special category that people put 'how I am about sex' information into, and most people consider the existence and contents of that category to be an incredibly important part of their selves**. The movie metaphor could be extended to show some parallels in this way, but in the interests of showing a plausible emotional response that's at least close to the same ballpark of intensity, I'll switch to a food metaphor: Vegans, in particular, have a reputation for considering their veganism a fundamental part of their selves, and would theoretically be likely to consider their 'food selves' to have been violated if they discovered that someone had hidden an animal product in something that they ate - even if the animal product would have been discarded otherwise, resulting in no difference in the amount of harm done to any animal. (I know exactly one vegan, and he's one of the least mentally stable people I know in general, so this isn't strong evidence, but the situation I described is the only one other than complete mental breakdown in which I'd predict that that otherwise strict pacifist might become violent.) Even omnivores tend to have a 'food self' in our society - I know few people who wouldn't be disconcerted to discover that they'd eaten rat meat, or insects, or human flesh.***

The rules that we set for ourselves, that define our 'food selves', 'sexual selves', 'movie-watching selves', etc., are what Eliezer was talking about when he mentioned 'boundaries of consent' (which is a specific example of one of those rules). They describe not just what we consider acceptable or unacceptable to do or have done to us, but more fundamentally what we consider related to a specific aspect of our selves. For example, while a google search informs me that this may not be an accurate piece of trivia, I've never heard anyone claim that it's implausible that people in Victorian England considered ankles sexual, even though we don't now. Another example that I vaguely remember reading about, in a different area, is that some cultures considered food that'd been handled by a menstruating woman to be 'impure' and unfit to eat - again, something we don't care about. Sometimes, these rules serve a particular purpose - I've heard the theory that the Kosher prohibition on eating pork was perhaps started because pork was noticed as a disease vector, for example - but the problems that are solved by those rules can sometimes be solved in other ways (in the given example, better meat-processing and cooking technology, I assume), making the rule superfluous and subject to change as the society evolves. It's obvious from my own personal situation that it's also possible - though Eliezer never claimed that this was the case for 3WC - for certain 'selves' that our society considers universal not to develop at all. (Possibly interesting example for this group: Spiritual/religious self.)

Eliezer didn't share with us the details of how the 3WC society solved the relevant underlying problems and allowed the boundaries of sexuality and consent to move so dramatically, but he did indicate that he's aware that those boundaries exist and currently solve certain problems, and that he needed to consider those issues in order to create a plausible alternative way for a society to approach the issue. I don't see any reason to believe that he didn't actually do so.

* I am, notably, less welcoming of being touched in general than most people, but this is not especially true of sex.
** I find this bizarre.
*** I have a toothache. The prescription pain meds I took just kicked in. If the rest of this post is less insightful than the earlier part, or I fail to tie them together properly, it's because I'm slightly out of my head. This may be an ongoing problem until Tuesday or Wednesday.
comment by rhollerith_dot_com · 2009-11-13T22:27:48.575Z · score: 6 (8 votes) · LW(p) · GW(p)

One of the adverse effects of pain pills is temporarily to take away the ability of the person's emotions to inform decision-making, particularly, avoidance of harms.

According to neuroscientist Antonio Damasio, for most people, the person's ability to avoid making harmful decisions depends on the ability of the person to have an emotional reaction to the consequences of a decision -- particularly an emotional reaction to imagined or anticipated consequences -- that is, a reaction that occurs before the decision is made.

When on pain pills, a person tends not to have (or not to heed) these emotional reactions to consequences of decisions that have not been made yet, if I understand correctly.

The reason I mention this is that you might want to wait till you are off the pain pills to continue this really, really interesting discussion of your sexuality. I do not mean to imply that your decision to comment will harm you -- I just thought a warning about pain pills might be useful to you.

comment by AdeleneDawner · 2009-11-13T22:48:09.765Z · score: 5 (5 votes) · LW(p) · GW(p)

I noticed this issue myself, last night - I'd been nervous about posting the information in the second and third paragraphs before I took the meds, and wasn't, afterwards, which was unusual enough to be slightly alarming. (I did write both paragraphs before my visit to the dentist, and didn't edit them significantly afterwards.) The warning is appreciated, though.

I've spent enough time thinking about this kind of thing, though, that I'm confident I can rely on cached judgments of what is and isn't wise to share, even in my slightly impaired state. I'll wait on answering anything questionable, but I suspect that that's unlikely to be an issue - I am really very open about this kind of thing in general, when I'm not worrying about making others uncomfortable with my oddness. It's a side-effect of not having a sexual self to defend.

comment by CronoDAS · 2009-11-15T06:26:07.557Z · score: 0 (0 votes) · LW(p) · GW(p)

One of the adverse effects of pain pills is temporarily to take away the ability of the person's emotions to inform decision-making, particularly, avoidance of harms.

I assume that by "pain pills" you mean opioids and other narcotics? I suspect that aspirin and other non-narcotic painkillers wouldn't impair emotional reactions...

comment by AdeleneDawner · 2009-11-15T06:34:17.225Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm taking an opioid, but I suspect that the effect would be seen with anything that affects sensory impressions, since it'll also affect your ability to sense your emotions.

comment by [deleted] · 2009-11-15T08:06:23.184Z · score: 2 (2 votes) · LW(p) · GW(p)

Bit of a repeat warning: if you don't want to read about sex stuff, don't read this.

You know, given my own experiences, reading this post makes me wonder if sexual anhedonia and rationality are correlated for some reason. (Note, if you wish, that I'm a 17-year-old male, and I've never had a sexual partner. I do know what orgasm is.)

comment by wedrifid · 2009-11-15T10:11:19.849Z · score: 1 (3 votes) · LW(p) · GW(p)

You know, given my own experiences, reading this post makes me wonder if sexual anhedonia and rationality are correlated for some reason.

I would be shocked if they weren't. The most powerful biases are driven by hard-wired sexual signalling mechanisms.

comment by [deleted] · 2009-11-16T03:34:03.465Z · score: 0 (0 votes) · LW(p) · GW(p)

This makes me wonder how I would be different if I weren't apparently anhedonic. Note that I don't remember whether I first found out about that or stumbled upon Eliezer Yudkowsky; it's possible that my rationality-stuff came before my knowledge.

Thinking again, I have been a religious skeptic all my life (and a victim of Pascal's wager for a short period, during which I managed to read some of the Pentateuch), I've never taken a stand on abortion, and I've been mostly apolitical, though I did have a mild libertarian period after learning how the free market works, and I never figured out what was wrong with homosexuality. I don't know whether I, before puberty, was rational or just apathetic.

comment by RobinZ · 2009-11-13T19:33:50.656Z · score: 2 (2 votes) · LW(p) · GW(p)

That is really, really interesting - thanks!

(P.S. I do think that this is a fair elaboration on Eliezer's comment, insofar as I understood either.)

comment by AdeleneDawner · 2009-11-13T21:24:36.887Z · score: 2 (2 votes) · LW(p) · GW(p)

You're welcome. :)

comment by gwern · 2009-11-17T15:21:19.093Z · score: 1 (1 votes) · LW(p) · GW(p)

"For example, while a google search informs me that this may not be an accurate piece of trivia, I've never heard anyone claim that it's implausible that people in Victorian England considered ankles sexual, even though we don't now."

FWIW, I think people don't find it implausible because they know, even if only vaguely, that there are people out there with fetishes for everything, and I have the impression that in heavily Islamic countries with full-on burkha-usage/purdah going, things like ankles are supposed to be erotic and often are.

comment by AdeleneDawner · 2009-11-17T19:25:18.123Z · score: 3 (3 votes) · LW(p) · GW(p)

That interpretation sounds odd to me, so I checked wikipedia, which says:

Sexual fetishism, or erotic fetishism, is the sexual arousal brought on by any object, situation or body part not conventionally viewed as being sexual in nature.

'Conventional' seems to be the sticking point. Ankles are conventionally considered sexual in that culture, so it's not a fetish, in that context; it's a cultural difference.

It seems to make the most sense to think of it as a kind of communication - letting someone see your ankle, in that culture, is a communication about your thoughts regarding that person (though what exactly it communicates, I don't know enough to guess at), and the content of that communication is the turn-on. In our culture, the same thing might be communicated by, say, kissing, with similar emotional results. In either case, it's not the form of the communication that seems to matter, but the meaning, whereas in the case of a fetish, the form does matter, and what the action means to the other party (if there's another person involved) doesn't appear to. (Yes, I have some experience in this area. The fetish in question wasn't actually very interesting, and I don't think talking about it specifically will add to the conversation.)

comment by gwern · 2009-11-17T22:09:11.739Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm... not quite following. I gave 2 examples of why an educated modern person would not be surprised at Victorian ankles and their reception: that fetishes are known to be arbitrary and to cover just about everything, and that contemporary cultures are close or identical to the Victorians. These were 2 entirely separate examples. I wasn't suggesting that your random Saudi Arabian (or whatever) had a fetish for ankles or something, but that such a person had a genuine erotic response regardless of whether the ankle was exposed deliberately or not.

A Western teenage boy might get a boner at bare breasts in porn (deliberate but not really communicating), his girlfriend undressing for him (deliberate & communicative), or - in classic high school anime fashion - a bra/swimsuit getting snagged (both not deliberate & not communicative).

comment by AdeleneDawner · 2009-11-17T22:57:29.112Z · score: 1 (1 votes) · LW(p) · GW(p)

It seems like we're using the word 'fetish' differently, and I'm worried that that might lead to confusion. My original point was about how the cultural meanings of various things can change over time - including but not limited to what would or would not be considered a fetish (i.e. 'unusual to be aroused by'). If nearly everyone in a given culture is aroused by a certain thing, then it's not unusual in that culture, and it's not a fetish for people in that culture to be aroused by that thing, at least given how I'm using the word. (Otherwise, any arousing trait would be considered a fetish if at least one culture doesn't or didn't share our opinion of it, and I suspect that idea wouldn't sit well with most people.)

I propose that the useful dividing line between a fetish and an aspect of a given person's culture is whether or not the arousing thing is universal enough in that culture that it can be used communicatively - that appears to be a good indication that people in that culture are socialized to be aroused by that thing when they wouldn't naturally be aroused by it without the socialization. I also suspect that that socialization is accomplished by teaching people to see the relevant things as communication, automatically, as a deep heuristic - so that that flash of ankle or breast is taken as a signal that the flasher is sexually receptive, without any thought involved on the flashee's part.

It makes much more sense to me that thinking that someone was sexually receptive would be arousing than that somehow nearly everyone in a given culture somehow wound up with an attraction to ankles for their own sake, for no apparent reason, and without other cultures experiencing the same thing. There may be another explanation, though - were you considering some other theory?

comment by gwern · 2009-11-17T23:35:35.975Z · score: 1 (3 votes) · LW(p) · GW(p)

It makes much more sense to me that thinking that someone was sexually receptive would be arousing than that somehow nearly everyone in a given culture somehow wound up with an attraction to ankles for their own sake, for no apparent reason, and without other cultures experiencing the same thing.

This seems true to me. No American male would deny that he is attracted to at least one of the big three (breasts, buttocks, face), and attracted for their own sake, and for no apparent reason. (Who instructed them to like those?)

Yet National Geographic is famous for all its bare-breasted photos of women who seem to neither notice nor care, and ditto for the men. The simplest explanation to me is just that cultures have regions of sexiness, with weak ties to biological facts like childbirth, and fetishes are any assessment of sexiness below a certain level of prevalence. Much simpler than all your communication.

comment by AdeleneDawner · 2009-11-18T00:00:22.711Z · score: 1 (1 votes) · LW(p) · GW(p)

It seems I was trying to answer a question that you weren't asking, then; sorry about that.

comment by Blueberry · 2009-11-18T00:53:03.275Z · score: 2 (2 votes) · LW(p) · GW(p)

Well, the awareness that there are people who have a fetish for X in this culture might make it less surprising that there is a whole culture that finds X sexy.

You're at least partly right about the communication theory. One big turn on for most people is that someone is sexually interested in them, as communicated by revealing normally hidden body parts. Supposedly in Victorian times legs were typically hidden, so revealing them would be communicative.

Another part of this is that the idea of a taboo is itself sexy, whether or not there is communicative intent. Just the idea of seeing something normally secret or forbidden is arousing to many people.

I'm curious about your example that came up in your life, if you're willing to share.

comment by AdeleneDawner · 2009-11-18T01:38:35.953Z · score: 3 (3 votes) · LW(p) · GW(p)

Well, the awareness that there are people who have a fetish for X in this culture might make it less surprising that there is a whole culture that finds X sexy.

I suppose that's true, though it's not obvious to me that something would have to start as a fetish to wind up considered sexual by a culture.

Another part of this is that the idea of a taboo is itself sexy, whether or not there is communicative intent. Just the idea of seeing something normally secret or forbidden is arousing to many people.

This appears to be true - I've heard it before, anyway - but it doesn't make sense, to me, at least as a sexual thing.

Except, as I'm thinking of it now, it does seem to make sense in the context of communicating. Sharing some risky (in the sense that if it were made public knowledge, you'd take a social-status hit) bit of information is a hard-to-fake signal that you're serious about the relationship, and doing something risky together is a natural way of reciprocating with each other regarding that. It seems like it'd serve more of a pair-bonding purpose than strictly a sexual one, but the two are so intertwined in humans that it's not really surprising that it'd do both.

I'm curious about your example that came up in your life, if you're willing to share.

My first boyfriend had a thing for walking through puddles while wearing tennis shoes without socks. Pretty boring, as fetishes go.

comment by Blueberry · 2009-11-18T02:13:30.532Z · score: 0 (0 votes) · LW(p) · GW(p)

I suppose that's true, though it's not obvious to me that something would have to start as a fetish to wind up considered sexual by a culture.

It wouldn't. That's not what I meant: I meant that someone considering Victorian culture, say, where it was allegedly commonplace to find ankles sexy, might not find it too surprising if he knew about people with an ankle fetish in this culture. As in "I know someone who finds ankles sexy in this culture, so it's not that weird for ankles to be considered sexy in a completely different culture."

Communicating risky information is more of a pair-bonding thing than a sexual one. I was thinking about seeing something taboo or hidden as sexual. Say it's in a picture or it's unintentional, so there's no communicative intent. A lot of sexuals find it exciting just because it's "forbidden". You might be able to relate if you've ever been told you can't do something and that just made you want it more.

comment by AdeleneDawner · 2009-11-18T02:37:52.876Z · score: 2 (2 votes) · LW(p) · GW(p)

A lot of sexuals find it exciting just because it's "forbidden". You might be able to relate if you've ever been told you can't do something and that just made you want it more.

That sounds bizarre. I understand assuming that something that a higher-ranking person is allowed to have, that you're not allowed, is a good thing to try to get. It sounds like the cause and effect in what you described is backwards from the way that makes sense to me: 'This is good because it's not allowed', not 'this is not allowed because it's good and in limited supply'. What could being wired that way possibly accomplish besides causing you grief?

ETA: I have heard of that particular mental quirk before, and probably even seen it in action. I'm not saying that it's unusual to have it, just that it seems incomprehensible and potentially harmful, to me.

comment by Blueberry · 2009-11-18T03:16:50.885Z · score: 1 (1 votes) · LW(p) · GW(p)

Well, you're really asking two questions: why is it useful, and how to comprehend it.

As far as comprehending it... well, I had thought it was a human universal to be drawn to forbidden things. Have you really never felt the urge to do something forbidden, or the desire to break rules? Maybe it's just because I tend to be a thrill-seeker and a risk-taker.

I think you might be misunderstanding. I don't make a logical deduction that something is a good thing because it's not allowed. I do feel emotionally drawn towards things that are forbidden. It's got nothing to do with "higher-ranking" people.

It's a pretty natural human urge to go exploring and messing around in forbidden areas. It's useful because it's what helps topple dictatorships, encourages scientific inquiry, and stirs up revolutions.

comment by AdeleneDawner · 2009-11-18T03:40:43.987Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't think I've ever felt the need to break a rule just for the sake of doing so. I vaguely remember being curious enough about the supposed draw of doing forbidden things to try it in some minor way, out of curiosity, as a teenager, but it's pretty obvious how that worked out. (My memory of my teenage years is horrible, so I don't have details, and could actually be incorrect altogether.) My reaction to rules in general is fairly neutral: I tend to assume that they have (or at least, were intended to have) good reasons behind them, but have no objection to breaking rules whose reasons don't seem relevant to the issue at hand.

I did understand that you were talking about something different, but that different thing doesn't make sense.

comment by Alicorn · 2009-11-18T03:31:52.120Z · score: 1 (1 votes) · LW(p) · GW(p)

I am typically only drawn to forbidden things when I do not know why they are forbidden, or know that they are forbidden for stupid reasons and find the forbidden thing a desideratum for other reasons. In the first case, it's a matter of curiosity - why has someone troubled to forbid me this thing? In the second, it's just that the thing is already a desideratum and the forbiddance provides no successfully countervailing reason to avoid seeking it.

comment by wedrifid · 2009-11-18T02:59:41.614Z · score: 1 (1 votes) · LW(p) · GW(p)

What could being wired that way possibly accomplish besides causing you grief?

Like the 'prestige' metric that has been discussed recently, 'things that the powerful want to stop me from doing' is a strong indicator of potential value to someone even though it is intrinsically meaningless. Obviously, having this generalised wiring leads them to desire irrelevant or even detrimental things sometimes.

comment by AdeleneDawner · 2009-11-18T03:40:37.330Z · score: 0 (0 votes) · LW(p) · GW(p)

I haven't been reading that. I'll go check it out. Maybe it'll help.

comment by Kaj_Sotala · 2009-11-17T14:34:27.478Z · score: 1 (3 votes) · LW(p) · GW(p)

This was a fascinating comment; thank you.

By the way, the Bering at Mind blog over at Scientific American had a recent, rather lengthy post discussing asexual people.

comment by CronoDAS · 2009-11-15T06:28:37.333Z · score: 1 (1 votes) · LW(p) · GW(p)

This is interesting to know and read about. Are you a-romantic as well as asexual?

comment by AdeleneDawner · 2009-11-15T06:52:33.623Z · score: 4 (4 votes) · LW(p) · GW(p)

It depends how you define 'romantic'. I have a lot of trouble with the concept of monogamy, too, so if you're asking if I pair-bond, no. I do have deeply meaningful personal relationships that involve most of the same kinds of caring-about, though. On the other hand, I don't see a strong disconnect between that kind of relationship and a friendship - the difference in degree of closeness definitely changes how things work, but it's a continuum, not different categories, and people do wind up in spots on that continuum that don't map easily to 'friends' or 'romantic partners'. (I do have names for different parts of that continuum, to make it easier to discuss the resulting issues, but they don't seem to work the same as most peoples' categories.)

comment by CronoDAS · 2009-11-16T09:44:26.356Z · score: 0 (0 votes) · LW(p) · GW(p)

Well, I was mostly referring to this feeling: http://en.wikipedia.org/wiki/Limerence

From your response, I'd have to guess that, no, you don't "fall in love" either. My personal experience is that there's a sharp, obvious difference in the emotions involved in romantic relationships and in friendships, although the girls I've had crushes on have never felt similarly about me.

comment by AdeleneDawner · 2009-11-16T10:02:13.465Z · score: 2 (2 votes) · LW(p) · GW(p)

Yep, limerence is foreign to me, though not as incomprehensible as some emotions.

The Wikipedia entry on love styles may be useful. I'm very familiar with storge, and familiar with agape. Ludus and pragma make sense as mental states (pragma more so than ludus), but it's unclear to me why they're considered types of love. I can recognize mania, but doubt that there's any situation in which I'd experience it, so I consider it foreign. Eros is simply incomprehensible - I don't even recognize when others are experiencing it.

That said, it seems completely accurate to me to describe myself as being in love with the people I'm closest with - the strength and closeness and emotional attachment of those relationships seems to be at least comparable with relationships established through more traditional patterns, once the traditional-pattern relationships are out of the initial infatuation stage.

comment by wedrifid · 2009-11-16T11:38:35.774Z · score: 1 (1 votes) · LW(p) · GW(p)

Thanks for the link. This part was fascinating:

In a genetic study of 350 lovers, the Eros style was found to be present more often in those bearing the TaqI A1 allele of the DRD2 3' UTR sequence and the overlapping ANKK1 exon 8. This allele has been proposed to influence a wide range of behaviors, favoring obesity and alcoholism but opposing neuroticism-anxiety and juvenile delinquency.[3] This genetic variation has been hypothesized to cause a reduced amount of pleasure to be obtained from a given action, causing people to indulge more frequently.[4]

comment by arundelo · 2009-11-13T07:36:28.737Z · score: 1 (1 votes) · LW(p) · GW(p)

experienced orgasms (anhedonically

Does this mean you've experienced orgasms without enjoying them, or experienced orgasms without setting out to do so for pleasure, or something else?

comment by AdeleneDawner · 2009-11-13T07:42:21.417Z · score: 4 (4 votes) · LW(p) · GW(p)

The former. It actually took some research for me to determine that I was experiencing them at all, because most descriptions focus so heavily on the pleasure aspect.

comment by SilasBarta · 2009-11-15T05:53:45.939Z · score: 0 (4 votes) · LW(p) · GW(p)

Okay, sounds plausible. Now, I ask that you do a check. Compare the length of your explanation to the length of the confusion-generating passage in 3WC. Call this the "backpedal ratio". Now, compare this backpedal ratio to that of, say, typical reinterpretations of the Old Testament that claim it doesn't really have anything against homosexuals.

If yours is about the same or higher, that's a good reason to write off your explanation with "Well, you could pretty much read anything into the text, couldn't you?"

comment by AdeleneDawner · 2009-11-15T06:42:27.829Z · score: 4 (4 votes) · LW(p) · GW(p)

I don't think the length in words is a good thing to measure by, especially given the proportion of words I used offering metaphors to assist people in understanding the presented concepts or reinforcing that I'm not actually dangerous vs. actually presenting new concepts. I also think that the strength (rationality, coherency) of the explanation is more important than the number of concepts used, but it's your heuristic.

comment by SilasBarta · 2009-11-16T00:51:15.725Z · score: 0 (0 votes) · LW(p) · GW(p)

Fine. Don't count excess metaphors or disclaimers toward your explanation, and then compute the backpedal ratio. Would that be a fair metric? Even with this favorable counting, it still doesn't look good.

comment by AdeleneDawner · 2009-11-16T01:26:11.892Z · score: 1 (1 votes) · LW(p) · GW(p)

I don't think that evaluating the length of the explanation - or the number of new concepts used - is a useful heuristic at all, as I mentioned. I can go into more detail than I have regarding why, but that explanation would also be long, so I assume you'd disregard it, therefore I don't see much point in taking the time to do so. (Unless someone else wants me to, or something.)

comment by SilasBarta · 2009-11-16T19:31:41.552Z · score: 3 (3 votes) · LW(p) · GW(p)

Given unlimited space, I can always outline plausible-sounding scenarios where someone's outlandish remarks were actually benign. This is an actual cottage industry among people who want to show adherence to the Bible while assuring others they don't actually want to murder homosexuals.

For this reason, the fact that you can produce a plausible scenario where Eliezer meant something benign is weak evidence he actually meant that. And it is the power of elaborate scenarios that implies we should be suspicious of high backpedal ratios. To the extent that you find length a bad measure, you have given scenarios where length doesn't actually correlate with backpedaling.

It's a fair point, so I suggested you clip out such false positives for purposes of calculating the ratios, yet you still claim you have a good reason to ignore the backpedal ratio. That I don't get.

More generally, I am still confused in that I don't see a clean, simple reason why someone in the future would be confused as to why lots of rape would be a bad thing back in the 20th century, given that he'd have historical knowledge of what that society was like.

comment by AdeleneDawner · 2009-11-16T21:07:50.431Z · score: 3 (3 votes) · LW(p) · GW(p)

I wasn't trying to explain how Eliezer's world works - I upvoted the original comment specifically because I don't know how it works, and I'm curious. If you were taking my explanation as an attempt to provide that information, I'm sure it came across as a poor attempt, because I was in fact specifically avoiding speculating about the world Eliezer created. What I was attempting to do was show - from an outsider's perspective, since that's the one I have, and it's obviously more useful than an insider's perspective in this case - the aspects of how humans determine selfhood and boundaries that make such a change possible (yes, just 'possible'), and also that Eliezer had shown understanding of the existence of those aspects.

If I had been trying to add more information to the story - writing fanfiction, or speculating on facts about the world itself - applying your backpedal-ratio heuristic would make some sense (though I'd still object to your use of length-in-words as a measurement, and there are details of using new-concepts as a measurement that I'm not sure you've noticed), but I wasn't. I was observing facts about the real world, specifically about humans and how dramatically different socialization can affect us.

As to why the character didn't understand why people from our time react so strongly to rape, the obvious (to me) answer is a simple lack of explanation by us. There's a very strong assumption in this society that everyone shares the aspects of selfhood that make rape bad (to the point where I often have to hide the fact that I don't share them, or suffer social repercussions), and very little motivation to even consider why it's considered bad, much less leave a record of such thoughts. Even living in this society, with every advantage in understanding why people react that way save the relevant trait itself, I haven't found an explanation that really makes sense of the issue, only one that does a coherent job of organizing the reactions that I've observed on my own.

comment by Blueberry · 2009-11-16T23:01:24.352Z · score: 1 (1 votes) · LW(p) · GW(p)

So does your lack of a sexual self make it so you can't see rape as bad at all, or "only" as bad as beating someone up? Presumably someone without a sexual self could still see assault as bad, and rape includes assault and violence.

comment by AdeleneDawner · 2009-11-16T23:31:58.236Z · score: 8 (8 votes) · LW(p) · GW(p)

Disregarding the extra physical and social risks of the rape (STDs, pregnancy, etc.), I expect that I wouldn't find assault-plus-unwelcome-sex more traumatic than an equivalent assault without the sex. I do agree that assault is traumatic, and I understand that most people don't agree with me about how traumatic assault-with-rape is compared to regular assault.

A note, for my own personal safety: The fact that I wouldn't find it as traumatic means I'm much more likely to report it, and to be able to give a coherent report, if I do wind up being raped. It's not something I'd just let pass, traumatic or no; people who are unwilling to respect others' preferences are dangerous and should be dealt with as such.

comment by Blueberry · 2009-11-16T23:45:02.005Z · score: 5 (5 votes) · LW(p) · GW(p)

Assault by itself is pretty traumatic. Not just the physical pain, but the stress, fear, and feeling of loss of control. I was mugged at knifepoint once, and though I wasn't physically hurt at all, the worst part was just feeling totally powerless and at the mercy of someone else. I was so scared I couldn't move or speak.

I don't think your views on rape are as far from the norm as you seem to think. They make sense to me.

comment by AdeleneDawner · 2009-11-17T00:05:45.881Z · score: 4 (4 votes) · LW(p) · GW(p)

Rape can happen without assault, though - I know someone to whom such a rape happened, and she found it very traumatic, to the point where it still affects her life decades later.

There are also apparently other things that can evoke the same kind of traumatized reaction without involving physical contact at all; Eliezer gave 'having nude photos posted online against your will' as an example. (I mentioned that example in a discussion with the aforementioned friend, and she agreed with Eliezer that it'd be similarly traumatic, in both type and degree, for whatever one data-point might be worth.)

comment by Blueberry · 2009-11-16T20:34:01.752Z · score: 2 (2 votes) · LW(p) · GW(p)

You seem confused about several things here. Unlike Biblical exegesis, in this conversation we are trying to elaborate and discuss possibilities for the cultural features of a world that was only loosely sketched out. You realize this is a fictional world we're discussing, not a statement of morality, or a manifesto that would require "backpedaling"?

The point of introducing socially acceptable non-consensual sex was to demonstrate huge cultural differences. Neither EY nor anyone else is claiming this would be a good thing, or "benign" : it's just a demonstration of cultural change over time.

Someone in the future, unless he was a historian, might not be familiar with history books discussing 20th century life. He might think lots of rape in the 20th century would be good (incorrectly) because non-consensual sex is a good thing by his cultural standards. He'd be wrong, but he wouldn't realize it.

Your question is analogous to "I don't see why someone now couldn't see that slavery was a good thing back in the 17th century, given that he'd have historical knowledge of what that society was like." Well, yes, slavery was seen (by some people) as a good thing back then, but it's not now. In the story, non-consensual sex is seen (incorrectly) as a good thing in the future, so people in the future interpret the past through those biases.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-16T19:35:39.069Z · score: 2 (2 votes) · LW(p) · GW(p)

Maybe it's just my experience with Orthodox Judaism, but the backpedal exegesis ratio - if, perhaps, computed as a sense of mental weight, more than a number of words - seems to me like a pretty important quantity when explaining others.

comment by AdeleneDawner · 2009-11-16T22:19:53.981Z · score: 0 (0 votes) · LW(p) · GW(p)

I could see it being important in some situations, definitely, if I'm understanding the purpose of the measurement correctly.

My understanding is that it's actually intended to measure how much the new interpretation is changing the meaning of the original passage from the meaning it was originally intended to have. That's difficult to measure, in most cases, because the original intended meaning is generally at least somewhat questionable in cases where people attempt to reinterpret a passage at all.

In this case, I'm trying not to change your stated meaning (which doesn't seem ambiguous to me: You're indicating that far-future societies are likely to have changed dramatically from our own, including changing in ways that we would find offensive, and that they can function as societies after having done so) at all, just to explain why your original meaning is more plausible than it seems at first glance. If I've succeeded - and if my understanding of your meaning and my understanding of the function of the form of measurement are correct - then the ratio should reflect that.

comment by DanArmak · 2009-11-14T00:42:40.802Z · score: 0 (0 votes) · LW(p) · GW(p)

It seems that the severity of being raped is part and parcel of being the gender that's choosier about who they have sex with.

Evolutionarily, it would seem that the severity of women being raped is due to the possibility of involuntary impregnation. Do we have good data on truly inborn gender differences on the severity of rape, without cultural interference?

comment by Tyrrell_McAllister · 2009-11-11T20:42:16.229Z · score: 4 (4 votes) · LW(p) · GW(p)

I don't see the need for more than this:

"rape" in their world just doesn't mean the same thing to them as it does to us

I just figured that these humans have been biologically altered to have a different attitude towards sex. Perhaps, for them, initiating sex with someone is analogous to initiating a conversation. Sure, you wish that some people wouldn't talk to you, but you wouldn't want to live in a world where everyone needed your permission before initiating a conversation. Think of all the interesting conversations you'd miss!

comment by Alicorn · 2009-11-11T20:48:25.090Z · score: 2 (4 votes) · LW(p) · GW(p)

And if that's what's going on, that would constitute a (skeezy) answer to my question, but I'd like to hear it from the story's author. Goodness knows it would annoy me if people started drawing inaccurate conclusions about my constructed worlds when they could have just asked me and I would have explained.

comment by Technologos · 2009-11-12T09:20:56.105Z · score: 1 (1 votes) · LW(p) · GW(p)

Alicorn: On the topic of your constructed worlds, I would be fascinated to read how your background in world-building (which, iirc, was one focus of your education?) might contribute to our understanding of this one.

comment by Alicorn · 2009-11-12T12:50:47.083Z · score: 1 (1 votes) · LW(p) · GW(p)

Yes, worldbuilding was my second major (three cheers for my super-cool undergrad institution!). My initial impression of Eliezer's skills in this regard from his fiction overall is not good, but that could be because he tends not to provide very much detail. It's not impossible that the gaps could be filled in with perfectly reasonable content, so the fact that these gaps are so prevalent, distracting, and difficult to fill in might be a matter of storytelling prowess or taste rather than worldbuilding abilities. (It's certainly possible to create a great world and then do a bad job of showcasing it.) I should be able to weigh in on this one in more detail if and when I get an answer to the above question, which is a particularly good example of a distracting and difficult-to-fill-in gap.

comment by Johnicholas · 2009-11-12T16:15:04.136Z · score: 3 (3 votes) · LW(p) · GW(p)

If I understand EY's philosophy of predicting the future correctly, the gaps in the world are intentional.

Suppose that you are a futurist, and you know how hard it is to predict the future, but you're convinced that the future will be large, complicated, weird, and hard to connect directly to the present. How can you provide the reader with the sensation of a large, complicated, weird, and hard-to-connect-to-the-present future?

Note that as a futurist, the conjunction fallacy (more complete predictions are less likely to be correct) is extremely salient in your thinking.

You put deliberate gaps into your stories, any resolution of which would require a large complicated explanation - that way the reader has the desired (distracting and difficult-to-fill-in) sensation, without committing the author to any particular resolution.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T20:23:56.440Z · score: 3 (3 votes) · LW(p) · GW(p)

The author still has to know what's inside the gaps. Also, the gaps have to look coherent - they can't appear to the reader as noise, or it simply won't create the right impression, no matter what.

You may be overanalyzing here. I've never published anything that I would've considered sending in to a science fiction magazine - maybe I'm holding myself to too-high standards, but still, it's not like I'm outlining the plot and building character sheets. My goal in writing online fiction is to write it quickly so it doesn't suck up too much time (and I quite failed at this w/r/t Three Worlds Collide, but I never had the spare days to work only on the novella, which apparently comes with a really large productivity penalty).

comment by Kutta · 2009-11-13T01:03:52.203Z · score: 0 (2 votes) · LW(p) · GW(p)

I think Alicorn is certainly not overanalyzing, in the sense that fiction is always fiction and the usual methods of analysis apply regardless of the author's proclaimed intentions or the amount of resources spent on writing it. On the other hand, I think Eliezer's fiction is perfectly good enough for its purpose, and while the flaws pointed out by Alicorn are certainly there, I think it's unreasonable to expect Eliezer to write like a professional fiction author.

comment by Alicorn · 2009-11-12T16:20:53.404Z · score: 1 (3 votes) · LW(p) · GW(p)

Maybe he's a good futurist. That does not make him a good worldbuilder, even if he's worldbuilding about the future. Does it come as any surprise that the skills needed to write good fiction in well-thought-out settings aren't the exact same skills needed to make people confused about large, complicated, weird, disconnected things?

comment by Johnicholas · 2009-11-12T16:36:23.429Z · score: 1 (1 votes) · LW(p) · GW(p)

Taking your question as rhetorical, with the presumed answer "no", I agree with you - of course the skills are different. However, I hear an implication (and correct me if I'm wrong) that good fiction requires a well-thought-out setting. Surely you can think of good writers who write in badly-constructed or deeply incomplete worlds.

comment by Alicorn · 2009-11-12T16:47:00.253Z · score: 1 (3 votes) · LW(p) · GW(p)

Good fiction does not strictly require a well-built setting. A lot of fiction takes place in a setting so very like reality that the skill necessary to provide a good backdrop isn't worldbuilding, but research. Some fiction that isn't set in "the real world" still works with little to no sense of place, history, culture, or context, although this works mostly in stories that are very simple, very short, or (more typically) both. Eliezer writes speculative fiction (eliminating the first excuse), and his stories typically depend heavily on backdrop elements (eliminating the second excuse, except when he's writing fanfiction and can rely on prior reading of others' works to do this job for him).

comment by Johnicholas · 2009-11-12T19:04:31.965Z · score: 1 (1 votes) · LW(p) · GW(p)

I agree with you regarding the quality of his writing, but your generalizations regarding worldbuilding's relationship to quality may be overbroad or overstrong. Worldbuilding is fun and interesting and I like it in my books, but a lack of worldbuilding, or deep, difficult holes in the world, are not killing flaws. Almost anything can be rescued by sufficient quality in other areas. Consider Madeleine L'Engle's A Wrinkle in Time, Gene Wolfe's Book of the New Sun, Stanislaw Lem's The Cyberiad.

comment by Alicorn · 2009-11-12T19:37:13.733Z · score: 2 (4 votes) · LW(p) · GW(p)

The only one of the books you mention that I've read is Wrinkle in Time, so I'll address that one. It isn't world-driven! It's a strongly character-driven story. The planets she invents, the species she imagines, the settings she dreams up - these do not supply the thrust of the story. The people populating the book do that, and pretty, emotionally-charged prose does most of the rest. Further, L'Engle's worldbuilding isn't awful, and moreover, its weaknesses aren't distracting. It has an element of whimsy to it and it's colored by her background values, but there's nothing much in there that is outrageous and important and unexplained.

Eliezer's stories, meanwhile - I'd have to dislike them even more if I were interpreting them as being character-driven. His characters tend to be ciphers with flat voices, clothed in cliché and propped up by premise. And it's often okay to populate your stories with such characters if they aren't the point - if the point is world or premise/conceit or plot or even just raw beautiful writing. I actually think that Eliezer's fiction tends to be premise/conceit driven, not setting driven, but he backs up his premises with setting, and his settings do not appear to be up to the task. So to summarize:

A bad story element (such as setting, characterization, plot, or writing quality) may be forgivable, and not preclude the work it's found in from being good, if:

  • The bad element is not the point of the story
  • The bad element isn't indispensable to help support whatever element is the point of the story (for instance, you might get away with bad writing in a character-driven story only if you don't depend on your character's written voice to convey their personality)
  • And it is not so bad as to distract from the point of the story.

Eliezer's subpar worldbuilding slips by according to the first criterion. I don't think his stories are truly setting-driven. But it fails the second two. His settings are indispensably necessary to back up his premises. ("Three Worlds Collide" could not have been plausibly set during some encounter between three boats full of humans on Earth.) And - this one is a matter of taste to some extent, I'll grant - the settings are poor enough to be distracting. (The non-consensual sex thing is just a particularly easy target. It's hardly the only bizarre, unexplained thing he's ever dropped in.)

comment by MatthewB · 2010-01-07T07:01:54.901Z · score: 0 (2 votes) · LW(p) · GW(p)

Wouldn't cannibalism be an equally horrific thing to come up with? I can envision a future world in which cannibalism is just as accepted as non-consensual sex is in Three Worlds Collide. I mean, with the recent invention of pork grown in a petri dish, it would be just as easy to grow human meat on a petri dish.

Or, as in the movies Surrogates and Avatar: if we have bodies that we drive from a distance, and these bodies can be replaced by just growing/building a new one, might not some people kill and eat others as a type of social comment? (In the same way that some sub-cultures engage in questionable behavior in order to make a social comment: the Goth/Vampire thing, for instance, or shock artists like Karen Finley or GG Allin. I had to leave the GG Allin shows I attended as a kid, but sat in rapt fascination through Karen's song Yam Jam, and was really interested in just how she was going to remove those yams afterward, and what might have driven her to explore that particular mode of expression to begin with.)

comment by tut · 2010-01-07T07:55:59.850Z · score: 4 (4 votes) · LW(p) · GW(p)

Wouldn't cannibalism be an equally horrific thing to come up with? ... human on a petri dish.

I would not have any problems with eating human-on-a-petri-dish, as long as it never had any neurons. The problem with cannibalism is eating a person, not some cells with the wrong DNA. And cells in a petri dish are not a person.

comment by Cyan · 2010-01-07T14:08:48.130Z · score: 0 (0 votes) · LW(p) · GW(p)

The other problem with cannibalism is that you can get diseases that way far more easily than you can from eating non-human meat.

comment by MatthewB · 2010-01-07T13:53:37.403Z · score: 0 (0 votes) · LW(p) · GW(p)

I know that, and you know that... But, what would the general population say about eating meat that was the product of Human DNA?

It seems to me that some of the general population would be horribly incensed about it.

comment by DanArmak · 2010-01-07T14:08:04.547Z · score: 3 (3 votes) · LW(p) · GW(p)

Most of the general population is incensed about most things, most of the time. I've stopped caring. Why don't you?

comment by byrnema · 2010-01-07T14:32:17.882Z · score: 2 (2 votes) · LW(p) · GW(p)

If a group of people donated their bodies to cannibalism when they die for a group of cannibals to then consume them, I would have no problem with that. (I submit myself as an example of someone with moderate rather than extremely liberal views.)

I think the moral repugnance only comes in when people might be killed for food: the value of life and person-hood is so much greater than the value of an immediate meal.

Someone speculated earlier about a civilization of humans that had nothing to consume but other humans. Has it been mentioned yet that this population would shrink exponentially, because humans are heterotrophs, and there's something like only 10% efficiency from one step in the food chain to the next?

That's what was disappointing about The Matrix. If the aliens wanted to generate energy there would have been more efficient ways to do so (say, one which actually generated more energy than it required). I pretend the aliens were just farming CPU and the director got it wrong.
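The shrinkage claim can be sketched with a quick back-of-the-envelope calculation; the 10% trophic-efficiency figure is the standard ecological rule of thumb, and the starting population below is made up purely for illustration:

```python
# Back-of-the-envelope: a closed population feeding only on itself,
# with ~10% of the energy in consumed biomass converted into new biomass.
def population_after(generations, initial=1_000_000, efficiency=0.10):
    """Person-equivalents of energy remaining after each trophic step."""
    pop = initial
    for _ in range(generations):
        pop *= efficiency  # each step retains only ~10% of the energy
    return pop

# After just six trophic steps, a million person-equivalents of energy
# supports roughly one person.
remaining = population_after(6)
```

So the collapse isn't gradual; it's a tenfold loss per step, which is why a purely cannibal civilization can't sustain itself on recycled humans alone.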

comment by DanArmak · 2010-01-07T18:16:22.067Z · score: 1 (1 votes) · LW(p) · GW(p)

I think the moral repugnance only comes in when people might be killed for food: the value of life and person-hood is so much greater than the value of an immediate meal.

We already have moral repugnance towards the act of killing itself. I suspect that any feelings towards already-dead bodies exist independently of this. They may be rooted in feelings of disgust which evolved in part to protect from contamination (recently dead bodies can spread disease and also provide breeding ground for flies and parasites).

comment by byrnema · 2010-01-07T18:55:57.911Z · score: 0 (0 votes) · LW(p) · GW(p)

I don't locate feelings of disgust. Perhaps we are just genetically or culturally different with respect to this sensitivity?

I recall when my parakeet died, I felt a sense of awe while holding the body, and a moral obligation to be respectful and careful with it. I suppose I wouldn't have enjoyed eating him, but only because I identified him as more of a person than food. If I thought he would have wanted me to eat him, I would. Except then I would worry about parasites, so I would have to weigh my wish to make a symbolic gesture versus my wish to stay healthy.

comment by MatthewB · 2010-01-07T15:16:56.567Z · score: 0 (2 votes) · LW(p) · GW(p)

That was me who discussed a civilization that had nothing to consume but other humans. Thanks for bringing it up, but I had already dealt with that in the stories as soon as someone pointed out the problem when I was much younger (it turned out to be easier to fix than I thought). Telling what the solution is would give away too much, though, since I might actually be able to get these published now that cannibalism is not nearly so taboo as it was back in the 80s when I first tried to submit them (zombie movies were not nearly so prevalent then as now). Once the solution is revealed... it makes for another grim and, some have said, twisted surprise.

I too have wondered about the whole Matrix thing. There are some very good arguments against it, which I tend to give more weight than the arguments in favor. Yet the arguments in favor did not take into account the waste generated by the humans being used to support the generation of power, nor did they take into account any possible superconducting tech the AIs may have had. I cannot recall if any of them took into account that the AIs were not using every farmed human as a battery, but were using many of them as food for the living humans. There is also some evidence from the games that the AIs were using algae as a supplement for the human batteries.

Also, on the point about people donating their bodies to cannibals when they die... I have often thought that it would be a horrible joke for some cranky rich old guy to play on his heirs to make them eat him if they wished to inherit any of his fortune.

comment by Paul Crowley (ciphergoth) · 2010-01-07T15:40:38.557Z · score: 3 (3 votes) · LW(p) · GW(p)

Sod that, start a religion in which people have to symbolically eat your body and drink your blood once a week. Better yet, tell them that when they do it, it magically becomes the real thing!

comment by MatthewB · 2010-01-07T15:10:24.849Z · score: 0 (0 votes) · LW(p) · GW(p)

I would love to stop caring. It is indeed a wonderful suggestion.

However, many of those people who would be offended by such things, also get offended by many, much less offensive things, things which often may cause a loss of liberty to others... And they vote.

I do think it would behoove me to turn up my apathy just a bit, as my near-term future will have a lot more to do with my survival and ultimate value than worrying about a bunch of human cattle who like to get all bothered about things as trivial as the shape of the moon (absurd example).

comment by Vladimir_Nesov · 2010-01-07T17:15:47.449Z · score: 1 (1 votes) · LW(p) · GW(p)

Most of the general population is incensed about most things, most of the time. I've stopped caring. Why don't you?

I would love to stop caring. It is indeed a wonderful suggestion.

However, many of those people who would be offended by such things, also get offended by many, much less offensive things, things which often may cause a loss of liberty to others... And they vote.

Does your worrying about and discussing what other people believe contribute more to changing the outcome of their voting, or to other things, like personal payoff of social interaction while having the discussions about people of lower status according to this metric? Overestimating importance of personally discussing politics for policy formation is a classical mistake.

See also: Dunbar's Function

"Though it's a side issue, what's even more... interesting.... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world. We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things."

comment by MatthewB · 2010-01-07T17:30:56.340Z · score: 2 (2 votes) · LW(p) · GW(p)

I see that I may be caught up in this mistake a bit. Some of my discussing is simply to gather information about what a typical person of a demographic might believe. It's mostly confirming what I might have read about in a poll, or that data from a website shows.

Sometimes the discussion gets to the point where I try to change an attitude, and I keep tripping over myself when I do, as few people will change their attitudes, political and/or religious, without some form of emotional connection to the reason to change.

This is sort of why I am here. I wish to stop using my valuable brain time to convince people of things which I haven't a hope of changing, and do something else which may contribute to the good of society in a more direct way.

I am a mess of paranoid contradictions gathered from a mis-spent youth, and I wish to untangle some of that irrationality, as it is an intellectual drag on my progress.

comment by Stuart_Armstrong · 2009-11-11T11:42:21.298Z · score: 21 (27 votes) · LW(p) · GW(p)

Your approach to AI seems to involve solving every issue perfectly (or very close to perfection). Do you see any future for more approximate, rough and ready approaches, or are these dangerous?

comment by Psy-Kosh · 2009-11-11T03:14:37.840Z · score: 21 (39 votes) · LW(p) · GW(p)

Could you (Well, "you" being Eliezer in this case, rather than the OP) elaborate a bit on your "infinite set atheism"? How do you feel about the set of natural numbers? What about its power set? What about that thing's power set, etc?

From the other direction, why aren't you an ultrafinitist?

comment by [deleted] · 2009-11-11T04:51:53.734Z · score: 3 (3 votes) · LW(p) · GW(p)

Earlier today, I pondered whether this infinite set atheism thing is something Eliezer merely claims to believe as some sort of test of basic rationality. It's a belief that, as far as I can tell, makes no prediction.

But here's what I predict that I would say if I had Eliezer's opinions and my mathematical knowledge: I'm a fan of thinking of ZFC as being its countably infinite model, in which the class of all sets is enumerable, and every set has a finite representation. Of course, things like the axiom of infinity and Cantor's diagonal argument still apply; it's just that "uncountably infinite set" simply means "set whose bijection with the natural numbers is not contained in the model".

(Yes, ZFC has a countable model, assuming it's consistent. I would call this weird, but I hate admitting that any aspect of math is counterintuitive.)

comment by Johnicholas · 2009-11-11T11:26:29.044Z · score: 8 (8 votes) · LW(p) · GW(p)

ZFC's countable model isn't that weird.

Imagine a computer programmer, watching a mathematician working at a blackboard. Imagine asking the computer programmer how many bytes it would take to represent the entities that the mathematician is manipulating, in a form that can support those manipulations.

The computer programmer will do a back of the envelope calculation, something like: "The set of all natural numbers" is 30 characters, and essentially all of the special symbols are already in Unicode and/or TeX, so probably hundreds, maybe thousands of bytes per blackboard, depending. That is, the computer programmer will answer "syntactically".

Of course, the mathematician might claim that the "entities" that they're manipulating are more than just the syntax, and are actually much bigger. That is, they might answer "semantically". Mathematicians are trained to see past the syntax to various mental images. They are trained to answer questions like "how big is it?" in terms of those mental images. A math professor asking "How big is it?" might accept answers like "it's a subset of the integers" or "It's a superset of the power set of reals". The programmer's answer of "maybe 30 bytes" seems, from the mathematical perspective, about as ridiculous as "It's about three feet long right now, but I can write it longer if you want".

The weirdly small models are only weirdly small if what you thought you were manipulating was something other than finite (and therefore Godel-numberable) syntax.

comment by Tyrrell_McAllister · 2009-11-12T15:06:26.325Z · score: 0 (0 votes) · LW(p) · GW(p)

Of course, the mathematician might claim that the "entities" that they're manipulating are more than just the syntax, and are actually much bigger. That is, they might answer "semantically".

Models are semantics. The whole point of models is to give semantic meaning to syntactic strings.

I haven't studied the proof of the Löwenheim–Skolem theorem, but I would be surprised if it were as trivial as the observation that there are only countably many sentences in ZFC. It's not at all clear to me that you can convert the language in which ZFC is expressed into a model for ZFC in a way that would establish the Löwenheim–Skolem theorem.

comment by Johnicholas · 2009-11-12T15:46:53.839Z · score: 3 (3 votes) · LW(p) · GW(p)

I have studied the proof of the (downward) Lowenheim-Skolem theorem - as an undergraduate, so you should take my conclusions with some salt - but my understanding of the (downward) Lowenheim-Skolem theorem was exactly that the proof builds a model out of the syntax of the first-order theory in question.

I'm not saying that the proof is trivial - what I'm saying is that holding Godel-numberability and the possibility of a strict formalist interpretation of mathematics in your mind provides a helpful intuition for the result.
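For reference, the theorem being discussed can be stated as follows (the downward form, in its usual first-order statement; the cardinal bound given is the standard one):

```latex
\textbf{Theorem (downward L\"owenheim--Skolem).}
Let $T$ be a theory in a countable first-order language $L$.
If $T$ has an infinite model, then $T$ has a countable model.
More generally, for any infinite $L$-structure $\mathcal{M}$ and any
subset $X \subseteq M$, there is an elementary substructure
$\mathcal{N} \preceq \mathcal{M}$ with $X \subseteq N$ and
$|N| \le |X| + |L| + \aleph_0$.

% Applied to ZFC: the language of set theory is countable, so if ZFC is
% consistent (and hence has a model), it has a countable model. This is
% Skolem's ``paradox'': that countable model nonetheless satisfies the
% sentence asserting that uncountable sets exist, because the relevant
% bijections with $\mathbb{N}$ are not elements of the model.
```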

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T15:14:33.693Z · score: 6 (6 votes) · LW(p) · GW(p)

Earlier today, I pondered whether this infinite set atheism thing is something Eliezer merely claims to believe as some sort of test of basic rationality.

I've said this before in many places, but I simply don't do that sort of thing. If I want to say something flawed just to see how my readers react to it, I put it into the mouth of a character in a fictional story; I don't say it in my own voice.

comment by [deleted] · 2009-11-13T02:27:09.547Z · score: 0 (0 votes) · LW(p) · GW(p)

I swear I meant to say that, knowing you, you probably wouldn't do such a thing.

comment by komponisto · 2009-11-11T04:41:57.328Z · score: 0 (0 votes) · LW(p) · GW(p)

Uh-oh; if he takes this up, I may finally have to write that post I promised back in June!

comment by Nominull · 2009-11-11T03:34:59.638Z · score: 0 (0 votes) · LW(p) · GW(p)

Well obviously if a set is finite, no amount of taking its power set is going to change that fact.

comment by Psy-Kosh · 2009-11-11T04:29:36.582Z · score: 1 (1 votes) · LW(p) · GW(p)

I meant "the set of all natural numbers", IIRC, he's explicitly said he's not an ultrafinitist, so either he considers that as an acceptable infinite set, or he considers the natural numbers to exist, but not the set of them, or something.

I meant "if you accept countable infinities, where and how do you consider the whole Cantor hierarchy to break down?"

comment by DanArmak · 2009-11-11T10:34:56.768Z · score: 1 (1 votes) · LW(p) · GW(p)

What would it even mean for the natural numbers (the entire infinity of them) to "exist"?

What makes a set acceptable or not?

comment by cousin_it · 2009-11-12T11:55:55.381Z · score: 2 (2 votes) · LW(p) · GW(p)

This question sounds weird to me.

I find it best not to speak about "existence", but speak instead of logical models that work. For example, we don't know if our concept of integers is consistent, but we have evolved a set of tools for reasoning about it that have been quite useful so far. Now we try to add new reasoning tools, new concepts, without breaking the system. For example, if we imagine "the set of all sets" and apply some common reasoning to it, we reach Russell's paradox; but we can't feed this paradox back into the integers to demonstrate their inconsistency, so we just throw the problematic concept away with no harm done.
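For reference, the paradox mentioned above arises from applying unrestricted comprehension to the predicate "is not a member of itself":

```latex
% Russell's paradox: let R be the set of all sets that do not
% contain themselves.
R \;=\; \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R.

% Either answer to ``is R a member of itself?'' contradicts itself,
% so the comprehension principle that produced R must be restricted
% (as ZFC's separation axiom does), rather than the integers revised.
```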

comment by DanArmak · 2009-11-14T00:09:08.284Z · score: 1 (1 votes) · LW(p) · GW(p)

It sounds weird to me too, which is why I asked it - because Psy-Kosh said EY said something about integers, or the set of integers, existing or not.

comment by Douglas_Knight · 2009-11-14T02:55:37.619Z · score: 0 (0 votes) · LW(p) · GW(p)

because Psy-Kosh said EY said [infinite set atheist]

secondary sources? bah!

LMGTFY (or the full experience)

comment by bogdanb · 2009-11-11T23:07:15.797Z · score: 20 (36 votes) · LW(p) · GW(p)

How did you win any of the AI-in-the-box challenges?

comment by righteousreason · 2009-11-12T02:47:29.877Z · score: 9 (9 votes) · LW(p) · GW(p)

http://news.ycombinator.com/item?id=195959

"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...

All right, this much of a hint:

There's no super-clever special trick to it. I just did it the hard way.

Something of an entrepreneurial lesson there, I guess."

comment by bogdanb · 2010-01-10T00:34:42.389Z · score: 0 (0 votes) · LW(p) · GW(p)

I know that part. I was hoping for a bit more...

comment by Unnamed · 2009-11-17T02:22:58.154Z · score: 7 (7 votes) · LW(p) · GW(p)

Here's an alternative question if you don't want to answer bogdanb's: When you won AI-Box challenges, did you win them all in the same way (using the same argument/approach/tactic) or in different ways?

comment by Yorick_Newsome · 2009-11-12T01:26:30.418Z · score: 4 (4 votes) · LW(p) · GW(p)

Something tells me he won't answer this one. But I support the question! I'm awfully curious as well.

comment by CronoDAS · 2009-11-16T09:50:09.743Z · score: 2 (2 votes) · LW(p) · GW(p)

Perhaps this would be a more appropriate version of the above:

What suggestions would you give to someone playing the role of an AI in an AI-Box challenge?

comment by SilasBarta · 2009-11-12T20:59:57.811Z · score: 2 (6 votes) · LW(p) · GW(p)

Voted down. Eliezer Yudkowsky has made clear he's not answering that, and it seems like an important issue for him.

comment by wedrifid · 2009-11-15T10:24:23.333Z · score: 3 (3 votes) · LW(p) · GW(p)

Voted back up. He will not answer but there's no harm in asking. In fact, asking serves to raise awareness both on the surprising (to me at least) result and also on the importance Eliezer places on the topic.

comment by SilasBarta · 2009-11-16T01:05:36.182Z · score: -1 (3 votes) · LW(p) · GW(p)

Yes, there is harm in asking. Provoking people to break contractual agreements they've made with others and have made clear they regard as vital, generally counts as Not. Cool.

comment by Jordan · 2009-11-16T01:50:00.193Z · score: 3 (5 votes) · LW(p) · GW(p)

In this case though, it's clear that Eliezer wants people to get something out of knowing about the AI box experiments. That's my extrapolated Eliezer volition at least. Since for me and many others we can't get anything out of the experiments without knowing what happened, I feel it is justified to question Eliezer where we see a contradiction in his stated wishes and our extrapolation of his volition.

In most situations I would agree that it's not cool to push.

comment by wedrifid · 2009-11-16T08:38:19.436Z · score: 1 (1 votes) · LW(p) · GW(p)

As the OP said, Eliezer hasn't been subpoenaed. The questions here are merely stimulus to which he can respond with whichever insights or signals he desires to convey. For what little it is worth my 1.58 bits is 'up'.

(At least, if it is granted that a given person has read a post and that his voting decision is made actively then I think I would count it as 1.58 bits. It's a little blurry.)

comment by [deleted] · 2009-11-17T02:11:00.301Z · score: 1 (1 votes) · LW(p) · GW(p)

It depends on the probability distribution of comments.

comment by wedrifid · 2009-11-17T02:38:05.718Z · score: 0 (0 votes) · LW(p) · GW(p)

It depends on the probability distribution of comments.

Good point. Probability distribution of comments relative to those doing the evaluation.
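The 1.58-bit figure is $\log_2 3 \approx 1.585$: the most information a single three-way choice (up, down, abstain) can carry, attained only when all three outcomes are equally likely. A quick sketch of how a skewed distribution conveys less (the skewed numbers are made up for illustration):

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform over {up, down, abstain}: the maximum, log2(3) ~ 1.585 bits.
uniform = entropy_bits([1/3, 1/3, 1/3])

# If most readers abstain, each vote carries well under a bit.
skewed = entropy_bits([0.1, 0.1, 0.8])
```

This matches the follow-up point: the information per vote depends on the probability distribution of votes relative to whoever is evaluating them.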

comment by bogdanb · 2010-01-10T00:33:51.426Z · score: 0 (0 votes) · LW(p) · GW(p)

IIRC* the agreement was to not disclose the contents of a contest without the agreement of both participants. My hope was not that Eliezer might break his word, but that evidence of continued interest in the matter might persuade him to obtain permission from at least one of his former opponents. (And to agree himself, as the case may be.)

(*: and my question was based on that supposition)

comment by evtujo · 2009-11-11T05:09:36.429Z · score: 20 (30 votes) · LW(p) · GW(p)

How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?

comment by taa21 · 2009-11-11T21:01:31.162Z · score: 6 (10 votes) · LW(p) · GW(p)

Just out of curiosity, why are you asking this? And why is Yudkowsky's opinion on this matter relevant?

comment by spriteless · 2009-11-15T23:00:04.119Z · score: 2 (2 votes) · LW(p) · GW(p)

This sort of thing should have its own thread; it deserves some brainstorming.

You can start with choice of fairytales.

You can make the games available to them reward an understanding of probabilities and logic over luck and quick reflexes. My dad got us puzzle games and reading tutors for the NES and C64 when I was a kid (Lode Runner, LoLo, Reader Rabbit).

comment by anon · 2009-11-14T15:45:50.926Z · score: 19 (21 votes) · LW(p) · GW(p)

For people not directly involved with SIAI, is there specific evidence that it isn't run by a group of genuine sociopaths with the goal of taking over the universe to fulfill (a compromise of) their own personal goals, which are fundamentally at odds with those of humanity at large?

Humans have built-in adaptations for lie detection, but betting a decision like this on the chance of my sense motive roll beating the bluff roll of a person with both higher INT and CHA than myself seems quite risky.

Published writings about moral integrity and ethical injunctions count for little in this regard, because they may have been written with the specific intent to deceive people into supporting SIAI financially. The fundamental issue seems rather similar to the AIBox problem: You're dealing with a potential deceiver more intelligent than yourself, so you can't really trust anything they say.

I wouldn't be asking this for positions that call for merely human responsibility, like being elected to the highest political office in a country, having direct control over a bunch of nuclear weapons, or anything along those lines; but FAI implementation calls for much more responsibility than that.

If the answer is "No. You'll have to do with the base probability of any random human being a sociopath.", that might be good enough. Still, I'd like to know if I'm missing specific evidence that would push the probability for "SIAI is capital-E Evil" lower than that.

Posted pseudo-anonymously because I'm a coward.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-15T22:22:58.082Z · score: 11 (11 votes) · LW(p) · GW(p)

I guess my main answers would be, in order:

1) You'll have to do with the base probability of a highly intelligent human being a sociopath.

2) Elaborately deceptive sociopaths would probably fake something other than our own nerdery...? Even taking into account the whole "But that's what we want you to think" thing.

3) All sorts of nasty things we could be doing and could probably get away with doing if we had exclusively sociopath core personnel, at least some of which would leave visible outside traces while still being the sort of thing we could manage to excuse away by talking fast enough.

4) Why are you asking me that? Shouldn't you be asking, like, anyone else?

comment by anon · 2009-11-16T21:09:24.076Z · score: 0 (0 votes) · LW(p) · GW(p)

Re. 4, not for the way I asked the question. Obviously asking for a probability, or any empirical evidence I would have to take your word on, would have been silly. But there might have been excellent public evidence against the Evil hypothesis I just wasn't aware of (I couldn't think of any likely candidates, but that might have been a failure of my imagination); in that case, you would likely be aware of such evidence, and would have a significant incentive to present it. It was a long shot.

comment by anonymousss · 2012-04-18T08:20:13.159Z · score: 1 (1 votes) · LW(p) · GW(p)

I looked into the issue from a statistical point of view. I would have to go with a much higher than baseline probability of their being sociopaths, on the basis of Bayesian reasoning: start with the baseline probability (about 1%) as a prior, then update on criteria that sociopaths cannot easily fake (such as previously having invented something that works).

Ultimately, the easy way to spot a sociopath is to look for a massive imbalance of the observable signals towards those that sociopaths can easily fake. You don't need to be smarter than the sociopath to identify the sociopath. A spam filter is pretty good at filtering out advance-fee fraud while letting business correspondence through.

You just need to act like a statistical prediction rule on a set of criteria, without allowing for verbal excuses of any kind, no matter how logical they sound. For instance, the leaders of genuine research institutions are not HS dropouts, while the leaders of cults often are; you can find the ratio and build an evidential Bayesian rule, with which you can use "is a HS dropout" as evidence to adjust your probabilities.

The beauty of this method is that honest signals are too expensive for sociopaths to fake. Having spent years making and perfecting some invention that improved people's lives, for example, is a signal you cannot send without doing an immense amount of work. So even when sociopaths are aware of this method, there is literally nothing they can do about it; nor do they particularly want to, since there are enough people (gullible ones) who pay no attention to the ratio of certainly-honest signals to fakeable signals, whom sociopaths can target instead for a better reward-to-work ratio.

Ultimately, it boils down to the fact that a genuine world-saving leader is rather unlikely to have never before invented anything that demonstrably benefited mankind, while a sociopath is pretty likely (close to 1) to have never done so. You update on this, ignore verbal excuses, and you have yourself a (nearly) non-exploitable decision mechanism.
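The odds-form Bayesian update this comment describes can be sketched in a few lines. All of the numbers below (the 1% prior and both likelihood ratios) are illustrative assumptions for the sketch, not real measurements:

```python
# Statistical-prediction-rule sketch: multiply prior odds by a likelihood
# ratio for each observed signal, ignoring verbal excuses entirely.
# All numbers are made-up assumptions, not empirical base rates.

def posterior_odds(prior_p, likelihood_ratios):
    """Convert a prior probability to odds, then apply each likelihood ratio."""
    odds = prior_p / (1 - prior_p)
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_p(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior = 0.01  # assumed ~1% baseline rate of sociopathy

# Assumed likelihood ratios P(signal | sociopath) / P(signal | not sociopath):
signals = {
    "no hard-to-fake track record of useful work": 3.0,
    "heavy reliance on easily faked verbal signals": 2.0,
}

p = odds_to_p(posterior_odds(prior, signals.values()))
print(f"posterior probability: {p:.3f}")  # ~0.057, up from the 0.01 prior
```

The point of the odds form is that each hard-to-fake criterion contributes a multiplicative factor, so no single verbal excuse can undo the accumulated evidence.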

comment by timtyler · 2009-11-15T22:57:16.203Z · score: 0 (0 votes) · LW(p) · GW(p)

What would be the best way of producing such evidence? Presumably, organisational transparency, though that could, in principle, be faked.

I'm not sure they will go for that, for the same reasons previously given for not planning to open-source everything.

comment by anonym · 2009-11-14T21:44:56.538Z · score: 18 (20 votes) · LW(p) · GW(p)

What progress have you made on FAI in the last five years and in the last year?

comment by JulianMorrison · 2009-11-13T16:24:48.036Z · score: 18 (18 votes) · LW(p) · GW(p)

How do you characterize the success of your attempt to create rationalists?

comment by Johnicholas · 2009-11-11T11:43:39.315Z · score: 18 (20 votes) · LW(p) · GW(p)

What are your current techniques for balancing thinking and meta-thinking?

For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.

comment by roland · 2009-11-19T00:55:47.995Z · score: 0 (0 votes) · LW(p) · GW(p)

Does meta-thinking include the gathering of more information for solving the problem? I think it should.

comment by DanArmak · 2009-11-11T10:47:53.364Z · score: 18 (22 votes) · LW(p) · GW(p)

Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and by type of originator (research, military, industry, unplanned evolution from non-general AI...)

comment by MichaelVassar · 2009-11-13T05:28:45.111Z · score: 2 (2 votes) · LW(p) · GW(p)

Mine would be slightly less than 10% by 2030, and slightly greater than 85% by 2080 conditional on a general continuity of our civilization between now and 2080. The most likely method of origination depends on how far out we look: more brain-inspired methods tend to come later and to be more likely in absolute terms.

comment by alyssavance · 2009-11-13T00:06:27.645Z · score: 2 (2 votes) · LW(p) · GW(p)

We at SIAI have been working on building a mathematical model of this since summer 2008. See Michael Anissimov's blog post at http://www.acceleratingfuture.com/michael/blog/2009/02/the-uncertain-future-simple-ai-self-improvement-models/. You (or anyone else reading this) can contact us at uncertainfuture@intelligence.org if you're interested in helping us test the model.

comment by retired_phlebotomist · 2009-11-13T07:10:04.099Z · score: 16 (16 votes) · LW(p) · GW(p)

If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?

comment by Peter_de_Blanc · 2009-11-16T04:30:50.988Z · score: 0 (2 votes) · LW(p) · GW(p)

About what? Everything?

comment by gwern · 2009-11-16T04:59:06.833Z · score: 3 (3 votes) · LW(p) · GW(p)

Given the context of Eliezer's life-mission and the general agreement of Robin & Eliezer: FAI, AI's timing, and its general character.

comment by retired_phlebotomist · 2009-11-17T07:22:17.103Z · score: 1 (1 votes) · LW(p) · GW(p)

Right. Robin doesn't buy the "AI go foom" model or that formulating and instilling a foolproof morality/utility function will be necessary to save humanity.

I do miss the interplay between the two at OB.

comment by jimrandomh · 2009-11-12T02:44:20.898Z · score: 16 (28 votes) · LW(p) · GW(p)

What is the probability that this is the ultimate base layer of reality?

comment by MichaelHoward · 2009-11-12T23:24:19.021Z · score: 0 (0 votes) · LW(p) · GW(p)

And then... Really? What would be a fair estimate if you were someone not especially likely to be simulated, living in a not particularly critical time, and there was only, say, a trillionth as much potential computronium lying around?

comment by MichaelBishop · 2009-11-12T16:13:59.104Z · score: 0 (0 votes) · LW(p) · GW(p)

could you explain more what this means?

comment by jimmy · 2009-11-12T19:13:05.631Z · score: 2 (2 votes) · LW(p) · GW(p)

I think he means "as opposed to living in a simulation (possibly in another simulation, and so on)"

This seems to be one of those questions that seem like they should have an answer, but actually don't.

If there's at least one copy of you in "a simulation" and at least one in "base level reality", then you're going to run into the same problems as sleeping beauty/absent minded driver/etc when you deal with 'indexical probabilities'.

There are decision theory answers, but the ones that work don't mention indexical probabilities. This does make the situation a bit harder than, say, the sleeping beauty problem, since you have to figure out how to weight your utility function over multiple universes.

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-11T06:51:41.544Z · score: 16 (22 votes) · LW(p) · GW(p)

Who was the most interesting would-be FAI solver you encountered?

comment by alyssavance · 2009-11-13T00:02:25.019Z · score: 5 (5 votes) · LW(p) · GW(p)

As far as I can tell (this is not Eliezer's or SIAI's opinion), the people who have contributed the most to FAI theory are Eliezer, Marcello Herreshoff, Michael Vassar, Wei Dai, Nick Tarleton, and Peter de Blanc in that order.

comment by MichaelGR · 2009-11-17T01:54:16.110Z · score: 15 (15 votes) · LW(p) · GW(p)

Say I have $1000 to donate. Can you give me your elevator pitch about why I should donate it (in totality or in part) to the SIAI instead of to the SENS Foundation?

Updating top level with expanded question:

I ask because that's my personal dilemma: SENS or SIAI, or maybe both, but in what proportions?

So far I've donated roughly 6x more to SENS than to SIAI because, while I think a friendly AGI is "bigger", it seems like SENS has a higher probability of paying off first, which would stop the massacre of aging and help ensure I'm still around when a friendly AGI is launched if it ends up taking a while (usual caveats; existential risks, etc).

It also seems to me like more dollars for SENS are almost assured to result in a faster rate of progress (more people working in labs, more compounds screened, more and better equipment, etc), while more dollars for the SIAI doesn't seem like it would have quite such a direct effect on the rate of progress (but since I know less about what the SIAI does than about what SENS does, I could be mistaken about the effect that additional money would have).

If you don't want to pitch SIAI over SENS, maybe you could discuss these points so that I, and others, are better able to make informed decisions about how to spend our philanthropic monies.

comment by AnnaSalamon · 2009-11-19T06:57:55.313Z · score: 28 (28 votes) · LW(p) · GW(p)

Hi there MichaelGR,

I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization is lives-saving.

Regarding what SIAI could do with a marginal $1000, the one sentence version is: “more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks”. My impression is that we are strongly money-limited at the moment: more donations allows us to more significantly reduce existential risk.

In more detail:

Existential risk can be reduced by (among other pathways):

  1. Getting folks with money, brains, academic influence, money-making influence, and other forms of power to take UFAI risks seriously; and
  2. Creating better strategy, and especially, creating better well-written, credible, readable strategy, for how interested people can reduce AI risks.

SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projects we can’t currently get to. I’ll list some examples of projects we have recently done, and their fruits, to give you a sense of what this looks like:

Academic talks and journal articles (which have given us a number of high-quality academic allies, and have created more academic literature and hence increased academic respectability for AI risks):

  • “Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, myself, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy in July ‘09 (ECAP))
  • “Arms Control and Intelligence Explosions”, by Carl Shulman (Also presented at ECAP)
  • “Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference of Computing and Philosophy in October ‘09 (APCAP))
  • “Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton (Also presented at APCAP);
  • “Long-term AI forecasting: Building methodologies that work”, an invited presentation by myself at the Santa Fe Institute conference on forecasting;
  • And several more at various stages of the writing process, including some journal papers.

The Singularity Summit, and the academic workshop discussions that followed it. (This was a net money-maker for SIAI if you don’t count Michael Vassar’s time; if you do count his time the Summit roughly broke even, but created significant increased interest among academics, among a number of potential donors, and among others who may take useful action in various ways; some good ideas were generated at the workshop, also.)

The 2009 SIAI Summer Fellows Program (This cost about $30k, counting stipends for the SIAI staff involved. We had 15 people for varying periods of time over 3 months. Some of the papers above were completed there; also, human capital gains were significant, as at least three of the program’s graduates have continued to do useful research with the skills they gained, and at least three others plan to become long-term donors who earn money and put it toward existential risk reduction.)

Miscellaneous additional examples:

  • The “Uncertain Future” AI timelines modeling webapp (currently in alpha)
  • A decision theory research paper discussing the idea of "acausal trade" in various decision theories, and its implications for the importance of the decision theory built into powerful or seed AIs (this project is being funded by Less Wronger 'Utilitarian')
  • Planning and market research for a popular book on AI risks and FAI (just started, with a small grant from a new donor)
  • A pilot program for conference grants to enable the presentation of work relating to AI risks (also just getting started, with a second small grant from the same donor)
  • Internal SIAI strategy documents, helping sort out a coherent strategy for the activities above.

(This activity is a change from past time-periods: SIAI added a bunch of new people and project-types in the last year, notably our president Michael Vassar, and also Steve Rayhawk, myself, Michael Anissimov, volunteer Zack Davis, and some longer-term volunteers from the SIAI Summer Fellows Program mentioned above.)

(There are also core SIAI activities that are not near the margin but are supported by our current donation base, notably Eliezer’s writing and research.)

How efficiently can we turn a marginal $1000 into more rapid project-completion?

As far as I can tell, rather efficiently. The skilled people we have today are booked and still can’t find time for all the high-value projects in our backlog (including many academic papers for which the ideas have long been floating around, but which aren’t yet written where academia can see and respond to them). A marginal $1k can buy nearly an extra person-month of effort from a Summer Fellow type; such research assistants can speed projects now, and will probably be able to lead similar projects by themselves (or with new research assistants) after a year of such work.

As to SIAI vs. SENS:

SIAI and SENS have different aims, so which organization gives you more goodness per dollar will depend somewhat on your goals. SIAI is aimed at existential risk reduction, and offers existential risk reduction at a rate that I might very crudely ballpark at 8 expected current lives saved per dollar donated (plus an orders of magnitude larger number of potential future lives). You can attempt a similar estimate for SENS by estimating the number of years that SENS advances the timeline for longevity medicine, looking at global demographics, and adjusting for the chances of existential catastrophe while SENS works.

The Future of Humanity Institute at Oxford University is another institution that is effectively reducing existential risk and that could do more with more money. You may wish to include them in your comparison study. (Just don’t let the number of options distract you from in fact using your dollars to purchase expected goodness.)

There’s a lot more to say on all of these points, but I’m trying to be brief -- if you want more info on a specific point, let me know which.

It may also be worth mentioning that SIAI accepts donations earmarked for specific projects (provided we think the projects worthwhile). If you’re interested in donating but wish to donate to a specific current or potential project, please email me: anna at singinst dot org. (You don’t need to fully know what you’re doing to go this route; for anyone considering a donation of $1k or more, I’d be happy to brainstorm with you and to work something out together.)

comment by Kaj_Sotala · 2009-11-20T10:05:06.620Z · score: 14 (16 votes) · LW(p) · GW(p)

Please post a copy of this comment as a top-level post on the SIAI blog.

comment by Rain · 2010-03-23T02:27:07.788Z · score: 8 (8 votes) · LW(p) · GW(p)

You can donate to FHI too? Dang, now I'm conflicted.

Wait... their web form only works with UK currency, and the Americas form requires FHI to be a write-in and may not get there appropriately.

Crisis averted by tiny obstacles.

comment by Kutta · 2009-12-03T23:27:26.117Z · score: 7 (7 votes) · LW(p) · GW(p)

at 8 expected current lives saved per dollar donated

Even though there is a large margin of error, this is at least 1500 times more effective than the best death-averting charities according to GiveWell. There is a side note, though: while normal charities are incrementally beneficial, SIAI (roughly speaking) has only two possible modes, a total-failure mode and a total-win mode. Still, expected utility is expected utility. A paltry 150 dollars to save as many lives as Schindler... It's a shame warm fuzzies scale up so badly...
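Taking Anna's crude 8-lives-per-dollar ballpark at face value, the arithmetic behind the Schindler comparison and the "1500 times" figure checks out (the ~1,200 figure for Schindler is the commonly cited one, and the implied GiveWell cost per life is a back-of-the-envelope assumption):

```python
# Sanity-checking the comment's arithmetic under Anna's stated ballpark.
lives_per_dollar = 8      # Anna's very crude estimate for SIAI
schindler_lives = 1200    # commonly cited count of lives Schindler saved

# Dollars needed to match Schindler at that rate:
print(schindler_lives / lives_per_dollar)  # 150.0

# The "1500x more effective" claim implies the best conventional charity
# saves roughly one life per (1500 / 8) dollars:
print(1500 / lives_per_dollar)  # 187.5
```

A cost of roughly $190 per life saved is in the ballpark GiveWell quoted for its top charities at the time, so the two figures are at least mutually consistent.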

comment by Wei_Dai · 2009-11-20T09:15:26.466Z · score: 5 (5 votes) · LW(p) · GW(p)

Someone should update SIAI's recent publications page, which is really out of date. In the meantime, I found two of the papers you referred to using Google:

comment by Pablo_Stafforini · 2010-03-23T01:19:32.605Z · score: 1 (1 votes) · LW(p) · GW(p)

Those interested in the cost-effectiveness of donations to the SIAI may also want to check Alan Dawrst's donation recommendation. (Dawrst is "Utilitarian", the donor that Anna mentions above.)

comment by StefanPernar · 2009-11-24T11:44:10.983Z · score: 1 (1 votes) · LW(p) · GW(p)

Thanks for that Anna. I could only find two of the five Academic talks and journal articles you mentioned online. Would you mind posting all of them online and point me to where I will be able to access them?

comment by MichaelGR · 2009-11-20T18:53:25.049Z · score: 0 (0 votes) · LW(p) · GW(p)

Thank you very much, Anna.

This will help me decide, and I'm sure that it will help others too.

I second Katja's idea; a version of this should be posted on the SIAI blog.

comment by Kaj_Sotala · 2009-11-20T19:30:29.802Z · score: 2 (2 votes) · LW(p) · GW(p)

I second Katja's idea;

Kaj's. :P

comment by MichaelGR · 2009-11-20T20:35:57.227Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm sorry, for some reason I thought you were Katja Grace. My mistake.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-17T02:17:48.980Z · score: 6 (6 votes) · LW(p) · GW(p)

I tend to regard the SENS Foundation as major fellow travelers, and think that we both tend to benefit from each other's positive publicity. For this reason I've usually tended to avoid this kind of elevator pitch!

Pass to Michael Vassar: Should I answer this?

comment by MichaelGR · 2009-11-17T03:50:51.243Z · score: 3 (3 votes) · LW(p) · GW(p)

[I've moved what was here to the top level comment]

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-18T00:47:34.268Z · score: 2 (2 votes) · LW(p) · GW(p)

I'll flag Vassar or Salamon to describe what sort of efforts SIAI would like to marginally expand into as a function of our present and expected future reliability of funding.

comment by MichaelGR · 2009-11-12T01:28:03.996Z · score: 15 (15 votes) · LW(p) · GW(p)

Are the book(s) based on your series of posts at OB/LW still happening? Any details on their progress (title? release date? e-book or real book? approached publishers yet? only technical books, or popular book too?), or on why they've been put on hold?

http://lesswrong.com/lw/jf/why_im_blooking/

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-12T05:04:20.035Z · score: 8 (8 votes) · LW(p) · GW(p)

Yes, that is my current project.

comment by Bindbreaker · 2009-11-11T07:53:15.232Z · score: 15 (17 votes) · LW(p) · GW(p)

In one of the discussions surrounding the AI-box experiments, you said that you would be unwilling to use a hypothetical fully general argument/"mind hack" to cause people to support SIAI. You've also repeatedly said that the friendly AI problem is a "save the world" level issue. Can you explain the first statement in more depth? It seems to me that if anything really falls into "win by any means necessary" mode, saving the world is it.

comment by kpreid · 2009-11-11T13:00:14.028Z · score: 10 (12 votes) · LW(p) · GW(p)

This comes to mind:

But why not become an expert liar, if that's what maximizes expected utility? Why take the constrained path of truth, when things so much more important are at stake?

Because, when I look over my history, I find that my ethics have, above all, protected me from myself. They weren't inconveniences. They were safety rails on cliffs I didn't see.

I made fundamental mistakes, and my ethics didn't halt that, but they played a critical role in my recovery. When I was stopped by unknown unknowns that I just wasn't expecting, it was my ethical constraints, and not any conscious planning, that had put me in a recoverable position.

You can't duplicate this protective effect by trying to be clever and calculate the course of "highest utility". The expected utility just takes into account the things you know to expect. It really is amazing, looking over my history, the extent to which my ethics put me in a recoverable position from my unanticipated, fundamental mistakes, the things completely outside my plans and beliefs.

Ethics aren't just there to make your life difficult; they can protect you from Black Swans. A startling assertion, I know, but not one entirely irrelevant to current affairs.

Protected From Myself

comment by ABranco · 2009-11-14T13:55:41.787Z · score: 14 (18 votes) · LW(p) · GW(p)

Do you feel lonely often? How bad (or important) is it?

(Above questions are a corollary of:) Do you feel that — as you improve your understanding of the world more and more —, there are fewer and fewer people who understand you and with whom you can genuinely relate in a personal level?

comment by MichaelGR · 2009-11-11T21:20:33.895Z · score: 14 (20 votes) · LW(p) · GW(p)

In 2007, I wrote a blog post titled Stealing Artificial Intelligence: A Warning for the Singularity Institute.

Short summary: After a few more major breakthroughs, when AGI is almost ready, AI will no doubt appear on the radar of many powerful organizations, such as governments. They could spy on AGI researchers and steal the code when it is almost ready (or ready, but not yet Certified Friendly) and launch their copy first, but without all the care and understanding required.

If you think there's a real danger there, could you tell us what the SIAI is doing to minimize it? If it doesn't apply to the SIAI, do you know if other groups working on AGI have taken this into consideration? And if this scenario is not realistic, could you tell us why?

comment by MichaelVassar · 2009-11-13T05:13:31.611Z · score: 8 (8 votes) · LW(p) · GW(p)

I strongly disagree with the claim that it is likely that AGI will appear on the radar of powerful organizations just because it is almost ready. That doesn't match the history of scientific (as opposed to largely technological) breakthroughs, in my reading. Uploading, maybe, as there is likely to be a huge engineering project even after the science is done, though the science might be done in secret. With AGI, the science IS the project.

comment by roland · 2009-11-19T01:00:57.437Z · score: 0 (0 votes) · LW(p) · GW(p)

Well that will depend on the people in power grasping its importance.

comment by Johnicholas · 2009-11-12T04:03:23.881Z · score: 4 (8 votes) · LW(p) · GW(p)

Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode. It's far more likely that SIAI is slower at developing (both Friendly and unFriendly) AI than the rest of the world. It's quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI.

comment by RobQ · 2009-11-13T04:21:43.548Z · score: 5 (9 votes) · LW(p) · GW(p)

Fear of theft is a crank trope? As someone who makes a living providing cyber security I have to say you have no idea of the daily intrusions US companies experience from foreign governments and just simple criminals.

comment by MichaelVassar · 2009-11-15T16:54:07.823Z · score: 3 (3 votes) · LW(p) · GW(p)

Theft of higher-level, more abstract ideas is much rarer. It happens both in Hollywood films and in the real Hollywood, but not so frequently, as far as I can tell, in most industries. More frequently, people can't get others to follow up on high-generality ideas. Apple and Microsoft, for instance, stole ideas from Xerox that Xerox had been sitting on for years; they didn't steal ideas that Xerox was working on and compete with Xerox.

comment by MichaelGR · 2009-11-18T18:35:54.498Z · score: 0 (0 votes) · LW(p) · GW(p)

Indeed, but my point is that AGI isn't a film or normal piece of software.

The cost-benefit analysis of would-be thieves would look a lot different.

comment by Nick_Tarleton · 2009-11-13T04:52:20.193Z · score: 0 (0 votes) · LW(p) · GW(p)

Fear by amateur researchers of theft is a crank trope.

comment by MichaelGR · 2009-11-12T06:22:14.044Z · score: 4 (4 votes) · LW(p) · GW(p)

Fear of others stealing your ideas is a crank trope, which suggests it may be a common human failure mode.

I think it might be correct in the entrepreneur/startup world, but it probably isn't when it comes to technologies that are this powerful. Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software. If you're building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc).

I'm not saying it only applies to the SIAI (though my original post was directed only at them, my question here is about the AGI research world in general, which includes the SIAI), or that it isn't just one of many many things that can go wrong. But I still think that when you're playing with stuff this powerful, you should be concerned with security and not just expect to forever fly under the radar.

comment by alyssavance · 2009-11-13T00:14:49.381Z · score: 6 (6 votes) · LW(p) · GW(p)

"Just think of nuclear espionage and of the kind of security that surrounds the development of military and intelligence hardware and software."

The reason the idea of the nuclear chain reaction was kept secret, was because one man named Leo Szilard realized the damage it could do, and had his patent for the idea classified as a military secret. It wasn't kept secret by default; if it weren't for Szilard, it would probably have been published in physics journals like every other cool new idea about atoms, and the Nazis might well have gotten nukes before we did.

"If you're building something that could overthrow all the power structures in the world, it would be surprising if nobody tried to spy on you (or worse; kill you, derail the project, steal the almost finished code, etc)."

Only if they believe you, which they almost certainly won't. Even in the (unlikely) case that someone thought that an AI taking over the world was realistic, there's still an additional burden of proof on top of that, because they'd also have to believe that SIAI is competent enough to have a decent shot at pulling it off, in a field where so many others have failed.

comment by DanArmak · 2009-11-14T01:01:45.324Z · score: -3 (3 votes) · LW(p) · GW(p)

So you're saying SIAI is deliberately appearing incompetent and far from its goal to avoid being attacked?

ETA: I realize you're probably not saying it's doing that already, but you certainly suggest that it's going to be in SIAI's best interests going forward.

comment by Johnicholas · 2009-11-12T13:16:03.287Z · score: 6 (6 votes) · LW(p) · GW(p)

Let's be realistic here - the AGI research world is a small fringe element of AI research in general. The AGI research world generally has a high opinion of its own importance - an opinion not generally shared by the AI research world in general, or the world as a whole.

We are in a self-selected group of people who share our beliefs. This will bias our thinking, leading us to be too confident of our shared beliefs. We need to strive to counter that effect and keep a sense of perspective, particularly when we're trying to anticipate what other people are likely to do.

comment by MichaelGR · 2009-11-12T14:28:42.600Z · score: 1 (3 votes) · LW(p) · GW(p)

I'm not sure I get what you're saying.

Either the creation of smarter than human intelligence is the most powerful thing in the world, or it isn't.

If it is, it would be surprising if nobody in the powerful organizations I'm talking about realizes it, especially if a few breakthroughs are made public and as we get closer to AGI.

If that is the case, this probably means that at some point AGI researchers will be "on the radar" of these people and that they should at least think about preparing for that day.

You can't have your cake and eat it too; you can't believe that AGI is the most important thing in the world and simultaneously think that it's so unimportant that nobody's going to bother with it.

I'm not saying that right now there is much danger of that. But if we can't predict when AGI is going to happen (which means wide confidence bounds; 5 years to 100 years, as Eliezer once said), then we don't know how soon we should start thinking about security, which probably means that as soon as possible is best.

comment by alyssavance · 2009-11-13T00:20:19.804Z · score: 7 (7 votes) · LW(p) · GW(p)

"If it is, it would be surprising if nobody in the powerful organizations I'm talking about realizes it, especially if a few breakthroughs are made public and as we get closer to AGI."

Nobody in powerful organizations realizes important things all the time. To take a case study, in 2001, Warren Buffett and Ted Turner just happened to notice that there were hundreds of nukes in Russia, sitting around in lightly guarded or unguarded facilities, which anyone with a few AK-47s and a truck could have just walked in and stolen. They had to start their own organization, called the Nuclear Threat Initiative, to take care of the problem, because no one else was doing anything.

comment by Jess_Riedel · 2009-11-13T01:01:04.501Z · score: 0 (4 votes) · LW(p) · GW(p)

The existence of historical examples where people in powerful organizations failed to realize important things is not evidence that it is the norm or that it can be counted on with strong confidence.

comment by alyssavance · 2009-11-13T01:40:44.165Z · score: 3 (5 votes) · LW(p) · GW(p)

Yes, it is. How could examples of X not be evidence that the "norm is X"? It may not be sufficiently strong evidence, but if this one example is not sufficiently damning, there are certainly plenty more.

comment by Jess_Riedel · 2009-11-13T15:18:12.854Z · score: 1 (3 votes) · LW(p) · GW(p)

Yes, of course it is weak evidence. But I can come up with a dozen examples off the top of my head where powerful organizations did realize important things, so your examples are very weak evidence that this behavior is the norm. So weak that it can be regarded as negligible.

comment by alyssavance · 2009-11-13T19:21:15.393Z · score: 3 (3 votes) · LW(p) · GW(p)

Important things that weren't recognized by the wider populace as important things? Do you have citations? Even for much more mundane things, governments routinely fail to either notice them, or to act once they have noticed. Eg., Chamberlain didn't notice that Hitler wanted total control of Europe, even though he said so in his publicly-available book Mein Kampf. Stalin didn't notice that Hitler was about to invade, even though he had numerous warnings from his subordinates.

comment by Johnicholas · 2009-11-12T15:59:38.191Z · score: 3 (3 votes) · LW(p) · GW(p)

The reason to point out the crackpot aspect (e.g. item 12 in Baez's Crackpot Index) is to adjust how people think about this question, not to argue that the question shouldn't be asked or answered.

In particular, I want people to balance (at least) two dangers - the danger of idea-stealing and the danger of insularity slowing down innovation.

comment by timtyler · 2009-11-14T00:45:18.795Z · score: -2 (2 votes) · LW(p) · GW(p)

It's a marketing strategy by those involved. I am among those who are sceptical. Generality is implicit in the definition of "intelligence".

comment by alyssavance · 2009-11-13T00:10:07.085Z · score: 3 (7 votes) · LW(p) · GW(p)

"It's quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI."

I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use.

comment by Vladimir_Nesov · 2009-11-13T16:15:56.182Z · score: 3 (3 votes) · LW(p) · GW(p)

There is a strong fundamental streak in the subproblem of clear conceptual understanding of FAI (what the whole real world looks like to an algorithm, which is important both for the decision-making algorithm and for communication of values), which I find closely related to a lot of fundamental stuff that both physicists and mathematicians have been trying to crack for a long time, but haven't yet. This suggests that the problem is not a low-hanging fruit. My current hope is merely to articulate a connection between FAI and this stuff.

comment by mormon2 · 2009-11-13T17:53:45.894Z · score: 0 (14 votes) · LW(p) · GW(p)

"I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use."

I don't think you really understand this. Having recently been edged out by a large corporation in a narrow field of innovation as a small startup, and having been in business for many years, I can tell you this sort of thing you're describing happens often.

As for your last statement, I am sorry, but you have not met that many intelligent people if you believe this. If you ever get out into the world you will find plenty of people who will make you feel like you're dumb and who make EY's intellect look infantile.

I might be more inclined to agree if EY would post some worked out TDT problems with the associated math. hint...hint...

comment by alyssavance · 2009-11-13T19:23:49.106Z · score: 2 (4 votes) · LW(p) · GW(p)

Of course startups sometimes lose; they certainly aren't invincible. But startups out-competing companies that are dozens or hundreds of times larger does happen with some regularity. Eg. Google in 1998.

"If you ever get out into the world you will find plenty of people who will make you feel like you're dumb and who make EY's intellect look infantile."

(citation needed)

comment by mormon2 · 2009-11-14T01:56:06.665Z · score: 0 (10 votes) · LW(p) · GW(p)

Ok, here are some people:

Nick Bostrom (http://www.nickbostrom.com/cv.pdf); Stephen Wolfram (published his first particle physics paper at 16, I think, and invented one of, if not the, most successful math programs ever, and in my opinion the best ever); a couple of people whose names I won't mention, since I doubt you'd know them, from Johns Hopkins Applied Physics Lab, where I did some work; etc.

I say this because these people have numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward not just terms and vague to be solved problems.

My analysis of EY is based on having worked in AI and knowing people in AI, none of whom talk about their importance in the field as much as EY does, with as few papers and breakthroughs as EY has. If you want to claim you're smart, you have to have accomplishments that back it up, right? Where are EY's publications? Where is the math for his TDT? The world's hardest math problem is unlikely to be solved by someone who needs to hire someone with more depth in the field of math. (Both statements can be referenced to EY.)

Sorry this is harsh but there it is.

comment by Alicorn · 2009-11-14T02:19:36.376Z · score: 3 (3 votes) · LW(p) · GW(p)

If you want to claim you're smart you have to have accomplishments that back it up right?

I think you have confused "smart" with "accomplished", or perhaps "possessed of a suitably impressive résumé".

comment by mormon2 · 2009-11-14T02:24:39.734Z · score: 2 (6 votes) · LW(p) · GW(p)

No, because I don't believe in using IQ as a measure of intelligence (having taken an IQ test) and I think accomplishments are a better measure (quality over quantity obviously). If you have a better measure then fine.

comment by Alicorn · 2009-11-14T02:37:11.192Z · score: 3 (5 votes) · LW(p) · GW(p)

What do you think "intelligence" is?

Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof), but that intelligence can sometimes exist in their absence; or do you claim something stronger?

comment by Kaj_Sotala · 2009-11-14T18:45:20.970Z · score: 1 (1 votes) · LW(p) · GW(p)

What do you think "intelligence" is?

Previously, Eliezer has said that intelligence is efficient optimization.

comment by Tyrrell_McAllister · 2009-11-14T19:17:42.667Z · score: 0 (0 votes) · LW(p) · GW(p)

I have trouble meshing this definition with the concept of intelligent insanity.

comment by Vladimir_Nesov · 2009-11-14T20:19:03.919Z · score: 2 (2 votes) · LW(p) · GW(p)

The intelligently insane efficiently optimize stuff in ways they don't want it optimized.

comment by Tyrrell_McAllister · 2009-11-15T00:29:12.628Z · score: 0 (0 votes) · LW(p) · GW(p)

Eliezer invoked the notion of intelligent insanity in response to Aumann's approach to the absent-minded driver problem. In this case, what was Aumann efficiently optimizing in spite of his own wishes?

comment by mormon2 · 2009-11-14T18:06:42.130Z · score: 1 (9 votes) · LW(p) · GW(p)

"Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)"

Couldn't have said it better myself. The only addition would be that IQ is an insufficient measure although it can be useful when combined with accomplishment.

comment by wedrifid · 2009-11-14T03:44:38.919Z · score: -1 (5 votes) · LW(p) · GW(p)

I think accomplishments are a better measure (quality over quantity obviously)

I once came third in a marathon. How smart am I? If I increase my mileage to a level that would be required for me to come first would that make me smarter? Does the same apply when I'm trying to walk in 40 years?

ETA: I thought I cancelled this one. Never mind, I stand by my point. Achievement is the best predictor of future achievement. It isn't a particularly good measure of intelligence. Achievement shows far more about what kind of things someone is inclined to achieve (and signal), and how well they are able to motivate themselves, than it does about intelligence (see, for example, every second page here). Accomplishments are better measures than IQ, but they are not a measure of intelligence at all.

comment by alyssavance · 2009-11-14T14:29:49.556Z · score: 0 (2 votes) · LW(p) · GW(p)

I agree that both Bostrom and Wolfram are very smart, but this does not a convincing case make. Even someone at 99.9999th percentile intelligence will have 6,800 people who are as smart or smarter than they are.
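The arithmetic behind that figure is a quick sanity check (assuming a world population of roughly 6.8 billion, as of 2009):

```python
# At the 99.9999th percentile, the fraction of people at or above
# that level is 1 - 0.999999, i.e. one in a million.
world_population = 6.8e9  # assumed ~6.8 billion (circa 2009)
fraction_at_or_above = 1 - 0.999999

peers_or_smarter = round(world_population * fraction_at_or_above)
print(peers_or_smarter)  # -> 6800
```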

comment by Nick_Tarleton · 2009-11-11T23:47:54.172Z · score: 4 (4 votes) · LW(p) · GW(p)

They could spy on AGI researchers and steal the code when it is almost ready (or ready, but not yet Certified Friendly) and launch their copy first, but without all the care and understanding required.

If they're going to have that exact wrong level of cluefulness, why wouldn't they already have a (much better-funded, much less careful) AGI project of their own?

As Vladimir says, it's too early to start solving this problem, and if "things start moving rapidly" anytime soon, then AFAICS we're just screwed, government involvement or no.

comment by MichaelGR · 2009-11-11T23:55:41.494Z · score: 1 (1 votes) · LW(p) · GW(p)

If they're going to have that exact wrong level of cluefulness, why wouldn't they already have a (much better-funded, much less careful) AGI project of their own?

Maybe they do, maybe they don't. I won't try to add more details to a scenario because that's not the right way to think about this, IMO. If it happens, it probably won't be a movie plot scenario anyway ("Spies kidnap top AI research team and torture them until they make a few changes to program, granting our Great Leader dominion over all")...

What I'm interested in is security of AGI research in general. It would be extremely sad to see FAI theory go very far only to be derailed by (possibly well-intentioned) people who see AGI as a great source of power and want to have it "on their side" or whatever.

comment by Vladimir_Nesov · 2009-11-11T22:59:33.086Z · score: 3 (3 votes) · LW(p) · GW(p)

Isn't it too early to start solving this problem? There is a good chance SIAI won't even have a direct hand in programming the FAI.

comment by MichaelGR · 2009-11-11T23:19:32.707Z · score: 1 (1 votes) · LW(p) · GW(p)

That's what I've been told, but I'm not entirely convinced. Since there are so many timelines out there, and since fundamental breakthroughs are hard to predict, I think it still deserves some attention as soon as possible, if only to know what to do if things start moving rapidly (an AGI team might not have many chances to recover from security mistakes).

I'll broaden my question a bit so that it applies to all people working on AGI and not just the SIAI.

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-14T02:31:57.182Z · score: 0 (0 votes) · LW(p) · GW(p)

There is a good chance SIAI won't even have a direct hand in programming the FAI.

Care to elaborate?

comment by Vladimir_Nesov · 2009-11-14T09:20:31.436Z · score: 5 (5 votes) · LW(p) · GW(p)

Why? It's not like SIAI is on a teleological track to be the one true organization to actually save the world. They have some first-mover advantage to be the focus of this movement, to the extent it's effective in gravitating activity their way. They are currently doing important work on spreading awareness. But if things catch up, others will start seriously working on the problem elsewhere.

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-15T19:43:19.731Z · score: 1 (1 votes) · LW(p) · GW(p)

But if things catch up, others will start seriously working on the problem elsewhere.

By things catching up, you mean awareness spreading, right? It doesn't seem like a stretch to guess that SIAI will continue to do a large portion of that.

There's no advantage associated with FAI programmers starting a second group if they know they'll get funded by SIAI and don't have any major disagreements with SIAI's philosophy.

comment by Vladimir_Nesov · 2009-11-15T20:10:30.834Z · score: 0 (0 votes) · LW(p) · GW(p)

There's no advantage associated with FAI programmers starting a second group if they know they'll get funded by SIAI and don't have any major disagreements with SIAI's philosophy.

Not a rule strictly followed by how things work out in practice.

comment by timtyler · 2009-11-14T10:01:14.044Z · score: -6 (10 votes) · LW(p) · GW(p)

For a first mover advantage in SAVING THE WORLD, look to Jesus Christ.

He had all the same elements in place - an end-of-the-world apocalypse with eternal hellfire and damnation, with a band of loyal followers seeking a path to salvation - two thousand years ago.

comment by timtyler · 2009-11-14T00:40:30.487Z · score: 0 (2 votes) · LW(p) · GW(p)

You apparently assume that SIAI will get somewhere in their attempts to create machine intelligence before other organisations do. That seems relatively unlikely - given the current situation. What is the justification for that premise?

comment by MichaelGR · 2009-11-14T01:20:26.767Z · score: 1 (1 votes) · LW(p) · GW(p)

I would ask the same question to other AGI organizations if I could, but this is a Q&A with only Eliezer (though I'm also curious to know if he knows anything about what other groups are doing with regards to this).

Regardless of who is the first to get to AGI, that group could potentially run into the kind of problems I mentioned. I never said it was the most probable thing that can go wrong. But it should probably be looked into seriously since, if it does happen, it could be pretty catastrophic.

The way I see it, either AGI is developed in secret and Eliezer could be putting the finishing touches on the code right now without telling anyone, or it'll be developed in a fairly open way, with mathematical and algorithmic breakthroughs discussed at conferences, on the net, in papers, whatever. If the latter is the case, some big breakthroughs could attract the attention of powerful organizations (or even of AGI researchers who have enough of a clue to understand these breakthroughs, but that also know they're too far behind to catch up, so the best way for them to get there first is to somehow convince an intelligence agency to steal the code or whatever - again specifics are not important here, just the general principle of what to do with security as we get closer to full AGI).

comment by timtyler · 2009-11-14T09:05:28.981Z · score: -3 (3 votes) · LW(p) · GW(p)

Yes, industrial espionage is a well-known phenomenon. Look at all the security which Apple uses to keep their prototypes hidden. People have died for Apple's secrets:

http://www.taranfx.com/blog/iphone-4g-secret-prototype-leads-to-a-death

comment by taa21 · 2009-11-11T21:06:17.758Z · score: 14 (16 votes) · LW(p) · GW(p)

What do you view as your role here at Less Wrong (e.g. leader, preacher, monk, moderator, plain-old contributor, etc.)?

comment by Utilitarian · 2009-11-11T06:58:36.495Z · score: 14 (22 votes) · LW(p) · GW(p)

What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle's back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or ones complement vs. twos complement? What physical details of the computations matter? Does it regard carbon differently from silicon?
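One part of the question (signed vs. unsigned, and ones' vs. twos' complement) can be made concrete: the very same physical bit pattern denotes different numbers under different interpretive conventions, which is part of why "which computation is this system running?" is underdetermined by the physics alone. A small illustration, using Python's `struct` module just to force the different readings of one byte:

```python
import struct

bits = 0xFF  # one byte with all bits set: the same physical state throughout

as_unsigned = struct.unpack("B", bytes([bits]))[0]          # unsigned: 255
as_twos_complement = struct.unpack("b", bytes([bits]))[0]   # two's complement: -1
as_ones_complement = -(bits ^ 0xFF)                         # ones' complement: negative zero

print(as_unsigned, as_twos_complement, as_ones_complement)  # -> 255 -1 0
```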

comment by timtyler · 2009-11-11T09:00:49.254Z · score: 5 (9 votes) · LW(p) · GW(p)

That's 14 questions! ;-)

comment by SilasBarta · 2009-11-12T21:05:35.953Z · score: 2 (2 votes) · LW(p) · GW(p)

Just in case people are taking timtyler's point too seriously: It's really one question, then a list of issues it should touch on to be a complete answer. You wouldn't need to directly answer all of them if the implication for that question is obvious from a previous one.

ETA: I'm not the one who asked the question, but I did vote it up.

comment by komponisto · 2009-11-11T06:00:47.402Z · score: 14 (26 votes) · LW(p) · GW(p)

I admit to being curious about various biographical matters. So for example I might ask:

What are your relations like with your parents and the rest of your family? Are you the only one to have given up religion?

comment by SilasBarta · 2009-11-11T19:46:47.283Z · score: 1 (1 votes) · LW(p) · GW(p)

Eliezer Yudkowsky doesn't have a family, just people he allows to share his genes.

Yes, inside joke.

comment by ABranco · 2009-11-18T19:28:27.800Z · score: 13 (13 votes) · LW(p) · GW(p)

You've achieved a high level of success as a self-learner, without the aid of formal education.

Would this extrapolate as a recommendation of a path every fast-learner autodidact should follow — meaning: is it a better choice?

If not, in which scenarios would forgoing formal education be more advisable for someone? (Feel free to add as many caveats and 'ifs' as necessary.)

comment by mormon2 · 2009-11-23T19:37:34.762Z · score: 4 (8 votes) · LW(p) · GW(p)

"You've achieved a high level of success as a self-learner, without the aid of formal education."

How do you define high level of success?

comment by komponisto · 2009-11-24T05:00:31.103Z · score: 3 (3 votes) · LW(p) · GW(p)

How do you define high level of success?

He has a job where he is respected, gets to pursue his own interests, and doesn't have anybody looking over his shoulder on a daily basis (or any short-timescale mandatory duties at all that I can detect). That's pretty much the trifecta, IMHO.

comment by ABranco · 2009-11-24T00:58:09.354Z · score: 1 (1 votes) · LW(p) · GW(p)

Well, ok, success might be a personal measure, so by all means only Eliezer could properly say if Eliezer is successful. (Or at least, this is what should matter.)

Having said that, my saying he's successful was driven (biased?) by my personal standards. A positive Wikipedia article (positive not in the sense of a biased article, but in the sense that the impact described is positive; and how many people are in Wikipedia with a picture and 10 footnotes? But never mind, this is a contentious variable, so let's not split hairs here) and founding something like SIAI and LessWrong deserve my respect, and quite some awe given his 'formal education'.

comment by mormon2 · 2009-11-25T03:04:13.062Z · score: 3 (5 votes) · LW(p) · GW(p)

I am going to take a shortcut and respond to both posts:

komponisto: Interesting because I would define success in terms of the goals you set for yourself or others have set for you and how well you have met those goals.

In terms of respect I would question the claim not within SIAI or within this community necessarily but within the larger community of experts in the AI field. How many people really know who he is? How many people who need to know, because even if he won't admit it EY will need help from academia and the industry to make FAI, know him and more importantly respect his opinion?

ABranco: I would not say success is a personal measure; I would say in many ways it's defined by the culture. For example, in America I think it's fair to say that many would associate wealth and possessions with success. This may or may not be right, but it cannot be ignored.

I think your last point is on the right track with EY starting SIAI and LessWrong with his lack of formal education. Though one could argue the relative level of significance or the level of success those two things dictate.

comment by cabalamat · 2009-11-12T03:59:43.180Z · score: 13 (15 votes) · LW(p) · GW(p)

What practical policies could politicians enact that would increase overall utility? When I say "practical", I'm specifically ruling out policies that would increase utility but which would be unpopular, since no democratic polity would implement them.

(The background to this question is that I stand a reasonable chance of being elected to the Scottish Parliament in 19 months time).

comment by Morendil · 2009-11-12T10:14:22.345Z · score: 4 (4 votes) · LW(p) · GW(p)

Ruling out unpopular measures is tantamount to giving up on your job as a politician; the equivalent of an individual ruling out any avenues to achieving their goals that require some effort.

Much as rationality in an individual consists of "shutting up and multiplying", i.e. computing which course of action including those we have no taste for yields the highest expected utility, politics - the useful part of it - consists of making necessary policies palatable to the public. The rest is demagoguery.

comment by cabalamat · 2009-11-13T03:39:14.747Z · score: 3 (3 votes) · LW(p) · GW(p)

Ruling out unpopular measures is tantamount to giving up on your job as a politician

On the contrary, NOT ruling out unpopular measures is tantamount to giving up your job as a politician, because, if the measure is unpopular enough, (1) you won't get the measure passed in the first place, and (2) you won't get re-elected.

the equivalent of an individual ruling out any avenues to achieving their goals that require some effort.

You're saying it's lazy to require that policies be practical. I say that on the contrary it's lazy not to require them to be practical. It's easy to come up with ideas that're a good thing but which can't be practically realised, but it takes more effort to come up with ideas that're a good thing and which can be practically realised. I co-founded Pirate Party UK precisely because I think it's a practical way of getting the state to apply sensible laws to the internet, instead of just going ahead with whatever freedom-destroying nonsense the entertainment industry is coming up with this week to prevent "piracy".

computing which course of action including those we have no taste for yields the highest expected utility

Courses of action that can't be implemented yield zero or negative utility.

The rest is demagoguery.

There's an element of truth in that, but I'd put it differently: its the difference between leadership and followership. Politicians in democracies frequently engage in the latter.

comment by Thomas · 2009-11-12T09:25:18.276Z · score: 4 (8 votes) · LW(p) · GW(p)

Free trade. As a politician, you can't do more than that.

comment by Matt_Simpson · 2009-11-12T16:50:10.956Z · score: 2 (6 votes) · LW(p) · GW(p)

And open immigration policies

comment by cabalamat · 2009-11-13T03:12:35.104Z · score: 1 (1 votes) · LW(p) · GW(p)

Unlimited immigration clearly fails the practicality test, regardless of whether it's a good thing or not.

comment by Matt_Simpson · 2009-11-13T04:47:51.574Z · score: 0 (2 votes) · LW(p) · GW(p)

open != unlimited. But that's a margin that I would push pretty hard, relative to others.

comment by cabalamat · 2009-11-13T05:42:28.450Z · score: 0 (0 votes) · LW(p) · GW(p)

OK I misinterpreted you. What do you mean when you say "open"?

comment by Matt_Simpson · 2009-11-13T06:12:46.718Z · score: 1 (1 votes) · LW(p) · GW(p)

I should have said more open.

comment by CronoDAS · 2009-11-12T07:22:45.191Z · score: 0 (4 votes) · LW(p) · GW(p)

I'd guess that legalizing gay marriage would be pretty low-hanging fruit, but I don't know how politically possible it is.

comment by Jess_Riedel · 2009-11-13T00:56:36.732Z · score: 4 (6 votes) · LW(p) · GW(p)

It's hard to think of a policy which would have a smaller impact on a smaller fraction of the wealthiest population on earth. And it faces extremely dedicated opposition.

comment by CronoDAS · 2009-11-13T21:02:46.916Z · score: 2 (2 votes) · LW(p) · GW(p)

Well, I mean "low-hanging fruit" in that it doesn't really cost any money to implement. Symbolism is cheap; providing material benefits is more expensive, especially in developed countries.

I don't know much about the political situation in Scotland; I know about a few miscellaneous stupidities in the U.S. federal government that I'd like to get rid of (abstinence-only sex education, "alternative" medicine research) but I suspect that Scotland and the rest of the U.K. is stupid in different ways than the U.S. is.

comment by cabalamat · 2009-11-13T03:10:55.258Z · score: 2 (2 votes) · LW(p) · GW(p)

Gay marriage is already legal in Scotland, albeit under the name "civil partnership".

comment by Paul Crowley (ciphergoth) · 2009-11-13T09:03:23.191Z · score: 0 (0 votes) · LW(p) · GW(p)

The whole of the UK has civil partnership, not just Scotland. It's also illegal to discriminate on gender attraction in employment and in the provision of goods and services.

comment by RichardKennaway · 2009-11-11T08:57:12.991Z · score: 13 (17 votes) · LW(p) · GW(p)

Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you've written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)

ETA: By AI I meant AGI.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-11T18:40:22.802Z · score: 1 (1 votes) · LW(p) · GW(p)

I assume this is to be interpreted as "published work in AGI". Plenty of perfectly good AI work around.

comment by RichardKennaway · 2009-11-11T23:07:30.146Z · score: 0 (2 votes) · LW(p) · GW(p)

Yes, I meant AGI by AI. I don't consider any of the stuff outside AGI to be worth calling AI. The good stuff there is merely the more or less successful descendants of spinoffs of failed attempts to create AGI, and is good in direct proportion to its distance from that original vision.

comment by [deleted] · 2009-11-11T16:15:18.243Z · score: 0 (2 votes) · LW(p) · GW(p)

Well, it appears that no published work in AI has ended in successful strong artificial intelligence.

comment by RichardKennaway · 2009-11-11T23:19:42.267Z · score: 1 (1 votes) · LW(p) · GW(p)

It might be making visible progress, or failing that, at least not making basic fatal errors.

comment by FeministX · 2009-11-11T05:04:19.243Z · score: 13 (17 votes) · LW(p) · GW(p)

2) How does one affect the process of increasing the rationality of people who are not ostensibly interested in objective reasoning and people who claim to be interested but are in fact attached to their biases?

I find that question interesting because it is plain that the general capacity for rationality in a society can be improved over time. Once almost no one understood the concept of a bell curve or a standard deviation, but now the average person has a basic understanding of how these concepts apply to the real world.

It seems to me that we really are faced with the challenge of explaining the value of empirical analysis and objective reasoning to much of the world. Today the Middle East is hostile towards reason, though it presumably doesn't have to be that way.

So again, my question is how do more rational people affect the reasoning capacity in less rational people, including those hostile towards rationality?

comment by cabalamat · 2009-11-12T03:46:51.164Z · score: 5 (5 votes) · LW(p) · GW(p)

Once almost no one understood the concept of a bell curve or a standard deviation, but now the average person has a basic understanding of how these concepts apply to the real world.

I suspect that, on the contrary, >50% of the population have very little idea what either term means.

comment by MichaelVassar · 2009-11-13T05:35:38.326Z · score: 4 (4 votes) · LW(p) · GW(p)

I think that the average person has NO IDEA how the concept of the standard deviation is properly used. Neither does the average IQ 140 non-scientist.

Less Wrong is an attempt to increase the rationality of very unusual people. Most other SIAI efforts are other such attempts, or are direct attempts at FAI.

comment by SilasBarta · 2009-11-11T21:44:54.130Z · score: 12 (14 votes) · LW(p) · GW(p)

Previously, you endorsed this position:

Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it's more likely that you've made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.

One counterexample has been proposed a few times: holding false beliefs about oneself in order to increase the appearance of confidence, given that it's difficult to directly manipulate all the subtle signals that indicate confidence to others.

What do you think about this kind of self-deception?

comment by pwno · 2009-11-12T02:45:23.934Z · score: 0 (2 votes) · LW(p) · GW(p)

Costs outweigh the benefits.

comment by Dufaer · 2009-11-13T12:09:50.426Z · score: 0 (4 votes) · LW(p) · GW(p)

Oh, how convenient, isn't it? Well, then what about self-deception in order to increase a placebo effect, in a case where the disease concerned may or may not be life-threatening?

comment by pwno · 2009-11-13T16:21:30.873Z · score: 0 (2 votes) · LW(p) · GW(p)

I didn't say the costs always outweigh the benefits.

comment by Psy-Kosh · 2009-11-11T19:00:00.114Z · score: 12 (14 votes) · LW(p) · GW(p)

In the spirit of considering semi-abyssal plans, what happens if, say, next week you discover a genuine reduction of consciousness and it turns out that... there's simply no way to construct the type of optimization process you want without it being conscious, even if very different from us?

i.e., what if The Law turned out to have the consequence that "to create a general mind is to create a conscious mind; no way around that"? Obviously that shifts the ethics a bit, but my question is basically: if so, well... "now what?" What would have to be done differently, in what ways, etc.?

comment by timtyler · 2009-11-11T08:53:49.943Z · score: 12 (14 votes) · LW(p) · GW(p)

What was the significance of the wirehead problem in the development of your thinking?

comment by SilasBarta · 2009-11-11T15:26:35.682Z · score: 11 (13 votes) · LW(p) · GW(p)

Previously, you said that a lot of work in Artificial Intelligence is "5% intelligence and 95% rigged demo". What would you consider an example of something that has a higher "intelligence ratio", if there is one, and what efforts do you consider most likely to increase this ratio?

comment by CannibalSmith · 2009-11-12T12:23:52.339Z · score: 2 (2 votes) · LW(p) · GW(p)

DARPA's Grand Challenge produced several intelligent cars and was definitely not a rigged demo.

comment by MichaelVassar · 2009-11-15T17:02:20.831Z · score: 0 (2 votes) · LW(p) · GW(p)

Juergen had some things to say about that actually.

comment by anonym · 2009-11-15T18:42:23.473Z · score: 0 (0 votes) · LW(p) · GW(p)

Something to say about "intelligent cars" or "not a rigged demo"?

comment by SilasBarta · 2009-11-12T19:07:24.011Z · score: 0 (0 votes) · LW(p) · GW(p)

Good point. I remember that in that context, Eliezer Yudkowsky had spoken highly of Sebastian Thrun's CES (C++ for embedded systems). I started reading Thrun's exposition of CES but never finished it.

Still, I'd like to hear Eliezer's answer to my question, in case there's more he can say.

comment by alyssavance · 2009-11-13T00:22:09.686Z · score: 0 (0 votes) · LW(p) · GW(p)

Chess-playing AI has had a lot of decent-quality work done on it, and Deep Blue beating Kasparov was definitely not a rigged demo.

comment by SilasBarta · 2009-11-13T03:26:16.367Z · score: 1 (1 votes) · LW(p) · GW(p)

Eliezer called the example in the link 95% rigged by virtue of how much the problem was constrained before a program attacked it. Chess is likewise the very definition of a constrained problem.

Certainly, the Kasparov match wasn't "rigged" (other than Deep Blue's team being able to review Kasparov's previous games while Kasparov couldn't do the same for Deep Blue), but when the search space is so constrained, and tree-pruning methods and computer speeds only improve, a program was bound to surpass humans eventually. There was no crucial AI insight needed to beat Kasparov; if they had failed to notice some good tree-pruning heuristics, it would have just delayed the victory by a few years as computers got faster.
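For readers unfamiliar with the tree-pruning heuristics mentioned above, alpha-beta pruning is the classic one: a minimax search can skip any branch that provably cannot change the final decision. A minimal sketch (the toy game tree is invented for illustration, not taken from Deep Blue):

```python
# Minimal alpha-beta pruning over a toy game tree. Leaves are numbers
# (positions scored for the maximizing player); internal nodes are
# lists of child subtrees.
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):  # leaf: return its static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent would never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A textbook-style tree: the minimax value is 6, and some subtrees
# are cut off without ever being evaluated.
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree))  # -> 6
```

The pruning never changes the answer; it only avoids work, which is why better heuristics plus faster hardware were enough to close the gap with human players in a domain this constrained.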

comment by sixes_and_sevens · 2009-11-11T14:56:43.787Z · score: 11 (15 votes) · LW(p) · GW(p)

What five written works would you recommend to an intelligent lay-audience as a rapid introduction to rationality and its orbital disciplines?

comment by alyssavance · 2009-11-13T00:22:59.889Z · score: 2 (2 votes) · LW(p) · GW(p)

See the Singularity Institute Reading List for some ideas.

comment by patrissimo · 2009-11-12T18:57:03.340Z · score: 10 (10 votes) · LW(p) · GW(p)

What single technique do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

comment by roland · 2009-11-19T01:04:41.818Z · score: 0 (0 votes) · LW(p) · GW(p)

I had a similar question, on boiling down rationality:

http://lesswrong.com/lw/1f4/less_wrong_qa_with_eliezer_yudkowsky_ask_your/19hc

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-11T06:52:29.346Z · score: 10 (14 votes) · LW(p) · GW(p)

What was the most useful suggestion you got from a would-be FAI solver? (I'm putting separate questions in separate comments per MichaelGR's request.)

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-11T06:49:45.293Z · score: 10 (12 votes) · LW(p) · GW(p)

What is the background that you most frequently wish would-be FAI solvers had when they struck up conversations with you? You mentioned the Dreams of Friendliness series; is there anything else? You can answer this question in comment form if you like.

comment by John_Maxwell (John_Maxwell_IV) · 2009-11-11T06:21:52.278Z · score: 10 (20 votes) · LW(p) · GW(p)

What are the hazards associated with making random smart people who haven't heard about existential dangers more intelligent, mathematically inclined, and productive?

comment by anonym · 2009-11-13T06:38:00.480Z · score: 9 (9 votes) · LW(p) · GW(p)

In terms of your intellectual growth, what were your biggest mistakes or most harmful habits, and what, if anything, would you do differently if you had the chance?

comment by botogol · 2009-11-11T17:14:27.423Z · score: 9 (13 votes) · LW(p) · GW(p)

Can you make a living out of this rationality / SI / FAI stuff . . . or do you have to be independently wealthy?

comment by alyssavance · 2009-11-13T00:27:25.480Z · score: 5 (5 votes) · LW(p) · GW(p)

I strongly think that's the wrong way to phrase the question.

"Don't expect fame or fortune. The Singularity Institute is not your employer, and we are not paying you to accomplish our work. The so-called "Singularity Institute" is a group of humans who got together to accomplish work they deemed important to the human species, and some of them went off to do fundraising so the other ones could get paid enough to live on. Don't even dream of being paid what you're worth, if you're worth enough to solve this class of problem. As for fame, we are trying to do something that is daring far beyond the level of daring that is just exactly daring enough to be seen academically as sexy and transgressive and courageous, so working here may even count against you on your resume. But that's not important, because this is a lifetime commitment. Let me repeat that again: Once you're in, really in, you stay. I can't afford to start over training a new Research Fellow. We can't afford to have you leave in the middle of The Project. It's Singularity or bust. If you look like a good candidate, we'll probably bring you in for a trial month, or something like that, to see if we can work well together. But please do consider that, once you've been in for long enough, I'll be damned hurt – and far more importantly, The Project will be hurt – if you leave. This is a very difficult thing that we of the Singularity Institute are attempting – some of us have been working on it since long before there was enough money to pay us, and some of us still aren't getting paid. The motivation to do this thing, to accomplish this impossible feat, has to come from within you; and be glad that someone is paying you enough to live on while you do it. It can't be the job that you took to make the rent. That's not how the research branch of the Singularity Institute works. It's not who we are." - http://singinst.org/aboutus/opportunities/research-fellow

comment by MichaelVassar · 2009-11-13T05:14:42.643Z · score: 1 (1 votes) · LW(p) · GW(p)

If you are good enough at the rationality stuff you can make a living in business with it, but you have to be VERY good.

comment by Daniel_Burfoot · 2009-11-11T16:14:15.735Z · score: 9 (17 votes) · LW(p) · GW(p)

Let E(t) be the set of historical information available up until some time t, where t is some date (e.g. 1934). Let p(A|E) be your estimate of the probability an optimally rational Bayesian agent would assign to the event "Self-improving artificial general intelligence is discovered before 2100" given a certain set of historical information.

Consider the function p(t)=p(A|E(t)). Presumably as t approaches 2009, p(t) approaches your own current estimate of p(A).

Describe the function p(t) since about 1900. What events - research discoveries, economic trends, technological developments, sci-fi novel publications, etc, caused the largest changes in p(t)? Is it strictly increasing, or does it fluctuate substantially? Did the publication of any impossibility proofs (e.g. No Free Lunch) cause strong decreases in p(t)? Can you point to any specific research results that increased p(t)? What about the "AI winter" and related setbacks?

comment by Peter_de_Blanc · 2009-11-12T02:21:47.413Z · score: 3 (3 votes) · LW(p) · GW(p)

I don't think this question behaves the way you want it to. Why not ask what a smart human would predict?

comment by MichaelVassar · 2009-11-13T05:16:17.320Z · score: 2 (2 votes) · LW(p) · GW(p)

I'd guess that WWII and particularly the Holocaust set it back rather a lot. How likely were they in 1934, though? Possibly quite likely.

comment by patrissimo · 2009-11-12T18:57:22.073Z · score: 8 (8 votes) · LW(p) · GW(p)

What single source of material (book, website, training course) do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?

comment by MichaelHoward · 2009-11-12T14:00:21.399Z · score: 8 (8 votes) · LW(p) · GW(p)

Of the questions you decide not to answer, which is most likely to turn out to be a vital question you should have publicly confronted?

Not the question you don't want to answer but would probably have bitten the bullet anyway. The question you would have avoided completely if it weren't for my question.

[Edit - "If I thought they were vital, I wouldn't avoid" would miss the point, as not wanting to consider something suppresses counterarguments to dismissing it. Take a step back - which question is most likely to be giving you this reaction?]

comment by wedrifid · 2009-11-14T00:11:08.722Z · score: 1 (1 votes) · LW(p) · GW(p)

If this question has an obvious answer then I expect 'this one' would come in as a close second!

comment by MichaelHoward · 2009-11-14T09:29:24.806Z · score: 0 (0 votes) · LW(p) · GW(p)

My question thanks you for complimenting its probable vitality, but...

The question you would have avoided completely if it weren't for my question.

...I'm hoping a good metaphiliac counterfactualist wouldn't have actually avoided a non-existent self-referential question.

comment by MichaelGR · 2009-11-13T21:42:50.717Z · score: 7 (7 votes) · LW(p) · GW(p)

What recent developments in *narrow AI do you find most important/interesting and why?

*Let's say post-Stanley

comment by SilasBarta · 2009-11-12T00:06:41.089Z · score: 7 (13 votes) · LW(p) · GW(p)

Okay: Goedel, Escher, Bach. You like it. Big-time.

But why? Specifically, what insights should I have assimilated from reading it that are vital for AI and rationalist arts? I personally feel I learned more from Truly Part of You than all of GEB, though the latter might have offered a little (unproductive) entertainment.

comment by Kutta · 2009-11-13T01:18:37.917Z · score: 4 (4 votes) · LW(p) · GW(p)

Why? I think, maybe because GEB integrates form, style and thematic content into a seamless whole in a unique and pretty much artistic way, while still being essentially non-fiction. And GEB is probably second to nothing at conveying the notion of an intertwined reality. It also provides a very intelligent and intuitive introduction to a whole lot of different areas. Sometimes you can't do all the work of conveying extremely complex ideas in a succinct essay; just look at the epic amount of writing Eliezer had to do merely to establish a bare framework for FAI discussion (besides, from the fact that Eliezer likes GEB it does not follow that GEB should be recommended reading for AI or rationalist arts. It just means that Eliezer thinks it's a good book).

comment by SilasBarta · 2009-11-13T03:18:30.884Z · score: 0 (2 votes) · LW(p) · GW(p)

That doesn't answer my question. Again, what rationalist/AI mistake would I not make as a result of reading GEB that could not be achieved with something shorter?

comment by Kutta · 2009-11-13T11:39:58.974Z · score: 0 (2 votes) · LW(p) · GW(p)

As I said, there is not necessarily any kind of rationalist/AI content in GEB directly relevant to us. It could well be simply a good book.

comment by SilasBarta · 2009-11-16T19:23:20.631Z · score: 0 (2 votes) · LW(p) · GW(p)

But would Eliezer view it as that durn good (i.e. it being a tragedy that people die without reading it) if it were just entertaining fluff with no insights to AI and rationality?

comment by Yorick_Newsome · 2009-11-29T11:48:46.051Z · score: 2 (2 votes) · LW(p) · GW(p)

I'm not Eliezer, and perhaps not being an AGI researcher means that my answer is irrelevant, but I think that things can have a deep aesthetic value or meaning from which one could gain insights into things more important than AI or rationality. One of these things may be the 'something to protect' that Eliezer wrote about. Others may be intrinsic values to discover, to give your rationality purpose. If I could keep only one of a copy of the Gospels of Buddha and a copy of MITECS, I would keep the Gospels of Buddha, because it reminds me of the importance of terminal values like compassion. When I read GEB, the ideas of interconnectedness, of patterns, and of meaning all left me with a clearer thought process than did reading Eliezer's short paper on Coherent Extrapolated Volition, which was enjoyable but just didn't seem to resonate in the same way. Calling these things 'entertaining fluff' may be losing sight of Eliezer's 11th virtue: "The Art must have a purpose other than itself, or it collapses into infinite recursion."
That is all, of course, my humble opinion. Maybe having everyone read about and understand the dangers of black swans and unfriendly AI would be more productive than having them read about and understand the values of compassion and altruism; for if people do not understand the former, there may be no world left for the latter.

comment by RobinZ · 2009-11-11T17:33:10.579Z · score: 7 (9 votes) · LW(p) · GW(p)

I am sure you're familiar with the University of Chicago "Doomsday Clock", so: if you were in charge of a Funsday Clock, showing the time until positive singularity, what time would it be on? Any recent significant changes?

(Idea of Funsday Clock blatantly stolen from some guy on Twitter.)

comment by timtyler · 2009-11-14T09:20:08.767Z · score: 0 (0 votes) · LW(p) · GW(p)

Probably a few minutes to midnight. That's usually what these clocks say.

Not futurism: marketing - and transparently so. The Bulletin of the Atomic Scientists are more famous for their DOOM clock than they are for anything else.

comment by gwern · 2009-11-11T18:06:42.531Z · score: 0 (0 votes) · LW(p) · GW(p)

If a negative singularity rules out any positive singularities, wouldn't a Funsday Clock be superseded by a Singularity Clock?

comment by RobinZ · 2009-11-11T18:15:01.300Z · score: 0 (0 votes) · LW(p) · GW(p)

Well, a negative singularity would belong on the Doomsday Clock. Actually, that might be the proper way to think of it: turn past midnight meaning negative, past noon meaning positive. It'd imply a scale, too.

comment by JamesAndrix · 2009-11-11T15:31:39.592Z · score: 7 (7 votes) · LW(p) · GW(p)

He will simply ignore questions he doesn't want to answer, even if they somehow received 3^^^3 votes.

I am 99.99% certain that he will not ignore such questions.

comment by arundelo · 2009-11-12T05:25:52.168Z · score: 3 (3 votes) · LW(p) · GW(p)

I am 99.995% certain that no question will receive that many votes.

comment by jimrandomh · 2009-11-12T05:42:27.689Z · score: 5 (5 votes) · LW(p) · GW(p)

I am 99.995% certain that no question will receive that many votes.

There is a greater than 0.01% chance that Eliezer or another administrator will edit the site to display a score of "3^^^3" for some post. (Especially now that it's been suggested.)

comment by arundelo · 2009-11-12T10:42:26.701Z · score: 2 (2 votes) · LW(p) · GW(p)

I guess I need to recalibrate!

comment by Cyan · 2009-11-12T15:38:44.975Z · score: 0 (2 votes) · LW(p) · GW(p)

Not if you meant 3^^^3 "Vote up" clicks registered from distinct LW accounts.

comment by Cyan · 2009-11-12T15:35:29.960Z · score: 0 (0 votes) · LW(p) · GW(p)

The proposition is that no question would receive that many votes (i.e., persons clicking on the "Vote up" button), not that the site would never display "3^^^3 points" above a question.

comment by MichaelVassar · 2009-11-13T05:27:11.917Z · score: 0 (2 votes) · LW(p) · GW(p)

Is that confidence your certainty given that your beliefs regarding what 3^^^3 means are correct, or are you >99.99% certain of those beliefs?

comment by JamesAndrix · 2009-11-14T01:52:49.842Z · score: 0 (0 votes) · LW(p) · GW(p)

Well given my recent calibration quiz, this doesn't count for much anyway. I also hadn't seriously considered whether 3^^^3 votes would cause the earth to vaporize.

Upon consideration, I am (and was) >99.99% certain that 3^^^3 is a number too big for me to deal with practically.

comment by Jack · 2009-11-11T04:27:11.337Z · score: 7 (15 votes) · LW(p) · GW(p)

If you thought an AGI couldn't be built, what would you dedicate your life to doing? Perhaps another formulation, or a related question: what is the most important problem/issue not directly related to AI?

comment by Johnicholas · 2009-11-11T11:34:00.374Z · score: 2 (2 votes) · LW(p) · GW(p)

At the Singularity Summit, this question (or one similar) was asked, and (if I remember correctly) EY's answer was something like: If the world didn't need saving? Possibly writing science fiction.

comment by Jack · 2009-11-11T14:50:02.625Z · score: 0 (0 votes) · LW(p) · GW(p)

Cool. But say the world does need saving, would there be a way to do it that doesn't involve putting something smarter than us in charge?

comment by MichaelVassar · 2009-11-13T05:32:13.743Z · score: 1 (1 votes) · LW(p) · GW(p)

I'd be working on life extension. Followed by applied psychology and politics.

comment by DanArmak · 2009-11-11T10:40:05.809Z · score: 1 (1 votes) · LW(p) · GW(p)

That counterfactual seems like trouble. Do you mean literally impossible by the laws of physics (surely not)? Or highly improbable that humans will be able to build one? What counts as artificial intelligence - can we do human augmentation? What counts as "highly improbable" - can we really assume stupid or evil humans won't be able to build one eventually?

It seems to me that plugging all the holes and ways of building a general intelligence to spec would require messing with the laws of physics. We may want to specify a Cosmic Censor law.

comment by Jack · 2009-11-11T11:03:57.467Z · score: 0 (0 votes) · LW(p) · GW(p)

Yeah, it is trouble. That's why I offered the other formulation, though I thought that might be too vague. Basically, I just wanted to know what non-transhumanist Eliezer would be doing. I don't really care about the counterfactual so much as picking out a different topic area. Maybe the question should just be "If the idea of intelligence augmentation had never occurred to you and no one had ever shared it with you, what would you be doing with your life?"

comment by Vladimir_Nesov · 2009-11-11T14:35:21.726Z · score: 6 (8 votes) · LW(p) · GW(p)

Which areas of science or angles of analysis currently seem relevant to the FAI problem, and which of those you've studied seem irrelevant? What about those that fall on the "AI" side of things? Fundamental math? Physics?

comment by mormon2 · 2009-11-11T17:22:28.691Z · score: 3 (5 votes) · LW(p) · GW(p)

I think we can take a good guess at the last part of this question and what he will say: Bayes' Theorem, Statistics, basic Probability Theory, Mathematical Logic, and Decision Theory.

But why ask the question with this statement made by EY: "Since you don't require all those other fields, I would like SIAI's second Research Fellow to have more mathematical breadth and depth than myself." (http://singinst.org/aboutus/opportunities/research-fellow)

My point is he has answered this question before...

I add to this my own question; actually, it is more of a request: I'd like to see EY demonstrate TDT with some worked-out math on a whiteboard or some such in the video.

comment by Jach · 2009-11-13T08:14:46.107Z · score: 5 (7 votes) · LW(p) · GW(p)

Within the next 20 years or so, would you consider having a child and raising him/her to be your successor? Would you adopt? Have you donated sperm?

Edit: the first two questions dependent on you not being satisfied by the progress on FAI.

comment by anonym · 2009-11-13T06:56:43.060Z · score: 5 (5 votes) · LW(p) · GW(p)

Please estimate your probability of dying in the next year (5 years). Assume your estimate is perfectly accurate. What additional probability of dying in the next year (5 years) would you willingly accept for a guaranteed and safe increase of one (two, three) standard deviation(s) in terms of intelligence?

comment by wedrifid · 2009-11-14T00:07:42.315Z · score: 1 (1 votes) · LW(p) · GW(p)

Assume your estimate is perfectly accurate.

Does this matter?

comment by anonym · 2009-11-14T18:49:15.344Z · score: 1 (1 votes) · LW(p) · GW(p)

I'm not sure if it matters. I was imagining potentially different answers to the 2nd part of the question based on whether one includes additional adjustments and compensating factors to allow for the original estimate being inaccurate -- and trying to prevent those adjustments to get at the core issue.

comment by roland · 2009-11-12T21:30:50.667Z · score: 5 (9 votes) · LW(p) · GW(p)

Akrasia

Eliezer, you mentioned suffering from writer's molasses and your solution was to write daily on ob/lw. I consider this a clever and successful overcoming of akrasia. What other success stories from your life in relation to akrasia could you share?

comment by patrissimo · 2009-11-12T18:58:14.660Z · score: 5 (7 votes) · LW(p) · GW(p)

Do you think that just explaining biases to people helps them substantially overcome those biases, or does it take practice, testing, and calibration to genuinely improve one's rationality?

comment by roland · 2009-11-19T01:09:22.530Z · score: 2 (2 votes) · LW(p) · GW(p)

I can partially answer this. In the book "The Logic of Failure", Dietrich Dorner tested humans with complex systems they had to manage. It turned out that the group that got specific instructions on how to deal with complex systems did not perform better than the control group.

EDIT: Dorner's explanation was that just knowing was not enough, individuals had to actually practice dealing with the system to improve. It's a skillset.

comment by Furcas · 2009-11-16T17:33:36.980Z · score: 4 (4 votes) · LW(p) · GW(p)

Eliezer, in Excluding the Supernatural, you wrote:

Ultimately, reductionism is just disbelief in fundamentally complicated things. If "fundamentally complicated" sounds like an oxymoron... well, that's why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren't.

"Fundamentally complicated" does sound like an oxymoron to me, but I can't explain why. Could you? What is the contradiction?

comment by anonym · 2009-11-17T07:31:33.943Z · score: 6 (6 votes) · LW(p) · GW(p)

Isn't the contradiction that "complicated" means having more parts/causes/aspects than are readily comprehensible, and "fundamental" things never are complicated, because if they were, they could be broken down into more fundamental things that were less complicated? The fact that things invariably get simpler and more basic as we move closer to the foundational level is in tension with things getting more complicated as we move down.

comment by roland · 2009-11-16T17:21:42.791Z · score: 4 (4 votes) · LW(p) · GW(p)

Boiling down rationality

Eliezer, if you only had 5 minutes to teach a human how to be rational, how would you do it? The answer has to be more or less self-contained so "read my posts on lw" is not valid. If you think that 5 minutes is not enough you may extend the time to a reasonable amount, but it should be doable in one day at maximum. Of course it would be nice if you actually performed the answer in the video. By perform I mean "Listen human, I will teach you to be rational now..."

EDIT: When I said perform I meant it as opposed to telling how to, so I would prefer Eliezer to actually teach rationality in 5 minutes instead of talking about how he would teach it.

comment by MarkHHerman · 2009-11-15T23:31:27.462Z · score: 4 (4 votes) · LW(p) · GW(p)

To what extent is the success of your FAI project dependent upon the reliability of the dominant paradigm in Evolutionary Psychology (a la Tooby & Cosmides)?

Old, perhaps off-the-cuff, and perhaps outdated quote (9/4/02): “ well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology” (http://www.imminst.org/forum/lofiversion/index.php/t144.html).

Thanks for all your hard work.

comment by Steve_Rayhawk · 2009-11-11T23:46:02.941Z · score: 4 (6 votes) · LW(p) · GW(p)

5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be.

This disadvantages questions which are posted late (to a greater extent than would give people an optimal incentive to post questions early). (It also disadvantages questions which start with a low number of upvotes by historical accident and then are displayed low on the page, and are not viewed as much by users who might upvote them.)

It's not your fault; I just wish the LW software had a statistical model which explained observed votes and replies in terms of a latent "comment quality level", because of situations like this, where it could matter if a worse comment got a high rating while a better comment got a low one. (I also wish forums with comment ratings used ideas related to value of information, optimal sequential preference elicitation, and/or n-armed bandit problems to decide when to show users comments whose subjective latent quality has a low marginal mean but a high marginal variance, in case the (")true(") quality of a comment is high, because of the possibility that a user will rate the comment highly and let the forum software know that it should show the comment to other users.)
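The exploration behaviour this comment wishes for (occasionally showing comments whose latent quality has high posterior variance, in case they turn out to be good) is exactly what a Beta-Bernoulli Thompson sampler does. A minimal sketch, purely illustrative (the `pick_comment` function and the vote counts are invented for the example; this is not anything LW or Reddit actually implements):

```python
import random

def pick_comment(comments):
    """Thompson sampling over latent comment quality: model each
    comment's upvote probability as Beta(up+1, down+1), draw one
    sample per comment, and show the comment with the highest draw.
    High-variance posteriors (new comments) win some fraction of
    draws, so they still get shown and can collect votes."""
    best, best_draw = None, -1.0
    for c in comments:
        draw = random.betavariate(c["up"] + 1, c["down"] + 1)
        if draw > best_draw:
            best, best_draw = c, draw
    return best

comments = [
    {"id": "established", "up": 40, "down": 10},  # Beta(41, 11), mean ~0.79
    {"id": "new", "up": 0, "down": 0},            # Beta(1, 1), uniform
]

# Over many page views, the unrated comment is shown a nontrivial
# fraction of the time despite having no votes yet.
shown_new = sum(pick_comment(comments)["id"] == "new" for _ in range(2000))
assert 0 < shown_new < 2000
```

As votes accumulate, each posterior sharpens and the sampler converges toward always showing the genuinely better comments, which is the explore/exploit trade-off of the n-armed bandit framing above.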

comment by JamesAndrix · 2009-11-12T18:05:21.731Z · score: 5 (5 votes) · LW(p) · GW(p)

Reddit has implemented a 'best' view which tries to compensate for this kind of thing: http://blog.reddit.com/2009/10/reddits-new-comment-sorting-system.html

LW is based on reddit's source code, so it should be relatively easy to integrate.
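For reference, the ranking the linked Reddit post describes is the lower bound of the Wilson score confidence interval on the upvote proportion. A minimal sketch of the standard textbook formula (the function name is mine, and Reddit's production code may differ in details):

```python
import math

def wilson_lower_bound(upvotes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote
    proportion, given `upvotes` out of `total` votes. z = 1.96
    corresponds to 95% confidence. Low-vote comments are penalized
    for uncertainty rather than ranked by raw average."""
    if total == 0:
        return 0.0
    phat = upvotes / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# A comment at 1/1 votes (raw average 1.0) ranks below one at 90/100
# (raw average 0.9), because the former's estimate is far less certain.
assert wilson_lower_bound(90, 100) > wilson_lower_bound(1, 1)
```

This addresses the grandparent's complaint about early-vote accidents: a comment with few votes gets a conservative score, so it isn't permanently buried (or inflated) by its first one or two raters.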

comment by Douglas_Knight · 2009-11-12T00:19:07.488Z · score: 1 (1 votes) · LW(p) · GW(p)

There's probably significant value in the low-hanging fruit of just tweaking the parameters in the current algorithm (which are currently set for the much larger reddit!). Don't let the perfect be the enemy of the good.

comment by pwno · 2009-11-11T17:47:11.191Z · score: 4 (8 votes) · LW(p) · GW(p)

How would a utopia deal with humans' seemingly contradictory desires - the desire to rise in status and the desire to help lower-status people rise in status? Helping lower-status people go up in status will hurt our own status positions. I remember you mentioning that in your utopia you would prefer not to reconfigure the human mind. So how would you deal with such a problem?

(If someone finds the premise of my question wrong, please point it out)

comment by [deleted] · 2009-11-11T19:17:31.167Z · score: 1 (3 votes) · LW(p) · GW(p)

I don't think most people want to actually raise people who are lower status than themselves up to higher than themselves. I actually don't think that most people want to raise others' status very much. They seem to typically be more concerned with raising the material welfare of people who are significantly worse off, which doesn't necessarily change status. The main status effect of altruistic behavior is to raise the status of the altruist. For instance, consider the quote "It is more blessed to give than to receive." (Acts 20:35). If we think of "blessedness" as similar to status (status in the eyes of god maybe?) then a "status altruist" would read that and decide to always receive and never give, in order to raise the status of others. The traditional altruistic interpretation, though, is to give, and therefore become more blessed than the poor suckers who you are giving to.

comment by pwno · 2009-11-11T19:27:11.428Z · score: 0 (0 votes) · LW(p) · GW(p)

Wouldn't you, in a perfect world, have everyone go up in status without your status being affected? Wouldn't that be the utilitarian thing to do?

comment by [deleted] · 2009-11-11T20:02:47.170Z · score: 1 (1 votes) · LW(p) · GW(p)

I am not an altruist. I would like my status to be higher than it is. I would like the few people I truly care about to have higher status. Otherwise, I really don't care, except for enjoying when certain high-profile d-bags lose a lot of status. But that's just me. My more general point was that even those altruistic sorts who do care deeply about others don't seem to generally want to raise others up above themselves, either in terms of status or materially. Can you think of one counter-example? I can't, but I'm not trying very hard.

comment by pwno · 2009-11-11T20:09:33.897Z · score: 1 (1 votes) · LW(p) · GW(p)

Like I mentioned in another comment, I would feel bad for lower-status people who didn't have the same opportunity I had to reach my current level of status. It would feel like I am thriving in this world simply for being born lucky. And AFAIK, virtually no one of lower status than me had the same opportunity to reach the status level I am at now.

comment by Vladimir_Nesov · 2009-11-11T20:09:45.070Z · score: -1 (9 votes) · LW(p) · GW(p)

But that's just me.

It's almost impossible for one person's morality to be significantly different from the standard. It's more likely that one who thinks themself different is simply confused.

comment by Zack_M_Davis · 2009-11-11T20:21:58.716Z · score: 9 (9 votes) · LW(p) · GW(p)

Um, what standard of significance are you using here? Yes, humans are extremely similar compared to the vastness of that which is possible, but that doesn't mean the remaining difference isn't ridiculously important.

comment by Vladimir_Nesov · 2009-11-12T01:32:14.134Z · score: 3 (3 votes) · LW(p) · GW(p)

Um, what standard of significance are you using here?

The standard implied by the remark I was commenting on. Literally not caring about other people seems like something you may believe about yourself, but which can't be true.

comment by Nick_Tarleton · 2009-11-12T01:48:00.084Z · score: 0 (0 votes) · LW(p) · GW(p)

The standard implied by the remark I was commenting on.

I read the original post as being about the ordinary human domain, implying an ordinary human-relative standard of significance.

Literally not caring about other people seems like something you may believe about yourself, but which can't be true.

This is ambiguous in two ways: which other people (few or all), and what sort of valuation (subjunctive revealed preference, some construal of reflective equilibrium)? I suppose it's plausible that for every person, some appeal to empathy would sincerely motivate that person.

comment by Tyrrell_McAllister · 2009-11-11T20:32:42.713Z · score: 4 (4 votes) · LW(p) · GW(p)

The underlying genetic machinery that produces an individual's morality is a human universal. But the production of the morality is very likely dependent upon non-genetic factors. The psychological unity of humankind no more implies that people have the same morality than it implies that they have the same favorite foods.

comment by Stuart_Armstrong · 2009-11-11T20:35:38.759Z · score: 2 (4 votes) · LW(p) · GW(p)

Yes, but it's very easy for the actual large-scale consequences of a human morality to be very different. We all feel compassion for friends and fear of strangers; but when we scale our morality to the size of humanity, the difference is huge depending on whether the compassion or the fear dominates.

Hitler and Gandhi may not have been that different, but the consequences of their actions were.

comment by Nick_Tarleton · 2009-11-11T20:33:53.829Z · score: 0 (0 votes) · LW(p) · GW(p)

It's almost impossible for one person's morality to be significantly different from the standard.

Really? Yes, of course almost everyone falls in the tiny-in-absolute-terms human space, but significant (in ordinary language which doesn't seem confused enough to abandon) differences within that space exist with respect to endorsed moralities (to begin with, whether one endorses any abstract moral theory), and to a lesser extent WRT revealed preferences. (WRT reflective equilibria, who the hell knows?)

comment by eirenicon · 2009-11-11T19:43:14.523Z · score: 1 (1 votes) · LW(p) · GW(p)

That's not possible if status is zero-sum, which it appears to be. If everyone is equal in status, wouldn't it be meaningless, like everyone being equally famous?

Actually, let me qualify. Everyone being equally famous wouldn't necessarily be meaningless, but it would change the meaning of famous - instead of knowing about a few people, everyone would know about everyone. It would certainly make celebrity meaningless. I'm not really up to figuring out what equivalent status would mean.

comment by pwno · 2009-11-11T20:02:39.326Z · score: 1 (1 votes) · LW(p) · GW(p)

Equivalent status is not desirable, people would just find ways of going up in status - or at least want to. Which is where the contradicting desires fall in. I guess in any utopia there will always be a status struggle. Maybe what we want is an equal opportunity at going up in status. That way we don't feel bad for going up in status ourselves.

comment by Will_Euler · 2009-11-19T02:11:41.485Z · score: 3 (3 votes) · LW(p) · GW(p)

How does the goal of acquiring self-knowledge (for current humans) relate to the goal of happiness (insofar as such a goal can be isolated)?

If one aimed to be as rational as possible, how would this help someone (today) become happy? You have suggested (in conversation) that there might be a tradeoff such that those who are not perfectly rational might exist in an "unhappy valley". Can you explain this phenomenon, including how one could find themselves in such a valley (and how they might get out)? How much is this term meant to indicate an analogy with an "uncanny valley"?

Less important, but related: What self-insights from hedonic/positive psychology have you found most revealing about people's ability to make choices aimed at maximizing happiness (e.g. limitations of affective forecasting, paradox of choice, impact of upward vs. downward counterfactual thinking on affect, mood induction and creativity/cognitive flexibility, etc.)?

(I feel these are sufficiently intertwined to constitute one general question about the relationship between self-knowledge and happiness.)

comment by anonym · 2009-11-14T21:49:49.298Z · score: 3 (5 votes) · LW(p) · GW(p)

If you conceptualized the high-level tasks you must attend to in order to achieve (1) FAI-understanding and (2) FAI-realization in terms of a priority queue, what would be the current top few items in each queue (with numeric priorities on some arbitrary scale)?

comment by imaxwell · 2009-11-14T01:31:25.981Z · score: 3 (3 votes) · LW(p) · GW(p)

Previously, in Ethical Injunctions and related posts, you said that, for example,

You should never, ever murder an innocent person who's helped you, even if it's the right thing to do; because it's far more likely that you've made a mistake, than that murdering an innocent person who helped you is the right thing to do.

It seems like you're saying you will not and should not break your ethical injunctions because you are not smart enough to anticipate the consequences. Assuming this interpretation is correct, how smart would a mind have to be in order to safely break ethical injunctions?

comment by wedrifid · 2009-11-14T02:05:49.833Z · score: 3 (3 votes) · LW(p) · GW(p)

how smart would a mind have to be in order to safely break ethical injunctions?

Any given mind could create ethical injunctions of a suitable complexity that are useful to it given its own technical limitations.

comment by Larks · 2009-11-13T22:38:17.811Z · score: 3 (7 votes) · LW(p) · GW(p)

What do you estimate the utility of Less Wrong to be?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-13T22:51:10.071Z · score: 11 (13 votes) · LW(p) · GW(p)

Roughly 4,250 expected utilons.

comment by Unnamed · 2009-11-14T02:24:05.676Z · score: 8 (8 votes) · LW(p) · GW(p)

Could you please convert to dust specks?

comment by timtyler · 2009-11-13T23:16:32.527Z · score: 4 (4 votes) · LW(p) · GW(p)

Well yes: the question was a bit ambiguous.

Maybe one should adopt a universal standard yardstick for this kind of thing, though - so such questions can be answered meaningfully. For that we need something that everyone (or practically everyone) values. I figure maybe the love of a cute kitten could be used as a benchmark. Better yardstick proposals would be welcome, though.

comment by Larks · 2009-11-13T23:56:49.481Z · score: 5 (5 votes) · LW(p) · GW(p)

If only there existed some medium of easy comparison, such that we could easily compare the values placed on common goods and services...

comment by timtyler · 2009-11-14T00:01:04.120Z · score: 1 (1 votes) · LW(p) · GW(p)

Exactly: the elephant in my post ;-)

comment by Larks · 2009-11-14T00:17:32.568Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't think elephants are a very practical yardstick. For a start, they're of varying size. I mean, apparently they can fit in posts now!

comment by DanArmak · 2009-11-14T00:23:05.971Z · score: 2 (4 votes) · LW(p) · GW(p)

Way to Other-ize dog people.

comment by Alicorn · 2009-11-13T23:24:56.852Z · score: 2 (2 votes) · LW(p) · GW(p)

It'd have to be a funny yardstick. Almost nothing we value scales linearly. I would start getting tired of kittens after about 4,250 of them had gone by.

comment by timtyler · 2009-11-13T23:59:19.217Z · score: 1 (1 votes) · LW(p) · GW(p)

Velocity runs into diminishing returns too near the speed of light - but it is still useful to try and measure it - and a yardstick can help with that.

comment by Furcas · 2009-11-13T22:54:34.295Z · score: 0 (0 votes) · LW(p) · GW(p)

That's all?

:-(

comment by Tyrrell_McAllister · 2009-11-13T23:00:00.497Z · score: 2 (2 votes) · LW(p) · GW(p)

Keep in mind that that's only up to an affine transformation ;).

comment by MichaelHoward · 2009-11-13T23:18:50.283Z · score: 0 (0 votes) · LW(p) · GW(p)

All? All? That buys you a few hundred tech shares, or The Ultimate answer to Life, the Universe and Everything Universe Takeovers and a Half! :-)

comment by retired_phlebotomist · 2009-11-13T07:11:24.756Z · score: 3 (11 votes) · LW(p) · GW(p)

What does the fact that when you were celibate you espoused celibacy say about your rationality?

comment by wuwei · 2009-11-12T04:24:48.875Z · score: 3 (3 votes) · LW(p) · GW(p)

Do you think that morality or rationality recommends placing no intrinsic weight or relevance on either a) backwards-looking considerations (e.g. having made a promise) as opposed to future consequences, or b) essentially indexical considerations (e.g. that I would be doing something wrong)?

comment by ajayjetti · 2009-11-12T03:23:32.460Z · score: 3 (5 votes) · LW(p) · GW(p)

Are you a meat-eater?

comment by Alicorn · 2009-11-12T03:32:50.372Z · score: 2 (2 votes) · LW(p) · GW(p)

Looks like.

comment by FeministX · 2009-11-11T04:51:12.619Z · score: 3 (7 votes) · LW(p) · GW(p)

I have questions. You say we must have one question per comment, so I will have to make multiple posts.

1) Is there a domain where rational analysis does not apply?

comment by CannibalSmith · 2009-11-11T10:47:14.034Z · score: 3 (3 votes) · LW(p) · GW(p)

Improvisational theater. (I'm not Eliezer, I know.)

comment by nazgulnarsil · 2009-11-12T16:31:12.963Z · score: 4 (4 votes) · LW(p) · GW(p)

actually... http://greenlightwiki.com/improv/Status http://craigtovey.blogspot.com/2008/02/popular-comedy-formulas.html

Learning this stuff allowed me (an introvert) to successfully fake extroversion for my own benefit when I need to.

comment by ABranco · 2009-11-14T13:14:15.321Z · score: 1 (1 votes) · LW(p) · GW(p)

Oh, I've done impro myself, and I agree.

Well, partly.

The dynamics of what works and what doesn't have lots of explanation behind them, and the storytelling is all pretty structured. However, when you're right there on the scene there's no room for explicit rational analysis, sure.

comment by MichaelVassar · 2009-11-13T05:37:07.535Z · score: 2 (2 votes) · LW(p) · GW(p)

Analysis takes time, so anywhere time-pressured. Rational analysis, crudely speaking, is the proper use of 'system 2'. Most domains work better via 'system 1', with 'system 2' watching and noticing what's going wrong in order to analyze problems or nudge habits.

comment by MarkHHerman · 2009-11-18T01:04:20.380Z · score: 2 (2 votes) · LW(p) · GW(p)

What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?

(E.g., Is there an outreach objective? If so, for what purpose?)

comment by mikerpiker · 2009-11-16T03:39:16.558Z · score: 2 (2 votes) · LW(p) · GW(p)

It seems like, if I'm trying to make up my mind about philosophical questions (like whether moral realism is true, or whether free will is an illusion) I should try to find out what professional philosophers think the answers to these questions are.

If I found out that 80% of professional philosophers who think about metaethical questions think that moral realism is true, and I happen to be an anti-realist, then I should be far less certain of my belief that anti-realism is true.

But surveys like this aren't done in philosophy (I don't think). Do you think that the results of surveys like this (if there were any) should be important to a person trying to make a decision about whether or not to believe in free will, or be a moral realist, or whatever?

comment by Jack · 2009-11-16T22:18:16.960Z · score: 4 (4 votes) · LW(p) · GW(p)

My answer to this depends on what you mean by "professional philosophers who think about". You have to be aware that subfields have selection biases. For example, the percentage of philosophers of religion who think God exists is much, much larger than the percentage of professional philosophers generally who think God exists. This is because if God does not exist, philosophy of religion ceases to be a productive area of research. Conversely, if you have an irrational attachment to the idea that God exists, then you are likely to spend an inordinate amount of time trying to prove one exists. This issue is particularly bad with regard to religion, but it is in some sense generalizable to all or most other subfields. Philosophy is also a competitive enterprise, and there are various incentives to publishing novel arguments. This means that in any given subfield, views that are unpopular among philosophers generally will be overrepresented.

So the circle you draw around "professional philosophers who think about [subfield x] questions" needs to be small enough to target experts but large enough that you don't limit your survey to those philosophers who are very likely to hold a view you are surveying in virtue of the area they work in. I think the right circle is something like 'professional philosophers who are equipped to teach an advanced undergraduate course in the subject'.

Edit: The free will question will depend on what you want out of a conception of free will. But the understanding of free will that most lay people have is totally impossible.

comment by Alicorn · 2009-11-16T22:30:24.889Z · score: 2 (2 votes) · LW(p) · GW(p)

Seconded. There are a lot of libertarians-about-free-will who study free will, but nobody I've talked to has ever heard of anyone changing their mind on the subject of free will (except inasmuch as learning new words to describe one's beliefs counts) - so this has to be almost entirely due to more libertarians finding free will an interesting thing to study.

comment by Blueberry · 2009-11-16T22:49:18.114Z · score: 2 (2 votes) · LW(p) · GW(p)

I've definitely changed my mind on free will. I used to be an incompatibilist with libertarian leanings. After reading Daniel Dennett's books, I changed my mind and became a compatibilist soft determinist.

comment by Jack · 2009-11-16T22:54:44.504Z · score: 0 (0 votes) · LW(p) · GW(p)

Are you a professional philosopher / were you a professional philosopher when you were an incompatibilist with libertarian leanings? I'd say the vast majority of those untrained in philosophy hold the view you held, and the most rational/intelligent of them would change their minds once confronted with a decent compatibilist argument.

Edit: I'm being a little unfair. There are plenty of smart people who disagree with us.

comment by Blueberry · 2009-11-16T23:05:22.373Z · score: 1 (1 votes) · LW(p) · GW(p)

No, I wasn't, and I agree with you. Defending philosophical positions as a career creates a bias where you're less likely to change your mind (see Cialdini's work on congruence: e.g. POWs in communist brainwashing camps who wrote essays on why communism was good were more likely to support communism after release). But even so, professional philosophers do change their minds once in a while.

comment by Jack · 2009-11-16T23:10:08.206Z · score: 1 (1 votes) · LW(p) · GW(p)

But even so, professional philosophers do change their mind once in a while.

Absolutely! I tentatively hold the thesis that professional philosophers even make progress on understanding some issues. But there seem to be a couple positions that professional philosophers rarely sway from once they hold those positions and I think Alicorn is right that metaphysical libertarianism is one of these views.

comment by Jack · 2009-11-16T22:48:31.615Z · score: 2 (2 votes) · LW(p) · GW(p)

Free will libertarianism is also infected with religious philosophy. There are certainly some libertarians with secular reasons for their positions, but a lot of the support for this position comes from those whose religious world view requires radical free will, and if they didn't believe in God they wouldn't be libertarians. Same goes for a lot of substance dualists, frankly.

comment by mikerpiker · 2009-11-17T04:30:52.787Z · score: 0 (0 votes) · LW(p) · GW(p)

Jack:

I think I agree with everything you say in response to my original post.

It seems like you basically agree with me that facts about the opinions of philosophers who work in some area (where this group is suitably defined to avoid the difficulties you point out) should be important to us if we are trying to figure out what to believe in that area.

Why aren't studies being carried out to find out what these facts are? Do you think most philosophers would not agree that they are important?

comment by Jack · 2009-11-23T22:02:22.054Z · score: 1 (1 votes) · LW(p) · GW(p)

Yeah, I've felt for a while now that philosophers should do a better job explaining and popularizing the conclusions they come to. I've never been able to find literature reviews or meta-analyses, either. Part of the problem is definitely that a lot of philosophers are skeptical that they have anything true or interesting to say to non-philosophers. Also, despite some basic agreement about what is definitely wrong, philosophers hold so many different views on a lot of issues that it wouldn't be very educational to poll them. Also, a lot of philosophy involves conceptual analysis, and since it is hard to poll a philosophical issue without resorting to contested concepts, you might have a lot of respondents refusing to accept the premises of the question.

But none of these arguments are very good. If I ever make it in the field I'll put one together.

comment by rwallace · 2009-11-17T00:13:25.731Z · score: -3 (5 votes) · LW(p) · GW(p)

I don't think most laymen are confused about free will at all. I think they have an entirely correct notion of free will: when your actions are caused by you rather than by an outside agency. I think it is philosophers and intellectuals generally who came up with the strange, confused notion that free will means your actions are uncaused.

comment by Tyrrell_McAllister · 2009-11-17T00:15:45.323Z · score: 4 (4 votes) · LW(p) · GW(p)

If laypeople didn't have a confused notion of free will, they wouldn't become so consistently confused when they learn elementary facts from physics or neuroscience.

comment by Jack · 2009-11-17T00:38:04.655Z · score: 2 (2 votes) · LW(p) · GW(p)

If only thinking made it so! Alas, even we confused philosophers run experiments. The percentage of laymen who express incompatibilist intuitions is around 60-67%.

comment by RobinZ · 2009-11-17T01:02:01.463Z · score: 1 (1 votes) · LW(p) · GW(p)

...with the caveat that other studies have shown different results.

comment by rwallace · 2009-11-17T01:05:47.672Z · score: -1 (1 votes) · LW(p) · GW(p)

Interesting paper, thanks! From a quick skim it seems to me that when asked moral questions - "is Fred morally responsible for his actions?" - most of the subjects expressed compatibilist intuitions. They only start expressing incompatibilist intuitions when asked to comment on abstract philosophical statements of a kind one would not normally encounter. So it seems to me that the data upholds my claim, with the addendum that when a layman is asked to dabble in philosophy he does indeed fall into the classic error into which professional philosophers have fallen.

comment by Jack · 2009-11-17T01:32:56.713Z · score: 1 (1 votes) · LW(p) · GW(p)

They only start expressing incompatibilist intuitions when asked to comment on abstract philosophical statements of a kind one would not normally encounter.

  1. This is the Nichols and Knobe hypothesis, which argued that people in general are incompatibilists but that language which generates strong affective responses will nonetheless lead people to import moral responsibility. The hypothesis is tested by taking some vignette about free will and making the action significantly more condemnable. So people will think that someone who gives to others in a deterministic world is not responsible, but someone who murders others in that world is. Every other feature of the story remains the same. The stories that generate incompatibilist intuitions aren't different from those that generate compatibilist intuitions except in the emotional/morally condemnable content. The former aren't more abstract or philosophical. The better interpretation of this hypothesis is that people's actual intuitions about determinism get overrun by a desire to signal that they do not support the evil action committed in the vignette. Feel free to google Knobe's work on intentionality for evidence that this phenomenon is more general.

  2. In any case, the Nichols and Knobe hypothesis isn't the position of the article. The authors set out to test the hypothesis and found it false. Instead, they found that people responded to vignettes mostly consistently. About 60-67% gave incompatibilist responses, 22-9% gave compatibilist responses and mixed/inconsistent responses were returned at a high single digit rate.

comment by Morendil · 2009-11-12T16:59:44.261Z · score: 2 (4 votes) · LW(p) · GW(p)

Well, Eliezer's reply to this comment prompts a follow-up question:

In "Free to optimize", you alluded to "the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together". Can you say more about what you imagine such rules might be ?

comment by Kutta · 2009-11-13T01:34:16.631Z · score: 2 (2 votes) · LW(p) · GW(p)

I think that there isn't any point in attempting to come up with anything more exact than the general musings of Fun Theory. It really takes a superintelligence and knowledge of CEV to conceive of such rules (and it's not even guaranteed that there'd be anything that resembles "rules" per se).

comment by whpearson · 2009-11-11T18:21:52.607Z · score: 2 (2 votes) · LW(p) · GW(p)

In reference to this comment, can you give us more information about the interface between the modules? Also, what leads you to believe that a human-level intelligence can be decomposed nicely in such a fashion?

comment by komponisto · 2009-11-11T06:12:39.302Z · score: 2 (10 votes) · LW(p) · GW(p)

Sticking with biography/family background:

Anyone who has read this poignant essay knows that Eliezer had a younger brother who died tragically young. If it is not too insensitive of me, may I ask what the cause of death was?

comment by Kutta · 2009-11-11T07:07:47.900Z · score: 4 (4 votes) · LW(p) · GW(p)

It's been discussed somewhere in the second half of this podcast:

http://www.speakupstudios.com/Listen.aspx?ShowUID=333035

comment by MarkHHerman · 2009-11-18T02:56:58.865Z · score: 1 (1 votes) · LW(p) · GW(p)

Do you think a cog psych research program on “moral biases” might be helpful (e.g., regarding existential risk reduction)?

[The conceptual framework I am working on (philosophy dissertation) targets a prevention-amenable form of "moral error" that requires (a) the perpetrating agent's acceptance of the assessment of moral erroneousness (i.e., individual relativism, to avoid categoricity problems), and (b) that the agent, for moral reasons, would not have committed the error had he been aware of the erroneousness (i.e., sufficiently motivating vs. moral indifference, laziness, and/or akrasia).]

comment by Nick_Tarleton · 2009-11-18T19:48:28.256Z · score: 2 (2 votes) · LW(p) · GW(p)

More generally, what kind of psychology research would you most like to see done?

comment by PeteG · 2009-11-13T03:10:33.605Z · score: 1 (11 votes) · LW(p) · GW(p)

Of all the people you have ever met in your life, whom would you consider, if anyone, to be just a hair's breadth away from your level? If properly taught, do you think this person could become the next Eliezer Yudkowsky?

comment by taa21 · 2009-11-14T19:27:25.476Z · score: 0 (4 votes) · LW(p) · GW(p)

Is this a joke?

comment by AndrewKemendo · 2009-11-11T12:34:23.280Z · score: 1 (7 votes) · LW(p) · GW(p)

Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?

Worded differently, how have you come to the conclusion that "maximizing utility" is the optimized goal as opposed to say virtue seeking?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-11T18:55:52.387Z · score: 4 (4 votes) · LW(p) · GW(p)

how much thought have you put into developing your personal epistemological philosophy?

...very little, you know me, I usually just wing that epistemology stuff...

(seriously, could you expand on what this question means?)

comment by AndrewKemendo · 2009-11-12T02:06:19.664Z · score: 0 (0 votes) · LW(p) · GW(p)

Ha, fair enough.

I often see references to maximizing utility and individual utility functions in your writing, and it would seem to me (unless I am misinterpreting your use) that you are implying that hedonic (felicific) calculation is the optimal way to determine what is correct when applying counterfactual outcomes to optimize decision making.

I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making was through utilitarianism as opposed to, say, ethical egoism or virtue (not to equivocate). Or perhaps your reference is purely abstract and does not invoke the felicific calculation.

comment by Nick_Tarleton · 2009-11-12T02:08:36.384Z · score: 1 (1 votes) · LW(p) · GW(p)

hedonic (fellicific) calculation

See Not For The Sake of Happiness (Alone).

I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making was through utilitarianism as opposed to say ethical egoism or virtue (not to equivocate).

See The "Intuitions" Behind "Utilitarianism" for a partial answer.

comment by AndrewKemendo · 2009-11-12T06:29:01.743Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, I remember reading both and scratching my head, because both seemed to beat around the bush and not address the issues explicitly. Both lean too much on addressing the subjective aspect of non-utility-based calculations, which in my mind is a red herring.

Admittedly I should have referenced it and perhaps the issue has been addressed as well as it will be. I would rather see this become a discussion as in my mind it is more important than any of the topics dealt with daily here - however that may not be appropriate for this particular thread.

comment by CronoDAS · 2009-11-12T07:14:15.066Z · score: 2 (2 votes) · LW(p) · GW(p)

"Preference satisfaction utilitarianism" is a lot closer to Eliezer's ethics than hedonic utilitarianism. In other words, there are more important things to maximize than happiness.

comment by Psy-Kosh · 2009-11-11T13:47:31.170Z · score: 2 (2 votes) · LW(p) · GW(p)

*blinks* I'm curious as to what it is you are asking. A utility function is just a way of encoding and organizing one's preferences/values. Okay, there are a couple of additional requirements, like internal consistency (if you prefer A to B and B to C, you'd better prefer A to C) and such, but other than that, it's just a convenient way of talking about one's preferences.

The goal isn't "maximize utility", but rather "maximizing utility" is a way of stating what it is you're doing when you're working to achieve your goals. Or did I completely misunderstand?
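A minimal sketch of this point, with hypothetical outcomes, utilities, and probabilities (none of these numbers come from the thread): the utility function merely encodes a preference ordering, and "maximizing expected utility" just means picking the action whose probability-weighted utility is highest.

```python
# Hypothetical outcomes and utilities. The numbers encode a preference
# ordering (A preferred to B, B preferred to C) plus strength of preference.
utility = {"A": 10.0, "B": 6.0, "C": 1.0}

# Two hypothetical actions, each a probability distribution over outcomes.
actions = {
    "safe":  {"A": 0.2, "B": 0.7, "C": 0.1},
    "risky": {"A": 0.6, "B": 0.0, "C": 0.4},
}

def expected_utility(dist):
    """Probability-weighted sum of utilities over outcomes."""
    return sum(p * utility[o] for o, p in dist.items())

# "Maximizing expected utility" = choosing the action with the highest score.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Note that nothing here says *what* the values are; the formalism is agnostic about whether "A" is happiness, virtue, or anything else.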

comment by Johnicholas · 2009-11-11T17:44:01.152Z · score: 1 (1 votes) · LW(p) · GW(p)

I think there has to be more to utility function talk than "convenience" - for one thing, it's not more convenient than preference talk, in general. Consider an economic utility function, valuing bundles of apples and oranges. If someone's preferences are summarizable by U(apples, oranges)=sqrt(apples*oranges), that might be convenient, but there's no free lunch. No compression can be achieved without assumptions about the prior distribution. Believing that preferences tend to have terse expressions in functional talk is a claim about the actual distribution of preferences in the world. The belief that maximizing utility is a perspicuous way of expressing "behave correctly" is something that one has to have evidence for.
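The economic example above can be made concrete (the bundles below are hypothetical): the function U(apples, oranges) = sqrt(apples * oranges) summarizes a preference ordering over bundles, and ranking by U recovers that ordering, with transitivity coming for free from comparing real numbers.

```python
import math

def u(apples, oranges):
    # The utility function from the comment above: U = sqrt(apples * oranges).
    return math.sqrt(apples * oranges)

# Hypothetical bundles of (apples, oranges).
bundles = [(1, 2), (4, 4), (2, 2), (3, 3)]

# Sorting by utility recovers a (transitive) preference ordering over bundles.
ranked = sorted(bundles, key=lambda b: u(*b), reverse=True)
```

The point in the comment stands: whether a person's actual preferences admit such a terse functional summary is an empirical assumption, not something the formalism guarantees.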

My (very partial) understanding of virtue morality is that virtue ethicists believe that "behave correctly" is well expressed in terms of virtues.

comment by Psy-Kosh · 2009-11-11T18:32:24.375Z · score: 1 (1 votes) · LW(p) · GW(p)

I didn't mean convenient in the sense of compressibility, but convenient in the sense of representing our preference ordering in a form that lets one then talk about stuff like "how can I get the world into the best possible state, where 'best' is in terms of my values?" in terms of maximizing utility, and when combined with uncertainty, maximizing expected utility.

I just meant "utility doesn't automatically imply a specific set of values/virtues. It's more a way of organizing your virtues so that you can at least formally define optimal actions, giving you a starting point to look for ways to approximately compute such things, etc.."

Or did I misunderstand your point completely?

comment by Johnicholas · 2009-11-11T20:39:16.459Z · score: 3 (3 votes) · LW(p) · GW(p)

The phrase "how can I get the world into the best possible state" is explicitly consequentialist. Non-consequentialists (e.g. "The end does not justify the means") do not admit that correct behavior is getting the world into the best possible state.

Non-utilitarians probably perceive suggestions of maximizing utility, maximizing expected utility, and (in particular) approximating those two as very dangerous and likely to lead to incorrect behavior.

The original poster implied that there is a difference between seeking to maximize utility and (for example) virtue seeking. I'm trying to explain in what sense the original poster had a real point. Not everyone is a utilitarian, and saying "in principle, I could construct a utility function from your preferences" doesn't make everyone a utilitarian.

comment by Psy-Kosh · 2009-11-11T20:44:01.875Z · score: 0 (0 votes) · LW(p) · GW(p)

Really, the non-consequentialism can be rephrased as a consequentialist philosophy by simply including the means, ie, the history, as part of the "state"... ie, assigning lower value to getting to a certain state by bad methods vs good methods.

Or am I still not getting it?

comment by Johnicholas · 2009-11-11T21:00:39.307Z · score: 1 (1 votes) · LW(p) · GW(p)

Yes, it's possible to encode the nonconsequentialism or "nonutilitarianism" into the utility function. However, by doing so you're making the utility function inconvenient to work with. You can't simultaneously claim that the utility function is "simply" an encoding of people's preferences and ALSO that the utility function is convenient or preferable.

Then you go and approximate the (uglified) utility function! Put yourself in the virtue theorist's or Kantian's shoes. It certainly sounds to me like you're planning to discard their concerns regarding moral/ethical/correct behavior.

(Note: I don't actually understand virtue ethics at all, so I might be getting this entirely wrong.) Imagine the virtue ethicist saying "Your concerns can be encoded into the virtue of 'achieves a desirable goal', and will be included in our system along with the other virtues." Would you want to know WHY the system is being built with virtues at the bottom and consequentialism as an encoding? Would your questions make sense?

comment by Psy-Kosh · 2009-11-11T21:18:10.471Z · score: 0 (0 votes) · LW(p) · GW(p)

It's "convenient" in the sense of giving us a general way of talking about how to make decisions. It's "convenient" in that it is set up in such a way to encode not just what you prefer more than other stuff, but how much more, etc...

Lets us then also take advantage of whatever decision theory theorems have been proven, and so on...

As far as "virtue of achieving a desirable goal", "desirable", "virtue", and "achieving" would be doing all the heavy lifting there. :)

But really, my point was simply the original comment was stated in such a way as to imply "maximizing utility" was itself a moral philosophy, ie, the sort of thing that you could say "I consider that immoral, and instead care about personal virtue". I was simply saying "huh? utility stuff is just a way of talking about whatever values you happen to have. It's not, on its own, a specific set of values. It's like, I guess, saying 'what if I don't believe in math and instead believe in electromagnetism?'"

comment by AndrewKemendo · 2009-11-12T02:31:26.710Z · score: -1 (1 votes) · LW(p) · GW(p)

You'll have to forgive me, because I am an economist by training and mentions of utility have very specific references to Jeremy Bentham.

Your definition of what the term "maximizing utility" means and Bentham's definition (he was the originator) are significantly different. If you don't know what it is, then I will describe it (if you do, sorry for the redundancy).

Jeremy Bentham devised the felicific calculus, which is a hedonistic philosophy and seeks as its defining purpose to maximize happiness. He was of the opinion that it was possible in theory to create a literal formula which gives optimized preferences such that happiness is maximized for the individual. This is the foundation for all utilitarian ethics, as each seeks to essentially itemize all preferences.

Virtue ethics, for those who do not know, is the Aristotelian philosophy that posits: each sufficiently differentiated organism or object is naturally optimized for at least one specific purpose above all other purposes. Optimized decision making for a virtue theorist would be doing the things which best express or develop that specific purpose - similar to how specialty tools are best used for their specialty. Happiness is said to spring forth from this as a consequence, not as its goal.

I just want to know, if it is the case that he came to follow the former (Bentham) philosophy, how he came to that decision (theoretically it is possible to combine the two).

So in this case, while the term may give an approximation of the optimal decision, if used in that manner it is not explicit about how it determines the basis for the decision in the first place; that is, unless, as some have done, it is specified that maximizing happiness is the goal (which I had just assumed people were asserting implicitly anyhow).

comment by Psy-Kosh · 2009-11-12T04:03:59.891Z · score: 0 (0 votes) · LW(p) · GW(p)

Okay, I was talking about utility maximization in the decision theory sense. ie, computations of expected utility, etc etc...

As far as happiness being The One True Virtue, well, that's been explicitly addressed

Anyways, "maximize happiness above all else" is explicitly not it. And utility, as discussed on this site is a reference to the decision theoretic concept. It is not a specific moral theory at all.

Now, the stuff that we consider morality would include happiness as a term, but certainly not as the only thing.

Virtue ethics, as you describe it, gives me an "eeew" reaction, to be honest. It's the right thing to do simply because it's what you were optimized for?

If I somehow bioengineer some sort of sentient living weapon thing, is it actually the proper moral thing for that being to go around committing mass slaughter? After all, that's what it's "optimized for"...

comment by AndrewKemendo · 2009-11-12T06:37:13.287Z · score: 0 (0 votes) · LW(p) · GW(p)

As I replied to Tarleton, the Not for the Sake of Happiness (Alone) post does not address how he came to his conclusions based on specific decision-theoretic optimization. He gives very loose, subjective terms for his conclusions:

The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.

which is why I worded my question as I did the first time. I don't think he has done the same amount of thinking on his epistemology as he has on his TDT.

comment by AndrewKemendo · 2009-11-12T02:31:40.935Z · score: -1 (1 votes) · LW(p) · GW(p)

Thanks, I followed up below.

comment by Will_Euler · 2009-11-19T02:50:09.940Z · score: 0 (2 votes) · LW(p) · GW(p)

Let's say someone (today, given present technology) has the goal of achieving rational self-insight into one's thinking processes and the goal of being happy. You have suggested (in conversation) such a person might find himself in an "unhappy valley" insofar as he is not perfectly rational. If someone today -- using current hedonic/positive psychology -- undertakes a program to be as happy as possible, what role would rational self-insight play in that program?

comment by Will_Euler · 2009-11-19T02:01:28.734Z · score: 0 (0 votes) · LW(p) · GW(p)

How does the goal of acquiring self-knowledge (for current humans) relate to the goal of happiness (insofar as such a goal can be isolated)?

If one aimed to be as rational as possible, how would this help someone (today) become happy? You have suggested (in conversation) that there might be a tradeoff such that those who are not perfectly rational might exist in an "unhappy valley". Can you explain this phenomenon, including how one could find themselves in such a valley (and how they might get out)? How much is this term meant to indicate an analogy with an "uncanny valley"?

Less important, but related: What self-insights from hedonic/positive psychology have you found most revealing about people's ability to make choices aimed at maximizing happiness (e.g., limitations of affective forecasting, paradox of choice, impact of upward vs. downward counterfactual thinking on affect, mood induction and creativity/cognitive flexibility, etc.)?

Gambatte!

comment by zero_call · 2009-11-15T08:32:37.458Z · score: 0 (6 votes) · LW(p) · GW(p)

There seem to be two problems or components of the singularity program which are interchanged or conflated. Firstly, there is the goal of producing a GAI, say on the order of human intelligence (e.g., similar to the Data character from Star Trek). Secondly, there is the goal or belief that a GAI will be strongly self-improving, to the extent that it reaches super-human intelligence.

It is unclear to me that achieving the first goal means that the second goal is also achievable, or of a similar difficulty level. For example, I am inclined to think that we as humans constitute a sort of natural GAI, and yet, even if we fully understood the brain, it would not necessarily be clear how to optimize ourselves to super-human intelligence levels. As a crude analogy: just because an expert car mechanic completely understands how a car works, it doesn't mean that he can build another car which is fundamentally superior.

Succinctly: Why should we expect a computerized GAI to have a higher-order self-improvement function than we as humans? (I trust you will not trivialize the issue by saying, for example, that better memory and better speed equal better intelligence.)

comment by [deleted] · 2009-11-15T09:19:10.550Z · score: 3 (3 votes) · LW(p) · GW(p)

Eliezer's belief, as I recall, is that human intelligence is a relatively small and arbitrary point in the "intelligence hierarchy", i.e. relative to minds at large, the smartest human is not much smarter than the dumbest. If an AI's intelligence stops increasing somewhere, why would it just happen to stop within the human range?

comment by whpearson · 2009-11-15T11:33:20.408Z · score: 0 (0 votes) · LW(p) · GW(p)

I'd expect it iff we are copying the human design without understanding it fully (this approach seems to have the most traction at the moment in terms of full intelligence work).

On the other hand if we can say things like "The energy-usage density of the local universe is X thus by Blah's law we should set the exploration/exploitation parameter to Y", then all bets are off. I don't have much hope for this style of reasoning at the moment though.

There might be a law saying something like "a system can't develop a system more powerful than itself by anything other than chance". However I've noticed that we don't really like working with formalisations of power, and tend to stick to folk psychology notions of intelligence, with which you can do anything you want as they are not well defined. So no progress is being made.

comment by timtyler · 2009-11-15T11:49:31.940Z · score: 0 (2 votes) · LW(p) · GW(p)

Humans built Google. They did it by clubbing together. This seems like a powerful approach.

comment by whpearson · 2009-11-15T12:02:48.712Z · score: 0 (0 votes) · LW(p) · GW(p)

I meant powerful in the Eliezer sense of "ability to achieve its goals". All google is, is a manifestation of the power of the humans that built it (and maintain it) (and the links that webmasters have put up and craft to be google friendly) as it has no goals of its own.

Until we have built a common vocabulary (that cuts the world at its joints), most conversations will unfortunately be pointless.

comment by JamesAndrix · 2009-11-18T08:03:22.477Z · score: 3 (3 votes) · LW(p) · GW(p)

All google is, is a manifestation of the power of the humans that built it

No. If you take that approach then you'll just be saying that about every GAI, no matter how powerful. Google engineers cannot solve the problems that Google solves. They can't even hold the problem (which includes links between millions of websites) in their heads. They CAN hold in their heads the problem of creating something that can solve the problem. Within Google's domain, humans aren't even players.

Even allowing a human the time and notepaper and procedural knowledge to do what google does, that's not a human solving the same problem, that's a human implementing the abstract computation that is google.

Humans can and do generate optimization processes that are more powerful than themselves.

This may seem more harsh than I intend: I see your proposed law as just a privileged hypothesis, without any evidence, defending the notion that humans must somehow be special.

comment by timtyler · 2009-11-15T12:27:05.595Z · score: 0 (0 votes) · LW(p) · GW(p)

To spell things out - a problem with the idea of a law saying that "a system can't develop a system more powerful than itself by anything other than chance" is that it is pretty easy to do that.

Two humans can (fairly simply) make more humans, and then large groups of humans can have considerably more power than the original pair of humans did.

For example, no human can remember the whole internet and answer questions about its content - but a bunch of humans and their artefacts can do just that.

This is an example of synergy - the power of collective intelligence.

comment by whpearson · 2009-11-15T14:30:42.122Z · score: 1 (1 votes) · LW(p) · GW(p)

I can solve more problems when I have a hammer than when I don't; I can be synergistic with a hammer. You don't need other people for synergy. This just means that power depends upon the environment.

Let's talk about the power P of a system S being defined as a function P(S, E), with E being the environment. So when I am talking about something more powerful, I mean that for all E, P(S1, E) > P(S2, E) -- or at least for huge numbers of E, or on average. It is not sufficient to show a single case.
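The dominance criterion in the paragraph above (strict dominance over all environments, versus the weaker on-average version) can be sketched in code. The systems, environments, and the toy power function below are hypothetical, chosen purely for illustration:

```python
# Sketch of the P(S, E) dominance criterion. The "skills dict" power
# function and the hammer example are illustrative assumptions only.

def dominates(P, s1, s2, environments):
    """True iff s1 is strictly more powerful than s2 in EVERY environment."""
    return all(P(s1, e) > P(s2, e) for e in environments)

def dominates_on_average(P, s1, s2, environments):
    """Weaker criterion: s1 outperforms s2 averaged over environments."""
    n = len(environments)
    avg1 = sum(P(s1, e) for e in environments) / n
    avg2 = sum(P(s2, e) for e in environments) / n
    return avg1 > avg2

# Toy power function: a system's power in an environment is its skill at
# whatever that environment demands (0 if it has no such skill).
def P(system, env):
    return system.get(env, 0)

human = {"hammering": 5, "searching": 2}
human_with_hammer = {"hammering": 9, "searching": 2}
envs = ["hammering", "searching"]
```

Under this toy model, the hammer raises power in one environment but not all of them, so strict dominance fails even though the on-average comparison succeeds -- which is exactly why showing a single favourable case is not sufficient.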

I don't think that organizations of humans have a coherent goal structure, so they don't have a coherent power.

comment by timtyler · 2009-11-15T14:57:11.348Z · score: 1 (1 votes) · LW(p) · GW(p)

Why don't you think organizations have "coherent goals". They certainly claim to do so. For instance, Google claims it wants "to organize the world's information and make it universally accessible and useful". Its actions seems to be roughly consistent with that. What is the problem?

comment by whpearson · 2009-11-15T15:44:09.334Z · score: 1 (1 votes) · LW(p) · GW(p)

They really don't maximise that value... you'd get closer to the mark if you added in words like profit and executive pay.

But the main reason I don't think they have a coherent goal is that they may evaporate tomorrow. If the goal-seeking agents that make them up decide there is somewhere better to fulfill their goals, they can just up and leave, and the goal does not get fulfilled. An organisation has to balance the variety of goals of the agents inside it (which constantly change as it takes on new people) with its business goals, if it is to survive. Sometimes no one making up an organisation wants it to survive.

comment by timtyler · 2009-11-15T16:04:43.102Z · score: 1 (1 votes) · LW(p) · GW(p)

Organisms die as well as organisations. That doesn't mean they are not goal-directed.

Nor do organisms act entirely harmoniously. There are millions of bacterial symbionts inside every animal, who have their own reproductive ends. Their bodies are infected with pathogens, which make them sneeze, cough and scratch. Also, animals are uneasy coalitions of genes - some of which (e.g. segregation distorters) want to do other things besides helping the organism reproduce. So, if you rule out companies on those grounds, organisms seem unlikely to qualify either.

In practice, both organisms and companies are harmonious enough for goal-seeking models to work as reasonably good predictors of their behaviour.

comment by whpearson · 2009-11-15T16:13:59.002Z · score: 1 (1 votes) · LW(p) · GW(p)

If I want to predict what a company will do, I look at the board of directors/ceo/upper management/powerful unions, not the previous actions of the company. This allows me to predict if they will refocus the company on doing something new or sell it off to be gutted.

comment by timtyler · 2009-11-15T16:31:50.811Z · score: 1 (1 votes) · LW(p) · GW(p)

Companies are not the only agents which can be so dissected. You could similarly examine the brain of an animal - or examine the source code of a robot.

However, treating agents in a behavioural manner - as input-process-output black boxes - is a pretty conventional method of analysing their behaviour.

Sure, it has some disadvantages. If an organism is under attack by a pathogen and is near to death, its previous behaviour may not be an accurate predictor of its future actions. However, that is not usually the case - and there are corresponding advantages.

For example, you might not have access to the organism's internal state - in which case a "black box" analysis would be attractive.

Anyway, your objections don't look too serious to me. Companies typically behave in a highly goal-directed manner - broadly similar to the way in which organisms behave - and for similar reasons.

comment by whpearson · 2009-11-15T21:33:25.521Z · score: 0 (0 votes) · LW(p) · GW(p)

Yes, yes. It is all a continuum. That doesn't change the fact that I don't use the intentional stance on businesses. I view them as either a) designed inflexible systems for providing me services in exchange for money or b) systems modified by human actors for their own interests.

I'll very rarely say "google is trying to do X" or that "microsoft knows Y". I do do these things for humans and animals, in that respect I see a firm dividing line between them in the type of system I treat them as.

comment by timtyler · 2009-11-15T22:07:45.974Z · score: 2 (2 votes) · LW(p) · GW(p)

I think the human brain has a whole bunch of circuitry designed to understand other agents - as modified versions of yourself.

That circuitry can also be pushed into use for understanding the behaviour of organisations, companies and governments - since those systems have enough "agency" to make the analogy more fruitful than confusing.

My take on the issue is that this typically results in more insight for less effort.

Critics might say this is anthropomorphism - but IMO, it pays.

comment by zero_call · 2009-11-15T10:31:51.583Z · score: -4 (4 votes) · LW(p) · GW(p)

Like I said before -- a human is indistinguishable (at some level) from a GAI, and yet its intelligence stops increasing somewhere, namely, exactly within the human range. Inductive reasoning on this case implies that a true (computerized) GAI would face a similar barrier to its self-improvement function.

comment by anonym · 2009-11-15T18:39:40.140Z · score: 2 (2 votes) · LW(p) · GW(p)

Humans don't recursively self-improve and they don't have access to their source code (yet).

comment by zero_call · 2009-11-15T20:03:04.517Z · score: 0 (4 votes) · LW(p) · GW(p)

I disagree. Humans don't recursively self-improve? What about a master pianist? Do they just start out a master pianist or do they gradually improve their technique until they reach their mastery? Humans show extreme capability in the action of learning, and this is precisely analogous to what you call "recursive self-improvement".

Of course the source code isn't known now and I left that open in my statement. But even if the source code were completely understood, as I said befor