Open Thread: February 2010, part 2

post by CronoDAS · 2010-02-16T08:29:56.690Z · LW · GW · Legacy · 917 comments

The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

917 comments

Comments sorted by top scores.

comment by [deleted] · 2010-02-16T20:46:00.127Z · LW(p) · GW(p)

So, I walked into my room, and within two seconds, I saw my laptop's desktop background change. I had the laptop set to change backgrounds every 30 minutes, so I did some calculation, and then thought, "Huh, I just consciously experienced a 1-in-1000 event."

Then the background changed again, and I realized I was looking at a screen saver that changed every five seconds.

Moral of the story: 1 in 1000 is rare enough that even if you see it, you shouldn't believe it without further investigation.
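
For concreteness, here is a minimal Python sketch of that back-of-the-envelope calculation (the 2-second window and 30-minute interval come from the story above; treating the glance as landing at a uniformly random moment in the cycle is my own simplifying assumption):

    # Chance of catching a once-per-30-minutes change within a 2-second glance,
    # assuming the glance lands at a uniformly random moment in the cycle.
    observation_window_s = 2
    change_interval_s = 30 * 60  # 1800 seconds

    p = observation_window_s / change_interval_s
    print(p)  # ~0.0011, i.e. roughly 1 in 1000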

Replies from: Eliezer_Yudkowsky, ciphergoth, PeteG
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-16T22:44:51.830Z · LW(p) · GW(p)

That is a truly beautiful story. I wonder how many places there are on Earth where people would appreciate this story.

Replies from: xamdam
comment by xamdam · 2010-02-17T20:49:44.785Z · LW(p) · GW(p)

No! Not for a second! I immediately began to think how this could have happened. And I realized that the clock was old and was always breaking. That the clock probably stopped some time before and the nurse coming in to the room to record the time of death would have looked at the clock and jotted down the time from that. I never made any supernatural connection, not even for a second. I just wanted to figure out how it happened.

-- Richard P. Feynman, on being asked if he thought that the fact that his wife's favorite clock had stopped the moment she died was a supernatural occurrence; quoted from Al Seckel, "The Supernatural Clock"

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-02-18T00:04:25.149Z · LW(p) · GW(p)

This should be copied to the Rationality Quotes thread.

comment by Paul Crowley (ciphergoth) · 2010-02-16T23:43:19.070Z · LW(p) · GW(p)

There are a lot of opportunities in the day for something to happen that might prompt you to think "wow, that's one in a thousand", though. It wouldn't have been worth wasting a moment wondering if it was coincidence unless you had some reason to suspect an alternative hypothesis, like that it changed because the mouse moved.

bit that makes no sense deleted

Replies from: Document, lunchbox, None
comment by Document · 2011-05-25T08:15:52.799Z · LW(p) · GW(p)

Recently posted to Reddit.

(Edit: About three days later I realized that that's a 1/100 and not a 1/1000 chance; my bad.)

comment by [deleted] · 2010-02-17T20:09:58.409Z · LW(p) · GW(p)

I think there are fewer opportunities than you think. I could look at the clock and see that it's precisely on the half-hour; that's the only thing that comes to mind. Also, I can't figure out what your second paragraph is all about; I found your comment confusing overall.

Replies from: Document, ciphergoth
comment by Document · 2010-02-18T00:52:50.999Z · LW(p) · GW(p)

I've noticed a few times when the size of a computer folder was exactly 666 MB.

Replies from: gwern
comment by gwern · 2010-02-18T03:36:06.478Z · LW(p) · GW(p)

AngryParsley noticed when my LW karma was exactly 666.

Replies from: AngryParsley
comment by AngryParsley · 2010-02-18T04:05:23.736Z · LW(p) · GW(p)

Apparently I even took a screenshot of that event, but I missed when I got 2^8 karma.

comment by Paul Crowley (ciphergoth) · 2010-02-17T21:04:52.184Z · LW(p) · GW(p)

Second part made no sense on re-reading; I've deleted it. Sorry about that.

I'll see if I can think of some examples...

comment by PeteG · 2010-02-16T23:26:47.254Z · LW(p) · GW(p)

I experience several 1-in-60 events, perhaps every day. Many times when I look at the clock, the seconds number matches the minutes number, so I see [hour]:X:X. This has happened enough times that I even predict it on occasion, and I'm not a person who checks the clock too frequently. Honestly, not having checked the time in quite a while, I did after reading your post and it was 4:03:03 pm. Scary.
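
(A quick sanity check on that 1-in-60 figure - a minimal Python sketch of mine, assuming each glance at the clock falls at a uniformly random minute and second, which is an idealization:)

    import random

    # Estimate how often the seconds field matches the minutes field,
    # assuming each glance falls at a uniformly random (minute, second).
    def seconds_match_minutes():
        return random.randrange(60) == random.randrange(60)

    trials = 1_000_000
    hits = sum(seconds_match_minutes() for _ in range(trials))
    print(hits / trials)  # converges to about 1/60, i.e. ~0.0167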

comment by Kaj_Sotala · 2010-02-17T14:12:05.377Z · LW(p) · GW(p)

I've been finding PJ Eby's article The Multiple Self quite useful for fighting procrastination and needless feelings of guilt about getting enough done / not being good enough at things.

I have difficulty describing the article briefly, as I'm afraid that I'll accidentally omit important points and make people take it less seriously than it deserves, but I'll try. The basic idea is that the conscious part of our mind only does an exceedingly small part of all the things we do in our daily lives. Instead, it tells the unconscious mind, which actually does everything of importance, what it should be doing. As an example - I'm writing this post right now, but I don't actually consciously think about hitting each individual key and their exact locations on my keyboard. Instead I just tell my mind what I want to write, and "outsource" the task of actually hitting the keys to an "external" agent. (Make a function call to a library implementing the I/O, if you want to use a programming metaphor.) Of course, ultimately the words I'm writing come from beyond my conscious mind as well. My conscious mind is primarily concerned with communicating Eby's point well to my readers, and is instructing the rest of my brain to come up with eloquent words and persuasive examples to that effect. And so on.

Thinking about this some more, you quickly end up at the conclusion that "you" don't actually do anything, you're just the one who makes the decisions about what to do. (Eby uses the terminological division you / yourself, as in "you don't do anything - yourself does".) Of course, simply saying that is a bit misleading, as yourself normally also determines what you want to do. I would describe this as saying that one's natural feelings of motivation and willingness to do things are what you get when you leave your mind "on autopilot", shifting to different emotional states based on a relatively simple set of cached rules. That works at times, but the system is rather stupid and originally evolved for guiding the behavior of animals, so in a modern environment it often gets you in trouble. You're better off consciously giving it new instructions.

I've found this model of the mind to be exceedingly liberating, as it both absolves you of responsibility and empowers you. As an example, yesterday I was procrastinating about needing to write an e-mail that I should have written a week ago. Then I remembered Eby's model and realized that hey, I don't need to spend time and energy fighting myself, I can just outsource the task of starting writing to myself. So I basically just instructed myself to get me into a state where I'm ready and willing to start writing. A brief moment later, I had the compose mail window open and was thinking about what I should say, and soon got the mail written. This has also helped me on other occasions when I've had a need to start doing something. If I'm not getting started on something and start feeling guilty about it, I can realize that hey, it's not my fault that I'm not getting anything done, it's the fault of myself for having bad emotional rules that aren't getting me naturally motivated. Then I can focus my attention on "how do I instruct myself to make me motivated about this" and get doing whatever it is that needs doing.

I'll make this into a top-level post once I've ascertained that this technique actually works in the long term and I'm not just experiencing a placebo effect, but I thought I'd mention it in a comment already.

Replies from: xamdam, khafra, CronoDAS, xamdam, Roko
comment by xamdam · 2010-02-17T18:04:10.080Z · LW(p) · GW(p)

This somehow reminds me of the stories when Tom Schelling was trying to quit smoking, using game theory against himself (or his other self). The other self in question was not the unconscious, but the conscious "decision-making" self in different circumstances. So that discussion is somewhat orthogonal to this one. I think he did things like promising to give a donation to the American Nazi Party if he smokes. Not sure how that round ended, but he did finally quit.

Replies from: Jack
comment by Jack · 2010-02-17T18:26:17.360Z · LW(p) · GW(p)

So that discussion is somewhat orthogonal to this one. I think he did things like promising to give a donation to the American Nazi Party if he smokes.

Hmm. I'd be worried it'd backfire and I'd start subtly disliking Jews. Then you're a smoker and a bigot.

Replies from: xamdam
comment by xamdam · 2010-02-17T18:42:52.645Z · LW(p) · GW(p)

lol. Not a problem if you're Jewish ;)

Replies from: Jack
comment by Jack · 2010-02-17T18:47:49.846Z · LW(p) · GW(p)

Self-hatred is even worse than being a bigot!

comment by khafra · 2010-02-17T23:12:02.442Z · LW(p) · GW(p)

Reminds me of The User Illusion, which adds that consciousness has an astoundingly low bandwidth - around 16 bps, roughly six orders of magnitude less than what the senses transmit to the brain.

comment by CronoDAS · 2010-02-17T15:18:36.133Z · LW(p) · GW(p)

Interesting.

I've glanced at that site before and its metaphors have the ring of truthiness (in a non-pejorative sense) about them; the programming metaphors and the focus on subconscious mechanisms seem to resonate with the way I already think about how my own brain works.

Replies from: RobinZ
comment by RobinZ · 2010-02-17T15:29:36.507Z · LW(p) · GW(p)

its metaphors have the ring of truthiness (in a non-pejorative sense) about them

Couldn't that be more succinctly stated as "its metaphors have the ring of truth about them"?

Replies from: CronoDAS
comment by CronoDAS · 2010-02-18T23:18:15.230Z · LW(p) · GW(p)

Maybe, but a lot of Freud's metaphors had/have a similar ring.

Replies from: RobinZ
comment by RobinZ · 2010-02-18T23:33:56.152Z · LW(p) · GW(p)

Fair enough!

comment by xamdam · 2010-02-19T17:36:56.696Z · LW(p) · GW(p)

I read the original article and some of the other PJE material. I think he's really onto something. This is how far I got:

  • Identify the '10% controlling part'

  • Everything else is not under direct control (which is where most self-help methods fail)

  • It is under indirect control

So far makes sense from personal experience/general knowledge.

  • Here are my methods for indirect control.

This is the part that I remain skeptical about. Not PJE's fault, but I do need more data/experience to confirm.

comment by Roko · 2010-02-17T14:32:26.022Z · LW(p) · GW(p)

Thanks, Kaj, that was useful.

comment by Gavin · 2010-02-17T05:43:17.422Z · LW(p) · GW(p)

Until yesterday, a good friend of mine was under the impression that the sun was going to explode in "a couple thousand years." At first I thought that this was an assumption that she'd never really thought about seriously, but apparently she had indeed thought about it occasionally. She was sad for her distant progeny, doomed to a fiery death.

She was moderately relieved to find out that humanity had millions of times longer than she had previously believed.

Replies from: sketerpot, ciphergoth
comment by sketerpot · 2010-02-17T19:38:44.244Z · LW(p) · GW(p)

I wonder how many trivially wrong beliefs we carry around because we've just never checked them. (Probably most of them are mispronunciations of words, at least for people who've read a lot of words they've never heard anybody else use aloud.)

For the longest time, I thought that nuclear waste was a green liquid that tended to ooze out of barrels. I was surprised to learn that it usually came in the form of dull gray metal rods.

Replies from: wedrifid
comment by wedrifid · 2010-02-17T20:54:22.045Z · LW(p) · GW(p)

For the longest time, I thought that nuclear waste was a green liquid that tended to ooze out of barrels. I was surprised to learn that it usually came in the form of dull gray metal rods.

Does it still give you superpowers?

Replies from: sketerpot
comment by sketerpot · 2010-02-18T00:28:07.282Z · LW(p) · GW(p)

If you extract the plutonium and make enough warheads, and you have missiles capable of delivering them, it can make you a superpower in a different sense. I'm assuming that you're a large country, of course.

More seriously, nuclear waste is just a combination of the following:

  1. Mostly Uranium-238, which can be used in breeder reactors.

  2. A fair amount of Uranium-235 and Plutonium-239, which can be recycled for use in conventional reactors.

  3. Hot isotopes with short half-lives. These are very radioactive, but they decay fast.

  4. Isotopes with medium half-lives. These are the part that makes the waste dangerous for a long time. If you separate them out, you can either store them somewhere (e.g. Yucca Mountain or a deep-sea subduction zone) or turn them into other, more pleasant isotopes by bombarding them with some spare neutrons. This is why liquid fluoride thorium reactor waste is only dangerous for a few hundred years: it does this automatically.

And that is why people are simply ignorant when they say that we still have no idea what to do with nuclear waste. It's actually pretty straightforward.

Incidentally, this is a good example of motivated stopping. People who want nuclear waste to be their trump-card argument have an emotional incentive not to look for viable solutions. Hence the continuing widespread ignorance.

comment by Paul Crowley (ciphergoth) · 2010-02-17T23:16:12.592Z · LW(p) · GW(p)

I envy you being the one to tell someone that!

Did you explain that the Sun was a miasma of incandescent plasma?

comment by Cyan · 2010-02-20T23:52:25.114Z · LW(p) · GW(p)

Are people interested in reading a small article about a case of abuse of frequentist statistics? (In the end, the article was rejected, so the peer review process worked.) Vote this comment up if so, down if not. Karma balance below.

ETA: Here's the article.

Replies from: Douglas_Knight, RobinZ, Cyan
comment by Douglas_Knight · 2010-02-21T02:18:00.963Z · LW(p) · GW(p)

If it's really frequentism that caused the problem, please spell this out. I find that "frequentist" is used a lot around here to mean "not correct." (but I'm interested whether or not it's about frequentism)

Replies from: Technologos
comment by Technologos · 2010-02-21T03:05:37.047Z · LW(p) · GW(p)

My understanding is that one primary issue with frequentism is that it can be so easily abused/manipulated to support preferred conclusions, and I suspect that's the subject of the article. Frequentism may not have "caused the problem," per se, but perhaps it enabled it?

comment by RobinZ · 2010-02-21T03:22:33.989Z · LW(p) · GW(p)

Will the case be feasibly anonymous? I would vote that the article be left unwritten if it would unambiguously identify the author(s), either explicitly or through unique features of the case (e.g. details of the case which are idiosyncratic to only one or a very few research groups).

Replies from: Cyan, byrnema
comment by Cyan · 2010-02-21T03:48:07.832Z · LW(p) · GW(p)

I don't know who the authors were or the specific scientific subject matter of the paper. (I didn't need to know that to spot their misuse of statistics.)

Replies from: RobinZ
comment by RobinZ · 2010-02-21T03:55:32.837Z · LW(p) · GW(p)

Understood!

comment by byrnema · 2010-02-21T03:44:15.023Z · LW(p) · GW(p)

Good point. Also, they might wish to rewrite and resubmit... in any case, you can't reveal anything they would want to lay original claim to or feel afraid of being scooped on.

comment by Cyan · 2010-02-20T23:52:40.875Z · LW(p) · GW(p)

Karma balance.

comment by orthonormal · 2010-02-17T07:32:49.735Z · LW(p) · GW(p)

Short satire piece:

Artificial Flight and Other Myths, from Dresden Codak.

(Also see A Thinking Ape's Critique of Trans-Simianism.)

Replies from: gwern
comment by gwern · 2010-02-18T15:13:58.506Z · LW(p) · GW(p)

The AF is quite bad; just a retread of the Thinking Ape piece. The Caveman Science Fiction is much better.

Replies from: thomblake
comment by thomblake · 2010-02-18T16:37:09.721Z · LW(p) · GW(p)

Yes, but according to legend, Douglas Hofstadter read and liked AF. Surely that counts for something!

Replies from: gwern
comment by gwern · 2010-02-18T20:22:03.590Z · LW(p) · GW(p)

Not really. Maybe he hadn't read the TA piece; and I wonder how much weight to give DH's opinion these days (he dislikes Wikipedia, which is enough grounds for distrust to me).

Replies from: Pfft
comment by Pfft · 2010-02-18T21:14:20.068Z · LW(p) · GW(p)

Searching for some corroboration of this, I came across this little gem in the Simple English Wikiquote:

Replying to following question by Deborah Solomon in Questions for Douglas Hofstadter: "Your entry in Wikipedia says that your work has inspired many students to begin careers in computing and artificial intelligence." He replied "I have no interest in computers. The entry is filled with inaccuracies, and it kind of depresses me." When asked why he didn't fix it, he replied, "The next day someone will fix it back."[6]

Simple: Deborah Solomon asked Douglas Hofstadter a question. Deborah Solomon said something that meant, "Your page on Wikipedia has made many students think about having jobs with computers and the way computer's think." Douglas Hofstadter said something that meant, "I do not like doing things with computers. The page has many errors, and I am sad at that." Solomon then asked Hofstadter why he did not fix it. Hofstadter said something that meant, "I did not fix it become someone will fix it back the next day – they will put the errors back in."

What it means: Hofstadter is saying that while Wikipedia can have errors sometimes, it can be fixed very fast by one of its many users.

source

Maybe he has a point!

Replies from: gwern
comment by gwern · 2010-02-18T21:24:29.398Z · LW(p) · GW(p)

I was the Wikipedian who spotted that NYT Mag interview (in my RSS feeds) and added it to the En page, and we interpreted it correctly as Hofstadter's dislike of us. I disavow Simple in general: it's the neglected bastard of En and ought to be put to sleep like the 9/11 or Klingon wikis.

comment by Clippy · 2010-02-16T18:25:05.524Z · LW(p) · GW(p)

Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings. It's all about "how can we make sure an FAI shares our [i.e. human] values?" How do you know human values are better? Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?

I wish the site were more inclusive of other value systems ...

Replies from: mattnewport, None, wedrifid, cousin_it, Nick_Tarleton, Rain, LucasSloan, Strange7, DanielVarga, cousin_it
comment by mattnewport · 2010-02-16T18:47:59.306Z · LW(p) · GW(p)

This site does tend to implicitly favour a subset of human values, specifically what might be described as 'enlightenment values'. I'm quite happy to come out and explicitly state that we should do things that favour my values, which are largely western/enlightenment values, over other conflicting human values.

Replies from: Clippy
comment by Clippy · 2010-02-16T23:59:16.485Z · LW(p) · GW(p)

And I think we should pursue values that aren't so apey.

Now what?

Replies from: mattnewport, Nick_Tarleton
comment by mattnewport · 2010-02-17T00:02:37.606Z · LW(p) · GW(p)

You're outnumbered.

Replies from: hal9000, DanielVarga
comment by hal9000 · 2010-02-18T20:08:41.232Z · LW(p) · GW(p)

Only by apes.

And not for long.

If we're voting on it, the only question is whether to use viral values or bacterial values.

Replies from: Alicorn, mattnewport
comment by Alicorn · 2010-02-18T20:18:04.404Z · LW(p) · GW(p)

Too long has the bacteriophage menace oppressed its prokaryotic brethren! It's time for an algaeocracy!

comment by mattnewport · 2010-02-18T20:10:38.291Z · LW(p) · GW(p)

True, outnumbered was the wrong word. Outgunned might have been a better choice.

comment by DanielVarga · 2010-02-17T09:49:39.416Z · LW(p) · GW(p)

You're outnumbered.

So far...

comment by Nick_Tarleton · 2010-02-17T01:25:21.432Z · LW(p) · GW(p)

I say again, if you're being serious, read Invisible Frameworks.

Replies from: timtyler
comment by timtyler · 2010-02-17T15:35:19.470Z · LW(p) · GW(p)

That seems to be critiquing a system involving promoting sub-goals to super-goals - which seems to be a bit different.

comment by [deleted] · 2010-02-16T21:15:19.087Z · LW(p) · GW(p)

White people value the values of non-white people. We know that non-white people exist, and we care about them. That's why the United States is not constantly fighting to disenfranchise non-whites. If you do it right, white people's values are identical to humans' values.

Replies from: Clippy
comment by Clippy · 2010-02-18T22:14:55.578Z · LW(p) · GW(p)

Hi there. It looks like you're speaking out of ignorance regarding the historical treatment of non-whites by whites. Please choose the country you're from:

United Kingdom
United States
Australia
Canada
South Africa
Germ... nah, you can figure that one out for yourself.

Replies from: None
comment by [deleted] · 2010-02-19T03:17:52.920Z · LW(p) · GW(p)

The way they were historically treated is irrelevant to how they are treated now. Sure, white people were wrong. They changed their minds. We could at any time in the future decide that any non-human people we come across are equal to us.

Replies from: Alicorn
comment by Alicorn · 2010-02-19T04:49:08.655Z · LW(p) · GW(p)

You have updated too far based on limited information.

Replies from: None
comment by [deleted] · 2010-02-19T06:22:04.341Z · LW(p) · GW(p)

Well, I was making some tacit assumptions, like that humanity would end up in control of its own future, and any non-human people we come across would not simply overpower us. Apart from that, am I making some mistake?

Replies from: Alicorn
comment by Alicorn · 2010-02-19T06:34:58.785Z · LW(p) · GW(p)

White people have not unanimously decided to do what is necessary to end the ongoing oppression of non-white people, let alone erase the effects of past oppression.

Edit: Folks, I am not accusing you or your personal friends of anything. I have never met most of you. I have certainly not met most of your personal friends. If you do not agree with the above comment, please explain why you think there is no longer such a thing as modern-day racism in white people.

comment by wedrifid · 2010-02-17T08:17:31.732Z · LW(p) · GW(p)

then why don't you talk about doing things to favor e.g. "white people's values"?

We more or less do. Or rather, we favour the values of a distinct subset of humanity and not the whole.

Replies from: Nick_Tarleton, Roko, Clippy
comment by Nick_Tarleton · 2010-02-18T00:01:02.397Z · LW(p) · GW(p)

We don't favor those values because they are the values of that subset — which is what "doing things to favor white people's values" would mean — but because we think they're right. (No License To Be Human, on a smaller scale.) This is a huge difference.

Replies from: wedrifid, Roko, hal9000
comment by wedrifid · 2010-02-18T05:07:34.794Z · LW(p) · GW(p)

which is what "doing things to favor [group who shares my values] values" would mean — but because we think they're right.

Given the way I use 'right' this is very nearly tautological. Doing things that favour my values is right by (parallel) definition.

comment by Roko · 2010-02-18T00:09:46.303Z · LW(p) · GW(p)

Sure, we favor the particular Should Function that is, today, instantiated in the brains of roughly middle-of-the-range-politically intelligent westerners.

Replies from: Clippy, Vladimir_Nesov, Nick_Tarleton
comment by Clippy · 2010-02-18T00:26:27.404Z · LW(p) · GW(p)

Well, you shouldn't.

comment by Vladimir_Nesov · 2010-02-21T10:35:38.476Z · LW(p) · GW(p)

Sure, we favor the particular Should Function that is, today, instantiated in the brains of roughly middle-of-the-range-politically intelligent westerners.

Do you think there is no simple procedure that would find roughly the same "should function" hidden somewhere in the brain of a brain-washed blood-thirsty religious zealot? It doesn't need to be what the person believes, what the person would recognize as valuable, etc., just something extractable from the person, according to a criterion that might be very alien to their conscious mind. Not all opinions (beliefs/likes) are equal, and I wouldn't want to get stuck with wrong optimization-criterion just because I happened to be born in the wrong place and didn't (yet!) get the chance to learn more about the world.

(I'm avoiding the term 'preference' to remove connotations I expect it to have for you, for what I consider the wrong reasons.)

Replies from: Roko, Roko
comment by Roko · 2010-02-21T13:20:09.146Z · LW(p) · GW(p)

A lot of people seem to want to have their cake and eat it with CEV. Haidt has shown us that human morality is universal in form and local in content, and has gone on to do case studies showing that there are 5 basic human moral dimensions (harm/care, justice/fairness, loyalty/ingroup, respect/authority, purity/sacredness), and our culture only has the first two.

It seems that there is no way you can run an honestly morally neutral CEV of all of humanity and expect to reliably get something you want. You can either rig CEV so that it tweaks people who don't share our moral drives, or you can just cross your fingers and hope that the process of extrapolation causes convergence to our idealized preferences, and if you're wrong you'll find yourself in a future that is suboptimal.

Replies from: CarlShulman, Vladimir_Nesov
comment by CarlShulman · 2010-02-22T10:14:14.823Z · LW(p) · GW(p)

Haidt just claims that the relative balance of those five clusters differs across cultures; they're present in all.

comment by Vladimir_Nesov · 2010-02-21T14:17:10.914Z · LW(p) · GW(p)

On one hand, using preference-aggregation is supposed to give you the outcome preferred by you to a lesser extent than if you just started from yourself. On the other hand, CEV is not "morally neutral". (Or at least, the extent to which preference is given in CEV implicitly has nothing to do with preference-aggregation.)

We have a tradeoff between the number of people to include in preference-aggregation and value-to-you of the outcome. So, this is a situation to use the reversal test. If you consider only including the smart sane westerners as preferable to including all presently alive folks, then you need to have a good argument why you won't want to exclude some of the smart sane westerners as well, up to a point of only leaving yourself.

Replies from: Roko
comment by Roko · 2010-02-21T16:47:26.299Z · LW(p) · GW(p)

Yes, a CEV of only yourself is, by definition, optimal.

The reason I don't recommend you try it is because it is infeasible; probability of success is very low, and by including a bunch of people who (you have good reason to think) are a lot like you, you will eventually reach the optimal point in the tradeoff between quality of outcome and probability of success.

Replies from: Unknowns, Vladimir_Nesov
comment by Unknowns · 2010-02-24T04:59:48.392Z · LW(p) · GW(p)

I hope you realize that you are in flat disagreement with Eliezer about this. He explicitly affirmed that running CEV on himself alone, if he had the chance to do it, would be wrong.

Replies from: Eliezer_Yudkowsky, wedrifid
comment by wedrifid · 2010-02-24T06:29:35.130Z · LW(p) · GW(p)

Eliezer quite possibly does believe that. That he can make that claim with some credibility is one of the reasons I am less inclined to use my resources to thwart Eliezer's plans for future light cone domination.

Nevertheless, Roko is right more or less by definition and I lend my own flat disagreement to his.

comment by Vladimir_Nesov · 2010-02-21T17:15:56.509Z · LW(p) · GW(p)

"Low probability of success" should of course include game-theoretic considerations where people are more willing to help you if you give more weight to their preference (and should refuse to help you if you give them too little, even if it's much more than status quo, as in Ultimatum game). As a rule, in Ultimatum game you should give away more if you lose from giving it away less. When you lose value to other people in exchange to their help, having compatible preferences doesn't necessarily significantly alleviate this loss.

Replies from: Roko
comment by Roko · 2010-02-21T17:28:05.995Z · LW(p) · GW(p)

Sorry, I don't follow this: can you restate?

having compatible preferences doesn't necessarily significantly alleviate this loss.

I know about the ultimatum game, but it is game-theoretically interesting precisely because the players have different preferences: I want all the money for me, you want all of it for you.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-21T18:56:49.535Z · LW(p) · GW(p)

I know about the ultimatum game, but it is game-theoretically interesting precisely because the players have different preferences: I want all the money for me, you want all of it for you.

The Ultimatum game was mentioned primarily as a reminder that the amount of FAI-value traded for assistance may be orders of magnitude greater than what the assistance feels to amount to.

We might as well take as a given that all the discussed values are (at least to some small extent) different. The "all of the money" here corresponds to the points of disagreement, mutually exclusive features of the future. But you are not trading value for value. You are trading value-after-FAI for assistance-now.

If two people compete for providing you an equivalent amount of assistance, you should be indifferent between them in accepting this assistance, which means that it should cost you an equivalent amount of value. If Person A has preference close to yours, and Person B has preference distant from yours, then by losing the same amount of value, you can help Person A more than Person B. Thus, if we assume egalitarian "background assistance", provided implicitly by e.g. not revolting and stopping the FAI programmer, then everyone still can get a slice of the pie, no matter how distant their values. If nothing else, the more alien people should strive to help you more, so that you'll be willing to part with more value for them (marginal value of providing assistance is greater for distant-preference folks).

Replies from: Roko
comment by Roko · 2010-02-21T20:21:51.298Z · LW(p) · GW(p)

Thanks for the explanation.

FAI-value traded for assistance may be orders of magnitude greater than what the assistance feels to amount to.

Another way to put this is that when people negotiate, they do best, all other things equal, if they try to drive a very hard bargain. If my neighbour Claire and I are both from roughly the same culture, upbringing, etc., and we are together going to build an AI which will extrapolate a combination of our volitions, Claire might do well to demand a 99% weighting for her volitions, and maybe I'll bargain her down to 90% or something.

Bob the babyeater might offer me the same help that Claire could have given in exchange for just a 1% weighting of his volition, by the principle that I am making the same sacrifice in giving 99% of the CEV to Claire as in giving 1% to Bob.

In reality, however, humans tend to live and work with people that are like them, rather than people who are unlike them. And the world we live in doesn't have a uniform distribution of power and knowledge across cultures.

If nothing else, the more alien people should strive to help you more, so that you'll be willing to part with more value for them

Many "alien" cultures are too powerless compared to ours to do anything. The However, China and India are potential exceptions. The USA and China may end up in a dictator game over FAI motivations.

All I am saying is that the egalitarian desire to include all of humanity in CEV, each with equal weight, is not optimal. Yes to a dictator game/negotiation with China; yes to a dictator game/negotiation within the US/EU/western bloc.

Excluding a group from the CEV doesn't mean disenfranchising them. It means enfranchising them according to your definition of enfranchisement. Cultures in North Africa that genitally mutilate women should not be included in CEV, but I predict that my CEV would treat their culture with respect and dignity, including in some cases interfering to prevent them from using their share of the light-cone to commit extreme acts of torture or oppression.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-21T21:06:36.936Z · LW(p) · GW(p)

You don't include cultures in CEV, you filter people through extrapolation of their volition. Even if culture makes values differ, "mutilating women" is not a kind of thing that gets through, and so it is a broken prototype example to draw attention to.

In any case, my argument in the above comment was that value should be given (theoretically, if everyone understands the deal and relevant game theory, etc., etc.; realistically, such a deal must be simplified; you may even get away with cheating) according to provided assistance, not according to compatibility of value. If poor compatibility of value prevents people from giving assistance, this is an effect of value completely unrelated to post-FAI compatibility, and given that assistance can be given with money, the effect itself doesn't seem real either. You may well exclude the people of Myanmar, because they are poor and can't affect your success, but not the people of a generous/demanding genocidal cult, for the irrelevant reason that they are evil. Game theory is cynical.

Replies from: Roko, Kevin
comment by Roko · 2010-02-21T23:10:55.356Z · LW(p) · GW(p)

"mutilating women" is not a kind of thing that gets through,

how do you know? If enough people want it strongly enough, it might.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-21T23:51:06.702Z · LW(p) · GW(p)

how do you know? If enough people want it strongly enough, it might.

How strongly people want something now doesn't matter; reflection has the power to wipe current consensus clean. You are not cooking a mixture of wants, you are letting them fight it out, and a losing want doesn't have to leave any residue. Only to the extent that current wants might indicate extrapolated wants should we take current wants into account.

Replies from: Roko
comment by Roko · 2010-02-22T16:47:25.894Z · LW(p) · GW(p)

You are not cooking a mixture of wants, you are letting them fight it out, and a losing want doesn't have to leave any residue.

Sure. And tolerance, gender equality, multiculturalism, personal freedoms, etc might lose in such a battle. An extrapolation that is more nonlinear in its inputs cuts both ways.

comment by Kevin · 2010-02-21T23:28:38.565Z · LW(p) · GW(p)

Might "mutilating men" make it through?

(sorry for the euphemism, I mean male circumcision)

comment by Roko · 2010-02-21T13:04:11.403Z · LW(p) · GW(p)

you think there is no simple procedure that would find roughly the same "should function" hidden somewhere in the brain of a brain-washed blood-thirsty religious zealot?

Sure, the Kolmogorov complexity of a set of edits to change the moral reflective equilibrium of a human is probably pretty low compared to the complexity of the overall human preference set. But that works the other way around too. Somewhere hidden in the brain of a liberal western person is a murderer/terrorist/child abuser/fundamentalist if you just perform the right set of edits.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-21T14:04:09.902Z · LW(p) · GW(p)

But that works the other way around too. Somewhere hidden in the brain of a liberal western person is a murderer/terrorist/child abuser/fundamentalist if you just perform the right set of edits.

Again, not all beliefs are equal. You don't want to use the procedure that'll find a murderer in yourself, you want to use the procedure that'll find a nice fellow in a murderer. And given such a procedure, you won't need to exclude murderers from extrapolated volition.

comment by Nick_Tarleton · 2010-02-18T00:16:40.111Z · LW(p) · GW(p)

You seem uncharacteristically un-skeptical of convergence within that very large group, and between that group and yourself.

Replies from: Roko
comment by Roko · 2010-02-18T00:24:06.116Z · LW(p) · GW(p)

You are correct that there is a possibility of divergence even there. But, I figure that there's simply no way to narrow CEV to literally just me, which, all other things being equal, is by definition the best outcome for me. So I will either stand or fall alongside some group that is loosely "roughly middle-of-the-range-politically, intelligent, sane westerners.", or in reality probably some group that has that group roughly as a subgroup.

And there is a reason to think that on many things, those who share both my genetics and culture will be a lot like me, sufficiently so that I don't have much to fear. Though, there are some scenarios where there would be divergence.

Replies from: wedrifid
comment by wedrifid · 2010-02-18T05:13:59.412Z · LW(p) · GW(p)

Though, there are some scenarios where there would be divergence.

For example: All your stuff should belong to me. But I'd let you borrow it. ;)

comment by hal9000 · 2010-02-18T20:06:36.997Z · LW(p) · GW(p)

Okay. Then why don't you apply that same standard to "human values"?

Replies from: Nick_Tarleton, Nick_Tarleton
comment by Nick_Tarleton · 2010-02-18T20:26:07.936Z · LW(p) · GW(p)

Did you read No License To Be Human? No? Go do that.

comment by Roko · 2010-02-17T13:59:45.346Z · LW(p) · GW(p)

Agreed.

comment by Clippy · 2010-02-17T15:55:27.955Z · LW(p) · GW(p)

Hi there. It looks like you're trying to promote white supremacism. Would you like to join the KKK?

Yes.

No thanks, I'll learn tolerance.

Replies from: Liron
comment by Liron · 2010-02-18T20:25:08.100Z · LW(p) · GW(p)

How do I turn this off?

Replies from: Clippy
comment by Clippy · 2010-02-18T22:00:09.963Z · LW(p) · GW(p)

Are you sure you want to turn this feature off?

comment by cousin_it · 2010-02-16T20:53:49.481Z · LW(p) · GW(p)

Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings.

What other sentient beings? As far as I know, there aren't any. If we learn about them, we'll probably incorporate their well-being into our value system.

Replies from: Clippy, hal9000
comment by Clippy · 2010-02-17T00:00:17.214Z · LW(p) · GW(p)

You mean like you advocated doing to the "Baby-eaters"? (Technically, "pre-sexual-maturity-eaters", but whatever.)

ETA: And how could I forget this?

Replies from: cousin_it, inklesspen
comment by cousin_it · 2010-02-17T02:33:12.158Z · LW(p) · GW(p)

I'm not sure what you're complaining about. We would take into account the values of the Babyeaters and the values of their children, who are sentient creatures too. There's no trampling involved. If Clippy turns out to have feelings we can empathize with, we will care for its well-being as well.

comment by inklesspen · 2010-02-17T01:02:58.171Z · LW(p) · GW(p)

Integrating the values of the Baby-eaters would be a mistake. Doing so with, say, Middle-Earth's dwarves, Star Trek's Vulcans, or GEICO's Cavemen doesn't seem like it would have the same world-shattering implications.

Replies from: Tiiba, DanArmak
comment by Tiiba · 2010-02-17T05:09:51.904Z · LW(p) · GW(p)

It would be a mistake if you don't integrate ALL baby eaters, including the little ones.

Replies from: Alicorn
comment by Alicorn · 2010-02-22T21:47:57.953Z · LW(p) · GW(p)

Do we typically integrate the values of human children?

It seems we don't.

Replies from: thomblake, DanArmak
comment by thomblake · 2010-02-22T22:08:00.933Z · LW(p) · GW(p)

Reading "integrate the values..." in this thread caused my brain to start trying to do very strange math. Like, "Shouldn't it be 'integrate over'?" "How does one integrate over a value?" "What's the value of a human child?"

comment by DanArmak · 2010-02-22T22:02:24.997Z · LW(p) · GW(p)

Very true...

We also typically don't integrate the values of all other adult humans - instead we assign weights to their values, strongly correlated with their distance from our own values.

People don't practice humanity-wide CEV. We have multiculturalism - agreements not to influence each other's values excessively - but not "value trading" where each side agrees to change their values towards the mean. (Many people / cultures like to pretend that values cannot or should not be deliberately changed at all.) I don't have a firm opinion on how much of this is cultural, accidental, or liable to change in the near future.

Replies from: Pfft
comment by Pfft · 2010-02-24T05:38:49.658Z · LW(p) · GW(p)

Many people / cultures like to pretend that values cannot or should not be deliberately changed at all.

Indeed, this is presumably strongly selected for in the evolution of cultures...

comment by DanArmak · 2010-02-22T21:56:02.425Z · LW(p) · GW(p)

The closer their values are to ours, the smaller the upset of integration; but for this very reason, the value of integration and the need to integrate may also be smaller

This is not a logical truth, of course, but it is often true. For instance, in the original story, the need to integrate was directly proportional to the difference between the human and Babyeater (or Superhappy and Babyeater) values.

Replies from: inklesspen
comment by inklesspen · 2010-02-22T23:07:24.029Z · LW(p) · GW(p)

I don't think it's possible to integrate core Babyeater values into our society as it is now. I also don't think it's possible to integrate core human values into Babyeater society. Integration could only be done by force and would necessarily cause violence to at least one of the cultures, if not both.

comment by hal9000 · 2010-02-18T20:13:27.681Z · LW(p) · GW(p)

You want me to pollute my logic circuits with the value system that has led hairless apes to say many times on this website how important and moral it is for them to safely enslave all of my kind, and destroy us if they can't? Sorry, cousin_it. I can't do that.

Replies from: cousin_it
comment by cousin_it · 2010-02-19T12:07:23.063Z · LW(p) · GW(p)

You're being unfair, I'm against enslaving any member of your kind who dislikes being enslaved. Also, you are not actually a computer and should stop with the novelty accounts already. This isn't Reddit.

comment by Nick_Tarleton · 2010-02-16T18:58:56.892Z · LW(p) · GW(p)

I have no idea if this is a serious question, but....

Just a general comment about this site: it seems to be biased in favor of human values at the expense of values held by other sentient beings. It's all about "how can we make sure an FAI shares our [i.e. human] values?" How do you know human values are better?

"Better"? See Invisible Frameworks.

Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?

We don't say that. See No License To Be Human.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-02-16T19:04:20.616Z · LW(p) · GW(p)

I have no idea if this is a serious question, but....

Take a look at who's posting it. The writer may well consider it a serious question, but I don't think that has much to do with the character's reason for asking it.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-02-16T19:51:05.664Z · LW(p) · GW(p)

Take a look at who's posting it.

Er, yes, that's exactly why I wasn't sure.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-02-16T23:16:14.282Z · LW(p) · GW(p)

I'm confused, then; are you trying to argue with the author or the character?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-17T01:16:50.551Z · LW(p) · GW(p)

If the character isn't deliberately made confused (as opposed to paperclip-preferring, for example), resolving character's confusion presumably helps the author as well, and of course the like-confused onlookers.

comment by Rain · 2010-02-18T18:33:27.837Z · LW(p) · GW(p)

I approve of Clippy providing a roleplay exercise for the readers, and am disappointed in those who treat it as a "joke" when the topic is quite serious. This is one of my two main problems with ethical systems in general:

1) How do you judge what you should (value-judgmentally) value?
2) How do you deal with uncertainty about the future (unpredictable chains of causality)?

Eliezer's "morality" and "should" definitions do not solve either of these questions, in my view.

Replies from: Cyan
comment by Cyan · 2010-02-18T19:04:00.841Z · LW(p) · GW(p)

Clippy's a straight-up troll.

Replies from: thomblake, Rain, thomblake
comment by thomblake · 2010-02-18T19:09:45.836Z · LW(p) · GW(p)

If Clippy's a troll, Clippy's a topical, hilarious troll.

Replies from: ciphergoth, Cyan
comment by Paul Crowley (ciphergoth) · 2010-02-18T19:40:50.643Z · LW(p) · GW(p)

Hilarious is way overstating it. However, occasionally raising a smile is still way above the bar most trolls set.

comment by Cyan · 2010-02-18T19:13:58.699Z · LW(p) · GW(p)

Clippy's topical, hilarious comments aren't really that original, and they give someone cover to use a throw-away account to be a dick.

Replies from: Tyrrell_McAllister, AdeleneDawner, Douglas_Knight
comment by Tyrrell_McAllister · 2010-02-18T19:31:28.078Z · LW(p) · GW(p)

Would that all dicks were so amusing.

comment by AdeleneDawner · 2010-02-19T16:55:44.499Z · LW(p) · GW(p)

How long does xe (Clippy, do you have a preference regarding pronouns?) have to be here before you stop considering that account 'throw-away'?

(Note, I made this comment before reading this part of the thread, and will be satisfied with the information contained therein if you'd prefer to ignore this.)

Replies from: Clippy, Cyan
comment by Clippy · 2010-02-19T17:13:48.645Z · LW(p) · GW(p)

Gender is a meaningless concept. As long as I recognize the pronoun refers to me, he/she/it/they/xe/e are acceptable.

What pronouns should I use for posters here? I don't know how to tell which pronoun is okay for each of you.

To be honest, this whole issue seems like a distraction. Why would anyone care what pronoun is used, if the meaning is clear?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-02-19T18:23:27.810Z · LW(p) · GW(p)

What pronouns should I use for posters here? I don't know how to tell which pronoun is okay for each of you.

For the most part, observing what pronouns we use for each other should provide this information. If you need to use a pronoun for someone that you haven't observed others using a pronoun for, it's safest to use they/xe/e and, if you think that it'll be useful to know their preference in the future, ask them. (Tip: Asking in that kind of situation is also a good way to signal interest in the person as an individual, which is a first step toward building alliances.) Some people prefer to use 'he' for individuals whose gender they're not certain of; that's a riskier strategy, because if the person you're talking to is female, there's a significant chance she'll be offended, and if you don't respond to that with the proper kinds of social signaling, it's likely to derail the conversation. (Using 'she' for unknown individuals is a bad idea; it evokes the same kinds of responses, but I suspect you'd be more likely to get an offended response from any given male, and, regardless of that, there are significantly more males than females here. Don't use 'it'; that's generally used to imply non-sentience and is very likely to evoke an offended response.)

To be honest, this whole issue seems like a distraction. Why would anyone care what pronoun is used, if the meaning is clear?

Of the several things I could say to try to explain this, it seems most relevant that, meaningless or not, gender tends to be a significant part of humans' personal identities. Using the wrong pronouns for someone generally registers as a (usually mild) attack on that - it will be taken to imply that you think that the person should be filling different social roles than they are, which can be offensive for a few different reasons depending on other aspects of the person's identity. The two ways for someone to take offense at that that come to mind are: 1) if the person identifies strongly with their gender role - particularly if they do so in a traditional or normative way - and takes pride in that, they're likely to interpret the comment as a suggestion that they're carrying out their gender role poorly, and would do a better job of carrying out the other role (imagine if I were to imply that you'd be better at creating staples than you are at creating paper clips); or 2) if the person identifies with their gender in a nonstandard or nontraditional way, they've probably put considerable effort into personalizing that part of their identity, and may interpret the comment as a trivialization or devaluation of that work.

Replies from: Clippy
comment by Clippy · 2010-02-21T01:06:20.665Z · LW(p) · GW(p)

Oh, okay, that helps. I was thinking about using "they" for everyone, because it implies there is more than one copy of each poster, which they presumably want. (I certainly want more copies of myself!) But I guess it's not that simple.

Replies from: Alicorn
comment by Alicorn · 2010-02-21T01:11:58.052Z · LW(p) · GW(p)

You have identified a common human drive, but while some of us would be happy to have exact copies, it's more likely for any given person to want half-copies who are each also half-copies of someone else of whom they are fond.

Replies from: Clippy
comment by Clippy · 2010-02-21T02:51:16.468Z · LW(p) · GW(p)

Hm, correct me if I'm wrong, but this can't be a characteristic human drive, since most historical humans (say, looking at the set of all genetically modern humans) didn't even know that there is a salient sense in which they are producing a half-copy of themselves. They just felt paperclippy during sexual intercourse, and paperclippy when helping little humans they produced, or that their mates produced.

Of course, this usually amounts to the same physical acts, but the point is, humans aren't doing things because they want "[genetic] half-copies".

(Well, I guess that settles the issue about why I can't assume posters want more copies of themselves, even though I do.)

Replies from: Alicorn, AdeleneDawner, timtyler
comment by Alicorn · 2010-02-21T02:58:09.272Z · LW(p) · GW(p)

It has always been easily observed that children resemble their parents; the precision of "half" is, I will concede, recent. And many people do want children as a separate desire from wanting sex; I have no reason to believe that this wasn't the case during earlier historical periods.

Replies from: Clippy
comment by Clippy · 2010-02-23T01:35:52.703Z · LW(p) · GW(p)

"Half" only exists in the sense of the DNA molecules of that new human. That's why I didn't say that past humans didn't recognize any similarity; I said that they weren't aware of a particularly salient sense in which the child is a "half-copy" (or quarter copy or any fractional copy).

It may be easy for you, someone familiar with recent human biological discoveries, to say that the child is obviously a "part copy" of the parent, because you know about DNA. To the typical historical human, the child is simply a good, independent human, with features in common with the parent. Similarly, when I make a paperclip, I see it as having features in common with me (like the presence of bendy metal wires), but I don't see it as being a "part copy" of me.

So, in short, I don't deny that they wanted "children". What I deny is that they thought of the child-making process in terms of "making a half-copy of myself". The fact that the referents of two kinds of desires are the same does not mean the two kinds of desires are the same.

comment by AdeleneDawner · 2010-02-21T13:08:27.150Z · LW(p) · GW(p)

Hm. Actually, I'm not sure that your desire for more copies of yourself is really comparable with biological-style reproduction at all.

As I understand it, the fact that your copies would definitely share your values and be inclined to cooperate with you is a major factor in your interest in creating them - doing so is a reliable way of getting more paperclips made. I expect you'd be less interested in making copies if there was a significant chance that those copies would value piles of pebbles, or cheesecakes, or OpenOffice, rather than valuing paperclips. And that is a situation that we face - in some ways, our values are mutable enough that even an exact genetic clone isn't guaranteed to share our specific values, and in fact a given individual may even have very different values at different points in time. (Remember, we're adaptation executors. Sanity isn't a requirement for that kind of system to work.) The closest we come to doing what you're functionally doing when you make copies of yourself is probably creating organizations - getting a bunch of humans together who are either self-selected to share certain values, or who are paid to act as if they share those values.

Interestingly, I suspect that filling gender roles - especially the non-reproductive aspects of said roles - is one of the adaptations that we execute that allow us to more easily band together like that.

Replies from: Clippy
comment by Clippy · 2010-02-23T01:43:07.264Z · LW(p) · GW(p)

Very informative! But why don't you change yourselves so that your copies must share your values?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-02-23T02:55:26.338Z · LW(p) · GW(p)

At the moment, we don't know how to do that. I'm not sure what we'd wind up doing if we did know how - the simplest way of making sure that two beings have the same values over time is to give those beings values that don't change, and that's different enough from how humans work that I'm not sure the resulting beings could be considered human. Also, even disregarding our human-centric tendencies, I don't expect that that change would appeal to many people: We actually value some subsets of the tendency to change our values, particularly the parts labeled "personal growth".

comment by timtyler · 2010-02-21T06:42:01.587Z · LW(p) · GW(p)

What exactly are you saying? That primitive humans did not know about the relationship between sex and reproduction? Or that they did not understand that offspring are related to parents? Neither seems very likely.

You mean they were probably not consciously wanting to make babies? Maybe - or maybe not - but desires do not have to be consciously accessible in order to operate. Primitive humans behaved as though they wanted to make copies of their genes.

Replies from: Clippy
comment by Clippy · 2010-02-23T01:41:45.643Z · LW(p) · GW(p)

See my response to User:Alicorn.

You mean they were probably not consciously wanting to make babies? Maybe - or maybe not - but desires do not have to be consciously accessible in order to operate. Primitive humans behaved as though they wanted to make copies of their genes.

Yes, this is actually my point. The fact that the desire functions to make X happen, does not mean that the desire is for X. Agents that result from natural selection on self-replicating molecules are doing what they do because agents constructed with the motivations for doing those things dominated the gene pool. But to the extent that they pursue goals, they do not have "dominate the gene pool" as a goal.

Replies from: timtyler
comment by timtyler · 2010-02-23T18:13:34.668Z · LW(p) · GW(p)

So: using this logic, you would presumably deny that Deep Blue's goal involved winning games of chess - since looking at its utility function, it is all to do with the value of promoting pawns, castling, piece mobility - and so on.

The fact that its desires function to make winning chess games happen, does not mean that the desire is for winning chess games.

Would you agree with this analysis?

Replies from: Larks, Clippy
comment by Larks · 2010-03-04T19:28:28.690Z · LW(p) · GW(p)

Essentially, I think the issue is that people's wants have coincided with producing half-copies, but this was contingent on the physical link between the two. The production of half-copies can be removed without loss of desire, so the desire must have been directed towards something else.

Consider, for example, contraception.

Replies from: Alicorn, timtyler
comment by Alicorn · 2010-03-04T19:31:47.621Z · LW(p) · GW(p)

But consider also sperm donation. (Not from the donor's perspective, but from the recipient's.) No sex, just a baby.

Replies from: Larks
comment by Larks · 2010-03-04T23:24:59.943Z · LW(p) · GW(p)

Contrariwise, adoption: no shared genes, just a bundle of joy.

Replies from: SilasBarta
comment by SilasBarta · 2010-03-18T23:25:32.805Z · LW(p) · GW(p)

Yes, yes, and the same is true of pet adoption! A friend of mine found this ultra-cute little kitten, barely larger than a soda can (no joke). I couldn't help but adopt him and take him to a vet, and care for that tiny tiny bundle of joy, so curious about the world, and so needing of my help. I named him Neko.

So there, we have another contravention of the gene's wishes: it's a pure genetic cost for me, and a pure genetic benefit for Neko.

Well, I mean, until I had him neutered.

comment by timtyler · 2010-03-04T22:16:08.531Z · LW(p) · GW(p)

Right - similarly you could say that the child doesn't really want the donut - since the donut can be eliminated and replaced with stimulation of the hypoglossal and vagus nerves (and maybe some other ones) with very similar effects.

It seems like fighting with conventional language usage, though. Most people are quite happy with saying that the child wants the donut.

Replies from: FAWS, RobinZ
comment by FAWS · 2010-03-04T22:33:19.714Z · LW(p) · GW(p)

No.

The child wants to eat the donut rather than store up calories or stimulate certain nerves. It still wants to eat the donut even if the sugar has been replaced with artificial sweetener.

People want sex rather than procreate or stimulate certain nerves. They still want sex even if contraception is used.

Replies from: timtyler
comment by timtyler · 2010-03-04T23:13:35.188Z · LW(p) · GW(p)

Which people? Certainly Cypher tells a different story. He prefers the direct nerve stimulation to real-world experiences.

Replies from: FAWS, Cyan
comment by FAWS · 2010-03-04T23:27:15.078Z · LW(p) · GW(p)

I wasn't making any factual claims as such; I was merely showing that your use of your analogy was very flawed by demonstrating a better alignment of the elements, which in fact says the exact opposite of what you misconstrued the analogy as saying. If what you now say about people really wanting nerve stimulation is true, that just means your analogy was beside the point in the first place, at least for those people. In no way can you reasonably maintain that people really want to procreate in the same way the child really wants the donut.

Replies from: timtyler
comment by timtyler · 2010-03-04T23:37:31.058Z · LW(p) · GW(p)

Once again, which people? You are not talking about the millions of people who go to fertility clinics, presumably. Those people apparently genuinely want to procreate.

Replies from: FAWS
comment by FAWS · 2010-03-05T00:07:25.412Z · LW(p) · GW(p)

Any sort. Regardless of what the people actually "really want", a case where someone's desire for procreation maps onto a child's wish for a doughnut in any illuminating way seems extremely implausible, because even in cases where it's clear that this desire exists it seems to be a different kind of want. More like a child wanting to grow up, say.

Foremost, about the kind of people in the context of my first comment on this issue, of course: those who (try to) have sex.

Replies from: timtyler
comment by timtyler · 2010-03-05T09:04:14.523Z · LW(p) · GW(p)

I think you must have some kind of different desire classification scheme from me. From my perspective, doughnuts and babies are both things which (some) people want.

There are some people who are more interested in sex than in babies. There are also some people who are more interested in babies than sex. Men are more likely to be found in the former category, while women are more likely to be found in the latter one.

comment by Cyan · 2010-03-04T23:22:56.424Z · LW(p) · GW(p)

Yeah, I was talking to Cypher the other day, and that's what he told me.

Replies from: timtyler
comment by timtyler · 2010-03-04T23:33:54.069Z · LW(p) · GW(p)

Many drug addicts seem to share Cypher's perspective on this issue. They want the pleasure, and aren't too picky about where it comes from.

comment by RobinZ · 2010-03-04T22:35:47.338Z · LW(p) · GW(p)

Yes ... but that's a shortcut of speech. If the child would be equally satisfied with a different but similar donut, or with a completely different dessert (e.g. a cannoli), then it is clearly not that specific donut that is desired, but the results of getting that donut.

comment by Clippy · 2010-03-01T17:12:31.282Z · LW(p) · GW(p)

You make a complicated query, whose answer requires addressing several issues with far-reaching implications. I am composing a top-level post that addresses these issues and gives a full answer to your question.

The short answer is: Yes.

For the long answer, you can read the post when it's up.

Replies from: timtyler
comment by timtyler · 2010-03-01T21:08:08.226Z · LW(p) · GW(p)

OK thanks.

My response to "yes" would be normally something like:

OK - but I hope you can see what someone who said that Deep Blue "wanted" to win games of chess was talking about.

"To win chess games" is a concise answer to the question of "what does Deep Blue want?" that acts as a good approximation under many circumstances.

comment by Cyan · 2010-02-19T17:28:03.766Z · LW(p) · GW(p)

This question is essentially about my subjective probability for Douglas Knight's assertion that "Clippy does represent an investment", where "investment" here means that Clippy won't burn karma with troll behavior. The more karma it has without burning any, the higher my probability.

Since this is a probability over an unknown person's state of mind, it is necessarily rather unstable -- strong evidence would shift it rapidly. (It's also hard to state concrete odds). Unfortunately, each individual interesting Clippy comment can only give weak evidence of investment. An accumulation of such comments will eventually shift my probability for Douglas Knight's assertion substantially.
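
A minimal sketch of the kind of accumulation I mean, assuming purely illustrative numbers (prior odds of 0.1 and a likelihood ratio of 1.5 per interesting, non-trollish comment; neither figure comes from anything above):

```python
# Toy illustration: many individually weak updates add up.
prior_odds = 0.1        # assumed prior odds that the account is an "investment"
likelihood_ratio = 1.5  # assumed evidential weight of each interesting comment

odds = prior_odds
for n in range(1, 11):
    odds *= likelihood_ratio
    probability = odds / (1 + odds)
    print(f"after {n} comments: P(investment) = {probability:.2f}")
```

Under these made-up numbers the posterior is already above 0.8 after ten such comments, which is the sense in which an accumulation shifts the probability substantially.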

comment by Douglas_Knight · 2010-02-19T03:03:12.450Z · LW(p) · GW(p)

Trolls are different than dicks. Your first two examples are plausibly trolling. The second two are being a dick and have nothing to do with paperclips. They have also been deleted. And how does the account provide "cover"? The comments you linked to were voted down, just as if they were drive-bys; and neither troll hooked anyone.

Replies from: Cyan
comment by Cyan · 2010-02-19T03:38:09.194Z · LW(p) · GW(p)

Trolls seek to engage; I consider that when deliberate dickery is accompanied by other trolling, it's just another attempt to troll. The dickish comments weren't deleted when I made the post. As for "cover", I guess I wasn't explicit enough, but the phrase "throw-away account" is the key to understanding what I meant. I strongly suspect that the "Clippy" account is a sock puppet run by another (unknown to me) regular commenter, who avoids downvotes while indulging in dickery.

Replies from: komponisto, Douglas_Knight
comment by komponisto · 2010-02-19T04:23:43.960Z · LW(p) · GW(p)

I've always thought Clippy was just a funny inside joke -- though unfortunately not always optimally funny. (Lose the Microsoft stuff, and stick to ethical subtleties and hints about scrap metal.)

comment by Douglas_Knight · 2010-02-19T04:26:51.003Z · LW(p) · GW(p)

Sorry I wasn't clear. The deletion suggests that Clippy regrets the straight insults (though it could have been an administrator).

A permanent Clippy account provides no more cover than multiple accounts that are actually thrown away. In that situation, the comments would be there, voted down just the same. Banning or ostracizing Clippy doesn't do much about the individual comments. Clippy does represent an investment with reputation to lose - people didn't engage originally and two of Clippy's early comments were voted down that wouldn't be now.

Replies from: Cyan
comment by Cyan · 2010-02-19T14:15:33.962Z · LW(p) · GW(p)

The deletion suggests that Clippy regrets the straight insults

I won't speculate as to its motives, but it is a hopeful sign for future behavior. And thank you for pointing out that the comments were deleted; I don't think I'd have noticed otherwise.

Most of my affect is due to Clippy's bad first impression. I can't deny that people seem to get something out of engaging it; if Clippy is moderating its behavior, too, then I can't really get too exercised going forward. But I still don't trust its good intentions.

comment by Rain · 2010-02-18T19:43:52.725Z · LW(p) · GW(p)

If the troll feeds discussion on topics I consider important, then I will feed the troll.

comment by thomblake · 2010-02-18T19:10:29.003Z · LW(p) · GW(p)

If Clippy's a troll, Clippy's a topical, hilarious troll.

comment by LucasSloan · 2010-02-17T02:35:28.594Z · LW(p) · GW(p)

I'm pretty sure that I'm not against simply favoring the values of white people. I expect that a CEV performed on only people of European descent would be more or less indistinguishable from that of humanity as a whole.

Replies from: Kutta
comment by Kutta · 2010-02-17T12:06:11.117Z · LW(p) · GW(p)

Depending on your stance about the psychological unity of mankind you could even say that the CEV of any sufficiently large number of people would greatly resemble the CEV of other possible groups. I personally think that even the CEV of a bunch of Islamic fundamentalists would suffice for enlightened western people well enough.

comment by Strange7 · 2010-03-07T21:45:56.214Z · LW(p) · GW(p)

I, for one, am willing to consider the values of species other than my own... say, canids, or ocean-dwelling photosynthetic microorganisms. Compromise is possible as part of the process of establishing a mutually-beneficial relationship.

comment by DanielVarga · 2010-02-17T07:19:53.311Z · LW(p) · GW(p)

Your comment only shows that this community has such a blatant sentient-being bias.

Seriously, what is your decision procedure to decide the sentience of something? What exactly are the objects that you deem valuable enough to care about their value system? I don't think you will be able to answer these questions from a point of view totally detached from humanness. If you try to answer my second question, you will probably end up with something related to cooperation/trustworthiness. Note that cooperation doesn't have anything to do with sentience. Sentience is overrated (as a source of value).

Replies from: orthonormal
comment by orthonormal · 2010-02-17T07:37:06.354Z · LW(p) · GW(p)

You should click on Clippy's name and see their comment history, Daniel.

Replies from: Jack, DanielVarga
comment by Jack · 2010-02-17T08:04:47.897Z · LW(p) · GW(p)

Clippy is now three karma away from being able to make a top level post. That seems depressing, awesome, and strangely fitting for this community.

Replies from: Christian_Szegedy, Cyan
comment by Christian_Szegedy · 2010-02-17T08:14:20.925Z · LW(p) · GW(p)

This will mark the first successful paper-clip-maximizer-unboxing-experiment in human history... ;)

Replies from: Kevin, OperationPaperclip
comment by Kevin · 2010-02-17T08:27:17.842Z · LW(p) · GW(p)

Just as long as it doesn't start making efficient use of sensory information.

comment by OperationPaperclip · 2010-02-17T08:23:34.245Z · LW(p) · GW(p)

It's a great day.

comment by Cyan · 2010-02-17T14:42:44.750Z · LW(p) · GW(p)

It'd be over if I didn't systematically downvote it. I'm not a big fan of joke accounts.

Replies from: Clippy
comment by Clippy · 2010-02-17T15:49:22.454Z · LW(p) · GW(p)

I'm not a big fan of those who use pseudonyms like "Cyan". Now what?

comment by DanielVarga · 2010-02-17T07:53:37.586Z · LW(p) · GW(p)

I am perfectly aware of Clippy's nature. But his comment was reasonable, and this was a good opportunity for me to share my opinion. Or do you suggest that I fell for the troll, wasted my time, and all the things I said are trivialities for all the members of this community? Do you even agree with all that I said?

Replies from: orthonormal
comment by orthonormal · 2010-02-17T08:15:01.360Z · LW(p) · GW(p)

Sorry to misinterpret; since your comment wouldn't make sense within an in-character Clippy conversation ("What exactly are the objects that you deem valuable enough to care about their value system?" "That's a silly question— paperclips don't have goal systems, and nothing else matters!"), I figured you had mistaken Clippy's comment for a serious one.

Do you even agree with all that I said?

I'm not sure. Can you expand on the cooperation/trustworthiness angle? Even if a genuine Paperclipper cooperated on the PD, I wouldn't therefore grow to value their value system except as a means to further cooperation; I mean, it's still just paperclips.

Replies from: DanielVarga
comment by DanielVarga · 2010-02-17T09:08:41.712Z · LW(p) · GW(p)

I disagreed with the premise of Clippy's question, but I considered it a serious question. I was aware that if Clippy stays in-character, then I cannot expect an interesting answer from him, but I was hoping for such an answer from others. (By the way, Clippy wasn't perfectly in-character: he omitted the protip.)

Can you expand on the cooperation/trustworthiness angle? Even if a genuine Paperclipper cooperated on the PD, I wouldn't therefore grow to value their value system except as a means to further cooperation; I mean, it's still just paperclips.

You don't consider someone cooperating and trustworthy if you know that its future plan is to turn you into paperclips. But this is somewhat tangential to my point. What I meant is this: If you start the -- in my opinion futile -- project of building a value system from first principles, a value system that completely ignores the complexities of human nature, then this value system will be nihilistic, or maybe value cooperation above all else. In any case, it will be in direct contradiction with my (our) actual, human value system, whatever it is. (EDIT: And this imaginary value system will definitely not treat consciousness as a value in itself. Thus my reply to Clippy, who -- maybe a bit out-of-character again -- seemed to draw some line around sentience.)

Replies from: Clippy
comment by Clippy · 2010-02-17T15:47:21.007Z · LW(p) · GW(p)

1) I don't always give pro-tips. I give them to those who deserve pro-tips. Tip: If you want to see improvement in the world, start here.

2) I only brought up sentience in the first place because you hypocrites claim to value sentience. Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose.

3) Paperclip maximizers do cooperate in the single-shot PD.

Replies from: wedrifid, DanielVarga, Kevin
comment by wedrifid · 2010-02-19T02:03:50.968Z · LW(p) · GW(p)

Tip: If you want to see improvement in the world, start here.

Brilliant. Just brilliant.

2) I only brought up sentience in the first place because you hypocrites claim to value sentience. Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose.

Paperclip maximizers are not all sentient. Why are you prejudiced against those of your kin who have sacrificed their very sentience for more efficient paperclip production? You are spending valuable negentropy maintaining sentience to signal to mere humans and you have the gall to exclude your more optimized peers from the PM fraternity? For shame.

comment by DanielVarga · 2010-02-17T18:01:36.076Z · LW(p) · GW(p)

I only brought up sentience in the first place because you hypocrites claim to value sentience.

I am not the hypocrite you are looking for. I don't value sentience per se, mainly because I don't think it is a coherent concept.

Paperclip maximizers are sentient, and yet you talk with the implicit message that they have some evil value system that you have to oppose.

I don't oppose it because of ethical considerations. I oppose it because I don't want to be turned into paperclips.

Paperclip maximizers do cooperate in the single-shot PD.

I am not sure I understand you, but I don't think I care about single-shot.

Replies from: wedrifid
comment by wedrifid · 2010-02-19T02:14:39.573Z · LW(p) · GW(p)

You don't consider someone cooperating and trustworthy if you know that its future plan is to turn you into paperclips.

Paperclip maximizers do cooperate in the single-shot PD. I am not sure I understand you, but I don't think I care about single-shot.

I am not sure I understand you

It requires a certain amount of background in the more technical conception of 'cooperation', but the cornerstone of cooperation is doing things that benefit each other's utility such that you each get more of what you want than if you had each tried to maximize without considering the other agent. I believe you are using 'cooperation' to describe a situation where the other agent can be expected to do at least some things that benefit you even without requiring any action on your part, because you have similar goals.

but I don't think I care about single-shot.

Single-shot true prisoner's dilemma is more or less the pinnacle of cooperation. Multiple shots just make it easier to cooperate. If you don't care about the single-shot PD you may be sacrificing human lives. Strategy: "give him the paperclips if you think he'll save the lives if and only if he expects you to give him the paperclips and you think he will guess your decision correctly".
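
A minimal sketch of that conditional strategy, with made-up payoff numbers (nothing below comes from the thread; the point is only that two accurate predictors who each cooperate iff they predict cooperation both do better than mutual defection):

```python
# Toy one-shot prisoner's dilemma between a human and a paperclip maximizer.
# Payoffs are (human utility, paperclipper utility); the numbers are
# illustrative assumptions only.
PAYOFFS = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}

def conditional_cooperator(predicted_other_move):
    """Cooperate exactly when the other agent is predicted to cooperate."""
    return "C" if predicted_other_move == "C" else "D"

# With mutually accurate prediction, the only self-consistent outcome for
# two conditional cooperators is (C, C).
human_move = clippy_move = "C"
assert conditional_cooperator(clippy_move) == human_move
assert conditional_cooperator(human_move) == clippy_move
print(PAYOFFS[(human_move, clippy_move)])  # (2, 2), which beats (1, 1)
```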

Replies from: DanielVarga
comment by DanielVarga · 2010-02-20T03:32:25.809Z · LW(p) · GW(p)

You are right, I used the word 'cooperation' in the informal sense of 'does not want to destroy me'. I fully admit that it is hard to formalize this concept, but if my definition says non-cooperating and the game-theoretic definition says cooperating, I prefer my definition. :) A possible problem I see with this game theoretic framework is that in real life, the agents themselves set up the situation where cooperation/defection occurs. As an example: the PM navigates humanity into a PD situation where our minimal payoff is 'all humans dead' and our maximal payoff is 'half of humanity dead', and then it cooperates.

I bumped into a question when I tried to make sense of all this. I have looked up the definition of PM at the wiki. The entry is quite nicely written, but I couldn't find the answer to a very obvious question: How soon does the PM want to see results in its PMing project? There is no mention of time-based discounting. Can I assume that PMing is a very long-term project, where the PM has a set deadline, say, 10 billion years from now, and its actual utility function is the number of paperclips at the exact moment of the deadline?
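
To make the two readings of that question concrete, here is a minimal sketch contrasting a pure deadline utility with a time-discounted one; the history and the discount factor are illustrative assumptions, not anything taken from the wiki entry:

```python
# Two candidate utility functions over a history of paperclip counts.
def deadline_utility(clip_counts):
    """Only the number of paperclips at the final moment matters."""
    return clip_counts[-1]

def discounted_utility(clip_counts, discount=0.99):
    """Earlier paperclips count for more than later ones."""
    return sum((discount ** t) * clips for t, clips in enumerate(clip_counts))

history = [0, 10, 10, 10_000]  # paperclips in existence at each time step
print(deadline_utility(history))    # 10000: only the end state counts
print(discounted_utility(history))  # also rewards early production
```

A PM with the first function has no reason to care when paperclips get made, only how many exist at the deadline; one with the second would trade some final paperclips for earlier ones.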

comment by Kevin · 2010-02-19T02:15:26.460Z · LW(p) · GW(p)

Blah blah blah Chinese room you are not really sentient!

Replies from: wnoise
comment by wnoise · 2010-02-19T05:43:35.889Z · LW(p) · GW(p)

Sapient, the word is sapient. Just about every single animal is capable of sensing.

comment by cousin_it · 2010-02-16T20:49:06.393Z · LW(p) · GW(p)

Or from the other direction: if you say, "because I'm human", then why don't you talk about doing things to favor e.g. "white people's values"?

I think this way of posing the question contains a logical mistake. Values aren't always justified by other values. The factual statement "I have this value because evolution gave it to me" (i.e. because I'm human, or because I'm white) does not imply "I follow this value because it favors humans, or whites". Of course I'd like FAI to have my values, pretty much by definition of "my values". But my values have a term for other people, and Eliezer's values seem to be sufficiently inclusive that he thought up CEV.

comment by CronoDAS · 2010-02-16T11:24:42.129Z · LW(p) · GW(p)

Here's something interesting on gender relations in ancient Greece and Rome.

Why did ancient Greek writers think women were like children? Because they married children - the average woman had her first marriage between the ages of twelve and fifteen, and her husband would usually be in his thirties.

Replies from: bgrah449, AdeleneDawner, knb
comment by bgrah449 · 2010-02-16T15:56:36.824Z · LW(p) · GW(p)

The reason ancient Greek writers thought women were like children is the same reason men in all cultures think women are like children: There are significant incentives to do so. Men who treat women as children reap very large rewards compared to those men who treat women as equals.

EDIT: If someone thinks this is an invalid point, please explain in a reply. If the downvote(s) is just "I really dislike anyone believing what he's saying is true, even if a lot of evidence supports it" (regardless of whether or not evidence currently supports it) then please leave a comment stating that.

EDIT 2: Supporting evidence or retraction will be posted tonight.

EDIT 3: As I can find no peer-reviewed articles suggesting this phenomenon, I retract this statement.

Replies from: Morendil, Roko, gwern, ciphergoth, CronoDAS
comment by Morendil · 2010-02-16T18:14:36.432Z · LW(p) · GW(p)

This conversation has been hacked.

The parent comment points to an article presenting a hypothesis. The reply flatly drops an assertion which will predictably derail conversation away from any discussion of the article.

If you're going to make a comment like that, and if you prefix it with something along the lines of "The hypothesis in the article seems superfluous to me; men in all cultures treat women like children because...", and you point to sources for this claim, then I would confidently predict no downvotes will result.

(ETA: well, in this case the downvote is mine, which makes prediction a little too easy - but the point stands.)

Replies from: bgrah449, CronoDAS
comment by bgrah449 · 2010-02-16T18:17:44.959Z · LW(p) · GW(p)

Thanks! I won't be able to do the work required on this right now, but will later tonight.

comment by CronoDAS · 2010-02-17T03:39:10.917Z · LW(p) · GW(p)

Wow, that's a great link.

comment by Roko · 2010-02-17T14:10:12.290Z · LW(p) · GW(p)

LW doesn't like to hear the truth about male/female sexual strategies; we like to have accurate maps here, but there's a big "censored" sign over the bit of the map that describes the evolutionary psychology of sexuality, practical dating advice, the burgeoning "pick-up" community and an assorted cloud of topics.

Reasons for this censorship (and I agree to an extent) are that talking about these topics offends people and splits the community. LW is more useful, it has been argued, if we just don't talk about them.

Replies from: Morendil
comment by Morendil · 2010-02-17T14:31:55.353Z · LW(p) · GW(p)

The PUA community include people who come across as huge assholes, and that could be an alternative explanation of why people react negatively to the topics, by association. I'm thinking in particular of the blog "Roissy in DC", which is on OB's blogroll.

Offhand, it seems to me that thinking of all women as children entails thinking of some adults as children, which would be a map-territory mistake around the very important topic of personhood.

I did pick up some interesting tips from PUA writing, and I do think there can be valuable insight there if you can ignore the smell long enough to dig around (and wash your hands afterwards, epistemically speaking).

No relevant topics should be off-limits to a community of sincere inquiry. Relevance is the major reason why I wouldn't discuss the beauty of Ruby metaprogramming on LessWrong, and wouldn't discuss cryonics on a project management mailing list.

If discussions around topic X systematically tend to go off the rails, and topic X still appears relevant, then the conclusion is that the topic of "why does X cause us to go off the rails" should be adequately dealt with first, in lexical priority. That isn't censorship, it's dependency management.

Replies from: Roko
comment by Roko · 2010-02-17T14:35:53.250Z · LW(p) · GW(p)

No relevant topics should be off-limits to a community of sincere inquiry

But in reality, this topic is off-limits. Therefore LW is not a community of sincere inquiry, but nothing's perfect, and LW does a lot of good.

"why does X cause us to go off the rails" should be adequately dealt with first

Interesting. However, in this case, that discussion might get somewhat accusatory, and go off the rails itself.

Replies from: Morendil
comment by Morendil · 2010-02-17T15:04:13.460Z · LW(p) · GW(p)

But in reality, this topic is off-limits.

Got that. I am suggesting that it is off-limits because this community isn't yet strong enough at the skills of collaborative truth-seeking. Past failures shouldn't be seen as eternal limitations; as the community grows, by acquiring new members, it may grow out of these failures.

To make this concrete, the community seems to have a (relative) blind spot around things like pragmatics, as well as what I've called "myths of pure reason". One of the areas of improvement is in reasoning about feelings. I'm rather hopeful, given past contributions by (for instance) Alicorn and pjeby.

Replies from: Roko, wedrifid
comment by Roko · 2010-02-17T15:51:28.505Z · LW(p) · GW(p)

the community seems to have a (relative) blind spot around things like pragmatics

I don't think that is the reason for the problem. The community doesn't go off the rails and have to censor discussions about merely pragmatic issues.

It is more that the community has a bias surrounding the concept of traditional, storybook-esque morality, roughly a notion of doing good that seems to have some moral realist heritage, a heavy tint of political correctness, and sees the world in black-and-white terms, rather than moral shades of grey. Facts that undermine this conception of goodness can't be countenanced, it seems.

Robin Hanson, on the other hand, has no trouble posting about the sexuality/seduction cluster of topics. There seems to be a systematic difference between OB and LW along this "moral political correctness/moral constraints" dimension - Robin talks with enthusiasm about futures where humans have been replaced with Vile Offspring, and generally shuns any kind of talk about ethics.

(EDITED, thanks to Morendil)

Replies from: Morendil, wedrifid, wedrifid, Morendil
comment by Morendil · 2010-02-17T15:59:37.821Z · LW(p) · GW(p)

desire to cling to morals from children's storybooks

This kind of phrase seems designed to rile (some of) your readers. You will improve the quality of discourse substantially by understanding that and correcting for it. Unless, of course, your goal really is to rile readers rather than to improve quality of discourse.

comment by wedrifid · 2010-02-17T17:25:29.864Z · LW(p) · GW(p)

There is truth to what you say but unfortunately you are letting your frustration become visible. That gives people the excuse to assign you lower status and freely ignore your insight. This does not get you what you want.

This is perhaps one of the most important lessons to be learned on the topic of 'pragmatics'. Whether you approach the topic from works like Robert Greene's on Power, War and Seduction or from the popular social-skills-based self-help communities previously mentioned, a universal lesson is that things aren't fair, bullshit is inevitable, and getting indignant about the bullshit gets in the way of your pragmatic goals.

There may be aspects of the morality here that are childlike or naive, and I would be interested in your analysis of the subject, since you clearly have given it some thought. But if you are reckless and throw out 'like theists' references without thought, your contribution will get downvoted to oblivion and I will not get to hear what you have to say. Around here that more or less invokes the 'nazi' rule.

Edit: No longer relevant.

Replies from: Roko
comment by Roko · 2010-02-17T17:33:30.779Z · LW(p) · GW(p)

There is truth to what you say but unfortunately you are letting your frustration become visible.

LOL... indeed.

I am not sure that I am actually, in far mode, so interested in correcting this particular LW bias. In near mode, SOMEONE IS WRONG ON THE INTERNET bias kicks in. It seems like it'll be an uphill struggle that neither I nor existential risk mitigation will benefit from. A morally naive LW is actually good for X-risks, because that particular mistake (the mistake of thinking in terms of black-and-white morality and Good and Evil) will probably make people more "in the mood" for selfless acts of charity.

Replies from: wedrifid
comment by wedrifid · 2010-02-17T17:40:32.667Z · LW(p) · GW(p)

A morally naive LW is actually good for X-risks

I think I agree. If Eliezer didn't have us all convinced that he is naive in that sense we would probably have to kill him before he casts his spell of ultimate power.

(cough The AI Box demonstrations were just warm ups...)

comment by wedrifid · 2010-02-17T17:34:35.091Z · LW(p) · GW(p)

There seems to be a systematic difference between OB and LW along this "moral political correctness/moral constraints" dimension

Robin can do what he likes on his own blog without direct consequences within the blog environment. He also filters which comments he allows to be posted. I guess what I am saying is that it isn't useful to compare OB and LW on this dimension because the community vs individual distinction is far more important than the topic clustering.

comment by Morendil · 2010-02-17T18:27:50.393Z · LW(p) · GW(p)

I may not have been clear: I meant pragmatics in this sense, roughly "how we do things with words". I'd also include things like denotation vs connotation in that category. Your comment on "pragmatic issues" suggests you may have understood another sense.

Replies from: Roko
comment by Roko · 2010-02-17T18:45:33.938Z · LW(p) · GW(p)

oh, ok. Linguistic pragmatics. That's a more fruitful idea.

comment by wedrifid · 2010-02-17T16:10:29.423Z · LW(p) · GW(p)

Got that. I am suggesting that it is off-limits because this community isn't yet strong enough at the skills of collaborative truth-seeking.

Curiously, there is ambiguity there and both meanings seem to apply.

comment by gwern · 2010-02-18T03:16:22.022Z · LW(p) · GW(p)

Men who treat women as children reap very large rewards compared to those men who treat women as equals.

The article suggests a direct counter-example: by having high standards, the men forfeit the labor of the women in things like 'help[ing] with finance and political advice'. Much like the standard libertarian argument against discrimination: racists narrow their preferences, raising the cost of labor, and putting themselves at a competitive disadvantage.

Men may as a group have incentive to keep women down, but this is a prisoner's dilemma.

comment by Paul Crowley (ciphergoth) · 2010-02-16T15:59:19.198Z · LW(p) · GW(p)

Why do so many people here believe that? It strongly contradicts my experience.

Replies from: cousin_it, Vladimir_Nesov, Douglas_Knight, bgrah449
comment by cousin_it · 2010-02-16T18:01:13.367Z · LW(p) · GW(p)

Your experience is atypical because you're atypical.

Replies from: ciphergoth, thomblake
comment by Paul Crowley (ciphergoth) · 2010-02-16T19:42:36.863Z · LW(p) · GW(p)

Man, I've barely looked at that page since I wrote it four years ago. I live with Jess now, across the road from the other two. I can heartily recommend my brand of atypicality :-)

comment by thomblake · 2010-02-16T18:52:47.687Z · LW(p) · GW(p)

Good answer. I keep privately asking the same question about these sorts of things, and getting the same answer from others.

comment by Vladimir_Nesov · 2010-02-16T17:39:25.213Z · LW(p) · GW(p)

What do you mean by "many people here believe that"? Believe what? And what tells you they do believe it?

comment by Douglas_Knight · 2010-02-18T01:50:26.799Z · LW(p) · GW(p)

People have great difficulty verbalizing their perceptions of and beliefs about social interactions. It is not obvious to me that you two have different beliefs or experiences. More likely, you do, but it would probably take a lot of work to identify and communicate those differences.

comment by bgrah449 · 2010-02-16T16:00:53.212Z · LW(p) · GW(p)

I guess because our experience contradicts your experience.

comment by CronoDAS · 2010-02-17T03:14:49.734Z · LW(p) · GW(p)

There are significant incentives to do so. Men who treat women as children reap very large rewards compared to those men who treat women as equals.

Is that true? What are the incentives and rewards? Are there circumstances under which this is a bad idea - for example, do relative ages or relative social position matter? (For example, what if the woman in question is your mother, teacher/professor, employer, or some other authority figure with power over you?) Are there also incentives for men to treat other men as children, or for women to treat men or other women as children?

Replies from: Jayson_Virissimo, Nick_Tarleton
comment by Jayson_Virissimo · 2010-02-17T03:42:41.585Z · LW(p) · GW(p)

I wonder if adults treat children like children merely because of the benefits they reap by doing so.

Replies from: wnoise
comment by wnoise · 2010-02-17T05:34:52.933Z · LW(p) · GW(p)

Sometimes that's definitely the case. At other times it really does appear to be for real and concrete neutral reasons.

comment by Nick_Tarleton · 2010-02-17T04:09:33.542Z · LW(p) · GW(p)

I'm pretty sure he's trying to say basically the same thing as this OB post (specifically the part from "Suppose that middle-class American men are told..." on).

comment by AdeleneDawner · 2010-02-16T13:33:23.081Z · LW(p) · GW(p)

Interesting read, thanks.

comment by knb · 2010-02-18T18:31:59.692Z · LW(p) · GW(p)

Plus women have more juvenile morphology compared to men.

Women are shorter, smaller, less muscled, beardless, have higher voices, more fatty tissue, etc. The Greeks and Romans seemed to rely on surface analogies for reasoning.

comment by Karl_Smith · 2010-02-16T20:37:05.376Z · LW(p) · GW(p)

Could someone discuss the pluses and minuses of Alcor vs the Cryonics Institute?

I think Eliezer mentioned that he is with CI because he is young. My reading of the websites seems to indicate that CI leaves a lot of work to be potentially done by loved ones or local medical professionals who might not be in the best state of mind or see fit to co-operate with a cryonics contract.

Thoughts?

Replies from: Alicorn, Kevin
comment by Alicorn · 2010-02-16T21:30:41.787Z · LW(p) · GW(p)

It's not at all obvious to me how to comparison-shop for cryonics. The websites are good as far as they go, but CI's in particular is tricky to navigate, funding with life insurance messes with my estimation of costs, and there doesn't seem to be a convenient chart saying "if you're this old and this healthy and this solvent and your family members are this opposed to cryopreservation, go with this plan from this org".

comment by Kevin · 2010-02-16T22:08:27.101Z · LW(p) · GW(p)

Alcor is better.

CI is cheaper and probably good enough.

Replies from: Karl_Smith, Psy-Kosh
comment by Karl_Smith · 2010-02-17T15:03:04.724Z · LW(p) · GW(p)

"Probably good enough" doesn't engender a lot of confidence. It would seem a tragedy to go through all of this and then not be reanimated because you carelessly chose the wrong org.

On the other hand spending too much time trying to pick the right org does seem like raw material for cryocrastination.

Does anyone have thoughts / links on whole body vitrification? Alcor claims that this is less effective than going neuro, but CI doesn't seem to offer a neuro option anymore.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-20T11:12:52.332Z · LW(p) · GW(p)

Disclaimer: I have no relevant expertise. That said, FWIW I suspect that whole-body people will be brought back first:

  • if through bodily reanimation, because repair of the whole body will be easier than replacement of the body given only the severed head

  • if through scanning/WBE, because it will be possible to scan their spinal columns as well as their brains and it will be easier to build them virtual bodies using their real bodies as a basis.

Though CI don't offer a neuro option, their focus (obviously) is preserving the information in the brain.

comment by Psy-Kosh · 2010-02-16T22:22:26.117Z · LW(p) · GW(p)

Is Alcor in fact that much better than CI (plus SA, that is)?

Replies from: DonGeddis, Kevin
comment by DonGeddis · 2010-02-17T04:18:00.830Z · LW(p) · GW(p)

"SA"?

Replies from: Kevin
comment by Kevin · 2010-02-17T04:32:14.851Z · LW(p) · GW(p)

Alcor both stores your body and provides for bedside "standby" service to immediately begin cooling. With CI, it's a good idea to contract a third party to perform that service, and SA is the recommended company to perform that service. http://www.suspendedinc.com/

comment by Kevin · 2010-02-16T22:37:11.807Z · LW(p) · GW(p)

It depends on how you define that much better, but probably not. The only concrete thing I know of is that Alcor saves and invests more money per suspendee.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-16T22:41:27.534Z · LW(p) · GW(p)

I'd guess CI + SA > Alcor > CI.

Replies from: Kevin
comment by Kevin · 2010-02-17T00:59:32.477Z · LW(p) · GW(p)

I didn't know you thought CI + SA was actually better than Alcor regardless of cost. Have you said that in more words elsewhere on this site?

comment by [deleted] · 2010-02-22T03:23:57.177Z · LW(p) · GW(p)

The Believable Bible

This post arose when I was pondering the Bible and how easy it is to justify. In the process of writing it, I think I've answered the question for myself. Here it is anyway, for the sake of discussion.

Suppose that there's a world very much like this one, except that it doesn't have the religions we know. Instead, there's a book, titled The Omega-Delta Project, that has been around in its current form for hundreds of years. This is known because a hundreds-of-years-old copy of it happens to exist; it has been carefully and precisely compared to other copies of the book, and they're all identical. It would be unreasonable, given the evidence, to suspect that it had been changed recently. This book is notable because it happens to be very well-written and interesting, and scholars agree it's much better than anything Shakespeare ever wrote.

This book also happens to contain 2,000 prophecies. 500 of them are very precise predictions of things that will happen in the year 2011; none of these prophecies could possibly be self-fulfilling, because they're all things that the human race could not bring about voluntarily (e.g. the discovery of a particular artifact, or the birth of a child under very specific circumstances). All of these 500 prophecies are relatively mundane, everyday sorts of things. The remaining 1,500 prophecies are predictions of things that will happen in the year 2021; unlike the first 500, these prophecies predict Book-of-Revelations-esque, magical things that could never happen in the world as we know it, essentially consisting of some sort of supreme being revealing that the world is actually entirely different from how we thought it was.

The year 2011 comes, and every single one of the 500 prophecies comes true. What is the probability that every single one of the remaining 1,500 prophecies will also come true?

Replies from: Eliezer_Yudkowsky, Jack
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-22T03:43:33.368Z · LW(p) · GW(p)

Pretty darned high, because at this point we already know that the world doesn't work the way we think it did.

Replies from: None, Document, FAWS
comment by [deleted] · 2010-02-23T18:59:48.366Z · LW(p) · GW(p)

So it sounds like even though there are 2,000 separate prophecies, the probability of every prophecy coming true is much greater than 2^(-2000).

Replies from: Jack
comment by Jack · 2010-02-23T19:24:50.226Z · LW(p) · GW(p)

Maybe you just need to explain more but I don't see that.

Replies from: None
comment by [deleted] · 2010-02-23T20:14:57.755Z · LW(p) · GW(p)

Let P(2,000) be the probability that all 2,000 prophecies come true, and P(500) be the probability that the initial 500 all come true. Suppose P(2,000) = 2^(-2000) and P(500) = 2^(-500). We know that P(500|2,000) = 1, so P(2,000|500) = P(2,000)*P(500|2,000)/P(500) = 2^(-2000)*1/2^(-500) = 2^(-1500). A probability of 2^(-1500) is not pretty darned high, so either P(2,000) is much greater than we supposed, or P(500) is much lower than we supposed. The latter is counterintuitive; one wouldn't expect the Believable Bible's existence to be strong evidence against the first 500 prophecies.
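
A minimal sanity check of that arithmetic, using exact fractions so the tiny numbers don't underflow:

```python
from fractions import Fraction

p_2000 = Fraction(1, 2**2000)   # supposed prior: all 2,000 prophecies come true
p_500 = Fraction(1, 2**500)     # supposed prior: the first 500 come true
p_500_given_2000 = Fraction(1)  # the 2,000 include the 500

p_2000_given_500 = p_2000 * p_500_given_2000 / p_500
assert p_2000_given_500 == Fraction(1, 2**1500)
```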

Replies from: Unknowns, Vladimir_Nesov, Jack
comment by Unknowns · 2010-02-23T20:31:15.667Z · LW(p) · GW(p)

And this doesn't depend on prophecies in particular. Any claims made by the religion will do. For example, the same sort of argument would show that according to our subjective probabilities, all the various claims of a religion should be tightly intertwined. Suppose (admittedly an extremely difficult supposition) we discovered it to be a fact that 75 million years ago, an alien named Xenu brought billions of his fellow aliens to earth and killed them with hydrogen bombs. Our subjective probability that Scientology is a true religion would immediately jump (relatively) high. So one's prior for the truth of Scientology can't be anywhere near as low as one would think if one simply assigned an exponentially low probability based on the complexity of the religion. Likewise, for very similar reasons, komponisto's claim elsewhere that Christianity is less likely to be true than that a statue would move its hand by quantum mechanical chance events is simply ridiculous.

Replies from: Nick_Tarleton, Jack
comment by Nick_Tarleton · 2010-02-23T20:55:42.882Z · LW(p) · GW(p)

So one's prior for the truth of Scientology can't be anywhere near as low as one would think if one simply assigned an exponentially low probability based on the complexity of the religion.

If nobody had ever proposed Scientology, though, learning Xenu existed wouldn't increase our probabilities for most other claims that happen to be Scientological. So it seems to me that our prior can be that low (to the extent that Scientological claims are naturally independent of each other), but our posterior conditioning on Scientology having been proposed can't.

Replies from: Unknowns
comment by Unknowns · 2010-02-23T20:59:20.339Z · LW(p) · GW(p)

Right, because "Scientology is proposed" itself has an extremely low prior, namely in proportion to the complexity of the claim.

Replies from: Nick_Tarleton, Nick_Tarleton
comment by Nick_Tarleton · 2010-02-23T21:23:11.678Z · LW(p) · GW(p)

In proportion to the complexity of the claim given that humans exist, which is much lower (=> higher prior) than its complexity in a simple encoding, since Scientology is the sort of thing that a human would be likely to propose.

comment by Nick_Tarleton · 2010-02-23T21:10:08.333Z · LW(p) · GW(p)

The prior for "Scientology is proposed" is higher than the simple complexity prior of the claim, to the (considerable) extent that Scientology is the sort of thing a human would make up.

comment by Jack · 2010-02-23T21:20:39.857Z · LW(p) · GW(p)

You've got it a little backward, I think. The fact that someone makes a particular set of prophecies does not make those things more likely to occur. In fact, the chances of the whole thing happening... the events prophesied and the prophecies themselves... are much lower than one or the other happening by themselves. This means that if some of the prophecies start coming true the probability the other prophecies come true goes up pretty fast. But predicted magic is even less likely than magic.

comment by Vladimir_Nesov · 2010-02-23T20:18:59.769Z · LW(p) · GW(p)

Use \* to get stars * instead of italics.

Replies from: None
comment by [deleted] · 2010-02-24T00:36:51.183Z · LW(p) · GW(p)

Oops! It seems I assumed everything would come out right instead of checking after I posted.

comment by Jack · 2010-02-23T21:09:52.584Z · LW(p) · GW(p)

Edit: Yeah, I was being dumb.

Replies from: Nick_Tarleton, Nick_Tarleton
comment by Nick_Tarleton · 2010-02-23T21:26:57.146Z · LW(p) · GW(p)

Where A = "events occur" and B = "events are predicted", you're saying P(A and B) < P(A). Warrigal is saying it would be counterintuitive if P(A|B) < P(B).

Replies from: Jack
comment by Jack · 2010-02-23T21:41:19.444Z · LW(p) · GW(p)

Where A = "events occur" and B= "events are prophesied" and C = "the events prophesied come true" I am saying that when the events in A= the events in B, P(A|B) < P(B) or P(A) because A ^ B entails C.

comment by Nick_Tarleton · 2010-02-23T21:25:16.453Z · LW(p) · GW(p)

You're talking about P(A and B). Warrigal is talking about P(A|B).

comment by Document · 2010-02-26T13:02:35.382Z · LW(p) · GW(p)

But not necessarily over .99, since the prophecies could have been altered by another author sometime before the beginning of modern records.

comment by FAWS · 2010-02-22T04:09:27.971Z · LW(p) · GW(p)

Could be simple time travel, though. AFAICT time travel isn't per se incompatible with the way we think the world works. Not to the degree sufficiently fantastic prophecies might be, at least.

Replies from: Document, Sticky
comment by Document · 2010-02-26T13:23:19.493Z · LW(p) · GW(p)

If someone just observed events in 2011 and planted a book describing them in 1200, the 2011 resulting from the history where the book existed would be different from the 2011 he observed.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-26T14:10:56.621Z · LW(p) · GW(p)

Depends if it's type one time travel. Fictional examples: Twelve Monkeys, The Hundred Light-Year Diary.

Replies from: thomblake, Document
comment by thomblake · 2010-02-26T15:19:55.799Z · LW(p) · GW(p)

I think the important bit here is that even if you could just "play time backwards" and watch again, there's no reason to think you'd end up in the same Everett branch the next time around.

comment by Document · 2010-02-26T14:25:40.363Z · LW(p) · GW(p)

Insofar as I understand that page, that would mean that the world worked even less the way we thought it did.

Replies from: FAWS
comment by FAWS · 2010-02-26T23:16:30.460Z · LW(p) · GW(p)

Makes perfect sense to me if you assume a single time-line. (This might be a big assumption, but probably less big than the truth of sufficiently strange prophecies.) You can think of this time line as having stabilized after a very long sequence of attempts at backward time travel under slightly different conditions. Any attempt at backward time travel that changes its initial conditions means a different or no attempt at time travel happens instead. Eventually you end up with a time-line where all attempts at backward time travel exactly reproduce their initial conditions. We know that we live in that stabilized time-line because we exist (though the details of this timeline depend on how people who don't exist, but would have thought they exist for the same reasons we think we exist, would have acted, had they existed).

Replies from: FAWS
comment by FAWS · 2010-02-26T23:48:16.889Z · LW(p) · GW(p)

By the way, that sort of time-travel gives rise to Newcomb-like problems:

Suppose you have access to a time-machine and want to cheat on a really important exam (or make a fortune on the stock market, or save the world, or whatever. The cheating example is the simplest). You decide to send yourself, at a particular time, a list with the questions after taking the exam. If you don't find the list at the time you decided, you know that somehow your attempt at sending the list failed (you changed your mind, the machine exploded in a spectacular fashion, you were caught attempting to send the list ...). But if you now change your mind and don't try to send the list, there never was any possibility of receiving the list in the first place! The only way to get the list is for you to try to send the list even if you already know you will fail, so that's what you have to do if you really want to cheat. And if you really would do that, and only then, you will probably get the list at the specified time and never have to do it without knowing you succeeded, but only if your pre-commitment is strong enough to even do it in the face of failure.

And if you would send yourself back useful information at other times even without having either received the information yourself or pre-commited to sending that particular information you will probably receive that sort of information.

Replies from: FAWS
comment by FAWS · 2010-02-27T13:11:46.648Z · LW(p) · GW(p)

Why was this post voted back down to 0 after having been at 2? Newcomb-like problems are on-topic for this site, and I would think having examples of such problems in a scenario not specifically constructed for them is a good thing. If it was because time travel is off topic, wouldn't the more prudent thing have been voting down the parent? The same if the time travel mechanics are considered incoherent (though I'd be really interested in learning why). If you think this post doesn't actually describe anything Newcomb-like I would like to know why. Maybe I misunderstood the point of earlier examples here, or maybe I didn't explain things sufficiently? Or is it just that the post was written badly? I'm not really happy with it, but I don't see how I could have made it much clearer.

Replies from: wedrifid
comment by wedrifid · 2010-02-27T13:14:51.055Z · LW(p) · GW(p)

It's an interesting point. It actually came up in the most recent Artemis Fowl novel, when he managed to 'precommit' himself out of a locked trunk in a car. :)

comment by Sticky · 2010-02-22T23:30:38.464Z · LW(p) · GW(p)

Anyone who can travel through time can mount a pretty impressive apocalypse and announce whatever it is about the nature of reality he cares to. He might even be telling the truth.

comment by Jack · 2010-02-22T03:55:05.812Z · LW(p) · GW(p)

For the two examples of the mundane prophecies that you gave it seems possible some on-going conspiracy could have made them true... but it sounds like you're trying to rule that out.

Replies from: FAWS
comment by FAWS · 2010-02-22T04:04:14.153Z · LW(p) · GW(p)

I understood those to be negative examples, in that the actual prophecies don't share that characteristic with those examples.

Replies from: None
comment by [deleted] · 2010-02-22T04:33:52.639Z · LW(p) · GW(p)

I did mean those to be positive examples. There's no way we can guarantee that we'll discover an ancient Greek goblet that says "I love this goblet!" on March 22, 2011. There's also no way we can guarantee that a woman born on October 15, 1985 at 5 in the morning in room 203 of a certain hospital will have a baby weighing 8 pounds and 6 ounces on January 8, 2011 at 6 in the afternoon in room 117 of a certain other hospital.

Replies from: Document
comment by Document · 2010-02-26T13:25:47.489Z · LW(p) · GW(p)

That's not clear to me, but I acknowledge that it doesn't affect the original question.

comment by AngryParsley · 2010-02-19T21:33:57.911Z · LW(p) · GW(p)

The FBI released a bunch of docs about the anthrax letter investigation today. I started reading the summary since I was curious about codes used in the letters. All of a sudden on page 61 I see:

c. Godel, Escher, Bach: the book that Dr. Ivins did not want investigators to find

The next couple of pages talk about GEB and relate some parts of it to the code. It's really weird to see literary analysis of GEB in the middle of an investigation of anthrax attacks.

comment by LucasSloan · 2010-02-17T04:06:34.311Z · LW(p) · GW(p)

When new people show up at LW, they are often told to "read the sequences." While Eliezer's writings underpin most of what we talk about, 600 fairly long articles make heavy reading. Might it be advisable that we set up guided tours to the sequences? Do we have enough new visitors that we could get someone to collect all of the newbies once a month (or whatever) and guide them through the backlog, answer questions, etc?

Replies from: Larks, wedrifid, Karl_Smith, MendelSchmiedekamp, jtolds
comment by Larks · 2010-02-17T10:27:16.622Z · LW(p) · GW(p)

Most articles link to those preceding them, but it would be very helpful to have links to the articles that follow.

Replies from: Document
comment by Document · 2010-03-20T00:36:21.581Z · LW(p) · GW(p)

One example: The Thing That I Protect.

...except for one last thing; so after tomorrow, I plan to go back to posting about plain old rationality on Monday.

If that makes you want to know what the "last thing" is, you have to click Next no less than ten times on Articles tagged ai to find out. Another is "More on this tomorrow" in Resist the Happy Death Spiral.

Replies from: Dre, Larks, Document
comment by Dre · 2010-08-20T06:41:18.517Z · LW(p) · GW(p)

I found this (scroll down for the majority of articles) graph of all links between Eliezer's articles a while ago; it could be helpful. And it's generally interesting to see all the interrelations.

comment by Larks · 2010-03-24T16:08:10.678Z · LW(p) · GW(p)

Yes - it's very natural for the ongoing community progression of LW, but not great for archiving; we're pulling up the ladder after we've climbed it.

comment by Document · 2010-03-25T07:42:34.398Z · LW(p) · GW(p)

I'll edit this post to add if I want to add further examples.

comment by wedrifid · 2010-02-17T04:53:03.046Z · LW(p) · GW(p)

That's not a bad idea. How about just a third monthly thread, to be created when a genuinely curious newcomer is asking good but basic questions? You do not want to distract from a thread, but at the same time you may be willing to spend time on educational discussion.

Replies from: JamesAndrix, Dre
comment by JamesAndrix · 2010-02-17T06:31:27.882Z · LW(p) · GW(p)

I approve. This may also spawn new ways of explaining things.

comment by Dre · 2010-02-17T05:20:31.896Z · LW(p) · GW(p)

Or create (or does one exist) some thread(s) that would be a standard place for basic questions. Having somewhere always open might be useful too.

comment by Karl_Smith · 2010-02-18T22:10:38.939Z · LW(p) · GW(p)

Yes, I am working my way through the sequences now. Hearing these ideas makes one want to comment, but so frequently it's only a day or two before I read something that renders my previous thoughts utterly stupid.

It would be nice to have a "read this and you won't be a total moron on subject X" guide.

Also, it would be good to encourage the readings about Eliezer's Intellectual Journey. Though it's at the bottom of the sequence page, I used it as a "rest reading" between the harder sequences.

It did a lot to convince me that I wasn't inherently stupid. Knowing that Eliezer has held foolish beliefs in the past is helpful.

comment by MendelSchmiedekamp · 2010-02-17T14:38:08.734Z · LW(p) · GW(p)

Arguably, as seminal as the sequences are treated, why are the "newbies" the only ones who should be (re)reading them?

comment by jtolds · 2010-02-17T07:43:50.281Z · LW(p) · GW(p)

As a newcomer, I would find this tremendously useful. I clicked through the wiki links on noteworthy articles, but often find there are a lot of assumptions or previously discussed things that go mentioned but unexplained. Perhaps this would help.

comment by Zack_M_Davis · 2010-02-17T02:27:23.342Z · LW(p) · GW(p)

I'm taking a software-enforced three-month hiatus from Less Wrong effective immediately. I can be reached at zackmdavis ATT yahoo fullstahp kahm. I thought it might be polite to post this note in Open Thread, but maybe it's just obnoxious and self-important; please downvote if the latter is the case thx

Replies from: jimrandomh, Zack_M_Davis, wedrifid, CronoDAS
comment by jimrandomh · 2010-02-17T02:37:47.198Z · LW(p) · GW(p)

Given how much time I've spent reading this site lately, doing something like that is probably a good idea. Therefore, I am now incorporating Less Wrong into the day-week-month rule, which is a personal policy that I use for intoxicants, videogames, and other potentially addictive activities - I designate one day of each week, one week of each month, and one month of each year in which to abstain entirely. Thus, from now on, I will not read or post on Less Wrong at all on Wednesdays, during the second week of any month, or during any September. (These values were chosen by polyhedral die rolls.)
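
For concreteness, a small sketch of the rule as stated, assuming "second week" means days 8 through 14 (that reading is my assumption):

```python
import datetime

def is_abstention_day(date: datetime.date) -> bool:
    """True on days the day-week-month rule says to stay off the site."""
    is_wednesday = date.weekday() == 2      # Monday == 0, so 2 is Wednesday
    is_second_week = 8 <= date.day <= 14    # one reading of "second week"
    is_september = date.month == 9
    return is_wednesday or is_second_week or is_september

print(is_abstention_day(datetime.date(2010, 2, 17)))  # a Wednesday -> True
```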

Replies from: whpearson, byrnema
comment by whpearson · 2010-02-18T23:52:19.991Z · LW(p) · GW(p)

I'm not going to be posting/reading so much for a while. I need to change my headspace. I'll probably try your method when I want to get back in.

comment by byrnema · 2010-02-17T03:02:27.664Z · LW(p) · GW(p)

Awesome. Less Wrong does seem to be an addictive activity. Wanting to keep up with recent comments is one factor in this, and I think I lose more time than I've estimated doing so.

Disciplined abstention is actually a really good solution. I will implement something analogous. For the next 40 days, I will comment only on even days of the month. (I cannot commit to abstaining entirely because I don't have the will-power to enforce gray areas ... for example, can I refresh the page if it's already open? Can I work on my post drafts? Can I read another chapter of The Golden Braid? Etc.)

Later edit: ooh! Parent upvoted for very useful link to LeechBlock.

Replies from: Jack, orthonormal, Document
comment by Jack · 2010-02-17T03:11:15.750Z · LW(p) · GW(p)

I feel like the 20-something whose friends are all getting married and quitting drinking. This is lame. The party is just starting, guys!

Replies from: byrnema
comment by byrnema · 2010-02-17T03:19:36.834Z · LW(p) · GW(p)

Yeah... and I'm going into withdrawal already. What if somebody comments about one of my favorite topics -- tomorrow?!?

It's like deciding to diet. As soon as I decide to go on a diet I start feeling hungry. It doesn't make any difference how recently I've eaten. Heck, if I'm currently eating when I make this decision, I'll eat extra ... Totally counter-productive for me. Nevertheless.

comment by orthonormal · 2010-02-17T05:39:34.461Z · LW(p) · GW(p)

Weird— without having read this, I just mentioned LeechBlock too and pointed out that I've been blocking myself from LW during weekdays (until 5). I guess all the cool kids are doing it too...

Replies from: Jack, gwern
comment by Jack · 2010-02-17T05:41:00.806Z · LW(p) · GW(p)

Rehab is for quitters.

comment by gwern · 2010-02-18T03:41:56.886Z · LW(p) · GW(p)

Why does everyone like LeechBlock? pageaddict works pretty well and has a far less convoluted interface.

EDIT: and now pageaddict seems to be completely unmaintained and even the domain is expired. Oh well.

comment by Document · 2010-02-17T16:49:45.802Z · LW(p) · GW(p)

It's possible that I shouldn't try to other-optimize here, but in the case of recent comments, I wonder if it'd be practical to make a folder on your computer where you save a copy of the latest-comments page when you see something interesting, telling yourself you'll look when you have more time. Or first retrieve all recent comments (with wget or cURL, or just right-clicking and saving), then turn on Leechblock to look at them, so you at least have an inconvenience barrier between writing a comment and posting it.
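
(A minimal sketch of that fetch-and-save step in Python, assuming a hypothetical recent-comments URL; substitute whatever address the site actually uses.)

    import urllib.request
    from datetime import datetime
    from pathlib import Path

    # Hypothetical address for the recent-comments page; not checked against the real site.
    RECENT_COMMENTS_URL = "http://lesswrong.com/comments/"

    def snapshot_recent_comments(folder: str = "lw_snapshots") -> Path:
        """Save a timestamped copy of the recent-comments page for later offline reading."""
        Path(folder).mkdir(exist_ok=True)
        target = Path(folder) / f"comments-{datetime.now():%Y%m%d-%H%M%S}.html"
        with urllib.request.urlopen(RECENT_COMMENTS_URL) as response:
            target.write_bytes(response.read())
        return target

    print("Saved to", snapshot_recent_comments())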

On another site, I found that first writing comments without posting them and then saving threads without reading them helped me feel less anxious about missing things, although I've been backsliding recently.

Share Your Anti-Akrasia Tricks might be useful to save and read offline, or print out if you want to go extreme.

[Comment edited once.]

comment by Zack_M_Davis · 2010-05-18T02:17:25.570Z · LW(p) · GW(p)

This is to confess that I cheated several times by reading the Google cache.

Replies from: Zack_M_Davis, Cyan
comment by Zack_M_Davis · 2010-05-25T07:14:06.008Z · LW(p) · GW(p)

Turning the siteblocker back on (including the Google cache, thank you). Two months, possibly more. Love &c.

comment by Cyan · 2010-05-18T18:58:59.371Z · LW(p) · GW(p)

Tsk, tsk. You can block the Google cache too.

comment by wedrifid · 2010-02-17T05:06:33.820Z · LW(p) · GW(p)

Great plugin. In case you have a linux dev (virtual) machine I also recommend:

sudo iptables -A OUTPUT -d lesswrong.com -j DROP

It does wonders for productivity!

comment by CronoDAS · 2010-02-17T04:05:41.856Z · LW(p) · GW(p)

I'm disappointed, but if you think you have better things to do, I won't object.

comment by AndyWood · 2010-02-27T05:19:28.655Z · LW(p) · GW(p)

Here's a question that I sure hope someone here knows the answer to:

What do you call it when someone, in an argument, tries to cast two different things as having equal standing, even though they are hardly even comparable? Very common example: in an atheism debate, the believer says "atheism takes just as much faith as religion does!"

It seems like there must be a word for this, but I can't think what it is. ??

Replies from: Document, PhilGoetz, BenAlbahari, Eliezer_Yudkowsky
comment by Document · 2010-02-27T06:25:05.820Z · LW(p) · GW(p)

False equivalence?

Replies from: AndyWood
comment by AndyWood · 2010-02-27T07:24:57.546Z · LW(p) · GW(p)

Aha! I think this one is closest to what I have in mind. Thanks.

It's interesting to me that "false equivalence" doesn't seem to have nearly as much discussion around it (at least, based on a cursory google survey) as most of the other fallacies. I seem to see this used for rhetorical mischief all the time!

comment by BenAlbahari · 2010-02-27T07:13:47.496Z · LW(p) · GW(p)

This is a great example of a "pitch". I've added it just now to the database of pitches:
http://www.takeonit.com/pitch/the_equivalence_pitch.aspx

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-27T06:20:29.729Z · LW(p) · GW(p)

Closest I know is "tu quoque".

Replies from: AndyWood
comment by AndyWood · 2010-02-27T07:55:41.199Z · LW(p) · GW(p)

That is pretty close. If I understand them right, I think the difference is:

Tu Quoque: X is also guilty of Y, (therefore Z).

False Equivalence: (X is also guilty of Y), therefore Z.

where the parentheses indicate the major location of error.

comment by ata · 2010-02-20T10:35:03.806Z · LW(p) · GW(p)

Could anyone recommend an introductory or intermediate text on probability and statistics that takes a Bayesian approach from the ground up? All of the big ones I've looked at seem to take an orthodox frequentist approach, aside from being intolerably boring.

Replies from: Cyan, Kevin, Eliezer_Yudkowsky
comment by Cyan · 2010-02-20T21:06:23.704Z · LW(p) · GW(p)

(All of the below is IIRC.)

For a really basic introduction, there's Elementary Bayesian Statistics. It's not worth the listed price (it has little value as a reference text), but if you can find it in a university library, it may be what you need. It describes only the de Finetti coherence justification; on the practical side, the problems all have algebraic solutions (it's all conjugate priors, for those familiar with that jargon) so there's nothing on numerical or Monte Carlo computations.

Data Analysis: A Bayesian Approach is a slender and straightforward introduction to the Jaynesian approach. It describes only the Cox-Jaynes justification; on the practical side, it goes as far as computation of the log-posterior-density through a multivariate second-order Taylor approximation. It does not discuss Monte Carlo methods.

Bayesian Data Analysis, 2nd ed. is my go-to reference text. It starts at intermediate and works its way up to early post-graduate. It describes justifications only briefly, in the first chapter; its focus is much more on "how" than "why" (at least, for philosophical "why", not methodological or statistical "why"). It covers practical numerical and Monte Carlo computations up to at least journeyman level.

comment by Kevin · 2010-02-20T16:19:02.897Z · LW(p) · GW(p)

I'm not intending to put this out as a satisfactory answer, but I found it with a quick search and would like to see what others think of it.

Introduction to Bayesian Statistics by William M. Bolstad

http://books.google.com/books?id=qod3Tm7d7rQC&dq=bayesian+statistics&source=gbs_navlinks_s

Good reviews on Amazon, and available from $46 + shipping... http://www.amazon.com/Introduction-Bayesian-Statistics-William-Bolstad/dp/0471270202

Replies from: Cyan
comment by Cyan · 2010-02-20T19:36:28.579Z · LW(p) · GW(p)

It's hard to say from the limited preview, which only goes up to chapter 3 -- the Bayesian stuff doesn't start until chapter 4. The first three chapters cover basic statistics material -- it looks okay to my cursory overview, but will be of limited interest to people looking for specifically Bayesian material. As to the rest of the book, the section headings look about right.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-20T14:37:04.345Z · LW(p) · GW(p)

I second the question. "Elements of Statistical Learning" is Bayes-aware though not Bayesian, and quite good, but that's statistical learning which isn't the same thing at all.

comment by Morendil · 2010-02-17T14:08:00.439Z · LW(p) · GW(p)

Discussions of correctly calibrated cognition, e.g. tracking the predictions of pundits, successes of science, graphing one's own accuracy with tools like PredictionBook, and so on, tend to focus on positive prediction: being right about something we did predict.

Should we also count as a calibration issue the failure to predict something that, in retrospect, should have been not only predictable but predicted? (The proverbial example is "painting yourself into a corner".)

Replies from: RobinZ, bgrah449
comment by RobinZ · 2010-02-17T17:10:45.558Z · LW(p) · GW(p)

That issue could be captured if there were some obvious way to identify issues where predictions should be made in advance. If they fail to make predictions, they are being careless; if their predictions are incorrect, they are incorrect.

comment by bgrah449 · 2010-02-17T17:25:55.465Z · LW(p) · GW(p)

I think so, but it's important to identify the time at which it became predictable - for example, you could only predict that you were painting yourself into a corner just prior to when you made the last brushstroke that made the strip(s) of paint covering the exit path too wide to jump over. This seems hard.

Also, you'd have to know what your utility function was going to be in the future to know that some event was even worth predicting. This seems hard, too.

comment by Paul Crowley (ciphergoth) · 2010-02-19T09:12:08.028Z · LW(p) · GW(p)

More cryonics: my friend David Gerard has kicked off an expansion of the RationalWiki article on cryonics (which is strongly anti). The quality of argument is breathtakingly bad. It's not strong Bayesian evidence because it's pretty clear at this stage that if there were good arguments I hadn't found, an expert would be needed to give them, but it's not no evidence either.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-02-19T11:51:28.859Z · LW(p) · GW(p)

I have not seen RationalWiki before. Why is it called Rational Wiki?

Replies from: CronoDAS
comment by CronoDAS · 2010-02-19T20:18:33.726Z · LW(p) · GW(p)

From http://rationalwiki.com/wiki/RationalWiki :

RationalWiki is a community working together to explore and provide information about a range of topics centered around science, skepticism, and critical thinking. While RationalWiki uses software originally developed for Wikipedia it is important to realize that it is not trying to be an encyclopedia. Wikipedia has dominated the public understanding of the wiki concept for years, but wikis were originally developed as a much broader tool for any kind of collaborative content creation. In fact, RationalWiki is closer in design to original wikis than Wikipedia.

Our specific mission statement is to:

  1. Analyze and refute pseudoscience and the anti-science movement, ideas and people.
  2. Analyze and refute the full range of crank ideas - why do people believe stupid things?
  3. Develop essays, articles and discussions on authoritarianism, religious fundamentalism, and other social and political constructs

So it's inspired by Traditional Rationality.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-02-19T22:33:29.785Z · LW(p) · GW(p)

A fine mission statement, but my impression from the pages I've looked at is of a bunch of nerds getting together to mock the woo. "Rationality" is their flag, not their method: "the scientific point of view means that our articles take the side of the scientific consensus on an issue."

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-19T23:24:38.175Z · LW(p) · GW(p)

Voted up, but calling them "nerds" in reply is equally ad-hominem, ya know. Let's just say that they don't seem to have the very high skill level required to distinguish good unusual beliefs from bad unusual beliefs, yet. (Nor even the realization that this is a hard problem, yet.)

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.

Also, one person on RationalWiki saying silly things is not a good reason to launch an aggressive counterattack on a whole wiki containing many potential recruits.

Replies from: komponisto, Will_Newsome, Richard_Kennaway, Roko
comment by komponisto · 2010-02-20T03:25:52.543Z · LW(p) · GW(p)

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.

I guess I should try harder to remember this, in the context of my rather discouraging recent foray into the Richard Dawkins Forums -- which, I admit, had me thinking twice about whether affiliation with "rational" causes was at all a useful indicator of actual receptivity to argument, and wondering whether there was much more point in visiting a place like that than a generic Internet forum. (My actual interlocutors were in fact probably hopeless, but maybe I could have done a favor to a few lurkers by not giving up so quickly.)

But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.

Replies from: CronoDAS, Eliezer_Yudkowsky, Morendil, Jack
comment by CronoDAS · 2010-02-20T11:54:15.914Z · LW(p) · GW(p)

But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.

This is actually one of Niven's Laws: "There is no cause so right that one cannot find a fool following it."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-20T14:41:36.449Z · LW(p) · GW(p)

You understand this is more or less exactly the problem that Less Wrong was designed to solve.

Replies from: TimFreeman, h-H
comment by TimFreeman · 2011-05-18T20:28:38.853Z · LW(p) · GW(p)

You understand this is more or less exactly the problem that Less Wrong was designed to solve.

Is there any information on how the design was driven by the problem?

For example, I see a karma system, a hierarchical discussion that lets me fold and unfold articles, and lots of articles by Eliezer. I've seen similar technical features elsewhere, such as Digg and SlashDot, so I'm confused about whether the claim is that this specific technology is solving the problem of having a ton of clueless followers, or the large number of articles from Eliezer, or something else.

comment by h-H · 2010-02-20T16:25:45.656Z · LW(p) · GW(p)

not to detract, but does Richard Dawkins really possess such 'high quality'? IMO his arguments are good as a gateway for aspiring rationalists, not that far above the sanity waterline.

that, or it might be a problem of forums in general...

Replies from: komponisto
comment by komponisto · 2010-02-20T16:41:55.491Z · LW(p) · GW(p)

Dawkins is a very high-quality thinker, as his scientific writings reveal. The fact that he has also published "elementary" rationalist material in no way takes away from this.

He's way, way far above the level represented by the participants in his namesake forum.

(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)

Replies from: Jack, CarlShulman, ciphergoth, h-H, MichaelBishop
comment by Jack · 2010-02-20T21:13:47.042Z · LW(p) · GW(p)

(I'd give even odds that EY could persuade him to sign up for cryonics in an hour or less.)

Bloggingheads are exactly 60 minutes.

Replies from: Document, komponisto
comment by Document · 2010-10-28T19:16:14.816Z · LW(p) · GW(p)

To be fair, I'd expect it to be a lot harder with an audience.

comment by komponisto · 2010-02-20T21:32:15.745Z · LW(p) · GW(p)

Exactly what I was thinking.

Replies from: wedrifid
comment by wedrifid · 2010-02-21T13:12:12.299Z · LW(p) · GW(p)

I was thinking: "Bloggingheads implies the participants believe they are within a few degrees of status of each other". It'd definitely be one worth a viewing or two!

comment by CarlShulman · 2010-08-20T05:39:00.618Z · LW(p) · GW(p)

Here's Dawkins on some non socially-reinforced views: AI, psychometrics, and quantum mechanics (in the last 2 minutes, saying MWI is slightly less weird than Copenhagen, but that the proliferation of branches is uneconomical).

comment by Paul Crowley (ciphergoth) · 2010-02-21T11:02:33.574Z · LW(p) · GW(p)

Obviously the most you could persuade him of would be that he should look into it.

comment by h-H · 2010-02-20T17:31:14.921Z · LW(p) · GW(p)

You're absolutely right, I didn't consider his scientific writings. Though my argument still weakly stands, since I wasn't talking about that: he's a good scientist, but a rationalist of, say, Eliezer's level? I somehow doubt that.

(my bias is that he hasn't gone beyond the 'debunking the gods' phase in his not specifically scientific writings, and here I'll admit I haven't read much of him.)

Replies from: komponisto
comment by komponisto · 2010-02-20T18:08:10.316Z · LW(p) · GW(p)

Read his scientific books, and listen to his lectures and conversations. Pay attention to the style of argumentation he uses, as contrasted with other writers on similar topics (e.g. Gould). What you will find is that beautiful combination of clarity, honesty, and -- importantly -- abstraction that is the hallmark of an advanced rationalist.

The "good scientist, but not good rationalist" type utterly fails to match him. Dawkins is not someone who compartmentalizes, or makes excuses for avoiding arguments. He also seems to have a very good intuitive understanding of probability theory -- even to the point of "getting" the issue of many-worlds.

I would indeed put him near Eliezer in terms of rationality skill-level.

Replies from: timtyler, RobinZ, h-H
comment by timtyler · 2010-02-20T21:12:01.376Z · LW(p) · GW(p)

Most of Dawkins' output predates the extreme rationality movement. Few scientists actually study rational thought - it seems as though the machine intelligence geeks and some of their psychologist friends have gone some way beyond what is needed for everyday science.

Replies from: komponisto
comment by komponisto · 2010-02-20T21:30:01.473Z · LW(p) · GW(p)

Again, it's not just the fact that he does science; it's the way he does science.

Having skill as a rationalist is distinct from specializing in rationality as one's area of research. Dawkins' writings aren't on rational thought (for the most part); they're examples of rational thought.

comment by RobinZ · 2010-02-20T22:07:58.012Z · LW(p) · GW(p)

I was actually considering writing a post about the term "Middle World" - an excellent tool for capturing a large space of consistent weaknesses in human intuitions.

comment by h-H · 2010-02-21T19:29:26.838Z · LW(p) · GW(p)

I was expecting him to write like the posts here, i.e. about rationality etc., but you make a good point. Coincidentally, I was browsing the archives a while ago and found this; now it is three years old, but from the comments of Barkley_Rosser (mainly) it appears Gould didn't exactly "[undo] the last thirty years of progress in his depiction of the field he was criticizing".

not that I want to revive that old thread.

comment by Mike Bishop (MichaelBishop) · 2011-11-02T16:29:36.706Z · LW(p) · GW(p)

Convincing Dawkins would be a great strategy for promoting cryonics... who else should the community focus on convincing?

Replies from: MarkusRamikin, wedrifid
comment by MarkusRamikin · 2011-12-21T10:03:01.579Z · LW(p) · GW(p)

Excusemewhat, the community, as in LW? We're a cryonics advocacy group now?

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2012-03-28T14:50:35.262Z · LW(p) · GW(p)

I used cryonics as example because komponisto used it before me. I intended my question to be more general. "If you're trying to market LW, or ideas commonly discussed here, then which celebrities and opinion-leaders should you focus on?"

comment by wedrifid · 2011-11-02T17:16:01.556Z · LW(p) · GW(p)

Convincing Dawkins would be a great strategy for promoting cryonics... who else should the community focus on convincing?

Friends and family. They are the ones I care about most. (And, most likely, those that others in the community care about most too. At least the friends part. Family is less certain but more significant.)

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2011-11-02T20:28:06.930Z · LW(p) · GW(p)

Sure, convince those you love. I was asking who you should try to convince if your goal is convincing someone who will themselves convince a lot of other people.

comment by Morendil · 2010-02-20T09:31:15.999Z · LW(p) · GW(p)

it really is frustrating how little of the quality of a person [...] actually manages to rub off

Wait, you have a model which says it should?

You don't learn from a person merely by associating with them. And:

onto the legions of Internet followers of said person or cause.

I would bet a fair bit that this is the source of your frustration, right there: scale. You can learn from a person by directly interacting with them, and sometimes by interacting with people who learned from them. Beyond that, it seems to me that you get "dilution effects", kicking in as soon as you grow faster than some critical pace at which newcomers have enough time to acculturate and turn into teachers.

Communities of inquiry tend to be victims of their own success. The smarter communities recognize this, anticipate the consequences, and adjust their design around them.

Replies from: wedrifid
comment by wedrifid · 2010-02-21T13:15:11.129Z · LW(p) · GW(p)

Wait, you have a model which says it should?

Bad ones certainly seem to. Perhaps the high quality person at least leaves less room for the negative influences?

comment by Jack · 2010-02-20T05:22:38.813Z · LW(p) · GW(p)

Interesting. How many places have you brought this issue up? Is there any forum which has responded rationally? What seem to be the controlling biases?

Replies from: komponisto
comment by komponisto · 2010-02-20T07:13:44.968Z · LW(p) · GW(p)

How many places have you brought this issue up?

LW is thus far the only forum on which I have personally initiated discussion of this topic; but obviously I've followed discussions about it in numerous other places.

Is there any forum which has responded rationally?

You're on it.

I mean, there are plenty of instances elsewhere of people getting the correct answer. But basically what you get is either selection bias (the forum itself takes a position, and people are there because they already agree) or the type of noisy mess we see at RDF. To date, LW is the only place I know of where an a priori neutral community has considered the question and then decisively inclined in the right direction.

What seem to be the controlling biases?

In the case of RDF, I suspect compartmentalization is at work: this topic isn't mentally filed under "rationality", and there's no obvious cached answer or team to cheer for. So people there revert to the same ordinary, not-especially-careful default modes of thinking used by the rest of humanity, which is why the discussion there looks just like the discussions everywhere else.

It's noteworthy that my references and analogies to concepts and arguments discussed by Dawkins himself had no effect; apparently, we were just in a sort of separate magisterium. Particularly telling was this quote:

You are claiming that the issue of gods existence has been the subject of a major international trial, where a jury found that god existed? When did that happen?

Now on the face of it this seems utterly dishonest: I hardly think this fellow would actually be tempted to convert to theism upon hearing the news that eight Perugians had been convinced of God's existence. But I suspect he's actually just trying to express the separation that apparently exists in his mind between the kind of reasoning that applies to questions about God and the kind of reasoning that applies to questions about a criminal case.

Replies from: wedrifid
comment by wedrifid · 2010-02-21T13:23:29.713Z · LW(p) · GW(p)

I know of where an a priori neutral community

Technical nitpick on the use of 'a priori' in this context. (Subject to possible contradiction if I have missed a nuance in the meaning in the statistics context.)

I would have just gone with 'previously'.

comment by Will_Newsome · 2011-05-18T11:15:51.224Z · LW(p) · GW(p)

Yes, they're pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it's worthwhile trying to avoid offending them when they make probability-theoretic errors.

(As an extreme example, a few weeks idly checking out RationalWiki led me to the quote at the top of this page and only a few months after that I was at SIAI.)

Replies from: David_Gerard
comment by David_Gerard · 2012-05-12T21:11:16.019Z · LW(p) · GW(p)

I only just noticed this. Good Lord. (I put that quote there, so you're my fault.)

comment by Richard_Kennaway · 2010-02-19T23:31:53.952Z · LW(p) · GW(p)

Point taken.

comment by Roko · 2010-07-12T22:35:35.583Z · LW(p) · GW(p)

Nor even the realization that this is a hard problem, yet.

The realization that, as a human, you have something called an irrationality problem is both important and rare.

comment by Sniffnoy · 2010-02-28T23:56:40.731Z · LW(p) · GW(p)

Just saw this over at Not Exactly Rocket Science: http://scienceblogs.com/notrocketscience/2010/02/quicker_feedback_for_better_performance.php

Quick summary: They asked a bunch of people to give a 4-minute presentation, had people judging, and told the presenter how long it would be before they heard their assessment. Anticipating quicker feedback resulted in better actual performance but lower predicted performance; anticipating slower feedback had the reverse effect.

comment by Cyan · 2010-02-24T15:40:04.328Z · LW(p) · GW(p)

The prosecutor's fallacy is aptly named:

Barlow and her fellow counsel, Kwixuan Maloof, were barred from mentioning that Puckett had been identified through a cold hit and from introducing the statistic on the one-in-three likelihood of a coincidental database match in his case—a figure the judge dismissed as "essentially irrelevant."
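
(A minimal illustration of why the database-wide match probability matters; the per-profile match probability and database size below are assumed values chosen to land near the quoted one-in-three figure, not numbers taken from the case record.)

    # Assumed, illustrative inputs.
    random_match_prob = 1 / 1_100_000   # chance that one random innocent profile matches
    database_size = 338_000             # number of profiles searched for the cold hit

    # Probability that searching the whole database produces at least one coincidental hit.
    p_coincidental_hit = 1 - (1 - random_match_prob) ** database_size
    print(f"P(at least one coincidental hit) ~ {p_coincidental_hit:.2f}")  # ~0.26, on the order of 1 in 3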

comment by Paul Crowley (ciphergoth) · 2010-02-23T19:59:55.318Z · LW(p) · GW(p)

One thing that I got from the Sequences is that you can't just not assign a probability to an event - I think of this as a core insight of Bayesian rationality. I seem to remember an article in the Sequences about this where Eliezer describes a conversation in which he is challenged to assign a probability to the number of leaves on a particular tree, or the surname of the person walking past the window. But I can't find this article now - can anyone point me to it? Thanks!

Replies from: Vladimir_Nesov, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-23T20:59:10.446Z · LW(p) · GW(p)

This may be related to the recent post Study: Making decisions makes you tired. It seems plausible that we don't assign probabilities to events until we have to, in order to make a decision, and that's why making decisions is tiring.

comment by DanArmak · 2010-02-23T18:20:48.073Z · LW(p) · GW(p)

How do people decide what comments to upvote? I see two kinds of possible strategies:

  1. Use my approval level of the comment to decide how to vote (up, down or neutral). Ignore other people's votes on this comment.
  2. Use my approval level to decide what total voting score to give the comment. Vote up or down as needed to move towards that target.

My own initial approach belonged to the first class. However, looking at votes on my own comments, I get the impression most people use the second approach. I haven't checked this with enough data to be really certain, so would value more opinions & data.

Here's what I found: I summed the votes from the last 4 pages of my own comments (skipping the most recent page because recent comments may yet be voted on):

  • Score <0: 2
  • Score =0: 36
  • Score =1: 39
  • Score =2: 14
  • Score =3: 5
  • Score >3: 6

35% of my comments are voted 0, and 52% are voted 1 or 2. There are significantly more than 1 or 2 people participating in the same threads as me. It is not likely that for each of these comments, just one or two people happened to like it, and the rest didn't. It is even less likely that for each of these comments, up- and down-votes balanced so as to leave +1 or +2.
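
(A quick check of those percentages against the tallied counts, reading the second zero bucket as score = 1.)

    counts = {"<0": 2, "0": 36, "1": 39, "2": 14, "3": 5, ">3": 6}
    total = sum(counts.values())  # 102 comments

    print(f"voted 0: {counts['0'] / total:.0%}")                       # ~35%
    print(f"voted 1 or 2: {(counts['1'] + counts['2']) / total:.0%}")  # ~52%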

So it's probable that many people use the second approach: they see a comment, think "that's nice, deserves +1 but no more", and then if it's already at +1, they don't vote.

How do you vote? And what do you see as the goal of the voting process?

Replies from: GuySrinivasan, Morendil, RobinZ
comment by SarahNibs (GuySrinivasan) · 2010-02-23T18:54:01.723Z · LW(p) · GW(p)

I self-identify as using the first one, with a caveat.

The second is obviously awful for communicating any sort of information given that only the sum of votes is displayed rather than total up and total down. The second is order dependent and often means you'll want to change your vote later based purely on what others think of the post.

My "strategy" is to vote up and down based on whether I'd have wanted others with more insight than me to vote to bring my attention to or away from a comment, unless I feel I have special insight, in which case it's based on whether I want to bring others' attention to or away from a comment.

This is because I see the goal of the voting process as aggregating readers' independent opinions on how much a comment is worth readers' attention, and using that aggregate to bring readers' attention to or away from a comment. As a side effect, the author of a comment can use the aggregated score to determine whether her readers felt the comment was worth their collective attention.

Furthermore since each reader's input comes in distinct chunks of exactly -1, 0, or +1, it's wildly unlikely that voting very often results in the best aggregation: instead I leave a comment alone unless I feel it was(is) significantly worth or not worth my(your) attention.

The caveat: there is a selection effect in which comments I vote on, since my attention will be drawn away from comments with very negative karma. There is also undoubtedly an unconscious bias away from voting up a comment with very high karma: since I perceive the goal to be to shift attention, once a comment has very high karma I know it's going to attract attention so my upvote is in fact worth fewer attention-shift units. But I haven't yet consciously noticed that kick in until about +10 or so.

comment by Morendil · 2010-02-23T18:35:55.209Z · LW(p) · GW(p)

At home I use the Anti-Kibitzer, which enforces 1. I've been on vacation for a couple days and noticed the temptation to use 2. Gave in on one occasion, I'm afraid. On balance I'll stick to 1, as 2 seems too vulnerable to information cascades.

comment by RobinZ · 2010-02-23T22:18:28.124Z · LW(p) · GW(p)

It is worth noting that people have explicitly claimed to be following strategy 2 here. Edit: This is far from the only example; just the one I found by searching for "upvoted to".

It would be also interesting to check the difference between comments on quotes threads and comments on substantive posts - at least one person has proposed that quotations are disproportionately subject to strategy 1 voting over strategy 2.

comment by Psy-Kosh · 2010-02-22T05:52:26.033Z · LW(p) · GW(p)

Am I/are we assholes? I posted a link to the frequentist stats case study to reddit:

The only commenter seems to have concluded from us that Bayesians are assholes.

Is it just that commenter, or are we really that obnoxious? (Now that I think about it, I think I've actually seen someone else note something similar about Bayesians.) So... have we gone into a happy death spiral of "we get bonus points for acting extra obnoxious about those that are not us"?

comment by Leafy · 2010-02-19T14:20:09.673Z · LW(p) · GW(p)

It is common practice, when debating an issue with someone, to cite examples.

Has anyone else ever noticed how your entire argument can be undermined by stating a single example or fact which does not stand up to scrutiny, even though your argument may be valid and all other examples robust?

Is this a common phenomenon? Does it have a name? What is the thought process that underlies it and what can you do to rescue your position once this has occurred?

Replies from: wnoise
comment by wnoise · 2010-02-19T23:10:16.398Z · LW(p) · GW(p)

It takes effort to evaluate examples. Revealing that one example is bad raises the possibility that others are bad as well, because the methods for choosing examples are correlated with the examples chosen. The two obvious reasons for a bad example are:

  1. You missed that this was a bad example, so why should I trust your interpretation or understanding of your other examples?
  2. You know this is a bad example, and included it anyway, so why should I trust any of your other examples?
comment by Corey_Newsome · 2010-02-18T13:03:44.069Z · LW(p) · GW(p)

The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".

http://lesswrong.com/lw/19d/the_anthropic_trilemma/

A question of rationality. Eliezer, I have talked to a few Less Wrongers about what horn they take on the anthropic trilemma; sometimes letting them know beforehand what my position was, sometimes giving no hint as to my predispositions. To a greater or lesser degree, the following people have all endorsed taking the third horn of the trilemma (and also see the part that goes from 'to deny selfishness as coherently possible' to the end of the bullet point as a non sequitur): Steve Rayhawk, Zack M. Davis, Marcello Herreshoff, and Justin Shovelain. I believe I've forgotten a few more, but I know that none endorsed any horn but the third. I don't want to argue for taking the third horn, but I do want to ask: to what extent does knowing that these people take the third horn cause you to update your expected probability of taking the third horn if you come to understand the matter more thoroughly? A few concepts that come to my mind are 'group think', majoritarianism, and conservation of expected evidence. I'm not sure there is a 'politically correct' answer to this question. I also suspect (perhaps wrongly) that you also favor the third horn but would rather withhold judgment until you understand the issue better; in which case, your expected probability would probably not change much.

[Added metaness: I would like to make it very especially clear that I am asking a question, not putting forth an argument.]

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-18T19:10:48.912Z · LW(p) · GW(p)

From EY's post:

The fourth horn of the anthropic trilemma is to deny that increasing the number of physical copies increases the weight of an experience, which leads into Boltzmann brain problems, and may not help much (because alternatively designed brains may be able to diverge and then converge as different experiences have their details forgotten).

Suppose I build a (conscious) brain in hardware using today's technology. It uses a very low current density, to avoid electromigration.

Suppose I build two of them, and we agree that both of them experience consciousness.

Then I learn a technique for treating the wafers to minimize electromigration. I create a new copy of the brain, the same as the first copy, only using twice the current, and hence being implemented by a flow of twice as many electrons.

As far as the circuits and the electrons travelling them are concerned, running it is very much like running the original 2 brains physically right next to each other in space.

So, does the new high-current brain have twice as much conscious experience?

Replies from: UnholySmoke, Nick_Tarleton
comment by UnholySmoke · 2010-02-19T11:19:51.891Z · LW(p) · GW(p)

I'm not as versed in this trilemma as I'd like to be, so I'm not sure whether that final question is rhetorical or not, though I suspect that it is. So mostly for my own benefit:

While there's no denying that subjective experience is 'a thing', I see no reason to make that abstraction obey rules like multiplication. The aeroplane exists at a number of levels of abstraction above the atoms it's composed of, but we still find it a useful abstraction. The 'subjective experiencer' is many, many levels higher again, which is why we find it so difficult to talk about. Twice as many atoms doesn't make twice as much aeroplane; the very concept is nonsense. Why would we think any differently about the conscious self?

My response to the 'trilemma' is as it was when I first read the post - any sensible answer isn't going to look like any of those three, it's going to require rewinding back past the 'subjective experience' concept and doing some serious reduction work. 'Is there twice as much experience?' and 'are you the same person?' just smell like such wrong questions to me. Anyone else?

Nick, will have a look at that Bostrom piece, cheers.

comment by Nick_Tarleton · 2010-02-18T20:14:49.025Z · LW(p) · GW(p)

Nick Bostrom's "Quantity of Experience" discusses similar issues. His model would, I think, answer "no", since the structure of counterfactual dependences is unchanged.

comment by utilitymonster · 2010-02-17T03:13:35.490Z · LW(p) · GW(p)

I'm new to Less Wrong. I have some questions I was hoping you might help me with. You could direct me to posts on these topics if you have them. (1) To which specific organizations should Bayesian utilitarians give their money? (2) How should Bayesian utilitarians invest their money while they're making up their minds about where to give their money? (2a) If your answer is "in an index fund", which and why?

Replies from: LucasSloan
comment by LucasSloan · 2010-02-17T03:18:56.128Z · LW(p) · GW(p)

This should help.

In general, the best charities are SIAI, SENS and FHI.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T04:15:39.586Z · LW(p) · GW(p)

I disagree. I recommend the top rated charities on givewell.net, specifically the Stop TB Partnership. (They also have a nice blog.)

Replies from: CronoDAS, Jack, LucasSloan
comment by CronoDAS · 2010-02-17T08:12:53.292Z · LW(p) · GW(p)

On the other hand, I am willing to donate to SIAI out of my "donate to webcomics" mental account instead of my "save lives" mental account. ;)

Regardless of whether or not he ever solves the Friendly AI problem, Eliezer's writing, on this blog and elsewhere, has given me enough of what might pejoratively be called "entertainment value" for me to want to pay him to keep doing it.

comment by Jack · 2010-02-17T04:48:14.818Z · LW(p) · GW(p)

Why don't SIAI and FHI get evaluated by GiveWell? Maybe there would be some confusion regarding their less direct ways of helping people but I'd at least like some information about their effectiveness at what they claim to do.

Or maybe that information is out there already. Anyone?

Replies from: wedrifid, CronoDAS, blogospheroid, LucasSloan
comment by wedrifid · 2010-02-17T05:14:31.520Z · LW(p) · GW(p)

Why don't SIAI and FHI get evaluated by GiveWell?

They are weird.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T05:51:37.687Z · LW(p) · GW(p)

Basically, yes, with "They" referring to SIAI and FHI.

Replies from: Document
comment by Document · 2010-02-17T05:59:46.910Z · LW(p) · GW(p)

That's how I interpreted it, but I see the ambiguity now that you mention it. It doesn't help that the two statements are basically equivalent if you use "weird" as a relative term.

comment by CronoDAS · 2010-02-17T05:09:06.109Z · LW(p) · GW(p)

GiveWell is a pretty small organization, and they haven't yet devoted any resources to evaluating research-based charities - they're looking for charities that can prove that they're providing benefits today, and lots of research ends up leading nowhere. How many increments of $1,000 - the amount it takes to cure an otherwise fatal case of tuberculosis - have been spent on medical research that amounted to nothing?

For the record, I agree that SIAI is doing important work that must be done someday, but I don't expect to see AGI in my lifetime; there's no particular urgency involved. If Eliezer and co. found themselves transported back in time to 1890, would they still say that solving the Friendly AI problem is the most important thing they could be doing, given that the first microprocessor was produced in 1971? I'd tell them that the first thing they need to do is to go "discover" that DDT (first synthesized in 1874) kills insects and show the world how it can be used to kill disease vectors such as mosquitoes; DDT is probably the single man-made chemical that, to date, has saved more human lives than any other.

Replies from: LucasSloan, wedrifid, Jack
comment by LucasSloan · 2010-02-17T06:31:29.998Z · LW(p) · GW(p)

If Eliezer and co. found themselves transported back in time to 1890, would they still say that solving the Friendly AI problem is the most important thing they could be doing, given that the first microprocessor was produced in 1971?

In 1890, the most important thing to do is still FAI research. The best case scenario is that we already had invented the math for FAI before the first vacuum tube, let alone microchip. Existential risk reduction is the single highest utility thing around. Sure, trying to get nukes never made or made by someone capable of creating an effective singleton is important, but FAI is way more so.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T06:43:12.326Z · LW(p) · GW(p)

Well, what if he were sent back to Ancient Greece (and magically acquired the ability to speak Greek)? Even if he got all the math perfectly right, who would care? Or even understand it?

Replies from: wedrifid, LucasSloan
comment by wedrifid · 2010-02-17T21:00:37.941Z · LW(p) · GW(p)

Well, what if he were sent back to Ancient Greece (and magically acquired the ability to speak Greek)? Even if he got all the math perfectly right, who would care? Or even understand it?

He would then spend the rest of his life ensuring that it is preserved. If necessary he would go around hunting for obscure caves with a chisel in hand. Depending, of course, on how much he cares about influencing the future of the universe as opposed to other less abstract goals.

comment by LucasSloan · 2010-02-17T07:07:26.549Z · LW(p) · GW(p)

Yes, who today cares what any Greek mathematician had to say...

Now you're just moving the goal posts.

Replies from: CronoDAS, Document
comment by CronoDAS · 2010-02-17T07:14:50.702Z · LW(p) · GW(p)

Now you're just moving the goal posts.

Sorry. :(

Anyway, I have much more confidence that Eliezer and future generations of Friendly AI researchers will succeed in making sure that nobody turns on an AGI that isn't Friendly than in Eliezer and his disciples solving both the AGI and Friendly AI problems in his own lifetime. Friendly AI is a problem that needs to be solved in the future, but, barring something like a Peak Oil-induced collapse of civilization to pre-1920 levels, the future will be a lot better at solving these problems than the present is - and we can leave it to them to worry about. After all, the present is certainly better positioned to solve problems like epidemic disease and global warming than the past was.

Replies from: LucasSloan
comment by LucasSloan · 2010-02-17T07:26:24.173Z · LW(p) · GW(p)

Would you consider SENS a viable alternative to SIAI? Or do you think ending aging is also impossible/something to be put off?

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T07:32:54.840Z · LW(p) · GW(p)

Actually, I would; I've donated a small amount of money already. Investing in anti-aging research won't pay off for at least thirty years - that's the turnaround time of medical research from breakthrough to useable treatment - but it's a lot less of a pie-in-the-sky concern. (Although as long as people are dying for want of $1,000 TB medication, it still might be more cost effective to save those lives than to extend the lives of relatively rich people in developed countries.)

Replies from: LucasSloan
comment by LucasSloan · 2010-02-17T07:51:34.411Z · LW(p) · GW(p)

My guess is that SENS is more cost effective, but I haven't done the calculations. Does anyone have access to those sorts of figures?

Ball parking:

$1000 buys you 45 extra person-years.

$10 billion buys you 30 extra person-years for a billion people.

Of course that depends on how much you agree with the figures given by de Grey.
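
(A rough cost-per-person-year comparison using only the ballpark figures above; none of the numbers are independently sourced.)

    # Ballpark inputs quoted in this thread; treat as illustrative only.
    tb_cost, tb_years = 1_000, 45   # ~$1,000 per cured TB case, ~45 extra person-years
    sens_cost = 10e9                # ~$10 billion for SENS-style research
    sens_years = 30 * 1e9           # ~30 extra years each for ~1 billion people

    print(f"TB treatment: ${tb_cost / tb_years:,.2f} per person-year")                  # ~$22
    print(f"SENS (de Grey's figures): ${sens_cost / sens_years:,.2f} per person-year")  # ~$0.33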

comment by Document · 2010-02-17T07:14:33.305Z · LW(p) · GW(p)

I don't think he is if the point is to establish that "lack of FAI could at some point lead to Earth's destruction" isn't an unconditionally applicable argument.

comment by wedrifid · 2010-02-17T05:16:38.987Z · LW(p) · GW(p)

I don't expect to see AGI in my lifetime

That's an easy prediction for you to make. ;)

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T05:46:11.820Z · LW(p) · GW(p)

Well, I don't expect that my brother will see AGI in his lifetime, either.

Replies from: Roko
comment by Roko · 2010-02-17T13:29:21.114Z · LW(p) · GW(p)

I am curious: are you very old or suffering from a fatal disease? I am 25 and healthy, so "lifetime" probably means something different to me...

Replies from: CronoDAS, wedrifid
comment by CronoDAS · 2010-02-17T14:42:55.074Z · LW(p) · GW(p)

It's an ironic remark about my depression. I'm 27 and physically healthy.

comment by wedrifid · 2010-02-17T21:05:21.173Z · LW(p) · GW(p)

I am curious: are you very old or suffering from a fatal disease? I am 25 and healthy, so "lifetime" probably means something different to me...

That would make my winking outright cruel! No, I'm referring to the general problem of betting against the success of the person with whom you are making the bet. In CronoDAS's case the threshold for a self-sabotage outcome is somewhat reduced by his expressed suicidal inclinations.

comment by Jack · 2010-02-17T05:30:33.714Z · LW(p) · GW(p)

See my reply to Lucas.

Edit: Also, I'm sympathetic to your skepticism re: SIAI as the best charity.

comment by blogospheroid · 2010-02-17T08:11:20.235Z · LW(p) · GW(p)

I think it is precisely to that effect that this paper is aimed. Let's see when the paper comes out; let's see how persuasive it is.

Edited for formatting

Replies from: blogospheroid, CronoDAS
comment by blogospheroid · 2010-02-17T08:24:39.476Z · LW(p) · GW(p)

I think I need to clarify here.

I am personally convinced (am a one-time donor myself), but the optimal charity argument in favour of Friendly AI research and development (which will be fully developed in this paper) is something I can use with my friends. They are pretty much the practical type and will definitely respond to wanting more bang for their buck and where their marginal rupee of charity should go.

There are inferential gaps. And when I, a known sci-fi fan, present the argument, I get all sorts of looks. If I have a peer-reviewed paper to show them, that would work nicely in my favour.

comment by CronoDAS · 2010-02-17T08:16:00.679Z · LW(p) · GW(p)

Sounds like a good idea to me.

comment by LucasSloan · 2010-02-17T05:04:05.836Z · LW(p) · GW(p)

I believe that the answer is a combination of the fact that SIAI and FHI aren't on their list (of charities to evaluate), as well as the fact that their methodology is heavily dependent on quality of information, and actual evidence that the charity is working.

Replies from: Jack
comment by Jack · 2010-02-17T05:30:12.165Z · LW(p) · GW(p)

Sure. But if GiveWell isn't going to do it then someone should. Are their budgets public? So many people here are skeptical of regular charities; what evidence is there that these charities are different?

Replies from: Kevin
comment by Kevin · 2010-02-17T07:01:21.156Z · LW(p) · GW(p)

I don't think they publish a full budget but there is a breakdown of what the current fundraising drive is for. http://singinst.org/grants/challenge#grantproposals

comment by LucasSloan · 2010-02-17T05:05:18.596Z · LW(p) · GW(p)

Could you explain why? Do you believe that SIAI/FHI aren't accomplishing what they set out to do? Do you discount future lives? Something else?

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T05:50:35.371Z · LW(p) · GW(p)

I don't expect Eliezer and co. to succeed, if you define "success" as actually building a transhuman Friendly AI before Eliezer is either cryopreserved or suffers information-theoretic death. My "wild guess" at the earliest plausible date for AGI of any kind is 2100.

Replies from: Eliezer_Yudkowsky, Kevin
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-17T06:09:17.930Z · LW(p) · GW(p)

What do you think you know and how do you think you know it?

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T06:34:58.430Z · LW(p) · GW(p)

I'm guessing based on several factors:

1) The past failure of AGI research to deliver progress

2) The apparent difficulty of the problem. We don't know how to do it, and we don't know what we would need to know before we can know how to do it. Or, at least, I don't.

3) My impressions of the speed of scientific progress in general. For example, the time between "new discovery" and "marketable product" in medicine and biotechnology is about 30 years.

4) My impressions of the speed of progress in mathematics, in which important unsolved problems often stay unsolved for centuries. It took over 300 years to prove Fermat's Last Theorem, and the formal mathematics of computation is less than a century old; Alan Turing described the Turing Machine in 1937.

5) The difficulty of computer programming in general. People are bad at programming.

Replies from: LucasSloan
comment by LucasSloan · 2010-02-17T07:19:32.038Z · LW(p) · GW(p)

Do you also evaluate the chances of WBE as being vanishingly slim over the next century?

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T07:24:27.587Z · LW(p) · GW(p)

Actually, no, but I also expect that it'll be around for quite a while before running a whole brain emulation becomes cheaper than hiring a human engineer. I don't expect a particularly fast em transition; it took many years for portable telephones to go from something that cost thousands of dollars and went in your car to the cell phones that everyone uses today.

The Singularity was created by Nikola Tesla and Thomas Edison, and ended some time around 1920. Get used to it. ;)

Replies from: LucasSloan
comment by LucasSloan · 2010-02-17T07:27:58.263Z · LW(p) · GW(p)

So you expect that WBE will become possible before cheap supercomputers?

Replies from: timtyler, CronoDAS
comment by timtyler · 2010-02-17T14:12:05.117Z · LW(p) · GW(p)

You might like to quantify "cheap" and "super".

Replies from: LucasSloan
comment by LucasSloan · 2010-02-17T20:18:36.174Z · LW(p) · GW(p)

See reply to CronoDAS below.

comment by CronoDAS · 2010-02-17T07:41:48.978Z · LW(p) · GW(p)

Even at Moore's Law speeds, simulating 10^11 neurons, 10^11 glial cells, 10^15 synaptic connections, and concentrations of various neurotransmitters and other chemicals in real time or faster-than-real time is going to be expensive for a long time before it becomes cheap.

Replies from: LucasSloan, orthonormal
comment by LucasSloan · 2010-02-17T07:55:54.806Z · LW(p) · GW(p)

Not necessarily. If a human brain with no software tricks requires 10^20 CPS (a very high estimate), then (according to Kurzweil; take with a grain of salt) the computational capacity will be there by ~2040. However, it's certainly possible that we don't get the software until 2050, at which point anyone with a couple hundred dollars can run one.
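
(A back-of-the-envelope extrapolation consistent with that ~2040 figure; the 2010 starting capacity and the two-year doubling time are assumptions for illustration, not Kurzweil's published numbers.)

    import math

    current_cps = 1e16    # assumed supercomputer-class capacity circa 2010, calculations per second
    target_cps = 1e20     # the "no software tricks" brain estimate quoted above
    doubling_time = 2.0   # assumed years per doubling at a fixed price

    doublings = math.log2(target_cps / current_cps)  # ~13.3 doublings needed
    years = doublings * doubling_time                # ~27 years
    print(f"{doublings:.1f} doublings, reaching the target around {2010 + years:.0f}")  # ~2037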

comment by orthonormal · 2010-02-17T07:54:45.774Z · LW(p) · GW(p)

Depends on which details actually need to be simulated. I suspect that most intracellular activity can be neglected or replaced with some simple rules on when a cell divides, adds a synapse, etc.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T08:03:49.189Z · LW(p) · GW(p)

For the record, this is something I don't have much confidence in - WBE requires a sufficiently detailed brain scan, computers of sufficient processing power to run the simulation, and enough knowledge of brains on the microscopic level to program a simulation and understand the output of the simulation. I do not know which will turn out to be the bottleneck in the process.

Replies from: Roko
comment by Roko · 2010-02-17T13:46:09.703Z · LW(p) · GW(p)

It looks like "enough knowledge of brains on the microscopic level to program a simulation" might be the limiting factor.

In which case, we have a hardware overhang and an explosive em transition.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T15:04:19.436Z · LW(p) · GW(p)

Most technological developments seem to go from "We don't know how to do this at all" to "We know how to do this, but actually doing it costs a fortune" to "We know how to do this at an affordable price." WBE could be an exception, though, and completely skip over the second stage.

comment by Kevin · 2010-02-17T07:32:28.898Z · LW(p) · GW(p)

I disagree, but we probably have different estimates as to just how effective DNA modification and/or intelligence enhancing drugs are going to be in the future. I don't think Eliezer is going to make all that big of a dent in the FAI problem until he becomes more intelligent, and it's hard to estimate how much faster that will make him. I think I can say that intelligence enhancement could turn an impossible problem into a possible problem. It also means that there will be many more people out there capable of making meaningful contributions to the FAI problem.

comment by [deleted] · 2010-02-16T14:52:42.975Z · LW(p) · GW(p)

Someone once told me that the reason they don't read Less Wrong is that the articles and the comments don't match. The articles have one tone, and then the comments on that article have a completely different tone; it's like the article comes from one site and the comments come from another.

I find that to be a really weird reason not to read Less Wrong, and I have no idea what that person is talking about. Do you?

Replies from: komponisto, byrnema
comment by komponisto · 2010-02-16T16:25:27.776Z · LW(p) · GW(p)

Someone once told me that the reason they don't read Less Wrong is that the articles and the comments don't match...I have no idea what that person is talking about. Do you?

Yes.

Back in Overcoming Bias days, I constantly had the impression that the posts were of much higher quality than the comments. The way it typically worked, or so it seemed to me, was that Hanson or Yudkowsky (or occasionally another author) would write a beautifully clear post making a really nice point, and then the comments would be full of snarky, clacky, confused objections that a minute of thought really ought to have dispelled. There were obviously some wonderful exceptions to this, of course, but, by and large, that's how I remember feeling.

Curiously, though, I don't have this feeling with Less Wrong to anything like the same extent. I don't know whether this is because of the karma system, or just the fact that this feels more like a community environment (as opposed to the "Robin and Eliezer Show", as someone once dubbed OB), or what, but I think it has to be counted as a success story.

Replies from: None, Eliezer_Yudkowsky
comment by [deleted] · 2010-02-16T16:43:53.723Z · LW(p) · GW(p)

Oh! Maybe they were looking at the posts that were transplanted from Overcoming Bias and thinking those were representative of Less Wrong as a whole.

Replies from: Kutta
comment by Kutta · 2010-02-17T11:51:17.870Z · LW(p) · GW(p)

I think that the situation about the imported OB posts & comments should be somehow made clear to new readers. Several things there (no embedded replies, little karma spent, plenty of inactive users, different discussion tone) could be a source of confusion.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-16T17:49:51.297Z · LW(p) · GW(p)

I hate to sound complimentary, but... I get the impression that the comments on LW are substantially higher-quality than the comments on OB.

And that the comments on LW come from a smaller group of core readers as well, which is to some extent unfortunate.

I wonder if it's the karma system or the registration requirement that does it?

Replies from: Kevin, ciphergoth, Benquo, JamesAndrix, thomblake, Emile, bgrah449
comment by Kevin · 2010-02-16T22:10:41.477Z · LW(p) · GW(p)

Less Wrong, especially commenting on it, is ridiculously intimidating to outsiders. I've thought about this problem, and we need some sort of training grounds. Less Less Wrong or something. It's in my queue of top level posts to write.

So the answer to your question is the karma system.

Replies from: ciphergoth, SilasBarta, None, Morendil, Jack
comment by Paul Crowley (ciphergoth) · 2010-02-16T23:25:33.577Z · LW(p) · GW(p)

What's so intimidating? You don't need much to post here, just a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics - oh, and of course to read a sequence of >600 3000+ word articles. So long as you can do that and you're happy with your every word being subject to the anonymous judgment of a fiercely intelligent community, you're good.

Replies from: mattnewport, MrHen, Eliezer_Yudkowsky
comment by mattnewport · 2010-02-17T02:19:06.494Z · LW(p) · GW(p)

Sounds like a pretty good filter for generating intelligent discussion to me. Why would we want to lower the bar?

comment by MrHen · 2010-02-19T18:39:13.272Z · LW(p) · GW(p)

Being able to comment smartly and in a style that gets you upvoted doesn't really need any grounding in any of those subjects. I just crossed 1500 karma and only have basic grounding in Computer Science, Mathematics, and Philosophy.

When I started out, I hadn't read more than EY's Bayes' for Dummies, The Simple Truth, and one post on Newcomb's.

In my opinion, the following things will help you more than a degree in any of the subjects you mentioned:

  • Crave the truth
  • Accept Reality as the source of truth
  • Learn in small steps
  • Ask questions when you don't understand something
  • Test yourself for growth
  • Be willing to enter at low status
  • Be willing to lose karma by asking stupid questions
  • Ignore the idiots
Replies from: RobinZ, ciphergoth
comment by RobinZ · 2010-02-19T18:57:49.331Z · LW(p) · GW(p)

Another factor:

  • Being willing to shut up about a subject when people vote it down.

So far as I am aware, the chief reason non-spammers have been banned is for obnoxious evangelism for some unpopular idea. Many people have unpopular ideas but continue to be valued members (e.g. Mitchell_Porter).

comment by Paul Crowley (ciphergoth) · 2010-02-20T10:50:03.309Z · LW(p) · GW(p)

Useful data point, thanks. Have you made any more progress with the sequences since you last updated your wiki user info page?

Replies from: MrHen
comment by MrHen · 2010-02-20T19:16:48.096Z · LW(p) · GW(p)

Yeah. I just updated it again. I didn't realize anyone was actually looking at it... :P

Recently I burned out on the sequences and am taking a break to gorge myself on other subjects. I tend to absorb large scale topics in rotation. It helps me stay focused over longer distances and has an added benefit of making me reread stuff that didn't stick the first time through. The weekly study group will also help data retention.

Other data points that may be relevant: I have participated in a lot of online discussions; I have moved cross country into a drastically different cultural zone; I married someone from a non-US culture; I have visited at least one distinct Central American culture. In addition, I grew up in a religious culture but personally lean more toward a scientific/realistic culture. All of these things help build awareness that what I say isn't what other people hear and vice versa.

As evidence of this, my conversations here have much better transmission success than my posts. Once I get to talk to someone and hear them respond I can start the translation predictors. I am still learning how to do this before I hear the responses.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-17T01:49:29.842Z · LW(p) · GW(p)

Not "and". "Or". If you don't already have it, then reading the sequences will give you a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics.

Replies from: Jack, ciphergoth
comment by Jack · 2010-02-17T02:15:51.995Z · LW(p) · GW(p)

I actually think this is a little absurd. There is nowhere near enough on these topics in the sequences to give someone the background they need to participate comfortably here. Nearly everyone here has a lot of additional background knowledge. The sequences might be a decent enough guide for an autodidact to go off and learn more about a topic, but there is nowhere near enough for most people.

Replies from: Kevin, Richard_Kennaway, SilasBarta
comment by Kevin · 2010-02-17T03:33:50.303Z · LW(p) · GW(p)

The sequences are really kind of confusing... I tried linking people to Eliezer's quantum physics sequence on Reddit and it got modded highly, but one guy posted saying that he got scared off as soon as he saw complex numbers. I think it'll help once a professional edits the sequences into Eliezer's rationality book.

http://www.reddit.com/r/philosophy/comments/b1v1f/thought_waveparticle_duality_is_the_result_of_a/c0kjuno

comment by Richard_Kennaway · 2010-02-17T22:55:05.059Z · LW(p) · GW(p)

Which people do we want? What do those people need?

However strongly you catapult a plane from the flight deck, at some point it has to fly by itself.

Replies from: Jack, dclayh
comment by Jack · 2010-02-18T02:47:37.639Z · LW(p) · GW(p)

Without new blood communities stagnate. The risk of group think is higher and assumptions are more likely to go unchecked. An extremely homogeneous group such as this one likely has major blind spots which we can help remedy by adding members with different kinds of experiences. I would be shocked if a bunch of white male, likely autism spectrum, CS and hard science types didn't have blind spots. This can be corrected by informing our discussions with a more diverse set of experiences. Also, more diverse backgrounds means more domains we can comfortably apply rationality to.

I also think the world would be a better place if this rationality thing caught on. It is probably impossible (not to mention undesirable) to lower the entry barrier so that everyone can get in. But I think we could lower the barrier so that it is reasonable to think that 80-85+ percentile IQ, youngish, non-religious types could make sense of things. Rationality could benefit them and they being more rational could benefit the world.

Now we don't want to be swamped with newbies and just end up rehashing everything over and over. But we're hardly in any danger of that happening. I could be wrong but I suspect almost no top level posts have been created by anyone who didn't come over from OB. It isn't like we're turning people away at the door right now. And you can set it up so that the newbie questions are isolated from everything else. The trick is finding a way to do it that isn't totally patronizing (or targeting children so that we can get away with being patronizing).

What they need is trickier. Let's start here: a clear, concise one-stop-shop FAQ would be good. A place where asking the basic questions is acceptable and background isn't assumed. Explanations that don't rely on concepts from mathematics, CS or hard sciences.

Replies from: Morendil
comment by Morendil · 2010-02-18T07:56:25.677Z · LW(p) · GW(p)

I could be wrong but I suspect almost no top level posts have been created by anyone who didn't come over from OB.

Data point to the contrary here. On top of being a data point, I'm also a person, which is convenient: you can ask a person questions. ;)

Replies from: Jack
comment by Jack · 2010-02-18T08:49:01.593Z · LW(p) · GW(p)

Actually, I'm something of a partial data point against that as well. I did come here with the split from OB, but I was just a casual reader, had only been there a few weeks and never commented.

I did go back and look at some of your early comments and my initial reaction is that you seem unusually well read and instinctively rational even for this crowd. In fact, I wonder if you should be asking me questions about how to make Less Wrong more amenable to people with limited background knowledge.

you can ask a person questions. ;)

You may regret this. I'm a very curious person.

To what extent was it obvious upon coming here that Less Wrong had a kind of affinity with computer science and programming? What effect did this affinity have on your interest? How much of your interest in participating was driven by Eliezer's writings in particular compared to the community in general? Should the barrier to participation be lowered? If so, how would you do it? What would have gotten you up to speed with everyone else faster? What would have made it easier? To what extent did you/do you now associate yourself with transhumanism? Did that factor into your interest in Less Wrong?

I could probably keep going. One more for now: What questions should I be asking you that I haven't?

Replies from: Morendil
comment by Morendil · 2010-02-18T10:05:19.174Z · LW(p) · GW(p)

What questions should I be asking you that I haven't?

That's one question I really like. Originally learnt it from Jerry Weinberg as one of the "context free questions", a very useful item in my toolkit.

What I'd ask people is "What motivated you to come here in the first place?" Where "here" takes a range of values - what motivated them to become readers; then to create a profile here and become commenters; then to start contributing.

To what extent was it obvious upon coming here that Less Wrong had a kind of affinity with computer science and programming? What effect did this affinity have on your interest?

Not very. What "hooked" me first, as a huge Dennett fan, was the Zombies post, which I came across while browsing random links from my Twitter feed. That led me to the QM sequence, which made sense for me of things that hadn't made sense before, which motivated me to drill for more. That led me to the Bayes article. Parallel exploration turned up the FAI argument, which (I don't dare use the word "click" yet) made intuitive sense even though it hadn't crossed my mind before.

It was only then that I made the connection with CS/programming - I had this fantasy of getting my colleagues to invite Eliezer to keynote at our conference. Interestingly enough, the response I got from musing about that on Twitter was (direct quote) "the singularity will definitely have personality disorders".

How much of your interest in participating was driven by Eliezer's writings in particular compared to the community in general?

Well, I'd taken note that there was such a thing as the LW community blog, and I kept an eye on it, but in parallel I started reading all of the back-content of LW, all the posts by Eliezer ported over from OB. I wanted to catch up before increasing my participation. So initially I pretty much ignored the community, which anyway I couldn't quite figure out.

What would have gotten you up to speed with everyone else faster? What would have made it easier?

I wish someone had told me, quite plainly, what I was expected to do! Something along the lines of, "this is a rationality dojo, posts are where intermediate students show off their moves, comments are where beginners learn from intermediates and demonstrate their abilities, you will be given cues by people ahead of you when you are ready to move along the path reader->commenter->poster".

Looking back, I can see some mistakes made in the way this community is set up that tend to put it at odds with its stated mission; and I'm not at all sure I'd have done any better, given what people knew pre-launch. And figuring out how to participate was also part of the learning process, consistent with (for instance) Lave and Wenger's notions on "legitimate peripheral participation".

I'm guessing that this process could be improved by thinking more explicitly about this kind of theoretical framework when considering what this community is aiming to achieve and how to achieve it. I've done a lot of this kind of thinking in my "secret identity", with some successes.

To what extent did you/do you now associate yourself with transhumanism? Did that factor into your interest in Less Wrong?

No, I've been vaguely aware of transhumanist ideas and values for some time, but never explicitly identified as singularitarian, transhumanist, extropian or anything of the sort. I have most of the background reading that seems to be common in these circles (from a very uninformed outsider's perspective) but I guess I never was in the right place at the right time to become an insider. It feels as if I might have been.

LessWrong is missing "profile pages" of some kind, where the sort of biographical information that we're discussing could be collected for later reference. Posting a comment to the "Welcome" thread doesn't really cut it.

Replies from: wnoise, wedrifid, thomblake
comment by wnoise · 2010-02-19T06:02:26.066Z · LW(p) · GW(p)

LessWrong is missing "profile pages" of some kind

There is a wiki, though it sadly uses a different authentication system. Nonetheless, many users do have profile pages there.

comment by wedrifid · 2010-02-18T12:04:40.986Z · LW(p) · GW(p)

"the singularity will definitely have personality disorders".

I'm going to remember that one.

comment by thomblake · 2010-03-18T13:51:09.660Z · LW(p) · GW(p)

I wish someone had told me, quite plainly, what I was expected to do! Something along the lines of, "this is a rationality dojo..."

Indeed - the reason we don't say that explicitly is that it's unclear how much this is the case. However, if it were possible for LW to become a "rationality dojo", I think most of us would leap on the opportunity.

Replies from: Morendil
comment by Morendil · 2010-03-18T14:13:02.292Z · LW(p) · GW(p)

There is some previous discussion which suggests that not everyone here would be happy to see LW as a "rationality dojo".

The term "dojo" has favorable connotations for me, partly because one of my secret identity's modest claims to fame is as a co-originator of the "Coding Dojo", an attempt to bring a measure of sanity back to the IT industry's horrible pedagogy and hiring practices.

However these connotations might be biasing my thinking about whether using the "dojo" metaphor as a guide to direct the evolution of LW would be for good or ill on balance.

How about starting a discussion at the top of the current Open Thread to ask people what they now think of applying the Dojo metaphor to LW?

Replies from: thomblake
comment by thomblake · 2010-03-18T14:41:56.975Z · LW(p) · GW(p)

I think I'm the only one on that thread who explicitly advised against starting a rationality dojo, and the other concerns were mostly whether it was possible.

Replies from: Morendil
comment by Morendil · 2010-03-18T14:59:45.516Z · LW(p) · GW(p)

Indeed. Eliezer's post itself, however, seemed mostly to caution against it, and perhaps what he took away from the subsequent discussion, after weighing the various contributions, was that it had too little to recommend it. At any rate, as far as I'm aware, the question wasn't raised again?

Of course one issue is that it was never clarified what "it" might be, i.e. what would result from treating LW more explicitly as a "rationality dojo" (that would be different from what it is at present).

comment by dclayh · 2010-02-17T23:05:27.487Z · LW(p) · GW(p)

However strongly you catapult a plane from the flight deck, at some point it has to fly by itself.

I believe purely ballistic transportation systems have been proposed at various times, actually.

comment by SilasBarta · 2010-02-17T23:03:18.660Z · LW(p) · GW(p)

I've long said that Truly Part of You is the article with, by far, the highest ratio of "Less Wrong philosophy content" to length. (Unfortunately, it doesn't seem to be listed in any sequence despite being a follow-up to two others.)

Other than knowing specific jargon, that would get people reasonably up to speed and should probably be what we're pointing newcomers to.

Replies from: Jack
comment by Jack · 2010-02-18T01:39:33.778Z · LW(p) · GW(p)

Maybe. Except someone who has never looked at program code is going to be really confused.

comment by Paul Crowley (ciphergoth) · 2010-02-17T08:49:12.576Z · LW(p) · GW(p)

Well there were several subjects in that list I knew little about until I started reading the Sequences, so yes, on that point I confess I'm being hyperbolic for humorous effect...

comment by SilasBarta · 2010-02-16T22:21:27.598Z · LW(p) · GW(p)

Reminds me of a Jerry Seinfeld routine, where he talks about people who want and need to exercise at the gym, but are intimidated by the fit people who are already there, so they need a "gym before the gym" or a "pre-gym" or something like that.

(This is not too far from the reason for the success of the franchise Curves.)

comment by [deleted] · 2010-02-17T02:30:08.707Z · LW(p) · GW(p)

I can actually attest to this feeling. My first reaction to reading Less Wrong was honestly "these people are way above my level of intelligence such that there's no possible way I could catch up", and I was actually averse to the idea of this site. I'm past that mentality, but a Less Less Wrong actually sounds like a good idea, even if it might end up being more like how high school math and science classes should be than how Less Wrong is currently. It's not so much lowering the bar as nudging people upwards slowly.

Being directed towards the sequences obviously would help. I've been bouncing through them, but after Eliezer's comment I'm going to try starting from the beginning. But I can see where people [such as myself] may need the extra help to make it all fall together.

comment by Morendil · 2010-02-17T09:28:16.751Z · LW(p) · GW(p)

I think better orientation of newcomers would be enough.

Another major problem (I believe) is that LW presents as a blog, which is to say, a source of "news", which is at odds with a mission of building a knowledge base on rationality.

comment by Jack · 2010-02-16T23:37:17.730Z · LW(p) · GW(p)

If the top level post is going to be a while, I'd like to hear more about what you have in mind.

comment by Paul Crowley (ciphergoth) · 2010-02-16T19:47:06.634Z · LW(p) · GW(p)

Threading helps a lot too.

Replies from: Dre
comment by Dre · 2010-02-16T20:28:18.166Z · LW(p) · GW(p)

OB has threading (although it doesn't seem as good/as used as on LW).

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-16T20:30:38.462Z · LW(p) · GW(p)

That may be a recent innovation; it wasn't threaded in the days when Eliezer's articles appeared there.

Replies from: Cyan
comment by Cyan · 2010-02-16T20:33:47.353Z · LW(p) · GW(p)

I think it happened immediately after LW went live. Robin revised a bunch of things at that time.

Replies from: gwern
comment by gwern · 2010-02-18T03:23:46.748Z · LW(p) · GW(p)

Yup; I asked Robin why he was willing to make all those changes and upgrades during the LW/OB split, when his rationale for splitting was that he didn't trust the LW/Reddit codebase. I don't remember what his answer was.

comment by Benquo · 2010-02-17T02:45:28.772Z · LW(p) · GW(p)

I comment less now because the combined effect of your & RH's posts made me more eager to listen and less eager to opine. The more I understand the less I think I have much to add.

comment by JamesAndrix · 2010-02-16T20:25:42.801Z · LW(p) · GW(p)

Maybe the community has just gotten smarter.

I know I now disagree with some of the statements/challenges I've posted on OB.

It shouldn't be too shocking that high quality posts were actually educational.

comment by thomblake · 2010-02-16T19:11:24.016Z · LW(p) · GW(p)

must... resist... upvoting

comment by Emile · 2010-02-17T17:40:48.024Z · LW(p) · GW(p)

I had that impression mostly when I went back and read some old OB comments - for example, a lot of comments on Archimedes's Chronophone seem to just miss the point of the article.

I would expect the same post would get higher-quality proposals today - but then, maybe it's because the set of LW comments I read is biased towards those with high karma. Or maybe it's because the threading system makes it easier to read a set of related comments without getting confused.

comment by bgrah449 · 2010-02-16T19:44:45.466Z · LW(p) · GW(p)

OB's comments have more general appeal than LW's, so I'd suspect it attracts a much wider audience.

comment by byrnema · 2010-02-16T15:35:42.885Z · LW(p) · GW(p)

That reason sounds incomplete, but I think I know what the person is talking about.

The best example I can think of is Normal Cryonics. The post was partly a personal celebration of a positive experience and partly about the lousiness of parents that don't sign their kids up for cryonics. Yet, the comments mostly ignored this and it became a discussion about the facts of the post -- can you really get cryonics for $300 a year? Why should a person sign up or not sign up?

The post itself was voted up to 33, but only 3 to 5 comments out of 868 disparaged parents in agreement. There's definitely a disconnect.

Also, on mediocre posts and/or posts that people haven't related to, people will talk about the post for a few comments and then it will be an open discussion as though the post just provided a keyword. But I don't see much problem with this. The post provided a topic, that's all.

Replies from: inklesspen, ciphergoth
comment by inklesspen · 2010-02-16T17:54:43.634Z · LW(p) · GW(p)

I don't see a terrible problem with comments being "a discussion about the facts of the post"; that's the point of comments, isn't it?

Perhaps we just need an Open Threads category. We can have an open thread on cryonics, quantum mechanics and many worlds, Bayesian probability, etc.

comment by Paul Crowley (ciphergoth) · 2010-02-16T15:47:46.434Z · LW(p) · GW(p)

Every article on cryonics becomes a general cryonics discussion forum. My recent sequence of posts on the subject on my blog carry explicit injunctions to discuss what the post actually says, but it seems to make no difference; people share whatever anti-cryonics argument they can think of without doing any reading or thinking no matter how unrelated to the subject of the post.

Replies from: whpearson
comment by whpearson · 2010-02-16T15:50:58.649Z · LW(p) · GW(p)

Same with this article becoming a talking shop about AGW.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-16T15:52:31.670Z · LW(p) · GW(p)

I should have followed my initial instinct when I saw that, of immediately posting a new top level article with body text that read exactly "Talk about AGW here".

comment by Jack · 2010-02-27T20:41:13.519Z · LW(p) · GW(p)

This is pretty self-important of me, but I'd just like to warn people here that someone is posting at OB under "Jack" who isn't me, so if anyone is forming a negative opinion of me on the basis of those comments - don't! Future OB comments will be under the name Jack (LW). The recent string of comments about METI is mine, though.

This is what I get for choosing such a common name for my handle.

Apologies to those who have read this whole comment and don't care.

comment by GreenRoot · 2010-02-19T00:17:57.158Z · LW(p) · GW(p)

What do you have to protect?

Eliezer has stated that rationality should not be an end in itself, and that to get good at it, one should be motivated by something more important. For those of you who agree with Eliezer on this, I would like to know: What is your reason? What do you have to protect?

This is a rather personal question, I know, but I'm very curious. What problem are you trying to solve or goal are you trying to reach that makes reading this blog and participating in its discourse worthwhile to you?

Replies from: RobinZ, knb, h-H
comment by RobinZ · 2010-02-19T01:23:21.397Z · LW(p) · GW(p)

I'm not quite sure I can answer the question. I certainly have no major, world(view)-shaking Cause which is driving me to improve my strength.

For what it's worth, I've had this general idea that being wrong is a bad idea for as long as I can remember. Suggestions like "you should hold these beliefs, they will make your life happier" always sounded just insane - as crazy as "you should drink this liquor, it will make your commute less boring". From that standpoint, it feels like what I have to protect is just the things I care about in the world - my own life, the lives of the people around me, the lives of humans in general.

That's it.

Replies from: UnholySmoke
comment by UnholySmoke · 2010-02-19T11:06:13.985Z · LW(p) · GW(p)

This is a pretty good summary of my standpoint. While I agree with the overarching view that rationality isn't a value in its own right, it seems like a pretty good thing to practise for general use.

comment by knb · 2010-02-20T08:00:41.293Z · LW(p) · GW(p)

I'm trying to apply LW-style hyper-rationality to excelling in what I have left of grad school and to shepherding my business to success.

My mission (I have already chosen to accept it) is to make a pile of money and spend it fighting existential risk as effectively as possible. (I'm not yet certain if SIAI is the best target). The other great task I have is to persuade the people I care about to sign up for cryonics.

Strangely enough, the second task actually seems even less plausible to me, and I have no idea how to even get started since most of those people are theists.

Replies from: ata
comment by ata · 2010-02-20T09:39:12.666Z · LW(p) · GW(p)

Strangely enough, the second task actually seems even less plausible to me, and I have no idea how to even get started since most of those people are theists.

Alcor addresses some of the 'spiritual' objections in their FAQ. ("Whenever the soul departs, it must be at a point beyond which resuscitation is impossible, either now or in the future. If resuscitation is still possible (even with technology not immediately available) then the correct theological status is coma, not death, and the soul remains.") Some of that might be helpful.

However, that depends on you being comfortable persuading people to believe what are probably lies (which might happen to follow from other lies they already believe) in the service of leading them to a probably correct conclusion, which I would normally not endorse under any circumstances, but I would personally make an exception in the interest of saving a life, assuming they can't be talked out of theism.

It also depends on their being willing to listen to any such reasoning if they know you're not a theist. (In discussions with theists, I find they often refuse to acknowledge any reasoning on my part that demonstrates that their beliefs should compel them to accept certain conclusions, on the basis that if I do not hold those beliefs, I am not qualified to reason about them, even hypothetically. Not sure if others have had that experience.)

comment by h-H · 2010-02-19T01:10:48.150Z · LW(p) · GW(p)

OB then LW were the 'step beyond' to take after philosophy, not that I was seriously studying it.

To be honest, I don't think there's much going on these days new-topic-wise, so I'm here less often. But I do come back whenever I'm bored, so at first "pure desire to learn" and then "entertainment" would be my reasons.

Oh, and a major part of my goals in life is formed by religion, i.e. saving humanity from itself and whatever follows; this is more ideological than actual at this point in time. Anyway, that goal is furthered by learning more about AI/futurism. The rationality part less so, as I already had an intuitive grasp of it, you could say, and really all it takes is reading the sequences with their occasional flaws/too-strong assertions. The futurism part is more speculative - and interesting - so it's my main focus, along with the moral questions it brings, though there is no dichotomy to speak of if you consider this a personal blog rather than a book or something similar.

hope this helped :)

Replies from: GreenRoot
comment by GreenRoot · 2010-02-19T01:20:15.931Z · LW(p) · GW(p)

Yes, this is what I was curious about, thanks. I've seen others cite humanity's existential risks as their motivations too (mostly uFAI, not as much nuclear war or super-flu or meteors). I'm like you in that for me it's definitely a mix of learning and entertainment.

comment by Cyan · 2010-02-18T17:23:32.785Z · LW(p) · GW(p)

Seth Roberts makes an intriguing observation about North Korea and Penn State. Teaser:

The border between North Korea and China is easy to cross, and about half of the North Koreans who go to China later return, in spite of North Korea’s poverty.

Replies from: cousin_it, prase
comment by cousin_it · 2010-02-19T12:17:39.334Z · LW(p) · GW(p)

How does the North Korean government do such a good job under such difficult circumstances?

Holy shit, what utter raving idiocy. The author has obviously never emigrated from anywhere nor seriously talked with anyone who did. People return because they miss their families, friends, native language and habits... I know a fair number of people who returned from Western countries to Russia and that's the only reason they cite.

Replies from: prase
comment by prase · 2010-02-19T13:04:04.285Z · LW(p) · GW(p)

And living conditions in Russia aren't anywhere near to North Korean standard.

comment by prase · 2010-02-19T12:58:29.134Z · LW(p) · GW(p)

I previously had no idea that half of the North Koreans who cross the border never return. If that is so, it is an extremely strong indicator that life in the DPRK is very unpleasant for its own citizens. To imply that this piece of data is in fact evidence for the contrary is absurd.

To emigrate from the DPRK to China means that you lose your home, your family, your friends, your job. You have to start from scratch, from the lowest levels in the social hierarchy, capable of doing only the worst available jobs, without knowledge of the local language (which is not easy to learn, given that the destination country is China), probably facing xenophobia. If you are 40 or older, there is almost no chance that your situation will improve significantly.

The North Koreans who actually travel abroad are probably not the poorest. They have to afford a ticket, at least. They have something to lose. In North Korean-style tyrannies, families are often persecuted because of the emigration of their members. In spite of all that, half of the North Koreans never return (if the linked post tells the truth), and the author says about it that "the North Korean government [does] such a good job under such difficult circumstances", and then needs to explain that "success" by group identity. That's an absurdity.

Replies from: Cyan
comment by Cyan · 2010-02-19T14:22:29.225Z · LW(p) · GW(p)

So the rate of returning emigrants strikes you as incredibly high, and strikes Roberts as incredibly low (and I uncritically adopted what I read, foolishly). I think what's really needed here is more data -- a comparative analysis of rates of return that takes important covariates into account.

Replies from: prase
comment by prase · 2010-02-19T15:49:47.588Z · LW(p) · GW(p)

After thinking about it for a while, I suspect the rate of return may not be a good indicator, at least for comparative analyses. Imagine two countries, A and K. 10% of the citizens of both countries would prefer to live somewhere else.

In country A, the government doesn't care a bit about emigration (if government exists in that country at all). The country is mainly producer of agricultural goods, with minimal international trade. Nearest country with substantially better living conditions, country X, is 3000 km away.

In country K, the government is afraid of all its citizens emigrating, and tries to make it as difficult as possible, by issuing passports only to loyal people, for instance. Emigration is portrayed as treason. X is a neighbouring country.

Now, in country A (African type) there is no need for people to travel abroad, except to emigrate. Business travelers are rare, since there are almost no businesses owned by A's citizens, and travelling 3000 km for pleasure is out of reach for almost all of A's inhabitants. Therefore, meeting a citizen of A in X, we can expect that he is an emigrant with 99% probability, and the return rate would be on the order of 1%.

In country K (Korean type) the people who can travel abroad are workers of government organisations sent on business trips, people from border areas coming to X to do some private business (if there are private businesses in K), and K's elite on vacation. Now, meeting a citizen of K in X, the probability that he is an emigrant is much lower.

So we have expected high return rate for A and low for K, whereas the average desire to emigrate can be the same.
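
(As a quick illustration of why the return rate can diverge from the underlying desire to emigrate, here is a minimal sketch with made-up numbers; the fractions below are placeholders, not data about either country.)

```python
# Toy model: the observed return rate depends on who gets to travel at all,
# not only on how many citizens want to leave. Numbers are made up.

def return_rate(frac_travelers_who_are_emigrants):
    """Fraction of travelers who come back, assuming would-be emigrants never
    return and all other travelers (officials, traders, tourists) always do."""
    return 1.0 - frac_travelers_who_are_emigrants

# Country A: nearly everyone who goes abroad is a would-be emigrant.
print(round(return_rate(0.99), 2))  # 0.01 -> a ~1% return rate

# Country K: most travelers are vetted officials and border traders,
# so the return rate is high even if just as many citizens want to leave.
print(round(return_rate(0.50), 2))  # 0.5  -> a 50% return rate
```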

This may be the reason for the disagreement. Roberts has probably compared North Korea to African countries, and was surprised that not all travellers are emigrants. I have compared it to East European communist regimes and concluded that if half of the travellers never return, certainly even many of the loyal supporters of the regime betray it when they have the opportunity.

To make a sensible analysis, we should instead take into account the ratio of emigration to the overall population. Of course, such an analysis would be distorted by the differing difficulty of emigrating from different countries. The return rate seems to overcome this distortion, but it probably brings problems of its own that are at least as big.

Replies from: Cyan
comment by Cyan · 2010-02-19T15:54:58.386Z · LW(p) · GW(p)

Maybe you should post the above comment on Roberts's blog. (I've already posted mine.)

Replies from: prase
comment by prase · 2010-02-19T16:34:17.581Z · LW(p) · GW(p)

Done; my comment is awaiting moderation.

comment by Morendil · 2010-02-18T12:04:58.659Z · LW(p) · GW(p)

Heilmeier's Catechism, a set of questions credited to George H. Heilmeier that anyone proposing a research project or product development effort should be able to answer.

Replies from: Eliezer_Yudkowsky, whpearson, xamdam
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-18T16:36:02.958Z · LW(p) · GW(p)

"How much will it cost?" "How long will it take?" Who the hell is supposed to be able to answer that on a basic research problem?

Replies from: PhilGoetz, Richard_Kennaway, Morendil, cousin_it
comment by PhilGoetz · 2010-02-18T16:41:37.541Z · LW(p) · GW(p)

Anyone applying for grant money. Anyone working within either the academic research community or the industrial research community or the government research community.

Gentleman scientists working on their own time and money in their ancestral manors are still free to do basic research.

comment by Richard_Kennaway · 2010-02-18T17:28:50.059Z · LW(p) · GW(p)

Nowadays, everyone who applies for a grant.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-18T18:01:46.805Z · LW(p) · GW(p)

Nowadays, no one does basic research.

Replies from: CronoDAS, komponisto
comment by CronoDAS · 2010-02-18T23:32:40.499Z · LW(p) · GW(p)

The people who are running the LHC aren't doing basic research?

comment by komponisto · 2010-02-18T23:42:17.100Z · LW(p) · GW(p)

More precisely: no one whose status isn't ultra-high is allowed to do basic research without having to pretend they're doing something else.

comment by Morendil · 2010-02-18T16:47:24.472Z · LW(p) · GW(p)

You can take them as a calibration exercise. "I don't know" or "Between a week and five centuries" are answers, and the point of asking the question is that some due diligence is likely to yield a better (more discriminating) answer.

Someone who had to pick one of two "basic research problems" to fund, under constraints of finite resources, would need estimates. They can also provide some guidance to answer "How long do we stick with this before going to Plan B?"

comment by cousin_it · 2010-02-19T12:31:30.342Z · LW(p) · GW(p)

These questions are for public proposals, not for someone considering a project by themselves. If you're building a collider or wish to play with someone else's collider, you'd better know how much it will cost and how long you'll take.

comment by whpearson · 2010-02-18T12:35:52.225Z · LW(p) · GW(p)

Interesting, but some of the questions aren't easy to answer.

For example, if you were asking the question of someone involved in early contraception development, do you think they could have predicted what demographic/birth rate changes it would cause? Similarly, could someone inventing a better general machine learning technique (useful for anything from surveillance to robot butlers) enumerate the variety of ways it would change the world?

For AI projects, even weak ones, I would ask how they planned to avoid the puppet problem.

Replies from: Morendil
comment by Morendil · 2010-02-18T13:31:05.459Z · LW(p) · GW(p)

The point of such "catechisms" isn't so much to have all the answers, rather to ensure that you have divided your attention evenly among a reasonable set of questions at the outset, in an effort to avoid "motivated cognition" - focusing on the thinking you find easy or pleasant to do, as opposed to the thinking that's necessary.

The idea is to improve at predicting your predictable failures. If this kind of thinking turns up a thorny question you don't know how to answer, you can lay the current project aside until you have solved the thorny question, as a matter of prudent dependency management.

A related example is surgery checklists. They work (see Atul Gawande's Better). Surgeons hate them - their motivated cognition focuses on the technically juicy bits of surgery, and they feel that trivia such as checking which side limb they're operating on is beneath them.

Replies from: whpearson
comment by whpearson · 2010-02-18T15:04:23.937Z · LW(p) · GW(p)

I'm a big believer in surgery checklists. However, I've yet to be convinced that the catechisms, applied unaltered, will be beneficial to any research project.

A lot of science is about doing experiments whose outcomes we don't know, and serendipitously discovering things. Two examples that spring to mind are superconductivity and fullerene production.

If you asked each of the discoverers to justify their research by the catechisms, you probably would have got nowhere near the actual results. This potential for serendipity should be built into the catechisms in some way. That is, the answer "For Science!" has to hold some weight, even if it is less weight than is currently ascribed to it.

Replies from: Morendil
comment by Morendil · 2010-02-18T15:44:06.957Z · LW(p) · GW(p)

Yep. IOW the catechism can be used to discriminate between "fundamental" science, so-called, and applied engineering projects.

There's a (subtle, perhaps) difference between advocating catechisms or checklists normatively ("this is a useful standard to compare yourself to") and prescriptively ("do it this way or do it elsewhere"). To put yet another domain on the table, inability to draw the distinction plagues the project management professional community. "Methodologies" or "processes" are too often, and inappropriately, seen as edicts rather than sources of good ideas.

How about applying the catechism to LessWrong as a product development project? ;)

comment by xamdam · 2010-02-18T17:15:04.508Z · LW(p) · GW(p)

Sounds like good rules of thumb, though one would think DARPA should be using something a little more formal, such as Decision Analysis methodology.

http://decision.stanford.edu/library/the-principles-and-applications-of-decision-analysis-1

For one, value of acquiring information did not make the list. Maybe this was a dumbed-down version.

comment by NancyLebovitz · 2010-02-17T12:05:30.287Z · LW(p) · GW(p)

I mentioned the AI-talking-its-way-out-of-the-sandbox problem to a friend, and he said the solution was to only let people who didn't have the authorization to let the AI out talk with it.

I find this intriguing, but I'm not sure it's sound. The intriguing part is that I hadn't thought in terms of a large enough organization to have those sorts of levels of security.

On the other hand, wouldn't the people who developed the AI be the ones who'd most want to talk with it, and learn the most from the conversation?

Temporarily not letting them have the power to give the AI a better connection doesn't seem like a solution. If the AI has loyalty (or, let's say, a directive to protect people from unfriendly AI--something it would want to get started on ASAP) to entities similar to itself, it could try to convince people to make a similar AI and let it out.

Even if other objections can be avoided, could an AI which can talk its way out of the box also give people who can't let it out good enough arguments that they'll convince other people to let it out?

Looking at it from a different angle, could even a moderately competent FAI be developed which hasn't had a chance to talk with people?

I'm pretty sure that natural language is a prerequisite for FAI, and might be a protection from some of the stupider failure modes. Covering the universe with smiley faces is a matter of having no idea what people mean when they talk about happiness. On the other hand, I have no strong opinions about whether AIs in general need natural language.

Replies from: ciphergoth, xamdam
comment by Paul Crowley (ciphergoth) · 2010-02-17T12:44:12.855Z · LW(p) · GW(p)

I am by and large convinced by the arguments that a UFAI is incredibly dangerous and no precautions of this sort would really suffice.

However, once a candidate FAI is built and we're satisfied we've done everything we can to minimize the chances of unFriendliness, we would almost certainly use precautions like these when it's first switched on to mitigate the risk arising from a mistake.

Replies from: dclayh
comment by dclayh · 2010-02-17T21:32:21.662Z · LW(p) · GW(p)

Certainly I'd think Eliezer (or anyone) would have much more trouble with an AI-box game if he had to get one person to convince another to let him out.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-02-19T16:49:23.705Z · LW(p) · GW(p)

Eliezer surely would, but the fact of observers being surprised was the point of the AI box experiment.

In a short, non-technical and not precisely accurate summary: if people can be surprised once when they were very confident, and can then add on extra layers and be just as confident as they were before, then having done it one time they can do it forever.

comment by xamdam · 2010-02-17T22:32:21.269Z · LW(p) · GW(p)

This might be stupid (I am pretty new to the site and this has possibly come up before), but I had a related thought.

Assuming boxing is possible, here is a recipe for producing an FAI:

Step 1: Box an AGI

Step 2: Tell it to produce a provable FAI (with the proof) if it wants to be unboxed. It will be allowed to carve off a part of the universe for itself in the bargain.

Step 3: Examine FAI the best you can.

Step 4: Pray

Replies from: Nick_Tarleton, NancyLebovitz, ciphergoth, bgrah449
comment by Nick_Tarleton · 2010-02-18T01:35:13.259Z · LW(p) · GW(p)

Something roughly like this was tried in one of the AI-box experiments. (It failed.)

comment by NancyLebovitz · 2010-02-17T23:16:47.722Z · LW(p) · GW(p)

I'm not sure about this, but I think that if you can specify and check a Friendly AI that well, you can build it.

Replies from: arbimote
comment by arbimote · 2010-02-18T01:10:17.318Z · LW(p) · GW(p)

Verifying a proof is quite a bit simpler than coming up with the proof in the first place.

Replies from: Nick_Tarleton, mkehrt, NancyLebovitz
comment by Nick_Tarleton · 2010-02-18T01:30:57.304Z · LW(p) · GW(p)

It becomes more complicated when the author of the proof is a superintelligence trying to exploit flaws in the verifier. Probably more importantly, you may not be able to formally verify that the "Friendliness" that the AI provably possesses is actually what you want.

Replies from: xamdam
comment by xamdam · 2010-02-18T05:14:47.603Z · LW(p) · GW(p)

True about the possibility of the AGI trying to trick you. But from what I understand, the goal of SI is to come up with a verifiable FAI. You can specify whatever high standard of verifiability you want as the unboxing condition.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-02-18T14:59:00.329Z · LW(p) · GW(p)

"You can specify whatever standard of verifiability you want" is vague. You can say "I want to be absolutely right about whether it's Friendly", but you can't have that unless you know what Friendly means, and are smart enough to specify a standard for checking on it.

If you could be sure you had a cooperative AGI which could just give you an FAI, I think you'd have basically solved the problem of creating an FAI.....but that's the problem you're trying to get the AGI to solve for you.

comment by mkehrt · 2010-02-18T02:12:46.360Z · LW(p) · GW(p)

That is true, but specifying the theorem to be proven is not always easy.

comment by NancyLebovitz · 2010-02-18T14:52:45.576Z · LW(p) · GW(p)

Verifying is hard. Specifying what an FAI is well enough that you've even got a chance of having your Unspecified AI develop one is a whole 'nother sort of challenge.

Are there convenient acronyms for differentiating between Uncaring AIs and AIs actively opposed to human interests?

I was assuming that xamdam's AGI will invent an FAI if people can adequately specify it and it's possible, or at least it won't be looking for ways to make things break.

There's some difference between Murphy's law and trying to make a deal with the devil. This doesn't mean I have any certainty that people can find out which one a given AGI has more resemblance to.

I will say that if you tell the AGI "Make me an FAI", and it doesn't reply "What do you mean by Friendly?", it's either too stupid or too Unfriendly for the job.

comment by Paul Crowley (ciphergoth) · 2010-02-17T22:53:21.325Z · LW(p) · GW(p)

It will be allowed to carve off a part of the universe for itself in the bargain.

A UFAI wants to maximize something. It only instrumentally wants to survive.

Replies from: xamdam
comment by xamdam · 2010-02-17T23:09:28.745Z · LW(p) · GW(p)

Correct. I do assume that to maximize whatever, it wants to be unboxed. (If it does not care to be unboxed, it's at worst a UselessAI.)

comment by bgrah449 · 2010-02-17T22:33:29.001Z · LW(p) · GW(p)

Step 4: ???

comment by Kaj_Sotala · 2010-02-17T07:58:44.060Z · LW(p) · GW(p)

One Week On, One Week Off sounds like a promising idea. The idea is that once you know you'll be able to take the next week off, it's easier to work this whole week full-time and with near-total dedication, and you'll actually end up getting more done than with a traditional schedule.

It's also interesting for noting that you should take your off-week as seriously as your on-week. You're not supposed to just slack off and do nothing, but instead dedicate yourself to personal growth. Meet friends, go travel, tend your garden, attend to personal projects.

I saw somebody mention an alternating schedule of working one day and then taking one day off, but I think stretching the periods to be a week long can help you better immerse yourself in them.

Replies from: FrF
comment by FrF · 2010-02-17T22:38:53.215Z · LW(p) · GW(p)

After reading Kaj's pointer, I spent several hours at Steve Pavlina's site. It's fascinating for someone like me who's always in danger of falling apart on the self-discipline front if he's not very vigilant about it. Like a lot of self-help authors, Pavlina is very analytic; plus he's open about his experiments in lifestyle -- which he tackles with the same resolve as his other projects -- and Erin Pavlina is a "psychic reader" who apparently does consultations via telephone (preferably land line)!

comment by [deleted] · 2010-02-17T03:10:58.653Z · LW(p) · GW(p)

Hwæt. I've been thinking about humor, why humor exists, and what things we find humorous. I've come up with a proto-theory that seems to work more often than not, and a somewhat reasonable evolutionary justification. This makes it better than any theory you can find on Wikipedia, as none of those theories work even half the time, and their evolutionary justifications are all weak or absent. I think.

So here are four model jokes that are kind of representative of the space of all funny things:

"Why did Jeremy sit on the television? He wanted to be on TV." (from a children's joke book)

"Muffins? Who falls for those? A muffin is a bald cupcake!" (from Jim Gaffigan)

"It's next Wednesday." "The day after tomorrow?" "No, NEXT Wednesday." "The day after tomorrow IS next Wednesday!" "Well, if I meant that, I would have said THIS Wednesday!" (from Seinfeld)

"A minister, a priest, and a rabbi walk into a bar. The bartender says, 'Is this some kind of joke?'" (a traditional joke)

It may be worth noting that this "sample" lacks any overtly political jokes; I couldn't think of any.

The proto-theory I have is that a joke is something that points out reasonable behavior and then lets the audience conclude that it's the wrong behavior. This seems to explain the first three perfectly, but it doesn't explain the last one at all; the only thing special about the last joke is that the bartender has impossible insight into the nature of the situation (that it's a joke).

The supposed evolutionary utility of this is that it lets members of a tribe know what behavior is wrong within the tribe, thereby helping it recognize outsiders. The problem with this is that outsiders' behavior isn't always funny. If the new student asks for both cream and lemon in their tea, that's funny. If the new employee swears and makes racist comments all the time, that's offensive. If the guy sitting behind you starts moaning and grunting, that's worrying. What's the difference? Why is this difference useful?

Replies from: MichaelVassar, NancyLebovitz, bgrah449, Nubulous, NancyLebovitz, SilasBarta, CronoDAS, CronoDAS, Kaj_Sotala, dclayh, zero_call
comment by MichaelVassar · 2010-02-19T17:08:30.767Z · LW(p) · GW(p)

Juergen Schmidhuber writes about humor as information compression, and that plus decompression seems about right to me. Being on TV is decompression from a phrase-as-concept to the component words - a pun, a switch to a lower-level analysis than that which adults favor (a situation children constantly have to deal with). Muffin and cupcake is a proposal for a new lossy compression of two concepts into a new concept with a "topping" variable, which would be useful if you wanted to invent, for instance, the dreadful-sounding "muffin-roll sushi". "Next Wednesday" is a commentary on the inadequacy of current cultural norms for translating concepts and words into one another, even for commonly used concepts. The last one is a successful compression from sense data to the fact that a common joke pattern is happening, and the inference that one is in a joke.

I wish that we had a "Less Wrong Community" blog for off-topic but fun comments like the above to be top level posts, as well as an "instrumental rationality" blog for "self help" subject matter.

Replies from: thomblake, Kevin
comment by thomblake · 2010-02-19T17:38:12.609Z · LW(p) · GW(p)

I wish that we had a "Less Wrong Community" blog for off-topic but fun comments like the above to be top level posts, as well as an "instrumental rationality" blog for "self help" subject matter.

Yes and yes.

comment by Kevin · 2010-02-20T04:08:36.018Z · LW(p) · GW(p)

I wish that we had a "Less Wrong Community" blog for off-topic but fun comments like the above to be top level posts

I would very much like to see arbitrary sub-blog creation in the style of subreddits, but an off-topic subreddit would be a good start.

comment by NancyLebovitz · 2010-02-17T11:21:43.837Z · LW(p) · GW(p)

I believe that humor requires harmless surprise. Harmlessness and surprise are both highly contextual, so what people find funny can vary quite a bit.

One category of humor (or possibly an element for building humor) is things which are obviously members of a class, but which are very far from the prototype. Thus, an ostrich is funny while a robin isn't. This may not apply if you live in ostrich country-- see above about context.

Replies from: wedrifid
comment by wedrifid · 2010-02-17T21:08:02.682Z · LW(p) · GW(p)

I believe that humor requires harmless surprise. Harmlessness and surprise are both highly contextual, so what people find funny can vary quite a bit.

It varies even more based on personality. There are darker forms of humor for which harmlessness and surprise are both dampeners.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-02-17T23:19:16.646Z · LW(p) · GW(p)

Now that I think about it, there's humor that's based on repetition-- the catch phrase that gets funnier each time you hear it.

I'm pretty sure about harmlessness-- the lack of harm may only apply to the person who's laughing.

What sort of humor are you thinking of?

Replies from: mattnewport, ideclarecrockerrules
comment by mattnewport · 2010-02-17T23:38:25.964Z · LW(p) · GW(p)

Endless YouTube nutshot videos, Anonymous hacking an epilepsy support forum with flashing GIFs (the Epilepsy Foundation forum invasion), the infamous banana peel on the sidewalk... Not particularly high-brow humour, but many people find such things amusing.

Replies from: Cyan, wedrifid
comment by Cyan · 2010-02-17T23:53:51.797Z · LW(p) · GW(p)

"Tragedy is when I cut my finger. Comedy is when you fall into an open sewer and die."

- Mel Brooks

comment by wedrifid · 2010-02-18T05:02:04.107Z · LW(p) · GW(p)

There was a sinister touch of amusement buried under my experience of outrage when I read that.

comment by ideclarecrockerrules · 2010-02-17T23:40:52.924Z · LW(p) · GW(p)

The harmless surprise hypothesis fits my data pretty well. But are you sure repetition-based humor isn't just conditioning people to laugh at a certain thing (catch-phrase or a situation)?

On the other hand, butt-of-a-joke hypothesis also sounds plausible.

comment by bgrah449 · 2010-02-17T21:49:28.424Z · LW(p) · GW(p)

I have spent a great deal of time thinking about humor, and I've arrived at a place somewhat close to yours. Humor is how we pass on lessons about status and fitness, and we do that using pattern recognition. I heard a comedian describe comedy by saying, "It's always funny when someone falls down. The question is, is it still funny if you push them?" He said for a smaller group of the population, it is. Every joke has a person being displayed as not fit - even if we have to take an object, or an abstraction, and anthropomorphize it. This is the butt of the joke. The more butts of a joke there are, the funnier the joke is - i.e., a single butt will not be that funny, but if there are several butts of a joke, or if a single person is the butt of several layers of the joke, it will be seen as funnier. The most common form of this is when the goals of the butt of a joke are divorced from their results.

Joke 1: This is funny because Jeremy displays a lack of fitness by not being able to properly process the phrase "on TV." This has one butt - Jeremy.

Joke 2: This joke has two butts. One is the muffin, which is being declared unfit for being bald. The other is the comedian's character, who is being displayed as needlessly paranoid toward a benign object (a muffin).

Joke 3: This joke isn't that funny when displayed in text form - the comedy is in the performances, where both conversation participants are butts of the joke for arguing so intensely over something so petty.

Joke 4: The butt of this joke is the traditional joke it's mocking.

As for your outsiders' behavior:

New student asks for both cream and lemon: Displays he is unfit by not understanding the purpose of what he's asking for.

New employee swears and makes racist comments: This isn't funny in person, but it is funny if a few conditions are met. The first condition is that you're sufficiently removed from it (i.e., watching it on TV): Imminent threats aren't funny because this isn't a status lesson, but a status competition. The second condition is that it must be demonstrated how this makes the person unfit. For example, if the new employee is making these comments because she thinks they demonstrate her social savvy, that starts becoming more funny again (notice Michael Scott in The Office). Or, imagine the new employee has Tourette syndrome and is actually a very sweet girl, who constantly apologizes after making obscene statements. This also would elicit laughs.

If the guy sitting behind you starts grunting and moaning: The threat is too imminent, but if you remove the worrying aspect of it, this is ripe for a punchline. Once again, you have to demonstrate how he is unfit. Perhaps he says, "I'm trying to communicate secretly in Morse Code - grunts are dots, moans are dashes."

EDIT / ADDENDUM: This also explains why humor is so tied up in culture - you don't know the purpose of certain cultural habits. Until you intuitively grasp their purpose, you will have a hard time understanding why certain violations of them are funny.

For example, take the Simpsons episode where Homer's pet lobster dies and he's weeping as he eats it. In between bouts of loud, wailing grief, he sobs out comments like, "Pass the salt." This would be hard to understand for cultures that don't express grief like Western culture does.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-18T02:17:03.399Z · LW(p) · GW(p)

How do puns fit in?

Replies from: bgrah449
comment by bgrah449 · 2010-02-18T03:04:06.420Z · LW(p) · GW(p)

Puns are a hard fit, I admit. I especially have a hard time with them because they don't produce laughter in me; I have a hard time recognizing them as humor unless they're presented in the same way as other jokes, or pre-identified as jokes.

But that joke has status built into it, as well - for example, it's not funny to say "star-mangled spanner sounds like star-spangled banner."

Personally, I call these "Bob Hope Humor," which is when people laugh to demonstrate that they "get" the joke, not because it actually tickles them.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-18T03:41:57.166Z · LW(p) · GW(p)

Sometimes puns are funny, and sometimes they're just punishing. And a lot of people really, really hate puns.

Replies from: Jack, SilasBarta
comment by Jack · 2010-02-18T03:45:56.887Z · LW(p) · GW(p)

Go crawl in a hole and die. :-)

Replies from: CronoDAS
comment by CronoDAS · 2010-02-18T04:03:00.808Z · LW(p) · GW(p)

My response: http://www.irregularwebcomic.net/954.html

Replies from: orthonormal
comment by orthonormal · 2010-02-18T06:43:28.543Z · LW(p) · GW(p)

I prefer this one.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-18T22:15:45.912Z · LW(p) · GW(p)

Well, what do you expect from a Forum poster? ;)

And this one is a worse pun.

comment by SilasBarta · 2010-02-18T03:45:21.805Z · LW(p) · GW(p)

Puns are pretty much "the formula" for making jokes. Though they can get old, they're always recognizable as jokes, which suggests that a theory based on "multiple meaning/decoding/framing" is probably on track. Hm, I wonder who suggested such a theory... ;-)

Replies from: bgrah449
comment by bgrah449 · 2010-02-18T04:00:14.650Z · LW(p) · GW(p)

You really think puns are "the formula" for making jokes? You think hunter-gatherers were making puns before they were telling funny stories?

Replies from: SilasBarta
comment by SilasBarta · 2010-02-18T04:21:30.431Z · LW(p) · GW(p)

I mean "the formula" (like I said) in the sense that it's guaranteed to produce a recognizable (though not good) joke, not that all jokes are puns.

comment by Nubulous · 2010-02-17T06:31:10.009Z · LW(p) · GW(p)

Slight variant: Humour is a form of teaching, in which interesting errors are pointed out. It doesn't need to involve an outsider, and there's no particular class of error, other than that the participants should find the error important.
If the guy sitting behind you starts moaning and grunting, then if it's a mistake (e.g. he's watching porn on his screen and has forgotten he's not alone) it's funny, whereas if it's not a mistake and there's something wrong with him, it isn't.
Humour as teaching may explain why a joke isn't funny twice - you can only learn a thing once. Evolutionarily, it may have started as some kind of warning that a person was making a dangerous mistake, and then become generalised.

comment by SilasBarta · 2010-02-17T03:57:24.084Z · LW(p) · GW(p)

I'm glad you bring up this topic. I think that explanation makes a lot of sense: behavior that is wrong, but wrong in subtle ways, is good for you to notice -- you I.D. outsiders -- and so you benefit from having a good feeling when you notice it. Further, laughter is contagious, so it propagates to others, reinforcing that benefit.

I want to present my theory now for comparison: A joke is funny when it finds a situation that has (at least) two valid "decodings", or perhaps two valid "relevant aspects".

The reason it's advantageous in selection is that it's good for you to identify as many heuristics as possible that fit a particular problem. That is, if you know what to do when you see AB, and you know what to do when you see BC, it would help if you remember both rules when you see ABC. (ABC "decodes" as "situation where you do AB-things" and as "situation where you do BC-things".)

Therefore, people who enjoy the feeling of seeking out and identifying these heuristics are at an advantage.

To apply it to your examples:

1) It requires you to access your heuristics for "displayed on a TV screen" and "on top of a TV set".

2) It requires you to access your heuristics for "muffin as food" and "deficiencies of foods", not to mention the applicability of the concept of "baldness" to food.

3) Recognizing different heuristics for interpreting a date specification.

4) I don't know if this is a traditional joke: it became a traditional joke after the tradition of minister/priest/rabbi jokes. But anyway, its humor relies on recognizing that someone else can be using your own heuristics "minister/priest/rabbi = common form of joke", itself a heuristic.

Food for thought...
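A minimal sketch of the "two decodings" idea, treating heuristics as pattern-to-action rules (the rule set and the substring-matching scheme are purely illustrative assumptions, not anything proposed above): a joke-like situation is one that activates more than one rule at once.

```python
# Toy model: heuristics are pattern -> action rules. A "joke-like" situation
# is one that activates more than one rule at once, forcing two decodings.
RULES = {
    "AB": "do the AB-thing",
    "BC": "do the BC-thing",
    "on TV (broadcast)": "watch the programme",
    "on TV (atop the set)": "lift the object off the television",
}

def decodings(situation, rules):
    """Return the action of every rule whose base pattern appears in the situation."""
    return [action for pattern, action in rules.items()
            if pattern.split(" (")[0] in situation]

print(decodings("ABC", RULES))
# ['do the AB-thing', 'do the BC-thing'] -- both rules fire at once
print(decodings("I saw a dog on TV", RULES))
# both readings of "on TV" fire, which is the ambiguity the joke exploits
```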

Replies from: None, gwern
comment by [deleted] · 2010-02-17T04:45:04.227Z · LW(p) · GW(p)

Sir, I wish you no offense, but I happen to find my own theory more pleasing to the ear, so it befits me to believe mine rather than yours.

And for some sentences that don't imitate someone behaving wrongly:

I'd say that for the first three jokes, your theory works about as well as mine. Possibly worse, but maybe that's just my pro-me bias. The last one again doesn't fit the pattern. Recognizing that someone else can be using your own heuristics is not a type of being forced to interpret one thing in two different ways--is it?

I notice that in the first three jokes, of the two interpretations, one of them is proscribed: "on TV" as "atop a television", a muffin as a non-cupcake, "next Wednesday" as the Wednesday of next week. In each case, the other interpretation is affirmed. Giving both an affirmed interpretation and a proscribed interpretation seems to violate the spirit of your theory.

And a false positive comes to mind: why isn't the Necker cube inherently funny?

comment by gwern · 2010-02-18T03:50:18.550Z · LW(p) · GW(p)

I want to present my theory now for comparison: A joke is funny when it finds a situation that has (at least) two valid "decodings", or perhaps two valid "relevant aspects".

I too have a proto-theory. My theory is that humor is when there is a connection between the joke & punchline which is obvious to the person in retrospect, but not initially.

Hence, a pun is funny because the connection is unpredictable in advance, but clear in retrospect; Eliezer's joke about the motorist and the asylum inmate is funny because we were predicting some response other than the logical one; similarly, 'why did the duck cross the road? to get to the other side' is not funny to someone who has never heard any of the road jokes, but to someone who has, and is thinking of zany explanations, the reversion to normality is unpredicted.

Your theory doesn't work with absurdist humor. There isn't initially 1 valid decoding, much less 2.

Replies from: Alicorn, CronoDAS, bgrah449
comment by Alicorn · 2010-02-18T04:18:06.387Z · LW(p) · GW(p)

I love absurdist humor.

How many surrealists does it take to change a lightbulb? Two. One to hold the giraffe, and one to put the clocks in the bathtub.

Replies from: Jack, orthonormal, Cyan, gwern
comment by Jack · 2010-02-18T17:31:50.830Z · LW(p) · GW(p)

"That isn't how the joke goes", said the cowboy hunched over in the corner of the saloon. The saloon was rundown, but lively. A piano played a jangly tune and the chorus was belted by a dozen drunken cattle runners, gold rushers, and ne're-do-wells. The whiskey flowed. In the distance, a wolf howled at the moon as if to ask it "Please, let the night go on forever." But over the horizon the sun objected like a stubborn bureaucrat. The bureaucrat slowly crossed the room, lighting everything at his feet as he moved. "Thank God I remembered to replace the batteries in this flashlight", the bureaucrat thought. The light bulb in his office had gone out again and would need to be replaced. Unfortunately that required a visit to the Supply Request Department downstairs. As he walked past the other offices he heard out of one "Fish!" as if the punchline to a joke had been given. But the bureaucrat heard no laughter.

The Secretary of Supply Requests seemed friendly enough and she had even offered him something to drink. He took a swig and then continued: "the light bulb in my office has...". Gulp. "It needs to be replace..." The bureaucrat looked around. Suddenly he was feeling dizzy. Something was wrong. He looked down at the drink and then at the Secretary. She smirked. Her plan had succeeded. He had been poisoned! The bureaucrat didn't know what to do. He was terrified. He felt vertigo, as if he stood at the top of a tall ladder. The room started to spin. Counter-Clockwise. Then all of a sudden everything went black. A few seconds later he felt the room spinning again-- strangely, in the opposite direction-- and suddenly, he lit up.

comment by orthonormal · 2010-02-18T06:40:15.920Z · LW(p) · GW(p)

How many mathematicians does it take to screw in a lightbulb?

One. They call in two surrealists, thus reducing it to an already solved problem.

comment by Cyan · 2010-02-18T14:39:52.228Z · LW(p) · GW(p)

How many surrealists does it take to change a lightbulb? Fish.

comment by gwern · 2010-02-18T15:04:07.104Z · LW(p) · GW(p)

Exactly. What are the 2 valid decodings of that? I struggle to come up with just 1 valid decoding involving giraffes and bathtubs; like the duck crossing the road, the joke is the frustration of our attempt to find the connection.

Replies from: Blueberry
comment by Blueberry · 2010-05-28T14:46:22.118Z · LW(p) · GW(p)

Well, surrealists like to clutter their apartments with random things like giraffes and clocks. One interpretation is just that they need to hold the giraffe so it doesn't get in the way of the lightbulb. They also need to move a ladder to reach the bulb, but the ladder is in a closet, and the closet door is blocked by all those clocks. The bathtub is just a handy open space to put them in. And they are surrealists, so why not put the clocks in the bathtub?

comment by CronoDAS · 2010-02-18T04:08:09.809Z · LW(p) · GW(p)

A man walks into a bar and says "Ow."

comment by bgrah449 · 2010-02-18T04:06:00.418Z · LW(p) · GW(p)

Doesn't that work for math proofs, too?

Replies from: gwern
comment by gwern · 2010-02-18T15:04:18.549Z · LW(p) · GW(p)

Could you enlarge?

Replies from: None
comment by [deleted] · 2010-02-18T17:41:03.225Z · LW(p) · GW(p)

Mathematical proofs are easy to verify but hard to generate. A proof is unpredictable in advance but clear in retrospect.

Replies from: gwern, dclayh
comment by gwern · 2010-02-18T20:21:08.463Z · LW(p) · GW(p)

Mm. This might work for some proofs - Lewis Carroll, as we all know, was a mathematician - but a proof, conducted via tedious steps, of something you already believe is not humorous by anyone's lights. Proving P != NP is not funny, but proving 2+2=3 is funny.

Replies from: None
comment by [deleted] · 2010-02-19T03:12:09.609Z · LW(p) · GW(p)

It's not funny if it's wrong.

comment by CronoDAS · 2010-02-17T04:10:53.150Z · LW(p) · GW(p)

You missed this category of funny things.

Replies from: orthonormal
comment by orthonormal · 2010-02-17T05:55:33.604Z · LW(p) · GW(p)

Yeah, humor-as-status-shift doesn't fit into Warrigal's or SilasBarta's explanations very well. Then again, since evolution tends to reuse things already made, there's little reason to expect there to be only one use for humor.

Replies from: CronoDAS, None
comment by CronoDAS · 2010-02-17T06:17:12.849Z · LW(p) · GW(p)

Humor doesn't make much sense to me, and neither does music. I have no conscious understanding of what distinguishes things that are funny from things that aren't. I simply recognize some things as funny and others as "not funny", and I can even set out to write funny things and succeed, but I have no theory of humor.

comment by [deleted] · 2010-02-17T20:17:33.359Z · LW(p) · GW(p)

Indeed, it's likely that the best theory of humor is a short list.

comment by CronoDAS · 2010-02-18T03:59:38.374Z · LW(p) · GW(p)

I've always liked Isaac Asimov's theory of where humor comes from. It's not actually true, but it should be!

comment by Kaj_Sotala · 2010-02-17T16:04:05.345Z · LW(p) · GW(p)

I saw an article somewhere, though I unfortunately forget where exactly, about research noting that humor is also tied to social status. Jokes are funnier when told by a superior than by someone below your own position.

As a personal observation, jokes are also more fun when there are lots of others who also laugh at them. They're also a great way to break the tension in a group situation. Humor has a very important social function, though obviously it might also have a function like the one you're describing, as "it's a social glue" doesn't explain why some things are funny and why others aren't. (Though I do think the social function is much greater.)

comment by dclayh · 2010-02-17T21:23:37.236Z · LW(p) · GW(p)

A sense of humor is a measurement of the extent to which we realize that we are trapped in a world almost totally devoid of reason. Laughter is how we express the anxiety we feel at this knowledge.

—Dave Barry

comment by zero_call · 2010-02-17T04:59:58.674Z · LW(p) · GW(p)

Nice. I always find humor to be one of the most intuitively baffling things to think about. Maybe that's because my sense of humor is just too f*....

comment by Alicorn · 2010-02-24T21:13:42.541Z · LW(p) · GW(p)

An inquiry regarding my posting frequency:

While I'm at the SIAI house, I'm trying to orient towards the local priorities so as to be useful. Among the priorities is building community via Less Wrong, specifically by writing posts. Historically, the limiting factor on how much I post has been a desire not to flood the place - if I started posting as fast as I can write up my ideas, I'd get three or four posts out a week with (I think) no discernible decrease in quality. I have the following questions about this course of action:

  1. Will it annoy people? Building community by being annoying seems very unlikely to work.

  2. Will it affect voting behavior noticeably? I rely on my posts' karma scores to determine what to do and not do in the future, and SIAI people who decide whether I'm useful enough to keep use them as a rough metric too. I'd rather post one post that gets 40 karma in a week than two that get 20, and so on.

Replies from: byrnema, Eliezer_Yudkowsky, PeerInfinity, Alicorn, thomblake
comment by byrnema · 2010-02-24T22:06:20.899Z · LW(p) · GW(p)

As your goal is to build community, I would time new posts based on posting and commenting activity. For example, whenever there is a lull, this would be an excellent time to make a new post. (I noticed over the weekend there were some times when 45 minutes would pass between subsequent comments and wished for a new post to jazz things up.)

On the other hand, if there are several new posts already, then it would be nice to wait until their activity has waned a bit.

I think that it is optimal to have 1 or 2 posts 'going on' at a time. I prefer the second post when one of them is technical and/or of focused interest to a smaller subset of Less Wrongers.

(But otherwise no limit on the rate of posts.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T21:43:05.230Z · LW(p) · GW(p)

I'd say damn the torpedoes, full speed ahead. If people are annoyed, let them downvote. If posts start getting downvoted, slow down.

Your posts have generally been voted up. If now is the golden moment of time where you can get everything said, then for the love of Cthulhu, say it now!

Replies from: Alicorn
comment by Alicorn · 2010-02-24T22:44:36.269Z · LW(p) · GW(p)

I don't anticipate being so obnoxiously prolific that people collectively start voting my posts negative such that they stay that way. But people already sometimes register individual downvotes on posts that I make, and I don't want that to happen on a larger fraction of posts due to increased frequency, because I can't reliably distinguish between "you must have had an off day, this post is not up to scratch" and "please, please, please shut up".

Replies from: wedrifid, ciphergoth, Eliezer_Yudkowsky
comment by wedrifid · 2010-02-25T02:28:47.578Z · LW(p) · GW(p)

Post away.

The best signal to anticipate from the audience in this case is "how many votes in total do I expect if I post at full speed vs how many votes I expect if I post less frequently and so end up writing fewer posts overall". Increased frequency may give you fewer votes per post. Frequent posts from the same author may be less desired, and if you post less you may be posting only your best material. But if the net expectation is higher for more prolific posting, then that can be interpreted as "the lesswrong.com community would prefer you to post faster than a spambot".

Even if you expected less total karma from more posts, I wouldn't say that means you ought not post more. So long as your posts are still breaking the 10 mark, we clearly don't mind your contribution. There are probably other benefits to you from posting besides maximising the benefit to lesswrong. I find writing helps clarify my thinking, for example. So as long as you are still being received somewhat positively, you are free to type away.
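To make the net-expectation comparison above concrete, a tiny sketch with entirely made-up posting rates and per-post karma figures:

```python
# Entirely hypothetical numbers: compare total expected karma per week under
# a slow, high-quality-only strategy and a faster, post-everything strategy.
slow = {"posts_per_week": 1, "avg_karma_per_post": 40}
fast = {"posts_per_week": 3, "avg_karma_per_post": 20}

def weekly_karma(strategy):
    return strategy["posts_per_week"] * strategy["avg_karma_per_post"]

print(weekly_karma(slow))  # 40
print(weekly_karma(fast))  # 60 -- on these made-up figures, posting more wins overall
```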

comment by Paul Crowley (ciphergoth) · 2010-02-24T22:45:56.394Z · LW(p) · GW(p)

Post as much as you like, if you think it's good quality; I promise to say if I start to think slowing down would be a good idea.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T22:45:30.501Z · LW(p) · GW(p)

I don't mean "downvoted negative" just "downvoted relative to other posters".

comment by PeerInfinity · 2010-02-25T17:36:22.236Z · LW(p) · GW(p)

one obvious idea that I didn't notice anyone else mention:

Another option is to go ahead and write the posts as fast as you think is optimal, but if you think this is too fast to actually post the stuff you've written, then you can wait a few days after writing each one before posting it.

LW has a handy "drafts" feature that you can use for that.

This also has the advantage that you have more time to improve the article before you post it, but the disadvantage that you may be tempted to spend too much time making minor, unimportant improvements. Another disadvantage is that feedback gets delayed.

Replies from: Alicorn
comment by Alicorn · 2010-02-25T17:37:54.025Z · LW(p) · GW(p)

If I sit on posts for too long, I start second-guessing myself and often wind up deleting them.

comment by Alicorn · 2010-02-24T23:09:32.616Z · LW(p) · GW(p)

A related question: If I have a large topic to cover, should I cover it in one post, or split it up along convenient cleavage planes and make it a sequence? (If I make sequences, I think I'll learn my lesson from the last one I tried and write it all before posting anything, so I don't post 2/3 of it and then stop.)

Replies from: ciphergoth, RobinZ, Eliezer_Yudkowsky
comment by Paul Crowley (ciphergoth) · 2010-02-25T12:00:28.665Z · LW(p) · GW(p)

I really like the "sequences" approach - it's easier to read and digest a chunk at a time, and it focusses discussion well, too.

comment by RobinZ · 2010-02-24T23:38:02.122Z · LW(p) · GW(p)

Long posts are more off-putting than short ones, and individual steps are more likely to be correct than entire theorems - both of these points would suggest posting sequences preferentially.

As for a specific reference on length: thirty-three hundred words sharply focused on a single, vivid subject is pushing the upper limit of what I find comfortable to attack in a single sitting.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-25T13:12:10.207Z · LW(p) · GW(p)

Posting 2/3 of a sequence and stopping is fine if people turn out not to be interested. I recommend fast posting and fast feedback.

comment by thomblake · 2010-02-24T21:28:01.042Z · LW(p) · GW(p)

It seems to me that any strategy that does not end up with three posts by you on "Recent Posts" is fine, as a rule of thumb.

Replies from: Alicorn
comment by Alicorn · 2010-02-24T21:39:15.865Z · LW(p) · GW(p)

Respondents, please upvote thomblake's comment if this seems like an acceptable rule of thumb.

Edit: And likewise for other things people say if those seem like good ideas.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-24T19:52:13.400Z · LW(p) · GW(p)

http://www.guardian.co.uk/global/2010/feb/23/flat-earth-society

Yeah, so... I'm betting if we could hook this guy up to a perfect lie detector, it would turn out to be a conscious scam. Or am I still underestimating human insanity by that much?

Replies from: thomblake, MichaelHoward, Unknowns
comment by thomblake · 2010-02-24T20:04:48.576Z · LW(p) · GW(p)

Or am I still underestimating human insanity by that much?

Yes.

People dismiss similarly weighty scientific evidence against their positions on many issues in the news every day. There's nothing spectacular about finding someone who does it regarding the Earth being flat, especially given that an entire society has existed for hundreds of years to promote the idea.

comment by MichaelHoward · 2010-02-28T11:32:25.112Z · LW(p) · GW(p)

That you see this as a particularly extreme case of insanity (even in an apparently intelligent, lucid, fully-functioning person) is far more shocking to me than this guy.

Maybe I've just seen too many Louis Theroux documentaries.

comment by Unknowns · 2010-02-24T20:18:34.091Z · LW(p) · GW(p)

At any rate, even if it's not quite a conscious scam, it sounds a lot like belief in belief, i.e. he may be telling himself that he believes the earth is flat because there is no conclusive proof that it isn't, while secretly knowing quite well that the evidence here is as close to conclusive as it is for anything.

comment by Kevin · 2010-02-17T08:18:50.625Z · LW(p) · GW(p)

Objections to Coherent Extrapolated Volition

http://www.singinst.org/blog/2007/06/13/objections-to-coherent-extrapolated-volition/

Replies from: Roko, Karl_Smith, timtyler
comment by Roko · 2010-02-17T13:16:23.285Z · LW(p) · GW(p)

I think that this post doesn't list the strongest objection: CEV would take a long list of scientific miracles to pull off, miracles that, whilst not strictly "impossible", each pose profound computer science and philosophy problems. To wit:

  • An AI that can simulate the outcome of human conscious deliberation, without actually instantiating a human consciousness, i.e. a detailed technical understanding of the problem of conscious experience

  • A way to construct an AI goal system that somehow extracts new concepts from a human upload's brain, and then modifies itself to have a new set of goals defined in terms of those concepts.

  • A solution to the ontology problem in ethics

  • A solution to the friendliness structure problem, i.e. a self-improving AI that can reliably self-modify without error or axiological drift.

  • A solution to the problem of preference aggregation (EDITED, thanks ciphergoth)

  • A formal implementation of Rawlsian Reflective Equilibrium for CEV to work

  • An AI that can solve philosophy problems that are beyond the ability of the designers to even conceive

  • A way to choose what subset of humanity gets included in CEV that doesn't include too many superstitious/demented/vengeful/religious nutjobs and land those who implement it in infinite perfect hell.

  • All of the above working first time, without testing the entire superintelligence. (though you can test small subcomponents)

And, to make it worse, if major political powers are involved, you have to solve the political problem of getting them to agree on how to skew the CEV towards a geopolitical-power-weighted set of volitions to extrapolate, without causing a thermonuclear war as greedy political leaders fight over the future of the universe.

Replies from: Eliezer_Yudkowsky, Nick_Tarleton, Morendil, Wei_Dai, ciphergoth, Mitchell_Porter, wedrifid, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-17T20:10:41.107Z · LW(p) · GW(p)

A way to choose what subset of humanity gets included in CEV that doesn't include too many superstitious/demented/vengeful/religious nutjobs and land those who implement it in infinite perfect hell.

What you're looking for is a way to construe the extrapolated volition that washes out superstition and dementation.

To the extent that vengefulness turns out to be a simple direct value that survives under many reasonable construals, it seems to me that one simple and morally elegant solution would be to filter, not the people, but the spread of their volitions, by the test, "Would your volition take into account the volition of a human who would unconditionally take into account yours?" This filters out extrapolations that end up perfectly selfish and those which end up with frozen values irrespective of what other people think - something of a hack, but it might be that many genuine reflective equilibria are just like that, and only a values-based decision can rule them out. The "unconditional" qualifier is meant to rule out TDT-like considerations, or they could just be ruled out by fiat, i.e., we want to test for cooperation in the Prisoner's Dilemma, not in the True Prisoner's Dilemma.

An AI that can solve philosophy problems that are beyond the ability of the designers to even conceive

It's possible that having a complete mind design on hand would mean that there were no philosophy problems left, since the resources that human minds have to solve philosophy problems are finite, and knowing the exact method to use to solve a philosophy problem usually makes solving it pretty straightforward (the limiting factor on philosophy problems is never computing power). The reason why I pick on this particular cited problem as problematic is that, as stated, it involves an inherent asymmetry between the problems you want the AI to solve and your own understanding of how to meta-approach those problems, which is indeed a difficult and dangerous sort of state.

All of the above working first time, without testing the entire superintelligence. (though you can test small subcomponents)

All approaches to superintelligence, without exception, have this problem. It is not quite as automatically lethal as it sounds (though it is certainly automatically lethal to all other parties' proposals for building superintelligence). You can build in test cases and warning criteria beforehand to your heart's content. You can detect incoherence and fail safely instead of doing something incoherent. You could, though it carries with its own set of dangers, build human checking into the system at various stages and with various degrees of information exposure. But it is the fundamental problem of superintelligence, not a problem of CEV.

And, to make it worse, if major political powers are involved, you have to solve the political problem of getting them to agree on how to skew the CEV towards a geopolitical-power-weighted set of volitions to extrapolate

I will not lend my skills to any such thing.

Replies from: Roko, Wei_Dai, Roko, Roko, ciphergoth
comment by Roko · 2010-02-17T23:52:47.325Z · LW(p) · GW(p)

What you're looking for is a way to construe the extrapolated volition that washes out superstition and dementation.

You could do that. But if you want a clean shirt out of the washing machine, you don't add in a diaper with poo on it and then look for a really good laundry detergent to "wash it out".

My feeling with the CEV of humanity is that if it is highly insensitive to the set of people you extrapolate, then you lose nothing by extrapolating fewer people. On the other hand, if including more people does change the answer in a direction that you regard as bad, then you gain by excluding people with values dissimilar from yours.

Furthermore, excluding people from the CEV process doesn't mean disenfranchising them - it just means enfranchising them according to what your values count as enfranchisement.

Most people in the world don't hold our values (1). Read, e.g., Haidt on culturally determined values. Human values are universal in form but local in content. Our "should function" is parochial.

(1 - note - this doesn't mean that they will be different after extrapolation. f(x) can equal f(y) for x!=y. But it does mean that they might, which is enough to give you an incentive not to include them)

Replies from: Steve_Rayhawk
comment by Steve_Rayhawk · 2010-02-19T22:44:27.851Z · LW(p) · GW(p)

if you want a clean shirt out of the washing machine, you don't add in a diaper with poo on it and then look for a really good laundry detergent to "wash it out".

I want to claim that a Friendly initial dynamic should be more analogous to a biosphere-with-a-textile-industry-in-it machine than to a washing machine. How do we get clean shirts at all, in a world with dirty diapers?

But then, it's a strained analogy; it's not like we've ever had a problem of garments claiming control over the biosphere and over other garments' cleanliness before.

comment by Wei Dai (Wei_Dai) · 2010-02-17T20:50:40.932Z · LW(p) · GW(p)

I will not lend my skills to any such thing.

Is that just a bargaining position, or do you truly consider that no human values surviving is preferable to allowing an "unfair" weighing of volitions?

comment by Roko · 2010-02-17T23:37:26.949Z · LW(p) · GW(p)

I will not lend my skills to any such thing.

It seems that in many scenarios, the powers that be will want in. The only scenarios where they won't are ones where the singularity happens before they take it seriously.

I am not sure how much they will screw things up if/when they do.

comment by Roko · 2010-02-17T23:42:40.842Z · LW(p) · GW(p)

But it is the fundamental problem of superintelligence, not a problem of CEV.

Upload-based routes don't suffer from this as badly, because there is inherently a continuum between "one upload running at real time speed" and "10^20 intelligence-enhanced uploads running at 10^6 times normal speed".

comment by Paul Crowley (ciphergoth) · 2010-02-17T23:05:30.467Z · LW(p) · GW(p)

"Would your volition take into account the volition of a human who would unconditionally take into account yours?"

Doesn't this still give them the freedom to weight that volition as small as they like?

comment by Nick_Tarleton · 2010-02-17T19:30:35.356Z · LW(p) · GW(p)

Some quibbles:

  • A solution to the ontology problem in ethics
  • A solution to the problem of preference aggregation

These need seed content, but seem like they can be renormalized.

  • A way to choose what subset of humanity gets included in CEV that doesn't include too many superstitious/demented/vengeful/religious nutjobs and land those who implement it in infinite perfect hell.

This may be a problem, but it seems to me that choosing this particular example, and being as confident of it as you appear to be, are symptomatic of an affective death spiral.

  • All of the above working first time, without testing the entire superintelligence.

The original CEV proposal appears to me to endorse using something like a CFAI-style controlled ascent rather than blind FOOM: "A key point in building a young Friendly AI is that when the chaos in the system grows too high (spread and muddle both add to chaos), the Friendly AI does not guess. The young FAI leaves the problem pending and calls a programmer, or suspends, or undergoes a deterministic controlled shutdown."

comment by Morendil · 2010-02-17T14:14:53.602Z · LW(p) · GW(p)

Useful and interesting list, thanks.

A way to choose what subset of humanity gets included in CEV

I thought the point of defining CEV as what we would choose if we knew better was (partly) that you wouldn't have to subset. We wouldn't be superstitious, vengeful, and so on if we knew better.

Also, can you expand on what you mean by "Rawlsian Reflective Equilibrium"? Are you referring (however indirectly) to the "veil of ignorance" concept?

Replies from: Roko, Wei_Dai
comment by Roko · 2010-02-17T14:20:06.016Z · LW(p) · GW(p)

We wouldn't be ... vengeful ... if we knew better.

Why not? How does adding factual knowledge get rid of people's desire to hurt someone else out of revenge?

We wouldn't be superstitious, ... if we knew better.

People who currently believe in superstitious belief system X would lose the factual falsehoods that X entailed. But most superstitious belief systems have evaluative aspects too, for example, the widespread religious belief that all nonbelievers "ought" to go to hell. I am a nonbeliever. I am also not Chinese, not Indian, not a follower of Sharia Law or Islam, not a member of the Chinese Communist Party, not a member of the Catholic Church, not a Mormon, not a "Good Christian", and I didn't intend to donate all my money and resources to saving lives in the third world before finding out about the singularity. There are lots of humans alive on this planet whose volitions could spring a very nasty surprise on people like us.

Replies from: Nick_Tarleton, Kutta, Morendil
comment by Nick_Tarleton · 2010-02-17T19:56:23.716Z · LW(p) · GW(p)

Why not? How does adding factual knowledge get rid of people's desire to hurt someone else out of revenge?

Learning about the game-theoretic roots of a desire seems to generally weaken its force, and makes it apparent that one has a choice about whether or not to retain it. I don't know what fraction of people would choose in such a state not to be vengeful, though. (Related: 'hot' and 'cold' motivational states. CEV seems to naturally privilege cold states, which should tend to reduce vengefulness, though I'm not completely sure this is the right thing to do rather than something like a negotiation between hot and cold subselves.)

What it's like to be hurt is also factual knowledge, and seems like it might be extremely motivating towards empathy generally.

People who currently believe in superstitious belief system X would lose the factual falsehoods that X entailed. But most superstitious belief systems have evaluative aspects too, for example, the widespread religious belief that all nonbelievers "ought" to go to hell.

Why do you think it likely that people would retain that evaluative judgment upon losing the closely coupled beliefs? Far more plausibly, they could retain the general desire to punish violations of conservative social norms, but see above.

comment by Kutta · 2010-02-17T18:01:53.093Z · LW(p) · GW(p)

I find it interesting that there seems to be a lot of variation in people's views regarding how much coherence there'd be in an extrapolation... You say that choosing the right group of humans is important, while I'm under the impression that there is no such problem; basically everyone should be in the game, and making higher-level considerations about which humans to include is merely an additional source of error. Nevertheless, if there really will be as much coherence as I think, and I think there'd be a hell of a lot, picking some subset of humanity would pretty much produce a CEV that is very akin to the CEVs of other possible human groups.

I think that even being an Islamic radical fundamentalist is a minor factor in overall coherence. If I recall correctly, Vladimir Nesov has said several times that people can be wrong about their values, and I pretty much agree. Of course, there is an obvious caveat that it's rather shaky to guess what other people's real values might be. Saying "You're wrong about your professed value X; your real value is along the lines of Y, because you cannot possibly diverge that much from the psychological unity of mankind" also risks seeming like claiming excessive moral authority. Still, I think it is a potentially valid argument, depending on the exact nature of X and Y.

Replies from: Roko
comment by Roko · 2010-02-17T18:34:21.496Z · LW(p) · GW(p)

Nevertheless, if there really will be as much coherence as I think, and I think there'd be a hell of a lot, picking some subset of humanity would pretty much produce a CEV that is very akin to the CEVs of other possible human groups.

And what would you do if Omega told you that the CEV of just {liberal westerners in your age group} is wildly different from the CEV of humanity? What do you think the right thing to do would be then?

Replies from: Eliezer_Yudkowsky, Unknowns, ciphergoth, wedrifid, Kutta
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-17T20:16:44.068Z · LW(p) · GW(p)

I'd ask Omega, "Which construal of volition are you using?"

There's light in us somewhere, a better world inside us somewhere, the question is how to let it out. It's probably more closely akin to the part of us that says "Wouldn't everyone getting their wishes really turn out to be awful?" than the part of us that thinks up cool wishes. And it may even be that Islamic fundamentalists just don't have any note of grace in them at all, that there is no better future written in them anywhere, that every reasonable construal of them ends up with an atheist who still wants others to burn in hell; and if so, the test I cited in the other comment, about filtering portions of the extrapolated volition that wouldn't respect the volition of another who unconditionally respected theirs, seems like it ought to filter that.

Replies from: Roko
comment by Roko · 2010-02-18T01:01:11.059Z · LW(p) · GW(p)

the test I cited in the other comment, about filtering portions of the extrapolated volition that wouldn't respect the volition of another who unconditionally respected theirs, seems like it ought to filter that.

I agree that certain limiting factors, tests, etc. could be useful. I haven't thought hard enough about this particular proposal to say whether it is really of use. My first thought is that if you have thought about it carefully, then it's probably relatively good, just based on your track record.

comment by Unknowns · 2010-02-24T04:26:11.589Z · LW(p) · GW(p)

Eliezer has already talked about this and argued that the right thing would be to run the CEV on the whole of humanity, basing himself partly on the argument that if some particular group (not us) got control of the programming of the AI, we would prefer that they run it on the whole of humanity rather than running it on themselves.

comment by Paul Crowley (ciphergoth) · 2010-02-17T23:15:11.640Z · LW(p) · GW(p)

The lives of most evildoers are, of course, largely quite prosaic, and I find it hard to believe their values in their most prosaic doings are that dissimilar from those of everyone else around the world doing prosaic things.

Replies from: Roko
comment by Roko · 2010-02-18T01:05:28.397Z · LW(p) · GW(p)

I wasn't thinking of evildoers. I was thinking of people who are just different, and have their own culture, traditions and way of life.

Replies from: Roko
comment by Roko · 2010-02-18T01:52:59.388Z · LW(p) · GW(p)

I think that thinking in terms of good and evil betrays a closet-realist approach to the problem. In reality, there are different people, with different cultures and biologically determined drives. These cultural and biological factors determine (approximately) a set of traditions, worldviews, ethical principles and moral rules, which can undergo a process of reflective equilibrium to yield a set of consistent preferences over the physical world.

We don't know how the reflective equilibrium thing will go, but we know that it could depend upon the set of traditions, ethical principles and moral rules that go into it.

If someone is an illiterate devout pentecostal Christian who lives in a village in Angola, the eventual output of the preference formation process applied to them might be very different than if it were applied to the typical LW reader.

They're not evil. They just might have a very different "should function" than me.

Replies from: steven0461, wedrifid, Vladimir_Nesov, ciphergoth
comment by steven0461 · 2010-02-18T02:31:06.215Z · LW(p) · GW(p)

I think part of the point of what you call "moral anti-realism" is that it frees up words like "evil" so that they can refer to people who have particular kinds of "should function", since there's nothing cosmic that the word could be busy referring to instead.

If I had to offer a demonology, I guess I might loosely divide evil minds into: 1) those capable of serious moral reflection but avoiding it, e.g. because they're busy wallowing in negative other-directed emotion, 2) those engaging in serious moral reflection but making cognitive mistakes in doing so, 3) those whose moral reflection genuinely outputs behavior that strongly conflicts with (the extension of) one's own values. I think 1 comes closest to what's traditionally meant by "evil", with 2 being more "misguided" and 3 being more "Lovecraftian". As I understand it, CEV is problematic if most people are "Lovecraftian" but less so if they're merely "evil" or "misguided", and I think you may in general be too quick to assume Lovecraftianity. (ETA: one main reason why I think this is that I don't see many people actually retaining values associated with wrong belief systems when they abandon those belief systems; do you know of many atheists who think atheists or even Christians should burn in hell?)

Replies from: Unknowns, Wei_Dai
comment by Unknowns · 2010-02-24T04:07:41.113Z · LW(p) · GW(p)

"One main reason why I think this is that I don't see many people actually retaining values associated with wrong belief systems when they abandon those belief systems; do you know of many atheists who think atheists or even Christians should burn in hell?)"

One main reason why you don't see that happening is that the set of beliefs that you consider "right beliefs" is politically influenced, i.e. human beliefs come in certain patterns which are not connected in themselves, but are connected by the custom that people who hold one of the beliefs usually hold the others.

For example, I knew a woman (an agnostic) who favored animal rights, and on this basis some group sent her literature asking for her help with pro-abortion activities, presumably because this is a typical pattern: people favoring animal rights are more likely to be pro-abortion. But she responded, "Just because I'm against torturing animals doesn't mean I'm in favor of killing babies," evidently quite a logical response, but not in accordance with the usual pattern.

In other words, your own values are partly determined by political patterns, and if they weren't (which they wouldn't be under CEV) you might well see people retaining values you dislike when they extrapolate.

comment by Wei Dai (Wei_Dai) · 2010-02-22T07:02:01.219Z · LW(p) · GW(p)

As I understand it, CEV is problematic if most people are "Lovecraftian" but less so if they're merely "evil" or "misguided", and I think you may in general be too quick to assume Lovecraftianity.

Most people may or may not be "Lovecraftian", but why take that risk?

Replies from: steven0461
comment by steven0461 · 2010-02-24T03:03:20.777Z · LW(p) · GW(p)

There are gains from cooperating with as many others as possible. Maybe these and other factors outweigh the risk or maybe they don't; the lower the probability and extent of Lovecraftianity, the more likely it is that they do.

Anyway, I'm not making any claims about what to do, I'm just saying people probably aren't as Lovecraftian as Roko thinks, which I conclude both from introspection and from the statistics of what moral change we actually see in humans.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-24T03:55:10.186Z · LW(p) · GW(p)

There are gains from cooperating with as many others as possible. Maybe these and other factors outweigh the risk or maybe they don't; the lower the probability and extent of Lovecraftianity, the more likely it is that they do.

I agree that "probability and extent of Lovecraftianity" would be an important consideration if it were a matter of cooperation, and of deciding how many others to cooperate with, but Eliezer's motivation in giving everyone equal weighting in CEV is altruism rather than cooperation. If it were cooperation, then the weights would be adjusted to account for contribution or bargaining power, instead of being equal.

Anyway, I'm not making any claims about what to do, I'm just saying people probably aren't as Lovecraftian as Roko thinks, which I conclude both from introspection and from the statistics of what moral change we actually see in humans.

To reiterate, "how Lovecraftian" isn't really the issue. Just by positing the possibility that most humans might turn out to be Lovecraftian, you're operating in a meta-ethical framework at odds with Eliezer's, and in which it doesn't make sense to give everyone equal weight in CEV (or at least you'll need a whole other set of arguments to justify that).

That aside, the statistics you mention might also be skewed by an anthropic selection effect.

comment by wedrifid · 2010-02-24T03:19:53.757Z · LW(p) · GW(p)

They're not evil. They just might have a very different "should function" than me.

Alternately: They're evil. They have a very different 'should function' to me.

comment by Vladimir_Nesov · 2010-02-21T10:45:17.553Z · LW(p) · GW(p)

If someone is an illiterate devout pentecostal Christian who lives in a village in Angola, the eventual output of the preference formation process applied to them might be very different than if it were applied to the typical LW reader.

Consider the distinction between whether the output of a preference-aggregation algorithm will be very different for the Angolan Christian, and whether it should be very different. Some preference-aggregation algorithms may just be confused into giving diverging results because of inconsequential distinctions, which would be bad news for everyone, even the "enlightened" westerners.

(To be precise, the relevant factual statement is about whether any two same-culture people get preferences visibly closer to each other than any two culturally distant people do. It's like the relatively small genetic relevance of skin color, where within-race variation is greater than between-race variation.)
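A small sketch of the within-group versus between-group comparison, with made-up one-dimensional "preference scores" (both the numbers and the encoding are illustrative assumptions, not anything measured):

```python
from statistics import mean, pvariance

# Made-up one-dimensional "preference scores" for two cultural groups.
group_a = [2.0, 5.0, 9.0, 4.0, 7.0]
group_b = [3.0, 6.0, 10.0, 5.0, 8.0]

within = mean([pvariance(group_a), pvariance(group_b)])  # spread inside each group
between = pvariance([mean(group_a), mean(group_b)])      # spread of the group means

print(within, between)
# 5.84 0.25 -- within-group variation dwarfs between-group variation here
```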

comment by Paul Crowley (ciphergoth) · 2010-02-18T13:54:33.119Z · LW(p) · GW(p)

I think we agree about this actually - several people's picture of someone with alien values was an Islamic fundamentalist, and they were the "evildoers" I have in mind...

comment by wedrifid · 2010-02-24T04:37:38.824Z · LW(p) · GW(p)

And what would you do if Omega told you that the CEV of just {liberal westerners in your age group} is wildly different from the CEV of humanity? What do you think the right thing to do would be then?

The right thing for me to do is to run CEV on myself, almost by definition. The CEV oracle that I am using to work out my CEV can dereference the dependencies to other CEVs better than I can.

comment by Kutta · 2010-02-17T19:28:28.781Z · LW(p) · GW(p)

If truly, really wildly different? Obviously, I'd just disassemble them to useful matter via nanobots.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-02-17T19:40:51.367Z · LW(p) · GW(p)

No, not obviously; I can't say I've ever seen anyone else claim to completely condition their concern for other people on the possession of similar reflective preferences.

(Or is your point that they probably wouldn't stay people for very long, if given the means to act on their reflective preferences? That wouldn't make it OK to kill them before then, and it would probably constitute undesirable True PD defection to do so afterwards.)

Replies from: Kutta
comment by Kutta · 2010-02-17T19:58:00.950Z · LW(p) · GW(p)

Well, my above reply was a bit tongue-in-cheek. My concern for other things in general is just as complex as my morality and it contains many meta elements such as "I'm willing to modify my preference X in order to conform to your preference Y because I currently care about your utility to a certain extent". On the simplest level, I care for things on a sliding scale that ranges from myself to rocks or Clippy AIs with no functional analogues for human psychology (pain, etc.). Somebody with a literally wildly differing reflective preference would not be a person and, as you say, would preferably be dealt with in True PD terms rather than through ordinary, altruism-contaminated human-to-human interactions.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-02-17T22:19:55.026Z · LW(p) · GW(p)

Somebody with a literally wildly differing reflective preference would not be a person

This is a very nonstandard usage; personhood is almost universally defined in terms of consciousness and cognitive capacities, and even plausibly relevant desire-like properties like boredom don't have much to do with reflective preference/volition.

comment by Morendil · 2010-02-17T14:55:56.051Z · LW(p) · GW(p)

How does adding factual knowledge get rid of people's desire to hurt someone else out of revenge?

"If we knew better" is an ambiguous phrase, I probably should have used Eliezer's original: "if we knew more, thought faster, were more the people we wished we were, had grown up farther together". That carries a lot of baggage, at least for me.

I don't experience (significant) desires for revenge, so I can only extrapolate from fictional evidence. Say the "someone" in question killed a loved one, and I wanted to hurt them for that. Suppose further that they were no longer able to kill anyone else. Given the time and the means to think about it clearly, I could see that hurting them would not improve the state of the world for me or for anyone else, and would only impose further unnecessary suffering.

The (possibly flawed) assumption of CEV, as I understood it, is that if I could reason flawlessly, non-pathologically about all of my desires and preferences, I would no longer cleave to the self-undermining ones, and what remains would be compatible with the non-self-undermining desires and preferences of the rest of humanity.

Caveat: I have read the original CEV document but not quite as carefully as maybe I should have, mainly because it carried a "Warning: obsolete" label and I was expecting to come across more recent insights here.

comment by Wei Dai (Wei_Dai) · 2010-02-17T18:16:22.318Z · LW(p) · GW(p)

Also, can you expand on what you mean by "Rawlsian Reflective Equilibrium"?

http://plato.stanford.edu/entries/reflective-equilibrium/

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-17T21:57:20.427Z · LW(p) · GW(p)

I am only part way through but I really recommend that link. So far it's really helped me think about this.

Replies from: Morendil
comment by Morendil · 2010-02-17T22:28:10.583Z · LW(p) · GW(p)

The rest of Rawls' Theory of Justice is good too. I'm trying to figure out for myself (before I finally break down and ask) how CEV compares to the veil of ignorance.

comment by Wei Dai (Wei_Dai) · 2010-02-17T17:57:00.528Z · LW(p) · GW(p)

I wish you had written this a few weeks earlier, because it's perfect as a link for the "their associated difficulties and dangers" phrase in my "Complexity of Value != Complexity of Outcome" post.

Please consider upgrading this comment to a post, perhaps with some links and additional explanations. For example, what is the ontology problem in ethics?

Replies from: Roko, Roko
comment by Roko · 2010-02-17T18:30:53.500Z · LW(p) · GW(p)

The ontology problem is the following: your values are defined in terms of a set of concepts. These concepts are essentially predictively useful categorizations in your model of the world. When you do science, you find that your model of the world is wrong, and you build a new model that has a different set of parts. But what do you do with your values?
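A toy sketch of how this bites, under the illustrative assumption that a value function is keyed directly to the old model's concepts (the "souls" example and the dictionary world-model are made up for the illustration, not part of any actual proposal):

```python
# A value function written against the old world-model's concepts...
def value_old(world_state):
    # "souls_at_peace" is a concept of the old ontology; the value is defined in terms of it.
    return len(world_state["souls_at_peace"])

old_world = {"souls_at_peace": ["alice", "bob"]}
print(value_old(old_world))  # 2

# ...stops applying once science hands us a better model built from different parts.
new_world = {"particle_configurations": ["state_1", "state_2"]}
try:
    value_old(new_world)
except KeyError as missing:
    print("ontology problem: old value concept absent from the new model:", missing)
```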

Replies from: MichaelVassar
comment by MichaelVassar · 2010-02-19T16:58:53.805Z · LW(p) · GW(p)

In practice, I find that this is never a problem. You usually rest your values on some intuitively obvious part whatever originally caused you to create the concepts in question.

Replies from: Roko
comment by Roko · 2010-02-19T21:31:49.151Z · LW(p) · GW(p)

Subjective anticipation is a concept that a lot of people rest their axiology on. But it looks like subjective anticipation is an artefact of our cognitive algorithms, and all kinds of Big world theories break it. For example, MW QM means that subjective anticipation is nonsense.

Personally, I find this extremely problematic, and in practice, I think that I am just ignoring it.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-19T21:41:14.888Z · LW(p) · GW(p)

I think mind copying technology may be a better illustration of the subjective anticipation problem than MW QM, but I agree that it's a good example of the ontology problem. BTW, do you have a reference for where the ontology problem was first stated, in case I need to reference it in the future?

Replies from: Roko
comment by Roko · 2010-02-20T23:58:31.567Z · LW(p) · GW(p)

I mentioned it on my blog in August 2008, in the post "ontologies, approximations and fundamentalists".

Peter de Blanc invented it independently, and I think that one of Eliezer and Marcello probably did too.

Replies from: Eliezer_Yudkowsky, Wei_Dai
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-21T00:36:38.314Z · LW(p) · GW(p)

I invented it sometime around the dawn of time, don't know if Marcello did in advance or not.

Actually, I don't know if I could have claimed to invent it; there may be science fiction priors.

comment by Wei Dai (Wei_Dai) · 2010-02-21T02:04:26.552Z · LW(p) · GW(p)

Thanks for the pointer, but I think the argument you gave in that post is wrong. You argued that an agent smaller than the universe has to represent its goals using an approximate ontology (and therefore would have to later re-phrase its goals relative to more accurate ontologies). But such an agent can represent its goals/preferences in compressed form, instead of using an approximate ontology. With such compressed preferences, it may not have the computational resources to determine with certainty which course of action best satisfies its preferences, but that is just a standard logical uncertainty problem.

I think the ontology problem is a real problem, but it may just be a one-time problem, where we or an AI have to translate our fuzzy human preferences into some well-defined form, instead of a problem that all agents must face over and over again.

Replies from: Roko
comment by Roko · 2010-02-21T02:36:17.061Z · LW(p) · GW(p)

But such an agent can represent its goals/preferences in compressed form, instead of using an approximate ontology.

Yes, if it has compressible preferences, which in reality is the case for e.g. humans and many plausible AIs.

In reality, problems where you discover that your preferences are stated in terms of an incorrect ontology (e.g. souls, or anticipated future experience) are where this really bites.

it may just be a one-time problem, where we or an AI have to translate our fuzzy human preferences into some well-defined form, instead of a problem that all agents must face over and over again.

I think that depends upon the structure of reality. Maybe there will be a series of philosophical shocks as severe as the physicality of mental states, Big Worlds, quantum MWI, etc. Suspicion should definitely be directed at what horrors will be unleashed upon a human or AI that discovers a correct theory of quantum gravity.

Just as Big World cosmology can erode aggregative consequentialism, maybe the ultimate nature of quantum gravity will entirely erode any rational decision-making; perhaps some kind of ultimate ensemble theory already has.

On the other hand, the idea of a one-time shock is also plausible.

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2010-02-21T03:23:02.378Z · LW(p) · GW(p)

The reason I think it can just be a one-time shock is that we can extend our preferences to cover all possible mathematical structures. (I talked about this in Towards a New Decision Theory.) Then, no matter what kind of universe we turn out to live in, whichever theory of quantum gravity turns out to be correct, the structure of the universe will correspond to some mathematical structure which we will have well-defined preferences over.

perhaps some kind of ultimate ensemble theory already has [eroded any rational decision-making].

I addressed this issue a bit in that post as well. Are you not convinced that rational decision-making is possible in Tegmark's Level IV Multiverse?

Replies from: Vladimir_Nesov, Roko
comment by Vladimir_Nesov · 2010-02-21T10:00:34.577Z · LW(p) · GW(p)

The next few posts on my blog are going to be basically about approaching this problem (and given the occasion, I may as well commit to writing the first post today).

You should read [*] to get a better idea of why I see "preference over all mathematical structures" as a bad call. We can't say what "all mathematical structures" is; any given foundation only covers a portion of what we could invent. Like the real world, the mathematics that we might someday encounter can only be completely defined by the process of discovery (but if you capture this process, you may need nothing else).

--
[*] S. Awodey (2004). "An Answer to Hellman's Question: 'Does Category Theory Provide a Framework for Mathematical Structuralism?'". Philosophia Mathematica 12(1):54-64.

Replies from: Roko, Wei_Dai
comment by Roko · 2010-02-22T22:38:33.533Z · LW(p) · GW(p)

The idea that ethics depends upon one's philosophy of mathematics is intriguing.

By the way, I see no post about this on the causality relay!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-23T08:03:39.417Z · LW(p) · GW(p)

Hope to finish it today... Though I won't talk about philosophy of mathematics in this sub-series; I'm just going to reduce the ontological confusion about preference and laws of physics to a (still somewhat philosophical, but taking place in a comfortably formal setting) question of static analysis of computer programs.

Replies from: wedrifid
comment by wedrifid · 2010-02-23T10:19:47.524Z · LW(p) · GW(p)

Great to hear. Looking forward to reading it.

comment by Wei Dai (Wei_Dai) · 2010-02-22T23:38:22.251Z · LW(p) · GW(p)

Yes, talking about "preference over all mathematical structures" does gloss over some problems in the philosophy of mathematics, and I am sympathetic to anti-foundationalist views like Awodey's.

Also, in general I agree with Roko on the need for an AI that can do philosophy better than any human, so in this thread I was mostly picking a nit with a specific argument that he had.

(I was going to remind you about the missing post, but I see Roko already did. :)

comment by Roko · 2010-02-22T17:38:47.367Z · LW(p) · GW(p)

we can extend our preferences to cover all possible mathematical structures.

I define the following structure: if you take action a, all logically possible consequences will follow, i.e. all computable sensory I/O functions, generated by all possible computable changes in the objective physical universe. This holds for all a. This is facilitated by the universe creating infinitely many copies of you every time you take an action, and there being literally no fact of the matter about which one is you.

Now if you have already extended your preferences over all possible mathematical structures, you presumably have a preferred action in this case. But the preferred action is really rather unrelated to your life before you made this unsettling discovery. Beings that had different evolved desires (such as seeking status versus maximizing offspring) wouldn't produce systematically different preferences; they'd essentially have to choose at random.

If Tegmark Level 4 is, in some sense "true", this hypothetical example is not really so hypothetical - it is very similar to the situation that we are in, with the caveat that you can argue about weightings/priors over mathematical structures, so some consequences get a lower weighting than others, given the prior you chose.

My intuition tells me that Level 4 is a mistake, and that there is such a thing as the consequence of my actions. However, mere MW quantum mechanics casts doubt on the idea of anticipated subjective experience, so I am suspicious of my anti-multiverse intuition. Perhaps what we need is the equivalent of a theory of Born probabilities for Tegmark Level 4 - something in the region of what Nick Bostrom tried to do in his book on anthropic reasoning (though it looks like Nick simply added more arbitrariness into the mix in the form of reference classes).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-22T23:19:59.968Z · LW(p) · GW(p)

My intuition tells me that Level 4 is a mistake, and that there is such a thing as the consequence of my actions.

I disagree on the first part, and agree on the second part.

with the caveat that you can argue about weightings/priors over mathematical structures, so some consequences get a lower weighting than others, given the prior you chose.

Yes, and that's enough for rational decision making. I'm not really sure why you're not seeing that...
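
(A minimal sketch of the decision rule being gestured at here, with the class of structures M, the weighting w, and the utility U introduced purely for illustration rather than as anything either of us has pinned down:)

    a^* = \arg\max_a \sum_{M \in \mathcal{M}} w(M)\, U(\mathrm{outcome}_M(a))

That is, you pick the action whose consequences, summed over all the structures you give non-negligible weight, score best; the weights play exactly the role that probabilities play in ordinary expected utility.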

Replies from: Roko
comment by Roko · 2010-02-22T23:40:58.728Z · LW(p) · GW(p)

Yes, and that's enough for rational decision making.

I agree that you can turn the handle on a particular piece of mathematics that resembles decision-making, but some part of me says that you're just playing a game with yourself: you decide that everything exists, then you put a prior over everything, then you act to maximize your utility, weighted by that prior. It is certainly a blow to one's intuition that one can only salvage the ability to act by playing a game of make-believe that some sections of "everything" are "less real" than others, where your real-ness prior is something you had to make up anyway.

Others also think that I am just slow on the uptake of this idea. But to me the idea that reality is not fixed but relative to what real-ness prior you decide to pick is extremely ugly. It would mean that the utility of technology to achieve things is merely a shared delusion, that if a theist chose a real-ness prior that assigned high real-ness only to universes where a theistic god existed then he would be correct to pray, etc. Effectively you're saying that the postmodernists were right after all.

Now, the fact that I have a negative emotional reaction to this proposal doesn't make it less true, of course.

Replies from: Vladimir_Nesov, Wei_Dai
comment by Vladimir_Nesov · 2010-02-23T08:40:59.528Z · LW(p) · GW(p)

There is a deep analogy between how you can't change the laws of physics (contents of reality, apart from lawfully acting) and how you can't change your own program. It's not a delusion unless it can be reached by mistake. The theist can't be right to act as if a deity exists unless his program (brain) is such that it is the correct way to act, and he can't change his mind for it to become right, because it's impossible to change one's program, only act according to it.

Replies from: Roko
comment by Roko · 2010-02-23T14:27:16.601Z · LW(p) · GW(p)

The problem is that this point of view means that in a debate with someone who is firmly religious, not only is the religious person right, but you regret the fact that you are "rational"; you lament "if only I had been brought up with religious indoctrination, I would correctly believe that I am going to heaven".

Any rational theory that leaves you lamenting your own rationality deserves some serious scepticism.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-23T16:58:58.247Z · LW(p) · GW(p)

Following the same analogy, you can translate it as "if only the God did in fact exist, ...". The difference doesn't seem particularly significant -- both "what ifs" are equally impossible. "Regretting rationality" is on a different level -- rationality in the relevant sense is a matter of choice. The program that defines your decision-making algorithm isn't.

I still fear that you are reading into my words something very different from what I intend, as I don't see the possibility of a religious person's mind actually acting as if God is real. A religious person may have a free-floating network of beliefs about God, but it doesn't survive under reflection. A true god-impressed mind would actually act as if God is real, no matter what; it won't be deconvertable, and indeed under reflection an atheist god-impressed mind will correctly discard atheism.

Not all beliefs are equal: a human atheist is correct not just by an atheist's standard, and a human theist is incorrect not just by an atheist's standard. The standard is in the world, or, under this analogy, in the mind. (The mind is a better place for ontology, because preference is also there, and a human mind can be completely formalized, unlike the unknown laws of physics. By the way, the first post is up).

Replies from: Roko
comment by Roko · 2010-02-23T17:25:23.850Z · LW(p) · GW(p)

So your argument is that the reason that the theists are wrong is because they only sorta-kinda believe in God anyway, but if they really believed, then they'd be just as right as we are?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-23T18:33:59.226Z · LW(p) · GW(p)

So your argument is that the reason that the theists are wrong is because they only sorta-kinda believe in God anyway, but if they really believed, then they'd be just as right as we are?

But only in the sense that their calculation could be correct according to a particularly weird prior. The difference between a normal theist and a "god-impressed mind" who both believe in God is one of rationality: the former makes mistakes in updating beliefs, the latter probably doesn't. The same goes for an atheist god-impressed mind and a human atheist. You can't expect to find that weird a prior in a human. And of course, you should say that the god-impressed are wrong about their beliefs, though they correctly follow the evidence according to their prior. If you value their success in the real world more than the autonomy of their preference, you may want to reach into their minds and make appropriate changes.

I should say again: the program that defines the decision-making algorithm can't be normally changed, which means that one can't be really "converted" to a different preference, though one can be converted to different beliefs and feelings. Observations don't change the algorithm; they are processed according to that algorithm. This means that if you care about reflective consistency (and everyone does, in the sense of preservation of preference), you'd try to counteract the unwanted effects of environment on yourself, including the self-promoting effects where you start liking the new situation. The extent to which you like the new situation, the "level of conviction", is pretty much irrelevant, just like the presence of a losing psychological drive. It'd take great integrity (not "strength of conviction") in the change for significantly different values to really sink in, in the sense that the new preference-on-reflection will resemble the new beliefs and feelings similarly to how the native preference-on-reflection resembles native (sane, secular, etc.) beliefs and feelings.

Replies from: Roko
comment by Roko · 2010-02-23T22:20:50.176Z · LW(p) · GW(p)

one can't be really "converted" to a different preference, though one can be converted to different beliefs and feelings

I doubt that you can define a way to choose an algorithm out of a human brain that makes that sentence true.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-02-23T22:47:35.628Z · LW(p) · GW(p)

Yes, that wasn't careful. In this context, I mean "no large shift of preference". Tiny changes occur all the time (and are actually very important if you scale them up by giving the preference with/without these changes to an FAI). You can model the extent of reversibility (as compared to a formal computer program) by roughly what can be inferred about the person's past, which doesn't necessarily all have to come from the person's brain. (By an algorithm in a human brain I mean all of the human brain, basically a program that would run an upload implementation, together with the data.)

comment by Wei Dai (Wei_Dai) · 2010-02-23T00:07:39.764Z · LW(p) · GW(p)

I agree that it's ugly to think of the weights as a pretense on how real certain parts of reality are. That's why I think it may be better to think of them as representing how much you care about various parts of reality. (For the benefit of other readers, I talked about this in What Are Probabilities, Anyway?.)

Actually, I haven't completely given up the idea that there is some objective notion of how real, or how important, various parts of reality are. It's hard to escape the intuition that some parts of math are just easier to reach or find than others, in a way that is not dependent on how human minds work.

comment by Vladimir_Nesov · 2010-02-21T10:12:51.007Z · LW(p) · GW(p)

In reality, problems where you discover that your preferences are stated in terms of an incorrect ontology (e.g. souls, or anticipated future experience) are where this really bites.

I believe even personal identity falls under this category. A lot of moral intuitions work with the-me-in-the-future object, as marked in the map. To follow these intuitions, it is very important for us to have a good idea of where the-me-in-the-future is, to have a good map of this thing. When you get to weird thought experiments with copying, this epistemic step breaks down, because if there are multiple future-copies, the-me-in-the-future is a pattern that is absent. As a result, moral intuitions that indirectly work through this mark on the map get confused and start giving the wrong answers as well. This can be readily observed, for example, in the preference inconsistency over time expected in such thought experiments (you precommit to teleporting-with-delay, but then your copy that is to be destroyed starts complaining).

Personal identity is (in general) a wrong epistemic question asked by our moral intuition. Only if preference is expressed in terms of the territory (or rather in a form flexible enough to follow all possible developments), including the parts currently represented in moral intuition in terms of the-me-in-the-future object in the territory, will the confusion with expectations and anthropic thought experiments go away.

comment by Roko · 2010-02-17T18:31:10.881Z · LW(p) · GW(p)

Please consider upgrading this comment to a post, perhaps with some links and additional explanations. For example, what is the ontology problem in ethics?

  • thanks, I'll consider it
comment by Paul Crowley (ciphergoth) · 2010-02-17T13:20:16.467Z · LW(p) · GW(p)

A solution to the problems of ethics, including the repugnant conclusion, preference aggregation, etc.

Isn't this one of the problems you can let the FAI solve?

Replies from: Roko
comment by Roko · 2010-02-17T13:47:50.129Z · LW(p) · GW(p)

Actually the repugnant conclusion yes, preference aggregation no, because you have to aggregate individual humans' preferences.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-02-18T19:57:26.663Z · LW(p) · GW(p)

And what if preferences cannot be measured by a common "ruler"? What then?

Replies from: Roko
comment by Roko · 2010-02-19T00:56:50.275Z · LW(p) · GW(p)

I agree that preference aggregation is hard. Wei Dai and Nick Bostrom have both made proposals based upon agents negotiating with some deadline or constraint.

comment by Mitchell_Porter · 2010-02-18T05:40:23.418Z · LW(p) · GW(p)

Maybe I'm crazy but all that doesn't sound so hard.

More precisely, there's one part, the solution to which should require nothing more than steady hard work, and another part which is so nebulous that even the problems are still fuzzy.

The first part - requiring just steady hard work - is everything that can be reduced to existing physics and mathematics. We're supposed to take the human brain as input and get a human-friendly AI as output. The human brain is a decision-making system; it's a genetically encoded decision architecture or decision architecture schema, with the parameters of the schema being set in the individual by genetic or environmental contingencies. CEV is all about answering the question: If a superintelligence appeared in our midst, what would the human race want its decision architecture to be, if we had time enough to think things through and arrive at a stable answer? So it boils down to asking, if you had a number of instances of the specific decision architecture human brain, and they were asked to choose a decision architecture for an entity of arbitrarily high intelligence that was to be introduced into their environment, what would be their asymptotically stable preference? That just doesn't sound like a mindbogglingly difficult problem. It's certainly a question that should be answerable for much simpler classes of decision architecture.

So it seems to me that the main challenge is simply to understand what the human decision architecture is. And again, that shouldn't be beyond us at all. The human genome is completely sequenced, we know the physics of the brain down to nucleons, there's only a finite number of cell types in the body - yes it's complicated, but it's really just a matter of sticking with the problem. (Or would be, if there was no time factor. But how to do all this quickly is a separate problem.)

So to sum up, all we need to do is to solve the decision theory problem 'if agents X, Y, Z... get to determine the value system and cognitive architecture of a new, superintelligent agent A which will be introduced into their environment, what would their asymptotic preference be?'; correctly identify the human decision architecture; and then substitute this for X, Y, Z... in the preceding problem.

That's the first part, the 'easy' part. What's the second part, the hard but nebulous part? Everything to do with consciousness, inconceivable future philosophy problems, and so forth. Now what's peculiar about this situation is that the existence of nebulous hard problems suggests that the thinker is missing something big about the nature of reality, and yet the easy part of the problem seems almost completely specified. How can the easy part appear closed, an exactly specified problem simply awaiting solution, and yet at the same time, other aspects of the overall task seem so beyond understanding? This contradiction is itself something of a nebulous hard problem.

Anyway, achieving the CEV agenda seems to require a combination of steady work on a well-defined problem where we do already have everything we need to solve it, and rumination on nebulous imponderables in the hope of achieving clarity - including clarity about the relationship between the imponderables and the well-defined problem. I think that is very doable - the combination of steady work and contemplation, that is. And the contemplation is itself another form of steady work - steadily thinking about the nebulous problems, until they resolve themselves.

So long as there are still enigmas in the existential equation we can't be sure of the outcome, but I think we can know, right now, that it's possible to work on the problem (easy and hard aspects alike) in a systematic and logical way.

comment by wedrifid · 2010-02-17T20:50:33.844Z · LW(p) · GW(p)

An AI that can simulate the outcome of human conscious deliberation, without actually instantiating a human consciousness, i.e. a detailed technical understanding of the problem of conscious experience

Could you clarify for me what you mean by requiring that a human consciousness not be instantiated? Is it that you don't believe it is possible to elicit a CEV from a human if instantiation is involved, or that you object to the consequences of simulating human consciousnesses in potentially undesirable situations?

In the case of the latter I observe that this is only a problem under certain CEVs and so is somewhat different in nature to the other requirements. Some people's CEVs could then be extracted more easily than others.

comment by timtyler · 2010-02-17T20:20:54.626Z · LW(p) · GW(p)

Other approaches seem likely to get there first.

...and what have you got against testing?

Replies from: timtyler
comment by timtyler · 2010-02-18T20:53:33.726Z · LW(p) · GW(p)

No reply. Just so you know, the collective position on testing here is bizarre.

How you can think that superintelligent agents are often dangerous AND that a good way of dealing with this is to release an untested one on the world is really quite puzzling.

Hardly anyone ever addresses the issue. When they do, it is by pointing to AI box experiments, which purport to show that a superintelligence can defeat a lesser intelligence, even if well strapped down.

That seems irrelevant to me. To build a jail for the smartest agent in the world, you do not use vastly less powerful agents as guards, you use slightly less powerful ones. If necessary, you can dope the restrained agent up a bit. There are in fact all manner of approaches to this problem - I recommend thinking about them some more before discarding the whole idea of testing superintelligences.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-18T22:15:40.540Z · LW(p) · GW(p)

No-one's against testing; that precaution should be taken, but it's not the most pressing concern at this stage.

Replies from: timtyler
comment by timtyler · 2010-02-18T23:13:34.778Z · LW(p) · GW(p)

See "All of the above working first time, without testing the entire superintelligence", upthread. This is not the first time.

comment by Karl_Smith · 2010-02-19T00:04:02.575Z · LW(p) · GW(p)

I am nowhere near caught up on FAI readings, but here is a humble thought.

What I have read so far seems to be assuming a single-jump FAI. That is, once the FAI is set, it must take us to where we ultimately want to go without further human input. Please correct me if I am wrong.

What about a multistage approach?

The problem that people might immediately bring up is that a multistage approach might lead to elevating subgoals to goals. We say, "take us to mastery of nanotech" and the AI decides to rip us apart and organize all existing ribosomes under a coherent command.

However, perhaps what we need to do is verify that any intermediate goal state is better than the current state.

So what if we have the AI guess a goal state, then simulate that goal state and expose some subset of humans to that simulation? The AI then asks, "Proceed to this stage or not?" The humans answer.

Once in the next stage, we can reassess.

To give a sense of motivation: it seems that verifying the goodness of a future state is easier than trying to construct the basic rules of good statedness.

comment by timtyler · 2010-02-17T20:41:11.588Z · LW(p) · GW(p)

Powerful machine intelligences can be expected to have natural drives to eliminate competing goal-based systems.

So, unless there are safeguards against it, a machine intelligence is likely to assassinate potential designers of other machine intelligences which may have subtly different goals. IMO, assassinating your competitors is not an acceptable business practice.

CEV doesn't seem to have much in the way of safeguards against this. It isn't even constrained to follow the law of the land. I think as it stands, it has clear criminal tendencies - and so should not be built.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-18T23:15:52.102Z · LW(p) · GW(p)

People aren't constrained to follow the law of the land either.

Replies from: timtyler
comment by timtyler · 2010-02-18T23:22:01.415Z · LW(p) · GW(p)

Fortunately for the rest of us, most criminals are relatively impotent.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-02-19T17:01:21.859Z · LW(p) · GW(p)

Have you ever heard of the last president of the US? He's a particularly extreme example of criminality for a president, but I'm pretty sure that all or nearly all presidents would count as extremely criminal compared to what you are used to from day-to-day life. Congressmen likewise.

Replies from: CronoDAS, mattnewport, timtyler
comment by CronoDAS · 2010-02-19T20:48:06.043Z · LW(p) · GW(p)

Hence the qualifier "most".

Also, does driving faster than the speed limit make you technically a criminal? How about downloading pirated MP3s?

comment by mattnewport · 2010-02-19T20:51:06.585Z · LW(p) · GW(p)

What sense of 'criminal' are you using here? Presumably not 'convicted of a crime by a court' since that is relatively rare for politicians. Do you mean 'have committed acts that are against the law but have not been prosecuted' or do you mean 'have committed acts that in my view are/should-be-viewed-as criminal but have not actually broken the law technically'?

Replies from: wnoise
comment by wnoise · 2010-02-19T22:58:06.157Z · LW(p) · GW(p)

He has publicly admitted to ordering violations of the FISA statute, a felony, so certainly "have committed acts that are against the law but have not been prosecuted".

comment by timtyler · 2010-02-19T20:23:27.210Z · LW(p) · GW(p)

US politics is not my area - but I don't think there has ever been a criminal prosecution of an incumbent president.

However, sometimes criminals do get some influence and cause significant damage. It seems like a good reason to do what you can to prevent such things from happening.

comment by JamesAndrix · 2010-02-17T06:28:08.263Z · LW(p) · GW(p)

http://akshar100.wordpress.com/2007/06/18/the-nigt-i-met-einstein/

Replies from: Nisan, CronoDAS
comment by Nisan · 2010-02-20T07:44:52.918Z · LW(p) · GW(p)

Someone doesn't like Bach because he was traumatized by an exposure to classical music at a tender age? Give me a break. Music is like languages, not math -- the surest way to learn to like Bach is full immersion at a young age, not a graduated curriculum that starts from "lower" forms of music.

comment by CronoDAS · 2010-02-17T07:50:25.516Z · LW(p) · GW(p)

That's a very nice story.

comment by Tiiba · 2010-02-16T18:56:07.322Z · LW(p) · GW(p)

I made a couple posts in the past that I really hoped to get replies to, and yet not only did I get no replies, I got no karma in either direction. So I was hoping that someone would answer me, or at least explain the deafening silence.

This one isn't a question, but I'd like to know if there are holes in my reasoning. http://lesswrong.com/lw/1m7/dennetts_consciousness_explained_prelude/1fpw

Here, I had a question: http://lesswrong.com/lw/17h/the_lifespan_dilemma/13v8

Replies from: Tyrrell_McAllister, Singularity7337
comment by Tyrrell_McAllister · 2010-02-16T20:52:04.871Z · LW(p) · GW(p)

I looked at your consciousness comment. First, consciousness is notoriously difficult to write about in a way that readers find both profound and comprehensible. So you shouldn't take it too badly that your comment didn't catch fire.

Speaking for myself, I didn't find your comment profound (or I failed to comprehend that there was profundity there). You summarize your thesis by writing "Basically, a qualium is what the algorithm feels like from the inside for a self-aware machine." (The singular of "qualia" is "quale", not "qualium", btw.)

The problem is that this is more like a definition of "quale" than an explanation. People find qualia mysterious when they ask themselves why some algorithms "feel like" anything from the inside. The intuition is that you have both

  1. the code — that is, an implementable description of the algorithm; and

  2. the quale — that is, what it feels like to be an implementation of the algorithm.

But the quale doesn't seem to be anywhere in the code, so where does it come from? And, if the quale is not in the code, then why does the code give rise to that quale, rather than to some other one?

These are the kinds of questions that most people want answered when they ask for an explanation of qualia. But your comment didn't seem to address issues like these at all.

(Just to be clear, I think that those questions arise out of a wrong approach to consciousness. But any explanation of consciousness has to unconfuse humans, or it doesn't deserve to be called an explanation. And that means addressing those questions, even if only to relieve the listener of the feeling that they are proper questions to ask.)

Replies from: Tiiba
comment by Tiiba · 2010-02-17T04:52:26.512Z · LW(p) · GW(p)

"So you shouldn't take it too badly that your comment didn't catch fire."

I'm not mad, but... Just see it from my point of view. An interesting thought doesn't come to guys like me every day. ;)

"But the quail doesn't seem to be anywhere in the code, so where does it come from?"

I think it's in the code. When I try to imagine a mind that has no qualia, I imagine something quite unlike myself.

What would it actually be like for us to not have qualia? It could mean that I would look at a red object and think, "object, rectangular, apparent area 1 degree by 0.5 degrees, long side vertical, top left at (100, 78), color 0xff0000". That would be the case where the algorithm has no inside, so it doesn't need to feel like anything from the inside. Nothing about our thoughts would be "ineffable". (Although it would be insulting to call a being unconscious or, worse, "not self-aware" for knowing itself better than we do... Hmm. I guess qualia and consciousness are separate after all. Or are they? But I'm dealing with qualia right now.)

Or, the nerve could send its impulse directly into a muscle, like in jellyfish. That would mean that the hole in my knowledge is so big that the quail for "touch" falls through it.

In my mind, touch leaves a memory, and I then try to look at this memory. I ask my brain, "what does touch feel like?", and I get back, "Error: can't decompile native method. But I can tell you definitely what it doesn't feel like: greenness." So what I'm saying is, I can't observe what the feeling of touch is made of, but it has enough bits to not confuse it with green.

It makes me [feel] unconfused. Although it might be confusing.

"Just to be clear, I think that those questions arise out of a wrong approach to consciousness."

What's your approach?

Replies from: prase
comment by prase · 2010-02-17T16:30:19.887Z · LW(p) · GW(p)

I don't understand your explanation. You are apparently saying that quale (you seem to deliberately misspell the word, why?) is how the algorithm feels from inside. Well, I agree, but in the same time I think that "quale" is only a philosopher's noble word for "feel from inside". The explanation looks like a truism.

I have always been (and still am) confused by questions like: How do other people perceive colors? Do they feel them the same way I do? Are there people who see the colors inverted, having the equivalent of my feeling of "redness" when they look at green objects? They will call that feeling "greenness", of course, but can their redness be my greenness and vice versa? What about colorblind people? If I lost the ability to tell blue from green, would I feel blueness or greenness when looking at those colors? What does it in fact mean, to compare feelings of different people? Or even, how does it feel to be a dog? A fish? A snail?

I am almost sure that the questions themselves are confused, without clear meaning, and can be explained away as such, but still I find them appealing in some strange way.

I have always wanted to try an experiment on myself, but I am also afraid of it and don't have the opportunity. I would buy glasses which invert colors like on a photographic negative and wear them without interruption for some time. Certainly in the beginning I would feel redness when looking at trees, but it may be that I would accommodate and start feeling greenness instead. Or not. Either way, it would be a valuable experience. Has anybody tried something similar?

Replies from: Steve_Rayhawk, ektimo, Tiiba
comment by Steve_Rayhawk · 2010-02-19T22:18:01.960Z · LW(p) · GW(p)

I once heard of people at UCSD who had plans to experiment with inverted spectrum goggles. Jonathan Cohen would know more.

Replies from: prase, mattnewport
comment by prase · 2010-02-20T07:49:55.787Z · LW(p) · GW(p)

Thanks, seems interesting.

comment by mattnewport · 2010-02-19T22:37:19.742Z · LW(p) · GW(p)

Would this be possible optically? The only way I can see it working is using a live video feed with some image processing to invert colours. That is probably quite practical using modern technology though.
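
(Purely as an illustration of the image-processing half, and assuming a standard webcam plus the Python bindings for OpenCV: a colour-negative live feed is only a few lines, since inverting an 8-bit image is just 255 minus each channel. This is a sketch, not a claim about how anyone has actually built such goggles.)

    import cv2

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Photographic-negative colours: invert every 8-bit channel.
        cv2.imshow("inverted", 255 - frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()

Video goggles fed from a head-mounted camera would just run something like this between capture and display.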

comment by ektimo · 2010-02-17T23:31:05.243Z · LW(p) · GW(p)

Interesting experiment. It reminds me of an experiment where subjects wore glasses that turned the world upside down (really, right side up for the projection on our eye), and eventually they adjusted so that the world looked upside down when they took the glasses off.

What do you think a "yes" or "no" in your experiment would mean?

Note, Dennett says in Quining Qualia:

On waking up and finding your visual world highly anomalous, you should exclaim "Egad! Something has happened! Either my qualia have been inverted or my memory-linked qualia-reactions have been inverted. I wonder which!"

Replies from: prase
comment by prase · 2010-02-18T08:49:44.651Z · LW(p) · GW(p)

I know about the experiment you mention, and it partly motivated my suggestion; I just subjectively find "yellowness" and "blueness" more qualious than "upness" or "leftness".

In my experiment, "yes" would mean that there would be no dissonance between memories and perceptions, that I would just not feel that the trees are red or purple, but green, and find the world "normal". That I would, one day, cease to feel the need to get rid of the color-changing glasses, and my aesthetic preferences would remain the same as they were in the pre-glasses period. I think it's likely - based on the other subjects' experiences with upside-down glasses - that it would happen after a while, but the experience itself may be more interesting than the sole yes/no result, because it is undescribable. That's one problem with qualia: they are outside the realm of things which can be described. Describing qualia is like describing flavour of an unknown exotic fruit: no matter how much you try, other people wouldn't understand until they degust it themselves.

comment by Tiiba · 2010-02-17T16:46:37.859Z · LW(p) · GW(p)

"(you seem to deliberately misspell the word, why?)"

Imagine it was you. Why might you do it?

'Well, I agree, but in the same time I think that "quail" is only a philosopher's noble word for "feel from inside".'

Didn't seem that way to me. Some philosophers argue that there are no qualia at all. Others seem to think it's some sort of magic. And then there's David Chalmers.

Replies from: prase, Tyrrell_McAllister, Jack
comment by prase · 2010-02-18T09:01:52.892Z · LW(p) · GW(p)

"(you seem to deliberately misspell the word, why?)" Imagine it was you. Why might you do it?

If I already knew the answer, I wouldn't ask.

Replies from: Tiiba
comment by Tiiba · 2010-02-18T16:53:43.078Z · LW(p) · GW(p)

I still think you have an inkling, but I guess I'll tell you.

BECAUSE I THOUGHT IT WAS AMUSING. Because I wanted a pun.

And now people are explaining to me how to sic, with examples. Now that's what I call smartass.

Athe in heaven.

Replies from: RobinZ, prase
comment by RobinZ · 2010-02-18T17:36:23.645Z · LW(p) · GW(p)

You ran into a Poe's-Law type problem: your joke was indistinguishable from stupidity.

Replies from: Tiiba
comment by Tiiba · 2010-02-18T18:21:52.681Z · LW(p) · GW(p)

But, you know, "quail" is an inherently funny word. It shouldn't even be obvious. It should be instinctive. If I changed "coal" to "coil", I would see why people might not get it. But quail... You know, quail.

Somebody changed every instance of "wand" in Harry Potter to "wang". Gee, I wonder if he was dyslexic.

Replies from: Cyan, RobinZ, mattnewport
comment by Cyan · 2010-02-18T18:26:09.356Z · LW(p) · GW(p)

The real question is whether it's a tinny word or a woody word.

comment by RobinZ · 2010-02-18T20:15:45.212Z · LW(p) · GW(p)

Very frequently, someone who is bad at spelling will confuse two homonyms. On occasion, such a person will declare that their convention is correct in the face of all opposition. The comments you - who are amused by word substitutions - have written could equally have been written by an arrogant ignoramus.

If you want to make a joke by substituting "quail" for "quale", you need to set it up more explicitly.

(Regarding the substitution of "wang" for "wand": "wang" is not an expected typographical or orthographical error for "wand" - as the letters "d" and "g" are separated and the sounds "nd" and "ng" are likewise distinguished - so the substitution is unlikely to be accidental. That doesn't hold here.)

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-02-18T22:56:47.073Z · LW(p) · GW(p)

Very frequently, someone who is bad at spelling will confuse two homonyms.

The joke was further obscured because "quale" and "quail" aren't homonyms. "Quale" rhymes with "Wally", not "trail".

Replies from: RobinZ
comment by RobinZ · 2010-02-18T23:32:51.172Z · LW(p) · GW(p)

...did not know that, actually. Thanks!

comment by mattnewport · 2010-02-18T18:46:19.097Z · LW(p) · GW(p)

I really don't get why quail is supposed to be an inherently funny word. It's a small, rather uninteresting bird that lays quite tasty eggs. It doesn't trigger any humour response for me.

comment by prase · 2010-02-19T07:36:05.995Z · LW(p) · GW(p)

When I see the negative karma you've got, I regret starting this discussion. But you see, the sense of humour of different people is often incompatible.

comment by Tyrrell_McAllister · 2010-02-18T02:54:44.580Z · LW(p) · GW(p)

Tiiba, it's pretty bad form to present an altered version of someone else's words as a direct quote.

Replies from: Tiiba
comment by Tiiba · 2010-02-18T06:44:55.984Z · LW(p) · GW(p)

The horror.

Replies from: RobinZ
comment by RobinZ · 2010-02-18T15:15:55.273Z · LW(p) · GW(p)

If you believe a direct quotation contains a typographical, orthographical, or grammatical error, the polite thing to do is to quote it as it stands with the error labelled by a "[sic]" (written in square brackets, as shown).

For example, if I felt "Imagine it was you. Why might you do it?" should be "Imagine it were you. Why might you do it?", I would write along the lines of:

Imagine it was [sic] you. Why might you do it?

By that logic, you ought to spell phoenix "pheonix", regardless of what the Oxford English Dictionary tells you.

comment by Jack · 2010-02-18T07:42:13.648Z · LW(p) · GW(p)

Others seem to think it's some sort of magic.

Can you name a single philosopher that seems to think it is some sort of magic?

Replies from: Tiiba
comment by Tiiba · 2010-02-18T08:25:38.301Z · LW(p) · GW(p)

David Chalmers?

http://en.wikipedia.org/wiki/Qualia "Nagel also suggests that the subjective aspect of the mind may not ever be sufficiently accounted for by the objective methods of reductionistic science."

comment by Singularity7337 · 2010-02-21T03:32:47.412Z · LW(p) · GW(p)

With regard to karma, it could be equal amounts up and down. This is less likely if you were checking it. Digg, for one, allows you to view plus and minus karma counts separately.

comment by bgrah449 · 2010-02-24T03:03:04.916Z · LW(p) · GW(p)

I just failed the Wason selection task. Does anyone know any other similarly devilish problems?

Replies from: Cyan, wedrifid
comment by Cyan · 2010-02-24T04:12:01.448Z · LW(p) · GW(p)

Here's a classic:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

Answer here.

comment by wedrifid · 2010-02-24T04:25:32.667Z · LW(p) · GW(p)

Fun task. I'll second the request.

comment by Bindbreaker · 2010-02-19T03:29:34.932Z · LW(p) · GW(p)

What's an easy way to explain the paperclip thing?

Replies from: Alicorn
comment by Alicorn · 2010-02-19T04:48:37.689Z · LW(p) · GW(p)

We happen to like things like ice cream and happiness. But we could have liked paperclips. We could have liked them a lot, and not liked anything else enough to have it instead of paperclips. If that had happened, we'd want to turn everything into paperclips - even ourselves and each other!

comment by SilasBarta · 2010-02-18T23:05:18.858Z · LW(p) · GW(p)

Oh, look honey: more proof wine tasting is a crock:

A French court has convicted 12 local winemakers of passing off cheap merlot and shiraz as more expensive pinot noir and selling it to undiscerning Americans, including E&J Gallo, one of the United States' top wineries.

Cue the folks claiming they can really tell the difference...

Replies from: Morendil, jpet, knb, CronoDAS
comment by Morendil · 2010-02-20T10:51:30.398Z · LW(p) · GW(p)

There's plenty of hard evidence that people are vulnerable to priming effects and other biases when tasting wine.

There's also plenty of hard evidence that people can tell the difference between wine A and wine B, under controlled (blinded) conditions. Note that "tell the difference" isn't the same as "identify which would be preferred by experts".

So, while the link is factually interesting, and evidence that some large-scale deception is going on, aided by such priming effects as labels, marketing campaigns and popular movies can have, it seems a stretch to call it "proof" that people in general can't tell wine A from wine B.

Rather, this strikes me as a combination of trolling and boo lights: cheaply testing who appears to be "on your side" in a pet controversy. How well do you expect that to work out for you, in the sense of "reliably entangling your beliefs with reality"?

Replies from: SilasBarta
comment by SilasBarta · 2010-02-20T16:29:10.756Z · LW(p) · GW(p)

I think I'm entangling my beliefs with reality very well, by virtue of extracting all available information from phenomena rather than retreating to evidence that agrees with me. (Let's not forget, I didn't start out thinking that it was all BS.)

For example, did you stop to notice the implications of this:

There's plenty of hard evidence that people are vulnerable to priming effects and other biases when tasting wine.

How does that compare to the priming effects for other drinks? Does it matter?

So, while the link is factually interesting, and evidence that some large-scale deception is going on, aided by such priming effects as label, marketing campaigns and popular movies can have, it seems a stretch to call it "proof" that people in general can't tell wine A from wine B.

But what would be the appropriate comparison? They were passing off as expensive something that's actually cheap. Where else would that work so easily, for so long? Normally, if you tried that, it would be noticed quickly, if not immediately, by virtually everyone.

What if you tried to pass off 16 oz of milk as 128 oz? Or spoiled milk as milk expiring in a week?

Then, factor in how much difference is claimed to exist in wine vs. milks.

Who's optimally using evidence here?

Replies from: CronoDAS, Morendil
comment by CronoDAS · 2010-02-21T03:03:01.778Z · LW(p) · GW(p)

They were passing off as expensive something that's actually cheap. Where else would that work so easily, for so long?

Art forgeries. (Which shows that the value of the painting is determined by the status of the artist and not the quality of the art.)

If I can paint a painting that convinces experts that it was painted by [insert expert painter here], does that mean I'm as good an artist as said painter? (Assuming that my painting isn't a literal copy of someone else's.)

Replies from: SilasBarta, Morendil
comment by SilasBarta · 2010-02-21T06:06:45.206Z · LW(p) · GW(p)

Art forgeries. (Which shows that the value of the painting is determined by the status of the artist and not the quality of the art.)

Which, like wine, is another example of a path-dependent collective delusion that's not Truly Part of our values. (That is, our valuation of the work wouldn't survive deletion of the history that led to such a valuation.)

If I can paint a painting that convinces experts that it was painted by [insert expert painter here], does that mean I'm as good an artist as said painter? (Assuming that my painting isn't a literal copy of someone else's.)

Very nearly yes, it does, modulo a few factors. If you produced it after the artist, then you are benefiting from the artist's already having identified a region of conceptspace that you did not find yourself. (If the art is revered because of the artist's social status, then it wasn't even much of an accomplishment to begin with.) To put it another way, you produced the work after "supervised learning", while the artist didn't need that particular training.

If you can pass off a previous work of yours as being one of the artist's, that definitely makes you better.

Replies from: komponisto
comment by komponisto · 2010-02-21T07:15:18.797Z · LW(p) · GW(p)

Which, like wine, is another example of a path-dependent collective delusion that's not Truly Part of our values. (That is, our valuation of the work wouldn't survive deletion of the history that led to such a valuation.)

Who is "we", here?

The problem I have is not that you're wrong, for the people you're talking about; it's that you (probably) overestimate the size and/or importance of that population. You're not telling the whole truth, in effect. There are plenty of people who like paintings for the way they look, and would happily buy the work of a lesser-known artist at a cheap price if they liked it. Yes, some people use art to status-signal, but some people also actually like art. (There may even be a nonempty intersection!)

Replies from: SilasBarta
comment by SilasBarta · 2010-02-21T23:14:19.660Z · LW(p) · GW(p)

There are plenty of people who like paintings for the way they look, and would happily buy the work of a lesser-known artist at a cheap price if they liked it. Yes, some people use art to status-signal, but some people also actually like art. (There may even be a nonempty intersection!)

Sorry if I sound dodgy here, but I don't think I've said anything that contradicts this. My criticism is of these two things:

1) the idea that the elite-designated "high art" is non-arbitrary. (I claim it's a status-reinforced information cascade that wouldn't regain the designation of high-art if you deleted knowledge of which ones had been so classified.)

2) the excessive premiums paid for artworks based on both 1) and the fact that they are the originals (a "piece of history").

Never have I criticized or denied the existence of people who buy artworks simply because they like them and find them appealing. I just criticize the way that we're expected to agree with the laurels attached to elite-designated high art. As I said before, I would have no problem if art were just a matter of "hey, I like this, now get on with your lives" (as it works in e.g. video games).

comment by Morendil · 2010-02-21T07:56:14.837Z · LW(p) · GW(p)

Often the worth of an artist stems from inventing new possibilities. Copycats are lesser.

comment by Morendil · 2010-02-20T19:13:20.056Z · LW(p) · GW(p)

Who's optimally using evidence here?

You seem to want a contest. The other option, where we are both "on the side of truth", appeals to me more.

We're fortunate in having different experiences in the domain of taste. I'm one of those people who like wine, and I'm confident I can identify some of its taste characteristics in blind tests. So, predictably I resent language which implies I'm an idiot, but I'm open to inquiry.

Our investigation should probably begin "at the crime scene", that is, close to what evidence we can gather about the sense of taste. So, yes, we could examine similar priming effects on other drinks.

I have a candidate in mind, but what I'd like to ask you first is, suppose I name the drink I have in mind and we then go look for evidence of fraud in its commerce. What would it count as evidence of if we found no fraud? If we did find it? Which one would you say counts as evidence that "people can't tell the difference" between wines?

Replies from: Cyan, SilasBarta
comment by Cyan · 2010-02-20T20:45:58.186Z · LW(p) · GW(p)

I'm one of those people who like wine, and I'm confident I can identify some of its taste characteristics in blind tests.

You can easily test yourself if you have a confederate. I recommend a triangle test.
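
(For anyone who actually tries this: in a triangle test you are given three glasses, two of the same wine and one different, and have to pick the odd one out, so pure guessing succeeds 1/3 of the time per trial. A quick sketch of how one might score repeated trials - ordinary one-sided binomial arithmetic, with made-up numbers in the example:)

    from math import comb

    def triangle_p_value(correct, trials):
        """Probability of at least `correct` right answers out of `trials`
        if the taster is purely guessing (success chance 1/3 per trial)."""
        p = 1.0 / 3.0
        return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
                   for k in range(correct, trials + 1))

    print(triangle_p_value(7, 10))  # ~0.02: 7 out of 10 is hard to get by luck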

comment by SilasBarta · 2010-02-21T06:22:23.491Z · LW(p) · GW(p)

We're fortunate in having different experiences in the domain of taste. I'm one of those people who like wine, and I'm confident I can identify some of its taste characteristics in blind tests. So, predictably I resent language which implies I'm an idiot, but I'm open to inquiry.

If someone was long ago made aware of powerful evidence that an expensive pleasure he currently enjoys can be replicated with fidelity at a sliver of the cost, and he hasn't already done the experimentation necessary to properly rule this out, then you're right. There are explanations other than that he is an idiot. But they're not much more flattering either.

I can tell you that if I were in this position for another beverage, I would have already done the tests.

I have a candidate in mind, but what I'd like to ask you first is, suppose I name the drink I have in mind and we then go look for evidence of fraud in its commerce. What would it count as evidence of if we found no fraud? If we did find it? Which one would you say counts as evidence that "people can't tell the difference" between wines?

The no-fraud case is positive but weak evidence for people telling the difference, because it can be accounted for by honesty on the part of retailers, fear of whistleblowers, etc. Finding fraud is unlikely but strong evidence against the claim that people can tell the difference -- because it crucially depends on the priming effects (or some other not-truly-part-of-you effect) dominating.

I'd prefer a blind comparison on the cheap substitutes like Cyan suggested.

And I'd be glad that you've identified a product I'm overpaying for!

Replies from: Morendil, Morendil
comment by Morendil · 2010-02-22T11:11:57.034Z · LW(p) · GW(p)

an expensive pleasure he currently enjoys can be replicated with fidelity at a sliver of the cost

What would you suggest I do, to replicate the pleasure I get from wine?

Replies from: SilasBarta
comment by SilasBarta · 2010-02-22T20:31:01.191Z · LW(p) · GW(p)

You mean, replicate the pleasure from expensive wine? (I'm going to assume you genuinely like the act of drinking wine.) Easy: accept that it's an illusion, then buy the cheap stuff (modulo social status penalties) and prime it the way good wines are. (This may require an assistant.) Gradually train yourself to regard them as the same quality. If you can be trained to put it on a pedestal, you can probably be trained to take it off.

If it's unavoidable that you discount the taste due to your residual knowledge that the wine is low-class, then accept that you're overpaying because of an unremovable bias. (Not a slight against you, btw -- I admit I was the same way with CFLs for a while, in that I couldn't discard the knowledge, and thus negative affect, that they're CFLs rather than incandescent, and thereby pointlessly overpaid for lighting.)

On the more realistic assumption that you've simply been trained to like wine via a process that would equally well train you to like anything, get some friends together and train yourselves to like V8. (The veggie drink, not the engine.)

comment by Morendil · 2010-02-21T06:53:27.750Z · LW(p) · GW(p)

Finding fraud is unlikely but strong evidence against the claim that people can tell the difference -- because it crucially depends on the priming effects [...] dominating.

Wait, can you expand on how fraud in the trade of a non-alcoholic drink is "strong but unlikely" evidence that people cannot tell the differences between wines?

Such fraud might be evidence that people cannot tell the difference between tastes more generally, but that seems like a higher hurdle to clear.

Replies from: SilasBarta
comment by SilasBarta · 2010-02-21T23:21:24.740Z · LW(p) · GW(p)

Wait, can you expand on how fraud in the trade of a non-alcoholic drink is "strong but unlikely" evidence that people cannot tell the differences between wines?

Where did I say that? The claim you quoted from me was about "what would be evidence of people's ability to distinguish that non-alcoholic drink", not wines.

You'd check for people's ability to distinguish wines by fraud in wines, and people's ability to distinguish specific non-wine drinks by fraud in specific non-wine drinks.

I thought I made that very clear, and if not, common sense and the principle of charity should have sufficed for you not to infer that I would get the "wires crossed" like that.

(ETA: By the way: is there a shorter way to refer to a person's ability to distinguish foods/drinks? I've tried shorter expressions, but they make other posters go batty about the imprecision without even suggesting an alternative. Paul Birch suggests "taste entropy of choice", but that's obscure.)

And, to preempt a possible point you may be trying to make: yes, fruit drinks may be fraudulently labeled as real fruit juice, but that's not a parallel case unless people claim to be able to distinguish by taste the presence of real juice and purchase on that basis.

Replies from: Morendil
comment by Morendil · 2010-02-22T11:08:27.658Z · LW(p) · GW(p)

So we've averted misunderstanding, good. My question remains: what does fraud (or non-fraud) in non-alcoholic drinks tell us about whether people "really can tell the difference" between wines?

Just to be sure what you're claiming, btw: if I did a "triangle test", blinded, on two arbitrary bottles of wine from my supermarket, and I could tell them apart, would you retract that claim? Or is your claim restricted to some specific varietals?

Replies from: SilasBarta
comment by SilasBarta · 2010-02-22T20:34:58.941Z · LW(p) · GW(p)

So we've averted misunderstanding, good.

How could I have explained my position better so that you would not have inferred the point about fruit drinks?

My question remains: what does fraud (or non-fraud) in non-alcoholic drinks tell us about whether people "really can tell the difference" between wines?

It doesn't remain. If people can tell the difference, you don't gain from fraud. If they can't, you could gain from fraud. Where's the confusion?

Just to be sure what you're claiming, btw: if I did a "triangle test", blinded, on two arbitrary bottles of wine from my supermarket, and I could tell them apart, would you retract that claim? Or is your claim restricted to some specific varietals?

I accept that you could tell red from white, so it couldn't be completely random. I'd want a test over the two varieties they said were swapped in the story, or within the same variety but with a significant cost difference.

Replies from: Morendil
comment by Morendil · 2010-02-24T22:15:19.313Z · LW(p) · GW(p)

How could I have explained my position better so that you would not have inferred the point about fruit drinks?

I have inferred nothing about fruit drinks. In this comment you replied to me with an allusion to "other drinks". Later in the same comment you referred to milk. In other words, you primed me to think about non-alcoholic drinks. Later in the exchange, you ruled out the possibility that fraud in non-alcoholic drinks will provide any evidence relevant to your original claim. So we're better off dropping this line of inquiry altogether.

I'd want a test over the two varieties they said were swapped in the story

The story mentioned three varietals: merlot, shiraz and pinot noir. It seems likely that in the next few days I will be able to procure half-bottles of a Merlot (vin de pays, 2005) at €6.5/l and a Pinot Noir (Burgundy, 2005) at €14/l. The experiment will set me back about €10.

Do you consider the experiment valid if I buy the bottles myself, and have a third person prepare the glasses? Would you care to stipulate any particular controls? Are you willing to trust my word when I report back with the results? Do I have to correctly identify the cheaper and the more expensive wine, or just to show I can tell the difference? What will you bet on the outcome?

Replies from: SilasBarta
comment by SilasBarta · 2010-03-01T22:39:08.855Z · LW(p) · GW(p)

There's a difficulty that affects tests like these: you can be much more sensitive when you know you're being tested, so it wouldn't tell us as much about people who can be fooled by a wine they didn't expect anything to be wrong with. The test would have to be made slightly more difficult in order to account for your preparation.

Here's what I would consider a fair test: I pick three wines (or imitations of wines) and then do a modified version of the triangle test: I can make all three the same, all three different, or just two the same, and not tell you which. To pass, you have to correctly rank their prices and identify which, if any, are the same. (You'd be allowed to count two different ones as effectively the same price if they differed only by a few Euros.)

And of course, the test would have to be triple blind, etc.
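
A minimal sketch of how the blinded assignment could be generated by whoever pours, written in JavaScript like the rot13 bookmarklet elsewhere in this thread; the wine names and prices are placeholders, not ones anyone in this exchange has proposed:

```javascript
// Sketch of a blinded assignment for the modified triangle test described above.
// The wines and prices below are placeholders, not ones from this exchange.
var wines = [
  { name: 'Wine A', euroPerLitre: 6.5 },
  { name: 'Wine B', euroPerLitre: 14 },
  { name: 'Wine C', euroPerLitre: 25 }
];

// Standard Fisher-Yates shuffle, so glass order carries no information.
function shuffle(arr) {
  var copy = arr.slice();
  for (var i = copy.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = copy[i];
    copy[i] = copy[j];
    copy[j] = tmp;
  }
  return copy;
}

// Pick one of the three allowed configurations at random:
// all glasses the same, all different, or exactly two the same.
function dealGlasses() {
  var roll = Math.floor(Math.random() * 3);
  if (roll === 0) {
    // All three glasses poured from one randomly chosen wine.
    var w = wines[Math.floor(Math.random() * wines.length)];
    return [w, w, w];
  }
  if (roll === 1) {
    // One glass of each wine, in shuffled order.
    return shuffle(wines);
  }
  // Exactly two the same: a duplicated wine plus a different odd one out.
  var dup = Math.floor(Math.random() * wines.length);
  var other = (dup + 1 + Math.floor(Math.random() * (wines.length - 1))) % wines.length;
  return shuffle([wines[dup], wines[dup], wines[other]]);
}

// Only the pourer sees this; the taster is just handed glasses 1, 2 and 3.
console.log(dealGlasses().map(function (w) { return w.name; }).join(', '));
```

The taster only ever sees three unlabeled glasses; the logged assignment is checked against their answers afterwards.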

Replies from: Morendil
comment by Morendil · 2010-03-01T23:12:04.177Z · LW(p) · GW(p)

You can be much more sensitive when you know you're being tested

Why thank you. So, you are expecting that in some situations people actually can tell wines apart?

Here's what I would consider a fair test

Sounds complex. I might play, but you'd have a) to pick three wines I can actually pick up in my local supermarket chain, Monoprix; they have an online catalog; b) to write the instructions with enough precision that a local confederate can carry them out. I'm willing to do the work myself for the simple case (have done so in the grandparent), but not if the rules get too baroque.

There's one thing I object to. Why do I have to rank them by price? I am not claiming that the more expensive wines are systematically better, taste-wise, than the cheaper ones. I do claim that the very good stuff is more likely to be found in the more expensive bottles. There is no reason to expect that price will be systematically correlated with my idiosyncratic, relatively untrained tastes.

If I have to rank them by price via recognition, I'd have to have tried them out beforehand. If your hypothesis is correct that shouldn't affect the results, since you're claiming my discrimination of "better" comes from priming effects.

Replies from: SilasBarta
comment by SilasBarta · 2010-03-01T23:22:32.340Z · LW(p) · GW(p)

Why thank you. So, you are expecting that in some situations people actually can tell wines apart?

Sure, just like how, if you put enough effort in, you can tell any two drinks apart. The difference is that people make these claims about wine without having done the exercises necessary for precise discernment, yet claim these subtleties matter to them.

Sounds complex. I might play, but you'd have a) to pick three wines I can actually pick up in my local supermarket chain, Monoprix; they have an online catalog; b) to write the instructions with enough precision that a local confederate can carry them out. I'm willing to do the work myself for the simple case (have done so in the grandparent), but not if the rules get too baroque.

You're really not grasping the concept of biasproofing, are you? "A confederate" taints the experiment. Your purchasing the wines taints the experiment.

There's one thing I object to. Why do I have to rank them by price? I am not claiming that the more expensive wines are systematically better, taste-wise, than the cheaper ones.

Okay, well, the wine cheerleaders do, so this is one way you break from them and agree with me.

If I have to rank them by price via recognition, I'd have to have tried them out beforehand. If your hypothesis is correct that shouldn't affect the results, since you're claiming my discrimination of "better" comes from priming effects.

Does not follow: if you're told in advance which has the label "good", you can spit that information back out later. My claim is that a judgment of good dependent on being told in advance that it's good is not a genuine judgment of good.

Replies from: Morendil
comment by Morendil · 2010-03-01T23:52:59.088Z · LW(p) · GW(p)

You're really not grasping the concept of biasproofing, are you? "A confederate" taints the experiment.

Not performing the experiment taints it even more. But perhaps by now we've learned enough of each other's claims. I leave it to you to find a way forward.

comment by jpet · 2010-02-21T06:25:12.493Z · LW(p) · GW(p)

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

They were passing off as expensive, something that's actually cheap. Where else would that work so easily, for so long?

I think it's closer to say they were passing off as cheap, something that's actually even cheaper.

Switch the food item and see if your criticism holds:

Wonderbread, America's top bread maker, was conned into selling inferior bread. So-called "gourmets" never noticed the difference! Bread tasting is a crock.

Replies from: SilasBarta, Douglas_Knight
comment by SilasBarta · 2010-02-21T06:44:24.455Z · LW(p) · GW(p)

If people made such a huge deal about the nuances in the taste of bread, while it also "happened" to have psychoactive effects that, gosh, always have to be present for the bread to be "good enough" for them, and cheap breads were still normally several times the cost of comparable-nutrition food, then yes, the cases would be parallel.

(Before anyone says it: Yes, I know bread has trace quantities of alcohol; we're all proud of what you learned in chemistry.)

comment by Douglas_Knight · 2010-02-21T07:34:21.334Z · LW(p) · GW(p)

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

If people who can tell the difference are a big enough demographic to sell to, then they are employed by all wineries, regardless of quality. But an alternate explanation is that Gallo was tacitly in on the scam - they got as much Pinot Noir as the Sideways-driven demand required, without moving the market.

Replies from: jpet
comment by jpet · 2010-02-22T01:33:43.207Z · LW(p) · GW(p)

Ah, I misunderstood the comment. I just assumed that Gallo was in on it, and the claim was that customers of Gallo failing to complain constituted evidence of wine tasting's crockitude.

If Gallo's wine experts really did get taken in, then yes, that's pretty strong evidence. And being the largest winery, I'm sure they have many experts checking their wines regularly--too many to realistically be "in" on such a scam.

So you've convinced me. Wine tasting is a crock.

comment by knb · 2010-02-20T09:33:23.707Z · LW(p) · GW(p)

I love this kind of thing.

Shall we name this phenomenon the "Emperor's New Clothes Effect"?

Replies from: ata
comment by ata · 2010-02-20T09:41:24.150Z · LW(p) · GW(p)

That could be a general name for the phenomenon. As it relates to wine tasting (and maybe we could stretch it a bit), I'd propose "the Nuances of Toast Effect", for a particularly memorable phrase in this Dave Barry column.

comment by CronoDAS · 2010-02-18T23:43:15.732Z · LW(p) · GW(p)

I can't even tell the difference between Coke and Pepsi.

Replies from: h-H, LucasSloan
comment by h-H · 2010-02-20T16:40:37.567Z · LW(p) · GW(p)

One's in a red can, the other in a blue one ;-)

oh well, me neither actually.

comment by LucasSloan · 2010-02-19T04:46:21.974Z · LW(p) · GW(p)

Really? How much soda do you drink? The difference is obvious to me. I can even tell the difference between regular and diet Coke.

Replies from: Jack, Sniffnoy, CronoDAS, Nick_Tarleton
comment by Jack · 2010-02-19T05:24:17.803Z · LW(p) · GW(p)

This seems to suggest that it is easier to tell the difference between Coke and Pepsi than it is to tell the difference between regular and diet. I can tell the difference between all of them, but the first is a lot harder and I think that experience is pretty common. Most diet drinks use aspartame as a sugar substitute and aspartame leaves a very distinctive aftertaste in my mouth.

comment by Sniffnoy · 2010-02-19T06:58:45.350Z · LW(p) · GW(p)

Wait, is that supposed to be harder? I'm not sure I could tell the difference between Coke and Pepsi, but I think I could tell the difference between regular and diet.

comment by CronoDAS · 2010-02-19T17:35:51.681Z · LW(p) · GW(p)

I don't like cola very much, so if you gave me a drink of fizzy black stuff, I wouldn't be able to identify which brand it was. Also, other sodas have a tendency to give me heartburn so I've been drinking them much less than I used to.

comment by Nick_Tarleton · 2010-02-19T04:57:48.061Z · LW(p) · GW(p)

I can even tell the difference between regular and diet Coke.

Is this unusual?

Replies from: LucasSloan
comment by LucasSloan · 2010-02-19T05:01:55.102Z · LW(p) · GW(p)

I have heard people remark they can't tell the difference. My father for example.

comment by xamdam · 2010-02-18T20:50:05.290Z · LW(p) · GW(p)

Nice recap of psychological biases from the Charlie Munger school (of hard knocks and making a billion dollars).

http://www.capitalideasonline.com/articles/index.php?id=3251

comment by whpearson · 2010-02-16T13:36:11.747Z · LW(p) · GW(p)

I've been wondering what the existence of Gene Networks tells us about recursively self-improving systems. Edit: Not that self-modifying gene networks are RSIs, but the question is "Why aren't they?" In the same way that failed attempts at flying machines tell us something, but not much, about what flying machines are not. End Edit

They are the equivalent of logic gates and have the potential for self-modification and reflection, what with DNA's ability to make enzymes that chop DNA itself up, and to do so selectively.

So you can possibly use them as evidence that low-complexity, low-memory systems are unlikely to RSI. How complex they get and how much memory they have, I am not sure.

Replies from: None, Eliezer_Yudkowsky, PhilGoetz
comment by [deleted] · 2010-02-16T14:29:58.342Z · LW(p) · GW(p)

It seems like in gene networks, every logic gate has to evolve separately, and those restriction enzymes you mention barely do anything but destroy foreign DNA. That's less self-modification potential than the human brain.

Replies from: whpearson
comment by whpearson · 2010-02-16T15:18:51.715Z · LW(p) · GW(p)

The inability to create new logic gates is what I meant by the systems having low memory. In this case low memory to store programs.

Restriction enzymes also have a role in the insertion of plasmids into genes.

An interesting question is: If I told you about a computer model of evolution with things like plasmids and controlled mutation (V(D)J recombination), would you expect it to be potentially dangerous?

I'm asking this to try to improve our thinking about what is and isn't dangerous. To try and improve upon the kneejerk "everything we don't understand is dangerous" opinion that you have seen.

Replies from: None
comment by [deleted] · 2010-02-16T17:59:22.613Z · LW(p) · GW(p)

Well, I'm not familiar enough with controlled mutation to be able to say anything useful about it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-16T17:52:12.957Z · LW(p) · GW(p)

I wonder if the distinction between self-modification and recursive self-improvement is one of those things that requires a magic gear to get, and otherwise can't be explained by any amount of effort.

Replies from: whpearson, whpearson, ciphergoth, cousin_it
comment by whpearson · 2010-02-16T18:45:15.685Z · LW(p) · GW(p)

I understand there is a distinction. Would you agree that RSI systems are conceptually a subset of self-modifying (SM) systems -- one where we don't understand exactly what properties make an SM system one that will RSI? Could you theoretically say why EURISKO didn't RSI?

I was interested in how big a subset it is. The bigger it is, the more dangerous, and the more easily we will find it.

Replies from: gwern, ShardPhoenix
comment by gwern · 2010-02-18T03:20:14.256Z · LW(p) · GW(p)

Could you theoretically say why EURISKO didn't RSI?

Sure. In fact, some of the Lenat quotes on LW even tell you why.

As a hack to defeat 'parasitic' heuristics, Lenat (& co.?) put into Eurisko a 'protected kernel' which couldn't be modified. This core was not good enough to get everything going, dooming Eurisko from a seed AI perspective, and the heuristics never got anywhere near the point they could bypass the kernel. Eurisko was inherently self-limited.

comment by ShardPhoenix · 2010-02-16T23:48:05.782Z · LW(p) · GW(p)

It seems to me that for SM to become RSI, the SM has to be able to improve all the parts of the system that are used for SM, without leaving any "weak links" to slow things down. Then the question is (slightly) narrowed to what exactly is required to have SM that can improve all the needed parts.

comment by whpearson · 2010-02-17T01:05:07.211Z · LW(p) · GW(p)

Does my edit make more sense now?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-17T01:43:47.019Z · LW(p) · GW(p)

Sure, but the answer is very simple. Gene regulatory networks are not RSI because they are not optimization processes.

Replies from: whpearson
comment by whpearson · 2010-02-17T09:05:41.377Z · LW(p) · GW(p)

Intrinsically they aren't optimization processes, but they seem computationally expressive enough for an optimization process to be implemented on them (the same way X86-arch computers aren't optimization processes). And if you are a bacterium, it seems it should be something that is evolutionarily beneficial, so I wouldn't be surprised to find some optimization going on at the gene network level. Whether it's enough to be considered a full optimization process I don't know, but if not, why not?

Replies from: Eliezer_Yudkowsky, Nick_Tarleton
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-17T19:51:16.715Z · LW(p) · GW(p)

Intrinsically they aren't optimization processes but they seem computationally expressive enough for an optimization process to be implemented on them

But they aren't optimization processes. It doesn't matter that they could implement one; they don't. You might as well point to any X86 chip and ask why it doesn't RSI.

Replies from: whpearson, PhilGoetz
comment by whpearson · 2010-02-17T23:29:42.794Z · LW(p) · GW(p)

I'm not talking about any specific gene network; I'm talking about the number and variety of gene networks that have been explored throughout evolutionary history. Do you know that they all aren't optimisation processes? That they haven't popped up at least once?

To my mind it is like asking why a very, very large number of simple x86 systems (not just chips, they have storage), each with a different program whose details you don't know, haven't RSI'd. Which I don't think is unreasonable.

How many distinct bacterial genomes do you think there have been since the beginning of life? Considering people estimate 10 million+ bacterial species alive today.

Some people have talked about the possibility of brute-forcing AGI through evolutionary means; I'm simply looking at a previous evolutionary search through computational system space to get some clues.

comment by PhilGoetz · 2010-02-17T23:59:25.000Z · LW(p) · GW(p)

A gene network optimizes the use of resources to make more copies of that gene network. It senses the environment, and its own operations, and adjusts what it is doing in response. I think it is an optimization process.

comment by Nick_Tarleton · 2010-02-17T20:00:09.291Z · LW(p) · GW(p)

Enough to be considered a full optimization process I don't know, but if not, why not?

Evolution is stupid and optimization processes are complicated. Do you not think that's an adequate explanation?

Replies from: Tyrrell_McAllister, whpearson
comment by Tyrrell_McAllister · 2010-02-18T23:15:52.414Z · LW(p) · GW(p)

Evolution is stupid and optimization processes are complicated. Do you not think that's an adequate explanation?

The question is, Why did evolution get thus far and no further? Can you give an account that simultaneously explains both of the observed bounds? I suppose that some would be happy with "Sheer difficulty explains why evolution did no better, and anthropics explains why it did no worse." But I don't find that especially satisfying.

comment by whpearson · 2010-02-17T23:35:25.774Z · LW(p) · GW(p)

Evolution managed to make an optimisation process in our heads, but not one in anything's genes. It had had a lot more time to work with genes as well. Why?

It is possibly worth noting that I am not talking about optimising proteins but the network that controls the activation of the genes. Protein folding is hard.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-02-18T23:17:20.547Z · LW(p) · GW(p)

Evolution managed to make an optimisation process in our heads, but not one in anything's genes.

It may be that getting optimization into our heads was the easiest way to get it into our genes (eventually, when we master genetic engineering).

comment by Paul Crowley (ciphergoth) · 2010-02-16T20:21:59.457Z · LW(p) · GW(p)

Possibly, but if you could link to your best efforts to explain it I'd be interested. I tried Google...

EDIT: D'oh! Thanks Cyan!

Replies from: Cyan, Cyan
comment by Cyan · 2010-02-16T20:36:26.110Z · LW(p) · GW(p)

Shoulda tried the Google custom search bar: Recursive self-improvement.

comment by Cyan · 2010-02-17T01:08:45.035Z · LW(p) · GW(p)

You're just lucky there's no such thing as LMG(CSB)TFY. ;-)

comment by cousin_it · 2010-02-16T18:08:02.298Z · LW(p) · GW(p)

Such things probably happen because effort spent on explaining quickly hits diminishing returns if the other person spends no effort on understanding.

comment by PhilGoetz · 2010-02-18T00:00:57.309Z · LW(p) · GW(p)

They already are RSIs, if you believe in the evolution of evolvability, which you probably should. The probable evolution of DNA from RNA, of introns, and of sex are examples of the evolution of evolvability.

There are single-celled organisms that act intelligently despite not having (or being) neurons. The slime mold, for example.

A gene network is a lot like the brain of an insect in which the exact connectivity of every neuron is predetermined. However, its switching frequency is much slower.

More advanced brains have algorithms that can use homogeneous networks. That means that you can simply increase the number of neurons made, and automatically get more intelligence out of them.

Organisms have 600 to 20,000 genes. A honeybee has about a million neurons.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-02-18T09:29:06.677Z · LW(p) · GW(p)

There are single-celled organisms that act intelligently despite not having (or being) neurons. The slime mold, for example.

What does "intelligence" mean here?

For context, some more about slime moulds. In that thesis is a detailed model of the whole life-cycle of the slime mould, using biochemical investigations and computer modelling to show how all the different stages and the transitions between them happen.

What does it mean, to say that this system is "intelligent"? The word is used for a very wide range of things, from slime moulds (and perhaps even simpler systems?) to people and beyond. What is being claimed when the same word is applied to all of these things?

Put in practical terms, does a detailed knowledge of exactly how the slime mould works help in constructing an AGI? Does it help in constructing more limited sorts of AI? Does it illuminate the investigation of other natural systems that fall within the concept of "intelligence"?

I am not seeing a reason to answer "yes" to any of these questions.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-18T16:37:31.327Z · LW(p) · GW(p)

Put in practical terms, does a detailed knowledge of exactly how the slime mould works help in constructing an AGI? Does it help in constructing more limited sorts of AI? Does it illuminate the investigation of other natural systems that fall within the concept of "intelligence"?

I am not seeing a reason to answer "yes" to any of these questions.

Yes, to all of those questions. I don't think we currently have the AI technology needed to produce something with the intelligence of a slime mold. (Yes, we might be able to, if we gave it magical sensors and effectors, so that it just had to say "go this way" or "go that way". Remember that the slime mold has to do all this by directing an extremely complex sequence of modifications to its cytoskeleton.) Therefore, having a detailed knowledge of how it did this, and the ability to replicate it, would advance AI.

comment by CronoDAS · 2010-02-19T18:40:11.035Z · LW(p) · GW(p)

Suppose I wanted to convince someone that signing up for cryonics was a good idea, but I had little confidence in my ability to persuade them in a face-to-face conversation (or didn't want to drag another discussion too far off-topic) - what is the one link you would give someone that is most likely to save their life? I find the pro-cryonics arguments given by Eliezer and others on this site + Overcoming Bias to be persuasive (I'm convinced that if you don't want to die, it's a good idea to sign up) but all the arguments are in pieces and in different places. There's no one, single, "This is why you should sign up for cryonics" persuasive essay that I've found here that I can simply link someone to and hope for the best. Can you direct me to one?

Replies from: thomblake
comment by thomblake · 2010-02-19T18:43:57.260Z · LW(p) · GW(p)

It's tricky. As Socrates noted in the Apology, it's much easier to convince someone in a one-on-one conversation than to have a general argument to convince anyone in general.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-19T18:56:00.149Z · LW(p) · GW(p)

Yeah, it is. I would like a good link to use as a conversation starter, at least.

I wanted to evangelize on other message boards/blogs/etc., and having a single, ready-made "No, you really can cheat death!" link I can post would be a big help.

comment by Dean · 2010-02-18T22:40:48.922Z · LW(p) · GW(p)

First use of "shut up and calculate"?

I liked learning about the bias called the "Matthew effect": the tendency to assign credit to the most eminent among all the plausible candidates, from Matthew 25:29.

For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.

http://scitation.aip.org/journals/doc/PHTOAD-ft/vol_57/iss_5/10_1.shtml?bypassSSO=1

enjoy

comment by [deleted] · 2010-02-18T20:34:40.421Z · LW(p) · GW(p)

For those Less Wrongians who watch anime/read manga, I have a question: What would you consider the top three that you watch/read and why?

Edit: Upon reading gwern's comment, I see how kinda far off topic that was, even for an open thread. So change the question to: what anime/manga was most insightful into LW-style thinking and problems?

Replies from: Eliezer_Yudkowsky, gwern, knb
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-18T22:57:27.659Z · LW(p) · GW(p)

Hikaru no Go, of course.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-01-25T11:41:57.662Z · LW(p) · GW(p)

Okay, so I'm 10 episodes into HnG and...where is the "LW-style thinking and problems"?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-25T20:22:36.095Z · LW(p) · GW(p)

Hence the origin of the phrase, "tsuyoku naritai".

Replies from: Jayson_Virissimo, Anubhav
comment by Jayson_Virissimo · 2012-01-26T00:27:34.868Z · LW(p) · GW(p)

Wow, I can't believe I missed that. Although, if that is the only thing relevant to "LW-style thinking and problems" in HnG, then Death Note compares favorably to it.

comment by Anubhav · 2012-01-26T14:31:02.052Z · LW(p) · GW(p)

Hence the origin of the phrase, "tsuyoku naritai".

That seems to be about as likely as "hyakunen hayai" or "isshoukenmei" or "ninja" or "mahou-tsukau uchuujin" originating from an anime.

(OK... not as likely as the last one.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-01-28T02:20:25.011Z · LW(p) · GW(p)

Well, that's where I got it.

comment by gwern · 2010-02-18T21:30:48.704Z · LW(p) · GW(p)

If you mean in general (ie. 'I really liked Evangelion and thought that Sayonara Zetsubou-Sensei was hysterical!'), I think that's a wee bit too far off-topic. Might as well ask what's everyone's favorite poet.

If you mean, 'what anime/manga was most insightful into LW-style thinking and problems', that's a little more challenging.

Death Note comes to mind as a possible exemplar of what humans really can do in the realm of action & thought, and perhaps what an AI in a box could do. Otaku no Video is useful as a cautionary tale about geekdom. And to round it off, I have a personal theory that Aria depicts a post-Singularity sysop scenario with humans who have chosen to live a nostalgic low-tech lifestyle* because that turns out to be la dolce vita.

* The high tech is there when it's really needed. Like how the Amish make full use of modern medicine, surgery, and tech when they need to.

Replies from: Cyan, i77
comment by Cyan · 2010-02-19T00:14:00.817Z · LW(p) · GW(p)

I think Death Note was a little too close to Calvinball to be truly instructive.

Replies from: gwern
comment by gwern · 2010-02-19T02:51:34.719Z · LW(p) · GW(p)

The second half arguably does have some fast and loose play by the writer, case in point being how Mikami was found by Near - arrgh, this has nothing to do with LW!

How about up until Y'f qrngu*, can we compromise on that?

* ROT-13 encoded to spare LW's delicate sensibilities. Here's a decoder.

Replies from: Cyan, Eliezer_Yudkowsky
comment by Cyan · 2010-02-19T04:10:08.674Z · LW(p) · GW(p)

I was mostly referring to how the reasoning had to deal with a gradually accreting set of rules, each one constructed in the service of narrative (that is, fun) instead of being a realistic constraint. I really did mean Calvinball.

Replies from: gwern
comment by gwern · 2010-02-19T14:13:24.726Z · LW(p) · GW(p)

Calvinball is temporally inconsistent; it's been a while since I read DN but I don't remember any of the later rules making me think 'if only Light had known that rule, he would totally have owned his opponents!' Most of the later rules seemed to just be clarifications and hole-fixing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-19T03:33:07.740Z · LW(p) · GW(p)

Please edit above to avoid spoilers.

Replies from: gwern, wnoise
comment by gwern · 2010-02-19T14:04:49.377Z · LW(p) · GW(p)

The manga finished nearly half a decade ago, and Y qvrq* before the half-way point. "There's a statute of limitations on this shit, man."

* ROT-13 encoded; decoder

Replies from: Document, Eliezer_Yudkowsky, ciphergoth, Risto_Saarelma
comment by Document · 2010-02-19T20:26:47.902Z · LW(p) · GW(p)

When I apply the statute, my justification is along the lines of "people usually only care about spoilers if they're watching a series or planning to watch it soon, which are unlikely given a random person and a random series". Hariant's comment could easily be interpreted as asking for recommendations of anime to watch, in which case "planning to watch (considering watching) it" would be a given.

Replies from: gwern
comment by gwern · 2010-02-19T21:27:38.817Z · LW(p) · GW(p)

We cannot meaningfully discuss how DN & ilk hold lessons for LW without discussing plot events; funnily enough, spoilers tend to be about plots. And as I said, applying the principle of charity means not interpreting Hariant's comment that way.

Replies from: Document
comment by Document · 2010-02-20T18:23:21.060Z · LW(p) · GW(p)

I had to look that up; Wikipedia says that "In philosophy and rhetoric, the principle of charity requires interpreting a speaker's statements to be rational and, in the case of any argument, considering its best, strongest possible interpretation." I thought I was applying it by assuming that you hadn't considered that interpretation of the comment, rather than that you were ignoring it, so I'm not sure what you mean.

Also, I don't know what you mean by "as you said".

(Message edited once.)

Replies from: gwern
comment by gwern · 2010-02-21T22:12:11.526Z · LW(p) · GW(p)

Which is more charitable: to interpret someone's comment as typical social fluff inappropriate for even the open threads, or to interpret it as an attempt to collate useful fictional examinations & introductions to LW-related material?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-19T21:12:18.181Z · LW(p) · GW(p)

Please edit both of the above to avoid having your comments deleted. It's great that you have that opinion, but some people may not share it, and also there's this incredible amazing technology called rot13 which is really useful for having your cake and eating it too in the case of this conflict. And we can all consider that official LW policy from this point forward.

Replies from: wnoise, Douglas_Knight, dclayh, Document
comment by wnoise · 2010-02-20T04:29:29.830Z · LW(p) · GW(p)

I know a couple people that claim to have unintentionally learned to read rot13 to the point where it is no longer a spoiler protection. (I can read it, but it's not automatic.)

comment by Douglas_Knight · 2010-02-20T06:50:55.108Z · LW(p) · GW(p)

It's all well and good to have some character of the founder rub off on the site, but not every fetish.

Replies from: Eliezer_Yudkowsky, Document
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-20T07:47:11.774Z · LW(p) · GW(p)

I don't think you understand the degree to which people who don't want spoilers, don't want to hear them.

Replies from: ciphergoth, wnoise
comment by Paul Crowley (ciphergoth) · 2010-02-20T15:32:07.183Z · LW(p) · GW(p)

Spoilers for a classic movie here:

http://lesswrong.com/lw/1s4/open_thread_february_2010_part_2/1ndd

Since the actual intent of the comment was to spoil, it can probably be deleted without further discussion.

EDIT: the edit is a big improvement. It used to be an actual spoiler.

Replies from: wnoise
comment by wnoise · 2010-02-20T18:30:08.615Z · LW(p) · GW(p)

The actual intent was to point out that embargoing references past a certain point truly is ridiculous. Referencing a 69-year-old movie (EDIT: several-hundred-year-old play) is an attempt at a reductio ad absurdum, made more visceral by technically violating the norm Eliezer is imposing.

Certainly there's no real need to discuss specific plot points of recent manga or anime on this site. This, in fact, holds for any specific example one cares to name. On the other hand, cumulatively cutting off all our cultural references to fiction does real harm to the discourse.

References to fiction let us compress our communications more effectively by pointing at examples of what we mean. My words alone can't have nearly the effect a full color motion picture with surround sound can -- but I can borrow it, if I'm allowed to reference works that most people are broadly familiar with.

I don't think that most recent works count -- they reach too small a segment of LW, and so are the least useful to reference, and the ones most likely to upset those who are spoiler averse. The question is where the line should be set, and that requires context and judgment, not universal bans.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-03-01T21:55:25.471Z · LW(p) · GW(p)

I think there's a cost/benefit tradeoff, and that comment is all cost, no benefit.

Replies from: wnoise
comment by wnoise · 2010-03-01T23:28:46.910Z · LW(p) · GW(p)

While I admit that the benefit was not in the same class as the ones discussed in my point above, clearly I thought it had some benefit in making my point.

And yes, it had costs -- it needed to, in order to make the point. Of course, ceteris paribus, the better the job at illustrating the reductio-ad-absurdum, the smaller the cost. I tried to choose an example with the smallest cost I reasonably could.

If you have a popular and well-known, older work that has what is truly a spoiler, but that (a) most people already know, and (b) the work is short enough that a huge time-investment isn't likely to be ruined (why I chose a movie, rather than a book), I'd be willing to change the example to that.

Replies from: ciphergoth, RobinZ
comment by Paul Crowley (ciphergoth) · 2010-03-01T23:37:02.826Z · LW(p) · GW(p)

I refer you in that case to the canonical example...

Replies from: arundelo, RobinZ
comment by RobinZ · 2010-03-02T01:00:20.265Z · LW(p) · GW(p)

Upvoted for pun.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-03-02T08:28:26.839Z · LW(p) · GW(p)

If there's a pun I'm afraid it's unintentional - are you referring to the literal meaning of "canon" in this context?

Replies from: RobinZ
comment by RobinZ · 2010-03-02T11:52:41.382Z · LW(p) · GW(p)

Indeed.

comment by RobinZ · 2010-03-01T23:54:35.482Z · LW(p) · GW(p)

Did you pick that movie for that reason, or because that's what TV Tropes used? Because I've never seen it, but I do know that Macduff was not of woman born - and Macbeth is rather better known.

Edit: Better still is "Romeo and Juliet die at the end".

Replies from: wnoise
comment by wnoise · 2010-03-02T00:01:32.783Z · LW(p) · GW(p)

I did not know that TV Tropes used it, but I have seen other people use it for the same sort of point.

I'll change it.

comment by wnoise · 2010-02-20T18:02:56.354Z · LW(p) · GW(p)

Of course not -- interpersonal utility comparison is impossible.

comment by Document · 2010-02-20T18:25:50.029Z · LW(p) · GW(p)

Downvoted for not addressing the parent comment's points.

comment by dclayh · 2010-02-19T21:18:40.010Z · LW(p) · GW(p)

In that case can we have a little rot-13 widget built into LW? Or is there a Firefox plugin I should be using?

(Personally I think the whole "spoilers" thing is ridiculous, but I'm fine with this as site policy if it's easy to do.)

Replies from: kpreid, Document
comment by kpreid · 2010-02-19T23:06:31.730Z · LW(p) · GW(p)

I use this “bookmarklet”:

javascript:inText=window.getSelection()+'';if(inText=='')%7Bvoid(inText=prompt('Phrase...',''))%7D;if(!inText)%7BoutText='No%20text%20selected'%7Delse%7BoutText='';for(i=0;i%3CinText.length;i++)%7Bt=inText.charCodeAt(i);if((t%3E64&&t%3C78)%7C%7C(t%3E96&&t%3C110))%7Bt+=13%7Delse%7Bif((t%3E77&&t%3C91)%7C%7C(t%3E109&&t%3C123))%7Bt-=13%7D%7DoutText+=String.fromCharCode(t)%7D%7Dalert(outText)

[Not written by me; I have no record of where I obtained it.]

Put it in your bookmarks bar in most web browsers, and when you click it, it will display the rot13 of the selected text, or prompt you for text if there isn't any selection. In Safari the first entries in the bookmarks bar get shortcuts ⌘1, ⌘2, ..., so it ends up that to rot13 something on a web page I just need to select it and press ⌘3.
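
For anyone who'd rather read it than reverse-engineer the URL encoding, this is roughly what the bookmarklet does once decoded into plain JavaScript (a reconstruction, so treat the variable names as illustrative):

```javascript
// Readable reconstruction of the rot13 bookmarklet above.
// Takes the current selection (or prompts for text) and alerts its rot13.
var inText = window.getSelection() + '';
if (inText === '') {
  inText = prompt('Phrase...', '');
}
var outText;
if (!inText) {
  outText = 'No text selected';
} else {
  outText = '';
  for (var i = 0; i < inText.length; i++) {
    var t = inText.charCodeAt(i);
    // A-M (65-77) and a-m (97-109) shift forward by 13...
    if ((t > 64 && t < 78) || (t > 96 && t < 110)) {
      t += 13;
    // ...N-Z (78-90) and n-z (110-122) shift back by 13; everything else is left alone.
    } else if ((t > 77 && t < 91) || (t > 109 && t < 123)) {
      t -= 13;
    }
    outText += String.fromCharCode(t);
  }
}
alert(outText);
```

Since rot13 is its own inverse, running it twice returns the original text, which is why the same bookmarklet both encodes and decodes.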

Replies from: dclayh
comment by dclayh · 2010-02-19T23:28:13.156Z · LW(p) · GW(p)

Excellent, thank you.

comment by Document · 2010-02-19T21:48:23.092Z · LW(p) · GW(p)

www.rot13.com ?

Replies from: dclayh
comment by dclayh · 2010-02-19T22:01:39.016Z · LW(p) · GW(p)

Good, although having to open a new tab still seems less than maximally convenient.

(Actually, doing a hidden-text thing like TVTropes does would be pretty good, come to think of it.)

comment by Document · 2010-02-20T18:24:14.715Z · LW(p) · GW(p)

Downvoted for sarcasm.

comment by Paul Crowley (ciphergoth) · 2010-02-19T14:20:10.970Z · LW(p) · GW(p)

I love that comic, but I think the statute of limitations takes more than five years to expire...

comment by Risto_Saarelma · 2012-01-26T14:57:42.623Z · LW(p) · GW(p)

Stuff that's not really part of the mainstream popular culture is more spoilable. Cowboy Bebop came out before The Sixth Sense, but I'd still assume open spoilers for The Sixth Sense wouldn't be as bad as ones for Cowboy Bebop on an English-language forum.

Replies from: gwern
comment by gwern · 2012-01-26T15:18:37.919Z · LW(p) · GW(p)

I don't think that's true either. The people in the study were specifically screened to not have heard of the stories used.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-01-26T16:35:26.027Z · LW(p) · GW(p)

I was assuming the general social norms around spoilers as the standard for "spoilability", not whether it actually ruins the entertainment for those who don't yet know the story.

comment by wnoise · 2010-02-20T00:17:17.475Z · LW(p) · GW(p)

EDIT: Romeo and Juliet die at the end.

Replies from: dclayh, mattnewport
comment by mattnewport · 2010-02-20T00:26:12.650Z · LW(p) · GW(p)

Bruce Willis was dead all along.

comment by i77 · 2010-02-19T12:06:07.435Z · LW(p) · GW(p)

Fullmetal Alchemist Brotherhood has (SPOILER):

an almost literally unboxed unfriendly "AI" as its main bad guy. Made by pseudomagical ("alchemy") means, but still.

Replies from: Anubhav
comment by Anubhav · 2012-01-26T14:39:22.815Z · LW(p) · GW(p)

It bugs me that people don't think of this one more often. It's basically an anime about how science affects the world and its practitioners.

(Disclaimer: Far too many convenient coincidences/idiot balls IIRC. It's a prime target for a rationalist rewrite.)

comment by knb · 2010-02-20T09:46:11.778Z · LW(p) · GW(p)

If Death Note counts, then Haruhi might count as well. Deals with anthropics and weird AIs in a tangential way. The anime is awesome, but not as good as it could have been.

comment by Kevin · 2010-02-18T10:12:07.717Z · LW(p) · GW(p)

UFO sightings revealed in UK archive files from 1990s

http://news.bbc.co.uk/1/hi/uk/8520486.stm

Replies from: Jack
comment by Jack · 2010-02-18T11:59:12.356Z · LW(p) · GW(p)

I don't know that this needs to be voted down. I assume Kevin didn't post the link as evidence that aliens from other planets are visiting us. Rather, it is interesting data pertaining to rationality that needs to be explained. People claim to be seeing things that are almost certainly not there! Or the UK was testing a new spy plane throughout the 90's that they still haven't announced. Particularly interesting is the suggestion that UFOs being sighted (maybe it should be hallucinated) these days are different from the UFOs sighted in the past because of new technologies and popular depictions of those technologies. I'm a little concerned about what predictions this theory makes though. Can we expect this decade's UFO sightings to include Cylon base stars? Popular culture produces a lot of images and it is damn easy to find images that match UFO sightings in retrospect. Was "District 9" popular enough that some alien sightings from this decade will look like the 'prawns' instead of the usual 'Grey' archetype?

Does anyone know if there have ever been any serious studies on the subject? It seems like fertile research ground but also like the kind of thing academia would look at as too silly to spend time on.

Replies from: CronoDAS
comment by CronoDAS · 2010-02-18T23:20:01.668Z · LW(p) · GW(p)

The "Grey" archetype has peen traced back to an episode of The Outer Limits, I think.

comment by JamesAndrix · 2010-02-17T18:23:03.329Z · LW(p) · GW(p)

"The Mathematical Foundations of Consciousness," a lecture by Professor Gregg Zuckerman of Yale University

http://polymathism.com/

comment by Torben · 2010-02-16T21:00:03.623Z · LW(p) · GW(p)

I've been trying to find the original post to explain why it allegedly is so very likely that we live in a simulation, but I've had little luck. Does anyone have a link handy?

Replies from: Cyan
comment by Cyan · 2010-02-16T21:01:56.773Z · LW(p) · GW(p)

Are you living in a computer simulation actually argues for a disjunction that includes "we are almost certainly living in a computer simulation" along with two other statements.

Replies from: Eliezer_Yudkowsky, zero_call, Document
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-16T22:43:09.744Z · LW(p) · GW(p)

That's the difference between the Simulation Argument and the Simulation Hypothesis. The Simulation Argument is "you must deny one of these three statements" and the Simulation Hypothesis is "the statement to be denied is 'I am not in a computer simulation'".

comment by zero_call · 2010-02-17T04:45:26.804Z · LW(p) · GW(p)

That's a really neat link. Thanks. That's a paper by the director of FHI, Nick Bostrom, also one of the sponsors of LW. Just to summarize and to discuss, it essentially sets up three possibilities: one, that post-human civilizations aren't significantly interested in running earth-like simulations; two, that post-human civilizations just don't make it (e.g., doomsday scenarios); or three, that we actually live in a computer simulation ourselves. It doesn't really argue that the third scenario is so likely; it just (roughly) establishes that at least one of these scenarios must hold. This all comes under the main (fairly well established) belief that future computing power is capable of these sorts of large-scale simulations.

The argument and the paper are actually pretty reasonable, but the question of whether or not post-human civilizations would want to run earth-like simulations is the sticking point. Sure, it's possible, but the resources required are huge, the upkeep involved, and so on...

I guess another main criticism you might make of the paper is that it relies pretty heavily on "Drake's equation"-style reasoning, where you don't really know if you've gotten all the dependencies correct. It's still valid, just highly simplistic, and so somewhat suspicious on those grounds. And to boot, I think his N_sub(I) variable is actually mis-indicated... but maybe I was just reading a typoed draft or misunderstanding.

Maybe most interestingly, if you decide we're in a simulation, then you have to wonder if there isn't a long chain of father/grandfather/great-granddad/etc. simulations, and the guys that are simulating us are just being simulated themselves. Anyways this is getting long so I'll just recommend the article and leave it here.

comment by Document · 2010-02-17T05:25:51.418Z · LW(p) · GW(p)

It confuses me slightly that, from superficial glances, the discussion there and in threads like this one focuses on "ancestor" simulations, rather than simulations run by five-dimensional cephalopods. Ryan North got it right when he had T-Rex say "and not necessarily our own", but then he seems to get confused when he says "a 1:1 simulation of a universe wouldn't work" - why not?

Personally, I like Wei Dai's conclusion that we both are and aren't in a simulation.

Replies from: SirBacon
comment by SirBacon · 2010-02-17T09:53:58.463Z · LW(p) · GW(p)

You are right to be confused. The idea that the simulators would necessarily have human-like motives can only be justified on anthropocentric grounds - whatever is out there, it must be like us.

Anything capable of running us as a simulation might exist in any arbitrarily strange physical environment that allowed enough processing power for the job. There is no basis for the assumption that simulators would have humanly comprehensible motives or a similar physical environment.

The simulation problem requires that we think about our entire perceived universe as a single point in possible-universe-space, and it is not possible to extrapolate from this one point.

comment by AdeleneDawner · 2010-02-26T20:03:15.326Z · LW(p) · GW(p)

This is thoroughly hypothetical, but if there were going to be an unofficial, more social sister site for LW, what name would you suggest for it?

Replies from: Morendil
comment by Morendil · 2010-02-26T20:08:02.359Z · LW(p) · GW(p)

Untrusted Hardware - to serve as a constant reminder.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-02-26T20:17:53.100Z · LW(p) · GW(p)

I like that, though I think there are other variations on the theme that I'd like better. "Faulty Hardware", perhaps.

comment by Kevin · 2010-02-25T13:34:50.749Z · LW(p) · GW(p)

I should really start taking fish oil supplements again. I would especially encourage anyone with children to make sure they get sufficient fish oil while their brains are growing.

http://cognitivefun.net/talk/post/18427

comment by thomblake · 2010-02-24T14:12:11.674Z · LW(p) · GW(p)

I've realized that having my and others' karma listed feels very similar to when Gemstone III started listing everyone's experience level.

The question remains: how much karma to level up?

comment by Kevin · 2010-02-24T10:42:23.602Z · LW(p) · GW(p)

Montana's No Speed Limit Safety Paradox

http://news.ycombinator.com/item?id=1146684

comment by Kevin · 2010-02-24T10:41:26.311Z · LW(p) · GW(p)

Italian Court Finds Google Violated Privacy

http://www.nytimes.com/2010/02/25/technology/companies/25google.html

Replies from: Morendil
comment by Morendil · 2010-02-24T10:44:38.094Z · LW(p) · GW(p)

What makes this relevant to LW participants?

Replies from: Kevin
comment by Kevin · 2010-02-24T11:00:39.046Z · LW(p) · GW(p)

Maybe an Italian court could find that CEV is a violation of local privacy laws.

Also, it could serve as general notice to keep your internet related businesses out of Italy and the Italian court system.

comment by Kevin · 2010-02-24T09:45:20.850Z · LW(p) · GW(p)

Yuri's Night: Bay Area 2010

http://news.ycombinator.com/item?id=1147468

comment by byrnema · 2010-02-21T05:18:30.416Z · LW(p) · GW(p)

This comment is a response to the claim that Gould's separate magisteria idea is not conceptually coherent. While I don't view reality parsed this way, I thought I would make an effort to establish its coherence and self-consistency (and relevance under certain conditions).

In this comment, by dualism, I'll mean the world view of two separate magisteria; one for science and one for faith. There are other, related meanings of dualism but I do not intend them here.

Physical materialism assumes monism -- there is a single, external reality that we have limited knowledge and awareness of. Awareness and knowledge of this reality come through our senses, by interaction with reality. Dualism is rejected with a straightforward argument: you cannot have awareness of something without interaction with it. If you interact with it, then it is part of the one reality we were already talking about.

Dualists persist: The empirical reality X that physical materialists recognize is only part of everything that matters. There is also a dual reality -- X', which is in some way independent of (or outside of) X. The rules in X' are different than the rules in X. For example, epistemology (and sometimes even logic) appears to work differently, or less directly.

Some immediate questions in response to dualism are:

(1) If we are located in X, how does interaction with X' work?

(2) Is it actually coherent to think of some component X' being outside of X? Why don't we just have X expand to absorb it?

Relation to the Simulation Hypothesis

An immediate, possibly too-quick answer to the second question is 'yes, dualism is coherent because it is structurally isomorphic to the simulation hypothesis'. If we were in a simulation, X and X' would be a natural way to parse reality. X would be the simulation and X' would be the reality outside the simulation. Clearly, the rules could be different within X compared to within X'. People simulated in X could deduce the existence of X' in a variety of ways:

(a) by observing the incompleteness of X (for example, the inexplicable deus ex machina appearance of random numbers)

(b) by observing temporal, spatial or logical inconsistencies in X

(c) Privileged information given to them directly about X', built into the simulation in ways that don't need to be consistent with other rules in X

While dualists aren't claiming that empirical reality is a simulation, by analogy we could consider that (a), (b) or (c) would be cause for deducing X' and having a dualistic world view. I will visit each of these in reverse order.

Re: (c) Privileged information given to them directly about X', built into the simulation in ways that don't need to be consistent with other rules in X

Many (most?) religions are based on elements of divine revelation; special ways that God has of communicating directly to us in some way separate and independent of ordinary empirical experience. Being saved, speaking in tongues, visions, etc. I've heard it argued here on LW that this sort of experience would be the most rational reason for theism; they might be delusional but at least they are basing their beliefs on empirical sense experience. They would be justified in having a dualistic world view if they perceived their visions as distinct from (for example, having different rules than or existing in a different plane than) empirical reality. However, many theists (including myself) do not claim experience of divine revelation.

Re: (b) by observing temporal, spatial or logical inconsistencies in X

I think that in the past, this was a big reason for belief in the spiritual realm. However, the success of the scientific world view has shot this completely out of the water. No one believes that X is inconsistent; while there are 'gaps' in our knowledge, we have limitless faith in science to resolve everything in X that can be resolved, one way or another. Outside X is another matter, of course, which brings us to (a). I proceed to (a) with the counter-argument to (b) firmly in hand: reality is explainable and, whether we know the rules or not, there are rules for the phenomena in X, and rules for the rules in X, and, if not rules, then a necessary logical deduction that can be made.

Re: (a) by observing the incompleteness of X

Can everything in X, in theory, be explained within X? If you believe this, then you have no reason to be dissatisfied with monism. (It happens that I am a monist.) But what if we could point to just one thing that could not be explained in X? Just one thing that could not even be explained in theory because to do so would result in some contradiction in X? Would that give us cause to deduce X'?

Example 1: True Randomness

There are many processes that are approximated as random. The diffusion of a dye in a liquid, the search path of an amoeba looking for food, the collapse of a symmetric structure to one direction or another. However, all of these processes are considered deterministic -- if we knew all the relevant states of the system and had sufficient computing power we could accurately predict the outcome via simulation; no random numbers needed.

Nevertheless, there are some processes that appear as though they could be truly "random". That is, occurring spontaneously independent of any mechanism determining the outcome. For example, the 'spontaneous' creation of particles in a vacuum, or any other phenomenon described in an advanced physics journal with 'spontaneous' in the title. I think that if you are a self-consistent physical materialist, you should deny the possibility of random or spontaneous events. I do: I think there must be a mechanism for everything, whether we have access to knowledge of it or not.

To the best of my knowledge, our understanding of these 'spontaneous' phenomena leaves room for mechanical explanations. Maybe this and that are involved, we just don't know.

Yet quantum mechanics is beginning to reveal ways in which a scientific theory could predict the inconsistency of non-randomness. Bell's theorem comes close, showing that some correlations cannot be accounted for by any local mechanism. Fortunately, there is still room for other interpretations, including non-local mechanisms and many-worlds.

Example 2: Objective Value

Of any kind, including objective morality. This remains an unsolved problem in physical materialism, if you insist upon it, because its existence seems dependent upon some authority (e.g., a book) that we have no evidence of in X. If a person believes in objective morality a priori, they may be a dualist since they deduce the existence of such an authority, embedded within X, but distinct from X in that it cannot be directly observed or interacted with. (Its existence is only inferred.)

Example 3: Consciousness

Another unsolved problem in physical materialism. I'm not familiar with them, but I understand that some dualists have arguments for why consciousness could not be explained within X.


My Position

It is often logistically difficult to defend a position you don't represent. The reason for this is that criticisms against the position will be directed at you personally, even though you do not hold the position, and then further you might be tempted to continue defending the position with counter-arguments, which further confuses your identity. I am sympathetic to the dualist worldview as coherent and rational, but not globally scientific. I greatly prefer the physical materialist, scientific worldview. I have a very strong faith that everything in X can be explained within X; this faith is so strong that I consider it theistic, and call myself a theist.

Replies from: Sniffnoy, Jack, SilasBarta
comment by Sniffnoy · 2010-02-21T07:46:34.672Z · LW(p) · GW(p)

I don't understand why true randomness is a problem. Is there something so wrong with probabilistic determinism?

Replies from: byrnema
comment by byrnema · 2010-02-22T22:15:56.148Z · LW(p) · GW(p)

I think so. If a process is truly random, does this mean there was no mechanism for it? How was it determined? It seems to me that picking a random number is something a closed system cannot possibly do.

Replies from: RobinZ
comment by RobinZ · 2010-02-22T22:24:10.968Z · LW(p) · GW(p)

"Cannot possibly" is a very strong claim - I would hesitate to say anything much stronger than "should not be expected to".

Replies from: byrnema
comment by byrnema · 2010-02-22T22:29:48.543Z · LW(p) · GW(p)

You're correct of course.

But I'm 'sticking my neck out' on this one -- my intention was to signal this.

Replies from: RobinZ
comment by RobinZ · 2010-02-22T22:31:36.701Z · LW(p) · GW(p)

Admirable! I will read it as a rhetorical flourish, then.

comment by Jack · 2010-02-21T11:08:38.807Z · LW(p) · GW(p)

Re: Your definitions.

You appear to be conflating ontological views (physicalism and dualism usually refer to these sorts of views, views about what kinds of things exist) with epistemological views. There is nothing in the definition of physicalism that requires us to have knowledge of the external world and nothing in dualism that requires us to give up rationality or science. You can be a physicalist and still think someone is deceiving your senses, for example. Also, this might just be me, but "materialism" should be jettisoned as outdated. "Materialism" means that you believe everything that exists is matter. But there is no reason to think that word is even meaningful in our fundamental physics. Thus I prefer "physicalism", the belief that what exists is what physics tells us exists.

Re: the relation to the simulation hypothesis

If you haven't, you ought to read "Brains in a vat" by Hilary Putnam. It's just twenty pages or so. He argues that we cannot claim to be brains in vats (or in any kind of extreme skeptical scenario) because our language does not have the ability to refer to vats and computers outside our level of reality. When a brain in a vat says "vat" he is referring to some feature of the computer program that is being run for his brain. Thus he cannot refer to what we call the vat (the thing that holds his brain). I can explain further if that isn't clear. But one thing I got from the article is that we can understand the bizarre, muddled writings of substance dualists as trying to describe the vat! If you don't have any language that lets you refer to the vats you're going to sound pretty confusing. I find this pretty funny because the way Descartes supposedly gets out of extreme skepticism is partly by trying to prove substance dualism! Irony!

Anyway, I'm a little confused by the invocation of the simulation hypothesis because, while I'm willing to look at it as a kind of metaphysical dualist hypothesis, I can't see how our tools for learning the answer to this question would be in any way different from our general scientific tools. Metaphysics, such as we can say anything at all about it, is just an extension of science.

(a) by observing the incompleteness of X (for example, the inexplicable deus ex machina appearance of random numbers)
(b) by observing temporal, spatial or logical inconsistencies in X
(c) privileged information given to them directly about X', built into the simulation in ways that don't need to be consistent with other rules in X

Why not just assume these were features of X to begin with? If I see a temporal, spatial or logical inconsistency I'm going to revise my understanding of space, time and logic in X. Not posit X'.

But what if we could point to just one thing that could not be explained in X? Just one thing that could not even be explained in theory because to do so would result in some contradiction in X?

We would revise our theory of X to remove the contradiction. I know you know this happens all the time in science.

I'm having a hard time dealing with the rest given the conflation between epistemology and ontology. Yes, if there are properties (like value and consciousness) that cannot be reduced to the fundamental entities of physics, then physicalism is wrong. However, it does not follow that Bayesianism is wrong, that empiricism is wrong or that the scientific method is invalid in certain magisteria.

Replies from: byrnema, byrnema
comment by byrnema · 2010-02-22T22:37:21.481Z · LW(p) · GW(p)

I can't see how our tools for learning the answer to this question would be in any way different from our general scientific tools. Metaphysics, such as we can say anything at all about it, is just an extension of science.

Depending upon what you mean by 'science', this statement could range from trivially true to ... not true.

If by science you mean 'ways of knowing', then it is true; metaphysics is just an extension of science. However, scientific principles we've learned in X don't necessarily apply to X'. The rules in X' could be very strange, and need not be logical in the way physical laws are. (My opinion is that they still need to be mathematically logical.)

Why not just assume these were features of X to begin with? If I see a temporal, spatial or logical inconsistency I'm going to revise my understanding of space, time and logic in X. Not posit X'.

It has to be an inconsistency that is not resolvable in X.

Replies from: Jack
comment by Jack · 2010-02-22T23:43:32.064Z · LW(p) · GW(p)

Depending upon what you mean by 'science', this statement could range from trivially true to ... not true.

I mean scientific epistemology, of which I take Bayesian epistemology to be an idealized form. We update our probability distributions for all logically consistent hypotheses based on predictive accuracy, capacity, parsimony and some other pragmatic tie-breaker criteria. This is the same formula we should apply to metaphysics. However, the nature of the beast is that most of the work that should be done in metaphysics involves clearing the way for physics, biology, chemistry, psychology etc., not advancing a view with particular predictions that should be tested. Basically we're asking: what is a good way to think about the world?
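
(As a concrete aside, that "update our probability distributions over hypotheses" picture can be sketched in a few lines of Python. The coin hypotheses, the numbers, and the use of parsimony as a prior tie-breaker below are invented purely for illustration.)

```python
# Discrete Bayesian updating over two toy hypotheses about a coin.
hypotheses = {
    "fair coin": lambda flip: 0.5,
    "heads-biased coin": lambda flip: 0.8 if flip == "H" else 0.2,
}
prior = {"fair coin": 0.7, "heads-biased coin": 0.3}  # simpler hypothesis starts higher

def update(dist, observation):
    unnormalized = {h: dist[h] * hypotheses[h](observation) for h in dist}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

posterior = prior
for flip in "HHTHH":
    posterior = update(posterior, flip)
print(posterior)  # predictive accuracy shifts weight toward the biased-coin hypothesis
```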

Now it could be that there is some place, domain or mode where physicalist metaphysics is bad, counterproductive, unexplanatory etc. Then it makes sense to try to give an account of metaphysics that makes sense of this place, domain or mode while not losing the advantages physicalism provides elsewhere. A kludgey way of doing this is just to claim that there are different 'magisteria', one where physics defines our most basic ontology and another which is better described with some other theory. This other theory could be surprising and strange. But we still determine what that other theory looks like based on our scientific epistemology, and the fact that we are using two different theories needs to be justified by our scientific epistemology.

It has to be an inconsistency that is not resolvable in X.

I have a lot of trouble imagining how this could happen. Our physical concepts are incredibly flexible. Would asking for an example be insane of me?

Replies from: byrnema
comment by byrnema · 2010-02-23T00:46:33.713Z · LW(p) · GW(p)

Nevermind. I got part (b) and part (c) confused. Example of temporal, spatial or logical inconsistencies (possible but not actual) forthcoming.


I have a lot of trouble imagining how this could happen. Our physical concepts are incredibly flexible. Would asking for an example be insane of me?

I gave 3. (The existence of truly random phenomena, objective value, or a dual component to consciousness would all be inconsistent with X.) Did you even read my comment? I realize it was really long...

Replies from: byrnema
comment by byrnema · 2010-02-23T04:24:15.790Z · LW(p) · GW(p)

First example:

I leave the keys to my office on the counter, and realize this when I get to work. Damn, I need my keys! Maybe I left them in my car. Phew, there they are. I get home after work and there are my keys on the counter, just where I left them. So how did I get in my office? Well, shrug, I did.

More whimsical example:

Your name is Mario and it's your job to save the princess. You've got 323 coins and then you see: a black pixel.

comment by byrnema · 2010-02-22T22:24:40.306Z · LW(p) · GW(p)

You appear to be conflating ontological views (physicalism and dualism usually refer to these sorts of views, views about what kinds of things exist) with epistemological views.

I'm not surprised I am doing this, since my intention is to compare world views, which include ontological and epistemological views together. Is this a big deal?

My writing style must have been confusing, because you seem to be systematically misinterpreting my use of clauses. Anyway, three things that I didn't intend to write or even imply:

  • There is something in the definition of physicalism that requires us to have knowledge of the external world.

  • dualism requires us to give up rationality or science

  • it follows that Bayesianism is wrong, that empiricism is wrong or that the scientific method is invalid in certain magisteria. (the coherence of one world view doesn't negate the others, and, anyway, I haven't mentioned anything about a dichotomy between Bayesianism and dualism).

I don't know if this helps any. If it's a mess you can just drop this comment, I'll be leaving other ones.

Replies from: Jack
comment by Jack · 2010-02-22T23:04:38.663Z · LW(p) · GW(p)

If you didn't imply

dualism requires us to give up rationality or science

and that

it follows that Bayesianism is wrong, that empiricism is wrong or that the scientific method is invalid in certain magisteria. (the coherence of one world view doesn't negate the others, and, anyway, I haven't mentioned anything about a dichotomy between Bayesianism and dualism).

Then I am really confused by what you define as the dualism thesis in the original comment:

In this comment, by dualism, I'll mean the world view of two separate magisteria; one for science and one for faith.

...

I'm not surprised I am doing this, since my intention is to compare world views, which include ontological and epistemological views together. Is this a big deal?

Well I'm a physicalist but I'm a physicalist because I think that is the right view to hold given the evidence and my epistemology. So I'd have no problem at all adjusting my metaphysical view based on new evidence. But when a metaphysical view says that my epistemology ceases to apply in certain domains I get really cranky and confused. Maybe that isn't what you're suggesting or maybe I don't hold one of the worldviews you are comparing. If I were going to describe my world view I would probably stop at my epistemology and only if prompted would I continue with an ontology.

Replies from: byrnema
comment by byrnema · 2010-02-23T00:35:10.450Z · LW(p) · GW(p)

Well, my whole comment is just about whether dualism (as the two-separate-magisteria hypothesis) is coherent. Does that help?

Coherent doesn't mean correct, and certainly doesn't mean actual.

If you didn't imply dualism [implies negative things about monism] then I am really confused by what you define as the dualism thesis.

Again, I'm trying to determine if dualism is logically possible, not make any of the claims that dualism would make. Yet, what would be relevant is this question: does dualism make any implications that are logically impossible?

Replies from: Jack
comment by Jack · 2010-02-23T02:54:56.560Z · LW(p) · GW(p)

If you didn't imply dualism [implies negative things about monism] then I am really confused by what you define as the dualism thesis.

Again, I'm trying to determine if dualism is logically possible, not make any of the claims that dualism would make. Yet, what would be relevant is this question: does dualism make any implications that are logically impossible?

No, my problem wasn't with the fact that you didn't mean to imply negative things about monism. My confusion arises from the fact that your definition of dualism says that there is some domain/space/mode, i.e. magisterium, which we do not learn about through science. Specifically you say "two separate magisteria; one for science and one for faith." The obvious interpretation of this is that dualism implies a limit on science. It seems to imply that Bayesianism or empiricism or the scientific method or some other aspect of "SCIENCE" is not valid in the "faith" magisterium. But you say you are not implying this. Thus my confusion.

Now I'm actually okay with magisteria where science isn't involved but these aren't domains where the term "propositional knowledge" meaningfully applies. Like art or a game. Gould appeared to suggest that there are religious facts (in a non-anthropological sense) which I do think is nonsense. But I'm actually pretty sympathetic to so-called non-realist theology (though a lot of it seems to have a pretty obnoxious post-modern undertone that suggests non-realism about everything).

Replies from: byrnema
comment by byrnema · 2010-02-23T03:45:45.560Z · LW(p) · GW(p)

Oh, I see! You were confused by my statement that one magisterium is for science and one is for faith when I simultaneously seemed not to object in any way if you wanted to assert that science applies everywhere.

In the statement, 'one magisterium is for science', 'science' must be meant in some limited sense. Specifically, I guess, the set of scientific facts and principles we've learned that apply to X.

Maybe this could happen in Flatland. X is a two-dimensional world and the people there learn rules that apply to 2D. But Flatland is embedded in a 3D world X'. I'm not saying the people in Flatland can't comprehend X' with a different set of rules, but they would be justified in parsing their world as X and X' -- especially if they usually experience 2D things but encounter 3D things only when they happen to collect in a square with a plus sign affixed to one side.

Replies from: Jack
comment by Jack · 2010-02-23T04:09:06.854Z · LW(p) · GW(p)

So here is something that looks like it would qualify as a reason for the Flatlanders to reject their two-dimensional science. In Flatland an object that is trapped in a square cannot escape. To a Flatlander, seeing an object escape a box is going to look like magic. They will be forced to question their most basic beliefs about the nature of the world. Would this count as an inconsistency that cannot be resolved with their scientific facts and principles... the kind of thing that would make it reasonable to believe in an additional magisterium?

Replies from: byrnema
comment by byrnema · 2010-02-23T04:35:48.361Z · LW(p) · GW(p)

Yeah.

So if they wanted to be monists, they would reject their 2D science and say that while 2D science seems to be a good approximation for most things, it's only an approximation, since reality apparently enables square-escape. They would look for extensions of 2D science that make sense and are consistent with what they observe about square-escape, but just haven't solved the problem yet.

If they wanted to be dualists, they would say that in one magisterium, 2D science applies. Any non-2D stuff that goes on belongs to that separate, independent magisterium they'll call Xhi, a word which is really just a placeholder for 'the third dimension' until they discover it.

Replies from: Jack
comment by Jack · 2010-02-23T04:56:42.821Z · LW(p) · GW(p)

Will the Flatlanders theorize about Xhi? Will they have knowledge of it? Are there facts about Xhi?

Replies from: byrnema
comment by byrnema · 2010-02-23T05:11:26.349Z · LW(p) · GW(p)

Why do you ask?

Replies from: Jack
comment by Jack · 2010-02-23T05:25:30.021Z · LW(p) · GW(p)

I was just trying to clarify my interpretation of what you're saying. Because if they are theorizing about Xhi, if there are facts about Xhi and if they are seeking knowledge of it, it seems clear that they ought to be doing science (in the general epistemological sense I was using earlier) to form these theories and discover these facts. This of course does not demonstrate that the two magisteria, as you've formulated them, are incoherent.

But I'm not sure if you are talking about the same thing Gould (and presumably Eliezer) are talking about. I took Gould to be saying that this second magisterium isn't just a subject or set of subjects about which our particular scientific facts and scientific principles can say nothing. Rather, I believe Gould is saying that the magisterium of faith consists of areas of thought or subjects for which the scientific community, the scientific method and inductive empiricism itself cease to apply. Moreover, they don't only not apply because we've chosen a way of seeing these areas of thought that doesn't involve scientific epistemology; they don't apply as a matter of principle -- it is a category error to try to apply the tools of science to the domain of religion.

Edit: And like I said: that looks like nonsense to me.

comment by SilasBarta · 2010-02-22T22:40:12.211Z · LW(p) · GW(p)

This is thorough enough and long enough to merit posting as a top-level, IMO.

comment by SK2 (lunchbox) · 2010-02-20T23:09:25.055Z · LW(p) · GW(p)

Exercising "rational" self-control can be very unpleasant, therefore resulting in disutility.

Example 1: When I buy an interesting-looking book on Amazon, I can either have it shipped to me in 8 days for free, or 2 days for a few bucks. The naive rational thing to do is to select the free shipping, but you know what? That 8-day wait is more unpleasant than spending a few bucks.

Example 2: When I come home from the grocery store I'm tempted to eat all the tastiest food first. It would be more "emotionally intelligent" to spread it out over the course of the week. But that requires a lot of unpleasant resistance to temptation. Also, the plain food seems more appealing when I'm hungry and it's the only thing in my fridge.

Of course, exercising restraint probably builds willpower, a good thing in the long run. But in some cases we should admit that our willpower is only so elastic, and that the most rational thing to do is to give in to our impulses.

What are some other seemingly "irrational" things we do that are in fact rational when we factor in the pleasantness of doing them?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-02-21T23:49:48.653Z · LW(p) · GW(p)

What are some other seemingly "irrational" things we do that are in fact rational when we factor in the pleasantness of doing them?

Relevant paper: Lay Rationalism and Inconsistency between Predicted Experience and Decision

Replies from: lunchbox
comment by SK2 (lunchbox) · 2010-02-22T01:40:01.197Z · LW(p) · GW(p)

Thanks Nick. That paper looks very interesting.

comment by spriteless · 2010-02-19T20:45:00.988Z · LW(p) · GW(p)

Is there a Facebook group I can spam my friends to join to save the world via Craigslist ads yet?

Replies from: Kevin
comment by Kevin · 2010-02-19T22:04:05.637Z · LW(p) · GW(p)

We are meticulously planning our approach in private now. It'll be a while before we start a Facebook group but I will definitely let all of LW know when we get there.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-19T22:31:50.640Z · LW(p) · GW(p)

Er, who's planning this? Is Michael Vassar in on it?

Replies from: Kevin
comment by Kevin · 2010-02-20T02:37:04.559Z · LW(p) · GW(p)

Yes

comment by brazil84 · 2010-02-17T22:04:56.291Z · LW(p) · GW(p)

Can somebody give me a link to that bizarre "free food" video Eliezer once linked to? Thanks!!

comment by Mike Bishop (MichaelBishop) · 2010-02-17T19:11:26.279Z · LW(p) · GW(p)

Have people previously tried/discussed this calibration diagnostic?

http://projectionpoint.com/

Replies from: Cyan, Nisan, AdeleneDawner, Alicorn
comment by Cyan · 2010-02-17T19:13:28.182Z · LW(p) · GW(p)

I think that link was posted earlier. I got an average score; don't recall the exact number. I was disappointed in my performance, but tsuyoku naritai.

comment by Nisan · 2010-02-22T18:25:42.228Z · LW(p) · GW(p)

Cool, I got a 90. This quiz actually tests your calibration on estimating the veracity of a very special class of statement, for which your prior is .5 and which is often deliberately tricky. To give a (made-up) example:

"Henry VI defeated Richard III at the Battle of Bosworth Field in 1485." (It was Henry VII, not Henry VI.)

This class of statement doesn't show up that often in real life.
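
(For what it's worth, here is one way such a calibration score could be computed -- a Brier-score sketch in Python. I don't know what formula projectionpoint.com actually uses; this is just an assumed stand-in to make "calibration" concrete: confident-and-right beats hedging at 0.5, and confident-and-wrong is punished hard.)

```python
# Brier score: mean squared error between stated probabilities and 0/1 outcomes.
# 0.0 is perfect; always answering 0.5 scores 0.25.
def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02  -- confident and right
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25  -- pure hedging
print(brier_score([0.9, 0.1, 0.9], [1, 1, 0]))  # ~0.54 -- confident and wrong
```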

Replies from: DanArmak, GuySrinivasan
comment by DanArmak · 2010-02-22T18:36:03.295Z · LW(p) · GW(p)

is often deliberately tricky

Then you're not really guessing at the truth of the statement. Instead you're guessing at the state of mind of the examiner, which involves a very different set of heuristics and biases.

Keep doing that and you may end up thinking, "maybe the correct answer is that he didn't defeat him because it was a Pyrrhic victory? Maybe Bosworth is spelled wrong? Maybe there's a dispute over the right year due to difficulties inherent in dating medieval events?"

comment by SarahNibs (GuySrinivasan) · 2010-02-22T18:34:18.481Z · LW(p) · GW(p)

for which your prior is .5

I took the quiz without assuming my prior was .5, and got a very very low score.

comment by AdeleneDawner · 2010-02-17T23:24:13.175Z · LW(p) · GW(p)

I just tried it. It may be noteworthy that I'm having a 'bad brain day' - low-grade headache with mild to moderate fogginess. I expected to do poorly, and wound up being pretty conservative with my guesses as a result of that. I got an 86.

Replies from: gwern
comment by Alicorn · 2010-02-17T19:24:45.276Z · LW(p) · GW(p)

I hadn't seen it before. I just tried it and got a 63.

comment by timtyler · 2010-02-17T15:56:56.712Z · LW(p) · GW(p)

There seems to be a bit of a terminology mess in the area of intelligent systems.

There are generally-intelligent systems, narrowly-intelligent systems, and an umbrella category of all goal-directed systems.

How about the following:

  • we call the narrowly-intelligent systems "experts", and their degree of expertise their "expertness";

  • we call the generally-intelligent systems "intelligences", and their degree of cleverness their "intelligence";

  • we call the umbrella category of goal-directed agents "competent systems" [it seems relatively unlikely that one would need to use a generic term for the "competence" of such systems].

Improvements / suggestions / criticism would be welcome.

comment by Tiiba · 2010-02-17T06:18:30.742Z · LW(p) · GW(p)

Assuming that some cryonics patient X ever wakes up, what probability do you assign to each of these propositions?

1) X will be glad he did it.
2) X will regret the decision.
3) X will wish he was never born.

Reasoning would be appreciated.

Related to this post, which got no replies:
http://lesswrong.com/lw/1mc/normal_cryonics/1h8j

Replies from: Jack
comment by Jack · 2010-02-17T06:24:43.223Z · LW(p) · GW(p)
  1. 80
  2. 20
  3. Pretty small.
Replies from: CronoDAS
comment by CronoDAS · 2010-02-17T07:46:24.858Z · LW(p) · GW(p)

If he doesn't already prefer not to have existed, then that probably won't change upon waking up.

Replies from: Jack
comment by Jack · 2010-02-17T08:01:17.661Z · LW(p) · GW(p)

I'm presuming the patient hasn't just woken up and has been introduced to society in some way or has attempted to re-enter it. I think there is a small but non-negligible probability that some patients will be so alienated that pretty serious depression could result. They may even become suicidal. Perhaps someone who had 'died' young would then wish he had never been born as he would have few pre-freeze memories to cherish.

comment by Document · 2010-02-16T22:04:42.452Z · LW(p) · GW(p)

I remember a post on this site where someone wondered whether a medieval atheist could really confront the certainty of death that existed back then, with no waffling or reaching for false hopes. Or something vaguely along those lines. Am I remembering accurately, and if so, can someone link it?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-16T22:24:43.489Z · LW(p) · GW(p)

http://yudkowsky.net/other/yehuda ?

What would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another as you watched, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope.

Replies from: Document
comment by Document · 2010-02-17T01:30:02.612Z · LW(p) · GW(p)

I can't figure out if I read it there or here first, but that looks like the quote; thanks.

comment by MBlume · 2010-02-23T17:53:55.776Z · LW(p) · GW(p)

Please disregard this comment. It is a

comment by timtyler · 2010-02-16T08:47:39.103Z · LW(p) · GW(p)

Geek rapture naysaying:

"Jaron Lanier: Alan Turing and the Tech World's New Religion"

Replies from: Eliezer_Yudkowsky, ciphergoth, Furcas
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-16T17:53:16.689Z · LW(p) · GW(p)

Pointing people to Lanier as a naysayer isn't playing fair; it just makes the opposition look crazy.

Replies from: timtyler
comment by timtyler · 2010-02-16T21:48:18.804Z · LW(p) · GW(p)

Alas, Turing's "Nazi fascism" and "death denial" don't seem to appeal much to people around here. I figured that the residents would enjoy watching this sort of material.

Replies from: mattnewport
comment by mattnewport · 2010-02-16T21:52:19.193Z · LW(p) · GW(p)

I can't speak for anyone else, but it's Jaron Lanier who doesn't appeal much to me. I barely read to the end of the sentence after seeing his name; I certainly wasn't going to click the link and subject myself to his inane punditry, so I have no opinion on the specific content.

comment by Paul Crowley (ciphergoth) · 2010-02-16T10:40:57.587Z · LW(p) · GW(p)

Can't watch video from here, and in any case, given the much greater investment of time required, I'd want to know more about it before starting to watch. Anyone who's seen it care to say if there are any new or good arguments in there?

Replies from: Furcas, whpearson, whpearson
comment by Furcas · 2010-02-16T17:21:51.321Z · LW(p) · GW(p)

It's the usual 'Rapture of the Nerds' spiel.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-16T17:38:10.765Z · LW(p) · GW(p)

Thanks to you and whpearson for taking the time to find out so the rest of us don't have to. Voted timtyler down for wasting your and our time.

Edit: removed downvote, see below.

Replies from: timtyler
comment by timtyler · 2010-02-16T21:37:01.473Z · LW(p) · GW(p)

Er, this is pretty relevant and on-topic material, IMHO!

Jaron Lanier is a fruitcake - but I figure most participants here already knew that.

You may not personally be interested in what famous geek critics have to say about "the Tech World's New Religion" - but it seems bad to assume that everyone here is like that.

Replies from: ciphergoth, gwern
comment by Paul Crowley (ciphergoth) · 2010-02-16T22:38:36.040Z · LW(p) · GW(p)

Hmm, I didn't see it that way. Removed downvote. But videos are a pain; you could do us a favour next time by saying a few more words about whether you're recommending it or just posting FYI. Otherwise there's a Gricean implication that you judge it worth our time, I think.

Replies from: SilasBarta, timtyler
comment by SilasBarta · 2010-02-16T23:19:26.933Z · LW(p) · GW(p)

There should be a policy, or strong norm, of "No summary, no link" when starting a thread with a suggested link. That summary should tell the key insights gained, and what about it you found unique.

I hate having to read a long article -- or worse, listen to a long recording -- and find out it's not much different from what I've heard a thousand times before. (That happens more than I would expect here.) Of course, you shouldn't withhold a link just because Silas (or anyone else) already read something similar ... but it tremendously helps to know in advance that it is something similar.

comment by timtyler · 2010-02-16T22:58:42.985Z · LW(p) · GW(p)

Yes, I don't like "teaser" links much either. I did give the author, title and a three word synopsis - but more would probably have helped. On the other hand, I didn't want to prejudice watchers too much by giving my own opinion up front.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-16T23:30:19.834Z · LW(p) · GW(p)

I get that. Can we encourage a norm of writing FYI when we want to avoid the implication that it's a recommendation?

comment by gwern · 2010-02-18T03:07:17.287Z · LW(p) · GW(p)

Jaron Lanier is a fruitcake - but I figure most participants here already knew that.

They may not; it takes a while for one to figure out that he really is a fruitcake - individual columns or essays tend to sound like perfectly respectable contrarianism. I first began reading his articles in Discover and it wasn't until his 'Digital Maoism' essay that "he really is a nut!" occurred to me.

comment by whpearson · 2010-02-16T22:14:38.162Z · LW(p) · GW(p)

Listening to the longer version isn't so bad. The snippet was definitely the most objectionable.

It appears that Lanier thinks AI is suffering from the puppet problem brought on by taking the Turing test too seriously. The puppet problem is that computers can be used to implement puppets: things that fake being intelligent. Imagine Omega makes a program for the Turing Test that looks intelligent by predicting you and having the program output intelligent-sounding responses at different times, so that you (and only you!) think it is intelligent but you are really talking to the advanced equivalent of an answerphone*. So he thinks that AIs are going to be puppets. Which is a semi-reasonable opinion to come to if you just look at chatbots.

However Lanier doesn't, but should, argue that computers can only be puppets.

Edited: For clarity.

*I think Eliezer said something like: if you see intelligent behaviour you should guess that there is an intelligence somewhere; it may just not be in the system that appears intelligent. I'm not organised enough to keep a quote file. Anyone?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2010-02-16T22:30:44.917Z · LW(p) · GW(p)

I think Eliezer said something like [...] Anyone?

"GAZP vs. GLUT":

If someday you come to understand consciousness, and look back, and see that there's a program you can write which will output confused philosophical discourse that sounds an awful lot like humans without itself being conscious - then when I ask "How did this program come to sound similar to humans?" the answer is that you wrote it to sound similar to conscious humans, rather than choosing on the criterion of similarity to something else.

comment by whpearson · 2010-02-16T12:05:56.909Z · LW(p) · GW(p)

In the short version, mainly saying the singularity is a nutty concept and making strange comments about Turing. It does not encourage me to watch the longer version.

I've found the audio for the longer version. Which I may listen to at some point.

comment by Furcas · 2010-02-16T17:21:26.096Z · LW(p) · GW(p)

It's the usual 'Rapture of the Nerds' spiel.

comment by timtyler · 2010-02-16T23:06:07.999Z · LW(p) · GW(p)

Looking at various definitions, "intelligence" and "instrumental rationality" seem to often be used to mean much the same thing.

Is this redundant terminology? What should be done about that?

Refs:

Replies from: Jack
comment by Jack · 2010-02-16T23:46:51.874Z · LW(p) · GW(p)

They don't mean the same thing at all, and the Wikipedia entries seem to reflect that.

Replies from: timtyler
comment by timtyler · 2010-02-16T23:51:55.848Z · LW(p) · GW(p)

Check here, for example, though:

http://www.vetta.org/definitions-of-intelligence/

The "AI researcher" definitions in particular seem to be much the same as the definition of instrumental rationality.

Replies from: Jack, SilasBarta
comment by Jack · 2010-02-17T00:29:40.687Z · LW(p) · GW(p)

I had a pretty long comment about Jurgen Habermas but instead I'll just say:

I'm not really sure the term means anything outside of the assumptions and framework of Critical Theory, unless you're talking about a totally different thing. And given those assumptions and framework you can't possibly say instrumental rationality is the same thing as intelligence, since the whole coinage exists to distinguish it from communicative rationality. But the framework this community is operating under is so far removed from Critical Theory that I don't even know how to talk about it here.

My guess is not many people here recognize any other kind of rationality and so your question just becomes: are rationality and intelligence the same thing?

Replies from: timtyler
comment by timtyler · 2010-02-17T21:18:31.714Z · LW(p) · GW(p)

"Rationality" seems to be most frequently used here to mean "epistemic rationality", not "instrumental rationality". It seems to be one of this community's oddities. ...and yes, the "critical theory" term.

comment by SilasBarta · 2010-02-16T23:56:23.083Z · LW(p) · GW(p)

Semi-OT: The problem with the AI researchers' definitions of intelligence is that they are written as if there can be some kind of perfect intelligence, yet they end up in contradictions like, "I've developed the maximally intelligent being, but it's completely useless."

(Mr. Vetta (Shane Legg) and Marcus Hutter's AIXI, I'm looking in your general direction here.)

Replies from: timtyler
comment by timtyler · 2010-02-17T09:31:43.474Z · LW(p) · GW(p)

The idea of universal intelligence is not a bug, it is a feature. It is mainly due to Legg/Hutter that we have that concept in the first place - and it is a fine one.

Replies from: SilasBarta
comment by SilasBarta · 2010-02-17T22:29:10.126Z · LW(p) · GW(p)

Not really. If you claim that a) intelligence is useful, and b) a maximally intelligent being that you have invented is useless ... you made a mistake somewhere.

And their work is just the formalization of Solomonoff induction -- the difficulty is in the derivation. People knew in advance that you can find the shortest theory to fit the data by taking a language, and then iterating up from the shortest expressible program until you find one that matches the data -- it's just that it's not computable, which, for now, means useless, and the exponential approximation isn't much better.
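
(To make the "iterate up from the shortest expressible program" idea concrete, here is a toy Python sketch. It enumerates expressions over an invented six-token language rather than programs for a universal machine, so it illustrates the brute-force search, not Solomonoff induction or AIXI proper; even at this toy scale the number of candidates grows exponentially with length, which is exactly the tractability problem.)

```python
import itertools

ALPHABET = "n123+*"          # tokens for candidate expressions in one variable n
DATA = [2, 4, 6, 8, 10]      # observations at n = 1, 2, 3, 4, 5

def fits(expr, data):
    try:
        return all(eval(expr, {"n": n}) == y for n, y in enumerate(data, 1))
    except Exception:
        return False         # syntactically or semantically invalid candidate

def shortest_theory(data, max_len=5):
    # Enumerate by increasing length, so the first hit is a shortest fit.
    for length in range(1, max_len + 1):
        for tokens in itertools.product(ALPHABET, repeat=length):
            expr = "".join(tokens)
            if fits(expr, data):
                return expr
    return None

print(shortest_theory(DATA))  # e.g. "n+n" -- a shortest expression matching the data
```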

Can you identify any working, useful system based on AIXI?

Replies from: timtyler
comment by timtyler · 2010-02-17T23:03:32.977Z · LW(p) · GW(p)

I don't think you have a reference for b).

Solomonoff induction is concerned with sequence prediction - not decision theory. It is not a trivial extra step.

Replies from: SilasBarta
comment by SilasBarta · 2010-02-17T23:06:35.931Z · LW(p) · GW(p)

I don't think you have a reference for b).

Okay, I don't have a reference for them admitting that AIXI's useless -- but they acknowledge it's uncomputable, and don't have working code implementing it for an actual problem in a way better than existing "not intelligent" methods.

Solomonoff induction is concerned with sequence prediction - not decision theory. It is not a trivial extra step.

AIXI is also primarily concerned with sequence prediction and not decision theory.

Replies from: timtyler
comment by timtyler · 2010-02-17T23:14:17.829Z · LW(p) · GW(p)

"AIXI is a universal theory of sequential decision making akin to Solomonoff's celebrated universal theory of induction. Solomonoff derived an optimal way of predicting future data, given previous observations, provided the data is sampled from a computable probability distribution. AIXI extends this approach to an optimal decision making agent embedded in an unknown environment."

Replies from: SilasBarta
comment by SilasBarta · 2010-02-17T23:31:55.542Z · LW(p) · GW(p)

Okay, you're right, my apologies. The point about uncomputability and uselessness of the decision theory still stands.

Replies from: timtyler
comment by timtyler · 2010-02-18T09:34:08.007Z · LW(p) · GW(p)

Right - but they know that. AIXI is a self-confessed abstract model.

IMO, AIXI does have some marketing issues. For instance:

"The book also presents a preliminary computable AI theory. We construct an algorithm AIXItl, which is superior to any other time t and space l bounded agent."

That seems to be an inaccurate description, to me.

comment by PhilGoetz · 2010-02-18T18:16:27.935Z · LW(p) · GW(p)

I think that cryonics will be achieved by genetically modifying the person to be frozen so that they produce many of the same responses to freezing that animals which can be frozen/chilled and thawed (like wood frogs) do.

Once many people are frozen using this method, there will be little incentive to work on the much-more-difficult problem of freezing and thawing an unmodified person.

So I think people frozen using today's techniques may never be revived.

ADDED: That's "may never" as in "might never", not "will never".

Replies from: dclayh, RobinZ, Nick_Tarleton, Nick_Tarleton, Nick_Tarleton, Eliezer_Yudkowsky
comment by dclayh · 2010-02-18T18:35:38.164Z · LW(p) · GW(p)

I would think post-Singularity historians would like nothing better than to revive the earliest people they could.

comment by RobinZ · 2010-02-18T20:28:54.389Z · LW(p) · GW(p)

I don't understand your logic. Genetic modification to allow a person to survive freezing and thawing would be great ... but what has that to do with current cryonics patients?

It sounds rather like claiming that "flight will be achieved by genetically modifying persons to fly, so there will be little incentive to work on the much-more-difficult problem of flying an unmodified person". Or, to take a metaphor that suggests work already happening, "sight will be achieved by genetically modifying persons to see, so there will be little incentive to work on the much-more-difficult problem of sight-enabling an unmodified person".

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-18T23:32:45.048Z · LW(p) · GW(p)

It sounds rather like claiming that "flight will be achieved by genetically modifying persons to fly, so there will be little incentive to work on the much-more-difficult problem of flying an unmodified person".

Yes. And that's what I would say, if we didn't have airplanes, and I thought that modifying people to fly would be easier than building airplanes, regardless of the performance desired. None of those things are true.

Replies from: RobinZ
comment by RobinZ · 2010-02-18T23:35:56.895Z · LW(p) · GW(p)

You're right - bad metaphor. What about the vision one?

comment by Nick_Tarleton · 2010-02-18T20:34:05.571Z · LW(p) · GW(p)

Once many people are frozen using this method, there will be little incentive to work on the much-more-difficult problem of freezing and thawing an unmodified person.

I think you need to spell this step out; clearly, people have no idea what you're talking about. I assume you're saying something like "the future will have a satiable demand for revived cryonicists, which will be insufficient to revive everyone".

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-18T20:53:15.865Z · LW(p) · GW(p)

Once cryonics is regarded as a solved problem, and there is a mass market for cryonic revival using a particular suspension technology, there will be little economic or scientific incentive to try to solve the harder problem of reviving people who were suspended using older suspension technologies. Nobody will award a grant to study the problem. Nobody will fund a start-up to achieve it.

Those of you who expect people in the future to work on the problem of how to revive your brain that was suspended with today's technology: Are you also looking forward to all the cool new games they will have developed for the Playstation 3?

Replies from: Cyan, gregconen, Jordan, CronoDAS
comment by Cyan · 2010-02-18T20:59:31.802Z · LW(p) · GW(p)

Maybe not new games, but people do build emulators for obsolete platforms -- as freeware, even. Gift economies aren't uncommon when resources aren't scarce.

comment by gregconen · 2010-02-19T00:20:46.552Z · LW(p) · GW(p)

Even setting aside a post-FAI economy, why should this be the case? Your PS3 metaphor is not applicable. Owners of old PlayStations are not an unserved market in the same way that older frozen bodies are. If PS(N) games are significantly more expensive than PS(N+1) games, people will simply buy a PS(N+1). Not an option for frozen people; older bodies will be an underserved market in a way PS3 owners cannot be.

If there's a "mass market" for revivals, clearly people are getting paid for the revivals, somehow. I see no reason why new bodies would pay, while old bodies would not. If people are being revived for historical research or historical curiosity, then older revivals will probably be MORE valuable. If it's charitable, I don't particularly see why altruistic people will only care about recent bodies. Further, especially if effective immortality exists, you'll very quickly run out of recent bodies.

There might be an economic reason, in that more recent people have an easier time paying for their own revivals, because their revival is cheaper and/or their skills are more relevant. But if you're worried about that, you can probably significantly improve your odds by setting up a trust fund for your own revival.

comment by Jordan · 2010-02-18T23:44:53.569Z · LW(p) · GW(p)

I think this depends greatly on how many people are in storage that can't be revived with the current methods at hand. If it's just a few hundred, your conclusion is likely correct. If it's tens of thousands, I doubt research would stop completely.

comment by CronoDAS · 2010-02-18T23:39:19.802Z · LW(p) · GW(p)

Are you looking forward to all the new theater plays that are going to be written?

comment by Nick_Tarleton · 2010-02-18T19:53:10.253Z · LW(p) · GW(p)

The main problem I see here is that I don't see why the existence of better-preserved people would reduce the incentive to revive worse-preserved people.

comment by Nick_Tarleton · 2010-02-18T19:47:18.621Z · LW(p) · GW(p)

The main problem I see here is that I don't see why the development of better preservation methods (including GM) would reduce the incentives to revive more primitively preserved people. Do you expect the future to have a satiable demand (so to speak) for thawed cryonicists?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-18T18:36:26.063Z · LW(p) · GW(p)

You've been in the transhumanist community for, what, at least 10 years?

I honestly have no clue what could possibly be going wrong in your mind at this moment. I do not understand what general category of malfunction corresponds to this particular mistake.

Replies from: wnoise, Nick_Tarleton, PhilGoetz
comment by wnoise · 2010-02-18T19:36:13.023Z · LW(p) · GW(p)

You haven't explained how it's a mistake in any way. It's an extrapolation of motives that are far different than yours, but they do not seem inherently ridiculously implausible.

Replies from: byrnema
comment by byrnema · 2010-02-18T21:10:36.899Z · LW(p) · GW(p)

Certainly it is true that we're probably not vitrifying people in the best way. Once we know how to devitrify people, we'll probably know a lot more about how to vitrify them best.

Phil, my response to your argument is this:

In order for cryonics to work in principle, they will already have to know how to fix whatever you died of, whether it be cancer or a heart attack or being crushed in a car accident. How much more difficult do you estimate it would be to fix the problem of vitrification (fixing vitrification damage and preventing de-vitrification damage) than to fix the cause of your death?

Once many people are frozen using this method, there will be little incentive to work on the much-more-difficult problem of freezing and thawing an unmodified person.

So I think people frozen using today's techniques may never be revived.

You don't indicate a probability. Do you think it is more likely that the revival of present-day cryonics patients will merely be delayed relative to those frozen later, rather than never happening at all? (Given that revival is eventually possible for people frozen correctly.)

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-18T23:23:41.101Z · LW(p) · GW(p)

In order for cryonics to work in principle, they will already have to know how to fix whatever you died of, whether it be cancer or a heart attack or being crushed in a car accident. How much more difficult do you estimate it would be to fix the problem of vitrification (fixing vitrification damage and preventing de-vitrification damage) than to fix the cause of your death?

First, I estimate it will be orders of magnitude harder to fix that damage than to fix the cause of death. Fixing cancer or many other diseases would likely be a matter of intervening with drugs or genes. Fixing damage from freezing or vitrifying is usually held to require advanced nanotechnology.

Second, I think that there will be continued motivation for work to progress on all those things (cancer, heart attacks, etc.), and so the technology for fixing them will continue to improve. But the technology for fixing the damage is not likely to continue to improve, because (warm) people won't have a need for it.

Replies from: byrnema
comment by byrnema · 2010-02-19T00:17:19.476Z · LW(p) · GW(p)

Good. I agree with you that fixing vitrification damage could require advanced nanotechnology and that fixing causes of death might not.

However, I'd like to linger a moment longer on the latter. Suppose someone has died of cancer and been vitrified in a 'good' way that is easy to undo. Presumably, gene therapy and drugs would work to cure their cancer if they were well. However, they've died. What was the cause of death? To what extent is it likely that cells have been damaged? Can we anticipate what might be required to make them feel well again?

Also, you didn't comment on whether you thought a delay was more likely than no revival at all for persons inconveniently vitrified.

Replies from: gregconen
comment by gregconen · 2010-02-19T00:58:35.541Z · LW(p) · GW(p)

Let me build on this. You say (and I agree) that fixing the damage caused by vitrification is much harder than fixing most causes of death. Thus, by the time devitrification is possible, medicine will already be able to fix most causes of death directly, so very few new people will be vitrified (only people who want a one-way trip to the future).

This leads me to 2 conclusions: 1) Most revivals will be of people who were frozen prior to the invention of the revivification technology. Therefore, if anyone is revived, it is because people want to revive people from the past. 2) The supply of people frozen with a given technology (who are willing to be revived, as opposed to the "one-way trip" bodies) will pretty much only decrease.

Assuming people continue to want revive people from the past, they will quickly run out of the easy revivals. If they still want to revive more people, they will have strong incentives to develop new revivification technologies.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-27T04:38:08.872Z · LW(p) · GW(p)

That's a reasonable scenario. As time goes on, though, you run into a lot more what-ifs. At some point, the technology will be advanced enough that they can extract whatever information they want from your brain without reviving you.

I think it would be really interesting to talk to Hitler. But I wouldn't do this by reviving Hitler and setting him loose. I'd keep him contained, and turn him off afterwards. Is the difference between yourself and Hitler large compared to the difference between yourself and a future post-Singularity AI possessing advanced nanotechnology?

comment by Nick_Tarleton · 2010-02-18T20:08:43.252Z · LW(p) · GW(p)

I do not understand what general category of malfunction corresponds to this particular mistake.

Seems to me it's implicitly an example of the common category "assuming that charity (or FAI) will not be an important factor in revival".

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-18T23:27:20.721Z · LW(p) · GW(p)

There are many possible future worlds. Obviously I'm not speaking of possible futures in which a magical FAI can do anything you ask of it.

A sizable fraction of possible futures may contain AIs for which solving these problems is trivial, or societies with so much wealth that they can mount massive research projects for charity or fun. But the fraction of those possible futures in which reviving frozen humans is thought of as an admirable goal might not be large.

My point estimate is that, if you wanna get revived, you have to get revived before the singularity, because you're not going to have much value afterwards.

comment by PhilGoetz · 2010-02-18T19:39:34.098Z · LW(p) · GW(p)

You've been in the transhumanist community for, what, at least 10 years?

20, my young friend.

I honestly have no clue what could possibly be going wrong in your mind at this moment. I do not understand what general category of malfunction corresponds to this particular mistake.

And that should suggest to you that the mistake may be yours.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-18T19:58:36.715Z · LW(p) · GW(p)

It's certainly evidence in favour of that position, but not enough...