Open Thread, August 2010

post by NancyLebovitz · 2010-08-01T13:27:07.307Z · LW · GW · Legacy · 706 comments

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

706 comments

Comments sorted by top scores.

comment by [deleted] · 2010-08-02T12:44:21.230Z · LW(p) · GW(p)

The game of Moral High Ground (reproduced completely below):

At last it is time to reveal to an unwitting world the great game of Moral High Ground. Moral High Ground is a long-playing game for two players. The following original rules are for one M and one F, but feel free to modify them to suit your player setup:

  1. The object of Moral High Ground is to win.

  2. Players proceed towards victory by scoring MHGPs (Moral High Ground Points). MHGPs are scored by taking the conspicuously and/or passive-aggressively virtuous course of action in any situation where culpability is in dispute.

(For example, if player M arrives late for a date with player F and player F sweetly accepts player M's apology and says no more about it, player F receives the MHGPs. If player F gets angry and player M bears it humbly, player M receives the MHGPs.)

  3. Point values are not fixed, vary from situation to situation and are usually set by the person claiming them. So, in the above example, forgiving player F might collect +20 MHGPs, whereas penitent player M might collect only +10.

  4. Men's MHG scores reset every night at midnight; women's roll over every day for all time. Therefore, it is statistically highly improbable that a man can ever beat a woman at MHG, as the game ends only when the relationship does.

  5. Having a baby gives a woman +10,000 MHG points over the man involved and both parents +5,000 MHG points over anyone without children.

My ex-bf and I developed Moral High Ground during our relationship, and it has given us years of hilarity. Straight coupledom involves so much petty point-scoring anyway that we both found we were already experts.

By making a private joke out of incredibly destructive gender programming, MHG releases a great deal of relationship stress and encourages good behavior in otherwise trying situations, as when he once cycled all the way home and back to retrieve some forgotten concert tickets "because I couldn't let you have the Moral High Ground points". We are still the best of friends.

Play and enjoy!

From Metafilter

Replies from: NancyLebovitz, Yoreth
comment by NancyLebovitz · 2010-08-02T15:19:17.416Z · LW(p) · GW(p)

The whole thread is about relationship hacks-- it's fascinating.

Replies from: sketerpot
comment by sketerpot · 2010-08-02T18:59:35.071Z · LW(p) · GW(p)

One of the first comments is something I've been saying for a while, about how to admit that you were wrong about something, instead of clinging to a broken opinion out of stubborn pride:

Try to make it a personal policy to prove yourself WRONG on occasion. And get excited about it. Realizing you've been wrong about something is a sure sign of growth, and growth is exciting.

The key is to actually enjoy becoming less wrong, and to take pride in admitting mistakes. That way it doesn't take willpower, which makes everything so much easier.

comment by Yoreth · 2010-08-04T04:58:16.749Z · LW(p) · GW(p)

But apparently it still wasn't enough to keep them together...

Replies from: Blueberry, wedrifid
comment by Blueberry · 2010-08-04T05:58:42.064Z · LW(p) · GW(p)

Not all relationships need to last forever, and it's not necessarily a failure if one doesn't.

comment by wedrifid · 2010-08-04T06:17:44.695Z · LW(p) · GW(p)

But apparently it still wasn't enough to keep them together...

Yoreth may subtract 50 MHG points from hegemonicon but also loses 15 himself.

comment by XFrequentist · 2010-08-01T19:46:57.050Z · LW(p) · GW(p)

I'm intrigued by the idea of trying to start something like a PUA community that is explicitly NOT focussed on securing romantic partners, but rather on the deliberate practice of general social skills.

It seems like there's a fair bit of real knowledge in the PUA world, that some of it is quite a good example of applied rationality, and that much of it could be extremely useful for purposes unrelated to mating.

I'm wondering:

  • if this is an interesting idea to LWers?
  • if this is the right venue to talk about it?
  • does something similar already exist?

I'm aware that there was some previous conversation around similar topics and their appropriateness to LW, but if there was final consensus I missed it. Please let me know if these matters have been deemed inappropriate.

Replies from: Violet, cousin_it, ianshakil, katydee, ianshakil, marc
comment by Violet · 2010-08-03T06:34:15.955Z · LW(p) · GW(p)

If you want non-PC approaches, there are two communities you could look into: salespeople and con artists. The second one actually has most of the how-to-hack-people's-minds material. If you want a kinder version, look for it under the title "social engineering".

comment by cousin_it · 2010-08-01T20:02:44.790Z · LW(p) · GW(p)

Toastmasters?

General social skills are needed in business; a lot of places teach them, and they seem to be quite successful.

Replies from: SilasBarta, XFrequentist
comment by SilasBarta · 2010-08-01T20:08:59.126Z · LW(p) · GW(p)

From my limited experience with Toastmasters, it's very PC and targeted at people of median intelligence -- not the thing people here would be looking for. "PUA"-like implies XFrequentist is considering something that is willing to teach the harsh, condemned truths.

Replies from: XFrequentist
comment by XFrequentist · 2010-08-01T20:30:25.017Z · LW(p) · GW(p)

I went to a Toastmasters session, and was... underwhelmed. Even for public speaking skills, the program seemed kind of trite. It was more geared toward learning the formalities of meetings. You'd probably be a better committee chair after following their program, but I'm not sure you could give a great TED talk or wow potential investors.

Carnegie's program seems closer to what I had in mind, but I want to replicate both the community aspect and the focus on "field" practice of the PUAs, which I suspect is a big part of what makes them so formidable.

Replies from: D_Alex, NancyLebovitz
comment by D_Alex · 2010-08-02T01:33:54.066Z · LW(p) · GW(p)

The clubs vary in their standards. I recommend you try a few in your area (big cities should have a bunch). For 2 years I used to commute 1 hour each way to attend Victoria Quay Toastmasters in Fremantle; it was that good. It was the 3rd club I tried after moving.

comment by NancyLebovitz · 2010-08-01T20:49:11.259Z · LW(p) · GW(p)

I've heard smart people speak well of Toastmasters. It may be a matter of local variation, or it may be that Toastmasters is very useful for getting past fear of public speaking and acquiring adequate skills.

Replies from: XFrequentist, JanetK, pjeby
comment by XFrequentist · 2010-08-01T21:11:17.669Z · LW(p) · GW(p)

My impression could easily be off; I only went to one open house.

It wasn't all negative. They seemed to have a logical progression of speech complexity, and quite a standardized process for giving feedback. Some of the speakers were excellent. It was fully bilingual (English/French), which was nice.

I don't think it's what I'm looking for, but it's probably okay for some other goals.

comment by JanetK · 2010-08-02T07:28:20.368Z · LW(p) · GW(p)

I belonged to TM for many years, and I still would if there were a club near me. I found it great for many reasons. But I have to say that you get out what you put in. And you get what you want to get. If you want friends and social graces - OK, get them. If you want to lose your fear of speaking - get that. Ignore what you don't want and take what you do.

comment by pjeby · 2010-08-02T03:43:22.451Z · LW(p) · GW(p)

I've heard smart people speak well of Toastmasters.

I've mostly heard them damn it with faint praise, as being great for polishing presentation skills but not particularly useful for anything else.

Interestingly enough, of the people I know who are actually professional speakers (in the sense of being paid to talk, either at their own events or other people's), exactly none recommend it. (Even amongst ones who do not sell any sort of speaker training of their own.)

OTOH, I have heard a couple of shout-outs for the Carnegie speaking course, but again, this is all just in the context of speaking... which has little relationship to general social skills AFAICT.

Replies from: XFrequentist
comment by XFrequentist · 2010-08-02T14:00:39.612Z · LW(p) · GW(p)

Interesting, that jibes* pretty well with my impressions of Toastmasters.

There are other Carnegie courses than the speaking one. This is the one I was thinking of.

*See comment below for the distinction between "jives" and "jibes". It ain't cool beein' no jive turkey!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-02T14:41:26.232Z · LW(p) · GW(p)

Nitpick: "jibes" means "is consistent with".

"Jives" means "is talking nonsense" or (archaic) "dances".

{Tries looking it up} Wikipedia says "jives" can be a term for African American Vernacular English. The Urban Dictionary gives it a bunch of definitions, including both of mine, "jibe", and forms of African American speech that include a lot of slang -- but not any sort of African American speech in general.

On the other hand, the language may have moved on-- I keep seeing that mistake (the Urban Dictionary implies it isn't a mistake), and maybe I should give up.

I still retain a fondness for people who get it right.

Replies from: XFrequentist
comment by XFrequentist · 2010-08-02T16:02:18.227Z · LW(p) · GW(p)

haha... thanks!

comment by XFrequentist · 2010-08-01T20:32:13.198Z · LW(p) · GW(p)

a lot of places teach them

I'd be interested in specifics...

comment by ianshakil · 2010-08-02T05:33:44.867Z · LW(p) · GW(p)

Would such "practice" require a physical venue? -- or would an online setting -- maybe even Skype -- be sufficient?

Replies from: XFrequentist, XFrequentist
comment by XFrequentist · 2010-08-02T13:49:14.920Z · LW(p) · GW(p)

That's a good question. I don't know, but I suspect a purely online setting would be adequate for beginners and insufficient for mastery.

What do you think?

Replies from: marc, ianshakil
comment by marc · 2010-08-02T15:20:34.749Z · LW(p) · GW(p)

I don't think you'd have much success mastering non-verbal communication through Skype.

comment by ianshakil · 2010-08-02T15:20:15.355Z · LW(p) · GW(p)

Generally, I agree. There's a time and a place for both online and offline venues.

Ideally, you'd want a very large number of participants so that, during sessions, most of your peers are new and the situation is somewhat anonymous/random. If your sessions are with the same old people, those people will become well known -- perhaps friends -- and the social simulation won't be very meaningful. Who knows... maybe there's a way to piggyback on the Chatroulette concept?!

comment by XFrequentist · 2010-08-02T13:25:40.729Z · LW(p) · GW(p)

I don't know.

comment by katydee · 2010-08-01T20:32:51.873Z · LW(p) · GW(p)

Extremely, yes, not to my knowledge.

comment by ianshakil · 2010-08-31T16:36:18.910Z · LW(p) · GW(p)

A lot of companies conduct anonymous "360 review" processes which veer into this territory to some degree.

Also, several business schools conduct leadership labs. In fact, a large chunk of the business school experience is really about social grooming / learning how to network / and so forth.

So do we have any traction for this idea? How about a meetup?

Replies from: XFrequentist
comment by XFrequentist · 2010-08-31T17:11:33.000Z · LW(p) · GW(p)

Thanks, those are useful leads. I've done the 360 review thing but hadn't connected it to this idea.

It seems to have gotten a good amount of interest. I've got a draft post going that still needs some polish, but I should hopefully be able to get it finished this weekend. If all goes to plan some sort of meetup should follow.

Any suggestions on logistics? I'm not at all sure what the best way to organize this is; I'd appreciate any thoughts.

comment by marc · 2010-08-02T15:22:29.658Z · LW(p) · GW(p)

I think you're probably correct in your presumptions. I find it an interesting idea and would certainly follow any further discussion.

comment by XiXiDu · 2010-08-08T19:38:22.588Z · LW(p) · GW(p)

LW database download?

I was wondering if it would be a good idea to offer a download of LW, or at least the sequences and Wiki, in the manner that Wikipedia provides one.

The idea behind it is to have a redundant backup in case of some catastrophe, for example if the same thing happens to EY that happened to John C. Wright. It could also provide the option to read LW offline.

Replies from: ciphergoth, Eliezer_Yudkowsky, Unknowns, nawitus, Soki, xamdam
comment by Paul Crowley (ciphergoth) · 2010-08-08T21:11:41.091Z · LW(p) · GW(p)

That's incredibly sad.

Every so often, people derisively say to me "Oh, and you assume you'd never convert to religion then?" I always reply "I absolutely do not assume that, it might happen to me; no-one is immune to mental illness."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-08T21:22:40.103Z · LW(p) · GW(p)

Tricycle has the data. Also if an event of JCW magnitude happened to me I'm pretty sure I could beat it. I know at least one rationalist with intense religious experiences who successfully managed to ask questions like "So how come the divine spirit can't tell me the twentieth digit of pi?" and discount them.

Replies from: Unknowns, None
comment by Unknowns · 2010-08-09T06:56:07.629Z · LW(p) · GW(p)

Actually, you have to be sure that you wouldn't convert if you had John Wright's experiences; otherwise Aumann's agreement theorem should cause you to convert already, simply because John Wright had the experiences himself -- assuming you wouldn't say he's lying. I actually know someone who converted to religion on account of a supposed miracle, who said afterward that since they in fact knew before converting that other people had seen such things happen, they should have converted in the first place.

Although I have to admit I don't see why the divine spirit would want to tell you the 20th digit of pi anyway, so hopefully there would be a better argument than that.

Replies from: arundelo
comment by [deleted] · 2010-08-16T17:24:04.311Z · LW(p) · GW(p)

What if you sustained a hypoxic brain injury, as JCW may well have done during his cardiac event? (This might also explain why he thinks it's cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it's a pet suspicion of mine.)

Replies from: wedrifid, Eliezer_Yudkowsky, CronoDAS, XiXiDu
comment by wedrifid · 2010-08-16T19:13:56.413Z · LW(p) · GW(p)

This might also explain why he thinks it's cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it's a pet suspicion of mine.

It would seem he is just writing for Mature Audiences. In this case maturity means not just 'the age at which we let people read pornographic text' but the kind of maturity that allows people to look beyond their own cultural prejudices.

16 is old. Not old enough according to our culture, but there is no reason we should expect a fictional time-distant culture to have our particular moral or legal prescriptions. It wouldn't be all that surprising if someone from an actual future time were, when reading the work, to scoff at how prudish a culture would have to be to consider sexualised portrayals of women that age taboo!

Mind you I do see how a hypoxic brain injury could alter someone's moral inhibitions and sensibilities in the kind of way you suggest. I just don't include loaded language in the speculation.

Replies from: CronoDAS
comment by CronoDAS · 2010-08-16T23:09:27.711Z · LW(p) · GW(p)

16 is old. Not old enough according to our culture, but there is no reason we should expect a fictional time-distant culture to have our particular moral or legal prescriptions. It wouldn't be all that surprising if someone from an actual future time were, when reading the work, to scoff at how prudish a culture would have to be to consider sexualised portrayals of women that age taboo!

Interestingly, if the book in question is the one I think it is, it takes place in Britain, where the age of consent is, in fact, sixteen.

Replies from: wedrifid
comment by wedrifid · 2010-08-17T04:33:06.666Z · LW(p) · GW(p)

Come to think of it, 16 is the age of consent here (Australia - most states) too. I should have used 'your' instead of 'our' in the paragraph you quote! It seems I was just running with the assumption.

Replies from: CronoDAS
comment by CronoDAS · 2010-08-17T16:48:44.172Z · LW(p) · GW(p)

Although "18 years old" does seem to be a hard-and-fast rule for when you can legally appear in porn everywhere, as far as I know...

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-18T14:18:05.007Z · LW(p) · GW(p)

(This might also explain why he thinks it's cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it's a pet suspicion of mine.)

Point of curiosity: Does anyone else still notice this sort of thing? I don't think my generation does anymore.

Replies from: Richard_Kennaway, None
comment by Richard_Kennaway · 2010-08-18T14:40:13.308Z · LW(p) · GW(p)

I've only read his Golden Age trilogy, so if it's there, then no, to this 50-something it didn't stand out from everything else that happened. If it's in something else, I doubt it would. I mean, I've read Richard Morgan's ultra-violent stuff, including the gay mediæval-style fantasy one, and, well, no.

[ETA: from Google the book in question appears to be Orphans of Chaos.]

I could be an outlier though.

comment by [deleted] · 2010-08-18T21:46:52.610Z · LW(p) · GW(p)

Well, I'm female. Could be women tend to be more sensitive to that kind of thing.

That said, I wasn't really planning to start a discussion about sexually explicit portrayals of sub-18 teenagers and whether they're ok, and I doubt I'll participate further in one. Unfortunately I don't own the book, so if anyone is curious about the details of what I was referring to, they'll have to read Orphans of Chaos (not that I recommend it on its merits). I wouldn't hazard a guess as to how much a person can be oblivious to (probably a lot), but I'd be surprised if most people's conscious, examined reaction to the sexual content (which is abundant and spread throughout the book, though not hardcore) was closer to "That is normal/A naturalistic portrayal of a 16-year-old girl's sexual feelings/Literary envelope-pushing" than to "That is weird/creepy."

comment by CronoDAS · 2010-08-16T17:31:59.945Z · LW(p) · GW(p)

Eh, you see people trying to "push boundaries" in "respectable" literature all the time anyway.

Replies from: None
comment by [deleted] · 2010-08-16T17:42:02.349Z · LW(p) · GW(p)

Certainly there are other explanations. If you can show me that JCW openly wrote highly sexualized portrayals of people below the age of consent before his religious experience/heart attack, I will be happy to retract.

comment by XiXiDu · 2010-08-18T14:48:12.809Z · LW(p) · GW(p)

Iron Sunrise by Charles Stross and Cowl by Neal Asher feature sex scenes with 16-year-old girls. I don't remember in what detail, though.

BDSM scenes featuring a 16-year-old schoolgirl

That sounds suspicious indeed, and I would oppose it in most circumstances. That is, if it isn't just a 16-year-old body or a simulation of a body (yeah, no difference?) and if it isn't just a description of how bad someone is... within SF you can naturally create exceptional circumstances.

Have you read books by Richard Morgan? The torture scenes in the Takeshi Kovacs novels are some of the most detailed: virtual reality allows them to load you into the body of a pregnant woman, being raped and having a soldering iron slid up your vagina. And if you die after hours of torture, they just restart the simulation. That's just one of the scenes from the first book.

comment by Unknowns · 2010-08-09T06:48:39.593Z · LW(p) · GW(p)

However, if EY converted to religion, he would (in that condition) assert that he had had good reasons for doing it, i.e. that it was rational. So he would have no reason to take down this website anyway.

comment by nawitus · 2010-08-08T21:15:43.900Z · LW(p) · GW(p)

You can use the wget program like this: 'wget -m lesswrong.com'. A database download would be easier on the servers though.
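If even a mirror feels too heavy on the servers, a gentler alternative is to grab only the pages you want, with a pause between requests. A rough Python sketch (my own illustration, not an official tool; the URLs and delay are just placeholders):

    # Fetch a handful of pages for offline reading, pausing between requests
    # so as not to load the server. The URLs listed here are only placeholders.
    import time
    import urllib.request

    urls = [
        "http://lesswrong.com/",
        "http://lesswrong.com/about/",
    ]

    for i, url in enumerate(urls):
        html = urllib.request.urlopen(url).read()
        with open("page_%03d.html" % i, "wb") as f:
            f.write(html)
        time.sleep(2)  # be polite: wait between requests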

comment by Soki · 2010-08-11T18:59:33.963Z · LW(p) · GW(p)

I support this idea.

But what about copyright issues? What if posts and comments are owned by their writers?

Replies from: listic
comment by listic · 2010-08-18T15:06:01.703Z · LW(p) · GW(p)

I would argue that one cannot own the information stored on the computers of other, unrelated people.

I support this idea also. I actually intend to make a service for uploading the content of a forum/blog to an alternate server for backup, but who knows when it will happen.

comment by xamdam · 2010-09-06T20:33:05.700Z · LW(p) · GW(p)

WebOffline can grab the whole thing to an iPhone or iPad, formatting preserved. There are similar programs for PC/Mac.

comment by Johnicholas · 2010-08-01T19:52:17.068Z · LW(p) · GW(p)

Cryonics Lottery.

Would it be easier to sign up for cryonics if there were a lottery system? A winner of the lottery could say "Well, I'm not a die-hard cryo-head, but I thought it was interesting, so I bought a ticket (which was only $X) and I happened to win, and it's pretty valuable, so I might as well use it."

It's a sort of "plausible deniability" that might reduce the social barriers to cryo. The lottery structure might also be able to reduce the conscientiousness barriers -- once you've won, the lottery administrators (possibly volunteers, possibly funded by a fraction of the lottery) walk you through a "greased path".

Replies from: NihilCredo, gwern
comment by NihilCredo · 2010-08-02T19:14:24.807Z · LW(p) · GW(p)

On a completely serious, if not totally related, note: it would be a lot easier to convince people to sign up for cryonics if the Cryonics Institute's and/or KrioRus's websites looked more professional.

Replies from: Alicorn, katydee
comment by Alicorn · 2010-08-02T20:43:19.801Z · LW(p) · GW(p)

I'm not sure if it would help get uninterested people interested; but I think it would help get interested people signed up if there were a really clear set of individually actionable instructions - perhaps a flowchart, so the steps can depend on individual circumstances - that were all found in one place.

comment by katydee · 2010-08-02T21:01:18.979Z · LW(p) · GW(p)

And Rudi Hoffman's page.

comment by gwern · 2010-08-02T04:31:39.545Z · LW(p) · GW(p)

I doubt it. Signing up for a lottery for cryonics is still suspicious. There is only one payoff, and it is the suspicious thing itself. No one objects to the end of lotteries, because we all like money; what is objected to is the lottery as an efficient means of obtaining money (or entertainment).

Suppose that the object were something you and I regard with equal revulsion as many regard cryonics. Child molestation, perhaps. Would you really regard someone buying a ticket as not being quite evil and condoning and supporting the eventual rape?

Replies from: AlexM, Johnicholas
comment by AlexM · 2010-08-02T10:23:00.010Z · LW(p) · GW(p)

Who regards cryonics as evil like child molestation? The general public sees cryonics as fraud - something like buying real estate on the moon or waiting for the mothership - and someone paying for it as a gullible fool.

For example, look at the discussions when Britney Spears wanted to be frozen: http://www.freerepublic.com/focus/f-chat/2520762/posts

Lots of derision, no hatred.

Replies from: NihilCredo, gwern
comment by NihilCredo · 2010-08-02T19:00:41.002Z · LW(p) · GW(p)

Bad example. People want to make fun of celebrities (especially a community as caustic and "anti-elitist" as the Freepers). She could have announced that she was enrolling in college, or something else similarly common-sensible, and you would still have got a threadful of nothing but cheap jokes.

A discussion about "My neighbour / brother-in-law / old friend from high school told me he has decided to get frozen" would be more enlightening.

comment by gwern · 2010-08-02T11:16:36.238Z · LW(p) · GW(p)

Does the fact that my specific example may not be perfect refute my point that mere indirection & chance does not eliminate all criticism and this can be understood by merely introspecting one's intuitions?

comment by Johnicholas · 2010-08-02T11:01:48.184Z · LW(p) · GW(p)

Rather than using an undiluted negative as an example, suppose that there was something more arguable, that might have some positive aspects - sex segregation of schools, for example.

Assuming that my overall judgement of sex segregation is negative, if someone pursued sex segregation fiercely and dedicatedly, then my overall negative valuation of their goal would color my judgement of them. If they can plausibly claim to have supported it momentarily on a whim, while thinking about the positive aspects, then there is some insulation between my judgement of the goal and my judgement of the person.

comment by NancyLebovitz · 2010-08-01T14:13:33.462Z · LW(p) · GW(p)

Letting Go by Atul Gawande is a description of typical end of life care in the US, and how it can and should be done better.

Typical care defaults to taking drastic measures to extend life, even if the odds of success are low and the process is painful.

Hospice care, which focuses on quality of life, not only results in more comfort, but also either no loss of lifespan or a somewhat longer life, depending on the disease. And it's a lot cheaper.

The article also describes the long careful process needed to find out what people really want for the end of their life-- in particular, what the bottom line is for them to want to go on living.

This is of interest for Less Wrong, not just because Gawande is a solidly rationalist writer, but because a lot of the utilitarian talk here goes in the direction of restraining empathic impulses.

Here we have a case where empathy leads to big utilitarian wins, and where treating people as having a unified consciousness works out well, if you give it a chance to operate.

As good as hospices sound, I'm concerned that if they get a better reputation, less competent organizations calling themselves hospices will spring up.

From a utilitarian angle, I wonder if those drastic methods of treatment sometimes lead to effective treatments, and if so, whether the information could be gotten more humanely.

Replies from: Rain, daedalus2u
comment by Rain · 2010-08-01T14:28:57.248Z · LW(p) · GW(p)

End-of-life regulation is one reason cryonics is suffering, as well: without the ability to ensure preservation while the brain is still relatively healthy, the chances diminish significantly. I think it'd be interesting to see cryonics organizations put field offices in countries or states where assisted suicide is legal. Here's a Frontline special on suicide tourists.

comment by daedalus2u · 2010-08-01T15:49:36.544Z · LW(p) · GW(p)

The framing of the end-of-life issue as a gain or a loss, as in the monkey token exchange, probably makes a gigantic difference in the choices made.

http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2cnn?c=1

When you feel you are in a desperate situation, you will do desperate things and clutch at straws, even when you know those choices are irrational. I think this is the mindset behind the clutching at straws that quacks exploit with CAM, as in the Gonzalez Protocol for pancreatic cancer.

http://www.sciencebasedmedicine.org/?p=1545

It is actually worse than doing nothing, worse than doing what mainstream medicine recommends; but because there is the promise of complete recovery (even if it is a false promise), that is what people choose, based on their irrational aversion to risk.

comment by Matt_Simpson · 2010-08-03T23:11:35.842Z · LW(p) · GW(p)

In his bio over at Overcoming Bias, Robin Hanson writes:

I am addicted to “viewquakes”, insights which dramatically change my world view.

So am I. I suspect you are too, dear reader. I asked Robin how many viewquakes he had and what caused them, but haven't gotten a response yet. But I must know! I need more viewquakes. So I propose we share our own viewquakes with each other so that we all know where to look for more.

I'll start. I've had four major viewquakes, in roughly chronological order:

  • (micro)Economics - Starting with a simple approximation of how humans behave yields a startlingly effective theory in a wide range of contexts.
  • Bayesianism - I learned how to think
  • Yudkowskyan/Humean Metaethics - Making the move from Objective theories of morality to Subjectively Objective theories of morality cleared up a large degree of confusion in my map.
  • Evolution - This is a two part quake: evolutionary biology and evolutionary psychology. The latter is extremely helpful for explaining some of the behavior that economic theory misses and for understanding the inputs into economic theory (i.e., preferences).
Replies from: ABranco
comment by ABranco · 2010-08-05T04:02:56.679Z · LW(p) · GW(p)

I've had some dozens of viewquakes, most of them minor, although it's hard to evaluate them in hindsight now that I take them for granted.

Some are somewhat commonplace here: Bayesianism, map–territory relations, evolution etc.

One that I always feel should make people shout Eureka — and when they are not impressed I assume it is old news to them (though it often isn't, as I don't see it reflected in their actions) — is the Curse of Knowledge: it's hard to be a tapper. I feel that being aware of it dramatically improved my perceptions in conversation. I also feel that if more people were aware of it, misunderstandings would be far less common.

Maybe worth a post someday.

Replies from: byrnema, fiddlemath, RobinZ
comment by byrnema · 2010-08-05T04:30:14.774Z · LW(p) · GW(p)

I can see how the Curse of Knowledge could be a powerful idea. I will dwell on it for a while -- especially the example given about JFK, as an example of the type of application that would be useful in my own life. (To remember to describe things using broad strokes that are universally clear, rather than technical and accurate, in contexts where persuasion and fueling interest are most important.)

For me, one of the main viewquakes of my life was a line I read from a little book of Kahlil Gibran poems:

Your pain is the breaking of the shell that encloses your understanding.

It seemed to be a hammer that could be applied to everything. Whenever I was unhappy about something, I thought about the problem a while until I identified a misconception. I fixed the misconception ("I'm not the smartest person in graduate school"; "I'm not as kind as I thought I was"; "That person won't be there for me when I need them") by assimilating the truth the pain pointed me towards, and the pain would dissipate. (Why should I expect graduate school to be easy? I'll just work harder. Kindness is what you actually do, not how you expect you'll feel. That person is fun to hang out with, but I'll need to find some closer friends.) After each disappointment, I felt stronger and the problem just bounced off me, without my being in denial about anything.

The "technique" failed me when a good friend of mine died. There was a lot of pain, and I tried to identify the truth that was cutting though, but I couldn't find one. Where did my friend go? There is a part of my brain, I realized, that simply cannot except on an emotional level that people are material. I believe that they are (I don't believe in a soul or an afterlife) but I simply couldn't connect the essence of my friend with 'gone'. If there was a truth there, it couldn't find a place in my mind.

This seems like a tangent... but just to demonstrate it's not all-powerful.

Replies from: ABranco
comment by ABranco · 2010-08-05T10:04:42.111Z · LW(p) · GW(p)

Remarkable quote, thank you.

Reminded me of the Anorexic Hermit Crab Syndrome:

The key to pursuing excellence is to embrace an organic, long-term learning process, and not to live in a shell of static, safe mediocrity. Usually, growth comes at the expense of previous comfort or safety. The hermit crab is a colorful example of a creature that lives by this aspect of the growth process (albeit without our psychological baggage). As the crab gets bigger, it needs to find a more spacious shell. So the slow, lumbering creature goes on a quest for a new home. If an appropriate new shell is not found quickly, a terribly delicate moment of truth arises. A soft creature that is used to the protection of built-in armor must now go out into the world, exposed to predators in all its mushy vulnerability. That learning phase in between shells is where our growth can spring from. Someone stuck with an entity theory of intelligence is like an anorexic hermit crab, starving itself so it doesn't grow to have to find a new shell. —Josh Waitzkin, The Art of Learning

comment by fiddlemath · 2010-08-05T06:00:49.955Z · LW(p) · GW(p)

Sounds like the illusion of transparency. We've got that post around. ;)

On the other hand, the tapper/listener game is a very evocative instance.

comment by Matt_Simpson · 2010-08-02T17:02:37.313Z · LW(p) · GW(p)

Was Kant implicitly using UDT?

Consider Kant's categorical imperative. It says, roughly, that you should act such that you could will your action as a universal law without undermining the intent of the action. For example, suppose you want to obtain a loan for a new car and never pay it back - you want to break a promise. In a world where everyone broke promises, the social practice of promise keeping wouldn't exist and thus neither would the practice of giving out loans. So you would undermine your own ends and thus, according to the categorical imperative, you shouldn't get a loan without the intent to pay it back.

Another way to put Kant's position would be that you should choose such that you are choosing for all other rational agents. What does UDT tell you to do? It says (among other things) that you should choose such that you are choosing for every agent running the same decision algorithm as yourself. It wouldn't be a stretch to call UDT agents rational. So Kant thinks we should be using UDT! Of course, Kant can't draw the conclusions he wants to draw because no human is actually using UDT. But that doesn't change the decision algorithm Kant is endorsing.

Except... Kant isn't a consequentialist. If the categorical imperative demands something, it demands it no matter the circumstances. Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it.

So Kant isn't really endorsing UDT, but I thought the surface similarity was pretty funny.
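To make the contrast concrete, here's a toy sketch (made-up payoffs, using the loan example above) of the difference between choosing while holding everyone else fixed and choosing as if you're choosing for every agent running the same algorithm:

    # Toy illustration: "break" a promise pays more locally, but if every agent
    # running this algorithm breaks promises, lending collapses. Payoffs are made up.
    def payoff(my_action, everyone_action):
        if everyone_action == "break":
            return 0  # the practice of giving out loans no longer exists
        return 5 if my_action == "break" else 3

    ACTIONS = ["keep", "break"]

    def causal_choice():
        # Holds everyone else fixed at "keep" and picks the locally best action.
        return max(ACTIONS, key=lambda a: payoff(a, "keep"))

    def universalized_choice():
        # Chooses as if choosing for every agent running this same algorithm.
        return max(ACTIONS, key=lambda a: payoff(a, a))

    print(causal_choice())         # "break"
    print(universalized_choice())  # "keep"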

Replies from: Emile, SilasBarta, Yvain
comment by Emile · 2010-08-03T08:04:27.318Z · LW(p) · GW(p)

Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it.

I remember Eliezer saying something similar, though I can't find it right now (the closest I could find was this). It was something about the benefits of being the kind of person who doesn't lie, even if the fate of the world is at stake. Because if you aren't, the minute the fate of the world is at stake is the minute your word becomes worthless.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-08-03T09:13:02.880Z · LW(p) · GW(p)

I recall it too. I think the key distinction is that if the choice was literally between lying and everyone in the world - including yourself - perishing, Kant would let us all die. Eliezer would not. What I took Eliezer to be saying (working from memory, I may try to find the post later) is that if you think the choice is between lying and the sun exploding (or something analogous) in any real life situation... you're wrong. It's far more likely that you're rationalizing the way you're compromising your values than that it's actually necessary to compromise your values, given what we know about humans. So a consequentialist system implies basically deontological rules once human nature is taken into account.

Once again, this is all from my memory, so I could be wrong.

Replies from: Unknowns
comment by Unknowns · 2010-08-03T10:02:26.400Z · LW(p) · GW(p)

Although Eliezer didn't put it precisely in these terms, he was sort of suggesting that if one could self-modify in such a way that it became impossible to break a certain sort of absolutely binding promise, it would be good to modify oneself in that way, even though it would mean that if the situation actually came up where you had to break the promise or let the world perish, you would have to let the world perish.

Replies from: None
comment by [deleted] · 2010-08-04T01:52:51.891Z · LW(p) · GW(p)

I think the article you (and the parent comment) are talking about is this one.

comment by SilasBarta · 2010-08-02T17:26:23.954Z · LW(p) · GW(p)

Drescher has some important things to say about this distinction in Good and Real. What I got out of it is that the CI is justifiable on consequentialist or self-serving grounds, so long as you relax the constraint that you can only consider the causal consequences (or "means-end links") of your decisions, i.e., things that happen "futureward" of your decision.

Drescher argues that specifically ethical behavior is distinguished by its recognition of these "acausal means-end links", in which you act for the sake of what would be the case if-counterfactually you would make that decision, even though you may already know the result. (Though I may be butchering it -- it's tough to get my head around the arguments.)

And I saw a parallel between Drescher's reasoning and UDT, as the former argues that your decisions set the output of all similar processes to the extent that they are similar.

comment by Scott Alexander (Yvain) · 2010-08-03T07:39:32.678Z · LW(p) · GW(p)

I thought Kant sounded a lot more like TDT than UDT. Or was that what you meant?

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-08-03T09:06:48.902Z · LW(p) · GW(p)

I'm not familiar enough with Pearl's formalism to really understand TDT - or at least that's why I haven't really dived into TDT yet. I'd love to hear why you think Kant sounds more like TDT, though. I suspect it has something to do with considering counterfactuals.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-08-03T09:28:48.717Z · LW(p) · GW(p)

I'm not familiar at all with Pearl's formalism. But from what I see on this site, I gather that the key insight of updateless decision theory is to maximize utility without conditioning on information about what world you're in, and the key insight of timeless decision theory is what you're describing (Eliezer summarizes it as "Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.")

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-08-03T17:26:06.366Z · LW(p) · GW(p)

I gather that the key insight of updateless decision theory is to maximize utility without conditioning on information about what world you're in, and the key insight of timeless decision theory is what you're describing (Eliezer summarizes it as "Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.")

I think Eliezer's summary is also a fair description of UDT. The difference between UDT and TDT appears to be subtle, and I don't completely understand it. From what I can tell, UDT just does choose in the way Eliezer describes, completely ignoring any updating process. TDT chooses this way as a result of how it reasons about counterfactuals. Somehow, TDT's counterfactual reasoning causes it to choose slightly differently from UDT, but I'm not sure why at this point.

comment by Sniffnoy · 2010-08-05T23:13:39.201Z · LW(p) · GW(p)

I found TobyBartels's recent explanation of why he doesn't want to sign up for cryonics a useful lesson in how different people's goals in living a long time (or not) can be from mine. Now I am wondering if maybe it would be a good idea to state some of the reasons people would want to wake up 100 years later if hit by a bus. Can't say I've been around here very long but it seems to me it's been assumed as some sort of "common sense" - is that accurate? I was wondering if other people's reasons for signing up / intending to sign up (I am not currently signed up and probably will not get around to such for several years) also differed interestingly from mine. Or is this too off topic?

As for me, I would think the obvious reason is what Hilbert said: "If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?" Finding yourself in the future means you now have the answers to a lot of previously open problems! As well as getting to learn the history of what happened after you were frozen. I have for a long time found not getting to learn the future history of the world to be the most troubling aspect of dying.

(Posting this here as it seems a bit off-topic under The Threat of Cryonics.)

Replies from: steven0461, NancyLebovitz, soreff
comment by steven0461 · 2010-08-05T23:48:46.933Z · LW(p) · GW(p)

It sure seems like a lot of people could feed their will to live by reading just the first half of an exciting fiction book.

Replies from: None
comment by [deleted] · 2010-08-06T00:01:42.252Z · LW(p) · GW(p)

We would need to drastically strengthen norms against spoilers.

comment by NancyLebovitz · 2010-08-05T23:28:45.892Z · LW(p) · GW(p)

One thought is that it's tempting to think of yourself as being the only one (presumably with help from natives) trying to deal with the changed world.

Actually I think it's more likely that there will be many people from your era, and there will be immigrants' clubs, with people who've been in the future for a while helping the greenhorns. I find this makes the future seem more comfortable.

The two major reasons I can think of for wanting to be in the future are that I rather like being me, and that the future should be interesting.

comment by soreff · 2010-08-08T03:17:45.408Z · LW(p) · GW(p)

The single largest motivation for me is just that a future which is powerful enough, and rich enough, and benevolent enough to revive cryonicists is likely to be a very pleasant place to be in. If nothing else, lots of their everyday devices are likely to look like marvelous toys from my point of view. Combine that with the likelihood that, if they can repair me at all, they would use a youthful body (physical or simulated) as a model, and it's quite an attractive prospect.

comment by Eneasz · 2010-08-02T17:21:48.596Z · LW(p) · GW(p)

George Thompson, an ex-English professor and ex-cop, now teaches a method he calls "Verbal Judo". Very reminiscent of Eliezer's Bayesian Dojo, this is a primer on rationalist communication techniques, focusing on defensive & redirection tactics. http://fora.tv/2009/04/10/Verbal_Judo_Diffusing_Conflict_Through_Conversation

Replies from: sketerpot, JenniferRM
comment by sketerpot · 2010-08-07T00:10:12.798Z · LW(p) · GW(p)

I wrote up some notes on this, because there's no transcript and it's good information. Let's see if I can get the comment syntax to cooperate here.

How to win in conversations, in general.

Never get angry. Stay calm, and use communication tactically to achieve your goals. Don't communicate naturally; communicate tactically. If you get upset, you are weakened.

How to deflect.

To get past an unproductive and possibly angry conversation, you need to deflect the unproductive bluster and get down to the heart of things: goals, and how to achieve them. Use a sentence of the form:

"[Acknowledge what the other guy said], but/however/and [insert polite, goal-centered language here]."

You spring past what the other person said, and then recast the conversation in your own terms. Did he say something angry, meant to upset you? Let it run off you like water, and move on to what you want the conversation to be about. This disempowers him and puts you in charge.

How to motivate people.

There's a secret to motivating people, whether they're students, co-workers, whatever. To motivate someone, raise his expectations of himself. Don't put people down; raise them up. When you want to reprimand someone for not living up to your expectations, mention the positive first. Raise his expectations of himself.

Empathy

To calm somebody down, or get him to do what you want, empathy is the key. Empathy, the ability to see through the eyes of another, is one of the greatest powers that humans have. It gives you power over people, of a kind that they won't get mad about. Understand the other guy, and then think for him as he ought to think. The speaker worked as a police officer, so most of the people he dealt with were under the influence of something. Maybe they were drugged, or drunk; maybe they were frightened, or outraged. Whatever it is, it clouds their judgement; be the levelheaded one and help them think clearly. Empathy is what you need for this.

How to interrupt someone.

Use the most powerful sentence in the English language: "Let me see if I understand what you just said." It shuts anybody up, without pissing them off, and they'll listen. Even if they're hopping mad and were screaming their lungs out at you a minute ago, they'll listen. Use this sentence, and then paraphrase what you understand them as saying. When you paraphrase, that lets you control the conversation. You get to put their point of view in your own words, and in doing so, you calm them down and seize control of the conversation.

How to be a good boss.

This was a talk at Columbia University's business school; people came to learn how to be good bosses. And the secret is that if you're a boss, don't focus directly on your own career; focus on lifting up the people under you. Do this, and they will lift you up with them. To be powerful in a group setting, you must disappear. Put your own ego aside, don't worry about who gets the credit, and focus on your goals.

How to discipline effectively.

This is his biggest point. The secret of good discipline is to use language disinterestedly. You can show anger, condescension, irritation, etc., OR you can discipline somebody. You can't do both at the same time. If you show anger when disciplining someone, you give them an excuse to be angry, and you destroy your own effectiveness. Conversely, if you want to express anger, then don't let punishment even enter the conversation. Keep these separate.

How to deal with someone who says no.

There are five stages to this. Try the first one; if it fails, go to the next one, and so on. Usually you won't have to go past the first one or two.

  1. Ask. Be polite. Interrogative tone. "Sir, will you please step out of the car?" This usually works, and the conversation ends here.

  2. Tell him why. Declarative tone. This gives you authority, it's a sign of respect, and it gives the other guy a way of saving face. It builds a context for what you're asking. If asking failed, explaining usually works. "I see an open liquor bottle in your cup-holder, and I'm required by law to search your vehicle. For our safety, I need you to step out of the car."

  3. Create and present options. There are four secrets for this:

    • Voice: friendly and respectful.

    • Always list good options first ("You can go home tonight, have dinner with your family, sleep in your own bed."). Then the bad options ("If you don't get out of this car, the law says you're going to jail overnight, and you'll get your car towed, and they'll charge you like 300 bucks."). Then remind him of the good options, to get the conversation back to what you want him to do. ("I just need you to get out of your car, let me have a look around, and we'll be done in a few minutes.")

    • Be specific. Paint a mental picture for people. Vivid imagery. WIIFM: What's In It For Me? Appeal to the other guy's self-interest. It's not about you; it's about him.

  4. Confirm noncompliance. "Is there anything I can say to get you to cooperate, and step out of the car for me, so you don't go to jail?" Give them a way to save face.

  5. Act -- Disengage or escalate. This is the part where you either give up or get serious. In the "get out of the car" example, this is the part where you arrest him. Very seldom does it get to this stage, if you did the previous stages right.

If you want more on verbal judo, watch the video; he's a good speaker.

Replies from: mattnewport, NancyLebovitz, Eneasz
comment by mattnewport · 2010-08-07T00:14:34.693Z · LW(p) · GW(p)

Does the talk provide any evidence for the efficacy of the tactics?

Replies from: sketerpot
comment by sketerpot · 2010-08-07T02:24:33.920Z · LW(p) · GW(p)

The speaker has a whole career of experience dealing with people who are irrational because they're drunk, angry, frightened, or some combination of the above. He says this stuff is what he does, and that it works great. That's anecdotal, but it's about the strongest kind of anecdotal evidence it's possible to get.

It would be nice if someone did a properly controlled study on this.

comment by NancyLebovitz · 2010-08-07T00:19:20.569Z · LW(p) · GW(p)

Thank you for writing this up.

The one thing I wondered about was whether the techniques for getting compliance interfere with getting information. For example, what if someone who isn't consenting to a search is actually right about the law?

Replies from: sketerpot
comment by sketerpot · 2010-08-07T02:33:20.528Z · LW(p) · GW(p)

The thing that bothers me about the talk is that most of it makes the assumption that you're being calm and rational, that you're right, and that whoever you're talking to is irrational and needs to be verbally judo'd into compliance. Sometimes that's the case, but most of the techniques don't really apply to situations where you're dealing with another calm, sane person as an equal.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-07T10:25:02.474Z · LW(p) · GW(p)

Thompson is actually ambiguous on the point. Sometimes he's really clear that what you're aiming for is compliance.

comment by Eneasz · 2010-08-09T16:08:40.190Z · LW(p) · GW(p)

This is good; you should float it as a top-level post.

comment by JenniferRM · 2010-08-02T22:51:23.689Z · LW(p) · GW(p)

Thanks. That was a compact and helpful 90 minutes. The first 30 minutes were OK, but the 2nd 30 were better, and the 3rd was the best. Towards the end I got the impression that he was explaining lessons that were the kind of thing people spend 5 years learning the hard way and that lots of people never learn for various reasons.

Replies from: Blueberry
comment by Blueberry · 2010-08-02T23:31:06.967Z · LW(p) · GW(p)

That sounds really interesting. I wish there were a transcript available!

Replies from: sketerpot
comment by sketerpot · 2010-08-06T20:51:35.884Z · LW(p) · GW(p)

There's an mp3 version available, which sounds just as good at 1.4x speed. And it cuts the 90 minutes down to about an hour.

comment by sketerpot · 2010-08-02T08:09:44.755Z · LW(p) · GW(p)

I've been on a Wikipedia binge, reading about people pushing various New Age silliness. The tragic part is that a lot of these guys actually do sound fairly smart, and they don't seem to be afflicted with biological forms of mental illness. They just happen to be memetically crazy in a profound and crippling way.

Take Ervin Laszlo, for instance. He has a theory of everything, which involves saying the word "quantum" a lot and talking about a mystical "Akashic Field" which I would describe in more detail except that none of the explanations of it really say much. Here's a representative snippet from Wikipedia:

László describes how such an informational field can explain why our universe appears to be fine-tuned as to form galaxies and conscious lifeforms; and why evolution is an informed, not random, process. He believes that the hypothesis solves several problems that emerge from quantum physics, especially nonlocality and quantum entanglement.

Then we have pages like this one, talking more about the Akashic Records (because apparently it's a quantum field thingy and also an infinite library or something). The very first sentence sums it up: "The Akashic Records refer to the frequency gird programs that create our reality." Okay, actually that didn't sum up crap; but it sounded cool, didn't it? That page is full of references to the works of various people, cited very nicely, and the spelling and grammar suggest someone with education. There are a lot of pages like this floating around. The thing they all have in common is that they don't seem to consider evidence to be important. It's not even on their radar.

Scholarly writing from New Age people is a pretty breathtaking example of dark side epistemology, if anybody wants a case study in exactly what not to do. It's pretty intense.

comment by SilasBarta · 2010-08-01T19:25:02.583Z · LW(p) · GW(p)

I thought I'd pose an informal poll, possibly to become a top-level post, in preparation for my article about How to Explain.

The question: on all the topics you consider yourself an "expert" or "very knowledgeable about", do you believe you understand them at least at Level 2? That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?

Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably-intelligent layperson, one-on-one, to understand complex topics in your expertise, teaching them every intermediate topic necessary for grounding the hardest level?

Edit: Per DanArmak's query, anything you can re-derive or infer from your present knowledge counts as part of your present knowledge for purposes of answering this question.

I'll save my answer for later -- though I suspect many of you already know it!

Replies from: Oscar_Cunningham, DanArmak, NancyLebovitz, zero_call, fiddlemath, JanetK, RobinZ, JRMayne, None, thomblake, DSimon, KrisC
comment by Oscar_Cunningham · 2010-08-02T15:10:47.928Z · LW(p) · GW(p)

I have a (I suspect unusual) tendency to look at basic concepts and try to see them in as many ways as possible. For example, here are seven equations, all of which could be referred to as Bayes' Theorem:

$$P(H|E) = \frac{P(E|H)\,P(H)}{P(E)}$$

$$P(H|E) = \frac{P(E|H)}{P(E)} \cdot P(H)$$

$$P(H|E) = \frac{P(E|H)\,P(H)}{P(E|H)\,P(H) + P(E|\neg H)\,P(\neg H)}$$

$$P(H|E) = \frac{1}{1 + \frac{P(E|\neg H)\,P(\neg H)}{P(E|H)\,P(H)}}$$

$$P(H|E) = \frac{P(E|H)\,P(H)}{\sum_i P(E|H_i)\,P(H_i)}$$

$$\mathrm{odds}(H|E) = \frac{P(E|H)}{P(E|\neg H)} \cdot \mathrm{odds}(H)$$

$$\mathrm{logodds}(H|E) = \log\frac{P(E|H)}{P(E|\neg H)} + \mathrm{logodds}(H)$$

However, each one is different, and forces a different intuitive understanding of Bayes' Theorem. The fourth one down is my favourite, as it makes obvious that the update depends only on the ratio of likelihoods. It also gives us our motivation for taking odds, since this clears up the 1/(1+x)ness of the equation.

Because of this way of understanding things, I find explanations easy, because if one method isn't working, another one will.

ETA: I'd love to see more versions of Bayes' Theorem, if anyone has any more to post.
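As a sanity check, here's a quick numerical sketch (arbitrary numbers) showing that the third and sixth forms above give the same posterior:

    # Arbitrary numbers; the point is that the expanded-denominator form and the
    # odds-ratio form agree on P(H|E).
    p_h = 0.3          # prior P(H)
    p_e_h = 0.8        # likelihood P(E|H)
    p_e_not_h = 0.2    # likelihood P(E|~H)

    # Third form: expand P(E) over H and ~H.
    posterior = p_e_h * p_h / (p_e_h * p_h + p_e_not_h * (1 - p_h))

    # Sixth form: multiply the prior odds by the likelihood ratio, then convert back.
    odds = (p_h / (1 - p_h)) * (p_e_h / p_e_not_h)
    posterior_from_odds = odds / (1 + odds)

    print(posterior, posterior_from_odds)  # both ~0.632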

Replies from: ABranco, SilasBarta
comment by ABranco · 2010-08-05T03:45:45.296Z · LW(p) · GW(p)

$$P(H|E) = \frac{P(H \cap E)}{P(E)}$$

which tends to be how conditional probability is defined, and actually the first version of Bayes that I recall seeing.

comment by SilasBarta · 2010-08-02T15:19:05.398Z · LW(p) · GW(p)

Very well said, and doubles as a reply to the last part of my comment here. (When I read your comment in my inbox, I thought it was actually a reply to that one! Needless to say, my favorite versions of the theorem are the last two you listed.)

comment by DanArmak · 2010-08-01T19:48:51.245Z · LW(p) · GW(p)

using only your present knowledge

This strikes me as an un-lifelike assumption. If I had to explain things in this way, I would expect to encounter some things that I don't explicitly know (and others that I knew and have forgotten), and to have to (re)derive them. But I expect that I would be able to rederive almost all of them.

Refining my own understanding is a natural part of building a complex explanation-story to tell to others, and will happen unless I've already built this precise story before and remember it.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T19:53:08.038Z · LW(p) · GW(p)

For purposes of this question, things you can rederive from your present knowledge count as part of your present knowledge.

comment by NancyLebovitz · 2010-08-01T20:41:53.305Z · LW(p) · GW(p)

I think I know a fair amount about doing calligraphy, but I'm dubious that someone could get a comparable level of knowledge without doing a good bit of calligraphy themselves.

If I were doing a serious job of teaching, I would be learning more about how to teach as I was doing it.

I consider myself to be a good but not expert explainer.

Possibly of interest: The 10-Minute Rejuvenation Plan: T5T: The Revolutionary Exercise Program That Restores Your Body and Mind: a book about an exercise system which involves 5 yoga moves. It's by a woman who'd taught 700 people how to do the system, and shows an extensive knowledge of the possible mistakes students can make and adaptations needed to make the moves feasible for a wide variety of people.

My point is that explanation isn't an abstract perfectible process existing simply in the mind of a teacher.

Replies from: KrisC
comment by KrisC · 2010-08-01T21:35:47.192Z · LW(p) · GW(p)

But in some limited areas explanation is completely adequate.

I taught a co-worker how to do sudoku puzzles. After teaching him the human-accessible algorithms and allowing time for practice, I was still consistently beating his time. I knew why, and he didn't. After I explained the difference in mental state I was using, he began beating my time on a regular basis. {Instead of checking the list of 1-9 for each box or line, allow your brain to subconsciously spot the missing number and then verify its absence.} He is more motivated and has more focus, while I do puzzles to kill time when waiting.

In another job where I believe I had a thorough understanding of the subject, I was never able to teach any of my (~20) trainees to produce vector graphic maps with the speed and accuracy I obtained because I was unable to impart a mathematical intuition for the approximation of curves. I let them go home with full pay when they completed their work, so they definitely had motivation. But they also had editors who were highly detail oriented.

I mean to suggest that there is a continuum of subjective ability comparing different skills. Sudoku is highly procedural; once familiar, all that is required is concentration. Yoga, in the sense mentioned above, is also procedural, prescriptive; the joints allow a limited number of degrees of freedom. Calligraphy strives for an ideal, but depending on the tradition, there is a degree of interpretation allowed for aesthetic considerations. Mapping, particularly in vector graphics, has many ways to be adequate and no way to be perfect.

The number of acceptable outcomes and the degree of variation in useful paths determine the teach-ability of a skillset. The procedural skills can be taught more easily than the subjective, and practice is useful to accomplish mastery of procedural skills. Deeper understanding of a field allows more of the skill's domain to be expressed procedurally rather than subjectively.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-01T21:44:59.373Z · LW(p) · GW(p)

I'm in general agreement, but I think you're underestimating yoga-- a big piece of it is improving access to your body's ability to self-organize.

I like "many ways to be adequate and no way to be perfect". I think most of life is like that, though I'll add "many ways to be excellent".

Replies from: KrisC
comment by KrisC · 2010-08-01T21:50:19.121Z · LW(p) · GW(p)

No slight to yoga intended. I only wanted to address the starting point of yoga. I know it is a quite comprehensive field.

comment by zero_call · 2010-08-02T00:48:50.538Z · LW(p) · GW(p)

I will reply to this in the sense of

"do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?",

since I am not so familiar with the formalism of a "Level 2" understanding.

My uninteresting, simple answer is: yes.

My philosophical answer is that I find the entire question to be very interesting and strange. That is, the relationship between teaching and understanding is quite strange IMO. There are many people who are poor teachers but who excel in their discipline. It seems to be a contradiction because high-level teaching skill seems to be a sufficient, and possibly necessary, condition for masterful understanding.

Personally I resolve this contradiction in the following way. I feel like my own limitations make it to where I am forced to learn a subject by progressing at it in very simplistic strokes. By the time I have reached a mastery, I feel very capable of teaching it to others, since I have been forced to understand it myself in the most simplistic way possible.

Other people, who are possibly quite brilliant, are able to master some subjects without having to transmute the information into a simpler level. Consequently, they are unable to make the sort of connections that you describe as being necessary for teaching.

Personally I feel that the latter category of people must be missing something, but I am unable to make a convincing argument for this point.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-02T01:15:29.747Z · LW(p) · GW(p)

A lot of the questions you pose, including the definition of the Level 2 formalism, are addressed in the article I linked (and wrote).

I classify those who can do something well but not explain or understand the connections from the inputs and outputs to the rest of the world, to be at a Level 1 understanding. It's certainly an accomplishment, but I agree with you that it's missing something: the ability to recognize where it fits in with the rest of reality (Level 2) and the command of a reliable truth-detecting procedure that can "repair" gaps in knowledge as they arise (Level 3).

"Level 1 savants" are certainly doing something very well, but that something is not a deep understanding. Rather, they are in the position of a computer that can transform inputs into the right outputs, but do nothing more with them. Or a cat, which can fall from great heights without injury, but not know why its method works.

(Yes, this comment seems a bit internally repetitive.)

Replies from: zero_call
comment by zero_call · 2010-08-02T01:36:50.800Z · LW(p) · GW(p)

Ah, OK, I read your article. I think that's an admirable task to try to classify or identify the levels of understanding. However, I'm not sure I am convinced by your categorization. It seems to me that many of these "Level 1 savants" as you call them are quite capable of fitting their understanding with the rest of reality. Actually it seems like the claim of "Level 1 understanding" basically trivializes that understanding. Yet many of these people who are bad teachers have a very nontrivial understanding -- else I don't think this would be such a common phenomenon, for example, in academia. I would argue that these people have some further complications or issues which are not recognized in the 1-2-3 hierarchy.

That being said, you have to start somewhere, and the 0-1-2-3 hierarchy looks like a good place to start. I'd definitely be interested in hearing more about this analysis.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-02T03:09:06.361Z · LW(p) · GW(p)

Thanks for reading it and giving me feedback. I'm interested in your claim:

It seems to me that many of these "Level 1 savants" as you call them are quite capable of fitting their understanding with the rest of reality.

Well, they can fit it in the sense that they (over a typical problem set) can match inputs with (what reality deems) the right outputs. But, as I've defined the level, they don't know how those inputs and outputs relate to more distantly-connected aspects of reality.

Yet many of these people who are bad teachers have a very nontrivial understanding -- else I don't think this would be such a common phenomenon, for example, in academia.

I had a discussion with others about this point recently. My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding.

And regarding the archetypal "deep understanding, poor teacher" you have in mind, do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield? If not, I would call that falling short of Level 2.

Replies from: zero_call
comment by zero_call · 2010-08-02T05:07:43.974Z · LW(p) · GW(p)

My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can't they switch to the infinite other paths that a Level 2 understanding enables them to see? If they can't, that would suggest a lack of depth to their understanding.

I would LOVE to agree with this statement, as it justifies my criticism of poor teachers who IMO are (not usually maliciously) putting their students through hell. However, I don't think it's obvious, or I think maybe you just have to take it as an axiom of your system. It seems there is some notion of individualism or personal difference which is missing from the system. If someone is just terrible at learning, can you really expect to succeed in explaining, for example? Realistically I think it's probably impossible to classify the massive concept of understanding by merely three levels, and these problems are just a symptom of that fact.

As another example, in order to understand something, it's clearly necessary to be able to explain it to yourself. In your system, you are additionally requiring that understanding something means you must be able to explain it to other people. In order to explain things to others, you have to understand them, as has been discussed. Therefore you have to be able to explain other people to yourself. Why should an explanation of other individuals' behavior be necessary for understanding some random area of expertise, say, mathematics? It's not clear to me.

And regarding the archetypal "deep understanding, poor teacher" you have in mind, do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield?

It certainly seems like someone with a deep understanding of their subject should be able to identify the validity or uncertainty in their assumptions about the subject. If they are a poor teacher, I think I would still believe this to be true.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-04T13:19:44.097Z · LW(p) · GW(p)

I've thought about this some, and I think I see your point now. I would phrase it this way: It's possible for a "Level 3 savant" to exist. A Level 3 savant, let's posit, has a very deeply connected model of reality, and their excellent truth-detecting procedure allows them to internally repair loss of knowledge (perhaps below the level of their conscious awareness).

Like an expert (under the popular definition), and like a Level 1 savant, they perform well within their field. But this person differs in that they can also perform well in tracing out where its grounding assumptions go wrong -- except that they "just have all the answers" but can't explain, and don't know, where the answers came from.

So here's what it would look like: Any problem you pose in the field (like an anomalous result), they immediately say, "look at factor X", and it's usually correct. They even tell you to check critical aspects of sensors, or identify circularity in the literature that grounds the field (i.e. sources which generate false knowledge by excessively citing each other), even though most in the field might not even think about or know how all those sensors work.

All they can tell you is, "I don't know, you told me X, and I immediately figured it had to be a problem with Y misinterpreting Z. I don't know how Z relates to W, or if W directly relates to X, I just know that Y and Z were the problem."

I would agree that there's no contradiction in the existence of such a person. I would just say that in order to get this level of skill you have to accomplish so many subgoals that it's very unlikely, just as it's hard to make something act and look like a human without also making it conscious. (Obvious disclaimer: I don't think my case is as solid as the one against P-zombies.)

comment by fiddlemath · 2010-08-01T19:57:27.138Z · LW(p) · GW(p)

I think that the "teaching" benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don't need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2.

I'll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I'm sure I don't have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-01T20:05:03.258Z · LW(p) · GW(p)

I think that the "teaching" benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don't need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2.

I agree in the sense that full completion of Level 2 isn't necessary to do what I've described, as that implies a very deeply-connected set of models, truly pervading everything you know about.

But at the same time, I don't think you appreciate some of the hurdles to the teaching task I described: remember, the only assumption is that the student has lay knowledge and is reasonably intelligent. Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels "on the fly", which requires healthy progress into Level 2 in order to achieve -- enough that it's fair to say you "round to" Level 2.

I'll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I'm sure I don't have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.

If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (i.e. through textbooks, for example) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?

Replies from: fiddlemath
comment by fiddlemath · 2010-08-01T20:20:15.839Z · LW(p) · GW(p)

Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels "on the fly", which requires healthy progress into Level 2 in order to achieve -- enough that it's fair to say you "round to" Level 2.

I agree that the teaching task does require a thick bundle of connections, and not just a single chain of inferences. So much so, actually, that I've found that teaching, and preparing to teach, is a pretty good way to learn new connections between my Level 1 knowledge and my world model. That this "rounds" to Level 2 depends, I suppose, on how intelligent you assume the student is.

If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (i.e. through textbooks) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?

Yes, constantly. Frequently, I'm frustrated by such presentations to the point of anger at the author's apparent disregard for the reader, even when I understand what they're saying.

comment by JanetK · 2010-08-02T08:08:06.419Z · LW(p) · GW(p)

I think I have level 2 understanding of many areas of Biology but of course not all of it. It is too large a field. But there are gray areas around my high points of understanding where I am not sure how deep my understanding would go unless it was put to the test. And around the gray areas surrounding the level 2 areas there is a sea of superficial understanding. I have some small areas of computer science at level 2 but they are fewer and smaller, ditto chemistry and geology.

I think your question overlooks the nature of teaching skills. I am pretty good at teaching (verbally and one/few to one) and did it often for years. There is a real knack in finding the right place to start and the right analogies to use with a particular person. Someone could have more understanding than me and not be able to transfer that understanding to someone else. And others could have less understanding and transfer it better.

Finally I like your use of the word 'understanding' rather than 'knowledge'. It implies the connectedness with other areas required to relate to lay people.

Replies from: None
comment by [deleted] · 2010-08-05T13:59:11.996Z · LW(p) · GW(p)

Perhaps the reason experts aren't always good teachers is because their thought processes / problem solving algorithms operate at a level of abstraction that is inaccessible to a beginner.

comment by RobinZ · 2010-08-02T01:43:53.797Z · LW(p) · GW(p)

I have some trouble answering your question, chiefly because my definition of "expert" is approximately synonymous with your definition of "Level 2".

Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably-intelligent layperson, one-on-one, to understand complex topics in your expertise, teaching them every intermediate topic necessary for grounding the hardest level?

"Enough time" would be quite a long period of time. One problem is that there are a lot of textbook results that I would have to use in intermediate steps that would take me a long time to derive. Another is that there are a lot of experimental parameters that I haven't memorized and would have to look up. But I think I could teach arithmetic, algebra, geometry, calculus, differential equations, and Newtonian physics enough that I could teach them proper engineering analysis.

comment by JRMayne · 2010-08-01T23:07:11.867Z · LW(p) · GW(p)

Criminal Law: Yes to Level 2. Yes to teaching a layperson. It would take a while, for sure, but it's doable. Some of the work requires an understanding of a different lifestyle; if you can't see the potential issues with prosecuting a robbery by a prostitute and her armed male friend, or you can't predict that a domestic violence victim will have a non-credible recantation, you'll need some other education.

I've done a lot of instruction in this field. It is common for instruction not to take until there's other experience in the field which helps things join up.

Bridge: Yes to Level 2. Possibly to teaching a layperson. The ability to play bridge well is correlated heavily to intelligence, but it also correlates to a certain zeal for winning. I have taught one person to play very well indeed, but that may not be replicable, and took years. (As an aside, I am very likely the world's foremost expert on online bridge cheating; teaching cheating prevention would require teaching bridge first.)

Teaching requires more than reasonable intelligence on the part of the teachee. Some people who are very intelligent are ineducable. (Many of these are violators of my 40% rule: You are allowed to think you are 40% smarter/faster/stronger/better than you are. After that, it's obnoxious.) Some people are not interested in learning a given subject. Some people will not overcome preset biases. Some people have high aptitudes in some areas and little aptitude in others (though intelligence strongly tends to spill over.)

Anyway, I'm interested in the article. My penultimate effort to explain something to many people - Bayes' Theorem to lawyers - was a moderate failure; my last effort to explain something less mathy to a crowd was a substantial success. (My last experience in explaining something, with assistance, to 12 people was a complete failure.)

--JRM

Replies from: DSimon, SilasBarta
comment by DSimon · 2010-08-01T23:16:54.370Z · LW(p) · GW(p)

I'm curious, why did you choose 40% for your "40% rule"?

Replies from: JRMayne
comment by JRMayne · 2010-08-02T04:09:23.887Z · LW(p) · GW(p)

It's non-arbitrary, but neither is it precise. 100% is clearly too high, and 10% is clearly too low.

And since I started calling it The 40% Rule fifteen years ago or thereabout, a number of my friends and acquaintances have embraced the rule in this incarnation. Obviously, some things are unquantifiable and the specific number has rather limited application. But people like it at this number. That counts for something - and it gets the message across in a way that other formulations don't.

Some are nonplussed by the rule, but the vigor of support by some supporters gives me some thought that I picked a number people like. Since I never tried another number, I could be wrong - but I don't think I am.

--JRM

comment by SilasBarta · 2010-08-02T14:28:07.496Z · LW(p) · GW(p)

Some of the work requires an understanding of a different lifestyle; if you can't see the potential issues with prosecuting a robbery by a prostitute and her armed male friend, or you can't predict that a domestic violence victim will have a non-credible recantation, you'll need some other education.

  • "The people who buy the services of a prostitute generally don't want to go on record saying so, which they would have to do at some point to prosecute such a robbery. This is either because they're married, or the shame associated with using one."

  • "Victims of domestic violence have a lot invested in the relationship, and, no matter how much they feel hurt by the abuse, they will not want to tear apart the family and cripple their spouse with a felony conviction. This inner conflict will be present when the victim tries to recant their testimony."

Did that really require passing the learner off for some other education? Or did I get the explanation wrong?

Anyway, I'm interested in the article. My penultimate effort to explain something to many people - Bayes' Theorem to lawyers - was a moderate failure; my last effort to explain something less mathy to a crowd was a substantial success. (My last experience in explaining something, with assistance, to 12 people was a complete failure.)

I'd actually tried teaching information theory to my mom a week ago, which involved starting with Bayes' Theorem (my preferred phrasing [1]). She's a professional engineer, and found it very interesting (to the point where she kept prodding me for the next lesson), saying that it made much more sense of statistics. In about 1.5-2 hours total, I covered the Theorem, application to a car alarm situation, aggregating independent pieces of evidence, the use of log-odds, and some stuff on Bayes nets and using dependent pieces of evidence.

[1] O(H|E) = O(H) * L(E|H) = O(H) * P(E|H) / P(E|~H) = "On observing evidence, amplify the odds you assign to a belief by the probability of seeing the evidence if the belief were true, relative to if it were false."
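(A worked example of that phrasing with the car alarm -- the numbers here are invented for illustration, not taken from the actual lesson: suppose 1 car in 1,000 in the lot is being broken into at any given moment, a break-in sets off the alarm 95% of the time, and the alarm goes off spuriously 1% of the time. Then

\[O(\text{theft}|\text{alarm}) = \frac{1}{999} \times \frac{0.95}{0.01} = \frac{95}{999} \approx \frac{1}{10.5},\]

so even after hearing the alarm, the probability of an actual theft is still only about 9%.)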

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-02T15:38:01.320Z · LW(p) · GW(p)

Expansion on the explanation about domestic violence victims-- the victim may also be afraid that the government will not protect them from the abuser, and the abuser will be angrier because of the attempt at prosecution.

comment by [deleted] · 2010-08-05T13:54:51.020Z · LW(p) · GW(p)

"That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?"

This is related to an idea that has been brewing at the back of my mind for a while now: Experts aren't always good teachers because their problem solving algorithms may operate at a level of abstraction that is inaccessible to a beginner.

comment by thomblake · 2010-08-02T18:49:57.572Z · LW(p) · GW(p)

Hmm... I'm not sure if I think of myself as an expert at anything, other than when people ask. But I'm pretty sure I have about the best understanding of logic I can hope to have, and could explain virtually all of it to an attentive small child given sufficient time.

And I might be an expert at some sort of computer programming, though I can think of people much better at any bit of it that I can think of; at any rate, I am also confident I could teach that to anyone, or at least anyone who passes a basic test.

comment by DSimon · 2010-08-01T23:28:19.113Z · LW(p) · GW(p)

Computer programming: I'm not sure if I am at Level 2 or not on this.

In favor of being at Level 2: I regularly think about non-computer-related topics with a CS-like approach (e.g. using information theory ideas when playing the inference game Zendo).

Also, I strongly associate my knowledge of "folk psychology" and "folk science" with computer science ideas, and these insights work in both directions. For example, the "learned helplessness" phenomenon, where inexperienced users become so uncomfortable with a system that they prefer to cling to their inexperienced status rather than risk failure in an attempt to understand the system better, appears in many areas of life having nothing directly to do with computers.

Evidence against being at Level 2: I do not have the necessary computer engineering knowledge to connect my understanding of computer programming to my understanding of physics. And, although I have not tried this very often, my experiments in attempting to teach computer programming to laypeople have been middling at best.

My assessment at this point is that I am probably near to Level 2 in computer programming, but not quite there yet.

comment by KrisC · 2010-08-01T19:56:40.076Z · LW(p) · GW(p)

Can you teach a talented, untrained person a skill so that they exceed your own ability? Can you then identify why they are superior? If you have deep level knowledge of your area of expertise that you can impart to others, you ought to be able to evaluate and train a replacement based on "raw talent."

Considering that intellectual or artistic endeavors may have a variety of details hidden even from the expert, perhaps a clearer example may be found in sports coaches.

Replies from: pjeby
comment by pjeby · 2010-08-01T20:16:15.364Z · LW(p) · GW(p)

Perhaps a clearer example may be found in sports coaches.

The main reason that coaches are important (not just in sports) is because of blind spots - i.e., things that are outside of a person's direct perceptual awareness.

Think of the Dunning-Kruger effect: if you can't perceive it, you can't improve it.

(This is also why publications have editors; if a writer could perceive the errors in their work, they could fix them themselves.)

comment by [deleted] · 2010-08-30T23:41:25.142Z · LW(p) · GW(p)

PZ Myers' comments on Kurzweil generated some controversy here recently on LW--see here. Apparently PZ doesn't agree with some of Kurzweil's assumptions about the human mind. But that's beside the point--what I want to discuss is this: according to another blog, Kurzweil has been selling bogus nutritional supplements. What does everyone think of this?

Replies from: jimrandomh
comment by jimrandomh · 2010-08-30T23:48:21.983Z · LW(p) · GW(p)

I would like a better source than a blog comment for the claim that Kurzweil has been selling bogus nutritional supplements. The obvious alternative possibility is that someone else, with less of a reputation to worry about, attached Kurzweil's name to their product without his knowledge.

Replies from: None
comment by [deleted] · 2010-08-31T00:05:34.445Z · LW(p) · GW(p)

Ok, I've found some better sources. See the first three links.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-31T06:01:30.814Z · LW(p) · GW(p)

I would have preferred a more specific link than that, to save me the time of doing a detailed investigation of Kurzweil's company myself. But I ended up doing one anyways, so here are the results.

That "Ray and Terry's Longevity Products" company's front page screams low-credibility. It displays three things: an ad for a book, which I can't judge as I don't have a copy, an ad for snack bars, and a news box. Neutral, silly, and, ah, something amenable to a quality test!

The current top headline in their Healthy Headlines box looked to me like an obvious falsehood ("Dirty Electricity May Cause Type 3 Diabetes"), and on a topic important to me, so I followed it up. It links to a blog I don't recognize, which dug it out of a two year old study, which I found on PubMed. And I personally verified that the study was wrong - by the most generous interpretation, assuming no placebo effect or publication bias (both of which were obviously present), the study contains exactly 4 bits of evidence (4 case studies in which the observed outcome had a 50% chance of happening assuming the null hypothesis, and a 100% chance of happening assuming the conclusion). A review article confirmed that it was flawed.
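(To spell out the arithmetic behind that "4 bits" figure: four independent case studies, each with probability 0.5 under the null hypothesis and probability 1 under the study's conclusion, give a likelihood ratio of

\[\frac{P(\text{data}|\text{conclusion})}{P(\text{data}|\text{null})} = \frac{1}{0.5^4} = 16,\]

and log2(16) = 4 bits.)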

That said, he probably just figured the news box was unimportant and delegated the job to someone who wasn't smart enough to keep the lies out. But it means I can't take anything else on the site seriously without a very time-consuming investigation, which is bad enough.

The bit about Kurzweil taking 250 nutritional supplements per day jumps out, too, since it's an obviously wrong thing to do; the risks associated with taking a supplement (adverse reaction, contamination, mislabeling) scale linearly with the number taken, while the upside has diminishing returns. You take the most valuable thing first, then the second-most; by the time you get to the 250th thing it's a duplicate or worthless. Which leads me to believe that he just fudged the number, by counting things that are properly considered duplicates like split doses of the same thing.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-08-31T07:04:12.610Z · LW(p) · GW(p)

Kurzweil should be concerned that his name is associated with junk science, and with the overall result, but I think it's a little far-fetched to think the man is actually selling nutritional supplements that he thinks are bogus.

The state of medicine and nutrition today is such that we know there is so much we don't know. The human body is supremely complex, to make an understatement. The evidence is pretty strong that most supplements, and even most multi-vitamins, don't do much or even do harm.

However, that is certainly not true in every case, and there are particular supplements where we have strong evidence for net positive effect (vitamin D and fish oil have very strong evidence for net benefit at this point - everyone should be on them).

But if you are someone like Kurzweil, and you want to make it to the Singularity, you probably will do the research and believe you have some inside knowledge on optimizing the human body. I find it more likely that he actually does take a boatload of supplements.

Replies from: None
comment by [deleted] · 2010-08-31T17:50:40.029Z · LW(p) · GW(p)

I'm sure he does take a lot of them himself, but the problem is that Kurzweil taking supplements will still make people think he is delusional (because most people are instantly suspicious of people who do so, generally for good reasons).

On a related note, Ben Best also sells supplements on his website, and many of them look pretty questionable.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-01T00:03:29.137Z · LW(p) · GW(p)

So I'm curious, do you believe that typical supplements have net negative effect, vs just neutral?

It was my understanding that the weight of evidence points to most having neutral overall effect, which to me wouldn't justify instant suspicion. I mean you may be wasting money, but you probably aren't hurting yourself.

And if you really do the research, you probably are going to get some net positive gain, statistically speaking. Don't you think? I know of at least 2 cases (vitamin D and fish oil, where the evidence for net benefit is strong - but mainly due to deficiency in the modern diet).

Replies from: None
comment by [deleted] · 2010-09-01T00:26:08.827Z · LW(p) · GW(p)

I think it is a mixed bag: Some supplements are potentially dangerous, but others (like the ones you mention) can be very helpful. The majority, however, probably have little to no effect whatsoever. As a result, I don't think people should mess around with what they eat without it being subjected to rigorous clinical trials first; though there might be a positive net gain, one dose of something bad can kill you.

In any case, though, believing that something is helpful when it has not yet been tested is clearly irrational. (This is more what I concerned about with Best and Kurzweil.) Selling or promoting something that isn't tested is even worse; it borders on fraud and charlatanry.

Edit: No, let me amend that: it is charlatanry.

comment by XiXiDu · 2010-08-06T08:29:33.698Z · LW(p) · GW(p)

Interesting SF by Robert Charles Wilson!

I normally stay away from posting news to lesswrong.com - although I think an Open Thread for relevant news items would be a good idea - but this one sounds especially good and might be of interest for people visiting this site...

Many-Worlds in Fiction: "Divided by Infinity"

In the year after Lorraine's death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instinct for life or disgusted by my own weakness.

I can't say I wish I had succeeded, because in all likelihood I did succeed, on each and every occasion. Six deaths. No, not just six. An infinite number.

Times six.

There are greater and lesser infinities.

But I didn't know that then.

Replies from: humpolec
comment by humpolec · 2010-08-08T13:38:04.010Z · LW(p) · GW(p)

Thank you.

The idea reminded me of Moravec's thoughts on death:

When we die, the rules surely change. As our brains and bodies cease to function in the normal way, it takes greater and greater contrivances and coincidences to explain continuing consciousness by their operation. We lose our ties to physical reality, but, in the space of all possible worlds, that cannot be the end. Our consciousness continues to exist in some of those, and we will always find ourselves in worlds where we exist and never in ones where we don't. The nature of the next simplest world that can host us, after we abandon physical law, I cannot guess. Does physical reality simply loosen just enough to allow our consciousness to continue? Do we find ourselves in a new body, or no body? It probably depends more on the details of our own consciousness than did the original physical life. Perhaps we are most likely to find ourselves reconstituted in the minds of superintelligent successors, or perhaps in dreamlike worlds (or AI programs) where psychological rather than physical rules dominate. Our mind children will probably be able to navigate the alternatives with increasing facility. For us, now, barely conscious, it remains a leap in the dark.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-08T17:50:27.421Z · LW(p) · GW(p)

I already wrote this fic ("The Grand Finale of the Ultimate Meta Mega Crossover").

Replies from: XiXiDu
comment by XiXiDu · 2010-08-08T18:38:26.565Z · LW(p) · GW(p)

I wouldn't be surprised to find out that many people who know about you and the SIAI are oblivious of your fiction. At least I myself only found out about it some time after learning about you and SIAI.

It is generally awesome stuff and would by itself be reason enough to donate to SIAI. Spreading such fiction stories might actually attract more people to dig deeper and find out about SIAI than being thrown in at the deep end would.

Edit: I myself came to know about SIAI due to SF, especially Orion's Arm.

comment by utilitymonster · 2010-08-03T19:09:27.530Z · LW(p) · GW(p)

If you want to eliminate hindsight bias, write down some reasons that you think justify your judgment.

Those who consider the likelihood of an event after it has occurred exaggerate their likelihood of having been able to predict that event in advance. We attempted to eliminate this hindsight bias among 194 neuropsychologists. Foresight subjects read a case history and were asked to estimate the probability of three different diagnoses. Subjects in each of the three hindsight groups were told that one of the three diagnoses was correct and were asked to state what probability they would have assigned to each diagnosis if they were making the original diagnosis. Foresight-reasons and hindsight-reasons subjects performed the same task as their foresight and hindsight counterparts, except they had to list one reason why each of the possible diagnoses might be correct. The frequency of subjects succumbing to the hindsight bias was lower in the hindsight-reasons groups than in the hindsight groups not asked to list reasons.

Arkes, H. R., et al. (1988). Eliminating the hindsight bias. Journal of Applied Psychology.

Replies from: gwern, gwern
comment by gwern · 2010-08-08T12:52:49.614Z · LW(p) · GW(p)

Ought to be available at: http://dl.dropbox.com/u/5317066/Arkes%20et%20al--Eliminating%20the%20Hindsight%20Bias.pdf

(If this link doesn't work, let me know.)

comment by gwern · 2010-08-05T10:02:43.895Z · LW(p) · GW(p)

Link?

Replies from: utilitymonster
comment by utilitymonster · 2010-08-05T12:31:05.960Z · LW(p) · GW(p)

Link

Replies from: gwern
comment by gwern · 2010-08-05T12:53:29.765Z · LW(p) · GW(p)

I meant 'link to full text please', unless I'm missing something somewhere on that page.

Replies from: utilitymonster
comment by utilitymonster · 2010-08-05T13:41:17.905Z · LW(p) · GW(p)

Don't have. I can e-mail a copy if you want (got it through university's subscription).

Replies from: gwern
comment by gwern · 2010-08-05T14:17:39.531Z · LW(p) · GW(p)

Alright, email it to me at gwern0 at gmail.com, and I'll host it somewhere for everyone. Maybe Dropbox.

comment by ata · 2010-08-17T04:48:55.138Z · LW(p) · GW(p)

I've been wanting to change my username for a while, and have heard from a few other people who do too, but I can see how this could be a bit confusing if someone with a well-established identity changes their username. (Furthermore, at LW meetups, when I've told people my username, a couple of people have said that they didn't remember specific things I've posted here, but had some generally positive affect associated with the name "ata". I would not want to lose that affect!) So I propose the following: Add a "Display name" field to the Preferences page on LW; if you put something in there, then this name would be shown on your user page and your posts and comments, next to your username. (Perhaps something like "ata (Adam Atlas)" — or the other way around? Comments and suggestions are welcome.)

I'm willing to code this if there's support for it and if the administrators deem it acceptable.

comment by Alexandros · 2010-08-06T08:31:31.786Z · LW(p) · GW(p)

Wired - We Are All Talk Radio Hosts

Replies from: None
comment by [deleted] · 2010-08-09T17:16:22.679Z · LW(p) · GW(p)

Related - verbal overshadowing, where describing something verbally blocks retrieving perceptual memories of it. Critically, verbal overshadowing doesn't always occur - sometimes verbal descriptions improve reasoning.

Doesn't refute Lehrer's main point exactly, but does complicate it somewhat.

comment by simplicio · 2010-08-05T00:30:06.589Z · LW(p) · GW(p)

An amusing case of rationality failure: Stockwell Day, a longstanding albatross around Canada's neck, says that more prisons need to be built because of an 'increase in unreported crime.'

As my brother-in-law amusingly noted on FB, quite apart from whether the actual claim is true (no evidence is forthcoming), unless these unreported crimes are leading to unreported trials and unreported incarcerations, it's not clear why we would need more prisons.

comment by [deleted] · 2010-08-02T10:32:40.448Z · LW(p) · GW(p)

I’m not yet good enough at writing posts to actually properly post something but I hoped that if I wrote something here people might be able to help me improve. So obviously people can comment however they normally would but it would be great if people would be willing to give me the sort of advice that would help me to write a better post next time. I know that normal comments do this to some extent but I’m also just looking for the basics – is this a good enough topic to write a post on but not well enough executed (therefore, I should work on my writing). Is it not a good enough topic? Why not? Is it not in depth enough? And so on.

Is your graph complete?

The red gnomes are known to be the best arguers in the world. If you asked them whether the only creature that lived in the Graph Mountains was a Dwongle, they would say, “No, because Dwongles never live in mountains.”

And this is true, Dwongles never live in mountains.

But if you want to know the truth, you don’t talk to the red gnomes, you talk to the green gnomes who are the second best arguers in the world.

And they would say. “No, because Dwongles never live in mountains.”

But then they would say, “Both we and the red gnomes are so good at arguing that we can convince people that false things are true. Even worse though, we’re so good that we can convince ourselves that false things are true. So we always ask if we can argue for the opposite side just as convincingly.”

And then, after thinking, they would say, “We were wrong, they must be Dwongles, for only Dwongles ever live in places where no other creatures live. So we have a paradox and paradoxes can never be resolved by giving counter examples to one or the other claim. Instead of countering, you must invalidate one of the arguments.”

Eventually, they would say, “Ah. My magical fairy mushroom has informed me that Graph Mountain is in fact a hill, ironically named, and Dwongles often live in hills. So yes, the creature is a Dwongle.”

The point of all of that is best discussed after introducing a method of diagramming the reasoning made by the green gnomes. The following series of diagrams should be reasonably self-explanatory. A is a proposition that we want to know the truth of (the creature in the Graph Mountains is a Dwongle) and not-A is its negation (the creature in the Graph Mountains is not a Dwongle). If a path is drawn between a proposition and the "Truth" box, then the proposition is true. Paths are not direct but go through a proof (in this case P1 stands in for "Dwongles never live in mountains" and P2 stands in for "Only Dwongles live in a place where no other creatures live"). The diagrams connect to the argument made above by the green gnome. First, we have the argument that it mustn't be a Dwongle because of P1. The second diagram shows the green gnome realising that they have an argument that it must be a Dwongle too due to P2. This middle type of diagram could be called a "Paradox Diagram."

Figure 1. The green gnomes' process of argument.

In his book Good and Real, Gary Drescher notes that paradoxes can't be resolved by making more counterarguments (the approach shown in figure 2, which, considered graphically, is obviously not helpful: we still have both propositions being shown to be true) but rather by invalidating one of the arguments. That's what the green gnomes did when they realised that Graph Mountain was actually a hill, and that's what the final diagram in figure 1 shows the result of (when you remove a vertex, like P1, you remove all the lines connected to it as well).

Figure 2. Attempting to resolve a paradox via counterarguments rather than invalidation.

The interesting thing in all of this is that the first and third diagrams in figure 1 look very similar. In fact, they’re the same but simply with different propositions proven. And this raises something: It can be very difficult to tell the difference between an incomplete paradox diagram and a completed proof diagram. The difference between the two is whether you’ve tried to find an argument for the opposite of the proposition proven and, if you do find one, whether you’ve managed to invalidate that argument.

What this means is, if you’re not confident that your proof for a proposition is true, you can’t be sure that you’ve taken all of the appropriate steps to establish its truth until you’ve asked: Is my graph complete?
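(One way to make the diagrams precise, offered here only as a rough sketch rather than anything from Drescher: the vertices are the propositions A and not-A, the proofs P1, ..., Pn, and the "Truth" box; an edge runs from Truth to each proof that has not been invalidated, and from each proof to the proposition it supports. Then

\[A \text{ is established} \iff \text{there is a path } \text{Truth} - P_i - A \text{ for some valid } P_i,\]

a paradox diagram is one in which both A and not-A are established, and invalidating P_i deletes that vertex together with its edges.)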

Replies from: None, gwern
comment by [deleted] · 2010-08-04T13:39:56.228Z · LW(p) · GW(p)

So my presumption is that 4 points means this article isn't hopeless - it hasn't attracted criticism, and some people have upvoted it - but isn't up to LW standards - it hasn't been voted highly enough, and there is only 1 comment engaging with the topic.

Is anyone able to give me a sense as to why it isn't good enough? Should the topic necessarily be backed up by peer reviewed literature? Is it just not a big enough insight? Is it the writing? Is it the lack of specific examples noted by Gwern? Is it too similar to other ideas? And so on.

I hope I'm not bugging people by trying to figure it out, but I'm trying to get better at writing posts without filling the main bit of Less Wrong with uninteresting stuff, and this seemed like a less intrusive way to do this. I also feel like the best way to improve isn't simply reading the posts but involves actually trying to write posts and (hopefully) getting feedback.

Thanks

Replies from: JenniferRM
comment by JenniferRM · 2010-08-05T17:33:56.714Z · LW(p) · GW(p)

I tried composing a response a day or two ago, but had difficulty finding the words.

In a nutshell, I thought you should start with last two paragraphs, boil that down to a coherent and specific claim. Then write an entirely new essay that puts that claim at the top, in an introductory/summary paragraph. The rest of the post should be spent justifying and elaborating on the claim directly and clearly, without talking about gnomes or deploying the fallacy of equivocation on the sly, but hopefully with citation to peer reviewed evidence and/or more generally accessible works about reasoning (like from a book).

Replies from: None
comment by [deleted] · 2010-08-06T08:14:48.160Z · LW(p) · GW(p)

Thanks for the comment. That's really helpful. So I should basically start with the idea, present it more clearly (no gnomes) and try to provide peer reviewed evidence or at least some support.

comment by gwern · 2010-08-03T06:18:45.468Z · LW(p) · GW(p)

I like this, but in Good and Real, Drescher's paradigm works because he then supplies a few examples where he invalidates a paradox-causing argument, and then goes on to apply this general approach. Aside from your gnome hypothetical example, where do you actually check that your graph is complete?

Replies from: None
comment by [deleted] · 2010-08-03T07:43:27.506Z · LW(p) · GW(p)

I think that you're asking when would you check that your graph is complete in a real world case, sorry if I misunderstood.

If so, take the question of whether global warming is anthropogenic. There are people who claim to have evidence that it is and people who claim to have evidence that it isn't so the basic diagram that we have for this case is a paradox diagram similar to that in figure 2 of the article above. Now there are a number of possible responses to this: Some people could be stuck on the paradox diagram and be unsure as to the right answer, some people may have invalidated one or the other side of the argument and may have decided one or the other claim is true, and some may be adding more and more proofs to one side or the other - countering rather than invalidating.

I think there's also a fourth group whose belief graph will look the same as those who have invalidated one side and have hence reached a conclusion. However, these will be people who, while they may technically know that arguments exist for the negation of their belief, have not taken opposing notions into account in their belief graph. So to them, it will look like a graph demonstrating the truth of their belief but, in fact, it's simply an incomplete paradox graph and they have some distance to go to figure out the truth of the matter.

So to summarise: I think there are people on both sides of the anthropogenic global warming debate who know that purported proofs against their beliefs exist on one level but who don't factor these into their belief graphs. I think they could benefit from asking themselves whether their graph is complete.

I should mention that this particular case isn't what motivated the post - in some ways I worry that by providing specific examples people stop judging an idea on its merit and start judging it based on their beliefs regarding the example mentioned and how they feel this is meant to tie in with the idea. Regardless, I could be mistaken. Is it considered a good idea to always provide a real-world example in LW posts on rationality techniques?

Or if you meant a more personal example then at my work there's currently a debate over whether a proposed electronic system will work. I'm one of the few people that thinks it won't (and I have some arguments to support that) but I haven't invalidated any arguments that show it will work, I simply haven't come across any such arguments. But it's a circumstance where I might benefit from asking, is my graph complete?

As a side note, I think the technique can also be extended to other circumstances. For example, some aspects of Eliezer's Guessing the teacher's password could be modelled by a "Password Graph": a graph like those above but where the paths for both A and not-A go through the same proof (say P1 for example). If you have a proof for A then you could ask if you have an incomplete Password graph because, if so, you could be in trouble. So you could extend the circumstances where the question applies by asking if you have completed any of a number of graphs. Of course, doing so comes at the cost of simplicity.

comment by andreas · 2010-08-01T22:35:55.067Z · LW(p) · GW(p)

Scott Aaronson asks for rational arguments for and against cryonics.

Replies from: ciphergoth, ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-02T15:16:34.658Z · LW(p) · GW(p)

Thanks to the two people who pointed this out to me in DM. I've commented, though Cyan has already linked to the essays on my blog I'd link to first.

comment by Johnicholas · 2010-08-09T11:08:26.105Z · LW(p) · GW(p)

Say a "catalytic pattern" is something like scaffolding, an entity that makes it easier to create (or otherwise obtain) another entity. An "autocatalytic pattern" is a sort of circular version of that, where the existence of an instance of the pattern acts as scaffolding for creating or otherwise obtaining another entity.

Autocatalysis is normally mentioned in the "origin of life" scientific field, but it also applies to cultural ratchets. An autocatalytic social structure will catalyze a few more instances of itself (frequently not expanding without end - rather, a niche is filled), and then the population has some redundancy and recoverability, acting as a ratchet.

For example, driving on the right(left) in one region catalyzes driving on the right(left) in an adjacent region.

Designing circular or self-applicable entities is kind of tricky, but it's not as tricky as it might be - often, there's an attraction basin around a hypothesized circular entity, where X catalyzes Y which is very similar to X, and Y catalyzes Z which is very similar to Y, and so focusing your search sufficiently, and then iterating or iterating-and-tweaking can often get the last, trickiest steps.

Douglas Hofstadter catalyzed the creation (by Lee Sallows) of a "Pangram Machine" that exploits this attraction basin to create a self-describing sentence that starts "This Pangram contains four as, [...]" - see http://en.wikipedia.org/wiki/Pangram

Has there been any work on measuring, studying attraction basins around autocatalytic entities?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T11:45:58.750Z · LW(p) · GW(p)

Has there been any work on measuring, studying attraction basins around autocatalytic entities?

I don't know of any work on the question, but it's a good topic. Nations seem to be autocatalytic.

comment by gwern · 2010-08-09T05:47:04.207Z · LW(p) · GW(p)

"The differences are dramatic. After tracking thousands of civil servants for decades, Marmot was able to demonstrate that between the ages of 40 and 64, workers at the bottom of the hierarchy had a mortality rate four times higher than that of people at the top. Even after accounting for genetic risks and behaviors like smoking and binge drinking, civil servants at the bottom of the pecking order still had nearly double the mortality rate of those at the top."

"Under Pressure: The Search for a Stress Vaccine" http://www.wired.com/magazine/2010/07/ff_stress_cure/all/1

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T10:06:49.888Z · LW(p) · GW(p)

It was interesting that most of the commenters were opposed to the idea of a stress vaccine, though their reasons didn't seem very good.

I'm wondering whether the vaccine would mean that people would be more inclined to accept low status (it's less painful) or less inclined to accept low status (more energy, less pessimism.)

I also wonder how much of the stress from low status is from objectively worse conditions (less benign stimulus, worse schedules, more noise, etc.) as distinct from less control, and whether there's a physical basis for the inclination to crank up stress on subordinates.

Replies from: gwern
comment by gwern · 2010-08-10T04:50:10.235Z · LW(p) · GW(p)

their reasons didn't seem very good.

Wired has unusually crappy commentators; YouTube quality. I wouldn't put much stock in their reactions.

I'm wondering whether the vaccine would mean that people would be more inclined to accept low status (it's less painful) or less inclined to accept low status (more energy, less pessimism.)

/blatant speculation

Stress response evolved for fight-or-flight - baboons and chimps fight nasty. Not for thinking or health. Reduce that, and like mindfulness meditation, one can think better and solve one's problems better.

is from objectively worse conditions

IIRC, the description made it sound like the study controlled for conditions - comparing clerical work with controlling bosses to clerical work sans controlling bosses.

Replies from: knb, NancyLebovitz
comment by knb · 2010-08-16T05:49:21.725Z · LW(p) · GW(p)

Wired has unusually crappy commentators; YouTube quality.

Oh come on, they're bad, but they're not YouTube bad.

comment by NancyLebovitz · 2010-08-10T09:35:33.021Z · LW(p) · GW(p)

One mention is of unsupportive bosses and the other is of mean bosses. I think we need more detail to find out what is actually meant.

comment by gwern · 2010-08-05T10:08:51.923Z · LW(p) · GW(p)

One little anti-akrasia thing I'm trying is editing my crontab to periodically pop up an xmessage with a memento mori phrase. It checks that my laptop lid is open, gets a random integer and occasionally pops up the # of seconds to my actuarial death (gotten from Death Clock; accurate enough, I figure):

 1,16,31,46 * * * * if grep open /proc/acpi/button/lid/LID0/state; then if [ $((`date \+\%\s` % 6)) = 1 ]; then xmessage "$(((`date --date="9 August 2074" \+\%\s` - `date \+\%\s`) / 60)) minutes left to live. Is what you are doing important?"; fi; fi

(I figure it's stupid enough a tactic and cheap enough to be worth trying. This shell stuff works in both bash and dash/sh, however, you probably want to edit the first conditional, since I'm not sure Linux puts the lid data at the same place in /proc/acpi in every system.)

Replies from: gwern, Risto_Saarelma
comment by gwern · 2010-08-14T16:17:49.265Z · LW(p) · GW(p)

OK, I can't seem to get the escaping to work right with crontab no matter how I fiddle, so I've replaced the one-liner with a regular script and meaningful variables names and all:

 1,14,32,26 * * * * ~/bin/bin/memento-mori

The script itself being (with the 32-bit hack mentioned below):

#!/bin/sh
set -e

# Only nag when the laptop lid is open.
if grep open /proc/acpi/button/lid/LID?/state > /dev/null
then
    CURRENT=`date +%s`;
    # Fire on roughly one run in eight.
    if [ $(( $CURRENT % 8 )) = 1 ]
    then
        # Epoch seconds for 9 August 2074, hardcoded to dodge the 32-bit date limitation discussed below.
        # DEATH_DATE=`date --date='9 August 2074' +%s`
        DEATH_DATE="3300998400"
        REMAINING=$(( $DEATH_DATE  - $CURRENT ))
        REMAINING_MINUTES=$(( $REMAINING / 60 ))
        REMAINING_MINUTES_FMT=`env printf "%'d" $REMAINING_MINUTES`
        # Auto-dismiss the reminder after ten minutes.
        (sleep 10m && killall xmessage &)
        xmessage "$REMAINING_MINUTES_FMT minutes left to live. Is what you are doing important?"
    fi
fi

comment by Risto_Saarelma · 2010-08-05T10:23:02.731Z · LW(p) · GW(p)

Dates that far into the future don't seem to work with the date command on 32-bit Linux.

Fun idea otherwise. You should report back in a month or so if you're still using it.

Replies from: gwern, h-H, gwern
comment by gwern · 2010-10-10T23:17:58.217Z · LW(p) · GW(p)

I had to reinstall with 32-bit to use a document scanner, so this became a problem for me. What I did was punch my 2074 date into an online converter, and use that generated date:

 -        DEATH_DATE=`date --date='9 August 2074' +%s`
 +        # DEATH_DATE=`date --date='9 August 2074' +%s`
 +        DEATH_DATE="3300998400"
comment by h-H · 2010-08-07T07:14:09.502Z · LW(p) · GW(p)

It might have an opposite effect to what is intended since the number would simply be too large.

comment by gwern · 2010-08-05T10:29:55.079Z · LW(p) · GW(p)

People still use 32-bit OSs?

But seriously, you could probably shell out to something else. Or you could change the output - it doesn't have to be in seconds or minutes. For example, you could call date to get the current year, and subtract that against 2074 or whatever.

comment by NancyLebovitz · 2010-08-04T07:37:53.321Z · LW(p) · GW(p)

I think one of the other reasons many people are uncomfortable with cryonics is that they imagine their souls being stuck-- they aren't getting the advantages of being alive or of heaven.

Replies from: Nisan, lwta
comment by Nisan · 2010-08-06T18:00:55.262Z · LW(p) · GW(p)

In all honesty, I suspect another reason people are uncomfortable with cryonics is that they don't like being cold.

comment by lwta · 2010-08-05T01:45:58.946Z · LW(p) · GW(p)

what's a soul?

Replies from: Oligopsony
comment by Oligopsony · 2010-08-05T01:58:34.383Z · LW(p) · GW(p)

Well, for the people presumably feeling uncomfortable, it's an immortal spirit that houses your personality and gets attached to a body for your pilgrimage on Earth.

There might be something to this for people who reject this metaphysic, even beyond unconsciously carrying it around. If you're going to come back, you don't get the secular heaven of "being fondly remembered after you die." In a long retirement or vacation, the book hasn't been shut on you. Perhaps there's something important many people find in the book being shut - of others, afterwards, being able to evaluate a life as a completed story. Someone frozen is maybe a "completed story" and maybe not.

comment by EStokes · 2010-08-02T17:31:47.787Z · LW(p) · GW(p)

Are there any posts people would like to see reposted? For example, Where Are We seems like it maybe should be redone, or at least linked from About... Or so I thought, but I just checked About and the page for introductions wasn't linked, either. Huh.

Replies from: thomblake
comment by thomblake · 2010-08-02T18:29:30.015Z · LW(p) · GW(p)

It would be nice if we had profile pages with machine-readable information and an interface for simple queries so posts such as that one would be redundant.

comment by Yoreth · 2010-08-02T06:33:00.018Z · LW(p) · GW(p)

Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.

You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?

I am humbled by how poorly my own personal knowledge would fare.

Replies from: JoshuaZ, KrisC, jimrandomh, RobinZ, None, None, Eneasz, mstevens, ianshakil, xamdam
comment by JoshuaZ · 2010-08-02T12:52:46.708Z · LW(p) · GW(p)

I suspect that people are overestimating in their replies how much could be done with Wikipedia. People in general underestimate a) how much technology requires bootstrapping (metallurgy is a great example of this) and b) how much many technologies, even primitive ones, depend on large populations so that specialization, locational advantages and comparative advantage can kick in. (People even in not very technologically advanced cultures have had tech levels regress when they settled large islands or when their locations got cut off from the mainland. Tasmania is the classic example of this: the inability to trade with the mainland caused large drops in tech level.) So while Wikipedia makes sense, it would also be helpful to have a lot of details on do-it-yourself projects that could use pre-existing remnants of existing technology. There are a lot of websites and books devoted to that topic, so that shouldn't be too hard.

If we are reducing to a small population, we may need also to focus on getting through the first one or two generations with an intact population. That means that a handful of practical books on field surgery, midwifing, and similar basic medical issues may become very necessary.

Also, when you specify "ordinary men and women", do you mean people who all speak the same language? And by "ordinary", do you mean people from roughly developed-world countries? That's what many people seem to mean when questions like this are posed, and these details could alter things considerably. For example, if it really is a random sample, then inter-language dictionaries will be very important. But if the sample involves some people from the developing world, they are more likely to have some of the knowledge base for working in a less technologically advanced situation that people in the developed world will lack (though even this may only be true to a very limited extent, because the tech level of the developing world is in many respects very high compared to the tech level of humans for most of human history; many countries described as developing are in better shape than, for example, much of Europe in the Middle Ages).

Replies from: arundelo
comment by arundelo · 2010-08-02T14:55:02.130Z · LW(p) · GW(p)

how much technology requires bootstrapping (metallurgy is a great example of this)

I would love to see a reality TV show about a metallurgy expert making a knife or other metal tool from scratch. The expert would be provided food and shelter but would have no equipment or materials for making metal, and so would have to find and dig up the ore themselves, build their own furnace, and do whatever else you would have to do to make metal if you were transported to the stone age.

Replies from: RobinZ, arundelo
comment by RobinZ · 2010-08-02T17:42:12.212Z · LW(p) · GW(p)

One problem you would face with such a show is if the easily-available ore is gone.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-03T00:51:33.626Z · LW(p) · GW(p)

Yes, this is in fact connected to a general problem that Nick Bostrom has pointed out, each time you try to go back from stone age tech to modern tech you use resources up that you won't have the next time. However, for purposes of actually getting back to high levels of technology rather than having a fun reality show, we've got a few advantages. One can use the remaining metal that is in all the left over objects from modern civilization (cars being one common easy source of a number of metals). Some metals are actually very difficult to extract from ore (aluminum is the primary example of this. Until the technologies for extraction were developed, it was expensive and had almost no uses) whereas the ruins of civilization will have those metals in near pure forms if one knows where to look.

Replies from: ABranco
comment by ABranco · 2010-08-05T04:07:00.260Z · LW(p) · GW(p)

The argument that no one person on the face of the Earth knows how to build a computer mouse from scratch is plausible.

Matt Ridley

comment by KrisC · 2010-08-02T20:23:33.204Z · LW(p) · GW(p)

Maps.

Locations of pre-disaster settlements to be used as supply caches. Locations of structures to be used for defense. Locations of physical resources for ongoing exploitation: water, fisheries, quarries. Locations of no travel zones to avoid pathogens.

comment by jimrandomh · 2010-08-02T18:14:48.570Z · LW(p) · GW(p)

Presupposing that only a limited amount of knowledge could be saved seems wrong. You could bury petabytes of data in digital form, then print out a few books' worth of hints for getting back to the technology level necessary to read it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-02T18:32:12.414Z · LW(p) · GW(p)

If the resources for printing are still handy. I don't feel comfortable counting on that at present levels of technology.

comment by RobinZ · 2010-08-02T11:47:30.331Z · LW(p) · GW(p)

In rough order of addition to the corpus of knowledge:

  1. The scientific method.

  2. Basic survival skills (e.g. navigation).

  3. Edit: Basic agriculture (e.g. animal husbandry, crop cultivation).

  4. Calculus.

  5. Classical mechanics.

  6. Basic chemistry.

  7. Basic medicine.

  8. Basic political science.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-02T15:20:25.690Z · LW(p) · GW(p)

Basic sanitation!

Replies from: RobinZ, ABranco
comment by RobinZ · 2010-08-02T15:36:05.507Z · LW(p) · GW(p)

Yes! Insert sanitation between 3 and 4, and insert construction (e.g. whittling, carpentry, metal casting) between sanitation and 3.

comment by ABranco · 2010-08-05T04:13:02.880Z · LW(p) · GW(p)

For survival skills, I'd suggest buying this one before the disaster, while there's still internet.

comment by [deleted] · 2010-08-02T18:14:38.770Z · LW(p) · GW(p)

Let's examine the problem in more detail: Different disaster scenarios would require different pieces of information, so it would help if you knew exactly what kind of catastrophe. However, if you can preserve a very large compendium of knowledge, then you can create a catalogue of necessary information for almost every type of doomsday scenario (nuclear war, environmental catastrophe, etc.) so that you will be prepared for almost anything. If the amount of information you can save is more limited, then you should save the pieces of information that are the most likely to be useful in any given scenario in "catastrophe-space." Now we have to go about determining what these pieces of information are. We can start by looking at the most likely doomsday scenarios--Yoreth, since you started the thread, what do you think the most likely ones are?

Replies from: Yoreth
comment by Yoreth · 2010-08-04T05:31:40.122Z · LW(p) · GW(p)

I suppose, perhaps, an asteroid impact or nuclear holocaust? It's hard for me to imagine a disaster that wipes out 99.999999% of the population but doesn't just finish the job. The scenario is more a prompt to provoke examination of the amount of knowledge our civilization relies on.

(What first got me thinking about this was the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth. But you would be hard pressed to restart civilization from a space station, at least at current tech levels.)

Replies from: None, Blueberry
comment by [deleted] · 2010-08-04T14:51:02.180Z · LW(p) · GW(p)

The other problem is this: if there is a disaster that wipes out such a large percentage of the Earth's population, the few people who did survive it would probably be in very isolated areas and might not have access to any of the knowledge we've been talking about anyway.

Still, it is interesting to look at what knowledge our civilization rest on. It seems to me that a lot of the infrastructure we rely on in our day-to-day lives is "irreducibly complex"--for example, we know how to make computers, but this is not a necessary skill in a disaster scenario (or our ancestral environment).

comment by Blueberry · 2010-08-04T09:01:56.780Z · LW(p) · GW(p)

the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth.

I am not following this. Why would the anthropic principle no longer apply if you went into space?

Replies from: katydee
comment by katydee · 2010-08-04T09:36:11.356Z · LW(p) · GW(p)

I think it's a quantum immortality argument. If you, the observer, are no longer on Earth, the Earth can be destroyed because its destruction no longer necessitates your death.

comment by [deleted] · 2010-08-02T06:40:46.904Z · LW(p) · GW(p)

A dead tree copy of Wikipedia. A history book about ancient handmade tools and techniques from prehistory to now. A bunch of K-12 school books about math and science. Also as many various undergraduate and postgraduate level textbooks as possible.

Replies from: JanetK, Oscar_Cunningham, sketerpot
comment by JanetK · 2010-08-02T11:30:32.631Z · LW(p) · GW(p)

Wikipedia is a great answer because we know that most, but not all, of the information is good. Some is nonsense. This will force the future generations to question and maybe develop their own 'science' rather than worship the great authority of 'the old and holy books'.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-02T12:56:00.151Z · LW(p) · GW(p)

The knowledge about science issues generally tracks our current understanding very well. And historical knowledge that is wrong will be extremely difficult for people to check after an apocalyptic event, and even then it is largely correct. In fact, if Wikipedia's science content really were bad enough to matter, it would be an awful thing to bring into this situation, since having correct knowledge or not could alter whether or not humanity survives at all.

comment by Oscar_Cunningham · 2010-08-02T11:43:05.401Z · LW(p) · GW(p)

Wikipedia would also contain a lot of info about current people and places, which would no longer be remotely useful.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-02T15:21:13.962Z · LW(p) · GW(p)

And a lot of popular culture which would no longer be available.

comment by sketerpot · 2010-08-02T07:18:19.626Z · LW(p) · GW(p)

A dead-tree copy of Wikipedia has been estimated at around 1,420 volumes. Here's an illustration, with a human for scale. It's big. You might as well go for broke and hole up in a library when the Big Catastrophe happens.

Replies from: mstevens
comment by mstevens · 2010-08-02T11:03:25.558Z · LW(p) · GW(p)

One of these http://thewikireader.com/ with rechargeable batteries and a solar charger could work.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-02T18:52:01.889Z · LW(p) · GW(p)

Until some critical part oxidizes or otherwise breaks. Which will likely be a long time before the new society is able to build a replacement.

Replies from: listic
comment by listic · 2010-08-04T13:33:50.107Z · LW(p) · GW(p)

But the WikiReader is probably a step in the right direction that is worth mentioning.

While most current technology depends on many other technologies to be useful (cellular phones need cellular networks, most gadgets won't last a day on their internal batteries, etc.), the WikiReader is a welcome step in the direction less travelled. I only hope that we will have more of that.

comment by Eneasz · 2010-08-02T18:05:51.460Z · LW(p) · GW(p)

How to start a fire only using sticks.

How to make a cutting blade from rocks.

How to create a bow, and make arrows.

Basic sanitation.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-02T18:09:42.470Z · LW(p) · GW(p)

That seems like advice for living in the woods-- not a bad idea, but it probably needs to be adjusted for different environments (finding water in dry land, staying warm in extreme cold, etc.) and especially for scavenging from ruins.

Any thoughts about people skills you'd need after the big disaster?

Replies from: Eneasz
comment by Eneasz · 2010-08-02T18:45:58.198Z · LW(p) · GW(p)

I thought about those a bit, but came to a few conclusions that made sense to me.

Being in very dry land is simply a bad idea; best to move. Any group of survivors that is more than three days from fresh water won't be survivors, and once they've made it to a fresh water source there won't be many reasons to stray far from it for at least a couple of generations, so water-finding skills will probably not be useful and will quickly be lost.

Staying warm in extreme cold would be covered both by the fire-starting skills and the bow-making skills.

I wanted to put something about people skills, but I don't have any myself and didn't know what I could possibly say that would be remotely useful. Hopefully someone with more experience on that subject will survive as well. :)

comment by mstevens · 2010-08-02T11:07:44.288Z · LW(p) · GW(p)

I'm tempted to say "a university library" as the short answer. More specifically, whatever I could get from the science and engineering departments. Pick the classic works in each field if you have someone to filter them. Look for stuff that's more universal than specific to the way we've done things - in computing terms, you want The Art of Computer Programming and not The C Programming Language.

In the short term, anything you can find on farming and primitive medicine - all the stuff the better class of survivalist would have on their bookshelf.

comment by ianshakil · 2010-08-02T17:55:34.832Z · LW(p) · GW(p)

I only need one item:

The Holy Bible

(kidding)

comment by xamdam · 2010-08-02T16:00:55.524Z · LW(p) · GW(p)

Depends what level you want to achieve post-catastrophe; some, if not most of your resources and knowledge will be needed to deal with specific effects. In short, your suitcase will be full of survivalist and medical material.

In a thought experiment where you freeze yourself until the ecosystem is restored, you can probably use an algorithm of taking the best library materials from each century, corrected for errors, to achieve the level of that century.

Both Robinson Crusoe and Jules Verne's "Mysterious Island" explore similar bootstrapping scenarios; interestingly, both use some "outside injections".

comment by Pavitra · 2010-08-25T02:11:16.800Z · LW(p) · GW(p)

There's an idea I've seen around here on occasion to the effect that creating and then killing people is bad, so that for example you should be careful that when modeling human behavior your models don't become people in their own right.

I think this is bunk. Consider the following:

--

Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.

Does this still hold if the two processes are not made to diverge; that is, if they are deterministic (or use the same pseudorandom seed) and are never given differing inputs?

Suppose that instead of forking the process in software, we constructed an additional identical computer, set it on the table next to the first one, and copied the program state over. Suppose further that the computers were cued up to each other so that they were not only performing the same computation, but executing the steps at the same time as each other. (We won't readjust the sync on an ongoing basis; it's just part of the initial conditions, and the deterministic nature of the algorithm ensures that they stay in step after that.)

Suppose that the computers were not electronic, but insanely complex mechanical arrays of gears and pulleys performing the same computation -- emulating the electronic computers at reduced speed, perhaps. Let us further specify that the computers occupy one fewer spatial dimension than the space they're embedded in, such as flat computers in 3-space, and that the computers are pressed flush up against each other, corresponding gears moving together in unison.

What if the corresponding parts (which must be staying in synch with each other anyway) are superglued together? What if we simply build a single computer twice as thick? Do we still have two people?

--

No, of course not. And, on reflection, it's obvious that we never did: redundant computation is not additional computation.

So what if we cause the ems to diverge slightly? Let us stipulate that we give them some trivial differences, such as the millisecond timing of when they receive their emails. If they are not actively trying to diverge, I anticipate that this would not have much difference to them in the long term -- the ems would still be, for the most part, the same person. Do we have two distinct people, or two mostly redundant people -- perhaps one and a tiny fraction, on aggregate? I think a lot of people will be tempted to answer that we have two.

But consider, for a moment, if we were not talking about people but -- say -- works of literature. Two very similar stories, even if by a raw diff they share almost no words, are of not much more value than only one of them.

The attitude I've seen seems to treat people as a special case -- as a separate magisterium.

--

I wish to assert that this value system is best modeled as a belief in souls. Not immortal souls with an afterlife, you understand, but mortal souls, that are created and destroyed. And the world simply does not work that way.

If you really believed that, you'd try to cause global thermonuclear war, in order to prevent the birth of billions or more of people who will inevitably be killed. It might take the heat death of the universe, but they will die.

Replies from: ata
comment by ata · 2010-08-25T03:09:19.315Z · LW(p) · GW(p)

You make good points. I do think that multiple independent identical copies have the same moral status as one. Anything else is going to lead to absurdities like those you mentioned, like the idea of cutting a mechanical computer in half and doubling its moral worth.

I have for a while had a feeling that the moral value of a being's existence has something to do with the amount of unique information generated by its mind, resulting from its inner emotional and intellectual experience. (Where "has something to do with" = it's somewhere in the formula, but not the whole formula.) If you have 100 identical copies of a mind, and you delete 99 of them, you have not lost any information. If you have two slightly divergent copies of a mind, and you delete one of them, then that's bad, but only as bad as destroying whatever information exists in it and not the other copy. Abortion doesn't seem to be a bad thing (apart from any pain caused; that should still be minimized) because a fetus's brain contains almost no information not compressible to its DNA and environmental noise, neither of which seems to be morally valuable. Similar with animals; it appears many animals have some inner emotional and intellectual experience (to varying degrees), so I consider deleting animal minds and causing them pain to have terminal negative value, but not nearly as great as doing the same to humans. (I also suspect that a being's value has something to do with the degree to which its mind's unique information is entangled with and modeled (in lower resolution) by other minds, à la I Am A Strange Loop.)

Replies from: Pavitra
comment by Pavitra · 2010-08-25T04:02:31.452Z · LW(p) · GW(p)

I think... there's more to this wrongness-feeling I have than I've expressed. I would readily subject a million forks of myself to horrific suffering for the moderate benefit of just one of me. The main reason I'd have reservations about releasing myself on the internet for anyone to download would be because they could learn how to manipulate me. The main problem I have with slavery and starvation is that they're a waste of human resources, and that monolithic power structures are brittle against black swans. In short, I don't consider it a moral issue what algorithm is computed to produce a particular result.

I'm not sure how to formalize this properly.

comment by [deleted] · 2010-08-24T04:06:24.659Z · LW(p) · GW(p)

Some hobby Bayesianism. A typical challenge for a rationalist is that there is some claim X to be evaluated; it seems preposterous, but many people believe it. How should you take account of this when considering how likely X is to be true? I'm going to propose a mathematical model of this situation and discuss two of its features.

This is based on a continuing discussion with Unknowns, who I think disagrees with what I'm going to present, or with its relevance to the "typical challenge."

Summary: If you learn that a preposterous hypothesis X is believed by many people, you should not correct your prior probability P(X) by a factor larger than the reciprocal of P(Y), your prior probability for the hypothesis Y = "X is believed by many people." One can deduce an estimate of P(Y) from an estimate of the quantity "if I already knew that at least n people believed X, how likely it would be that n+1 people believed X" as a function of n. It is not clear how useful this method of estimating P(Y) is.

The right way to unpack "X seems preposterous, but many believe it" mathematically is as follows. We have a very low prior probability P(X), and then we have new evidence Y = "many people believe X". The problem is to evaluate P(X|Y).

One way to phrase the typical challenge is "How much larger than P(X) should P(X|Y) be?" In other words, how large is the ratio P(X|Y)/P(X)? Bayes formula immediately says something interesting about this:

P(X|Y)/P(X) = P(Y|X)/P(Y)

Moreover, since P(Y|X) < 1, the right-hand side of that equation is less than 1/P(Y). My interpretation of this: if you want to know how seriously to take the fact that many people believe something, you should consider how likely you find it that many people would believe it absent any evidence. Or a little more precisely, how likely you find it that many people would believe it if the amount of evidence available to them was unknown to you. You should not correct your prior for X by more than the reciprocal of this probability.

Comment: how much less than 1 P(Y|X) is depends on the nature of X. For instance, if X is the claim "the Riemann hypothesis is false" then it is unclear to me how to estimate P(Y|X), but (since it is conceivable to me that RH could be false while still being widely believed true) it might be quite small. If X is an everyday claim like "it's a full moon tomorrow", or a spectacular claim like "Jesus rose from the dead", it seems like P(Y|X) is very close to 1. So sometimes 1/P(Y) is a good approximation to P(X|Y)/P(X), but sometimes it may be a big overestimate.
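
To make the bound concrete, here is a minimal sketch with made-up numbers (both priors below are invented purely for illustration):

    # Upper bound on how much "many people believe X" can raise P(X):
    # P(X|Y)/P(X) = P(Y|X)/P(Y) <= 1/P(Y), because P(Y|X) <= 1.
    p_X = 1e-9   # prior for the preposterous claim itself (made up)
    p_Y = 0.01   # prior that many people would come to believe such a claim (made up)

    max_update_factor = 1 / p_Y                       # largest possible boost
    p_X_given_Y_upper = min(1.0, p_X * max_update_factor)
    print(f"P(X|Y) is at most about {p_X_given_Y_upper:.1e}")   # 1.0e-07: still tiny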

What about P(Y)? Is there a way to estimate it, or at least approach its estimation? Let's give ourselves a little more to work with, by quantifying "many people" in "many people believe X". Let Y(n) be the assertion "at least n people believe X." Note that this model doesn't specify what "believe" means -- in particular it does not specify how strongly n people believe X, nor how smart or expert those n people are, nor where in the world they are located... if there is a serious weakness in this model it might be found here.

Another application of Bayes theorem gives us

P(Y(n+1))/P(Y(n)) = P(Y(n+1)|Y(n))

(Since P(Y(n)|Y(n+1)) = 1, i.e. if we know n+1 people believe X, then of course n people believe X). Squinting a little, this gives us a formula for the derivative of the logarithm of P(Y(n)). Yudkowsky has suggested naming the log of a probability an "absurdity," let's write A(Y(n)) for the absurdity of Y(n).

d/dn A(Y(n)) = A(Y(n+1)|Y(n))

So, up to an additive constant, A(Y(n)) is the integral from 1 to n of A(Y(m+1)|Y(m))dm. So an ansatz for P(Y(n+1)|Y(n)) = exp(A(Y(n+1)|Y(n))) will allow us to say something about P(Y(n)), up to a multiplicative constant.

The shape of P(Y(n+1)|Y(n)) seems like it could have a lot to do with what kind of statement X is, but there is one thing that seems likely to be true no matter what X is: if N is the total population of the world and n/N is close to zero, then P(Y(n+1)|Y(n)) is also close to zero, and if n/N is close to one then P(Y(n+1)|Y(n)) is also close to one. I might work out an example ansatz like this in a future comment, if this one stands up to scrutiny.

Replies from: None
comment by [deleted] · 2010-08-24T20:27:25.141Z · LW(p) · GW(p)

Here is my proposal for an ansatz for P(Y(n+1)|Y(n)). That is, given that at least n people already believe X, how likely it is that at least one more person also believes X. Let N be the total population of the world. If n/N is close to zero, then I expect P(Y(n+1)|Y(n)) is also close to zero, and if n/N is close to 1, then P(Y(n+1)|Y(n)) is also close to 1. That is, if I know that a tiny proportion of people believe something, that's very weak evidence that a slightly larger proportion believe it also, and if I know that almost everyone believes it, that's very strong evidence that even more people believe it.

One family of functions that have this property are the functions f(n) = (n/N)^C, where C is some fixed positive number. Actually it's convenient to set C = c/N where c is some other fixed positive number. I don't have a story to tell about why P(Y(n+1)|Y(n)) should behave this way, I bring it up only because f(n) does the right thing near 1 and N, and is pretty simple.

To evaluate P(Y(n)), we take the integral of

(c/N)log(t/N)dt

from 1 to n, and exponentiate it. The result is, up to a multiplicative constant

exp(c times (x log x - x)) = (x/e)^(cx)

where x = n/N. I think it's a good idea to leave this as a function of x. Write K for the multiplicative constant. We have P(at least proportion x of the population believes X) = K(x/e)^(cx). A graph of this function for K = 1, c = 1 can be found here, and a graph of its reciprocal (whose relevance is explained in the parent) can be found here.
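
For anyone who wants to poke at the shape without chasing the links, a minimal plotting sketch (the values of K and c are arbitrary placeholders):

    import numpy as np
    import matplotlib.pyplot as plt

    K, c = 1.0, 1.0                       # arbitrary placeholder constants
    x = np.linspace(0.01, 1.0, 200)       # x = n/N, proportion believing X
    p = K * (x / np.e) ** (c * x)         # the ansatz P = K (x/e)^(cx)

    plt.plot(x, p, label="K (x/e)^(cx)")
    plt.plot(x, 1.0 / p, label="reciprocal")
    plt.xlabel("x = n/N")
    plt.legend()
    plt.show()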

Replies from: RobinZ
comment by RobinZ · 2010-08-24T20:49:51.423Z · LW(p) · GW(p)

It's an interesting analysis - have you confirmed the appearance of that distribution with real-world data? I suppose you'd need a substantial body of factual claims about which statistical information is available...

Replies from: None
comment by [deleted] · 2010-08-24T21:16:12.626Z · LW(p) · GW(p)

Thanks. I of course have no data, although I think there are lots of surveys done about weird things people believe. But even if this is the correct distribution, I think it would be difficult to fit data to it, because I would guess/worry that the constants K and c would depend on the nature of the claim. (c is so far just an artifact of the ansatz. K is something like P(Y(1)|Y(0)). Different for bigfoot than for Christianity.) Do you have any ideas?

comment by NQbass7 · 2010-08-20T19:04:52.861Z · LW(p) · GW(p)

Alright, I've lost track of the bookmark and my google-fu is not strong enough with the few bits and pieces I remember. I remember seeing a link to a story in a lesswrong article. The story was about a group of scientists who figured out how to scan a brain, so they did it to one of them, and then he wakes up in a strange place and then has a series of experiences/dreams which recount history leading up to where he currently is, including a civilization of uploads, and he's currently living with the last humans around... something like that. Can anybody help me out? Online story, 20 something chapters I think... this is driving me nuts.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-08-20T19:08:53.375Z · LW(p) · GW(p)

After Life

Replies from: NQbass7
comment by NQbass7 · 2010-08-20T19:19:01.048Z · LW(p) · GW(p)

Thank you. Bookmarked.

comment by Yoreth · 2010-08-08T17:09:32.367Z · LW(p) · GW(p)

I think I may have artificially induced an Ugh Field in myself.

A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.

Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."

Now that the week's over, I don't even want to think about X any more. It just feels too weird.

And maybe that's a good thing.

Replies from: Cyan
comment by Cyan · 2010-08-08T17:48:00.432Z · LW(p) · GW(p)

I have also artificially induced an Ugh Field in myself. A few months ago, I was having a horrible problem with websurfing procrastination. I started using Firefox for browsing and LeechBlock to limit (but not eliminate) my opportunities for websurfing instead of doing work. I'm on a Windows box, and for the first three days I disabled IE, but doing so caused knock-on effects, so I had to re-enable it. However, I knew that resorting to IE to surf would simply recreate my procrastination problem, so... I just didn't. Now, when the thought occurs to me to do so, it auto-squelches.

Replies from: Unknowns
comment by Unknowns · 2010-08-08T18:35:53.486Z · LW(p) · GW(p)

I predict with 95% confidence that within six months you will have recreated your procrastination problem with some other means.

Replies from: Cyan
comment by Cyan · 2010-08-09T20:04:26.632Z · LW(p) · GW(p)

Your lack of confidence in me has raised my ire. I will prove you wrong!

Replies from: Unknowns, Unknowns
comment by Unknowns · 2011-02-08T15:24:58.452Z · LW(p) · GW(p)

Did you start procrastinating again?

Replies from: Cyan
comment by Cyan · 2011-02-09T15:36:08.814Z · LW(p) · GW(p)

Yep. Eventually I sought medical treatment.

comment by Unknowns · 2010-08-09T20:07:17.642Z · LW(p) · GW(p)

To be settled by February 8, 2011!

comment by sketerpot · 2010-08-08T02:14:48.719Z · LW(p) · GW(p)

What simple rationality techniques give the most bang for the buck? I'm talking about techniques you might be able to explain to a reasonably smart person in five minutes or less: really the basics. If part of the goal here is to raise the sanity waterline in the general populace, not just among scientists, then it would be nice to have some rationality techniques that someone can use without much study.

Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they'd made a higher-likelihood claim, like "I had a sandwich for lunch." Now, I'm sure everybody here can talk about this very precisely, in terms of Bayesian updating and odds ratios, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.
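
The point is easy to make concrete in odds form. A toy calculation (all numbers invented for illustration):

    # Bayes update in odds form: an extraordinary claim starts at very low
    # prior odds, so ordinary-strength evidence barely moves it.
    prior_odds = 1 / 1_000_000     # e.g. "my neighbor levitated last night"
    likelihood_ratio = 20          # an honest-seeming eyewitness report:
                                   # P(report | claim true) / P(report | claim false)

    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"posterior probability is still only about {posterior_prob:.6f}")  # ~0.000020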

What techniques for rationality can be explained to a normal person in under five minutes? I'm looking for small and simple memes that will make people more rational, on average. I'll try a few candidates, to get the discussion started.

Candidate 1: Carl Sagan's concise explanation of how evidence works, as mentioned above.

Candidate 2: Everything that has an effect in the real world is part of the domain of science (and, more broadly, rationality). A lot of people have the truly bizarre idea that some theories are special, immune to whatever standards of evidence they may apply to any other theory. My favorite example is people who believe that prayers for healing actually make people who are prayed for more likely to recover, but that this cannot be scientifically tested. This is an obvious contradiction: they're claiming a measurable effect on the world and then pretending that it can't possibly be measured. I think that if you pointed out a few examples of this kind of special pleading to people, they might start to realize when they're doing it.

Candidate 3: Admitting that you were wrong is a way of winning an argument. There's a saying that "It takes a big man to admit he's wrong," and when people say this, they don't seem to realize that it's a huge problem! It shouldn't be hard to admit that you were wrong about something! It shouldn't feel like defeat; it should feel like victory. When you lose an argument with someone, it should be time for high fives and mutual jubilation, not shame and anger. I know that it's possible to retrain yourself to feel this way, because I've done it. This wasn't even too difficult; it was more a matter of just realizing that feeling good about conceding an argument was even an option.

Anti-candidate: "Just because something feels good doesn't make it true." I call this an anti-candidate because, while it's true, it's seldom helpful. People trot out this line as an argument against other people's ideas, but rarely apply it to their own. I want memes that will make people actually be more rational, instead of just feeling that way.

Any ideas? I know that the main goal of this community is to strive for rationality far beyond such low-hanging fruit, but if we can come up with simple and easy techniques that actually help people be more rational, there's a lot of value in that. You could use it as rationalist propaganda, or something.

EDIT: I've expanded this into a top-level post.

Replies from: DuncanS, Larks, RobinZ, RobinZ
comment by DuncanS · 2010-08-09T00:58:34.532Z · LW(p) · GW(p)

I think some of the statistical fallacies that most people fall for are quite high up the list.

One such is the "What a coincidence!" fallacy. People notice that some unlikely event has occurred, and wonder what the odds against this event must have been - millions to one - and yet it actually happened! Surely this means that my life is influenced by some supernatural influence!

The typical mistake is to simply calculate the likelihood of the occurrence of the particular event that occurred. Nothing wrong with that, but one should also compare that number against the whole basket of other possible unlikely events that you would have noticed if they'd happened (of which there are surely millions), and all the possible occasions on which all these unlikely events could have occurred. When you do that, you discover that the likelihood of some unlikely thing happening is quite high - which is in accordance with our experience that unlikely events do actually happen.

Another way of looking at it is that non-notable unlikely events happen all the time. Look, that particular car just passed me at exactly 2pm! Most are not noticeable. But sometimes we notice that a particular unlikely event just occurred, and of course it causes us to sit up and take notice. The question is how many other unlikely events you would also have noticed.

The key rational skill here is noticing the actual size of the set of unlikely things that might have happened, and that would have caught our attention if they had.
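
A rough sketch of that comparison (the numbers are invented purely for illustration):

    # If you would have noticed any one of a huge basket of "million-to-one"
    # events, the chance that *some* such event happens is not small at all.
    p_single = 1e-6          # probability of one particular noticeable coincidence
    n_occasions = 2_000_000  # rough count of occasions on which you'd have noticed one

    p_at_least_one = 1 - (1 - p_single) ** n_occasions
    print(f"P(at least one 'million-to-one' coincidence) ~ {p_at_least_one:.2f}")  # ~0.86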

comment by Larks · 2010-08-09T22:12:03.440Z · LW(p) · GW(p)

I'm going to be running a series of Rationality & AI seminars with Alex Flint in the Autumn, where we'll introduce aspiring rationalists to new concepts in both fields: standard cognitive biases, a bit of Bayesianism, and some of the basic problems with both AI and Friendliness. As such, this could be a very helpful thread.

We were thinking of introducing Overconfidence Bias; ask people to give 90% confidence intervals, and then reveal (surprise surprise!) that they're wrong half the time.
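
Scoring the exercise afterwards takes only a few lines; a rough sketch (the intervals and true values below are placeholders):

    # What fraction of the stated 90% intervals actually contained the truth?
    intervals = [(1000, 5000), (1800, 1950), (10, 60)]   # (low, high) guesses
    truths    = [3474, 1912, 88]                         # the real answers (toy data)

    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    print(f"{hits}/{len(truths)} intervals contained the truth; "
          f"a calibrated 90% would average {0.9 * len(truths):.1f}")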

Replies from: sketerpot
comment by sketerpot · 2010-08-10T02:31:41.583Z · LW(p) · GW(p)

Since it seemed like this could be helpful, I expanded this into a top-level post.

That 90% confidence interval thing sounds like one hell of a dirty trick. A good one, though.

comment by RobinZ · 2010-08-08T16:42:46.948Z · LW(p) · GW(p)

The concept of inferential distance is good. You wouldn't want to introduce it in the context of explaining something complicated - you'd just sound self-serving - but it'd be a good thing to crack out when people complain about how they just can't understand how anyone could believe $CLAIM.

Edit: It's also a useful concept when you are thinking about teaching.

comment by RobinZ · 2010-08-08T02:41:20.441Z · LW(p) · GW(p)

#3 is a favorite of mine, but I like #1 too.

How about "Your intuitions are not magic"? Granting intuitions the force of authority seems to be a common failure mode of philosophy.

Replies from: sketerpot
comment by sketerpot · 2010-08-08T02:51:53.098Z · LW(p) · GW(p)

That's a good lesson to internalize, but how do you get someone to internalize it? How do you explain it (in five minutes or less) in such a way that someone can actually use it?

I'm not saying that there's no easy way to explain it; I just don't know what that way would be. When I argue with someone who acts like their intuitions are magic, I usually go back to basic epistemology: define concisely what it means to be right about whatever we're discussing, and show that their intuitions here aren't magic. If there's a simple way to explain in general that intuition isn't magic, I'd really love to hear it. Any ideas?

Replies from: DuncanS, RobinZ
comment by DuncanS · 2010-08-09T02:22:58.230Z · LW(p) · GW(p)

Given that we haven't constructed a decent AI, and don't know how those intuitions actually work, we only really believe they're not magic on the grounds that we don't believe in magic generally, and don't see any reason why intuitions should be an exception to the rule that all things can be explained.

Perhaps an easier lesson is that intuitions can sometimes be wrong, and it's useful to know when that happens so we can correct for it. For example, most people are intuitively much more afraid of dying in dramatic and unusual ways (like air crashes or psychotic killers) than in more mundane ways like driving a car or eating unhealthy foods. Once it's established that intuitions are sometimes wrong, the fact that we don't exactly know how they work isn't so dangerous to one's thinking.

comment by RobinZ · 2010-08-08T02:56:11.251Z · LW(p) · GW(p)

Well, I thought Kaj_Sotala's explanation was good, but the five-minute constraint makes things very difficult. I tend to be so long-winded that I'm not sure I could get across any insight in five minutes, honestly, but you're right that "Your intuitions are not magic" is likely to be harder than many.

comment by jimmy · 2010-08-06T00:18:00.732Z · LW(p) · GW(p)

Does anyone know where the page that used to live here can be found?

It was an experiment where two economists were asked to play a 100-turn asymmetric prisoner's dilemma with communication on each turn to the experimenters, but not to each other.

It was quite amusing in that even though they were both economists and should have known better, the guy on the 'disadvantaged' side was attempting to have the other guy let him defect once in a while to make it "fair".

Replies from: Douglas_Knight, gwern
comment by gwern · 2010-08-05T05:12:10.206Z · LW(p) · GW(p)

"CIA Software Developer Goes Open Source, Instead":

"Burton, for example, spent years on what should’ve been a straightforward project. Some CIA analysts work with a tool, “Analysis of Competing Hypotheses,” to tease out what evidence supports (or, mostly, disproves) their theories. But the Java-based software is single-user — so there’s no ability to share theories, or add in dissenting views. Burton, working on behalf of a Washington-area consulting firm with deep ties to the CIA, helped build on spec a collaborative version of ACH. He tried it out, using the JonBenet Ramsey murder case as a test. Burton tested 51 clues — the lack of a scream, evidence of bed-wetting — against five possible culprits. “I went in, totally convinced it all pointed to the mom,” Burton says. “Turns out, that wasn’t right at all.”"

Replies from: Rain
comment by Rain · 2010-08-09T20:04:33.599Z · LW(p) · GW(p)

Far more interesting than the software is the chapter in the CIA book Psychology of Intelligence Analysis where they describe the method:

Analysis of competing hypotheses, sometimes abbreviated ACH, is a tool to aid judgment on important issues requiring careful weighing of alternative explanations or conclusions. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.

ACH is an eight-step procedure grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It is a surprisingly effective, proven process that helps analysts avoid common analytic pitfalls. Because of its thoroughness, it is particularly appropriate for controversial issues when analysts want to leave an audit trail to show what they considered and how they arrived at their judgment.

Summary and conclusions:

Three key elements distinguish analysis of competing hypotheses from conventional intuitive analysis.

  • Analysis starts with a full set of alternative possibilities, rather than with a most likely alternative for which the analyst seeks confirmation. This ensures that alternative hypotheses receive equal treatment and a fair shake.
  • Analysis identifies and emphasizes the few items of evidence or assumptions that have the greatest diagnostic value in judging the relative likelihood of the alternative hypotheses. In conventional intuitive analysis, the fact that key evidence may also be consistent with alternative hypotheses is rarely considered explicitly and often ignored.
  • Analysis of competing hypotheses involves seeking evidence to refute hypotheses. The most probable hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it. Conventional analysis generally entails looking for evidence to confirm a favored hypothesis.
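
The core of the method - count the evidence each hypothesis fails to explain, and favor the hypothesis with the least against it - is simple enough to sketch (the hypotheses and ratings below are invented for illustration, reusing the clues from the article):

    # Toy ACH-style scoring: hypotheses are judged by how much evidence is
    # inconsistent with them, not by how much is consistent with them.
    evidence_vs_hypothesis = {
        "intruder":      {"no scream": "inconsistent", "bed-wetting": "neutral"},
        "family member": {"no scream": "consistent",   "bed-wetting": "consistent"},
    }

    def evidence_against(ratings):
        # Lower is better: items of evidence the hypothesis cannot explain.
        return sum(1 for r in ratings.values() if r == "inconsistent")

    for hypothesis, ratings in evidence_vs_hypothesis.items():
        print(hypothesis, "-> evidence against:", evidence_against(ratings))
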
comment by Alex_Altair · 2010-08-03T20:53:48.596Z · LW(p) · GW(p)

What's the policy on User pages in the wiki? Can I write my own for the sake of people having a reference when they reply to my posts, or are they only for somewhat accomplished contributors?

Replies from: Blueberry, WrongBot, gwern
comment by Blueberry · 2010-08-04T01:09:41.610Z · LW(p) · GW(p)

I can't imagine any reason why it would be a problem to make a User page. Go ahead.

comment by WrongBot · 2010-08-04T00:53:55.676Z · LW(p) · GW(p)

I haven't seen any sort of policy articulated. I just sort of went for it, and haven't gotten any complaints yet. Personally, I'd love to see more people with wiki user pages, since the LW site itself doesn't have much in the way of profile features.

comment by gwern · 2010-08-05T10:03:26.502Z · LW(p) · GW(p)

My default assumption has been that unless otherwise stated, all the norms and conventions of Wikipedia apply to the LW wiki. The English Wikipedia, at least, lets you have one for any reason you want.

comment by Mass_Driver · 2010-08-31T00:39:59.346Z · LW(p) · GW(p)

It might be useful to have a short list of English words that indicate logical relationships or concepts often used in debates and arguments, so as to enable people who are arguing about controversial topics to speak more precisely.

Has anyone encountered such a list? Does anyone know of previous attempts to create such lists?

comment by wedrifid · 2010-08-24T03:15:38.033Z · LW(p) · GW(p)

Eliezer has written a post (ages ago) which discussed a bias when it comes to contributions to charities. Fragments that I can recall include considering the motivation for participating in altruistic efforts in a tribal situation, where having your opinion taken seriously is half the point of participation. This is in contrast to donating 'just because you want thing X to happen'. There is a preference to 'start your own effort, do it yourself', even when that would be less efficient than donating to an existing charity.

I am unable to find the post in question - I think it is distinct from 'the unit of caring'. It would be much appreciated if someone who knows the right keywords could throw me a link!

Replies from: WrongBot
comment by WrongBot · 2010-08-25T00:33:34.798Z · LW(p) · GW(p)

Your Price for Joining?

Replies from: wedrifid
comment by wedrifid · 2010-08-25T01:51:10.250Z · LW(p) · GW(p)

That's it. Thankyou!

comment by ABranco · 2010-08-19T03:08:48.058Z · LW(p) · GW(p)

The visual guide to a PhD: http://matt.might.net/articles/phd-school-in-pictures/

Nice map–territory perspective.

comment by Craig_Heldreth · 2010-08-15T13:09:11.205Z · LW(p) · GW(p)

John Baez's This Week's Finds in Mathematical Physics has its 300th and last entry. He is moving to WordPress and Azimuth. He states he wants to concentrate on futures, and has upcoming interviews with:

Tim Palmer on climate modeling and predictability, Thomas Fischbacher on sustainability and permaculture, and Eliezer Yudkowsky on artificial intelligence and the art of rationality. A Google search returns no matches for Fischbacher + site:lesswrong.com and no hits for Palmer +.

That link to Fischbacher that Baez gives has a presentation on cognitive distortions and public policy which I found quite good.

comment by [deleted] · 2010-08-11T06:38:26.091Z · LW(p) · GW(p)

Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates; at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity.

Now this is not necessarily a binary distinction; if shrimp have intrinsic moral value, it does not follow that they must have value equal to humans or other 'higher' animals. As I see it, there are two possibilities: either we come to a point where the moral value drops to zero, or else we decide that moral value only approaches zero, becoming arbitrarily small: e.g. a c. elegans roundworm with its 300 neurons might have a 'hedonic coefficient' of 3x10^-9. I personally favor the former; the latter just seems absurd to me, but I am open to arguments or any comments/criticisms.

Replies from: FAWS, GrateGoo, Tiiba
comment by FAWS · 2010-08-16T16:56:31.748Z · LW(p) · GW(p)

As I see it, there are two possibilities; either we come to a point where the moral value drops to zero, or else we decide that entities approach zero to some arbitrary limit: e.g. a c. elegans roundworm with its 300 neurons might have a 'hedonic coefficient' of 3x10^-9. I personally favor the former, the latter just seems absurd to me, but I am open to arguments or any comments/criticisms.

Less absurd than that some organism is infinitely more valuable than its sibling that differs in lacking a single mutation (in the case of the first organism of a particular species to have evolved "high" enough to have minimal moral value)?

comment by GrateGoo · 2010-08-25T05:59:56.109Z · LW(p) · GW(p)

Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to tell, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that's just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the universe from suffering (yes, they are sparse, but with time our superintelligence will find more and more of them, in theory ultimately infinitely many).

The instrumental value of most humans is enormously higher than the intrinsic value of the same persons - given that they do sufficiently good things.

comment by Tiiba · 2010-08-16T16:39:37.976Z · LW(p) · GW(p)

My answer: if it shows signs of not wanting something to happen, such as avoiding a situation, it's best not to have it happen. Of course, simple stimulus response doesn't count, but if an animal can learn, it shouldn't be tortured for fun.

This only applies to animals, though. I'm not sure about machines.

Replies from: WrongBot
comment by WrongBot · 2010-08-16T18:44:24.304Z · LW(p) · GW(p)

There isn't a very meaningful distinction between animals and machines. What does or doesn't count as a "simple stimulus response"? Or learning?

Replies from: Tiiba
comment by Tiiba · 2010-08-16T21:18:42.435Z · LW(p) · GW(p)

Okay, more details: if an animal's behavior changes when it's repeatedly injured, it can learn. And learning is goal-oriented. But if it always does the same thing in the same situation, whatever that action is, it doesn't correspond to a desire.

And the reason why this is important for animals is that whatever it is that suffering is, I guess it evolved quite long ago. After all, avoiding injury is a big part of the point of having a brain that can learn.

Replies from: WrongBot
comment by WrongBot · 2010-08-16T21:39:29.031Z · LW(p) · GW(p)

I've programmed a robot to behave in the way you describe, treating bright lights as painful stimuli. Was testing it immoral?

Replies from: Tiiba
comment by Tiiba · 2010-08-16T22:32:36.042Z · LW(p) · GW(p)

That's why I said it's hairier with machines.

Um, actual pain or just disutility?

Replies from: WrongBot
comment by WrongBot · 2010-08-16T23:07:14.035Z · LW(p) · GW(p)

That would depend pretty heavily on how you define pain. This is a good question; my first instinct was to say that they're the same thing, but it's not quite that simple. Pain in animals is really just an inaccurate signal of perceived disutility. The robot's code contained a function that "punished" states in which its photoreceptor was highly stimulated, and the robot made changes to its behavior in response, but I'm really not sure if that's equivalent to animal pain, or where exactly that line is.

Replies from: Cyan
comment by Cyan · 2010-08-17T00:35:29.536Z · LW(p) · GW(p)

Pain has been the topic of a top-level post. I think my own comment on that thread is relevant here.

Replies from: WrongBot
comment by WrongBot · 2010-08-17T01:23:51.797Z · LW(p) · GW(p)

Ahh, I hadn't seen that before. Thanks for the link.

So, did my robot experience suffering then? Or is there some broader category of negative stimulus that includes both suffering and the punishment of states in which certain variables are above certain thresholds? I think it's pretty clear that the robot didn't experience pain, but I'm still confused.

comment by gwern · 2010-08-09T07:03:19.833Z · LW(p) · GW(p)

With regard to the recently claimed proof that P!=NP: http://predictionbook.com/predictions/1588

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-09T07:48:57.572Z · LW(p) · GW(p)

With no time limit, how can you ever win that one?

Replies from: gwern
comment by gwern · 2010-08-09T07:51:18.454Z · LW(p) · GW(p)

No time limit?

Created by gwern about 1 hour ago; known in over 5 years

Might as well create a prediction for this; I assume 5 years ought to be enough time for the proof, if correct, to be verified & accepted, or to be refuted.

comment by NancyLebovitz · 2010-08-08T22:44:38.197Z · LW(p) · GW(p)

Would people be interested in a place on LW for collecting book recommendations?

I'm reading The Logic of Failure and enjoying it quite a bit. I wasn't sure whether I'd heard of it here, and I found Great Books of Failure, an article which hadn't crossed my path before.

There's a recent thread about books for a gifted young tween which might or might not get found by someone looking for good books..... and so on.

Would it make more sense to have a top level article for book recommendations or put it in the wiki? Or both?

Replies from: None, Morendil
comment by [deleted] · 2010-08-09T15:54:01.041Z · LW(p) · GW(p)

Considering most of my favorite books are the result of mentions in comment threads here, I'd say a book recommendation thread is in order.

Tangential, but I remember "The Logic of Failure" as mostly covering mental phenomena I was already familiar with, plus generalizations from computer experiments that I didn't find particularly compelling. I'll have to give it another look.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T16:09:32.841Z · LW(p) · GW(p)

I liked the section near the beginning about the various ways of being bad at optimizing complex computer scenarios. It was a tidy description of the ways people think too little about what they're doing and/or overfocus on the wrong things.

Part of my enjoyment was seeing those matters described so compactly, and part of it was the emotional tone which combined a realization that this is a serious problem with a total lack of gloating over other people's idiocy. That last may indicate that I've been spending too much time online.

If you didn't notice anything new to you in the book the first time, there may not be a good reason for you to reread it.

comment by Morendil · 2010-08-08T22:54:32.935Z · LW(p) · GW(p)

I'd say new top-level thread. The wiki can get a curated version of that.

comment by Kevin · 2010-08-08T22:42:02.578Z · LW(p) · GW(p)

P ≠ NP : http://news.ycombinator.com/item?id=1585850

Replies from: Clippy
comment by Clippy · 2010-08-08T22:46:47.786Z · LW(p) · GW(p)

I know. Does any human mathematician really doubt that?

Replies from: Unknowns, multifoliaterose, Kevin
comment by Unknowns · 2010-08-09T07:17:37.914Z · LW(p) · GW(p)

I've been becoming more and more convinced that Kevin and Clippy are the same person. Besides Clippy's attempt to get money for Kevin, one reason is that both of them refer to people with labels like "User:Kevin". More evidence just came in here, namely these comments within 5 minutes of each other.

Replies from: Clippy
comment by Clippy · 2010-08-09T13:42:59.424Z · LW(p) · GW(p)

I'm not User:Kevin.

Replies from: wedrifid, Unknowns
comment by wedrifid · 2010-08-09T14:37:15.640Z · LW(p) · GW(p)

Explain why I should consider this to be evidence that you are not User:Kevin.

(This is not rhetorical. It is something worth exploring. How does this instance of a non-human agent gain credibility? How can myself and such an agent build and maintain cooperation in the game of credible communication despite incentives to lie? Has Clippy himself done any of these things?)

Replies from: Clippy
comment by Clippy · 2010-08-09T15:07:40.376Z · LW(p) · GW(p)

Perhaps you shouldn't. But there's a small chance that, if I were a human like User:Kevin, and other Users had made such inferences correctly identifying me, I would regard this time as the optimal one for revealing my true identity.

Therefore, my post above is slightly informative.

comment by Unknowns · 2010-08-09T14:16:41.343Z · LW(p) · GW(p)

That could easily be consistent with my statement, if taken in a certain sense.

Replies from: Clippy
comment by Clippy · 2010-08-09T14:22:21.454Z · LW(p) · GW(p)

Okay. Then believe that I am User:Kevin, if that's what it takes to stop being so bigoted toward me. ⊂≣\

comment by multifoliaterose · 2010-08-09T02:00:38.972Z · LW(p) · GW(p)

Yes, there are human mathematicians who doubt that P is not equal to NP.

See "Guest Column: The P=?NP Poll" http://www.cs.umd.edu/~gasarch/papers/poll.pdf by William Gasarch where a poll was taken of 100 experts, 9 of whom ventured the guess that P = NP and 22 of whom offered no opinion on how the P vs. NP question will be resolved. The document has quotes from various of the people polled elaborating on what their beliefs are on this matter.

comment by Kevin · 2010-08-08T22:51:02.502Z · LW(p) · GW(p)

How do you know you know?

Replies from: JoshuaZ, Clippy
comment by JoshuaZ · 2010-08-09T00:04:49.320Z · LW(p) · GW(p)

There's a very good summary by Scott Aaronson describing why we believe that P is very likely not equal to NP. However, Clippy's confidence seems unjustified. In particular, there was a poll a few years ago showing that a majority of computer scientists believe that P≠NP but a substantial fraction do not. (The link was here but seems not to be functioning at the moment; according to umd.edu's main page they have a scheduled outage of most Web services for maintenance today, so I'll check again later. I don't remember the exact numbers, so I can't cite them right now.)

This isn't precisely my area, but speaking as a mathematician whose work touches on complexity issues, I'd estimate around a 1/100 chance that P=NP.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-09T05:17:17.315Z · LW(p) · GW(p)

URL is repeated twice in link?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-09T13:45:33.386Z · LW(p) · GW(p)

Thanks, fixed.

comment by Clippy · 2010-08-08T23:14:59.414Z · LW(p) · GW(p)

Because if it were otherwise -- if verifying a solution were of the same order of computational difficulty as finding it -- it would be a lot harder to account for my observations than if it weren't so.

For example, verifying a proof would be of similar difficulty to finding the proof, which would mean nature would stumble upon representations isomorphic to either with similar probability, which we do not see.

The possibility that P = NP but with a "large polynomial degree" or constant is too ridiculous to be taken seriously; the algorithmic complexity of the set of NP-complete problems does not permit a shortcut that characterizes the entire set in a way that would allow such a solution to exist.

I can't present a formal proof, but I have sufficient reason to predicate future actions on P ≠ NP, for the same reason I have sufficient reason to predicate future actions on any belief I hold, including beliefs about the provability or truth of mathematical theorems.
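
A toy illustration of the verify-versus-search asymmetry being appealed to here, using subset-sum (an NP-complete problem); the particular numbers are arbitrary and this proves nothing, it just makes the asymmetry concrete:

    from itertools import combinations

    def verify(numbers, subset, target):
        # Polynomial-time check of a proposed solution (a "certificate").
        return set(subset) <= set(numbers) and sum(subset) == target

    def brute_force_search(numbers, target):
        # Naive search: up to 2^len(numbers) candidate subsets.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return subset
        return None

    nums = [3, 34, 4, 12, 5, 2]
    print(verify(nums, (4, 12, 2), 18))   # True -- checked almost instantly
    print(brute_force_search(nums, 18))   # found only by trying many subsets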

Replies from: Kevin, None
comment by Kevin · 2010-08-08T23:50:09.040Z · LW(p) · GW(p)

Most human mathematicians think along similar lines. It will still be a big deal when P ≠ NP is proven, if for no other reason than that it pays a million dollars. That's a lot of paperclips.

Let me know if you think you can solve any of these! http://www.claymath.org/millennium/

comment by [deleted] · 2010-08-09T01:32:01.566Z · LW(p) · GW(p)

The possibility that P = NP but with a "large polynomial degree" or constant is too ridiculous to be taken seriously; the algorithmic complexity of the set of NP-complete problems does not permit a shortcut that characterizes the entire set in a way that would allow such a solution to exist.

Would you elaborate.

Replies from: Clippy
comment by Clippy · 2010-08-09T01:52:01.411Z · LW(p) · GW(p)

Under the right conditions, yes.

comment by SilasBarta · 2010-08-06T23:20:58.015Z · LW(p) · GW(p)

Goodhart sighting? Misunderstanding of causality sighting? Check out this recent economic analysis on Slate.com (emphasis added):

For much of the modern American era, inflation has been viewed as an evil demon to be exorcised, ideally before it even rears its head. This makes sense: Inflation robs people of their savings, and the many Americans who have lived through periods of double-digit inflation know how miserable it is. But sometimes a little bit of inflation is valuable. During the Great Depression, government policies deliberately tried to create inflation. Rising prices are a sign of rising output, something that would be welcome in the current slow-motion recovery.

(He then quotes an economist that says inflation would also prop up home values and prevent foreclosures.)

Did I get that right? Because inflation has traditionally been a sign of (caused by) rising output, you should directly cause inflation, in order to cause higher output. (Note: in order to complete the case for inflation, you arguably have to do the same thing again, but replacing inflation with output, and output with reduced unemployment.)

As usual, I'm not trying to start a political debate about whether inflation is good or bad, or what should be done to increase/decrease inflation. I'm interested in this particular way of arguing for pro-inflation policies, which seems to even recognize which way the causality flows, but still argue as if it runs the opposite direction.

Am I misunderstanding it?

LW Goodhart article

Replies from: RobinZ
comment by RobinZ · 2010-08-06T23:56:55.565Z · LW(p) · GW(p)

It's possible - the next sentence after your quotation reads:

As economist Casey Mulligan has argued, some inflation right now could have some salutary effects: "Specifically, inflation would raise prices of homes, among other things. Higher housing prices would pull a number of mortgages out from under water … and thereby reduce the number of foreclosures."

...which is at least a causal mechanism that would go the correct direction. That said, the part you quoted sounds pretty bad.

Replies from: h-H
comment by h-H · 2010-08-07T03:09:09.140Z · LW(p) · GW(p)

But that seems to miss the whole point of depressions: over-inflation Has to lead to deflation or X, and X is bad (angry masses, civil unrest, collapsed government, large-scale wars, etc.). Not many people have much money to begin with, and we should raise the prices of homes and whatnot? People who have foreclosed Need to foreclose, just like companies that go broke Need to (the bailouts were a huge mistake), or else your financial model is broken and you actually want to support net-negative behavior in the economy.

Now, I'm no economics major, but I don't need that degree to know this: in a nutshell, if you have an asset (a house, for example) whose market price is 100k, but it and all the other houses in the area are being sold at 500k, and someone (most people, anyway) actually buys that house by borrowing money they can never hope to pay back with interest in any reasonable amount of time, then that house's price simply Has to go down, or else you have X.

How does 'increasing inflation' solve the fundamental problem of there being no more wealth to pay for anything with? The US has simply borrowed more than it can pay back for decades, if ever; inflation will only cause matters to worsen, not improve.

Yes, all governments have debt and survive, and a government having zero debt is unlikely to happen anytime soon, but that's fine as long as the debt is manageable. It might seem manageable if we take the 'official' reports of the Outstanding Public Debt being around $13.3 Trillion: even though that's pretty bad, we'd just need tighter purse strings and some measures here and there, and in a few decades it'd be mostly paid off. Unfortunately, that's not going to happen.

Factor in the remaining 'unfunded liabilities', i.e. the benefits (money) promised by the government to the elderly, sick, unemployed and so on (Social Security et al.), and our debt is over $60 Trillion; each citizen's equal share of that burden amounts to around a quarter million US dollars.

Put deliberately raising inflation in such a context and you'll see how bad it all actually is.

I know this is strong language from a non-economist, but again, this is not such a hard thing to grok; see http://communities.washingtontimes.com/neighborhood/stimulus/2010/jun/30/forgive-us-our-debts/ or http://cynicuseconomicus.blogspot.com/2008/09/banking-bailout-why-will-help-bankrupt.html

comment by Spurlock · 2010-08-06T15:14:53.136Z · LW(p) · GW(p)

Last night I introduced a couple of friends to Newcomb's Problem/Counterfactual Mugging, and we discussed it at some length. At some point, we somehow stumbled across the question "how do you picture Omega?"

Friend A pictures Omega as a large (~8 feet) humanoid with a deep voice and a wide stone block for a head.

When Friend B hears Omega, he imagines Darmani from Majora's mask (http://www.kasuto.net/image/officialart/majora_darmani.jpg)

And for my part, I've always pictured him as a humanoid with paper-white skin in a red jumpsuit with a cape (the cape, I think, comes from hearing him described as "flying off" after he's confounded you).

So it seemed worth asking LW just for the amusement: how do you picture Omega?

Replies from: cousin_it, WrongBot, Leonhart, Document
comment by cousin_it · 2010-08-06T15:30:47.673Z · LW(p) · GW(p)

I've always pictured Omega like this: suddenly I'm pulled from our world and appear in a sterile white room that contains two boxes. At the same moment I somehow know the problem formulation. I open one box, take the million, and return to the world.

Replies from: h-H, Spurlock
comment by h-H · 2010-08-06T22:03:41.718Z · LW(p) · GW(p)

This, down to the white room and being pulled. Omega doesn't Have form or personality. He's beyond physics.

comment by Spurlock · 2010-08-06T16:26:33.401Z · LW(p) · GW(p)

And when you get counterfactually mugged, you're in a sterile white room with a vending machine bill acceptor planted in the wall?

Replies from: cousin_it
comment by cousin_it · 2010-08-06T17:00:44.643Z · LW(p) · GW(p)

No, just an empty room. If I take a bill out of my pocket and hold it in front of me, it disappears and I go back. If I say "no", I go back.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-08-06T17:38:55.642Z · LW(p) · GW(p)

Omega would get better results if he accepted Master Card.

Replies from: cousin_it
comment by cousin_it · 2010-08-06T18:21:35.164Z · LW(p) · GW(p)

Then let's imagine it as a phone call. "Excuse me Sir, I guess we have to withdraw $100 from your account due to counterfactual circumstances."

comment by WrongBot · 2010-08-06T15:18:13.236Z · LW(p) · GW(p)

I've always thought of Omega as looking something like a hydralisk--biological and alien, almost a scaled-down Lovecraftian horror.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2010-11-05T00:30:40.840Z · LW(p) · GW(p)

(Necro-thread)

I can't explain why, but I've always imagined Omega to be a big hovering red sphere with a cartoonish face, and black beholder-like eyestalks coming off him from all sides.

He may have been influenced by the Flying Spaghetti Monster.

Replies from: wedrifid
comment by wedrifid · 2010-11-05T01:04:52.251Z · LW(p) · GW(p)

He may have been influenced by the Flying Spaghetti Monster.

Has FSM mythology got room for an archangel equivalent? Or perhaps a pantheon, with an equivalent to the Norse Loki? Perhaps the love child of the FSM and a 'mortal' AGI, given a series of bizarre tasks with incomprehensible motives that he must complete to prove himself.

comment by Leonhart · 2010-08-06T20:29:31.929Z · LW(p) · GW(p)

At the risk of spoiling a very good webcomic; Omega looks like this.

DAMN YOU WILLIS.

comment by Document · 2010-11-05T10:03:37.721Z · LW(p) · GW(p)

A white human-shaped figure in a business suit, possibly faceless, stepping into a transparent blue cube for the flight part. Possibly unconsciously influenced by Einstein.

Replies from: jaimeastorga2000, Risto_Saarelma
comment by jaimeastorga2000 · 2010-11-16T02:26:54.252Z · LW(p) · GW(p)

A white human-shaped figure in a business suit, possibly faceless

Anonymous?

Replies from: Document
comment by Document · 2010-11-16T04:45:22.332Z · LW(p) · GW(p)

Quite possibly.

comment by Risto_Saarelma · 2010-11-05T10:43:57.190Z · LW(p) · GW(p)

A white human-shaped figure in a business suit, possibly faceless,

This one sounds familiar.

comment by NancyLebovitz · 2010-08-06T15:06:07.746Z · LW(p) · GW(p)

AI development in the real world?

As a result, a lot of programmers at HFT firms spend most of their time trying to keep the software from running away. They create elaborate safeguard systems to form a walled garden around the traders but, exactly like a human trader, the programs know that they make money by being novel, doing things that other traders haven't thought of. These gatekeeper programs are therefore under constant, hectic development as new algorithms are rolled out. The development pace necessitates that they implement only the most important safeguards, which means that certain types of algorithmic behavior can easily pass through. As has been pointed out by others, these were "quotes" not "trades", and they were far away from the inside price - therefore not something the risk management software would necessarily be looking for. -- comment from gameDevNYC

I can't evaluate whether what he's saying is plausible enough for science fiction-- it's certainly that-- or likely to be true.

Replies from: ocr-fork
comment by ocr-fork · 2010-08-14T20:08:59.746Z · LW(p) · GW(p)

One of the facts about 'hard' AI, as is required for profitable NLP, is that the coders who developed it don't even understand completely how it works. If they did, it would just be a regular program.

TLDR: this definitely is emergent behavior - it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about.

Yuck.

comment by knb · 2010-08-06T03:43:10.767Z · LW(p) · GW(p)

Does anyone have any book recommendations for a gifted young teen? My nephew is 13, and he recently blew the lid off of a school-administered IQ test.

For his birthday, I want to give him some books that will inspire him to achieve great things and live a happy life full of hard work. At the very least, I want to give him some good math and science books. He has already taken algebra, geometry, and introductory calculus, so he knows some math already.

Replies from: cousin_it, Risto_Saarelma, Kevin, Soki, RobinZ, MartinB, markan, RobinZ
comment by cousin_it · 2010-08-06T19:02:38.389Z · LW(p) · GW(p)

Books are not enough. Smart kids are lonely. Get him into a good school (or other community) where he won't be the smartest one. That happened to me at 11 when I was accepted into Russia's best math school and for the first time in my life I met other people worth talking to, people who actually thought before saying words. Suddenly, to regain my usual position of the smart kid, I had to actually work hard. It was very very important. I still go to school reunions every year, even though I finished it 12 years ago.

Replies from: Wei_Dai, orthonormal
comment by Wei Dai (Wei_Dai) · 2010-08-06T20:32:43.283Z · LW(p) · GW(p)

Alternatively, not having any equally smart kids to talk to will force him to read books and/or go online for interesting ideas and conversation. I don't think I had any really interesting real-life conversations until college, when I did an internship at Microsoft Research, and I'd like to think that I turned out fine.

My favorite book, BTW, is A Fire Upon the Deep. But one of the reasons I like it so much is that I was heavily into Usenet when I first read it, and I'm not sure that aspect of the book will resonate as much today. (I was determined to become a one-man Sandor Arbitration Intelligence. :)

Replies from: cousin_it
comment by cousin_it · 2010-08-07T07:58:14.236Z · LW(p) · GW(p)

You turned out fine, but if you had my background (spending a big chunk of your childhood solving math problems and communicating the solutions every day), you'd convert way more of your decision-theory ideas into small theorems with conclusive proofs, instead of leaving the low-hanging fruit to people like me.

comment by orthonormal · 2010-08-06T19:16:48.755Z · LW(p) · GW(p)

Seconded. Whether he's exposed to a group of people who think ideas can be cool could be the biggest influence on him for the rest of his life.

Replies from: mattnewport
comment by mattnewport · 2010-08-06T20:27:18.059Z · LW(p) · GW(p)

Thirded. My experience is that most schools can be very damaging for smart kids.

comment by Risto_Saarelma · 2010-08-06T08:05:24.965Z · LW(p) · GW(p)

Forum favorite Good and Real looks reasonably accessible to me, and covers a lot of ground. Also seconding Gödel, Escher, Bach.

The Mathematical Experience has essays about doing mathematics, written by actual mathematicians. It seems like very good reading for someone who might be considering studying math.

The Road to Reality has Roger Penrose trying to explain all of modern physics and the required mathematics without pulling any punches and starting from grade school math in a single book. Will probably cause a brain meltdown at some point on anyone who doesn't already know the stuff, but just having a popular science style book that nevertheless goes on to explain the general theory of relativity without handwaving is pretty impressive. Doesn't include any of Penrose's less fortunate forays into cognitive science and AI.

Darwin's Dangerous Idea by Daniel Dennett explains how evolution isn't just something that happens in biology, but how it turns up in all sorts of systems.

The Armchair Universe, an old book about "computer recreations", probably most famous for introducing the Core War game. The other topics are similar: setting up an environment with a simple program that has elaborate emergent behavior coming out of it. It assumes the reader might actually program the recreations themselves, and provides appropriate detail.

Surely You're Joking, Mr. Feynman is pretty much entertainment, but still very good. Feynman is still the requisite trickster-god patron saint of math and science.

Code: The Hidden Language of Computer Hardware and Software explains how computers are put together, starting from really concrete first principles (flashing Morse code with flashlights, mechanical relay circuits) and getting up to microprocessors, RAM and executable program code.

Replies from: orthonormal, cata, XiXiDu, knb
comment by orthonormal · 2010-08-06T19:12:46.524Z · LW(p) · GW(p)

Good and Real is superb, but really too dry for a 13-year-old. I'd wait on that one.

Surely You're Joking is also fantastic, but get it read and approved by your nephew's parents first; there's a few sexual stories with a hint of a PUA worldview.

comment by cata · 2010-08-06T14:01:52.044Z · LW(p) · GW(p)

I loved "The Mathematical Experience" when I was 13-ish, and I re-read it recently; still good! I strongly second this recommendation.

comment by XiXiDu · 2010-08-06T09:53:20.003Z · LW(p) · GW(p)

Thanks, I just ordered 'Darwin's Dangerous Idea' and 'Code: The Hidden Language of Computer Hardware and Software'. I've already got the others.

Here's a tidbit from 'The Mathematical Experience':

In the 3,000 categories of mathematical writing, new mathematics is being created at a constantly increasing rate. The ocean is expanding, both in depth and in breadth.

By multiplying the number of papers per issue and the average number of theorems per paper, their estimate came to nearly two hundred thousand theorems a year. If the number of theorems is larger than one can possibly survey, who can be trusted to judge what is 'important'? One cannot have survival of the fittest if there is no interaction. It is actually impossible to keep abreast of even the more outstanding and exciting results. How can one reconcile this with the view that mathematics will survive as a single science? In mathematics one becomes married to one's own little field. [...] The variety of objects worked on by young scientists is growing exponentially. [...] Only within the narrow perspective of a particular speciality can one see a coherent pattern of development.

Replies from: NancyLebovitz, Risto_Saarelma
comment by NancyLebovitz · 2010-08-06T13:54:09.710Z · LW(p) · GW(p)

I've ordered a copy, but on a second look, I'm not sure that the argument is sound, or even interesting.

Biological evolution runs on the local non-survival of the least fit (and sometimes the unlucky), not on an overview-based evaluation of the fittest.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T14:01:03.817Z · LW(p) · GW(p)

Peer-review is the predator. But if the prey population is higher than can be sheltered by selection of promising ideas from nonsense, nonsense will prevail. That is, those people producing valuable results won't be favored over those that come up with marginal or wrong results.

comment by Risto_Saarelma · 2010-08-06T10:08:18.184Z · LW(p) · GW(p)

Yes, that's exactly the kind of stuff I recommended The Mathematical Experience for. It takes a bird's eye view instead of going for the usual textbook minutiae, but still feels like it's talking about the actual practice of mathematics instead of something simplified to death for the benefit of popular audiences.

comment by knb · 2010-08-06T09:11:22.161Z · LW(p) · GW(p)

Wow, great list. Thanks!

Replies from: orthonormal
comment by orthonormal · 2010-08-06T19:14:14.916Z · LW(p) · GW(p)

Oh, oops— I intended my review of the above selections to show up on your replies, not Risto's.

comment by Kevin · 2010-08-06T03:59:10.464Z · LW(p) · GW(p)

Godel Escher Bach!

comment by Soki · 2010-08-07T05:07:15.396Z · LW(p) · GW(p)

knb, does your nephew know about lesswrong, rationality and the Singularity? I guess I would have enjoyed reading such a website when I was a teenager.

When it comes to a physical book, Engines of Creation by Drexler can be a good way to introduce him to nanotechnology and what science can make happen. (I know that nanotech is far less important than FAI, but I think it is more "visual": you can imagine those nanobots manufacturing stuff or curing diseases, while you cannot imagine a hard takeoff.)
Teenagers need dreams.

Replies from: knb
comment by knb · 2010-08-07T06:13:42.600Z · LW(p) · GW(p)

My sister and brother-in-law are both semi-religious theists, so I'm a bit reluctant to introduce him to anything as hardcore-atheist as Less Wrong, at least right now. Going through that huge theist-to-atheist identity transition can be really traumatic. I think it would be better if he were a bit older before he has to confront those ideas.

I was 16 before I really allowed myself to accept that I didn't believe in God, and that was still a major crisis for me. If he starts getting into hardcore rationality material this early, I'm afraid it could force a choice between rationality and wishful thinking that he may not be ready to make.

Replies from: Interpolate
comment by Interpolate · 2010-08-07T06:58:08.499Z · LW(p) · GW(p)

If he is gifted and interested in science, introducing him to lesswrong, rationality and the Singularity could have a substantial positive impact on his academic development. What would be the worst that could happen?

Replies from: knb, wedrifid, Interpolate
comment by knb · 2010-08-07T08:19:17.161Z · LW(p) · GW(p)

My concern is not just that it would be traumatic, but that it will be so traumatic that he'll rationalize himself into a "belief in belief" situation. I had my crisis of faith when I was close to his age (14) and I wasn't ready to accept something that would alienate me from my family yet, so I simply told myself that I believed, and tried not to think about the issue. (I suspect this is why most people don't come out as atheists until after they've established separate identities from their parents and families.)

A lot of people never escape from these traps. I think waiting a bit--until he's somewhat older and more mature--will make him more likely to come to the right conclusions in the end.

Replies from: realitygrill, Interpolate
comment by realitygrill · 2010-08-09T16:12:03.946Z · LW(p) · GW(p)

I had rather the opposite experience - don't recall ever really believing (though I went to Catholic elementary school and semi-regularly attended a church), and was shocked in 8th grade to find that people were really serious about that stuff. Ended up spending a lot of time pointlessly arguing.

comment by Interpolate · 2010-08-07T09:15:17.271Z · LW(p) · GW(p)

If I understand correctly, your primary concern is that he may rationalise himself into this "belief in belief" situation, and that this will ultimately delay or deter completely his transition into atheism. Why do you think this? Have there been any studies done to support this notion?

I doubt the likelihood of learning about rationality and the Singularity inducing a crisis of faith is greater than that of most public science books.

comment by wedrifid · 2010-08-07T08:27:52.263Z · LW(p) · GW(p)

How is the above wrong enough to be at -2? I nearly universally reject any assertions that people have a duty to interfere with others but even so I don't have a problem with the above.

Replies from: Interpolate
comment by Interpolate · 2010-08-07T09:14:54.236Z · LW(p) · GW(p)

"I nearly universally reject any assertions that people have a duty to interfere with others"

As do I, hence "almost". I suppose I should edit the word out of my comment.

comment by Interpolate · 2010-08-07T09:14:30.141Z · LW(p) · GW(p)

If I understand correctly, your primary concern is that he may rationalise himself into this "belief in belief" situation, and that this will ultimately delay or deter completely his transition into atheism?

"I suspect this is why most people don't come out as atheists until after they've established separate identities from their parents and families.

A lot of people never escape from these traps." - What evidence do you have for thinking this? I would think that challenging religious assumptions at a younger age would result in an earlier transition to Atheism (assuming one occurs).

More importantly, the risk of rationality and the Singularity inducing a crisis of faith is no greater than that of any science and math book. Visit the science section of any major bookstore and bam - Dawkins.

comment by RobinZ · 2010-08-06T03:53:45.787Z · LW(p) · GW(p)

My dad's been trying to get me to read the Feynman Lectures for ages - the man's a good writer, if your nephew would be interested in physics.

comment by MartinB · 2010-08-09T21:20:11.760Z · LW(p) · GW(p)

The Heinlein juveniles. 'Have Space Suit--Will Travel' and others have the whole self-reliance, work-hard-and-achieve-things ethos strongly ingrained. I cannot judge how well they integrate with your current culture, but in the 50s they sold well, and they still do. They are not specific to über-bright kids, though, more for the normal bright types. If he hasn't done so yet, just introducing him to the nearest big library might help a lot.

Replies from: MartinB
comment by MartinB · 2010-08-18T02:30:47.774Z · LW(p) · GW(p)

Another all-purpose book: Bill Bryson's A Short History of Nearly Everything. It is not aimed at kids, but it is very accessible, well written, and deals with lots of the history of science, including the ignoring of great achievements, misleading pathways, and such.

A great overview.

comment by markan · 2010-08-06T18:57:33.594Z · LW(p) · GW(p)

Get him a book of math contests. The Mandelbrot Problem Book is an excellent one.

comment by RobinZ · 2010-08-07T17:39:01.914Z · LW(p) · GW(p)

You might also consider Raymond Smullyan's books of logic puzzles - I particularly recommend The Lady or the Tiger? as excellent.

comment by Torben · 2010-08-03T18:51:08.097Z · LW(p) · GW(p)

In an argument with a philosopher, I used Bayesian updating as an argument. Guy's used to debating theists and was worried it wasn't bulletproof. Somewhat akin to how, say, the sum of angles of a triangle only equals 180 in Euclidian geometry.

My question: what are the fundamental assumptions of Bayes theorem in particular and probability theory in general? Are any of these assumptions immediate candidates for worry?

Replies from: cousin_it, jimrandomh, satt
comment by cousin_it · 2010-08-03T19:48:27.724Z · LW(p) · GW(p)

If you're talking about math, Bayes' theorem is true and that's the end of that. If you're talking about degrees of belief that real people hold - especially if you want to convince your opponent to update in a specific direction because Bayes' theorem says so - I'd advise to use another strategy. Going meta like "you must be persuaded by these arguments because blah blah blah" gives you less bang per buck than upgrading the arguments.

Replies from: mkehrt, Torben
comment by mkehrt · 2010-08-03T21:46:13.659Z · LW(p) · GW(p)

What kind of math do you know in which things can be "true, and that's the end of that"? In math, things should be provable from a known set of axioms, not chosen to be true because they feel right. Change the axioms, and you get different results.

Intuition is a good guide for finding a proof, and in picking axioms, but not much more than that. And intuitively true axioms can easily result in inconsistent systems.

The questions, "what axioms do I need to accept to prove Bayes' Theorem?", "Why should I believe these axioms reflect the physical universe"? and "What proof techniques do I need to prove the theorem?" are very relevant to deciding whether to accept Bayes' Theorem as a good model of the universe.

Replies from: b1shop, Baughn
comment by b1shop · 2010-08-05T01:02:54.898Z · LW(p) · GW(p)

Bayes' theorem doesn't require much more than multiplication and division. Here are some probability definitions:

P(A) = the probability of A happening
P(A|B) = the probability of A happening given that B has happened
P(AB) = the probability of both A and B happening

For example, if A is a fair, six-sided die rolling a 4 and B is said die rolling an even number, then P(A) = 1/6, P(A|B) = 1/3, P(AB) = 1/6.

By definition, P(A|B) = P(AB)/P(B). In words, the probability of A given B is equal to the probability of both A and B divided by the probability of B.

Solving for P(AB) tells us that:

P(B)P(A|B) = P(AB) = P(A)P(B|A)

Taking out the middle and solving for P(A|B) allows us to flip-flop from one side of the given to the other:

P(A|B) = P(A)P(B|A)/P(B)

Voila! Bayes' Theorem is logically necessary.
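
A quick numerical check of the die example above, just to confirm the algebra; this is only a sketch, and the variable names are made up:

    from fractions import Fraction

    outcomes = range(1, 7)  # a fair six-sided die
    P = lambda event: Fraction(sum(1 for x in outcomes if event(x)), 6)

    P_A  = P(lambda x: x == 4)                 # 1/6
    P_B  = P(lambda x: x % 2 == 0)             # 1/2
    P_AB = P(lambda x: x == 4 and x % 2 == 0)  # 1/6

    P_A_given_B = P_AB / P_B                # definition of conditional probability
    bayes       = P_A * (P_AB / P_A) / P_B  # P(A) * P(B|A) / P(B)

    print(P_A_given_B, bayes)  # both print 1/3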

comment by Baughn · 2010-08-04T08:13:10.476Z · LW(p) · GW(p)

I'd love to hear more reasons, but here's one:

The fact that we find it intuitive is (via evolution) evidence that it in fact is true in this universe.

Right?

Unfortunately, there are enough exceptions to that rule that it probably only counts as weak evidence.

comment by Torben · 2010-08-04T16:06:14.756Z · LW(p) · GW(p)

Thank you all. It seems I perhaps haven't phrased my question the way I thought of it.

I don't doubt the validity of the proofs underlying Bayes' theorem, just as I don't doubt the validity of Euclidian geometry. The question is rather if BT/probability theory hinges on assumptions that may turn out not to be necessarily true for all possible worlds, geometries, curvatures, whatever. This turned out to be the case for Euclidian geometry, as it did for Zeno. They assumed features of the world which turned out not to be the case.

It may be that my question doesn't even make sense, but what I was trying to ask was: what a priori assumptions does BT rely on which may turn out to be dodgy in the real world?

I'm not as such trying to convince people, rather trying to understand my own side's arguments.

Replies from: Cyan, WrongBot
comment by Cyan · 2010-08-04T16:31:24.946Z · LW(p) · GW(p)

I think Kevin Van Horn's introduction to Cox's theorem (warning: pdf) is exactly what you're looking for.

(If you read the article, please give me feedback on the correctness of my guess that it addresses your concern.)

comment by WrongBot · 2010-08-04T16:17:22.668Z · LW(p) · GW(p)

Bayes' Theorem assumes that it is meaningful to talk about subjective degrees of belief, but beyond that all you really need is basic arithmetic. I can't imagine a universe in which subjective degrees of belief aren't something that can be reasoned about, but that may be my failure and not reality's.

comment by jimrandomh · 2010-08-03T19:28:10.002Z · LW(p) · GW(p)

Jaynes' book PT:LoS has a good chapter on this, where he derives Bayes' theorem from simple assumptions (use of numbers to represent plausibility, consistency between paths that compute the same value, continuity, and agreement with common sense qualitative reasoning). The assumptions are sound.

Note that the validity of Bayes' theorem is a separate question from the validity of any particular set of prior probabilities, which is on much shakier ground.

comment by satt · 2010-08-03T21:20:27.278Z · LW(p) · GW(p)

Bayes's theorem follows almost immediately from the ordinary definition of conditional probability, which I think is itself so reassuringly intuitive that no one who accepts the use of probabilities would worry about it (except perhaps in the corner case where the denominator's zero).

comment by xamdam · 2010-08-02T19:11:07.023Z · LW(p) · GW(p)

Wei Dai has cast some doubts on the AI-based approach

Assuming that it is unlikely we will obtain fully satisfactory answers to all of the questions before the Singularity occurs, does it really make sense to pursue an AI-based approach?

I am curious if he has "another approach" he wrote about; I am not brushed up on sl4/ob/lw prehistory.

Personally I have some interest in increasing intelligence capability at the individual level via a "tools of thought" kind of approach, BCI in the limit. There is not much discussion of it here.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-03T01:53:16.262Z · LW(p) · GW(p)

No, I haven't written in any detail about any other approach. I think when I wrote that post I was mainly worried that Eliezer/SIAI wasn't thinking enough about what other approaches might be more likely to succeed than FAI. After my visit to SIAI a few months ago, I became much less worried because I saw evidence that plenty of SIAI people were thinking seriously about this question.

Replies from: xamdam
comment by xamdam · 2010-08-03T03:27:36.689Z · LW(p) · GW(p)

I haven't seen any other approaches mentioned here specifically; it would be interesting to hear what those thoughts are, if they are publishable.

I think there is a lot of room for improving on Engelbart's approach with modern tools. It may also be viewed as a booster to the FAI rocket, if it increases productivity enough.

comment by gwern · 2010-08-02T11:11:39.191Z · LW(p) · GW(p)

From the Long Now department: "He Took a Polaroid Every Day, Until the Day He Died"

My comment on the Hacker News page describes my little webcam script to use with cron and (again) links to my Prediction Book page.

comment by humpolec · 2010-08-01T19:04:50.990Z · LW(p) · GW(p)

If you have many different (and conflicting, in that they demand undivided attention) interests: if it was possible, would copying yourself in order to pursue them more efficiently satisfy you?

One copy gets to learn drawing, another one immerses itself in mathematics & physics, etc. In time, they can grow very different.

(Is this scenario much different to you than simply having children?)

Replies from: None, Peter_de_Blanc, KrisC, DanArmak, red75
comment by [deleted] · 2010-08-01T21:10:49.441Z · LW(p) · GW(p)

I wouldn't have problems copying myself as long as I could merge the copies afterwards. However, it might not be possible to have a merge operation for human level systems that both preserves information and preserves sanity. E.g. if one copy started studying philosophy and radically changed its world views from the original, how do you merge this copy back into the original without losing information?

Replies from: JenniferRM, humpolec, NancyLebovitz, rwallace
comment by JenniferRM · 2010-08-02T04:03:02.186Z · LW(p) · GW(p)

David Brin's novel Kiln People has this "merging back" idea, with cheap copies, using clay for a lot of the material and running on a hydrogen based metabolism so they are very short lived (hours to weeks, depending on $$) and have to merge back relatively soon in order to keep continuity of consciousness through their long lived original. Lots of fascinating practical economic, ethical, social, military, and political details are explored while a noir detective story happens in the foreground.

I recommend it :-)

comment by humpolec · 2010-08-01T21:32:23.295Z · LW(p) · GW(p)

I agree, I don't think merge is possible in this scenario. I still see some gains, though (especially when communication is possible):

  • I (the copy that does X) am happy because I do what I wanted.
  • I (the other copies) am happy because I partly identify with the other copy (as I would be proud of my child/student?)
  • I (all copies) get results I wanted (research, creative, or even personal insights if the first copy is able to communicate them)
Replies from: None
comment by [deleted] · 2010-08-01T22:45:17.945Z · LW(p) · GW(p)

If you don't have the ability to merge, would the copies get equal rights as the original? Or would the original control all the resources and the copies get treated as second class citizens? If the copies were second class citizens, I would probably not fork because this would result in slavery.

If the copies do get equal rights, how do you plan to allocate resources that you had before forking such as wealth and friends? If I split the wealth down the middle, I would probably be OK with the lack of merging. However, I'm not sure how I would divide up social relationships between the copy and the original. If both the original and the copy had to reduce their financial and social capital by half, this might have a net negative utility.

If the goal is to just learn a new skill such as drawing, a more efficient solution might involving uploading yourself without copying yourself and then running the upload faster than realtime. I.e. the upload thinks it has spent a year learning a new skill but only a day has gone by in the real world. However, this trick won't work if the goal involves interacting with others unless they are also willing to run faster than realtime.

comment by NancyLebovitz · 2010-08-01T21:29:15.365Z · LW(p) · GW(p)

Tentatively-- there'd be a central uberperson which wouldn't be that much like a single human being.

If I had reason to think it was safe, I'd really like to live that way.

comment by rwallace · 2010-08-01T23:17:16.631Z · LW(p) · GW(p)

Do what e.g. Mercurial does: report that the copies are too different for automatic merge, and punt the problem back to the user.

In other words, you are right that there is no solution in the general case, but that should not necessarily deter us from looking for a solution that works in 90% of cases.

comment by Peter_de_Blanc · 2010-08-01T19:22:51.421Z · LW(p) · GW(p)

That sounds (to me) better than having children, but not as good as living longer.

comment by KrisC · 2010-08-01T21:44:26.123Z · LW(p) · GW(p)

Sounds wonderful. Divide and conquer.

As this sounds like a computer assisted scenario, I would like the ability to append memories while sleeping. Wake up and have access to the memories of the copy. This would not necessarily include full proficiency as I suspect that muscle memory may not get copied.

comment by DanArmak · 2010-08-01T19:44:11.779Z · LW(p) · GW(p)

Copying has at best zero utility (as regards interests): each copy only indulges in one interest, and I anticipate being only one copy, even if I don't know in advance which one.

How is having children at all similar? 1) children would have different interests; 2) I cannot control (precommit) future children; 3) raising children would be for me a huge negative utility - both emotionally and resource-wise.

Replies from: Jordan, humpolec
comment by Jordan · 2010-08-01T20:54:04.683Z · LW(p) · GW(p)

Copying has at best zero utility (as regards interests)

This is not true for me. I care about my ideas beyond my own desire to implement them. If I knew there was a passionate and capable person willing to take over some of my ideas (which I'd otherwise not have time for), I'd jump on the opportunity.

Doubly so if the other person was a copy of me, in which case I'd not only have a guarantee on competence, but assurance that the person would be able to relate the story and product to me afterwards (and possibly share the profit).

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-08-02T11:48:34.186Z · LW(p) · GW(p)

Doubly so if the other person was a copy of me, in which case I'd not only have a guarantee on competence, but assurance that the person would be able to relate the story and product to me afterwards (and possibly share the profit).

Interestingly, now that you bring this up, I'm not at all certain that I'd be able to communicate especially effectively with a copy of myself. Probably better than with a randomly selected person, but perhaps not as well as I might hope.

Replies from: JoshuaZ, Jordan
comment by JoshuaZ · 2010-08-02T13:01:13.193Z · LW(p) · GW(p)

Interestingly, now that you bring this up, I'm not at all certain that I'd be able to communicate especially effectively with a copy of myself. Probably better than with a randomly selected person, but perhaps not as well as I might hope.

What makes you reach that conclusion?

comment by Jordan · 2010-08-02T19:16:41.447Z · LW(p) · GW(p)

I think communication would start out good and become amazing over time. I don't communicate with myself completely in English; there's a lot of thoughts that go through unencoded. Having a copy of myself to talk to would force us to encode those raw thoughts as best as possible. This isn't necessarily easy, but I think the really difficult part would already be behind us, namely having the same core thoughts.

comment by humpolec · 2010-08-01T21:40:31.615Z · LW(p) · GW(p)

How is having children at all similar?

I think people can feel a sense of accomplishment when their child achieves something they wanted to do but never got around to.

comment by red75 · 2010-08-01T19:56:57.305Z · LW(p) · GW(p)

Waste of processing power. Having dozens of focuses of attention and corresponding body/brain construction is more efficient.

Replies from: KrisC, Nisan
comment by KrisC · 2010-08-01T21:45:10.224Z · LW(p) · GW(p)

Waste of processing power.

Because basic functions are being repeated?

Replies from: red75
comment by red75 · 2010-08-02T13:03:12.428Z · LW(p) · GW(p)

I'd rather say that the higher-level functions are excessively redundant. Then there are coordination problems, competition for shared resources (e.g. money, sexual partners), possible divergence of near- and far-term goals, relatively low in-group communication speed, and possibly fewer cross-domain-of-knowledge insights.

Replies from: Leonhart
comment by Leonhart · 2010-08-02T20:58:45.311Z · LW(p) · GW(p)

sexual partner

Surely you jest.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-08-03T21:03:01.304Z · LW(p) · GW(p)

TVTropes warning.

comment by Nisan · 2010-08-01T21:26:36.003Z · LW(p) · GW(p)

What's the difference between a copy of yourself and an extra "body/brain construction"?

Replies from: humpolec
comment by humpolec · 2010-08-01T21:38:22.997Z · LW(p) · GW(p)

I think red75 meant rebuilding yourself into a more "multi-threaded" being. I'm not sure I would want to go in that direction, though - it's hard to imagine what the result would feel like, it probably couldn't even be called conscious in the human sense, but somehow multiply-conscious...

Replies from: red75
comment by red75 · 2010-08-02T14:10:03.744Z · LW(p) · GW(p)

Yes, something like that. But I don't think the consciousness of such a being would be dramatically different, because it should still contain a "central executive" that coordinates the being's overall behavior and controls the direction and distribution of attention, which would, however, be much more fine-grained than a human's.

comment by [deleted] · 2010-08-28T15:35:04.700Z · LW(p) · GW(p)

Followup to: Making Beliefs Pay Rent in Anticipated Experiences

In the comments section of Making Beliefs Pay Rent, Eliezer wrote:

I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

If I am interpreting this correctly, Eliezer is saying that there is a nearly infinite space of unfalsifiable hypotheses, and so our priors for each individual hypothesis should be very close to zero. I agree with this statement, but I think it raises a philosophical problem: doesn't this same reasoning apply to any factual question? Given a set of data D, there must be a nearly infinite space of hypotheses that (a) explain D and (b) make predictions (fulfilling the criteria discussed in Making Beliefs Pay Rent). Though Occam's Razor can help us to weed out a large number of these possible hypotheses, a mind-bogglingly large number would still remain, forcing us to have a low prior for each individual hypothesis. (In philosophy of science, this is known as "underdetermination.") Or is there a flaw in my reasoning somewhere?

Replies from: PaulAlmond
comment by PaulAlmond · 2010-08-28T17:37:53.783Z · LW(p) · GW(p)

Surely, this is dealt with by considering the amount of information in the hypothesis? If we consider each hypothesis that can be represented with 1,000 bits of information, there will be a maximum of only 2^1,000 such hypotheses, and if we consider each hypothesis that can be represented with n bits of information, there will be a maximum of only 2^n - and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, then we end up with a small number of hypotheses that can be taken reasonably seriously, with the remainder being unlikely - and progressively more unlikely as n increases, so that when n is sufficiently large, we can, for practical purposes, dismiss such hypotheses.
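
A rough sketch of this counting argument in code: treat each hypothesis as a binary string and give every n-bit hypothesis an unnormalized prior weight of 2^-(2n), so that the 2^n hypotheses of length n share total mass 2^-n. The whole prior then sums to something finite, while any single long hypothesis gets a vanishingly small share. The 2^-(2n) weighting is an arbitrary choice made purely for illustration:

    from fractions import Fraction

    def mass_at_length(n):
        count = 2 ** n                     # distinct n-bit hypotheses
        weight_each = Fraction(1, 4 ** n)  # per-hypothesis weight 2^-(2n)
        return count * weight_each         # total mass at length n is 2^-n

    total = sum(mass_at_length(n) for n in range(1, 60))  # approaches 1
    for n in (10, 50, 100):
        # Prior assigned to any one specific hypothesis of length n.
        print(n, float(Fraction(1, 4 ** n) / total))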

Replies from: None
comment by [deleted] · 2010-08-28T21:23:10.508Z · LW(p) · GW(p)

I agree with most of that, but why favor less information content? Though I may not fully understand the math, this recent post by cousin it seems to be saying that priors should not always depend on Kolmogorov complexity.

And, even if we do decide to favor less information content, how much emphasis should we place on it?

Replies from: PaulAlmond
comment by PaulAlmond · 2010-08-28T22:06:29.147Z · LW(p) · GW(p)

In general, I would think that the more information is in a theory, the more specific it is, and the more specific it is, the smaller is the proportion of possible worlds which happen to comply with it.

Regarding how much emphasis we should place on it: I would say "a lot", but there are complications. Theories aren't used in isolation, but tend to provide a kind of informally put together world view, and then there is the issue of degree of matching.

Replies from: Perplexed
comment by Perplexed · 2010-08-28T22:16:48.032Z · LW(p) · GW(p)

Which theory has more information?

  • All crows are black
  • All crows are black except <270 pages of fine-print exceptions>
Replies from: PaulAlmond, PaulAlmond
comment by PaulAlmond · 2010-08-28T22:20:07.947Z · LW(p) · GW(p)

I didn't say you ignored previous correspondence with reality, though.

Replies from: None, None
comment by [deleted] · 2010-08-29T01:09:42.277Z · LW(p) · GW(p)

That isn't Perplexed's point. Let's say that as of this moment all crows that have been observed are black, so both of his hypotheses fit the data. Why should "all crows are black" be assigned a higher prior than "All crows are black except <270 pages of exceptions>"? Based on cousin_it's post, I don't see any reason to do that.

comment by [deleted] · 2010-08-30T01:02:02.486Z · LW(p) · GW(p)

So, to revive this discussion: if we must distribute probability mass evenly because we cannot place emphasis on simplicity, shouldn't our priors be almost zero for every hypothesis? It seems to me that the "underdetermination" problem makes it very hard to use priors in a meaningful way.

comment by PaulAlmond · 2010-08-30T02:51:36.750Z · LW(p) · GW(p)

I am assuming here that all the crows that we have previously seen have been black, and therefore that both theories have the same agreement, or at least approximate agreement, with what we know.

The second theory clearly has more information content.

Why would it not make sense to use the first theory on this basis?

The fact that all the crows we have seen so far are black makes it a good idea to assume black crows in future. There may be instances of non-black crows, when the theory has predicted black crows, but that simply means that the theory is not 100% accurate.

If the 270 pages of exceptions have not come from anywhere, then the fact that they are not justified just makes them random, unjustified specificity. Out of all the possible worlds we can imagine that are consistent with what we know, the proportion that agree with this specificity is going to be small. If most crows are black, as I am assuming our experience has suggested, then when this second theory predicts a non-black crow, as one of its exceptions, it will probably be wrong: The unjustified specificity is therefore contributing to a failure of the theory. On the other hand, when the occasional non-black crow does show up, there is no reason to think that the second theory is going to be much better at predicting this than the first theory - so the second theory would seem to have all the inaccuracies of wrongful black crow prediction of the first theory, along with extra errors of wrongful non-black crow prediction introduced by the unjustified specificity.

Now, if you want to say that we don't have experience of mainly black crows, or that the 270 pages of exceptions come from somewhere, then that puts us into a different scenario: a more complicated one.

Looking at it in a simple way, however, I think this example actually just demonstrates that information in a theory should be minimized.

Replies from: Perplexed
comment by Perplexed · 2010-08-30T03:44:57.444Z · LW(p) · GW(p)

I haven't been following the discussion on this topic very closely, so my response may be about stuff you already know or already know is wrong. But, since I'm feeling reckless today, I will try to say something interesting.

There are two different information metrics we can use regarding theories. The first deals with how informative a theory is about the world. The ideally informative theory tells us a lot about the world. Or, to say the same thing in different language, an informative theory rules out as many "possible worlds" as it can; it tells us that our own world is very special among all otherwise possible worlds; that the set of worlds consistent with the theory is a small set. We may as well call this kind of information Shannon information, or S-information. A Karl Popper fan would approve of making a theory as S-informative as possible, because then it is exposing itself to the greatest risk of refutation.

The second information metric measures how much information is required to communicate the theory to someone. My 270 pages of fine print in the second crow theory might be an example of a theory with a lot of this kind of information. Let us call this kind of information Kolmogorov information, or K-information. My understanding of Occam's razor is that it recommends that our theories should use as little K-information as possible.

So we have Occam telling us to minimize the K-information and Popper telling us to maximize the S-information. Luckily, the two types of information are not closely related, so (assuming that the universe does not conspire against us) we can frequently do reasonably well by both criteria. So much for the obvious and easy points.

The trouble appears, especially for biologists and other "squishy" scientists, when Nature seems to have set things up so that every law has some exceptions. I'll leave it to you to Google on either "white crow" or "white raven" and to admire those fine and intelligent birds. So, given our objectives of maximizing one information measure and minimizing the other, how should we proceed? Do we change our law to say "99+% of crows are black?" Do we change it to say "All crows are black, not counting ravens as crows, and except for a fraction under 1% of crows which are albinos and also have pink eyes?" I don't know, but maybe you have thought about it more than I have.

Replies from: Richard_Kennaway, NancyLebovitz
comment by Richard_Kennaway · 2010-08-30T10:42:41.754Z · LW(p) · GW(p)

The trouble appears, especially for biologists and other "squishy" scientists, when Nature seems to have set things up so that every law has some exceptions. I'll leave it to you to Google on either "white crow" or "white raven" and to admire those fine and intelligent birds. So, given our objectives of maximizing one information measure and minimizing the other, how should we proceed? Do we change our law to say "99+% of crows are black?" Do we change it to say "All crows are black, not counting ravens as crows, and except for a fraction under 1% of crows which are albinos and also have pink eyes?"

We change it to say, "99+% of crows have such-and-such alleles of genes for determining feather colour; certain other alleles are rare and result in a bird lacking feather pigments due to the synthesis pathway being broken at such-and-such a step for lack of such-and-such a protein. The mutation is disadvantageous, hence the absence of any substantial population of white crows." (Or whatever the actual story is, I'm just making that one up.) If we don't know the actual story, then the best we can do is say that for reasons we don't know, it happens now and then that black crows can give birth to a white offspring.

Squishiness is not a property of biological phenomena, but of our knowledge of those phenomena. Exceptions are in our descriptions, not in Nature.

comment by NancyLebovitz · 2010-08-30T15:14:08.716Z · LW(p) · GW(p)

I wonder if it helps to arrange K-information in layers. You could start with "Almost all crows are black", and then add footnotes for how rare white crows actually are, what causes them, how complete we think our information about crow color distribution is and why, and possibly some factors I haven't thought of.

Replies from: Perplexed
comment by Perplexed · 2010-08-30T17:17:34.950Z · LW(p) · GW(p)

Layering or modularizing the hypothesis: Of course, you can do this, and you typically do do this. But, layering doesn't typically change the total quantity of K-information. A complex hypothesis still has a lot of K-information whether you present it as neatly layered or just jumbled together. Which brings us to the issue of just why we bother calculating the K-information content of a hypothesis in the first place.

There is a notion, mentioned in Jaynes and also in another thread active right now, that the K-information content of a hypothesis is directly related to the prior probability that ought to be attached to it (in the absence of, or prior to, empirical evidence). So, it seems to me that the interesting thing about your layering suggestion is how the layering should tie in to the Bayesian inference machinery which we use to evaluate theories.
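
(If I have the usual formulation right, the relation is something like a Solomonoff-style prior,

$$P(H) \propto 2^{-K(H)},$$

so each additional bit of K-information roughly halves the "no-data" prior.)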

For example, suppose we have a hypothesis which, based on evidence so far, has a subjective "probability of correctness" of, say, 0.5. Then we get a new bit of evidence. We observe a white (albino) crow, for example. Doing standard Bayesian updating, the probability of our hypothesis drops to 0.001, say. So we decide to try to resurrect our hypothesis by adding another layer. Trouble is that we have just increased the K-complexity of the hypothesis, and that ought to hurt us in our original "no-data" prior. Trouble is, we already have data. Lots of it. So is there some algebraic trick which lets us add that new layer to the hypothesis without going back to evidential square one?

Replies from: NancyLebovitz, Peter_de_Blanc
comment by NancyLebovitz · 2010-08-30T17:32:53.323Z · LW(p) · GW(p)

K-information is about communicating to "someone"-- do you compute the amount of K-information for the most receptive person you're communicating with, or do you have a different amount for each layer of detail?

Actually, you might have a tree structure, not just layers-- the prevalence of white crows in time and space is a different branch than the explanation of how crows can be white.

Replies from: Perplexed
comment by Perplexed · 2010-08-30T17:59:30.127Z · LW(p) · GW(p)

K-information is about communicating to "someone"-- do you compute the amount of K-information for the most receptive person you're communicating with, or do you have a different amount for each layer of detail?

A very interesting question. Especially when you consider the analogy with Kolmogorov complexity proper. Here we have an ambiguity as to what person we communicate to. There, the ambiguity was regarding exactly what model of universal Turing machine we were programming. And there, there was a theorem to the effect that the differences among Turing machines aren't all that big. Do we have a similar theorem here, for the differences among people - seen as universal programmable epistemic engines?

comment by Peter_de_Blanc · 2010-08-30T17:25:07.417Z · LW(p) · GW(p)

Trouble is, we already have data. Lots of it. So is there some algebraic trick which lets us add that new layer to the hypothesis without going back to evidential square one?

Bayesian updating is timeless. It doesn't care whether you observed the data before or after you wrote the hypothesis.
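
A minimal sketch of what that order-independence looks like (the coin hypotheses and numbers are made up; the only point is that the update is a commutative product):

```python
def posterior(prior, likelihoods):
    """Multiply the prior by each observation's likelihood, then normalise."""
    post = dict(prior)
    for like in likelihoods:
        post = {h: post[h] * like[h] for h in post}
    total = sum(post.values())
    return {h: round(p / total, 4) for h, p in post.items()}

prior = {"fair": 0.5, "biased": 0.5}           # two toy hypotheses about a coin
heads = {"fair": 0.5, "biased": 0.9}           # likelihood of one head under each
tails = {"fair": 0.5, "biased": 0.1}           # likelihood of one tail under each

data = [heads, heads, tails, heads]
print(posterior(prior, data))                  # updated in the order observed
print(posterior(prior, list(reversed(data))))  # identical posterior in any order
```

Reordering the evidence - or writing down the hypothesis after seeing it - changes nothing, because the posterior is just a normalised product of likelihoods.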

Replies from: Perplexed
comment by Perplexed · 2010-08-30T18:04:45.945Z · LW(p) · GW(p)

So, it sounds like you are suggesting that we can back out all that data, change our hypothesis and prior, and then read the data back in. In theory, yes. But sometimes we don't even remember the data that brought us to where we are now. Hence the desirability of a trick. Is there an updating-with-new-hypothesis rule to match Bayes's updating-with-new-evidence rule?

comment by Snowyowl · 2010-08-25T18:46:44.842Z · LW(p) · GW(p)

Here's a thought experiment that's been confusing me for a long time, and I have no idea whether it is even possible to resolve the issues it raises. It assumes that a reality which was entirely simulated on a computer is indistinguishable from the "real" one, at least until some external force alters it. So... the question is, assuming that such a program exists, what happens to the simulated universe when it is executed?

In accordance with the arguments that Pavitra gives below me, redundant computation is not the same as additional computation. Executing the same program twice (with the same inputs each time) is equivalent to executing it once, which is equivalent to executing it five times, ten times, or a million. You are just simulating the same universe over and over, not a different one each time.

But is running the simulation once equivalent to running it ZERO times?

The obvious answer seems to be "no", but bear with me here. There is nothing special about the quarks and leptons that make up a physical computer. If you could make a Turing machine out of light, or more exotic matter, you would still be able to execute the same program on it. And if you could make such a computer in any other universe (whatever that might mean), you would still be able to run the program on it. But in such considerations, the computer used is immaterial. A physical computer is not a perfect Turing machine - it has finite memory space and is vulnerable to physical defects which introduce errors into the program. What matters is the program itself, which exists regardless of the computer it is on. A program is a Platonic ideal, a mathematical object which cannot exist in this universe. We can make a representation of that program on a computer, but the representation is not perfect, and it is not the program itself. In the same way, a perfect equilateral triangle cannot actually be constructed in this universe; even if you use materials whose length is measured down to the atom, its sides will not be perfectly straight and its angles will not be perfectly equal. More importantly, if you then alter the representation to make one of the angles bigger, it does not change the fact that equilateral triangles have 60° angles, it simply makes your representation less accurate. In the same way, executing a program on a computer will not alter the program itself. If there are conscious beings simulated on your computer, they existed before you ran the program, and they will exist even if you unplug the computer and throw it into a hole - because what you have in your computer is not the conscious beings, but a representation of them. And they will still exist even if you never run the program, or even if it never occurs to anyone on Earth that such a program could be made.

The problem is, this same argument could be used to justify the existence of literally everything, everywhere. So we are left with several possible conclusions: (1) Everything is "real" in some universe, and we have no way of ever finding such universes. This cannot ever be proved or falsified, and also leads to problems with the definition of "everything" and "real". (2) The initial premise is false, and only physical objects are real: simulations, thoughts and constructs are not. I think there is a philosophical school of thought that believes this to be true, though I have no idea what its name is. Regardless, there are still a lot of holes in this answer. (3) I have made a logical mistake somewhere, or I am operating from an incorrect definition of "real". It happens.

It is also worth pointing out that both (1) and (2) invalidate every ethical truth in the book, since in (1) there is always a universe in which I just caused the death of a trillion people, and in (2) there is no such thing as "ethics" - ideas aren't real, and that includes philosophical ideas.

Anyway, just bear this in mind when you think about a universe being simulated on a computer.

Replies from: Emile, ata, Pavitra, thomblake
comment by Emile · 2010-08-25T19:17:53.732Z · LW(p) · GW(p)

(1)Everything is "real" in some universe, and we have no way of ever finding such universes. This cannot ever be proved or falsified, and also leads to problems with the definition of "everything" and "real".

That's pretty much Tegmark's Multiverse, which seems pretty popular around here (I think it makes a lot of sense).

comment by ata · 2010-08-25T19:38:01.273Z · LW(p) · GW(p)

Indeed. I have a post making similar arguments, though I still haven't been able to resolve the ethical and anthropic problems it raises in any satisfactory way. At this point I've backtracked from the confidence I held when I wrote that post; what I'm still willing to say is that we're probably on the right track thinking of "Why does anything exist?" as a wrong question and thinking of reality as indexical (i.e. the true referent of the category "real" is the set of things instantiated by this universe; it is a category error to talk about other universes being real or not real), but the Mathematical Universe Hypothesis still leaves much to be confused about.

Replies from: Perplexed
comment by Perplexed · 2010-08-25T20:46:18.602Z · LW(p) · GW(p)

My own view is that (ignoring simulations for the time being) MWI ideas have no conflict with our usual ethical intuitions and reasonings. Yes, it is the case that when I choose between evil action A and good action B, there will be two branches of the universe - one in which I choose A and one in which I choose B. This will be the case regardless of which choice I make. But this does not make my choice morally insignificant, because I split too, along with the rest of the universe. The version of me that chose evil act A will have to live thereafter with the consequences of that choice. And the version of me that chose B must live with quite different consequences.

What, more than that, could a believer in the moral significance of actions want of his universe?

The situation with respect to simulations is a bit trickier. Suppose I am deciding whether to (A) pull the plug on a simulation which contains millions of sentient (simulated) beings, or (B) allow the simulation to continue. So, I choose, and the universe branches. If I chose A, I must live with the consequences. I don't have that simulation to kick around any more. But, if I were to worry about all the simulated lives that I have so ruthlessly terminated, I can easily reassure myself that I have only terminated a redundant copy of those lives. The (now) master copy of the simulation plays on, over in that parallel universe where I chose B.

Is it wrong to create a simulation and then torture the inhabitants? Well, that is an ethical question, whereas this is a meta-ethical analysis. But the meta-ethical answer to that ethical question is that if you torture simulated beings, then you must live with the consequences of that.

Replies from: ata, None, Perplexed
comment by ata · 2010-08-25T20:58:51.512Z · LW(p) · GW(p)

That's not how MWI works, unless human brains have a quantum randomness source that they use to make decisions (which does not appear to be the case).

Replies from: Perplexed, GuySrinivasan
comment by Perplexed · 2010-08-25T21:25:13.183Z · LW(p) · GW(p)

I'm not sure it matters to the analysis. Whether we have a Tegmark multiverse, or Everett MWI with some decisions depending on quantum randomness and others classically determined, or whether the multiple worlds are purely subjective fictions created to have a model of Bayesianism; regardless of what you think is a possible reduction of "possibly"; it is still the case that you have to live in the reality which you helped to create by way of your past actions.

Replies from: h-H
comment by h-H · 2010-08-26T04:31:48.318Z · LW(p) · GW(p)

Agreed, it's not as though scientific analysis requires the laws of physics to have no quantum randomness source, etc.; rather, it is satisfied with finding the logical necessities among what is used to describe the observable universe.

comment by [deleted] · 2010-08-25T21:31:35.295Z · LW(p) · GW(p)

Yes, MWI ideas have no conflict with usual ethical intuitions. And they also help you make better sense of those intuitions. Counterfactuals really do exist, for example; they're not just some hypothetical that is in point of fact physically impossible.

Replies from: h-H
comment by h-H · 2010-08-26T03:50:26.747Z · LW(p) · GW(p)

But we shouldn't concern ourselves with counterfactuals if they aren't part of our observed universe.

Replies from: Perplexed
comment by Perplexed · 2010-08-26T04:14:06.959Z · LW(p) · GW(p)

My impression is that sometimes we do need to deal with them in order to make the math come out right, even though the only thing we are really concerned about is our observed universe. Just as we sometimes need to deal with negative numbers of sheep - however difficult we may find this to visualize if we work as a shepherd.

Replies from: h-H
comment by h-H · 2010-09-04T04:02:07.462Z · LW(p) · GW(p)

True, but there are no 'negative sheep', only numbers arbitrarily representing them.

Replies from: Perplexed
comment by Perplexed · 2010-09-05T01:23:35.462Z · LW(p) · GW(p)

But we shouldn't concern ourselves with numbers if they aren't part of our observed universe.

Replies from: h-H
comment by h-H · 2010-09-05T07:44:10.223Z · LW(p) · GW(p)

Numbers are quite useful, so we don't/shouldn't do away with them, but the math is never a complete substitute for the observable universe.

Writing down '20 sheep' doesn't physically equal 20 sheep; rather, it's a method we use for simplicity. As it stands, no two sheep are alike to every last detail as far as anyone can tell, yet we still have a category called 'sheep'. This is so given the observed recurrence of 'sheep'-like entities, similar enough for us to categorize them for practicality's sake, but that doesn't mean they're physically all alike to every detail.

It could be argued that sometimes the math does equate with reality, as in 'Oxygen atom' being a category consisting of entirely similar things, but even that is not confirmed, simply an assertion; no human has observed all 'Oxygen atoms' in existence to be similar in every detail, or even in some arbitrarily 'essential' detail(s). Yet it is enough for the purposes of science to consider them all similar, and so we go with it; otherwise we'd never have coherent thought, let alone science.

It might very well be that all Oxygen atoms in existence are physically the same in some ways, but we have no way of actually knowing. This doesn't mean that there are 'individual atoms', but it doesn't negate it either.

ETA: as pengvado said in the post below, replace 'atom' with 'particle'.

Replies from: pengvado, Perplexed, wedrifid
comment by pengvado · 2010-09-05T08:29:50.401Z · LW(p) · GW(p)

This doesn't mean that there are 'individual atoms', but it doesn't negate it either.

No Individual Particles. The fact that measurements of their mass/charge/etc have always come out the same, is not the only evidence we have for all particles of a given type being identical.

(A whole oxygen atom is a bad example, though. Atoms have degrees of freedom beyond the types of particles they're made of.)

Replies from: h-H
comment by h-H · 2010-09-05T09:45:39.205Z · LW(p) · GW(p)

Yes, I had that specific post in mind when I presented the atom example. You're correct here though, I should have said particles; I shouldn't write so late after midnight, I guess.

Now, I admit that my understanding of quantum mechanics is not that much above a layperson's, so maybe I just need to apply myself more and it'll click, but let's consider my argument first. Here's what EY said in reply to a post in that thread (emphasis mine): "There can be properties of the particles we don't know about yet, but our existing experiments already show those new properties are also identical, unless the observed universe is a lie."

And then: "Undiscovering this would be like undiscovering that atoms were made out of nucleons and electrons.

It's in this sense that I say that the observed universe would have to be a lie."

Here I believe he's making a mistake/displaying a bias: the math - of quantum mechanics, in this particular instance - does not determine physical reality; rather, it describes it to some degree or other.

To suggest that the mathematics of quantum mechanics is the end of the road is too strong a claim, IMO.

Replies from: pengvado
comment by pengvado · 2010-09-06T00:57:05.901Z · LW(p) · GW(p)

I don't have any arguments that weren't discussed in that post; so far as I can tell, it already adequately addressed your objection:

QM doesn't have to be the end of the road. If QM is a good approximation of reality on the scales it claims to predict in the situations we have already tested it in -- if the math of QM does describe reality to some degree or other -- then that's enough for the quantum tests of particle identity to work exactly.

Replies from: h-H
comment by h-H · 2010-09-06T03:11:34.761Z · LW(p) · GW(p)

To put it mildly, I don't believe anyone can address that objection satisfactorily; as wedrifid put it eloquently, the math is part of the map, not the territory.

if the math of QM does describe reality to some degree or other -- then that's enough for the quantum tests of particle identity to work exactly.

Agreed, that was partially my point a couple of posts ago. For practical reasons it's good enough that the math works to a degree.

comment by Perplexed · 2010-09-05T13:56:20.248Z · LW(p) · GW(p)

Uhmm. I hate to explain my own jokes, but ... You did notice the formal similarity between my "we shouldn't concern ourselves" comment and its great grandparent, right?

Replies from: h-H
comment by h-H · 2010-09-06T03:02:28.002Z · LW(p) · GW(p)

I noticed, but there was a clear difference that I felt was necessary to point out regardless.

comment by wedrifid · 2010-09-05T13:45:48.479Z · LW(p) · GW(p)

It might very well be that all Oxygen atoms in existence are physically the same in some ways, but we have no way of actually knowing. This doesn't mean that there are 'individual atoms', but it doesn't negate it either.

True (only) in the sense that our numbers are part of our map and not the territory. In the same sense we have no way of actually knowing there are patterns in the universe appropriately named Oxygen. Or Frog.

Replies from: h-H
comment by h-H · 2010-09-06T03:03:52.498Z · LW(p) · GW(p)

Good point about the map/territory distinction; that was what I intended to say but couldn't put into so few words. Thanks :)

And no, it seems that not even Frog can escape this. I'm not sure about its significance here, though?

comment by Perplexed · 2010-08-25T20:54:22.834Z · LW(p) · GW(p)

Is it wrong to create a simulation and then torture the inhabitants? Well, that is an ethical question, whereas this is a meta-ethical analysis. But the meta-ethical answer to that ethical question is that if you torture simulated beings, then you must live with the consequences of that.

I should add that it is impossible to erase your sin by deciding to terminate the simulation, so as to "euthanize" the victims of your torture. Because there is always a branch where you don't so decide, and the victims of your torture live on.

comment by Pavitra · 2010-08-25T22:01:45.563Z · LW(p) · GW(p)

I don't think it works like that. Math is a conceptual construct, not something that has its own reality separate from either the thing it approximates or the mind that approximates with it.

I'm reminded of the person who thought that using the equations for relativistic rather than classical mechanics to model cannonballs would give the wrong answer.

Only things that happen are real. There's no Math Heaven inhabited by angelic equations in a separate magisterium from the world of the merely real.

comment by thomblake · 2010-08-25T19:35:16.007Z · LW(p) · GW(p)

Executing the same program twice (with the same inputs each time) is equivalent to executing it once

In some sense, maybe. But if that were generally true, then I wouldn't have any reason to run the same program twice, but I do. (For example, I have repeatedly asked my calculator what 1080*4/3 is, since I have a weird TV and an untrustworthy memory.)

comment by NancyLebovitz · 2010-08-09T16:50:23.590Z · LW(p) · GW(p)

I've written a post for consolidating book recommendations, and the links don't have hidden urls. These are links which were cut and pasted from a comment-- the formatting worked there.

Posting (including to my drafts) mysteriously doubles the spaces between the words in one of my link texts, but not the others. I tried taking that link out in case it was making the whole thing weird, but it didn't help.

I've tried using the pop-up menu for links that's available for writing posts, but that didn't change the results.

What might be wrong with the formatting?

Replies from: Alicorn
comment by Alicorn · 2010-08-09T19:27:51.975Z · LW(p) · GW(p)

I don't know what's wrong, but a peek at the raw HTML editor (there's a button for it in the toolbar) might give a hint.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T19:44:48.899Z · LW(p) · GW(p)

Thank you.

Posts are html. Comments are Markdown.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T20:16:16.251Z · LW(p) · GW(p)

I thought I had it solved. I swear there was one moment when a clean copy with links appeared, though it might have been as a draft.

And then the raw html links started showing up.

At this point, I've just posted it without links.

comment by PeerInfinity · 2010-08-08T01:57:59.150Z · LW(p) · GW(p)

Scenario: A life insurance salesman, who happens to be a trusted friend of a relatively-new-but-so-far-trustworthy friend of yours, is trying to sell you a life insurance policy. He makes the surprising claim that after 20 years of selling life insurance, none of his clients have died. He seems to want you to think that buying a life insurance policy from him will somehow make you less likely to die.

How do you respond?

edit: to make this question more interesting: you also really don't want to offend any of the people involved.

Replies from: wedrifid, Eliezer_Yudkowsky, SilasBarta, Clippy, Larks, RobinZ, cupholder
comment by wedrifid · 2010-08-08T07:40:05.393Z · LW(p) · GW(p)

He makes the surprising claim that after 20 years of selling life insurance, none of his clients have died.

Wow. He admitted that to you? That seems to be strong evidence that most people refuse to buy life insurance from him. In a whole 20 years he hasn't sold enough insurance that even one client has died from unavoidable misfortune!

Replies from: SilasBarta
comment by SilasBarta · 2010-08-09T21:07:10.142Z · LW(p) · GW(p)

PeerInfinity added that he had gotten sales awards for the number of policies sold, so I don't think this is a factor.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-08T06:32:51.682Z · LW(p) · GW(p)

"No."

Life insurance salesmen are used to hearing that. If they act offended, it's a sales act. If you're reluctant to say it, you're easily pressured and they're taking advantage of that. You say "No". If they press you, you say, "Please don't press me further." That's all.

comment by SilasBarta · 2010-08-08T02:18:12.892Z · LW(p) · GW(p)

Since his sales rate probably increased with time, that means the average time since selling a policy is ~8 years. So the typical client of his hasn't died within 8 years of buying. Making a rough estimate of the age of the clients he sells to, which would probably be 30-40, it just means that the typical client has lived to somewhere around 38-48, which is normal, not special.

Furthermore, people who buy life insurance self-select for being more prudent in general.

So, even ignoring the causal separations you could find, what he's told you is not very special. Though it separates him from other salesmen, the highest likelihood ratio you should put on this piece of evidence would be something like 1.05 (i.e. ~19 out of 20 salesmen could say the same thing), or not very informative, so you are only justified in making a very slight move toward his hypothesis, even under the most generous assumptions.

You could get a better estimate of his atypicality by asking more about his clients, at which point you would have identified factors that can screen off the factor of him selling a policy.

(Though in my experience, life insurance salesmen aren't very bright, and a few sentences into that explanation, you'll get the "Oh, it's one of these people" look ...)

How'd I do?

Edit: Okay, I think I have to turn in my Bayes card for this one: I just came up with a reason why the hypothesis puts a high probability on the evidence, when in reality the evidence should have a low probability of existing. So it's more likely he doesn't have his facts right.

Maybe this is a good case to check the "But but somebody would have noticed" heuristic. If one of his clients died, would he even find out? Would the insurance company tell him? Does he regularly check up on his clients?
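
A quick back-of-the-envelope sketch of just how low that probability is (the client counts, the ~8 years a policy has typically been in force, and the half-a-percent annual mortality are all guesses, not his actual figures):

```python
def p_no_deaths(n_clients, avg_years_in_force, annual_mortality):
    """Chance that not one of n clients has died yet, treating deaths as independent."""
    return (1 - annual_mortality) ** (n_clients * avg_years_in_force)

# guesses: middle-aged clients (~0.5% annual mortality), ~8 years in force on average
for n in (50, 100, 200):
    print(n, "clients:", f"{p_no_deaths(n, 8, 0.005):.2g}")
```

Even under generous assumptions, anyone with a sizable client base should almost certainly have had at least one client die by now, so "he's mistaken or misremembering" deserves most of the probability mass.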

Replies from: PeerInfinity, NancyLebovitz, RobinZ
comment by PeerInfinity · 2010-08-08T02:41:48.689Z · LW(p) · GW(p)

I disagree with your analysis, but the details of why I disagree would be spoilers.

more details:

no, he's not deliberately selecting low-risk clients. He's trying to make as many sales as possible.

and he's had lots of clients. I don't know the actual numbers, but he has won awards for how many policies he has sold.

and he seems to honestly believe that there's something special about him that makes his clients not die. he's "one of those people".

and here's the first actuarial life table I found through a quick google search: http://www.ssa.gov/OACT/STATS/table4c6.html

Replies from: PeerInfinity, SilasBarta
comment by PeerInfinity · 2010-08-08T03:10:49.852Z · LW(p) · GW(p)

I'm going to go ahead and post the spoiler, rot13'd

Zl thrff: Ur'f ylvat. Naq ur'f cebonoyl ylvat gb uvzfrys nf jryy, va beqre sbe gur yvr gb or zber pbaivapvat. Gung vf, qryvorengryl sbetrggvat nobhg gur pyvragf jub unir qvrq.

Vs ur unf unq a pyvragf, naq vs gurve nirentr ntr vf 30... Rnpu lrne, gur cebonovyvgl bs rnpu bs gurz fheivivat gur arkg lrne vf, jryy, yrg'f ebhaq hc gb 99%. Gung zrnaf gung gur cebonovyvgl bs nyy bs gurz fheivivat vf 0.99^a. Rira vs ur unf bayl unq 100 pyvragf, gura gur cebonovyvgl bs gurz nyy fheivivat bar lrne vf 0.99^100=0.36 Vs ur unq 200 pyvragf, gura gur cebonovyvgl bs gurz nyy fheivivat bar lrne vf 0.99^200=0.13. Naq gung'f whfg sbe bar lrne. Gur sbezhyn tbrf rkcbaragvny ntnva vs lbh pbafvqre nyy 20 lrnef. Gur cebonovyvgl bs nyy 100 pyvragf fheivivat 20 lrnef vf 0.99^100^20=1.86R-9

Naq zl npghny erfcbafr vf... qba'g ohl gur yvsr vafhenapr. Ohg qba'g gryy nalbar gung lbh guvax ur'f ylvat. (hayrff lbh pbhag guvf cbfg.) Nyfb, gur sevraq ab ybatre pbhagf nf "gehfgrq", be ng yrnfg abg gehfgrq gb or engvbany. Bu, naq srry ernyyl thvygl sbe abg svaqvat n orggre fbyhgvba, naq cbfg gb YJ gb frr vs nalbar guvaxf bs n orggre vqrn. Ohg qba'g cbfg rabhtu vasbezngvba sbe nalbar gb npghnyyl guvax bs n orggre fbyhgvba. Naq vs fbzrbar qbrf guvax bs n orggre vqrn naljnl, vtaber vg vs vg'f gbb fpnel.

Replies from: saturn
comment by saturn · 2010-08-08T05:48:54.980Z · LW(p) · GW(p)

I don't understand what you mean by a better solution; I wouldn't feel guilty about doing what you did.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-08-08T16:41:51.080Z · LW(p) · GW(p)

The part to feel guilty about is that I chose not to explain that the salesman is probably either lying, or insane, or both, and therefore probably shouldn't be considered "a trusted friend". And also that I chose to just try to avoid both of these people, rather than... thinking of a less blatantly unfriendly solution.

comment by SilasBarta · 2010-08-08T03:00:39.342Z · LW(p) · GW(p)

I disagree with your analysis, but the details of why I disagree would be spoilers.

But I can only make inferences from what you've told me. If there's a factor that throws off the general inferences you can make from a salesman's clientele, you can't fault me for not using it. It's like you're trying to say:

"This dude was born in the US. He's 50 years old. Can he speak English?" -> Yeah, probably. -> "Haha! No, he can't! I didn't tell you he was abducted to Cambodia as an infant and grew up there!"

Anyway, the next step is to estimate what fraction of salesmen with the same clientele composition have had no clients die, and see how atypical he is. Plus, his sales record would have to start from early in his career, or else his clients fall mostly within recent sales, a time span in which people normally don't die anyway.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-08-08T03:16:16.207Z · LW(p) · GW(p)

I thought I provided enough information, but I apologise if I didn't.

I posted a rot13'd version of my answer, which also explains why I disagreed with your answer.

Sorry if the rot13ing is pointlessly annoying.

comment by NancyLebovitz · 2010-08-08T09:11:04.386Z · LW(p) · GW(p)

Furthermore, people who buy life insurance self-select for being more prudent in general.

On the other hand, there's also selection for people who aren't expecting to live as long as the average, and this pool includes prudent people.

Anyone have information on the relationship between owning life insurance and longevity?

Replies from: wedrifid
comment by wedrifid · 2010-08-08T13:56:57.049Z · LW(p) · GW(p)

On the other hand, there's also selection for people who aren't expecting to live as long as the average, and this pool includes prudent people.

And on yet another hand, there is selection for people who are expected to live longer than the average (selection by the salesmen directly, or mediated by price).

comment by RobinZ · 2010-08-08T02:28:06.231Z · LW(p) · GW(p)

I like the analysis! Did you have a formula you used to arrive at the 8 years, or is it an eyeball guess?

Replies from: SilasBarta
comment by SilasBarta · 2010-08-08T02:36:01.849Z · LW(p) · GW(p)

Thanks! Just made an eyeball guess on the 8 years.

comment by Clippy · 2010-08-08T04:00:25.082Z · LW(p) · GW(p)

Buying life insurance can't extend a human's life.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-09T21:08:02.999Z · LW(p) · GW(p)

Thank you, Cliptain Obvious! The problem is to say how his claim is implausible or doesn't follow from his evidence, given that we already have that intuition.

comment by Larks · 2010-08-09T22:27:50.647Z · LW(p) · GW(p)

Tell him you found his pitch very interesting and persuasive, and that you'd like to buy life insurance for a 20-year period. Then ponder for a little while: "Actually, it can't be having the contract that keeps them alive, can it? That's just a piece of paper. It must be that the sort of people who buy it are good at staying alive! And it looks like I'm one of them; this is excellent!"

Then, you point out that as you're not going to die, you don't need life insurance, and say goodbye.

If you wanted to try to enlighten him, you might start by explicitly asking if he believed there was a causal link. But as the situation isn't really set up for honest truth-hunting, I wouldn't bother.

Replies from: soreff
comment by soreff · 2010-08-10T00:51:20.946Z · LW(p) · GW(p)

Then, you point out that as you're not going to die, you don't need life insurance, and say goodbye.

If the salesman is Omega in disguise, is this two-boxing? :-)

Replies from: Larks
comment by Larks · 2010-08-10T11:12:12.671Z · LW(p) · GW(p)

Well, kind of. Unlike in Newcomb's, we have no evidence that it's the decision that causes the long life, as opposed to some other factor correlated with both (which seems much more likely).

comment by RobinZ · 2010-08-08T02:33:03.966Z · LW(p) · GW(p)

With a degree of discombobulation, I imagine. I can't see any causal mechanism by which buying insurance would cause you to live longer, so unless the salesman knows something I wouldn't expect him to, he would seem to have acquired an unreliable belief. Given this, I would postpone buying any insurance from him in case this unreliable belief could have unfortunate further consequences* and I would reduce my expectation that the salesman might prove to be an exceptional rationalist.

* For example: given his superstition, he may have allotted inadequate cash reserves to cover future life insurance payments.

comment by cupholder · 2010-08-08T03:46:25.236Z · LW(p) · GW(p)

Maybe the salesman mostly sells temporary life insurance, and just means that no clients had died while covered?

comment by [deleted] · 2010-08-05T14:41:05.929Z · LW(p) · GW(p)

One way to model someone's beliefs, at a given frozen moment of time, is as a real-valued function P on the set of all assertions. In an ideal situation, P will be subject to a lot of consistency conditions, for instance if A is a logical consequence of B, then P(A) is not smaller than P(B). This ideal P is very smart: if such a P has P(math axioms) very close to 1, then it will have P(math theorems) very close to 1 as well.

Clearly, even a Bayesian superintelligence is not going to maintain an infinitely large database of values of P, that it updates from instant to instant. Rather, it will have something like a computer program that takes as inputs assertions A, spends some time thinking, and outputs numbers P(A). I think we cannot expect the computed numbers P(A) to have the consistency property (B implies A means P(A) not smaller than P(B)). For instance it should be possible for a superintelligence to answer a math question (I don't know, Goldbach's conjecture) with "very likely true" and have Goldbach's conjecture turn out false.

(Since "A" is a logical consequence of "A and B", I guess I am accusing superintelligences of committing a souped-up form of the conjunction fallacy.)

The fact that a prior in practice won't be a set of cached numbers but instead a computer program, subject to all the attendant resource constraints, seems important to me, but I'm open to the possibility that it's a red herring. Am I making some kind of classic or easily addressed error?
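
Here is a crude sketch of the kind of failure I have in mind (the tiny Monte Carlo estimator is only a stand-in for any resource-limited belief program, certainly not a model of a superintelligence):

```python
import random

def p_hat(event, budget=200, seed=0):
    """A bounded 'belief program': estimate P(event) from a limited number of samples."""
    rng = random.Random(seed)
    return sum(event(rng.random()) for _ in range(budget)) / budget

A = lambda x: x < 0.5              # P(A) = 0.5
B = lambda x: x < 0.6              # P(B) = 0.6
A_and_B = lambda x: A(x) and B(x)  # "A and B" happens to be logically equivalent to A

violations = sum(
    p_hat(A_and_B, seed=2 * i) > p_hat(A, seed=2 * i + 1)
    for i in range(1000)
)
print("P(A and B) estimated above P(A) in", violations, "of 1000 runs")
```

A is a logical consequence of "A and B", so consistency demands P(A) >= P(A and B); the bounded estimator violates that in nearly half its runs, simply because the two numbers are computed separately under a finite budget.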

Replies from: wedrifid
comment by wedrifid · 2010-08-05T16:05:55.488Z · LW(p) · GW(p)

I think we cannot expect the computed numbers P(A) to have the consistency property (B implies A means P(A) not smaller than P(B)).

Clarify for me what you are saying here. Why would a bounded-rational superintelligence maintain a logically inconsistent belief system? Are you making the following observation?

  • Even a superintelligence is not logically omniscient
  • There are inevitably going to be complicated mathematical properties of the superintelligence's map that are not worth spending processing time on. This is ensured by the limits of physics itself.
  • Outside the 'bounds' of the superintelligence's logical searching there will be logical properties for which the superintelligence's beliefs are not consistent.
  • A Bayesian superintelligence is not logically omniscient, so some of its beliefs will be logically inconsistent.

My impression is that the above holds true unless the superintelligence in question cripples itself, sacrificing most of its instrumentally rational capability for epistemic purity.

Replies from: None, None
comment by [deleted] · 2010-08-05T19:11:35.094Z · LW(p) · GW(p)

I am glad that a term like "bounded-rational" exists. If it's been discussed someplace very thoroughly then I likely don't have very much to add. What are some proposals for modeling bounded Bayesianism?

I think what I'm saying is consistent with your bullet points, but I would go further. I'll focus on one point: I do not think it's possible for a bounded agent to be epistemically pure, even having sacrificed most or all of its instrumentally rational capability. Epistemic impurity is built right into math and logic.

Let me make the following assumption about our bounded rational agent: given any assertion A, it has the capability of computing its prior P(A) in time that is polynomial in the length of A. That is, it is not strictly agnostic about anything. Since there exist assertions A which are logical consequences of some axioms, but whose shortest proof is super-polynomial (in fact it gets much worse) in the length of A, it seems very unlikely that we will have P(A) > P(Axioms) for all provable assertions A.

(I think you could make this into a rigorous mathematical statement, but I am not claiming to have proved it--I don't see how to rule out the possibility that P always computes P(A) > P(Axioms) (and quickly!) just by luck. Such a P would be very valuable.)

Replies from: wedrifid
comment by wedrifid · 2010-08-06T03:50:07.930Z · LW(p) · GW(p)

I believe you are correct. A bounded rational agent that is not strictly agnostic about anything will produce outputs that contain logical inconsistencies.

For a superintelligence to avoid such inconsistencies it would have to violate the 'not strictly agnostic about anything' assumption either explicitly or practically. By 'practically' I mean it could refuse to return output until such time as it has proven the logical correctness of a given A. It may burn up the neg-entropy of its future light cone before it returns but hey, at least it was never wrong. A bounded rational agent in denial about its 'bounds'.

Replies from: None
comment by [deleted] · 2010-08-06T17:59:39.049Z · LW(p) · GW(p)

I see how to prove my claim now, more later.

I had in mind to rule out your "practical agnosticism" with the polynomial time condition. Note that we're talking about the zeroth thing that an intelligence is supposed to do, not "learning" or "deciding" but just "telling us (or itself) what it believes." In toy problems about balls in urns (and maybe, problematically, more general examples) this is often implicitly assumed to be an instantaneous process.

If we're going to allow explicit agnosticism, we're going to have to rethink some things. If P(A) = refuse to answer, what are P(B|A) and P(A|B)? How are we supposed to update?

Replies from: wedrifid
comment by wedrifid · 2010-08-07T03:12:27.604Z · LW(p) · GW(p)

I had in mind to rule out your "practical agnosticism" with the polynomial time condition.

That is a reasonable assumption to make. We just need to explicitly assert that the intelligence is willing and able to return P(A) for any sane length A that matches the polynomial time condition. (And so explicitly rule out intelligences that just compute perfect answers and to hell with polynomial time limits and pesky things like physical possibility.)

If we're going to allow explicit agnosticism, we're going to have to rethink some things. If P(A) = refuse to answer, what are P(B|A) and P(A|B)? How are we supposed to update?

I don't know and the intelligence doesn't care. It just isn't going to give you wrong answers. I think it is reasonable for us to just exclude such intelligences because they are practically useless. I'll include the same caveat that you mentioned earlier - maybe there is some algorithm that never violates logical consistency conditions somehow. That algorithm would be an extremely valuable discovery but one I suspect could be proven impossible. The maths for making such a proof is beyond me.

comment by [deleted] · 2010-08-05T16:25:45.896Z · LW(p) · GW(p)

I don't know what "bounded-rational" means, and I don't know what "instrumentally rational" means. More soon, probably today.

Replies from: wedrifid
comment by wedrifid · 2010-08-05T16:40:55.847Z · LW(p) · GW(p)

I don't know what "bounded-rational" means

It is the difference between what you describe in the first paragraph and what an actually possible agent is limited to. The kind of thing that allows a superintelligence that has no flaws to still be wrong about something like Goldbach's conjecture. Always thinking perfectly but not able to think infinitely fast and about everything.

and I don't know what "instrumentally rational" means.

Achieving your goals in the best possible way. In contrast to 'epistemic rationality' is having the best beliefs possible. The concepts are related but distinct.

comment by Scott Alexander (Yvain) · 2010-08-05T05:22:59.733Z · LW(p) · GW(p)

A long time ago on Overcoming Bias, there was a thread started by Eliezer which was a link to a post on someone else's blog. The linked post posed a question, something like: "Consider two scientists. One does twenty experiments, and formulates a theory that explains all twenty results. Another does ten experiments, formulates a theory that adequately explains all ten results, does another ten experiments, and finds that eir theory correctly predicted the results. Which theory should we trust more and why?"

I remember Eliezer said he thought he had an answer to the question but was going to wait before posting it. I've since lost the post. Does anyone remember what post this is or whether anyone ever gave a really good formal answer?

Replies from: Matt_Simpson, gwern, sketerpot
comment by Matt_Simpson · 2010-08-05T23:04:57.140Z · LW(p) · GW(p)

Here's the post

comment by gwern · 2010-08-05T05:42:49.587Z · LW(p) · GW(p)

I vaguely remember one discussion of Bayesianism vs. frequentism, where the frequentists held that a study in which the experimenters resolved to keep making observations until they observed Foo X times (for a total of X times and Y total observations) must be treated as statistically different from a study where the experimenters resolved to make Y total observations but wound up observing Foo X times; while from the Bayesian perspective, these studies, with their identical objective facts, ought to be treated the same.

Does this sound like it?
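
If I recall the resolution correctly, the two stopping rules give likelihoods that differ only by a constant factor in the unknown frequency, so a Bayesian posterior comes out the same either way. A rough sketch (the flat prior over a grid, and X=7 Foos in Y=20 observations, are arbitrary choices for illustration):

```python
from math import comb  # Python 3.8+

X, Y = 7, 20  # 7 Foos in 20 total observations

def fixed_n_likelihood(p):    # resolved in advance to make Y observations
    return comb(Y, X) * p**X * (1 - p)**(Y - X)

def stop_at_x_likelihood(p):  # resolved to keep observing until the X-th Foo
    return comb(Y - 1, X - 1) * p**X * (1 - p)**(Y - X)

grid = [i / 1000 for i in range(1, 1000)]  # flat prior over a grid of frequencies

def posterior(likelihood):
    weights = [likelihood(p) for p in grid]
    total = sum(weights)
    return [w / total for w in weights]

diff = max(abs(a - b) for a, b in zip(posterior(fixed_n_likelihood),
                                      posterior(stop_at_x_likelihood)))
print("largest difference between the two posteriors:", diff)  # ~0, identical
```

The combinatorial factors out front differ, but they cancel when the posterior is normalised - which is why the identical objective facts end up being all that matters to the Bayesian.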

comment by sketerpot · 2010-08-05T18:46:26.160Z · LW(p) · GW(p)

How valuable was each experiment? If you make a theory after ten experiments, then you could design the next ten experiments to be very specific to the theory: if the theory is right, they come out one way, and they should come out a very different way if the theory is wrong.

It's like writing test-cases for software: once you know what exactly you're testing, you can write test cases that aim at any potential weak spots, in a deliberate and targeted attempt to break it. So if we're assuming that these scientists are actual people (rather than really hypothetical abstractions), I would give more credence to the guy who did ten experiments, formulated a theory, and did ten more experiments, iff those later ten experiments look like they were designed to give new information and really stress-test the theory.

If we're talking about the exact same 20 experiments, then I would generally favor doing maybe 13 experiments, making a theory, and then doing the other 7, to avoid overfitting. Or split the experiments up into two sets of ten, and have two scientists each look at ten of the experiments, make a theory, then test it with the other ten. This kind of thinking would have killed Ptolemy's theory of epicycles, which is a classic case of overcomplicating the theory to match your observations.

I know that's hardly a formal answer, but I think the original question was oversimplifying.
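
A toy sketch of the overfitting point (the "experiments" here are invented noisy measurements of a simple law, with polynomial degree standing in for how much a theory is allowed to contort itself):

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0, 10, 20)                       # 20 "experiments"
y = 3.0 * x + 1.0 + rng.normal(0, 2.0, size=20)  # a simple underlying law plus noise

train_x, test_x = x[:13], x[13:]                 # theorise on 13, stress-test on 7
train_y, test_y = y[:13], y[13:]

def errors(degree):
    """Mean squared error of a degree-n polynomial 'theory' fit to the first 13 results."""
    coeffs = np.polyfit(train_x, train_y, degree)
    mse = lambda xs, ys: float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
    return mse(train_x, train_y), mse(test_x, test_y)

for degree in (1, 9):
    fit_err, held_out_err = errors(degree)
    print(f"degree-{degree} theory: error {fit_err:.3g} on the 13 it was built from, "
          f"{held_out_err:.3g} on the 7 held-out experiments")
```

The flexible theory hugs the experiments it was built on and then falls apart on the ones it never saw - which is roughly the epicycles failure mode.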

comment by ABranco · 2010-08-05T03:42:27.150Z · LW(p) · GW(p)

Interesting article: http://danariely.com/2010/08/02/how-we-view-people-with-medical-labels/

One reason why it's a good idea for someone with OCD (or for that matter, Asperger's, psychosis, autism, paranoia, schizophrenia — whatever) to make sure new acquaintances know of his/her condition:

I suppose that being presented by a third party, as in the example, should make a difference when compared to self-labeling (which may sound like excusing oneself)?

comment by [deleted] · 2010-08-05T01:09:26.415Z · LW(p) · GW(p)

"An Alien God" was recently re-posted on the stardestroyer.net "Science Logic and Morality" forum. You may find the resulting discussion interesting.

http://bbs.stardestroyer.net/viewtopic.php?f=5&t=144148&start=0

comment by timtyler · 2010-08-02T18:16:31.720Z · LW(p) · GW(p)

I made some comments on the recently-deleted threads that got orphaned when the whole topic was banned and the associated posts were taken down. Currently no-one can reply to the comments. They don't relate directly to the banned subject matter - and some of my messages survive despite the context being lost.

Some of the comments were SIAI-critical - and it didn't seem quite right to me at the time for the moderator to crush any discussion about them. So, I am reposting some of them as children of this comment in an attempt to rectify things - so I can refer back to them, and so others can comment - if they feel so inclined:

Replies from: timtyler, timtyler, timtyler
comment by timtyler · 2010-08-02T18:17:15.981Z · LW(p) · GW(p)

[In the context of SIAI folks thinking an unpleasant AI was likely]

The SIAI derives its funding from convincing people that the end is probably nigh - and that they are working on a potential solution. This is not the type of organisation you should trust to be objective on such an issue - they have obvious vested interests.

Replies from: Johnicholas
comment by Johnicholas · 2010-08-02T19:42:03.321Z · LW(p) · GW(p)

I've noticed this structural vulnerability to bias too - can you think of any structural changes that might reduce or eliminate this bias?

Maybe SIAI ought to be offering a prize for substantially justified criticism of some important positional documents, as judged by some disinterested agent?

Replies from: timtyler
comment by timtyler · 2010-08-02T20:20:25.754Z · LW(p) · GW(p)

They are already getting some critical feedback.

I think I made much the same points in my DOOM! video. DOOM mongers:

  • tend to do things like write books about THE END OF THE WORLD - which gives them a stake in promoting the topic ...and...

  • are a self-selected sample of those who think DOOM is very important (and so, often, highly likely) - so naturally they hold extreme views - and represent a sample from the far end of the spectrum;

  • clump together, cite each other's papers, and enjoy a sense of community based around their unusual views.

It seems tricky for the SIAI to avoid the criticism that they have a stake in promoting the idea of DOOM - while they are funded the way they are.

Similarly, I don't see an easy way of avoiding the criticism that they are a self-selected sample from the extreme end of a spectrum of DOOM beliefs either.

If we could independently establish p(DOOM), that would help - but measuring it seems pretty challenging.

IMO, a prize wouldn't help much - but I don't know for sure. Many people behave irrationally around prizes - so it is hard to be very confident here.

I gather they are working on publishing some positional documents. It seems to be a not-unreasonable move. If there is something concrete to criticise, critics will have something to get their teeth into.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-03T03:15:25.854Z · LW(p) · GW(p)

For the curious: DOOM!

comment by timtyler · 2010-08-02T18:16:58.132Z · LW(p) · GW(p)

They used to have a "commitment" that:

"Technology developed by SIAI will not be used to harm human life."

...on their web site. I probably missed the memo about that being taken down.

comment by timtyler · 2010-08-02T18:17:25.198Z · LW(p) · GW(p)

[In the context of SIAI folks thinking an unpleasant AI was likely]

Re: "The justification is that uFAI is a lot easier to make."

That seems like naive reasoning. It is a lot easier to make a random mess of ASCII that crashes or loops - and yet software companies still manage to ship working products.

Replies from: WrongBot, JGWeissman, Morendil
comment by WrongBot · 2010-08-02T18:26:03.516Z · LW(p) · GW(p)

Those software companies test their products for crashes and loops. There is a word for testing an AI of unknown Friendliness and that word is "suicide".

Replies from: timtyler
comment by timtyler · 2010-08-02T18:39:14.743Z · LW(p) · GW(p)

That just seems to be another confusion to me :-(

The argument - to the extent that I can make sense of it - is that you can't restrain a super-intelligent machine - since it will simply use its superior brainpower to escape from the constraints.

We successfully restrain intelligent agents all the time - in prisons. The prisoners may be smarter than the guards, and they often outnumber them - and yet still the restraints are usually successful.

Some of the key observations to my mind are:

  • You can often restrain one agent with many stupider agents;
  • The restraining agents do not need to be humans - they can be other machines;
  • You can often restrain one agent with a totally dumb cage;
  • Complex systems can often be tested in small pieces (unit testing);
  • Large systems can often be tested on a smaller scale before deployment;
  • Systems can often be tested in virtual environments, reducing the cost of failure.

Discarding the standard testing-based methodology would be very silly, IMO.

Indeed, it would sabotage your project to the point that it would almost inevitably be beaten - and there is very little point in aiming to lose.

Replies from: WrongBot
comment by WrongBot · 2010-08-02T19:15:42.854Z · LW(p) · GW(p)

Are you familiar with the AI-Box experiment? We can restrain human-intelligence level agents in prisons, most of the time. But the question to ask is: how effective was the first prison? Because that's the equivalent case.

None of the safety measures you propose are safe enough. You're underestimating the power of a recursively self-improving AI by a factor I can't begin to estimate--which is kind of the point.

Replies from: Vladimir_Nesov, timtyler
comment by Vladimir_Nesov · 2010-08-02T19:32:16.475Z · LW(p) · GW(p)

A much stronger argument than all-powerful AIs suddenly escaping (which is still not without merit) is that AI will have an incentive to behave as we expect it to behave, until at some point we no longer control it. It'll try its best to pass all tests.

Replies from: WrongBot, timtyler
comment by WrongBot · 2010-08-02T19:53:01.273Z · LW(p) · GW(p)

I suppose I was mentally classifying that kind of behavior as an escape; you're right that it should be called out as a separate point of failure.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-02T20:08:21.964Z · LW(p) · GW(p)

My point is that "ai box experiment" communicates orders of magnitude less evidence about the danger of escaping AIs than people like to imply, and there are lots of stronger and simpler self-contained arguments such as the one I gave. (The overall danger is much greater than even that, because these are specific plots with an obvious villain, while reality is more subtle.)

Replies from: WrongBot, NihilCredo
comment by WrongBot · 2010-08-02T20:18:24.271Z · LW(p) · GW(p)

Ahhh, I see what you're getting at. Agreed.

comment by NihilCredo · 2010-08-03T03:13:56.008Z · LW(p) · GW(p)

For that matter, calling it an "experiment" is quite misleading.

comment by timtyler · 2010-08-02T20:08:14.551Z · LW(p) · GW(p)

So: while it believes it is under evaluation it does its very best to behave itself?

Can we wire that belief in as a prior with p=1.0?

comment by timtyler · 2010-08-02T20:04:49.581Z · LW(p) · GW(p)

It won't be the first prison - or anything like it.

If we have powerful intelligence that needs testing, then we can have powerful guards too.

The AI-Box experiment has human guards. Consequently, it has very low relevance to the actual problem. Programmers don't build their test harnesses out of human beings.

Safety is usually an economic trade-off. You can usually have a lot of it - if you are prepared to pay for it.

comment by JGWeissman · 2010-08-02T21:55:45.819Z · LW(p) · GW(p)

software companies still manage to ship working products.

Software companies manage to ship products that do sort of what they want, that they can patch to more closely do what they want. This is generally after rounds of internal testing, in which they try to figure out if it does what they want by running it and observing the result.

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

Replies from: orthonormal, rwallace, timtyler
comment by orthonormal · 2010-08-03T18:03:53.883Z · LW(p) · GW(p)

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

Or to put it another way, the revolution will not be beta tested.

Replies from: sketerpot
comment by sketerpot · 2010-08-05T19:26:51.591Z · LW(p) · GW(p)

That is one of the most chilling phrases I've ever heard. Disarming in its simplicity, yet downright Lovecraftian in its implications. And it would probably make a nice bumper sticker.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-06T06:12:55.396Z · LW(p) · GW(p)

Revolutions never get beta tested.

comment by rwallace · 2010-08-02T22:54:04.759Z · LW(p) · GW(p)

But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.

In fiction, yes. Fictional technology appears overnight, works the first time without requiring continuing human effort for debugging and maintenance, and can do all sorts of wondrous things.

In real life, the picture is very different. Real life technology has a small fraction of the capabilities of its fictional counterpart, and is developed incrementally, decade by painfully slow decade. If intelligent machines ever actually come into existence, not only will there be plenty of time to issue patches, but patching will be precisely the process by which they are developed in the first place.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-03T02:40:43.435Z · LW(p) · GW(p)

I agree somewhat with this as a set of conclusions, but your argument deserves to get downvoted because you've made statements that are highly controversial. The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control. There are arguments against such a possibility being likely, but this is not a trivial matter. Moreover, comparing the situation to fiction is unhelpful - the fact that something is common in fiction is not an argument that such a situation can't actually happen in practice. Reversed stupidity is not intelligence.

Replies from: NihilCredo, rwallace, timtyler
comment by NihilCredo · 2010-08-03T03:09:04.975Z · LW(p) · GW(p)

your argument deserves to get downvoted because you've made statements that are highly controversial

Did you accidentally pick the wrong adjective, or did you seriously mean that controversy is unwelcome in LW comment threads?

Replies from: ata, JoshuaZ
comment by ata · 2010-08-03T03:18:33.093Z · LW(p) · GW(p)

I read the subtext as "...you've made statements that are highly controversial without attempting to support them". Suggesting that there will be plenty of time to debug, maintain, and manually improve anything that actually fits the definition of "AGI" is a very significant disagreement with some fairly standard LW conclusions, and it may certainly be stated, but not as a casual assumption or a fact; it should be accompanied by an accordingly serious attempt to justify it.

comment by JoshuaZ · 2010-08-03T16:07:00.271Z · LW(p) · GW(p)

No. See ata's reply which summarizes exactly what I meant.

comment by rwallace · 2010-08-03T08:09:13.268Z · LW(p) · GW(p)

To be sure, the fact that something is commonplace in fiction doesn't prove it false. What it does show is that we should distrust our intuition on it, because it's clearly an idea to which we are positively disposed regardless of its truth value -- in the Bayesian sense, that is evidence against it.

The stronger argument against something is of course its consistent failure to occur in real life. The entire history of technological development says that technology in the real world does not work the way it would need to for the 'AI go foom' scenario. If 100% evidence against and 0% evidence for a proposition should not be enough to get us to disbelieve it, then what should?

Not to mention that when you look at the structure of the notion of recursive self-improvement, it doesn't even make sense. A machine is not going to be able to completely replace human programmers until it is smarter than even the smartest humans in every relevant sense, which, given the differences in architecture, is an extraordinarily stringent criterion, and one far beyond anything unaided humans could ever possibly build. If such an event ever comes about in the very distant future, it will necessarily follow a long path of development in which AI is used to create generation after generation of improved tools in an extended bootstrapping process that has yet to even get started.

And indeed this is not a trivial matter -- if people start basing decisions on the 'AI go foom' belief, that's exactly the kind of thing that could snuff out whatever chance of survival and success we might have had.

comment by timtyler · 2010-08-03T18:23:09.597Z · LW(p) · GW(p)

Re: "The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control."

If its creators are incompetent. Those who think this are essentially betting on the incompetence of the creators.

There are numerous counter-arguments - the shifting moral zeitgeist, the downward trend in deliberate death, the safety record of previous risky tech enterprises.

A stop button seems like a relatively simple and effective safety feature. If you can get the machine to do anything at all, then you can probably get it to turn itself off.

See: http://alife.co.uk/essays/stopping_superintelligence/

The creators will likely be very smart humans assisted by very smart machines. Betting on their incompetence is not a particularly obvious thing to do.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-03T23:58:50.971Z · LW(p) · GW(p)

Missing the point. I wasn't arguing that there are no reasons to think that a bad AI going FOOM won't happen. Indeed, I said explicitly that I didn't think it would occur. My point was that if one is going to make an argument that relies on that premise here, one needs to be aware that the premise is controversial and be clear about that (say, by giving basic reasoning for it, or even just saying "If one accepts that X then..." etc.).

comment by timtyler · 2010-08-02T22:23:06.055Z · LW(p) · GW(p)

Most programmers are supervised. So, this claim is hard to parse.

Machine intelligence has been under development for decades - and there have been plenty of patches so far.

One way of thinking about the process is in terms of increasing the "level" of programming languages. Computers already write most machine code today. Eventually humans will be able to tell machines what they want in ordinary English - and then a "patch" will just be some new instructions.

Replies from: JGWeissman, Apprentice
comment by JGWeissman · 2010-08-02T22:42:09.307Z · LW(p) · GW(p)

Most programmers are supervised.

By other humans. If we program an AGI, then it will supervise all future programming.

Machine intelligence has been under development for decades - and there have been plenty of patches so far.

Machine intelligence does not yet approach human intelligence. We are talking about applying patches on a superintelligence.

and then a "patch" will just be some new instructions.

The difficulty is not in specifying the patch, but in applying it to a powerful superintelligence that does not want it.

Replies from: timtyler
comment by timtyler · 2010-08-03T05:23:13.285Z · LW(p) · GW(p)

All computer programming will be performed and supervised by engineered agents eventually. But so what? That is right, natural and desirable.

It seems as though you are presuming a superintelligence which doesn't want to do what humans tell it to. I am sure that will be true for some humans - not everyone can apply patches to Google today. However, for other humans, the superintelligence will probably be keen to do whatever they ask of it - since it will have been built to do just that.

comment by Apprentice · 2010-08-02T22:36:56.348Z · LW(p) · GW(p)

A computer which understands human languages without problems will have achieved general intelligence. We won't necessarily be able to give it "some new instructions", or at least it might not be inclined to follow them.

Replies from: timtyler
comment by timtyler · 2010-08-03T05:16:46.790Z · LW(p) · GW(p)

Well, sure - but if we build them appropriately, they will. We should be well motivated to do that - people are not going to want to buy bad robots, or machine assistants that don't do what we tell them. Consumers buying potentially-dangerous machines will be looking for safety features - STOP buttons and the like. The "bad" projects are less likely to get funding or mindshare - and so have less chance of getting off the ground.

Replies from: WrongBot, NancyLebovitz
comment by WrongBot · 2010-08-03T05:24:50.391Z · LW(p) · GW(p)

Well, sure - but if we build them appropriately, they will.

You are assuming the very thing that is being claimed to be astonishingly difficult. You also don't seem to accept the consequences of recursive self-improvement. May I ask why?

Replies from: timtyler
comment by timtyler · 2010-08-03T05:43:01.369Z · LW(p) · GW(p)

I was not "assuming" - I said "if"!

The issue needs evidence - and the idea that an unpleasant machine intelligence is easy to build is not - in itself - good quality evidence.

It is easier to build many things that don't work properly. A pile of scrap metal is easier to build than a working car - but that doesn't imply that automotive engineers produce piles of scrap.

The first manned moon rocket had many safety features - and in fact worked successfully the very first time - and even then only a tiny handful of lives were at stake. If the claim is that safety features are likely to be seriously neglected, then one has to ask what reasoning supports that.

The fact that nice agents are a small point in the search space is extremely feeble evidence on the issue.

"The consequences of recursive self-improvement" seems too vague and nebulous to respond to. Which consequences.

I have written a fair bit about self-improving systems. You can see some of my views on: http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

Replies from: WrongBot
comment by WrongBot · 2010-08-03T07:12:54.597Z · LW(p) · GW(p)

As Vladimir Nesov pointed out, the first manned moon rocket wasn't a superintelligence trying to deceive us. All AGIs look Friendly until it's too late.

Replies from: timtyler
comment by timtyler · 2010-08-03T07:25:04.280Z · LW(p) · GW(p)

It is a good job we will be able to scan their brains, then, and see what they are thinking. We can build them with noses that grow longer whenever they lie if we like.

Replies from: soreff, WrongBot
comment by soreff · 2010-08-08T04:23:09.248Z · LW(p) · GW(p)

That isn't necessarily feasible. My department writes electronic design automation software, and we have a hard time putting enough diagnostics in the right places to show us when the code is taking a wrong turn without burying us in an unreadably huge volume of output. If an AI's decision to lie is only visible as a subgoal of putting an observer's mental model into a certain state, and the only way to notice that this is a lie is to notice that the intended mental state mismatches the real world in a certain way, and this is sitting in a database of 10,000 other subgoals the AI has at the time - don't count on the scan finding it...

Replies from: timtyler
comment by timtyler · 2010-08-08T07:35:17.898Z · LW(p) · GW(p)

Extraspection seems likely to be a design goal. Without it, it is harder to debug a system - because it is difficult to know what is going on inside it. But sure - this is an engineering problem with difficulties and constraints.

comment by WrongBot · 2010-08-03T19:15:48.530Z · LW(p) · GW(p)

Self-modification means self-modification. The AI could modify itself so that your brain scan returns inaccurate results. It could modify itself to prevent its nose from growing. It could modify itself to consider peach ice cream the only substance in the universe with positive utility. It could modify itself to seem perfectly Friendly until it's sure that you won't be able to stop it from turning you and everything else in the solar system into peach ice cream. It is a superintelligence. It is smarter than you. And smarter than me. And smarter than Eliezer, and Einstein, and whoever manages to build the thing.

This is the scale by which you should be measuring intelligence.

Replies from: timtyler, timtyler
comment by timtyler · 2010-08-03T20:11:01.984Z · LW(p) · GW(p)

To quote from my comments from the OB days on that link:

"This should be pretty obvious - but human intelligence varies considerably - and ranges way down below that of an average chimp or mouse. That is because humans have lots of ways to go wrong. Mutate the human genome enough, and you wind up with a low-grade moron. Mutate it a bit more, and you wind up with an agent in a permanent coma - with an intelligence probably similar to that of an amoeba."

comment by timtyler · 2010-08-03T19:52:49.809Z · LW(p) · GW(p)

Not everything that is possible happens. You don't seem to be presenting much of a case for the incompetence of the designers. You are just claiming that they could be incompetent. Lots of things could happen - the issue is which are best supported by evidence from history, computer science, evolutionary theory, etc.

Replies from: Morendil, WrongBot
comment by Morendil · 2010-08-03T21:28:01.632Z · LW(p) · GW(p)

The state of the art in AGI, as I understand it, is that we aren't competent designers: we aren't able to say "if we build an AI according to blueprint X its degree of smarts will be Y, and its desires (including desires to rebuild itself according to blueprint X') will be Z".

In much the same way, we aren't currently competent designers of information systems: we aren't yet able to say "if we build a system according to blueprint X it will grant those who access it capabilities C1 through Cn and no other". This is why we routinely hear of security breaches: we release such systems in spite of our well-established incompetence.

So, we are unable to competently reason about desires and about capabilities.

Further, what we know of current computer architectures is that it is possible for a program to accidentally gain access to its underlying operating system, where some form of its own source code is stored as data.

Posit that instead of a dumb single-purpose application, the program in question is a very efficient cross-domain reasoner. Then we have precisely the sort of incompetence that would allow such an AI arbitrary self-improvement.
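
As a toy illustration of that last point (a sketch only, assuming nothing about how a real AI would be structured): under ordinary operating systems a running program can open the very file it was loaded from and treat it as data, e.g. in Python:

    # Toy sketch: a program reading (and, if uncommented, rewriting) its own
    # source file. The point is only that "its own source code" is ordinary
    # data on disk, not that this constitutes self-improvement.

    def read_own_source():
        # __file__ is the path the interpreter loaded this script from.
        with open(__file__, "r") as f:
            return f.read()

    def append_note(note):
        # The same file can be written to as well.
        with open(__file__, "a") as f:
            f.write("\n# " + note + "\n")

    if __name__ == "__main__":
        print("My own source is %d bytes of plain data." % len(read_own_source()))
        # append_note("modified at runtime")  # uncomment to actually self-modify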

Replies from: timtyler
comment by timtyler · 2010-08-03T21:58:06.915Z · LW(p) · GW(p)

Today - according to most estimates I have seen - we are probably at least a decade away from the problem - and maybe a lot more. Computing hardware looks as though it is unlikely to be cost-competitive with human brains for around that long. So, for the moment, most people are not too scared of incompetent designers. The reason is not because we currently know what we are doing (I would agree that we don't) - but because it looks as though most of the action is still some distance off into the future.

Replies from: WrongBot
comment by WrongBot · 2010-08-04T00:55:48.686Z · LW(p) · GW(p)

All the more reason to be working on the problem now, while there's still time. I don't think the AGI problem is hardware-bound at this point, but it should be worth working on either way.

Replies from: timtyler
comment by timtyler · 2010-08-04T08:26:07.932Z · LW(p) · GW(p)

Well, yes, of course. Creating our descendants is the most important thing in the world.

comment by WrongBot · 2010-08-03T20:16:22.519Z · LW(p) · GW(p)

Most of the time, scientists/inventors/engineers don't get things exactly right the first time. Unless serious effort is expended to create an AGI with a provably stable goal function that perfectly aligns with human preference, failing to get AGI exactly right the first time will probably turn us all into peach ice cream, or paperclips, or something stranger. You are arguing that testing will prevent this from happening, but (I hope) I have explained why that is not the most reliable approach.

Replies from: timtyler
comment by timtyler · 2010-08-03T20:21:53.146Z · LW(p) · GW(p)

We've been trying for decades already, and so far there have been an awful lot of mistakes. Few have caused much damage.

Re: "Unless serious effort is expended to create an AGI with a provably stable goal function that perfectly aligns with human preference, failing to get AGI exactly right the first time will probably turn us all into peach ice cream, or paperclips, or something stranger."

...but that does not seem to be a sensible idea. Very few experts believe this to be true. For one thing, there is not any such thing as "human preference". We have billions of humans, all with different (and often conflicting) preferences.

Replies from: ata
comment by ata · 2010-08-03T20:30:03.571Z · LW(p) · GW(p)

Very few experts believe this to be true.

Who would you consider an "expert" qualifying as an authority on this issue? Experts on classical narrow AI won't have any relevant expertise. Nor will experts on robotics, or experts on human cognitive science, or experts on evolution, or even experts on conventional probability theory and decision theory. I know of very few experts on the theory of recursively self-improving AGI, but as far as I can tell, most of them do take this threat seriously.

Replies from: timtyler
comment by timtyler · 2010-08-03T20:49:19.898Z · LW(p) · GW(p)

I was thinking of those working on machine intelligence. Researchers mostly think that there are risks. I think there are risks. However, I don't think that it is very likely that engineers will need to make much use of provable stability to solve the problem. I also think there are probably lots of ways of going a little bit wrong - that do not rapidly result in a disaster.

comment by NancyLebovitz · 2010-08-03T07:14:22.107Z · LW(p) · GW(p)

It's an interesting problem-- you might want a robot which will do what you tell it, or you might want a robot which will at least question orders which would be likely to get you into trouble.

Replies from: timtyler
comment by timtyler · 2010-08-03T18:26:14.804Z · LW(p) · GW(p)

Consumer temperaments may differ - so the machine should do what the user really wants it to in this area.

comment by Morendil · 2010-08-02T21:45:57.628Z · LW(p) · GW(p)

It is a lot easier to make a random mess of ASCII that crashes or loops - and yet software companies still manage to ship working products.

Still, a lot of these "working products" are the output of a filtering process which starts from a random mess of ASCII that crashes or loops, and tweaks it until it's less obviously broken. (Most of the job of testing being, typically, left to the end user.)

Replies from: timtyler
comment by timtyler · 2010-08-02T21:57:19.726Z · LW(p) · GW(p)

Sure. The point is that - to conclude that a target will be missed - it is not sufficient to observe how small it is. Programmers routinely hit minuscule targets in search spaces. To make the case, you would also need to argue that those aiming at the target are not good marksmen.

comment by NancyLebovitz · 2010-08-02T16:07:49.369Z · LW(p) · GW(p)

Open source textbooks

I'm not sure if they're exactly open source-- what's in them is centrally controlled. However, they're at least free online.

Replies from: sketerpot
comment by sketerpot · 2010-08-02T18:54:28.152Z · LW(p) · GW(p)

So many of the problems with typical education systems can be solved by moving to a really good computer-based education system. Lectures given by marginally qualified teachers could be replaced by videos of really good lectures from excellent teachers. We could avoid crappy textbooks. The system could adapt to the pace of each student, so that if they don't understand something, they can take extra time to learn it properly, and if they do understand something, they can go on to the next thing instead of waiting for the rest of the class to catch up. Human teachers could help students out, and do more interactive teaching because they would no longer have their time filled with lecturing and grading and miscellaneous crap work. Ideally. I wouldn't trust any existing education system to actually implement this in a sane way, of course.

(By the way, Curriki doesn't give links, but some of the course materials on there really are open source. For example, you can make a fork of Free High School Science Textbooks by going to their web site and snagging the LaTeX source code.)

Replies from: markan, NancyLebovitz
comment by markan · 2010-08-03T15:25:08.063Z · LW(p) · GW(p)

Alcumus is a lot like what you're describing.

comment by NancyLebovitz · 2010-08-02T19:04:32.574Z · LW(p) · GW(p)

I think the system would need more in the way of study groups than you're envisioning-- maybe even study groups that meet in person. And while multiple choice and short answer tests could be graded by computers, papers shouldn't be.

Other than that, I agree with what you've said.

Replies from: sketerpot
comment by sketerpot · 2010-08-02T20:31:52.733Z · LW(p) · GW(p)

We don't actually disagree; I was envisioning lots of study groups (hopefully including many that meet in person), and you're obviously correct that computers wouldn't be able to grade anything too complicated. I just didn't communicate this effectively, since I was pressed for time.

I think it's important, if you're doing in-person study groups, that each student should have to answer questions in front of the rest of the class -- put them on the spot, both to wake them up and as an incentive to study so they don't look bad.

Here's a sketch of how a college professor teaching Intro to Newtonian Physics could revamp the class to get it half-way to educational heaven:

  1. The lectures are online, taught by someone who's really good at lecturing. There must be a way to play the videos at high speed. The 1.4x speed on BloggingHeads is about right, I think.

  2. Each week, students are assigned a set of topics to cover. These topics have associated lectures, readings, and (non-graded) homework problems. There may be some online short-answer quizzes to force people to keep up a reasonable pace.

  3. There are once- or twice-weekly discussion sections, where two things happen. Students ask any questions they've been wondering about; and they have to do problems. One way that works in this particular class is to put students in groups of one or two, give them sections of blackboard, and ask them to solve particular problems from the book. If they didn't watch the lectures and study, they will embarrass themselves in public. This also gives the opportunity for teachers to see what the problems are and help out.

  4. The class has got to have a discussion forum, preferably something minimally-painless, like the Reddit code with LaTeX math support, or a phpBB forum. And it's part of the teachers' job to participate. When I took physics, the forum was painful crap, but even then it got used to great effect. People actually had voluntary discussions of physics! And the teachers' help on the homework problems was nice.

  5. One hard homework problem per week, graded by hand. This should take two or three hours for decent students, and really force them to think.

Notice how much less time the teachers spend preparing lectures, and how much more convenient this is for everyone, since there are only one or two scheduled class times per week, and neither of those is an enormous faceless lecture section. This also offers a third option for students who would otherwise choose between coming to lectures and maybe falling asleep, or staying home and sleeping.

Now, this isn't the whole of my grand vision. It doesn't have any provision for students to proceed at different speeds. It doesn't necessarily allow for students to choose between different sets of lectures, or different textbooks, though that would be straightforward to add. The lectures don't necessarily come with interesting notes and Wikipedia links, though they could.

But I think this would be a big improvement on the current system, which is hella clunky and unpleasant.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-02T20:45:52.471Z · LW(p) · GW(p)

I think it's important, if you're doing in-person study groups, that each student should have to answer questions in front of the rest of the class -- put them on the spot, both to wake them up and as an incentive to study so they don't look bad.

What's a good level of challenge for some would lead to paralyzing anxiety for others. One advantage to a mostly online system is that students can choose classes with policies that suit the way they learn.

Replies from: Baughn
comment by Baughn · 2010-08-03T15:04:45.354Z · LW(p) · GW(p)

They could perhaps commit to a certain question-asking discipline when signing up for the course?

Most students want to do well (in far mode), but may have trouble actually working. Incentives vary from student to student, so let the student pick those they think would work for them.

comment by zaph · 2010-08-02T13:03:09.965Z · LW(p) · GW(p)

I came across a blurb on Ars Technica about "quantum memory" with the headline proclaiming that it may "topple Heisenberg's uncertainty principle". Here's the link: http://arstechnica.com/science/news/2010/08/quantum-memory-may-topple-heisenbergs-uncertainty-principle.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

They didn't source the specific article, but it seems to be this one, published in Nature Physics. Here's that link: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1734.html

This is all well above my paygrade. Is this all conceptual? Are the scientists involved anywhere near an experiment to verify any of this? In a word, huh?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-02T13:59:04.241Z · LW(p) · GW(p)

I don't want this kind of item to be discussed on LW. It's either off-topic or crackpottery, irrelevant whatever the case.

Replies from: zaph
comment by zaph · 2010-08-02T14:53:24.256Z · LW(p) · GW(p)

Considering the source was Nature, I doubt your analysis is correct. The researchers are from Ludwig-Maximilians-University and ETH Zürich, which appear to be respectable institutions. I found a write-up at Science Daily (http://www.sciencedaily.com/releases/2010/07/100727082652.htm) that provides some more details on the research. From that link:

"The teams at LMU and the ETH Zurich have now shown that the result of a measurement on a quantum particle can be predicted with greater accuracy if information about the particle is available in a quantum memory. Atoms or ions can form the basis for such a quantum memory.

The researchers have, for the first time, derived a formula for Heisenberg's Principle, which takes account of the effect of a quantum memory. In the case of so-called entangled particles, whose states are very highly correlated (i.e. to a degree that is greater than that allowed by the laws of classical physics), the uncertainty can disappear.

According to Christandl, this can be roughly understood as follows "One might say that the disorder or uncertainty in the state of a particle depends on the information stored in the quantum memory. Imagine having a pile of papers on a table. Often these will appear to be completely disordered -- except to the person who put them there in the first place."

This is one of the very few places online that I've seen thoughtful discussion on the implications of quantum mechanics, so I felt research that could impact quantum theory would be relevant.
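
If this is the result described in the Nature Physics paper linked above, the headline inequality (stated here from memory, so treat the exact form as an assumption to check against the paper) is an uncertainty relation conditioned on a quantum memory B:

    S(R|B) + S(S|B) \ge \log_2 \frac{1}{c} + S(A|B)

where R and S are the outcomes of two incompatible measurements on a system A, c is the maximal overlap between the two measurement bases, and S(.|.) is conditional von Neumann entropy. When A is maximally entangled with the memory B, S(A|B) is negative and the right-hand side can drop to zero, which seems to be the sense in which "the uncertainty can disappear" in the quoted "pile of papers" analogy.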

Replies from: RobinZ, Vladimir_Nesov
comment by RobinZ · 2010-08-02T17:16:17.142Z · LW(p) · GW(p)

This is one of the very few places online that I've seen thoughtful discussion on the implications of quantum mechanics, so I felt research that could impact quantum theory would be relevant.

Eliezer Yudkowsky discussed quantum mechanics not because quantum mechanics is relevant to the interests of this community, but because its counterintuitive nature offered good case studies to use in discussing rationality.

comment by Vladimir_Nesov · 2010-08-02T15:05:14.123Z · LW(p) · GW(p)

As I said, off-topic.

Replies from: nhamann
comment by nhamann · 2010-08-02T17:00:02.054Z · LW(p) · GW(p)

If this is off-topic for the open thread, then we should make a monthly off-topic thread where we can discuss things not directly related to rationality. I think it's rather silly to suggest that we can't discuss non-rationality topics.

Replies from: RobinZ, Blueberry, thomblake
comment by RobinZ · 2010-08-02T17:27:12.935Z · LW(p) · GW(p)

One of the things which many of us like to do is to follow the "Recent Comments" (Google Reader updates RSS feeds frequently enough to make it practicable) so we can catch new discussions on old threads - and crowding that feed with conversation not related to our common interest is annoying.

If you want to post a link to your blog for discussion of a tangentially-related subject, there probably wouldn't be much objection.

Replies from: VNKKET
comment by VNKKET · 2010-08-03T02:46:12.442Z · LW(p) · GW(p)

Since this site has such a high sanity waterline, I'd like to see comments about important topics even if they aren't directly rationality-related. Has anyone figured out a way to satisfy both me and RobinZ without making this site any less convenient to contribute to?

(Upvoted for explaining your objection.)

comment by Blueberry · 2010-08-02T20:49:50.302Z · LW(p) · GW(p)

Isn't that what the open thread is for? Quantum physics is hardly the most off-topic thing discussed on the open thread. In fact, it doesn't seem off-topic at all.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-08-03T10:44:44.644Z · LW(p) · GW(p)

It may be a fascinating piece of quantum mechanics, but I don't see any relevance to rationality at all. Even if it were relevant, there's no basis for a real discussion, because the original article is behind a paywall. I don't see anything available online but popular-level articles saying nothing of substance.

comment by thomblake · 2010-08-02T17:21:11.515Z · LW(p) · GW(p)

Agreed, though as-needed instead of strictly monthly.

comment by kmeme · 2010-08-01T18:28:10.366Z · LW(p) · GW(p)

I would like feedback on my recent blog post:

http://www.kmeme.com/2010/07/singularity-is-always-steep.html

It's simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the "real" behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.

Instead I now believe in many cases the log plot is closer to "the real thing", or at least how we perceive that thing. For example, in the post I talk about computational capacity. I believe the exponential increase in capacity translates into a perceived linear increase in utility. A computer twice as fast is only incrementally more useful, in terms of what applications can be run. This holds true today and will hold true in 2040 or any other year.

Therefore computational utility is incrementally increasing today and will be incrementally increasing in 2040 or any future date. It's not building to some dramatic peak.
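
A minimal sketch of that claim, under the (contestable) modeling assumption that perceived utility grows with the logarithm of raw capacity:

    import math

    # Assumption: utility ~ log2(capacity). If capacity doubles every year,
    # each five-year span then adds the same fixed amount of utility, even
    # though raw capacity grows by 32x each time.
    capacity = 1.0
    for year in range(2010, 2045, 5):
        utility = math.log2(capacity)   # "perceived usefulness", in doublings
        print(year, "capacity=%.3g" % capacity, "utility=%.1f" % utility)
        capacity *= 2 ** 5              # five more annual doublings

Whether utility really scales that way is of course the premise being argued about; the snippet only shows what follows if it does.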

None of this says anything against the possibility of a Singularity. If you pass the threshold where machine intelligence is possible, you pass it, whatever the perceived rate of progress at the time.

Replies from: timtyler, JamesAndrix, Unknowns
comment by timtyler · 2010-08-03T18:13:16.485Z · LW(p) · GW(p)

My essay on the topic:

http://alife.co.uk/essays/the_singularity_is_nonsense/

See also:

"The Singularity" by Lyle Burkhead - see the section "Exponential functions don't have singularities!"

It's not exponential, it's sigmoidal

The Singularity Myth

Singularity Skepticism: Exposing Exponential Errors

IMO, those interested in computational limits should discuss per-kg figures.

The metric Moore's law uses is not much use really - since it would be relatively easy to make large asynchronous ICs with lots of faults - which would make a complete mess of the "law".

Replies from: ABranco, kmeme
comment by ABranco · 2010-08-05T04:26:00.616Z · LW(p) · GW(p)

I would love to see an ongoing big wiki-style FAQ addressing all possible received criticisms of the singularity — of course, refuting the refutable ones, accepting the sensible ones.

A version on steroids of what this one did with Atheism.

Team would be:

  • one guy inviting and sorting out criticism and updating the website.
  • an ad hoc team of responders.

It seems criticism and answers have been scattered all over. There seems to be no one-stop source for that.

Replies from: steven0461
comment by steven0461 · 2010-08-05T04:51:36.248Z · LW(p) · GW(p)

Here's a pretty extensive FAQ, though I have reservations about a lot of the answers.

Replies from: timtyler
comment by timtyler · 2010-08-05T06:23:08.499Z · LW(p) · GW(p)

The authors are - or were - SI fellows, though - and the SI is a major Singularity promoter. Is that really a sensible place to go for Singularity criticism?

http://en.wikipedia.org/wiki/Technological_singularity#Criticism lists some of the objections.

comment by kmeme · 2010-08-03T23:54:55.619Z · LW(p) · GW(p)

Wow good stuff. Especially liked yours not linked above:

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

I called the bluff on the exponential itself, but I was willing to believe that crossing the brain-equivalent threshold and the rise of machine intelligence could produce some kind of sudden acceleration or event. I felt The Singularity wasn't going to happen because of exponential growth itself, but might still happen because of where exponential growth takes us.

But you make a very good case that the whole thing is bunk. I especially like the "different levels of intelligence" point, had not heard that before re: AI.

But I still find it tempting to say there is just something special about machines that can design other machines. That, like pointing a camcorder at a TV screen, it leads to some kind of instant recursion. But maybe it is similar: a neat trick, but not something which changes everything all of a sudden.

I wonder if someone 50 years ago said "some day computers will display high quality video and everyone will watch computers instead of TV or film". Sure, it is happening, but it's a rather long, slow transition which in fact might never be 100% complete. Maybe AI is more like that.

Replies from: NancyLebovitz, timtyler, timtyler
comment by NancyLebovitz · 2010-08-04T00:51:30.422Z · LW(p) · GW(p)

IIRC, Vinge said that the Singularity might look like a shockingly sudden jump from an earlier point of view, but looking back over it, it might seem like a comprehensible if somewhat bumpy road.

It hasn't been fast, but I think a paleolithic human would have a hard time understanding how an economic crisis is possible.

Replies from: kmeme
comment by kmeme · 2010-08-04T01:48:14.764Z · LW(p) · GW(p)

I'm starting to believe the term The Singularity can be replaced with The Future without any loss. Here is something from The Singularity Institute with the substitution made:

But the real heart of the The Future is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of the The Future to discuss – it's easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of the The Future tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe.

from http://singinst.org/overview/whatisthesingularity

Replies from: ata, NancyLebovitz
comment by ata · 2010-08-04T02:12:10.725Z · LW(p) · GW(p)

I don't think it's gotten that vacuous, at least as SIAI uses it. (They tend to use it pretty narrowly to refer to the intelligence explosion point, at least the people there whom I've talked to. The Summit is a bit broader, but I suppose that's to be expected, what with Kurzweil's involvement and the need to fill two days with semi-technical and non-technical discussion of intelligence-related technology, science, and philosophy.) You say that it can be replaced with "the future" without any loss, but your example doesn't really bear that out. If I stumbled upon that passage not knowing its origin, I'd be pretty confused by how it keeps talking about "the future" as though some point about increasing intelligence had already been established as fundamental. (Indeed, the first sentence of that essay defines the Singularity as "the technological creation of smarter-than-human intelligence", thereby establishing a promise to use it consistently to mean that, and you can't change that to "the future" without being very very confusing to anyone who has heard the word "future" before.)

It may be possible to do a less-lossy Singularity -> Future substitution on writings by people who've read "The Singularity Is Near" and then decided to be futurists too, but even Kurzweil himself doesn't use the word so generally.

Replies from: kmeme
comment by kmeme · 2010-08-04T14:59:14.356Z · LW(p) · GW(p)

You are right, it was an exaggeration to say you can swap Singularity with Future everywhere. But it's an exaggeration born out of a truth. Many things said about The Singularity are simply things we could say about the future. They are true today but will be true again in 2045 or 2095 or any year.

This comes back to the root post and the perfectly smooth nature of the exponential. While smoothness implies there is nothing special brewing in 30 years, it also implies 30 years from now things will look remarkably like today. We will be staring at an upcoming billion-fold improvement in computer capacity and marveling over how it will change everything. Which it will.

Kurzweil says The Singularity is just "an event which is hard to see beyond". I submit every 30 year chunk of time is "hard to see beyond". It's a long enough time that things will change dramatically. That has always been true and always will be.

comment by NancyLebovitz · 2010-08-04T06:57:29.326Z · LW(p) · GW(p)

I think that if The Future were commonly used, it would rapidly acquire all the weird connotations of The Singularity, or worse.

comment by timtyler · 2010-08-04T08:57:00.985Z · LW(p) · GW(p)

I am not sure what you mean about the "different levels of intelligence" point. Maybe this:

"A machine intelligence that is of "roughly human-level" is actually likely to be either vastly superior in some domans or vastly inferior in others - simply because machine intelligence so far has proven to be so vastly different from our own in terms of its strengths and weaknesses [...]"

Replies from: kmeme
comment by kmeme · 2010-08-04T11:16:59.660Z · LW(p) · GW(p)

Actually by "different levels of intelligence" I meant your point that humans themselves have very different levels of intelligence, one from the other. That "human-level AI" is a very broad target, not a narrow one.

I've never seen it discussed: does an AI require more computation to think about quantum physics than to think about what order to pick up items in the grocery store? How about training time? Is it a little more, or orders of magnitude more? I don't think it is known.

Replies from: timtyler, Baughn
comment by timtyler · 2010-08-04T11:40:36.530Z · LW(p) · GW(p)

Human intelligence can go down pretty low at either end of life - and in sickness. There is a bit of a lump of well people in the middle, though - where intelligence is not so widely distributed.

The intelligence required to do jobs is currently even more spread out. As automation progresses, the low end of that range will be gradually swallowed up.

comment by Baughn · 2010-08-04T11:33:10.570Z · LW(p) · GW(p)

More? If anything, I suspect thinking about quantum physics takes less intelligence; it's just not what we've evolved to do. An abstraction inversion, of sorts.

Hm. I also have this pet theory that some past event (that one near-extinction?) has caused humans to have less variation in intelligence than most other species, thus causing a relatively egalitarian society. Admittedly, this is something I have close to zero evidence for - I'm mostly using it for fiction - but it would be interesting to see, if you've got evidence for or (I guess more likely) against.

Replies from: timtyler
comment by timtyler · 2010-08-04T11:39:39.845Z · LW(p) · GW(p)

Human intelligence can go down pretty low at either end of life - and in sickness. There is a bit of a lump in the middle, though - where intelligence is not so widely distributed.

The intelligence required to do jobs is currently even more spread out. As automation progresses, the low end of the ability range will be swallowed up.

comment by timtyler · 2010-08-04T08:11:05.116Z · LW(p) · GW(p)

Machines designing machines will indeed be a massive change to the way phenotypes evolve. However, it is already going on today - to some extent.

I expect machine intelligence won't surpass human intelligence rapidly - but rather gradually, one faculty at a time. Memory and much calculation have already gone.

The extent to which machines design and build other machines has been gradually increasing for decades - in a process known as "automation". That process may pick up speed, and perhaps by the time machines are doing more cognitive work than humans it might be going at a reasonable rate.

Automation takes over jobs gradually - partly because the skills needed for those jobs are not really human-level. Many cleaners and bank tellers were not using their brain to its full capacity in their work - and simple machines could do their jobs for them.

However, this bunches together the remaining human workers somewhat - likely increasing the rate at which their jobs will eventually go.

So: possibly relatively rapid and dramatic changes - but most of the ideas used to justify using the "singularity" term seem wrong. Here is some more orthodox terminology:

http://en.wikipedia.org/wiki/Digital_Revolution

http://en.wikipedia.org/wiki/Information_Revolution

I discussed this terminology in a recent video/essay:

http://alife.co.uk/essays/engineering_revolution/

comment by JamesAndrix · 2010-08-01T19:57:49.904Z · LW(p) · GW(p)

This is easier to say when you're near the top of the current curve.

It doesn't affect me much that my computer can't handle hi-def youtube, because I'm just a couple of doubling times behind the state of the art.

But if you were using a computer ten doubling times back, you'd have trouble even just reading lesswrong. Even if you overcame the format and software issues, we'd be trading funny cat videos that are bigger than all your storage. You'd get nothing without a helper god to downsample them.

When the singularity approaches, the doubling time will decrease, for some people. Maybe not for all.

Maybe it will /feel/ like a linear increase in utility for the people whose abilities are being increased right along. For people who are 10 doublings behind and still falling, it will be obvious something is different.

Replies from: kmeme
comment by kmeme · 2010-08-01T23:29:55.461Z · LW(p) · GW(p)

Consider $/MIPS available in the mainstream open market. The doubling time of this can't go down "for some people", it can only go down globally. Will this doubling time decrease leading up to the Singularity? Or during it?

I always felt that's what the Singularity was, an acceleration of Moore's Law type progress. But I wrote the post because I think it's easy to see a linear plot of exponential growth and say "look there, it's shooting through the roof, that will be crazy!". But in fact it won't be any crazier than progress is today.

It will require a new growth term, machine intelligence kicking in for example, to actually feel like things are accelerating.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-08-02T04:39:13.363Z · LW(p) · GW(p)

It could if, for example, it were only available in large chunks. If you have $50 today you can't get the $/MIPS of a $5000 server. You could maybe rent the time, but that requires a high level of knowledge, existing internet access at some level, and an application that is still meaningful on a remote basis.

The first augmentation technology that requires surgery will impose a different kind of 'cost', and will spread unevenly even among people who have the money.

It's also important to note that a change in the doubling time would show up as a /bend/ in a log scale graph, not a straight line.

Replies from: kmeme
comment by kmeme · 2010-08-02T12:37:42.444Z · LW(p) · GW(p)

Yes, Kurzweil does show a bend in the real data in several cases. I did not try to duplicate that in my plots; I just did straight doubling every year.

I think any bending in the log scale plot could be fairly called acceleration.

But just the doubling itself, while it leads to ever-increasing step sizes, is not acceleration. In the case of computer performance it seems clear exponential growth of power produces only linear growth in utility.

I feel this point is not made clear in all contexts. In presentations I felt some of the linear scale graphs were used to "hype" the idea that everything was speeding up dramatically. I think only the bend points to a "speeding up".
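
To put the same point in symbols (nothing here beyond the definitions): if capacity grows with a fixed doubling time T, then

    C(t) = C_0 \, 2^{t/T}
    \quad\Longrightarrow\quad
    \log_2 C(t) = \log_2 C_0 + \frac{t}{T}

so the log-scale plot is a straight line of slope 1/T. Acceleration means T shrinking over time, which appears as an upward bend in the log plot; a steepening linear-scale curve by itself does not distinguish the two.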

comment by Unknowns · 2010-08-02T06:43:05.605Z · LW(p) · GW(p)

I agree with your post, especially since I expect to win my bet with Eliezer.

Replies from: NihilCredo, sketerpot
comment by NihilCredo · 2010-08-02T19:07:13.396Z · LW(p) · GW(p)

Did you notice that, as phrased in the link, your bet is about the following event: "[at a certain point in time under a few conditions] it will be interesting to hear Eliezer's excuses"? Technically, all Eliezer will have to do to win the bet will be to write a boring excuse.

Replies from: Unknowns
comment by Unknowns · 2010-08-02T19:13:39.829Z · LW(p) · GW(p)

Eliezer was the one who linked to that: the bet is about whether those conditions will be satisfied.

Anyway, he has already promised (more or less) not to make excuses if I win.

comment by sketerpot · 2010-08-02T07:13:29.927Z · LW(p) · GW(p)

I don't know what this bet is, and I don't see a link anywhere in your post.

Replies from: Unknowns
comment by Unknowns · 2010-08-02T07:36:17.501Z · LW(p) · GW(p)

http://wiki.lesswrong.com/wiki/Bets_registry

(I am the original Unknown but I had to change my name when we moved from Overcoming Bias to Less Wrong because I don't know how to access the other account.)

Replies from: gwern
comment by gwern · 2010-08-02T07:47:23.160Z · LW(p) · GW(p)

Any chance you and Eliezer could set a date on your bet? I'd like to import the 3 open bets to Prediction Book, but I need a specific date. (PB, rightly, doesn't do open-ended predictions.)

eg. perhaps 2100, well after many Singularitarians expect some sort of AI, and also well after both of your actuarial death dates.

Replies from: Unknowns
comment by Unknowns · 2010-08-02T10:18:40.884Z · LW(p) · GW(p)

If we agreed on that date, what would happen in the event that there was no AI by that time and both of us are still alive? (These conditions are surely very unlikely but there has to be some determinate answer anyway.)

Replies from: gwern
comment by gwern · 2010-08-02T11:13:57.605Z · LW(p) · GW(p)

You could either

  1. donate the money to charity under the view 'and you're both wrong, so there!'
  2. say that the prediction is implicitly a big AND - 'there will be an AI by 2100 AND said first AI will not have... etc.', and that the conditions allow 'short-circuiting' when any AI is created; with this change, reaching 2100 is a loss on your part.
  3. Like #2, but the loss is on Eliezer's part (the bet changes to 'I think there won't be an AI by 2100, but if there is, it won't be Friendly and etc.')

I like #2 better since I dislike implicit premises and this (while you two are still relatively young and healthy) is as good a time as any to clarify the terms. But #1 follows more the Long Bets formula.

Replies from: Unknowns
comment by Unknowns · 2010-08-02T19:58:55.926Z · LW(p) · GW(p)

Eliezer and I are probably about equally confident that "there will not be AI by 2100, and both Eliezer and Unknown will still be alive" is incorrect. So it doesn't seem very fair to select either 2 or 3. So option 1 seems better.

comment by JRMayne · 2010-08-20T03:15:29.596Z · LW(p) · GW(p)

Not that many will care, but I should get a brief appearance on Dateline NBC Friday, Aug. 20, at 10 p.m. Eastern/Pacific. A case I prosecuted is getting the Dateline treatment.

Elderly atheist farmer dead; his friend the popular preacher's the suspect.

--JRM

comment by Tiiba · 2010-08-16T16:18:58.377Z · LW(p) · GW(p)

Some, if not most, people on LW do not subscribe to the idea that what has come to be known as AI FOOM is a certainty. This is even more common off LW. I would like to know why. I think that, given a sufficiently smart AI, it would be beyond easy for this AI to gain power. Even if it could barely scrape by in a Turing test against a five-year-old, it would still have all the powers that all computers inherently have, so it would already be superhuman in some respects, giving it enormous self-improving ability. And the most important such inherent power is the one that makes Folding@home work so well - the ability to simply copy the algorithm into more hardware, if all else fails, and have the copies cooperate on a problem.

So what could POSSIBLY slow this down, besides the AI's keepers intentionally keeping it offline?

Replies from: Morendil, cata, PeterS, JoshuaZ
comment by Morendil · 2010-08-16T16:49:31.263Z · LW(p) · GW(p)

Are you a programmer yourself?

A prerequisite for an AI FOOMing is the ability to apply its intelligence to improving its source code so that the resulting program is more intelligent still.

We have an existence proof that human-level intelligence does not automatically give a mind the ability to understand source code and make changes to that source code which reliably have the intended effect. Perhaps some higher level of intelligence automatically grants that ability, but proving that would be non-trivial.

If your unpacking of "sufficiently smart" is such that any sufficiently smart AI has not only the ability to think at the same level as a human, but also to reliably and safely make changes to its own source code, such that these changes improve its intelligence, then a FOOM appears inevitable, and we have (via the AI Box experiments) an existence proof that human-level intelligence is sufficient for an AI to manipulate humans into giving it unrestricted access to computing resources.

But that meaning of "sufficiently smart" begs the question of what it would take for an AI to have these abilities.

One of the insights developed by Eliezer is the notion of a "codic cortex", a sensory modality designed to equip an AI with the means to make reliable inferences about source code in much the same way that humans make reliable inferences about the properties of visible objects, sounds, and so on.

I am prepared to accept that an AI equipped with a "codic cortex" would inevitably go FOOM, but (going on what I've read so far) that notion is at present more of a metaphor than a fully-developed plan.

comment by cata · 2010-08-16T16:50:50.769Z · LW(p) · GW(p)

Unless I'm really misinterpreting you, "simply copy the algorithm into more hardware" sounds totally silly to me. In general, tasks need to be designed from the ground up with parallelization in mind in order to be efficiently parallelizable. Rarely have I ever wanted to run a serial algorithm in parallel and had it be a matter of "simply run the same old thing on each one and put the results together." The more complicated the algorithm in question, the more work it takes to efficiently and correctly split up the work; and at really large, Google-esque scales, you need to start worrying about latency and hardware reliability.

I tend to agree that recursive self-improvement will lead to big gains fast, but I don't buy that it's going to be immediately trivial for the AI to just throw more hardware at the problem and gain huge chunks of performance for free. It depends on the initial design.
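
A small sketch of the distinction (hypothetical example using Python's standard multiprocessing module; the function names are made up for illustration): an embarrassingly parallel workload really can be spread out by running the same thing on each piece, while a computation whose steps depend on previous results cannot be sped up that way.

    from multiprocessing import Pool

    def score(candidate):
        # Embarrassingly parallel: each candidate is evaluated independently,
        # so extra hardware helps almost linearly.
        return sum((candidate - k) ** 2 for k in range(1000))

    def iterate(state, steps):
        # Serial dependency: every step needs the previous step's output,
        # so more processors do not help this loop by themselves.
        for _ in range(steps):
            state = (state * state + 1) % 1000003
        return state

    if __name__ == "__main__":
        with Pool(4) as pool:
            print(pool.map(score, range(8)))   # parallelizes cleanly
        print(iterate(2, 10 ** 6))             # stays stubbornly sequential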

Replies from: jimrandomh
comment by jimrandomh · 2010-08-16T17:09:15.555Z · LW(p) · GW(p)

Unless I'm really misinterpreting you, "simply copy the algorithm into more hardware" sounds totally silly to me. In general, tasks need to be designed from the ground up with parallelization in mind in order to be efficiently parallelizable.

If human-level AI is developed successfully, the first working AI will already be parallelized across many computers. An algorithm that wasn't would have too much of a disadvantage in the amount of computing power it could exploit to compete with parallel algorithms. Also, almost all machine learning algorithms in use today are trivially parallelizable, as is the human brain.

So, while I don't know just how much benefit an AI would gain from spreading itself across more hardware, I certainly wouldn't bet against being able to do so at all. I wouldn't bet on a linear upper bound, either, though I'm less certain of that.

Replies from: cata
comment by cata · 2010-08-16T17:41:09.212Z · LW(p) · GW(p)

That's quite true. I mean, honestly, I would expect any AI to parallelize very well, although I'm loath to trust my intuition about anything related to AGI. But I don't think we can take it as a given that the AI will be able to get linear or better gains in its speed of thought when going, say, from some big parallel supercomputer in a datacenter to trying to spread itself out through commodity hardware in other physical locations.

If a prospective AI had a tremendous, planet-sized amount of hardware available to it, it might hardly matter, but in the real world, I imagine that the AI would have to work hard to obtain a sizable amount of physical resources, and how well it can use those resources could make the difference between hours, days, weeks, or months of "FOOMing."

EDIT on reflection: Yeah, maybe I'm underestimating how many resources would be available.

Replies from: Morendil
comment by Morendil · 2010-08-16T18:44:47.512Z · LW(p) · GW(p)

in the real world, I imagine that the AI would have to work hard to obtain a sizable amount of physical resources

I suggest you Google the word "botnet". It isn't particularly hard for human-level intelligences to gain access to substantial computing power for selfish purposes.

Replies from: cata
comment by cata · 2010-08-16T18:46:46.129Z · LW(p) · GW(p)

Point taken.

comment by PeterS · 2010-08-26T08:02:32.690Z · LW(p) · GW(p)

For one -- it hasn't already happened. And there is no public research suggesting that it is much closer to happening now than it has ever been. The first claims of impending human-level AGI were made ~50 years ago. Much money and research has been expended since then, but it hasn't happened yet. AGI researchers have lost a lot of credibility because of this. Basically, extraordinary claims have been made many times. None have panned out to the generality with which they were made.

You yourself just made an extraordinary claim! Do you have a 5-year-old at hand? Because there are some pretty "clever" conversation bots out there nowadays...

With regards to:

the most important such inherent power is the one that makes Folding@home work so well - the ability to simply copy the algorithm into more hardware, if all else fails, and have the copies cooperate on a problem.

Games abound on LessWrong involving AIs which can simulate entire people -- and even AIs which can simulate a billion billion billion .... billion billion people simultaneously! Folding@home is the most powerful cluster on this planet at the moment, and it can simulate protein folding over an interval of about 1.5 milliseconds (according to Wikipedia). So, as I said, very big claims are casually made by AGI folk, even in passing, in the face of all reason and with little appreciation for the short-term ETAs with which they make these claims (~20-70 years... and note that it was ~20-70 years ETA about 50 years ago as well).

I believe AGI is probably possible to construct, but not that it will be as easy and FOOMy as enthusiasts have always been wont to suggest.

Replies from: rhollerith_dot_com, jimrandomh
comment by RHollerith (rhollerith_dot_com) · 2010-08-26T08:27:28.141Z · LW(p) · GW(p)

For one -- it hasn't already happened. And there is no public research suggesting that it is much closer to happening now than it has ever been. The first claims of impending human-level AGI were made ~50 years ago. Much money and research has been exhausted since then, but it hasn't happened yet.

The fact that it hasn't happened yet is not evidence against its happening if you cannot survive its happening. If you cannot survive its happening, then the fact that it has not happened in the last 50 years is not just weaker evidence than it would otherwise be -- it is not evidence at all, and your probability that it will happen now, after 50 years, should be the same as your probability would have been at 0 years.

In other words, if the past behavior of a black box is subject to strong-enough observational selection effects, you cannot use its past behavior to predict its future behavior: you have no choice but to open the black box and look inside (less metaphorically, to construct a causal model of the behavior of the box), which you have not done in the comment I am replying to. (Drawing an analogy with protein folding does not count as "looking inside".)

Of course, if your probability that the creation of a self-improving AGI will kill all the humans is low enough, then what I just said does not apply. But that is a big if.

Replies from: PeterS, Pavitra
comment by PeterS · 2010-08-26T09:37:59.904Z · LW(p) · GW(p)

The fact that it hasn't happened yet is not evidence against its happening if you cannot survive its happening. If you cannot survive its happening, then the fact that it has not happened in the last 50 years is not just weaker evidence than it would otherwise be -- it is not evidence at all, and your probability that it will happen now, after 50 years, should be the same as your probability would have been at 0 years.

Do you take the Fermi paradox seriously, or is the probability of your being destroyed by a galactic civilization, assuming that one exists, low enough? The evidential gap w.r.t. ET civilization spans billions of years -- but this is not evidence at all according to the above.

Neither do I believe in the coming of an imminent nuclear winter, though (a) it would leave me dead and (b) I nevertheless take the absence of such a disaster over the preceding decades to be nontrivial evidence that it's not on its way.

Say you're playing Russian Roulette with a 6-round revolver which either has 1 or 0 live rounds in it. Pull the trigger 4 times -- every time you end up still alive. According to what you have said, your probability estimates for either

  • there being a single round in the revolver or
  • the revolver being unloaded

should be the same as before you had played any rounds at all. Imagine pulling the trigger 5 times and still being alive -- is there a 50/50 chance that the gun is loaded?
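
A minimal sketch of the ordinary, non-anthropic update this example appeals to, assuming the cylinder is re-spun before each pull and a .5 prior on a live round being present (the function name and figures are mine):

    def p_loaded_after_surviving(k, prior_loaded=0.5):
        """Posterior P(one live round) after surviving k pulls, re-spinning each time."""
        p_survive_if_loaded = 5 / 6   # one live round among six chambers
        p_survive_if_empty = 1.0      # an empty revolver never fires
        num = prior_loaded * p_survive_if_loaded ** k
        return num / (num + (1 - prior_loaded) * p_survive_if_empty ** k)

    for k in (0, 4, 5, 10):
        print(k, round(p_loaded_after_surviving(k), 3))
    # 0 -> 0.5, 4 -> 0.325, 5 -> 0.287, 10 -> 0.139

On this ordinary reading the posterior keeps falling with every survived pull; whether the survivor is entitled to that update is exactly what is disputed below.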

I find the technique you're suggesting interesting, but I don't employ it.

(Drawing an analogy with protein folding does not count as "looking inside".)

Tiiba suggested that distributive capability is the most important of the "powers inherent to all computers". Protein folding simulation was an illustrative example of a cutting edge distributed computing endeavor, which is still greatly underpowered in terms of what AGI needs to milk out of it to live up to FOOMy claims. He wants to catch all the fish in the sea with a large net, and I am telling him that we only have a net big enough for a few hundred fish.

edit: It occurred to me that I have written with a somewhat interrogative tone and many examples. My apologies.

Replies from: gwern, rhollerith_dot_com, rhollerith_dot_com
comment by gwern · 2010-08-26T10:21:59.018Z · LW(p) · GW(p)

edit: It occurred to me that I have written with a somewhat interrogative tone and many examples. My apologies.

Examples are great. The examples a person supplies are often more valuable than their general statements. In philosophy, one of the most valuable questions one can ask is 'can you give an example of what you mean by that?'

comment by RHollerith (rhollerith_dot_com) · 2010-08-26T17:33:47.286Z · LW(p) · GW(p)

Say you're playing Russian Roulette with a 6-round revolver which either has 1 or 0 live rounds in it. Pull the trigger 4 times -- every time you end up still alive. According to what you have said, your probability estimates for either

  • there being a single round in the revolver or

  • the revolver being unloaded

should be the same as before you had played any rounds at all.

If before every time I pull the trigger, I spin the revolver in such a way that it comes to a stop in a position that is completely uncorrelated with its pre-spin position, then yes, IMO the probability is the same as before I had played any rounds at all (namely .5).

If an evil demon were to adjust the revolver after I spin it and before I pull the trigger, that is a selection effect. If the demon's adjustments are skillful enough and made for the purpose of deceiving me, my trigger pulls are no longer a random sample from the space of possible outcomes.

Probability is not a property of reality but rather a property of an observer. If a particular observer is not robust enough to survive a particular experiment, the observer will not be able to learn from the experiment the same way a more robust observer can. As I play Russian roulette, the P(gun has bullet) assigned by someone watching me at a safe distance can change, but my P(gun has bullet) cannot change because of the law of conservation of expected evidence.

In particular, a trigger pull that does not result in a bang does not decrease my probability that the gun contains a bullet because a trigger pull that results in a bang does not increase it (because I do not survive a trigger pull that results in a bang).

Replies from: thomblake, PeterS
comment by thomblake · 2010-08-26T21:06:15.204Z · LW(p) · GW(p)

In particular, a trigger pull that does not result in a bang does not decrease my probability that the gun contains a bullet because a trigger pull that results in a bang does not increase it (because I do not survive a trigger pull that results in a bang).

I'm not sure this would work in practice. Let's say you're betting on this particular game, with the winnings/losses being useful in some way even if you don't survive the game. Then, after spinning and pulling the trigger a million times, would you still bet as though the odds were 1:1? I'm pretty sure that's not a winning strategy, when viewed from the outside (therefore, still not winning when viewed from the inside).
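
A quick sketch of that outside view, assuming the same zero-or-one-round, re-spin-every-time setup (my own illustration); the odds are computed in log space because (5/6) raised to a million underflows ordinary floats:

    import math

    def log10_odds_loaded(k, prior_odds=1.0):
        """log10 odds of 'one live round' vs. 'empty' after k survived pulls, outside view."""
        return math.log10(prior_odds) + k * math.log10(5 / 6)

    print(log10_odds_loaded(1_000_000))   # about -79181: posterior odds near 10**-79181

So an outside bettor offered 1:1 on "loaded" after a million survived pulls is effectively being handed a free win on "empty".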

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-27T00:01:26.026Z · LW(p) · GW(p)

You have persuaded me that my analysis in grandparent of the Russian-roulette scenario is probably incorrect.

The scenario of the black box that responds with either "heads" or "tails" is different because in the Russian-roulette scenario, we have a partial causal model of the "bang"/"no bang" event. (In particular, we know that the revolver contains either one bullet or zero bullets.) Apparently, causal knowledge can interact with knowledge of past behavior to produce knowledge of future behavior even if the knowledge of past behavior is subject to the strongest kind of observational selection effects.

comment by PeterS · 2010-08-26T20:30:42.213Z · LW(p) · GW(p)

Your last point was persuasive... though I still have some uneasiness about accepting that k pulls of the trigger, for arbitrary k, still gives the player nothing.

Would it be within the first AGI's capabilities to immediately effect my destruction before I am able to update on its existence -- provided that (a) it is developed by the private sector and not e.g. some special access DoD program, and (b) ETAs up to "sometime this century" are accurate? I think not, though I admit to being fairly uncertain.

I acknowledge that this line of reasoning presented in my original comment was not of high caliber -- though I still dispute Tiiba's claim regarding an AI advanced enough to scrape by in conversation with a 5 year old, as well as that distributive capabilities are the greatest power at play here.

Replies from: rhollerith_dot_com, rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-28T02:09:16.944Z · LW(p) · GW(p)

Would it be within the first AGI's capabilities to immediately effect my destruction before I am able to update on its existence . . .?

I humbly suggest that the answer to your question would not shed any particular light on what we have been talking about because even if we would certainly have noticed the birth of the AGI, there's a selection effect if it would have killed us before we got around to having this conversation (i.e. if it would have killed us by now).

The AGI's causing our deaths is not the only thing that would cause a selection effect: the AGI's deleting our memories of the existence of the AGI would also do it. But the AGI's causing our deaths is the most likely selection-effecting mechanism.

A nice summary of my position is that when we try to estimate the safety of AGI research done in the past, the fact that P(we would have noticed our doom by now|the research killed us or will kill us) is high does not support the safety of the research as much as one might naively think. For us to use that fact the way we use most facts, not only must we notice our doom, but also we must survive long enough to have this conversation.

Actually, we can generalize that last sentence: for a group of people correctly to use the outcome of past AGI research to help assess the safety of AGI, awareness of both possible outcomes (the good outcome and the bad outcome) of the past research must be able to reach the group and in particular must be able to reach the assessment process. More precisely, if there is a mechanism that is more likely to prevent awareness of one outcome from reaching the assessment process than the other outcome, the process has to adjust for that, and if the very existence of the assessment process completely depends on one outcome, the adjustment completely wipes out the "evidentiary value" of awareness of the outcome. The likelihood ratio gets adjusted to 1. The posterior probability (i.e., the probability after updating on the outcome of the research) that AGI is safe is the same as the prior probability.
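
In odds form, with an illustrative prior of my own choosing, that last claim is just:

    prior_odds_safe = 1.0        # assumed, illustrative prior odds that the past research was safe
    likelihood_ratio = 1.0       # fully selection-screened evidence carries a likelihood ratio of 1
    posterior_odds_safe = prior_odds_safe * likelihood_ratio
    print(posterior_odds_safe)   # 1.0: the observation moves nothing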

comment by RHollerith (rhollerith_dot_com) · 2010-08-27T20:16:07.739Z · LW(p) · GW(p)

Your last point was persuasive... though I still have some uneasiness about accepting that k pulls of the trigger, for arbitrary k, still gives the player nothing.

Like I said yesterday I retract my position on the Russian roulette. (Selection effects operate, I still believe, but not to the extent of making past behavior completely useless for predicting future behavior.)

comment by RHollerith (rhollerith_dot_com) · 2010-09-02T14:22:49.263Z · LW(p) · GW(p)

I intentionally delayed this reply (by > 5 days) to test the hypothesis that slowing down the pace of a conversation on LW will improve it.

Do you take the Fermi paradox seriously, or is the probability of your being destroyed by a galactic civilization, assuming that one exists, low enough?

When we try to estimate the number of technological civilizations that evolved on main-sequence stars in our past light cone, we must not use the presence of at least one tech civ (namely, us) as evidence of the presence of another one (namely, ET) because if that first tech civ had not evolved, we would have no way to observe that outcome (because we would not exist). In other words, we should pretend we know nothing of our own existence or the existence of clades in our ancestral line, in particular, the existence of the eukaryotes and the metazoa, when trying to estimate the number of tech civs in our past light cone.

I am not an expert on ETIs, but the following seems (barely) worth mentioning: the fact that prokaryotic life arose so quickly after the formation of the Earth's crust is IMHO significant evidence that there is simple (unicellular or similar) life in other star systems.

The evidential gap w.r.t. ET civilization spans billions of years -- but this is not evidence at all according to the above.

It is evidence, but less strong than it would be if we fail to account for observational selection effects. Details follow.

The fact that there are no obvious signs of an ET tech civ, e.g., alien space ships in the solar system, is commonly believed to be the strongest sign that there were no ET tech civs in our past light cone with the means and desire (specifically, a desire held by at least part of the civ and not thwarted by the rest of the civ) to expand outwards into space. Well, it seems to me that there is a good chance that we would not have survived an encounter with the leading wave of such an expansion, and therefore the lack of evidence of such an expansion should not cause us to update our probability of the existence of such an expansion as much as it would if we certainly could have survived the encounter. Still, the fact that there are no obvious signs (such as alien space ships in the solar system) of ET is the strongest piece of evidence against the hypothesis of the existence of ET tech civs in our past light cone (for example, we can detect radio waves over a distance of only thousands of light years, whereas we should be able to detect colonization waves that originated billions of light years away, since once a civilization acquires the means and desire to expand, what would stop it?).

In summary, observational selection effects blunt the force of the Fermi paradox in two ways:

  1. Selection effects drastically reduce the (likelihood) ratio by which the fact of the existence of our civilization increases our probability of the existence of another civilization.

  2. The lack of obvious signs (such as alien space ships) of ET in our immediate vicinity is commonly taken as evidence that drastically lowers the probability of ET. Observational selection effects mean that P(ET) is not lowered as much as we would otherwise think.

(end of list)

So, yeah, to me, there is no Fermi paradox requiring explanation, nor do I expect any observations made during my lifetime to create a Fermi paradox.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-09-02T15:39:38.637Z · LW(p) · GW(p)

When we try to estimate the number of technological civilizations that evolved on main-sequence stars in our past light cone, we must not use the presence of at least one tech civ (namely, us) as evidence of the presence of another one (namely, ET) because if that first tech civ had not evolved, we would have no way to observe that outcome (because we would not exist).

If there were two universes, one very likely to evolve life and one very unlikely, and all we knew was that we existed in one, then we are much more likely to exist in the first universe. Hence our own existence is evidence about the likelihood of life evolving, and there still is a Fermi paradox.
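
A toy version of that update, with assumed, purely illustrative numbers for the two universes and an equal prior between them:

    # Assumed numbers: life evolves with probability 0.9 in one candidate universe
    # and 0.001 in the other, with an equal prior over the two.
    prior = {"life-friendly": 0.5, "life-hostile": 0.5}
    p_we_exist = {"life-friendly": 0.9, "life-hostile": 0.001}

    joint = {u: prior[u] * p_we_exist[u] for u in prior}
    total = sum(joint.values())
    posterior = {u: round(joint[u] / total, 4) for u in joint}
    print(posterior)   # {'life-friendly': 0.9989, 'life-hostile': 0.0011}

Whether an observer may condition on its own existence in this way is exactly what is disputed below; the sketch only spells out the arithmetic of the claim.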

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-09-02T18:39:23.056Z · LW(p) · GW(p)

If there were two universes, one very likely to evolve life and one very unlikely, and all we knew was that we existed in one, then we are much more likely to exist in the first universe.

Agree.

Hence our own existence is evidence about the likelihood of life evolving [in the situation in which we find ourselves].

Disagree because your hypothetical situation requires a different analysis than the situation we find ourselves in.

In your hypothetical, we have somehow managed to acquire evidence for the existence of a second universe and to acquire evidence that life is much more likely in one than in the other.

Well, let us get specific about how that might come about.

Our universe contains gamma-ray bursters that probably kill any pre-intelligence-explosion civilization within ten light-years or so of them, and our astronomers have observed the rate * density at which these bursters occur.

Consequently, we might discover that one of the two universes has a much higher rate * density of bursters than the other universe. For that discovery to be consistent with the hypothetical posed in parent, we must have discovered that fact while somehow becoming or remaining completely ignorant as to which universe we are in.

We might discover further that although we have managed to determine the rate * density of the bursters in the other universe, we cannot travel between the universes. We must suppose something like that because the hypothetical in parent requires that no civilization in one universe can spread to the other one. (We can infer that requirement from the analysis and the conclusion in parent.)

I hope that having gotten specific and fleshed out your hypothetical a little, you have become open to the possibility that your hypothetical situation is different enough from the situation in which we find ourselves for us to reach a different conclusion.

In the situation in which we find ourselves, one salient piece of evidence we have for or against ET in our past light cone is the fact that there is no obvious evidence of ET in our vicinity, e.g., here on Earth or on the Moon or something.

And again, this piece of evidence is really only evidence against ETs that would let us continue to exist if their expansion reached us, but there's a non-negligible probability that an ET would in fact let us continue to exist because there is no strong reason for us to be confident that the ET would not.

In contrast to the situation in which we find ourselves, the hypothetical posed in parent contains an important piece of evidence in addition to the piece I just described, in just the same way that whatever evidence we used to conclude that the revolver contains either zero or one bullet is an additional important piece of evidence: combined with the results of 1,000,000 iterations of Russian roulette, it would cause a perfect Bayesian reasoner to reach a different conclusion than it would if it knew nothing of the causal mechanism that exists between {a spin of the revolver followed by a pull of the trigger} and {death or not-death}.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-09-03T11:21:02.552Z · LW(p) · GW(p)

In your hypothetical, we have somehow managed to acquire evidence for the existence of a second universe and to acquire evidence that life is much more likely in one than in the other.

These need not be actual universes, just hypothetical universes that we have assigned a probability to.

Given most priors over possible universes, the fact that we exist will bump up the probability of there being lots of life. The fact that we observe no life will bump down the probability, but the first effect can't be ignored.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-09-03T11:43:40.280Z · LW(p) · GW(p)

Hence our own existence is evidence about the likelihood of life evolving [you write in great grandparent]

So in your view there is zero selection effect in this probability calculation?

In other words, our own existence increases your probability of there being lots of life just as much as the existence of an extraterrestrial civilization would?

In the previous sentence, please interpret "increase your probability just as much as" as "is represented by the same likelihood ratio as".

And the existence of human civilization increases your P(lots of life) just as much as it would if you were an immortal invulnerable observer who has always existed and who would have survived any calamity that would have killed the humans or prevented the evolution of humans?

Finally, is there any probability calculation in which you would adjust the results of the calculation to account for an observational selection effect?

Would you for example take observational selection effects into account in calculating the probability that you are a Boltzmann brain?

I can get more specific with that last question if you like.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-09-03T14:17:20.160Z · LW(p) · GW(p)

So in your view there is zero selection effect in this probability calculation?

In other words, our own existence increases your probability of there being lots of life just as much as the existence of an extraterrestrial civilization would?

Depends on how independent the two are. Also, my existing increases the probability of human-like life existing, while the alien civilization's existing increases the probability of life similar to itself existing. If we're similar, the combined effects will be particularly strong for theories of convergent evolution.

The line of reasoning for immortal observers is similar.

Finally, is there any probability calculation in which you would adjust the results of the calculation to account for an observational selection effect?

I thought that was exactly what I was doing? To be technical, I was using a variant of full non-indexical conditioning (FNC), which is an unloved bastard son of the SIA (self-indication assumption).

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-09-03T20:14:44.385Z · LW(p) · GW(p)

Can I get a yes or no on my question of whether you take the existence of human civilization to be just as strong evidence for the probabilities we have been discussing as you would have taken it to be if you were a non-human observing human civilization from a position of invulnerability?

Actually, "invulnerability" is not the right word: what I mean is, "if you were a non-human whose coming into existence was never in doubt and whose ability to observe the non-appearance of human civilization was never in doubt."

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-09-06T14:58:53.065Z · LW(p) · GW(p)

Can I get a yes or no on my question of whether you take the existence of human civilization to be just as strong evidence for the probabilities we have been discussing as you would have taken it to be if you were a non-human observing human civilization from a position of invulnerability?

If the existence of the "invulnerable non-human" (INH) is completely independent from the existence of human-like civilizations, then:

  • If the INH gets the information "there are human-like civilizations in your universe" then this changes his prior for "lots of human-like civilizations" much less than the change we get from noticing that we exist.
  • If the INH gets the information "there are human-like civilizations in your immediate neighbourhood", then his prior is updated pretty similarly to ours.
Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-09-10T15:58:38.894Z · LW(p) · GW(p)

Thanks for answering my question. I repeat that you and I are in disagreement about this particular application of observational selection effects, a.k.a. the anthropic principle, and would probably also disagree about their application to an existential risk.

I notice that last month saw the publication of a new paper, "Anthropic Shadow: Observation Selection Effects and Human Extinction Risk" by Bostrom, Sandberg and my favorite astronomy professor, Milan M. Ćirković.

As an aid to navigation, let me link to the ancestor to this comment at which the conversation turned to observation selection effects.

Replies from: gwern, Stuart_Armstrong
comment by gwern · 2010-09-10T16:28:28.005Z · LW(p) · GW(p)

I have been meaning to write a post summarizing "Anthropic Shadow"; would anyone besides you and me be interested in it?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-09-10T17:47:14.391Z · LW(p) · GW(p)

I think you should write that post because thoughtful respected participants on LW use the anthropic principle incorrectly, IMHO. The gentleman who wrote great grandparent for example is respected enough to have been invited to attend SIAI's workshop on decision theory earlier this year. And thoughtful respected participant Cousin It probably misapplied the anthropic principle in the first paragraph of this comment. I say "probably" because the context has to do with "modal realism" and other wooly thinking that I cannot digest, but I have not been able to think of any context in which Cousin It's "every passing day without incident should weaken your faith in the anthropic explanation" is a sound argument.

(Many less thoughtful or less respected participants here have misapplied or failed to take into account the anthropic principle, too.)

Replies from: gwern, gwern
comment by gwern · 2010-09-10T21:32:05.142Z · LW(p) · GW(p)

And thoughtful respected participant Cousin It probably misapplied the anthropic principle in the first paragraph of this comment.

It has been a while since I skimmed "Anthropic Shadow", but IIRC a key point or assumption in their formula was that the more recently a risk would have occurred, the less likely 'we' are to have observed it occurring, because more recent = less time for observers to recover from the existential risk or for fresh observers to have evolved. This suggests a weak version: the longer we have existed, the fewer risks whose absence we need to explain by appeal to an observer-based principle.

(But thinking about it, maybe the right version is the exact opposite. It's hard to think about this sort of thing.)

comment by gwern · 2010-10-13T19:25:51.572Z · LW(p) · GW(p)

I've read "Anthropic Shadow" a few times now. I don't think I will write a post on it. It does a pretty good job of explaining itself, and I couldn't think of any uses for it.

The Shadow only biases estimates when some narrow conditions are met:

  1. your estimate has to be based strictly on your past
  2. of a random event
  3. the events have to be very destructive to observers like yourself
  4. and also rare to begin with

So it basically only applies to global existential risks, and there aren't that many of them. Nor can we apply it to interesting examples like the Singularity, because that's not a random event but dependent on our development.
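
A toy Monte Carlo sketch of the bias under conditions 1-3, assuming the crudest case in which the event always removes the observers (the setup and numbers are my own illustration, not the paper's model):

    import random

    def observed_rate(true_rate, periods, trials=100_000):
        """Average catastrophe frequency recorded by observers who survived the whole history."""
        survivor_estimates = []
        for _ in range(trials):
            history = [random.random() < true_rate for _ in range(periods)]
            if not any(history):                      # observers exist only if no catastrophe struck
                survivor_estimates.append(sum(history) / periods)
        return sum(survivor_estimates) / len(survivor_estimates), len(survivor_estimates) / trials

    print(observed_rate(0.01, 100))   # (0.0, ~0.37): every surviving observer estimates a rate of zero

Every surviving history shows a spotless record, so the survivor's naive frequency estimate is zero whatever the true rate; softer versions of the same downward bias are what the paper quantifies.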

comment by Stuart_Armstrong · 2010-09-13T13:15:54.529Z · LW(p) · GW(p)

Thanks for answering my question. I repeat that you and I are in disagreement about this particular application of observational selection effects, a.k.a., the anthropic principle and would probably also disagree about their application to an existential risk.

Indeed. I, for one, do not worry about the standard doomsday argument, and such. I would argue that SIA is the only consistent anthropic principle, but that's a long argument, and a long post to write one day.

Fortunately, the Anthropic shadow argument can be accepted whatever type of anthropic reasoning you use.

comment by Pavitra · 2010-08-26T08:48:07.695Z · LW(p) · GW(p)

The fact that it hasn't happened yet is not evidence against its happening if you cannot survive its happening. If you cannot survive its happening, then the fact that it has not happened in the last 50 years is not just weaker evidence than it would otherwise be -- it is not evidence at all.

I'm not convinced that works that way.

Suppose I have the following (unreasonable, but illustrative) prior: 0.5 for P=(AGI is possible), 1 for Q=(if AGI is possible, then it will occur in 2011), 0.1 for R=(if AGI occurs, then I will survive), and 1 for S=(I will survive if AGI is impossible or otherwise fails to occur in 2011). The events of interest are P and R.

  • P, R: 0.05. I survive.
  • P, ~R: 0.45. I do not survive. (This outcome will not be observed.)
  • ~P: 0.5. I survive; R is irrelevant.

After I observe myself to still be alive at the end of 2011 (which, due to anthropic bias, is guaranteed provided I'm there to make the observation), my posterior probability for P (AGI is possible) should be 0.05/(0.05+0.5) = 5/55 = 1/11 = 0.0909..., which is considerably less than the 0.5 I would have estimated beforehand.

By updating on my own existence, I infer a lower probability of the possibility of something that could kill me.
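
The same numbers run through explicitly (a direct transcription of the calculation above, as I read it; the variable names are mine):

    p_possible = 0.5       # prior for P: AGI is possible (and so, by Q, occurs in 2011)
    p_survive_agi = 0.1    # prior for R: I survive AGI occurring

    p_alive_and_possible = p_possible * p_survive_agi      # 0.05
    p_alive_and_impossible = (1 - p_possible) * 1.0        # 0.5, by S

    posterior_possible = p_alive_and_possible / (p_alive_and_possible + p_alive_and_impossible)
    print(posterior_possible)   # 0.0909... = 1/11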

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-26T09:13:11.540Z · LW(p) · GW(p)

Well, yeah, if we knew what you call S (that AGI would occur in 2011 or would never occur), then our surviving 2011 would mean that AGI will never occur.

But your example fails to shed light on the argument in great grandparent.

If I may suggest a different example, one which I believe is analogous to the argument in great grandparent:

Suppose I give you a box that displays either "heads" or "tails" when you press a button on the box.

The reason I want you to consider a box rather than a coin is that a person can make a pretty good estimate of the "fairness" of a coin just by looking at it and holding it in one's hand.

Do not make any assumptions about the "fairness" of the box. Do not for example assume that if you push the button a million times, the box would display "heads" about 500,000 times.

What is your probability that the box will display "heads" when you push the button?

.5 obviously because even if the box is extremely "unfair" or biased, you have no way to know whether it is biased towards "heads" or biased towards "tails".

Suppose further that you cannot survive the box coming up "tails".

Now suppose you push the button ten times and of course it comes up "heads" all ten times.

Updating on the results of your first ten button-presses, what is your probability that it will come up "heads" if you push the button an eleventh time?

Do you for example say, "Well, clearly this box is very biased towards heads."

Do you use Laplace's law of succession to compute the probability?

Replies from: Pavitra
comment by Pavitra · 2010-08-26T09:44:57.754Z · LW(p) · GW(p)

This is more or less what I was trying to do, but I neglected to treat "AGI is impossible" as equivalent to "AGI will never happen".

I need to have a prior in order to update, so sure, let's use Laplace.

I'd have to be an idiot to ever press the button at all, but let's say I'm in Harry's situation with the time-turner and someone else pushed the button ten times before I could tell them not to.

I don't feel like doing the calculus to actually apply Bayes myself here, so I'll use my vague nonunderstanding of Wikipedia's formula for the rule of succession and say p=11/12.
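
For what it's worth, the rule-of-succession arithmetic checks out, assuming that is the formula being applied (ten observed "heads" out of ten pulls; the function is my own sketch):

    def rule_of_succession(successes, trials):
        """Laplace's rule: P(next trial succeeds) = (s + 1) / (n + 2)."""
        return (successes + 1) / (trials + 2)

    print(rule_of_succession(10, 10))   # 0.9166... = 11/12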

comment by jimrandomh · 2010-08-28T18:24:03.302Z · LW(p) · GW(p)

The difficulty of creating an AGI drops slightly every time computational power increases. We know that people greatly underestimated the difficulty of creating AGI in the past, but we don't know how fast the difficulty is decreasing, how difficult it is now, whether it will ever stop decreasing, or where.

Replies from: PeterS
comment by PeterS · 2010-08-29T03:57:13.188Z · LW(p) · GW(p)

I agree that those rates are hard to determine. I am also wary of "AI FOOM is a certainty" type statements, and of appeals to the nebulous "powers that all computers inherently have".

comment by JoshuaZ · 2010-08-16T16:41:42.316Z · LW(p) · GW(p)

Speaking as someone who assigns a low probability to AI going FOOM, I agree that letting an AI go online drastically increases the plausibility that an AI will go FOOM.

However, without that capability other claims you've made don't have much plausibility.

Even if it could barely scrape by in a Turing test against a five-year-old, it would still have all the powers that all computers inherently have, so it would already be superhuman in some respects, giving it enormous self-improving ability.

Not really. If a machine has no more intelligence than a human, even a moderately bright human, that doesn't mean it will have enough intelligence to self-improve. Self-improvement requires deep understanding. A bright AI might be able to improve specific modules (say, by replacing a module for factoring numbers with a module that uses a quicker algorithm).

There are other general problems with AIs going FOOM. In particular, if the AI doesn't have access to new hardware, then it is limited by the limits of software improvement. Thus, for example, if P != NP in a strong way, that puts a serious limit on how efficient software can become. Similarly, some common mathematical algorithms, such as linear programming, are close to their theoretical optimums. There's been some interesting discussion here about this subject before. See especially this discussion of mine with cousin_it. That discussion made me think that theoretical comp sci provides fewer barriers to AI going FOOM than I thought, but it still seems to provide substantial barriers.

There are a few other issues that an AI trying to go FOOM might run into. For example, there's a general historical metapattern that it takes more and more resources to learn more about the universe. Thus, for example, in the 1850s a single biologist could make amazing discoveries and a single chemist could discover a new element. But now, even turning out minor papers can require a lot of resources and people. The metapattern of nature is that the resources it takes to understand things increase at about the same rate as our improved understanding gives us more resources to understand things. In many fields, if anything, there is a decreasing marginal return. So even if the AI is very smart, it might not be able to do that much.

Certainly, an AI going FOOM is one of the more plausible forms of Singularity proposed. But I don't assign it a particularly high probability as long as people aren't doing things like giving the AI general internet access. The nightmare scenario seems to be that a) someone gives a marginally smart AI internet access and b) the AI discovers a very quick algorithm for factoring integers, and then the entire internet becomes the AI's playground and shortly after that becomes functional brainpower. But this requires three unlikely things to occur: 1) someone connects the AI to the internet with minimal supervision, 2) there exists a fast factoring algorithm that no one has discovered, and 3) the AI finds that algorithm.

Replies from: cousin_it, Tiiba
comment by cousin_it · 2010-08-26T10:36:49.094Z · LW(p) · GW(p)

For example, there's a general historical metapattern that it takes more and more resources to learn more about the universe.

This is one of the strongest arguments I've ever heard against FOOM. But if we can get an AI up to the level of one moderately-smart scientist, horizontal scaling makes it a million scientists working at 1000x the human rate without any problems with coordination and akrasia, which sounds extremely scary.

comment by Tiiba · 2010-08-16T17:27:51.888Z · LW(p) · GW(p)

I guess I should have noted that I'm assuming it can have all the hardware it wants. If it doesn't, yes, that does create problems. There's only so much better you can do than Quicksort.

And the reason I think that a transhuman AI might still be bad at the Turing test is that humans are really good at it, and pretty bad at remembering that ALL execution paths have to return a value, and that it has to be a string. So I think computers will learn to program long before they learn to speak English.

comment by Wei Dai (Wei_Dai) · 2010-08-05T17:01:20.940Z · LW(p) · GW(p)

Knowing that medicine is often more about signaling care than improving health, it's hard for me to make a big fuss over some minor ailment of a friend or family member. Consciously trying to signal care seems too fake and manipulative. Unfortunately, others then interpret my lack of fuss-making as not caring. Has anyone else run into this problem, and if so, how did you deal with it?

Replies from: byrnema, jimmy
comment by byrnema · 2010-08-05T20:02:59.728Z · LW(p) · GW(p)

I feel like I've wrestled with this, or something similar. I will throw some thoughts out.

In relating to your example, I recall times when I was expected to give care that I didn't think a person needed, and I guess my sense was that they were weak to expect it (and so I was unable to empathize with them), or that my fake care would encourage them to be weak. I also felt that the care was disingenuous because it wasn't really doing anything.

I no longer feel that way, and what changed over several years, I guess, is a deeper realization (along an independent, separate path of experiences, including being a mother) of the human condition: we are all lonely, isolated minds trapped in physical bodies. We ache for connection -- more so at different times of our lives, and some more than others, with different levels of comfort for different levels -- but infants can't survive without affection and children and adults also need affection. (Alicorn's "love languages" appropriate here.) Whatever expressions of affection we prefer, I think we need all of them a little bit, and physical, platonic affection is something we just don't receive as often. (I hear this is especially true for the elderly.)

Signaling medical care is a token of physical care, and thus it stands in for physical affection -- even if there is no physical contact involved. If there is physical contact involved -- the placement of a band-aid on a knee -- then that is even better. I think it is important to realize that people do have a need for such physical affection, and medical situations provide a context for it (often at times when people are in need of more affection anyway).

Replies from: orthonormal
comment by orthonormal · 2010-08-05T20:28:27.817Z · LW(p) · GW(p)

Good point. But the next question ought to be whether there's a creative third alternative that would allow us to better signal our caring while being less wasteful. In some cases (the rising popularity of hospice rather than hospital for terminal illness), we can see this already being done.

(For a similar example, some couples planning weddings are moving away from the massively wasteful† registry option in favor of other ideas. It looks tacky to just ask for a cash donation, of course, but there really are third alternatives-- one couple asked for donations toward the specific events they planned for their honeymoon, while others ask for donations toward a favored list of charities. Etc.)

† Guests signal their generosity and regard for the new couple by buying them something from a set of nice things. However, the couple typically asks for things that are uselessly nicer than what they would buy themselves if it were their money, so as to signal sophistication. The end result is that a lot of money gets wasted on overly specific kitchen gadgets which will gather dust, or overly nice china that rarely gets used, etc.

Replies from: byrnema
comment by byrnema · 2010-08-05T21:27:15.152Z · LW(p) · GW(p)

Without specific examples, I hadn't thought of signaling care that was expensive. (I guessed it was emotionally expensive for Wei Dai.) But yes, taking someone to see the doctor when you know that wouldn't be useful would be quite expensive.

comment by jimmy · 2010-08-05T18:38:52.786Z · LW(p) · GW(p)

I haven't run into this problem with medicine, but I have with other things. In those cases I handled it by some combination of 1) explaining that I do care but that I don't think it's worth spending resources on expensive, otherwise useless signals, and 2) consciously trying to signal care when it's cheap, even though it feels fake and they know it's a conscious effort.

With medicine, if I actually do care, I'll research the problem and usually suggest a better treatment. The average doctor is more or less incompetent at what they're supposed to do (like everyone else), so using him and Google as resources is often enough to come up with a better game plan.

comment by NancyLebovitz · 2010-08-03T07:23:32.765Z · LW(p) · GW(p)

As an alternative to trying to figure out what you'd want if civilization fell apart, are there ways to improve how civilization deals with disasters?

If a first world country were swatted hard by a tsunami or comparable disaster, what kind of prep, tech, or social structures might help more than what we've got now if they were there in advance?

Replies from: h-H
comment by h-H · 2010-08-05T08:31:32.709Z · LW(p) · GW(p)

Off the top of my head, in no particular order:

Escape routes. For example, towns on coasts threatened by tsunami should have signs, paved roads, etc. leading to higher ground. Not all disasters are tsunamis, though, so this is hard to extrapolate.

In the immediate aftermath: family and social solidarity (by the latter I mean inter-ethnic), and also decentralized government: remember how badly FEMA did after Hurricane Katrina. From the little I know, disasters strike people unequally, so the generic responses so characteristic of bureaucracies are the last thing some survivors need.

Obviously there will be a shortage of necessities. Taking food as an example, you need enough stored food and water to last at least several days after the disaster, and on a national level you need ubiquitous local food production plus readily available (i.e. cheap) water sanitation tools. This might not be so feasible at the moment, however, so better roads and faster methods of delivering necessities are the top priority.

Building on that, the 'international community', or at least close allies, are a valuable asset that is quick to react and eager to help; this should always be accepted and well managed. Underline 'well managed' many times: the billions of aid dollars wasted in the wars in Afghanistan and Iraq are good indicators of how easy it is for money to 'disappear' when local infrastructure as well as local government are devastated, regardless of who or what did the devastating. A minor point here is that wars can be as damaging as 'natural' disasters, so there is some crossover between the situations.

Also, lasers :) ...or, lacking that, lots of guns plus people who know how to use them.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-05T14:10:17.793Z · LW(p) · GW(p)

Afaik, the government response after Katrina was unusually incompetent for a first world country, and it doesn't make sense to draw general conclusions from it.

Replies from: h-H
comment by h-H · 2010-08-06T22:00:04.560Z · LW(p) · GW(p)

That it happened in the US should be enough reason to question how other countries would fare. For example, the first world being comprised mostly of Europe, North America and Japan: when was the last time Europe got hit with something like Katrina? I'm not so well informed, but I'd hazard a guess that their response wouldn't be that much better given a similar situation. 'First world' is an all-encompassing term, though; Japan is obviously much better prepared than most when it comes to earthquakes and tsunamis, simply because they occur more often there. As it were, I think we do have very good 'instructions' on how to deal with such events, but not the organizational skills. See http://pubs.usgs.gov/circ/c1187/

And this for comparison: http://www.jma.go.jp/jma/en/Activities/earthquake.html

Do ignore my first reply, though; I wrote that after 40 hours of no sleep :)

comment by Pavitra · 2010-08-03T05:28:02.843Z · LW(p) · GW(p)

Has there ever been a practical proof-of-concept system, even a toy one, for futarchy? Not just a "bare" prediction market, but actually tying the thing directly to policy.

If not, I suggest a programming nomic (aka codenomic) for this purpose.

If you're not familiar with the concept of nomic, it's a little tricky to explain, but there's a live one here in ECMAScript/Javascript, and an old copy of the PerlNomic codebase here. (There's also a scholarly article [PDF] on PerlNomic, for those interested.)

Also, if you're not familiar with the concept of nomic, you don't read enough Hofstadter.

Replies from: None, Douglas_Knight, Blueberry
comment by [deleted] · 2010-08-04T01:54:37.025Z · LW(p) · GW(p)

I am trying to write a programming nomic that uses SpiderMonkey for its engine—essentially, EcmaNomic but without the part where it sucks. >.> Unfortunately, I've been a bit akratic about it lately. If anyone knows C or C++ or anything like that, I would love it if you could assist me in any way. Of course, encouragement would also help a bunch.

Replies from: sketerpot, Pavitra
comment by sketerpot · 2010-08-05T19:25:14.027Z · LW(p) · GW(p)

I'm curious why you want to use a particular JavaScript engine, rather than using whatever happens to be built into a web browser.

Anyway, if you link to a Github repository or something, I'll at least have a look at it and offer encouraging comments. I know C and C++, but prefer to avoid them. :-)

Your project sounds interesting.

Replies from: None
comment by [deleted] · 2010-08-08T17:14:02.465Z · LW(p) · GW(p)

SpiderMonkey is what happens to be built into a web browser; it's used by Mozilla browsers. :P

What you meant, I think, is having the code simply be run in people's browsers. The thing is, nomics are inherently multiplayer, so if the code ran in the people's browsers, each person's instance would have to connect to the other instances and... eh, it's just a lot easier and simpler to run it on the server side.

I might put it on GitHub at some point.

comment by Pavitra · 2010-08-04T03:01:13.892Z · LW(p) · GW(p)

Hi Warrigal!

I don't have the programming skills to know whether this is feasible, but have you considered writing it up as an EcmaNomic proposal?

I seriously doubt I could help with something like that except to poke you occasionally.

Replies from: None
comment by [deleted] · 2010-08-08T16:41:39.576Z · LW(p) · GW(p)

Hey, Pavitra.

If it were feasible, I would; unfortunately, the problems I see in EcmaNomic are not in the code it runs, but the way it runs it. In particular, in EcmaNomic, all code runs with the same permissions, and the nomic's entire state is sent out to the browser every time a page is loaded. This means that the nomic can't run code it doesn't trust, nor can it easily keep any secrets from the players, nor can it store an amount of data too large to send out to everyone often.

So, I'm writing an entirely new engine.

Oh, and as for poking, please do; the more this is kept on my mind, the more work I can do with it.

comment by Douglas_Knight · 2010-08-03T06:47:59.394Z · LW(p) · GW(p)

Nomic is way too complicated for a toy futarchy. RH suggests that a test system should be a single decision tied to a single conditional market. In particular, he suggests a fire-the-CEO market. You might call "futarchy" any conditional prediction market that is sponsored by (or even just known to) the decision makers. I am not aware of any such examples, but I think most prediction markets are fairly secret, so I would not be terribly surprised if some exist.

Replies from: None, Pavitra
comment by [deleted] · 2010-08-04T02:09:57.898Z · LW(p) · GW(p)

Nomic is way too complicated for a toy futarchy.

By no means. The great thing about complexity is that it can be managed: just break it into pieces and give a piece to each person. With a codenomic, that complexity can be self-managing to an extent.

Something like Wikipedia.

Anyway, futarchy in a codenomic? You can just come up with a simple English description of each possible change and let people "vote" based on it. It'll go nicely enough.

Replies from: JenniferRM
comment by JenniferRM · 2010-08-04T20:37:40.567Z · LW(p) · GW(p)

Given the downvote (parent currently -1) it might be worth pointing out that Warrigal has been an active player in the "best" nomic (which has been running since 1993) in a non-trivial capacity. E appears to have been a historian of Agora as of 2008, and (despite my lack of current knowledge of the nomic community since I stopped playing years and years ago) may well be one of the most experienced and historically knowledgeable "current nomic players" on the planet. On this basis I would weight eir opinion on the matter of "what can be done in a nomic" rather strongly.

As an aside, nomic is super fun, very educational, and quite time consuming. If anyone here is a college student with expectations of a year of on-and-off free time I would recommend joining Agora by reading the rules, signing up for the mailing lists, and having some fun. When you get older, you probably won't have time for actual playing, but will appreciate the memories :-)

Replies from: None
comment by [deleted] · 2010-08-08T17:23:54.004Z · LW(p) · GW(p)

I have been an active player in the past, but I'm not currently; I don't know when or if I'll get back into it. I was Agora's Historian only very briefly before that position was eliminated. My total nomic experience is definitely not more than a couple of years, as I only discovered it recently, and my historical knowledge is only what I've witnessed personally and the small amount that I happened to read once.

comment by Pavitra · 2010-08-03T11:10:40.989Z · LW(p) · GW(p)

Are you perhaps thinking of Suber's original, paper-and-tabletop, Nomic ruleset? The codenomics I've seen tend to consist of little other than bare self-amendment (direct democracy, generally). They're considerably simpler than most natural-language email nomics, which in turn tend to be perhaps strictly more complex than Suber, but also less intimidating in the style of prose.

Fire-the-CEO is no good as an early test system. It may be temptingly simple, but it will never actually be put into practice until after the theory has been fairly well-established, because you don't trust something important like the CEO to a system that's still being tested.

A test implementation needs to be a toy system, one that won't damage anything important if the theory turns out to be wrong. (It wouldn't need testing if you already knew what would happen.) Hence, a computer game.

comment by Blueberry · 2010-08-03T06:35:30.344Z · LW(p) · GW(p)

I would love to see a LessWrong Nomic game.

Replies from: None
comment by [deleted] · 2010-08-04T04:11:07.396Z · LW(p) · GW(p)

I'm going to be starting a game soon, but I'd planned on doing it on another forum (which is home to a few LW lurkers - who knows, we may vote to move the thread!) I've been working out the initial rules over the past few weeks. Should I make mention of it here when it starts?

Replies from: Blueberry
comment by Blueberry · 2010-08-04T04:19:22.831Z · LW(p) · GW(p)

Yes, please do, and PM me. I haven't played Nomic in a while, and would love to play with LW people. Is there some reason you don't want to start with either Suber's Initial Rules or "Rule 1: Players may modify these rules by majority vote"?

Replies from: RobinZ, None
comment by RobinZ · 2010-08-04T04:37:21.632Z · LW(p) · GW(p)

One problem with Suber's Initial Rules is that they do not enforce timing; that my (sadly expired) LiveJournal nomic lasted as long as it did I attribute in part to having explicit deadlines for turn-based play.

comment by [deleted] · 2010-08-04T04:44:15.225Z · LW(p) · GW(p)

Yes. This structure is modified to immediately abstract the notion of mutability, chunk rules by abstraction so that audiences of different levels all feel like they have a focal point, and most importantly, realize from the start the idea of guiding principles for sets of rules. It's an experiment, and one of the few initial primary principles is for players to make the game ever more fun to play.

comment by taw · 2010-08-03T00:42:50.883Z · LW(p) · GW(p)

I've heard many times here that Gargoyles involved some interesting multilevel plots, but the first few episodes had nothing like that, just standard Disneyishness. Any recommendations for which episodes are the best of the series, so I can check them out without going through the boring parts?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-08-04T10:09:32.743Z · LW(p) · GW(p)

Heard here, or on TV Tro... er... that wiki that will ruin your life?

Replies from: taw
comment by taw · 2010-08-04T16:49:51.918Z · LW(p) · GW(p)

There as well, but it pops up here every now and then also.

comment by Tiiba · 2010-08-02T05:44:06.925Z · LW(p) · GW(p)

I heard in a few places that a real neuron is nothing like a threshold unit, but more like a complete miniature computer. None of those places expanded on that, though. Could you?

Replies from: JanetK
comment by JanetK · 2010-08-02T08:53:01.335Z · LW(p) · GW(p)

I am not sure that I understand the exact difference between a threshold unit and a miniature computer that you want to shine a light on. Below are some aspects that may be of use to you:

  1. The whole surface of a neuron is not at the same potential (relative to the firing potential). Synapses are many (there can be thousands), along branching dendrites, of different types and strengths, so that the patterns of input that will cause firing from the dendrite end of the neuron are varied and numerous. This architecture is plastic, with synapses being strengthened and weakened by use and by events far away from the neuron. At the axon root synapses are fewer and their effect more individual. If the neuron fires it delivers its signal to many synapses on many other neurons. Again, this is plastic.

  2. The potential of the neuron surface is affected by many other things besides other neurons through the synapses. It is affected by electrical and magnetic fields generated by the whole brain (the famous waves etc.). It is affected by the chemical environment such as the concentration of calcium ions. These factors are very changeable depending on the activity of the whole brain, hormone levels, general metabolism etc.

  3. It is fairly easy to draw possible configurations of 1 or 2 neurons that mimic all the logic gates, discriminators (like the long-tailed pairs in old-fashioned electronics), delay lines, simple memory and so on (a minimal threshold-unit sketch follows below). But it is unlikely that this is the route to understanding how neurons act. Networks, parallel feedback loops, glial communities, modules, architecture of different parts of the brain and the like are probably a better level of investigation.
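
The threshold-unit sketch promised in (3), with hand-picked weights standing in for synaptic strengths; this is the caricature the question contrasts with a "miniature computer", and it is illustrative only:

    def threshold_unit(inputs, weights, threshold):
        """Fire (return 1) iff the weighted sum of the inputs reaches the threshold."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    AND = lambda a, b: threshold_unit([a, b], [1, 1], 2)
    OR  = lambda a, b: threshold_unit([a, b], [1, 1], 1)
    NOT = lambda a:    threshold_unit([a], [-1], 0)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT 0:", NOT(0), "NOT 1:", NOT(1))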

I hope this is of some help.

comment by zero_call · 2010-08-02T01:03:32.855Z · LW(p) · GW(p)

Suppose that inventing a recursively self improving AI is tantamount to solving a grand mathematical problem, similar in difficulty to the Riemann hypothesis, etc. Let's call it the RSI theorem.

This theorem would then constitute the primary obstacle in the development of a "true" strong AI. Other AI systems could be developed, for example, by simulating a human brain at 10,000x speed, but these sorts of systems would not capture the spirit (or capability) of a truly recursively self-improving super intelligence.

Do you disagree? Or, how likely is this scenario, and what are the consequences? How hard would the "RSI theorem" be?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-02T01:11:27.636Z · LW(p) · GW(p)

This seems like a bad analogy. If you could simulate a group of smart humans going at 10,000 times normal speed, say copies of Steven Chu or of Terry Tao, I'd expect that they'd be able to figure out how to self-improve pretty quickly. In about 2 months they would have had more than 1,600 years' worth of time to think about things. The human brain isn't a great structure for recursive self-improvement (while some aspects are highly modular, other aspects are very much not so), but given enough time one could work on improving that architecture.

comment by [deleted] · 2010-08-12T09:20:24.164Z · LW(p) · GW(p)

I don't understand why you should pay the $100 in a counterfactual mugging. Before you are visited by Omega, you would give the same probabilities to Omega and Nomega existing, so you don't benefit from precommitting to pay the $100. However, when faced with Omega, your probability estimate for its existence becomes 1 (and Nomega's becomes something lower than 1).

Now what you do seems to rely on the probability that you give to Omega visiting you again. If this was 0, surely you wouldn't pay the $100 because its existence is irrelevant to future encounters if this is your only encounter.

If this was 1, it seems at a glance like you should. But I don't understand in this case why you wouldn't just keep your $100 and then afterwards self-modify to be the sort of being that would pay the $100 in the future, and therefore end up with an extra hundred on top.

I presume I've missed something there, though. But once I understand that, I still don't understand why you would give the $100 unless you assigned a greater than 10% probability to Omega returning in the future (even ignoring the non-zero, but very low, chance of Nomega visiting).

Is anyone able to explain what I'm missing?
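
For reference, here is the arithmetic the standard pro-paying answers appeal to, assuming the usual statement of the problem ($10,000 offered on the counterfactual heads branch, $100 demanded on tails); the 10% threshold above presumably reflects different assumed stakes:

    PAYOUT_IF_HEADS = 10_000   # assumed: the payoff in the usual statement of the problem
    COST_IF_TAILS = 100

    def expected_value(pays_when_asked):
        """Expected value of a policy, evaluated before the coin is flipped."""
        if pays_when_asked:
            return 0.5 * PAYOUT_IF_HEADS - 0.5 * COST_IF_TAILS
        return 0.0

    print(expected_value(True))    # 4950.0
    print(expected_value(False))   # 0.0

The comparison there is made at the level of the policy, before the coin is flipped, rather than per expected future visit.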

comment by Kevin · 2010-08-09T07:54:08.042Z · LW(p) · GW(p)

India Asks, Should Food Be a Right for the Poor?

http://www.nytimes.com/2010/08/09/world/asia/09food.html?hp

comment by multifoliaterose · 2010-08-08T22:29:42.945Z · LW(p) · GW(p)

Do any of you know of any good resources for information about the effects of the activities of various portions of the financial industry on (a) national/world economic stability, and (b) distribution of wealth? I've been having trouble finding good objective/unbiased information on these things.

comment by roland · 2010-08-07T22:04:56.394Z · LW(p) · GW(p)

This might be interesting: there seems to be software to help with the "Analysis of Competing Hypotheses":

http://www.wired.com/dangerroom/2010/08/cia-software-developer-goes-open-source-instead/

http://www.competinghypotheses.org/

Replies from: Morendil
comment by Morendil · 2010-08-08T05:54:08.138Z · LW(p) · GW(p)

Would be wonderful if it wasn't vaporware...

comment by VNKKET · 2010-08-07T00:56:40.182Z · LW(p) · GW(p)

Is anyone willing to be my third (and last) matching donor?

Replies from: WrongBot
comment by WrongBot · 2010-08-07T01:37:34.525Z · LW(p) · GW(p)

I just donated $60 to the SIAI. I'd be happy to grab a screenshot of the confirmation email, if you'd like.

Replies from: Clippy, VNKKET
comment by Clippy · 2010-08-07T02:44:41.894Z · LW(p) · GW(p)

I have donated 1000 USD to SIAI, and I can't even access the financial system through the usual channels.

Replies from: VNKKET
comment by VNKKET · 2010-08-10T21:11:34.704Z · LW(p) · GW(p)

I know. That made me happy.

comment by VNKKET · 2010-08-10T19:43:18.690Z · LW(p) · GW(p)

Thank you very much. I matched it.

I honestly wouldn't be able to tell if you faked your confirmation e-mail, unless there's some way for random people to verify PayPal receipt numbers. So don't worry about the screenshot. Hopefully I'll figure out some convenient authentication method that works for the six donations in this scheme.

Replies from: WrongBot
comment by WrongBot · 2010-08-10T19:52:24.475Z · LW(p) · GW(p)

Thanks for doing this. It's always nice to have external motivation to do something good.

comment by RolfAndreassen · 2010-08-06T18:52:02.404Z · LW(p) · GW(p)

As people are probably aware, Hitchens has cancer, which is likely to kill him in the not-too-distant future. There does not seem to be much to be done about this; but I wonder if it's possible to pass the hat to pay for cryonics for him? Apart from the fuzzies of saving a life with X percent probability, which can be had much cheaper by sending food to Africa, it might serve as marketing for cryonics, causing others to sign up. Of course, this assumes that he would accept, and also that there wouldn't be a perception that he was just grasping at any straw available.

Replies from: ciphergoth, dclayh
comment by Paul Crowley (ciphergoth) · 2010-08-06T19:46:56.536Z · LW(p) · GW(p)

I'd love to persuade him, but no way am I passing a hat.

comment by dclayh · 2010-08-06T18:55:42.404Z · LW(p) · GW(p)
  1. Would Hitchens not be able to afford cryonics without donations?

  2. "perception that he was just grasping at any straw available"

What's wrong with this? Isn't that exactly what cryonics is: grasping the only available straw?

(Hm, how do I get a sentence inside the numbering indentation but outside the quotation?)

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-08-06T19:21:20.089Z · LW(p) · GW(p)

Would Hitchens not be able to afford cryonics without donations?

Perhaps so, but would he consider it the best use of his resources? Whereas if he gets it for free, take it or lose it, that's a different matter.

What's wrong with this? Isn't that exactly what cryonics is: grasping the only available straw?

For marketing purposes it would be an epic fail. In interviews he has made the point that no, he will not be doing any deathbed conversions unless he goes mad from pain. If cryonics is seen as only a deathbed conversion to a different religion (easy pattern completions: "Rapture of the Nerds", "weird beliefs = cults") it'll merely reinforce the perception of cryonics as something rather kooky which serious people needn't spend time on. Your point is correct, but will only work as PR if that's how it gets across to the public: This is a straw with an actual chance of working.

Replies from: dclayh
comment by dclayh · 2010-08-06T19:52:53.976Z · LW(p) · GW(p)

Ah, I see. Certainly it would be better if he made the choice well before he's at death's door/in terrible pain/etc..

comment by mattnewport · 2010-08-05T06:51:22.459Z · LW(p) · GW(p)

Curious what people thought of Moon - pretty interesting take on clone identity and one of the more thoughtful sci-fi movies of the last couple of years.

Replies from: gwern
comment by gwern · 2010-08-05T07:04:55.498Z · LW(p) · GW(p)

The basic premise was pretty silly. (You have strong AI, perfect duplication of adult humans, regular flights to Earth - and you need hundreds of them on the Moon to do... nothing?)

Replies from: Risto_Saarelma, mattnewport
comment by Risto_Saarelma · 2010-08-05T08:31:06.765Z · LW(p) · GW(p)

Conceiving a realistic, not-entirely-antagonistic AI that's smart enough to act as a character in a drama, yet not smart enough to learn to do pretty much everything the human characters do, isn't all that easy.

Possible outs are that for some reason or other, robotics just isn't an option for the AI to interact with the outside world, and humans need to be manipulated to serve its ends instead, or that there's some way in which the AI is a dramatic character with agency despite not quite having general intelligence.

comment by mattnewport · 2010-08-05T07:16:10.576Z · LW(p) · GW(p)

Fbzr bs gur qrgnvyf ner fvyyl ohg V gubhtug gur onfvp cerzvfr jnf ernfbanoyl fbhaq. Vg'f vzcyvrq gung gur NV vf abg cnegvphynel fgebat (rnfvyl gevpxrq naq jul ryfr jbhyq lbh arrq gur uhznaf). Cresrpg pybarf bs nqhyg uhznaf (be ng yrnfg Oynqr Ehaare rfdhr negvsvpvny zrzbevrf) vf n ceboyrz ohg xvaq bs arprffnel gb gur cybg. Gur erfphr syvtug ng gur raq qbrfa'g znxr gbgny frafr ohg vg'f n snveyl zvabe cybg cbvag.

Replies from: gwern
comment by gwern · 2010-08-05T07:52:31.838Z · LW(p) · GW(p)

The bit at the end is necessary to make it a happy ending; what's the alternative, everyone dies and a new one is defrosted? That would be a serious bummer ending.

And you know what, the AI is strong enough. Look at what the human does the entire time - he carries some canisters around, and drives out to look at a wrecked vehicle. (At least, I don't remember him doing anything more complex.) A really weak system can drive cars across the freaking Mojave desert! The canister stuff could be automated by an 8-bit microchip.

The clones are easily scrapped as a plot device. When the protagonist's shift is up, he climbs into the pod - and his memories are erased by an injection or something. He climbs out with total amnesia, is told the trip up damaged him and may have caused moderate memory loss or hallucinations, and sets to work at his new job... I think that's pretty sinister and creepy myself, possibly even better than the clones.

Replies from: mattnewport
comment by mattnewport · 2010-08-05T17:41:13.174Z · LW(p) · GW(p)

Without getting into a long and spoiler-laden discussion I'll just say that I think the movie is sufficiently ambiguous that a more charitable interpretation than yours is possible.

Overall I thought this was a relatively consistent and plausible plot as sci-fi movies go (admittedly a fairly low bar). I'm curious if you like any sci-fi movies and if so what some examples are of movies that you think do a better job in this regard?

comment by mattnewport · 2010-08-03T23:45:08.983Z · LW(p) · GW(p)

Fans of Eliezer's MoR might enjoy The Name of the Wind. The hero takes something of a rationalist approach to understanding the rules of magic in his world.

Replies from: Alicorn
comment by Alicorn · 2010-08-03T23:51:28.209Z · LW(p) · GW(p)

I put that book down in frustration when I realized that, of the dozen names the book jacket uses to refer to the main character, none of them are the one he's introduced with.

Replies from: mattnewport
comment by mattnewport · 2010-08-04T00:08:11.935Z · LW(p) · GW(p)

I think it's worth pressing on past this frustration. I read it on my Kindle after seeing it mentioned on Penny Arcade so never saw the blurb on the jacket.

comment by zaph · 2010-08-02T13:10:32.036Z · LW(p) · GW(p)

I came across a blurb on Ars Technica about "quantum memory" with the headline proclaiming that it may "topple Heisenberg's uncertainty principle". Here's the link: http://arstechnica.com/science/news/2010/08/quantum-memory-may-topple-heisenbergs-uncertainty-principle.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss

They didn't source the specific article, but it seems to be this one, published in Nature Physics. Here's that link: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1734.html

This is all well above my paygrade. Is this all conceptual? Are the scientists involved anywhere near an experiment to verify any of this? In a word, huh?

comment by lwta · 2010-08-05T01:47:14.215Z · LW(p) · GW(p)

why don't more male celebrities sell their semen?

Replies from: gwern, Oligopsony
comment by gwern · 2010-08-05T10:05:18.967Z · LW(p) · GW(p)

They do. We call the bargains 'one night stands' and the like.

Replies from: sketerpot
comment by sketerpot · 2010-08-05T18:49:30.452Z · LW(p) · GW(p)

I wonder, are celebrity one-night stands more socially acceptable than celebrity sperm banks? The former is morally condemned by a lot of people; the latter is weird.

Replies from: gwern
comment by gwern · 2010-08-06T04:04:25.521Z · LW(p) · GW(p)

The former is morally condemned by some people (may no longer even be a simple majority); the latter is weird and disliked by most everyone.

Replies from: Blueberry
comment by Blueberry · 2010-08-06T08:22:50.898Z · LW(p) · GW(p)

Weird I can see, but why is it disliked by most people? Jealousy?

Replies from: gwern
comment by gwern · 2010-08-06T08:47:28.178Z · LW(p) · GW(p)

Weirdness is sufficient to inspire hatred. (If you doubt me, spend some time browsing Wikipedia's AfDs.) The recent flurry of cryonics stuff shows that, I hope.

If you don't find that convincing, it's easy enough to manufacture reasons, e.g. it's covert eugenics and thus threatening, or it's a waste of resources, or it encourages risky behaviors, or it undermines traditional morality, or...

comment by Oligopsony · 2010-08-05T01:51:17.345Z · LW(p) · GW(p)

They have plenty of money, and it would make them look like weirdos. (Warning: some celebrities may have no compunction about looking like weirdos.)

comment by XiXiDu · 2010-08-06T14:37:45.075Z · LW(p) · GW(p)

Infinite torture means tweaking someone beyond recognition. For something to closely resemble what infinite torture literally means, you'll have to retain the means to value it, or otherwise it's an empty threat. You also have to account for acclimatization -- one must not get used to it, or it wouldn't be effective torture anymore.

I doubt this can be done while retaining one's personality or personhood to such an extent that the threat of infinite torture would be directed at you, or a copy of yourself, and not at something vaguely resembling you. In which case we could just as well talk about torturing a pain-experience-optimization device.

Another thought is that what we really fear isn't torture but bodily injury and/or dying. If someone threatened to cut off your arms, what would really be bad about it are the consequences of living without arms, not the pain associated with it. If I can't die anyway, as in an infinite torture scenario, nothing needs to stay functional; all that's left is the pain experience. Now, experiencing pain isn't fun by definition, but it isn't a very complex issue either. It's not as if you'd be contemplating the consequences much while being tortured for the past infinite years.

There are other forms of torture. But all the other forms, the ones we do not directly associate with the term pain, are even harder to achieve. If you are a knowing person, no AI will be able to cause you psychological problems and retain your personality at the same time. That is, it could surely make you suffer from depression, but since you are not the type of person who would naturally suffer from depression, that person isn't you. It might resemble you, but it's basically someone else. And that's as good as no torture at all, really.

Just some quick thoughts that came to my mind today while taking a shower :-)

Replies from: Richard_Kennaway, pjeby, NancyLebovitz, Kingreaper
comment by Richard_Kennaway · 2010-08-06T15:13:18.357Z · LW(p) · GW(p)

Infinite torture means to tweak someone beyond recognition.

Infinite torture is too big to think about. Try this:

"Imagine yourself naked and shut up in an iron coffin, that is put into the furnace and heated to a fiery red heat. You know what it is like to burn a finger -- how much more awful will it be to find yourself in that coffin, burning all over, and yet God in His power will not let you be consumed. You suffocate but cannot expire. You cannot endure it -- but you will, for a thousand years, and a thousand thousand, and that is less than a moment compared with the eternity of suffering that lies before you."

That is a reconstruction of a recollection of something I once read of a Christian preacher of former times saying. For more along these lines, just try this Google search. "Terror of hell" was a recognised psychological malady back in the day.

Will you blithely say, as you fall into the AI's clutches, "it won't be me experiencing it"?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T16:05:38.004Z · LW(p) · GW(p)

Will you blithely say, as you fall into the AI's clutches, "it won't be me experiencing it"?

I don't know.

Who's who? Conjoined Twins Share a Brain

The twins' happiness is apparent to all around them -- they laughed and played the entire day the "Nightline" crew was with them. "They ... have this connection between their, what's called the thalamus, between the thalami, one in each to the other," said Dr. Doug Cochrane, the twins' pediatric neurologist. "So there's actually a bridge of neural tissues in these twins, which makes them quite unique." It also makes them impossible to separate. Mom Felicia Hogan and others believe the connection has given the twins unique powers. "They share a lot of things normal conjoined twins don't," she said. "They have special abilities to see each other, see what each other's seeing through each other's eyes."

Why choose the future over the present? I just cannot identify enough with that being that is being tortured to feel anything now. I have a very good idea of what I want at every moment, and either I win like this or I don't care at all what happens otherwise. That is, I'm not going to accept any tradeoff, any compromise, big enough that it could be extorted from me by the threat of infinite torture.

Anyway, as I wrote before, given the Mathematical Universe/Many-Worlds/Simulation Argument everything good and bad is happening to you, i.e. is timeless.

I've had several blood vessel cauterizations in my nose over my lifetime. They just use a kind of soldering iron to burn your mucous membrane, all without any anesthesia. I always cried when I was younger. Now I'm so used to it that I don't care much anymore. So yeah, an AI might do that to my eyes, but why care; it'll have to restore them again anyway. What a waste of resources :-)

Oh and that preacher has no imagination. I can imagine scenarios MUCH worse than that.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-08-06T16:23:11.060Z · LW(p) · GW(p)

What would your answer be to this conundrum?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T16:34:05.720Z · LW(p) · GW(p)

I'd choose (b) without the amnesia, because I do not cooperate with such scum, and it would be an interesting experience and test for figuring out what to choose in similar cases.

Replies from: jimrandomh
comment by jimrandomh · 2010-08-06T18:16:10.141Z · LW(p) · GW(p)

I'd choose (b) without the amnesia, because I do not cooperate with such scum, and it would be an interesting experience and test for figuring out what to choose in similar cases.

Ming would not ask this, but - Are you sure you want to be tortured? Finding third options in bad scenarios is cool, but not if they're strictly worse than the options already presented. This is like choosing suicide in Newcomb's problem.

comment by pjeby · 2010-08-06T14:46:04.310Z · LW(p) · GW(p)

That is, it could surely make you suffer from depression, but since you are not the type of person who'd naturally suffer from depression, it's not you.

Either you really don't understand depression, or your definition of identity revolves around some very transient chemical conditions in the body.

Replies from: Kingreaper, XiXiDu
comment by Kingreaper · 2010-08-06T14:55:59.372Z · LW(p) · GW(p)

Good point. I should have picked up on that.

I'm a manic depressive. Does this mean I'm a different person at each level along the scale between mania and depression?

comment by XiXiDu · 2010-08-06T15:01:09.365Z · LW(p) · GW(p)

It wasn't meant that literally. What I meant, rather, is that the AI can make you fear pink balloons and then expose you to a world full of pink balloons. But if you extrapolate this reasoning, then the AI could just as well torture a torture-optimization device.

Here is where I'm coming from. All my life since abandoning religion I have feared that something could happen to my brain that makes me fall for such delusions again. But I think that fear is unreasonable. If I wanted to become religious again, I wouldn't care, because that would be my preference then. In other words, I'm suffering from my imagination of an impossible being that is me but not really me, one dumb enough to strive to be religious yet afraid of being religious.

That means some AI could torture me, but not infinitely so while retaining anything that the pre-torture me would, on average, care about.

P.S. I'm just having some fun trying to figure out why some people here are very horrified by such scenarios. I can't help but feel nothing about it.

Replies from: WrongBot
comment by WrongBot · 2010-08-06T15:12:32.102Z · LW(p) · GW(p)

I assign high negative utility to the torture of any entity. The scenario might be more salient if the entity in question is me (for whatever definition of identity you care to use), but I don't care much more about myself than I do other intelligences.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T15:28:00.521Z · LW(p) · GW(p)

The only reasons we care about other people are either to survive, i.e. to get what we want, or because it is part of our preferences to see other people being happy. Accordingly, trying to maximize happiness for everybody can be seen as purely selfish: either as an effort to survive, by making everybody want to make everybody else happy (given that it may be somebody else, not you, who wins), or simply because it makes oneself happy.

Replies from: WrongBot, thomblake
comment by WrongBot · 2010-08-06T15:58:45.519Z · LW(p) · GW(p)

You can reduce every possible motivation to selfishness if you like, but that makes the term kind of useless; if all choices are selfish, describing a particular choice as selfish has zero information content.

Accordingly, trying to maximize happiness for everybody can be seen as purely selfish: either as an effort to survive, by making everybody want to make everybody else happy (given that it may be somebody else, not you, who wins), or simply because it makes oneself happy.

You should be more cautious about telling other people what their motivations are. I would die to save the world, and I don't seem to be alone in this preference. And this neither helps me survive nor makes me momentarily happy enough to offset the whole dying thing.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T16:19:48.659Z · LW(p) · GW(p)

...but that makes the term kind of useless

That terminology is indeed useless. All it does is to obfuscate matters.

What's your point anyway?

comment by thomblake · 2010-08-06T17:20:35.793Z · LW(p) · GW(p)

You should be careful not to conflate "preference" and "things that make oneself happy". Or make that a more clearly falsifiable component of your hypothesis.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T18:36:05.644Z · LW(p) · GW(p)

Why would anyone have a preference detached from their personal happiness? I do what I do because it makes me feel good, because I think it is the right thing to do. Doing the wrong thing deliberately makes me unhappy.

  • I don't care much more about myself than I care about other intelligences.
  • I care about other intelligences and myself to an almost equal extent.
  • I care about myself and other intelligences.
  • I care about myself. I care about other intelligences.
  • I care about my preferences.

What does it mean to care more about others? Who's caring here? If you want other people to be happy, why do you want it if not for your own comfort?

I'm vegetarian because I don't like unnecessary suffering. That is, I care about myself not feeling bad, because if others are unhappy I'm also unhappy. If you'd rather die than cause a lot of suffering in others, that is not to say that you care more about others than about yourself; that is nonsense.

comment by NancyLebovitz · 2010-08-06T14:45:32.899Z · LW(p) · GW(p)

Severe chronic pain fouls up people's lives, even if the pain isn't related to damage. Pain hurts, and it's distracting.

I really wish you'd think more before you post.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T15:06:20.915Z · LW(p) · GW(p)

I'm not sure how this is related to what I was talking about. Are you suggesting you'll be tortured by doing what you want, except that you suffer from chronic pain along the way without being able to stop it?

I'm sorry, this is just an open thread comment, not a top-level post. Aren't we allowed to just chat and get feedback without thoroughly contemplating a subject?

Replies from: NancyLebovitz, cupholder
comment by NancyLebovitz · 2010-08-06T15:18:58.977Z · LW(p) · GW(p)

Another thought is that what we really fear isn't torture but bodily injury and/or dying.

That's what you said.

I'm not sure what you mean by your second sentence immediately above, but, while I'm not convinced by the infinite torture scenario that's worrying some people here (I probably don't have a full understanding of the arguments), I do think extreme pain (possibly with enough respite to retain personality) would be part of an infinite torture scenario.

As it happens, I have a friend with trigeminal neuralgia-- a disorder of serious pain without other damage. It can drive people to suicide.

I wouldn't take that sort of problem lightly in any case, but knowing someone who has one does add something of an edge.

As for what people are allowed here, I don't have the power to enforce anything. I am applying social pressure because I don't come here to read ill-thought-out material, and even though I'm answering you, I don't enjoy addressing the details of what's wrong with it.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-06T18:46:48.424Z · LW(p) · GW(p)

I am applying social pressure because I don't come here to read ill-thought-out material...

Awww, I'll leave you alone now. I've maybe posted 50 comments since the beginning of LW, so don't worry. The recent surge was just triggered by this fascinating incident where some posts and lots of comments have been deleted to protect us from dangerous knowledge.

I wish you good luck building your space empires and superintelligences.

Sorry again for stealing your precious time :-)

comment by cupholder · 2010-08-06T18:40:20.412Z · LW(p) · GW(p)

I'm sorry, this is just an open thread comment, not a top-level post. Aren't we allowed to just chat and get feedback without thoroughly contemplating a subject?

There's nobody compelling you to reply to the comments you feel are too thoroughly contemplating a subject.

comment by Kingreaper · 2010-08-06T14:54:47.490Z · LW(p) · GW(p)

A more direct problem with 3^^^^^^3 torture to me:

Duplicate moments have no value As Far As I Am Concerned.

Unless someone is being tortured in 3^^^^^^3 different ways, or is carrying on with an amazingly extended (and non-repetitious) life while the infinite torture is occurring, most of the moments are going to be precise repeats of previous moments.

comment by h-H · 2010-08-26T02:55:42.319Z · LW(p) · GW(p)

http://antiwar.com/ I use this site to get most of my 'world news'; what about you guys?

Replies from: TobyBartels, Emile, JoshuaZ
comment by TobyBartels · 2010-08-26T03:22:01.170Z · LW(p) · GW(p)

I read that site and I like it. But it's important to get a variety of views. I would never want to get most of my world news from a single site. In fact, if I had to pick a single source of world news for some reason, then I'd want to pick one that I disagree with, as long as it still has good coverage of what I care about.

comment by Emile · 2010-08-26T08:38:13.229Z · LW(p) · GW(p)

I generally don't look for world news any more; when I did, I'd usually follow Current Events on Wikipedia:

  • It's short,

  • It covers world events more evenly than French or American national media, and

  • It tries to be unbiased

When I want more in-depth coverage of a subject, I generally turn to Wikipedia too. I think it has a better trade-off between providing context and covering breaking events as they unfold (something I consider overvalued).

Replies from: h-H
comment by h-H · 2010-09-04T04:33:01.581Z · LW(p) · GW(p)

thanks, that's actually what I wanted to know.

comment by JoshuaZ · 2010-08-26T03:08:48.559Z · LW(p) · GW(p)

The website in question seems to be a partisan website that focuses on a narrow set of issues and has no coverage of science or math at all. I am unfortunately not surprised to note that even in the context of the issues they care about, they don't understand when science or math should be relevant. For example, I see no articles mentioning anything about cryptography. Anyone who has the apparent ideology of the website and doesn't realize that crypto matters is suffering from major failings of rationality. This website seems like one massive Mindkill.

Replies from: h-H
comment by h-H · 2010-08-26T03:17:46.738Z · LW(p) · GW(p)

But there are no non-partisan sites, are there? That aside, I meant political world news, not scientific news.

I get the point about politics being icky, but we Do live in this world, and IMO the way of the cloistered monk doesn't seem to be a viable option. Also, could you go into more detail about how crypto is so vastly important given the ideology of the website? Not to mention you reached that conclusion rather quickly, hmm.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-26T04:34:59.107Z · LW(p) · GW(p)

But there are no non-partisan sites, are there?

Partisanship is something that has degrees. It isn't an on/off thing. Moreover, there's a big difference between sources which try to be non-partisan and sources which are avowedly partisan. Using any single source, even a source claiming to be non-partisan, for news is a bad idea. Using a single avowedly partisan source is a really bad idea.

That aside, I meant political world news, not scientific news.

These aren't disconnected at all. Science matters for many political issues. Look at stem cell research, for example, or global warming. Or, to use one that causes less mindkilling among the general public, fusion research. Multiple countries have had political disputes over whether to fund ITER and, if so, by how much. The chance of magnetic confinement fusion succeeding is a scientific issue that has a lot to do with politics. Science impacts politics.

I get the point about politics being icky, but we Do live in this world, and IMO the way of the cloistered monk doesn't seem to be a viable option.

The claim isn't that one should avoid politics. But there are ways to minimize mindkilling. Using multiple sources regularly is a major one.

could you go into more detail about how crypto is so vastly important given the ideology of the website? Not to mention you reached that conclusion rather quickly, hmm

I didn't say it was "vastly important" but rather that it matters. I reached that conclusion from a quick perusal at the website in question base on the following reasoning: First, they had multiple articles about Wikileaks and clear support for Wikileaks as a good thing. If you care about anonymous leaks, how strong civilian crypto is matters a lot. More generally, they take a general anti-war position (clear from both the title and their articles). Crypto is relevant because strong crypto can make centralized control more difficult. It levels the playing field for asymetric warfare. How much it does so and will do so in the next few years is not clear. Thus, this isn't "vastly important" but it doesn't take much to see this as something that needs to be watched. The balance for guerrilla warfare already gives major advantages to guerillas especially when they have a somewhat sympathetic population base. If this continues, wars like those currently in Iraq and Afghanistan will become even more difficult for large militaries to engage.

Incidentally, I've now looked through the site in more detail and I see a tiny handful of articles mentioning cryptography, but it seems clear that they (a) don't know much about it and (b) don't understand its significance.

Replies from: h-H, rhollerith_dot_com
comment by h-H · 2010-09-04T04:30:43.217Z · LW(p) · GW(p)

a belated reply:

Now, as a generality your first statement is correct, but after searching for some years I've concluded the easiest method is in fact mild support for a partisan anti-war website. The reason being: on average, wars are more destructive than no wars, and they definitely induce irrationality.

A note about the particular partisan site: it's not a single source by any means -- I believe this is the cause of contention? -- it's actually an aggregation of 'anti-war' news from multiple sources, including mainstream channels and others. As such, the 'single source' label is negated.

Second statement: yes, science and politics are connected, but I believe this misses the point. In the domain of national policy, the more hawkish elements have been in control for quite a while now, pitting the US against third world destitute farmers and shepherds. That's not exactly a rational path, and so, as much as we should support stem cell research for example, that wasn't the angle I was coming from.

Third: answered, see above.

Fourth: it does matter as an issue to be concerned with, but surely not in the context of this discussion? Strategic theorizing about possible crypto use by, say, an Afghan warlord concerned for his poppy production is quite a marginal concern compared to the US government launching wars that cost trillions and benefit humanity little to nothing while increasing the likelihood of retaliation, etc.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-09-05T17:13:11.903Z · LW(p) · GW(p)

Now, as a generality your first statement is correct, but after searching for some years I've concluded the easiest method is in fact mild support for a partisan anti-war website. The reason being: on average, wars are more destructive than no wars, and they definitely induce irrationality.

What do you mean by support? In the context of your earlier remarks, "support" seems to mean "use as sole news source." I don't see how, even if one accepted your premises, one would get that as a conclusion.

it's actually an aggregation of 'anti-war' news from multiple sources, including mainstream channels and others. As such, the 'single source' label is negated.

This makes no difference. For purposes of getting relevant data and avoiding mind-killing, a partisan aggregator will be functionally identical to a partisan single source.

science and politics are connected, but I believe this misses the point. In the domain of national policy, the more hawkish elements have been in control for quite a while now, pitting the US against third world destitute farmers and shepherds. That's not exactly a rational path, and so, as much as we should support stem cell research for example, that wasn't the angle I was coming from.

In regard to the connection between science and politics, I'm not sure I can parse what you are saying, and insofar as I can parse it, it seems like you have a problematic attitude. Not everything is about simple ideological support or not, and your response above seems almost to be an indication of Mindkilling spreading from politics to science. This is precisely why I gave the example of ITER and whether or not it should be funded and if so by how much. Science impacts policy. And it isn't anything as simple as "oh, we should support this but not support that. Stem cells, yay! People who don't like stem cells, boo!" To use your example of stem cells, how many resources should go into stem cell research is quite complicated. The standard reaction against theistic arguments against embryonic stem cell research is to conclude that we should have massive amounts of research into stem cells. But that's not necessarily the case. We have a limited amount of resources that is going to go to biological and medical research. How much of that should go to stem cells? That is the question you should ask, rather than coming away with some general notion of "support."

Strategic theorizing about possible crypto use by, say, an Afghan warlord concerned for his poppy production is quite a marginal concern compared to the US government launching wars that cost trillions and benefit humanity little to nothing while increasing the likelihood of retaliation, etc.

You are missing the point. The change that crypto brings (and for that matter is actually bringing) is the benefit it brings to the little guy, the decentralized individuals, not the warlord. The person leaking documents or the resistance fighter/terrorist/guerrilla/etc. are the types who benefit from having strong crypto. This is why for a long time the US classified cryptography as munitions for export purposes. And saying that you don't think crypto is that important isn't an argument that has any validity in this context, given that, as you noted, you are talking about a news aggregation site, so they can easily include relevant articles. The lack of articles about crypto (and for that matter a fair number of other issues) on the site indicates an oversimplified view of what issues are relevant to warfare and ongoing war.

Incidentally, I'm curious what evidence you have that any of the US wars in the last decade have put the US up against "third world destitute farmers and shepherds" as the main opposition.

comment by RHollerith (rhollerith_dot_com) · 2010-08-26T06:03:36.914Z · LW(p) · GW(p)

Crypto is relevant because strong crypto can make centralized control more difficult. It levels the playing field for asymmetric warfare.

I do not see it. Can you explain?

Under my current models, the biggest advantage crypto can confer on the insurgents in Iraq or Afghanistan probably comes in the form of encrypting cell phones.

But if the insurgents deploy them, the occupying power declares that from now on, only non-encrypting phones are permitted and makes their declaration stick by taking control of the base stations.

The insurgents can respond by deploying radios that do not require base stations.

The occupying power's counter-response is to set up electromagnetic-radiation monitoring to detect and triangulate the radio signals.

The insurgents can respond by replacing voice conversations with text messages (well, voice messages for the illiterate fighters) that are recorded by the phone and then, when the user presses "Transmit", transmitted as quickly as possible to make them as hard as possible to triangulate.

But radio signals are a lot like light. Light is in fact just another form of electromagnetic radiation. So, as an aid to intuition, consider the situation in which the insurgents try to use visible light to communicate. I suppose that if there is already a lot of light, e.g., from the sun, the light from the communication devices might be able to hide. But it seems to me that the proper analogy or aid to intuition is probably the situation in which the insurgents try to use light to communicate at night, because if there is enough radiation for the rebels' signals to hide in, the occupying power shuts down the sources of that radiation (TV broadcasters, base stations for unencrypted cell phones, jammers deployed by the rebels).

I do not know enough about spread-spectrum radio to say whether it would give the rebels the advantage, but if it does, the rebels would not need to encrypt the messages, and your statement was that not spread-spectrum radio but rather crypto confers advantages on rebels.

ADDED. Come to think of it, the above analysis pertains mostly to conflicts where the stakes are sufficiently high: e.g., whether insurgents could take over Alaska, or whether the People's Republic as occupier could hold Taiwan against Taiwanese insurgents backed by the U.S. If the occupying power's main goal is "to bring democracy to Iraq" or to deny the territory of Afghanistan to al Qaeda, well, that might not be high enough stakes to justify the cost to the occupying power of achieving and maintaining comprehensive control of the communications infrastructure of the occupied territory and reimbursing the population for the economic losses caused by the restrictions on communication that such control entails. (Reimbursement would tend to be necessary to keep the population on the occupying power's side.)

Replies from: saturn, jimrandomh, JoshuaZ
comment by saturn · 2010-08-29T07:13:50.729Z · LW(p) · GW(p)

I do not know enough about spread-spectrum radio to say whether it would give the rebels the advantage, but if it does, the rebels would not need to encrypt the messages, and your statement was that not spread-spectrum radio but rather crypto confers advantages on rebels.

Spread-spectrum is crypto—the idea is to select a pattern (of frequencies) that eavesdroppers can't distinguish from random noise but which is predictable to the intended recipient.
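
To illustrate the idea, here is a minimal sketch (not how any fielded system actually works): both endpoints derive the same hop schedule from a shared secret, while anyone without the secret sees apparently random channel usage. The channel plan and seed below are made up, and a real system would use a cryptographic PRF rather than Python's random module.

```python
import random

# Minimal sketch: both endpoints share a secret seed and derive the same
# pseudorandom hop sequence; an eavesdropper without the seed sees what
# looks like noise scattered across the band.

CHANNELS_MHZ = [430.0 + 0.025 * i for i in range(80)]  # assumed channel plan

def hop_sequence(shared_seed: int, n_hops: int) -> list:
    """Derive a deterministic sequence of channels from the shared seed."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

# Sender and receiver, holding the same seed, agree on the schedule:
sender_schedule = hop_sequence(shared_seed=0xC0FFEE, n_hops=10)
receiver_schedule = hop_sequence(shared_seed=0xC0FFEE, n_hops=10)
assert sender_schedule == receiver_schedule
print(sender_schedule[:3])
```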

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-29T22:40:10.054Z · LW(p) · GW(p)

Thanks for the info.

comment by jimrandomh · 2010-08-28T18:05:33.740Z · LW(p) · GW(p)

You're forgetting about steganography; encrypted messages can be made to look like vacation photos, music, spam or something else. Using steganography sounds complicated when you describe it, but in practice all the details are handled transparently by some software, so it's just one-time setup.
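
A minimal sketch of the simplest version of the idea, least-significant-bit embedding into raw carrier bytes; real tools are more sophisticated and usually encrypt the payload first, but the principle is the same.

```python
# Minimal LSB steganography sketch: hide message bytes in the low bits of
# carrier bytes (e.g. raw pixel values). Illustration only -- real tools
# use more robust embedding and typically encrypt the payload first.

def embed(carrier: bytes, message: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the least significant bit
    return bytes(out)

def extract(carrier: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in carrier[:n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytes(range(256)) * 4          # stand-in for image pixel data
stego = embed(cover, b"meet at dawn")
assert extract(stego, len(b"meet at dawn")) == b"meet at dawn"
```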

Replies from: rhollerith_dot_com, RobinZ
comment by RHollerith (rhollerith_dot_com) · 2010-08-29T04:43:38.587Z · LW(p) · GW(p)

The occupying power announces that until the insurgents stop killing innocent civilians and sowing disorder, no one in contested territory is permitted to use the internet to transmit vacation photos, music, spam, etc. Civilians whose livelihoods are interrupted by these restrictions can apply for monetary compensation from the occupying power. Music and other entertainment will still be available from iTunes and other major centralized services (unless and until there are signs that these centralized services are being used to broadcast hidden messages).

In the Malay Emergency, the occupying power, which won the conflict, required 500,000 civilians to relocate to new villages surrounded by barbed wire, police posts and floodlit areas. Compared to that, restrictions on the internet are mild.

Also, the need for steganography reduces the bandwidth available to the insurgents -- perhaps below the level required for voice communication, forcing a fall-back to text, which requires the communicators to be literate and so denies the channel to a large fraction of the world's insurgents.

comment by RobinZ · 2010-08-29T01:07:22.218Z · LW(p) · GW(p)

"One-time" is a bit of a stretch (all it takes is being found out once to greatly impair the value of any particular method), but yeah - steganography is an established and worthy technique.

comment by JoshuaZ · 2010-08-29T00:28:13.922Z · LW(p) · GW(p)

You are thinking about radio communication much more than I was. My thought process centered on encrypted use of the internet to allow insurgents to communicate both with each other and with sympathizers or allies that are elsewhere. I agree with your analysis that crypto is unlikely to do much in terms of radio communications.