Open Thread: April 2010

post by Unnamed · 2010-04-01T15:21:03.777Z · LW · GW · Legacy · 539 comments

An Open Thread: a place for things foolishly April, and other assorted discussions.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads.  Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.

539 comments

Comments sorted by top scores.

comment by Baughn · 2010-04-02T19:37:15.966Z · LW(p) · GW(p)

It doesn't seem like it's ever going to be mentioned otherwise, so I thought I should tell you this:

LessWrong is writing a story, called "Harry Potter and the Methods of Rationality". It's just about what you'd expect; absolutely full of ideas from LW.com. I know it's not the usual fare for this site, but I'm sure a lot of you have enjoyed Eliezer's fiction as fiction; you'll probably like this as well.

Who knows, maybe the author will even decide to decloak and tell us who to thank?

Replies from: JGWeissman, Alicorn, ata, Unnamed, Document, LucasSloan, Vladimir_Nesov, CronoDAS
comment by JGWeissman · 2010-04-02T20:52:23.786Z · LW(p) · GW(p)

My fellow Earthicans, as I discuss in my book Earth In The Balance and the much more popular Harry Potter And The Balance Of Earth, we need to defend our planet against pollution. As well as dark wizards.

-- Al Gore on Futurama

comment by Alicorn · 2010-04-02T20:20:09.920Z · LW(p) · GW(p)

I'm 98% confident it's Eliezer. He's been taunting us about a piece of fanfiction under a different name on fanfiction.net for some time. I guess this means I don't have to bribe him with mashed potatoes to get the URL after all.

Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.

Replies from: Eliezer_Yudkowsky, Baughn, Matt_Simpson
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-02T22:55:56.287Z · LW(p) · GW(p)

Yeah, I don't think I can plausibly deny responsibility for this one.

Googling either (rationality + fanfiction) or even (rational + fanfiction) gets you there as the first hit, just so ya know...

Also, clicking on the Sitemeter counter and looking at "referrals" would probably have shown you a clickthrough from a profile called "LessWrong" on fanfiction.net.

Want to know the rest of the plot? Just guess what the last sentence of the current version is about before I post the next part on April 3rd. Feel free to post guesses here rather than on FF.net, since a flood of LW.com reviewers would probably sound rather strange to them.

Replies from: JGWeissman, Unnamed, Mass_Driver, CronoDAS, Kevin, Jack, Alicorn, Cyan, ShardPhoenix, CronoDAS, Liron, anonym, arundelo, Furcas
comment by JGWeissman · 2010-04-03T06:02:17.853Z · LW(p) · GW(p)

"Oh, dear. This has never happened before..."

Voldemort's Killing Curse had an epiphenomenal effect: Harry is a p-zombie. ;)

comment by Unnamed · 2010-04-04T03:38:20.557Z · LW(p) · GW(p)

I don't like where this is headed - Harry isn't provably friendly and they're setting him loose in the wizarding world!

comment by Mass_Driver · 2010-04-04T06:19:45.495Z · LW(p) · GW(p)

Also, there is a sharply limited supply of people who speak Japanese, Hebrew, English, math, rationality, and fiction all at once. If it wasn't you, it was someone making a concerted effort to impersonate you.

comment by CronoDAS · 2010-04-02T23:52:23.956Z · LW(p) · GW(p)

Do I have to guess right? ;)

comment by Kevin · 2010-04-03T03:29:39.137Z · LW(p) · GW(p)

It gets a strong vote of approval from my girlfriend. She made it about halfway through Three Worlds Collide without finishing, for comparison. We'll see if I can get my parents to read this one...

Edit: And I think this is great. Looking forward to when Harry crosses over to the universe of the Ultimate Meta Mega Crossover.

Replies from: Kevin, Kutta
comment by Kevin · 2010-04-03T21:16:10.814Z · LW(p) · GW(p)

Let's make that a Prediction. Harry becomes the ultimate Dark Lord by destroying the universe and escaping to the Metametaverse of the Ultimate Meta Mega Crossover.

comment by Kutta · 2010-04-03T15:52:00.676Z · LW(p) · GW(p)

Looking forward to when Harry crosses over to the universe of the Ultimate Meta Mega Crossover.

DO NOT want.

comment by Jack · 2010-04-15T05:33:50.629Z · LW(p) · GW(p)

This Harry is so much like Ender Wiggin.

Replies from: Cyan
comment by Cyan · 2010-04-15T06:10:20.318Z · LW(p) · GW(p)

Really? I picture him looking like a younger version of this.

Replies from: Jack
comment by Jack · 2010-04-15T06:42:40.444Z · LW(p) · GW(p)

This Harry and Ender are both terrified of becoming monsters. Both have a killer instinct. Both are much smarter than most of their peers. Ender's two sides are reflected in the monstrous Peter and the loving Valentine. The two sides of Potter-Evans-Verres are reflected in Draco and Hermione. The environments are of course very similar: both are in very abnormal boarding schools teaching them things regular kids don't learn.

Oh, and now the Defense Against the Dark Arts prof is going to start forming "armies" for practicing what is now called "Battle Magic" (like the Battle Room!).

And the last chapter's disclaimer?

The enemy's gate is Rowling.

If the parallels aren't intentional I'm going insane.

Replies from: NancyLebovitz, Cyan
comment by NancyLebovitz · 2010-04-15T14:48:57.243Z · LW(p) · GW(p)

And going back a few chapters, I'm betting that what Harry saw as wrong with himself is hair-trigger rage.

comment by Cyan · 2010-04-15T13:53:07.696Z · LW(p) · GW(p)

The enemy's gate is Rowling.

Ooo, I missed that. Yeah, OK.

comment by Alicorn · 2010-04-02T23:14:09.601Z · LW(p) · GW(p)

There is a reason I didn't look for it. It isn't done. Having found it anyway via link above, of course I read it because I have almost no self-control, but I didn't look for it!

Are you sure you wouldn't rather have the mashed potatoes? There's a sack of potatoes in the pantry. I could mash them. There's also a cheesecake in the fridge... I was thinking of making soup... should I continue to list food? Is this getting anywhere?

comment by Cyan · 2010-04-03T02:33:53.055Z · LW(p) · GW(p)

Holy fucking shit that was awesome.

comment by ShardPhoenix · 2010-04-04T08:23:05.998Z · LW(p) · GW(p)

This is a lot of fun so far, though I think McGonagall was in some ways more in the right than Harry in chapter 6. Also, I kind of feel like Draco's behavior here is a bit unfair to the wizarding world as portrayed in the canon - the wizarding world is clearly not at all medieval in many ways (especially in the treatment of women, where the behavior we actually see is essentially modern), so I'm not sure why it should necessarily be so in that way. Regardless of my nitpicking it's a brilliant fanfic and it's nice to see muggle-world ideas enter the wizarding world (which always seemed like it should have happened already).

comment by CronoDAS · 2010-04-03T02:14:46.506Z · LW(p) · GW(p)

You also have the approval of several Tropers, only one of whom is me.

comment by Liron · 2010-04-05T01:36:34.237Z · LW(p) · GW(p)

I normally read within {nonfiction} U {authors' other works} but I had such a blast with Methods of Rationality that I might try some more fiction.

Replies from: MBlume, Kevin
comment by MBlume · 2010-04-05T03:58:58.251Z · LW(p) · GW(p)

This story reminded me distinctly of Harry Potter and the Nightmares of Futures Past -- you might enjoy that one. Harry works until he's 30 to kill Voldemort, and by the time he succeeds, everyone he loves is dead. He comes up with a time travel spell that breaks if the thing being transported has any mass, so he kills himself, and lets his soul do the travelling. 30-year-old Harry's soul merges with 11-year-old Harry, and a very brilliant, very prepared, very powerful, and deeply disturbed young wizard enters Hogwarts.

Replies from: gwern, Alicorn, Liron
comment by gwern · 2010-04-16T22:12:14.709Z · LW(p) · GW(p)

I've finished reading that.

It's very well written technically - better than Eliezer, who overindulges in speechifying, hyperbole, and italics - but in general Harry doesn't seem disturbed enough, heals too easily, and there are too few repercussions from his foreknowledge. (Snape leaving and usurping Karkaroff at Durmstrang seems to be about it.)

That, and the author may never finish, which is so frustrating an eventuality that I'm not sure I could recommend it to anyone.

Replies from: Kevin
comment by Kevin · 2010-04-18T08:08:04.719Z · LW(p) · GW(p)

AH... spoiler!

Replies from: gwern
comment by gwern · 2010-04-18T12:56:04.080Z · LW(p) · GW(p)

Snape leaving is hardly a spoiler, since so far it hasn't affected anything...

comment by Alicorn · 2010-04-16T22:18:52.100Z · LW(p) · GW(p)

Similar in premise is "The Mirror of Maybe" (slash warning, never-updates warning) in which a fifth-year Harry is shown a hypothetical future and uses the extensive knowledge gained thereby to ditch school, disguise himself as an adult, and become the greatest Gary Stu of all time. Slightly AU magic system and, as I warned, it never freakin' updates.

comment by Liron · 2010-04-06T00:03:19.197Z · LW(p) · GW(p)

lol

comment by Kevin · 2010-04-05T04:27:58.561Z · LW(p) · GW(p)

I like all of Eliezer's fiction... if you want more like this, see the pseudo-sequel, http://lesswrong.com/lw/18g/the_finale_of_the_ultimate_meta_mega_crossover/ It is too insane of a story to recommend to most people, but assuming you've read Eliezer's non-fiction, you can jump right in.

Otherwise, just about all of Eliezer's fiction is worth reading; Three Worlds Collide is his best work of fiction.

comment by anonym · 2010-05-28T15:53:23.916Z · LW(p) · GW(p)

It's now the second hit on Google for (rationality + fiction)!

comment by arundelo · 2010-04-04T22:14:10.631Z · LW(p) · GW(p)

What proportion of the whole story are the current ten (nine) chapters likely to be?

(There is going to be more, right? Right?!)

Replies from: Eliezer_Yudkowsky, LucasSloan
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-04T22:19:52.306Z · LW(p) · GW(p)

It's almost done, actually. Here's a sneak preview of the next chapter:

Dumbledore peered over his desk at young Harry, twinkling in a kindly sort of way. The boy had come to him with a terribly intense look on his childish face - Dumbledore hoped that whatever this matter was, it wasn't too serious. Harry was far too young for his life trials to be starting already. "What was it you wished to speak to me about, Harry?"

Harry James Potter-Evans-Verres leaned forward in his chair, looking bleak. "Headmaster, I got a sharp pain in my scar during the Sorting Feast. Considering how and where I got this scar, it didn't seem like the sort of thing I should just ignore. I thought at first it was because of Professor Snape, but I followed the Baconian experimental method which is to find the conditions for both the presence and the absence of the phenomenon, and I've determined that my scar hurts if and only if I'm facing the back of Professor Quirrell's head, whatever's under his turban. Now it could be that my scar is sensitive to something else, like Dark Arts in general, but I think we should provisionally assume the worst - You-Know-Who."

"Great heavens, Harry!" gasped Dumbledore. He sat there with his head whirling. The boy was right that this was nothing to ignore. He dared not confront Professor Quirrell within the halls of Hogwarts, around the other students - he would have to figure out some way to lure Quirrell out of the castle -

But the grim young boy was still speaking. "Now, if the worst is true, then we know exactly where You-Know-Who is right now. And I don't think that's an opportunity we should pass up. Destroying his body didn't work last time, so I asked Hermione if she'd ever heard of anything that would destroy a soul, and she mentioned a method of executing criminals called the Dementor's Kiss..."

just kidding

Replies from: Kevin, Cyan, Baughn, Psy-Kosh
comment by Kevin · 2010-04-05T04:31:47.106Z · LW(p) · GW(p)

just kidding

that that is an excerpt or that you are almost done?

comment by Cyan · 2010-04-15T04:46:39.155Z · LW(p) · GW(p)

How proud of myself should I feel for figuring out how Comed-Tea works before Harry did? (Keeping in mind that it's been years since I internalized the facts that in the Harry Potter universe, prophecies work and Time-Turners don't create alternate time-lines, information not available to rational!Harry.)

Replies from: gwern
comment by gwern · 2010-04-16T22:10:03.270Z · LW(p) · GW(p)

How proud of myself should I feel for figuring out how Comed-Tea works before Harry did?

Not very. Tons of commentators glommed onto the non-time-warping explanation, and the fic all but tells us that this is a possibility, especially with the experiment vignette with Hermione on the train.

(Personally, I don't like the idea that the Comed-Tea affects only Harry; that mechanism leaves Luna Lovegood as an ethically depraved libeller.)

Replies from: CronoDAS, arundelo, Cyan
comment by CronoDAS · 2010-04-16T22:18:27.367Z · LW(p) · GW(p)

Or her father, at least. (I think there was an author's note about this - she says vague things and he turns them into ridiculous headlines.)

Replies from: gwern
comment by gwern · 2010-04-16T23:53:42.828Z · LW(p) · GW(p)

I think there was an author's note about this

Well, that's just great - how am I supposed to know that now with Eliezer's little erasure system?

she says vague things and he turns them into ridiculous headlines.

I suppose better Xenophilius being a depraved libeller than Luna... although as an adult it's even more inexcusable.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-21T01:36:17.705Z · LW(p) · GW(p)

I was looking over the old chapters and I found this:

One alert reviewer asked whether, if Luna is a seer, that means this is going to be an HPDM bottom!Draco mpreg fic. I regret that FFN does not allow me any larger font size in which to say NO. It honestly hadn't occurred to me that Luna might be a real seer - I'll have to decide whether to run with that or not - but I think we can all safely assume that if Luna is a seer, she said something about "light planting a seed in darkness", and Xenophilius, as always, interpreted this in rather the wrong way.

comment by arundelo · 2010-04-16T23:57:43.538Z · LW(p) · GW(p)

that mechanism leaves Luna Lovegood as an ethically depraved libeller.

Or just charmingly nutty.

comment by Cyan · 2010-04-16T23:14:55.567Z · LW(p) · GW(p)

Good to know.

comment by Baughn · 2010-04-05T11:08:44.914Z · LW(p) · GW(p)

And what does Voldemort have to do with anything?

He's not Harry's target, he's just a stumbling block in the middle. You're not fooling me that easily. :P

Replies from: sketerpot
comment by sketerpot · 2010-04-05T22:11:19.141Z · LW(p) · GW(p)

This Harry is so much more potentially powerful than canon Harry, therefore having canon Voldemort be the final boss would be a let-down. Eliezer's author description explicitly says that anything which strengthens the hero must be accompanied by a corresponding increase in the difficulties he will face, so I think we can be pretty confident that things are going to be much more awesome than just defeating Voldemort with the Potterverse equivalent of RPG rules exploitation.

comment by Psy-Kosh · 2010-04-05T04:23:14.965Z · LW(p) · GW(p)

just kidding

Besides, even if you had that happen in your story and had a dementor munching on the back of Quirrell's head, wouldn't the result be the equivalent of destroying only a single horcrux? (unless the bits of soul are linked in such a way that the dementor can suck them all up at a distance through the one...)

You can't escape writing the rest of this that easily! ;)

Also, hrm... would your comment here then count as you doing a parody fanfic of your own fanfic?

Replies from: Kevin
comment by Kevin · 2010-04-05T04:30:46.087Z · LW(p) · GW(p)

It could be one chapter where they debate whether or not to sic the dementor on Quirrell without even confronting him, then one chapter where they figure out how to magically triangulate and destroy all of the Horcruxes at once.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-04-05T04:40:50.961Z · LW(p) · GW(p)

Shhhh... Stop trying to make it easy for him to end the story sooner than later. ;)

(Never mind some of the grayer ethical aspects: in a world with a potentially eternal afterlife, Moldy Voldy's crimes may not stack up to that. (That is, "destroying a soul" >>> "killing someone" in the Potterverse, probably even "killing many someones". (OTOH, IIRC the Dementors were to a large extent his creatures, so we can probably safely assume that he was involved with or arranged for (or, more to the point, would in the future arrange for) plenty of soul consuming/destroying.)))

comment by LucasSloan · 2010-04-05T03:31:07.759Z · LW(p) · GW(p)

Well, by way of contrast, this point in the original book took us up to page 121 of 309. The story is currently 44,000 words, which is approximately half the length of the average novel. However, we still haven't seen any deviation from the original story that suggests Harry's opposition will be much harder, so I'm inclined to go with the first estimate, which gives us about 1/3 of total length so far. Not counting any sequels, of course.

comment by Furcas · 2010-04-03T04:50:37.354Z · LW(p) · GW(p)

Yeah, I don't think I can plausibly deny responsibility for this one.

Even if you'd used a different pseudonym and such, I'm sure a lot of us would have figured it out just from your writing style, the rationality explanations, and ... other things. Hell, the first chapter's disclaimer alone was a giveaway. :)

Anyway, I've just finished reading all nine chapters, and this is a dream come true for me. I've had a few fantasies of my own about how I would have done things differently (and better / more rationally) if I'd been in Harry's shoes, and they were a lot like this fanfic... except for the sheer, scintillating brilliance of your work, I mean.

This could be a good introduction/portal to rationality for a lot of people. I'll do what (little) I can to promote it and get you more readers, and I suggest other LWers do the same.

comment by Baughn · 2010-04-02T20:36:51.697Z · LW(p) · GW(p)

No, no, it's not Eliezer.

It's an alternate personality, which acts exactly the same and shares memories, that merely believes it's Eliezer.

Replies from: Kevin
comment by Kevin · 2010-04-03T02:43:32.600Z · LW(p) · GW(p)

Sounds like an Eliezer to me.

Replies from: Larks
comment by Larks · 2010-04-03T13:28:37.548Z · LW(p) · GW(p)

like an Eliezer, yes.

comment by Matt_Simpson · 2010-04-04T06:34:40.134Z · LW(p) · GW(p)

Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.

I know, right? This would have been a wonderful story for me to read 10 years ago or so, and not just because now I'm having difficulty explaining to my girlfriend why I spent Friday night reading a Harry Potter fanfic instead of calling her...

comment by ata · 2010-04-06T05:36:59.042Z · LW(p) · GW(p)

Magnificent. (I've sent it to some of my friends, most of whom are thoroughly enjoying it too; many of them are into Harry Potter but not advanced rationalism, so maybe it will turn some of them on to the MAGIC OF RATIONALITY!)

Edit: Sequel idea which probably only works as a title: "Harry Potter and the Prisoner's Dilemma of Azkaban". Ohoho!

Edit 2: Also on my wishlist: Potter-Evans-Verres Puppet Pals.

Replies from: gwern
comment by gwern · 2010-04-07T21:25:41.134Z · LW(p) · GW(p)

"Harry Potter and the Prisoner's Dilemma of Azkaban"

I could see that working as a prison mechanism, actually. Azkaban would be an ironic prison, akin to Dante's contrapasso. (The book would be an extended treatise on decision theory.)

The reward for both inmates cooperating is escape from Azkaban, the punishment really horrific torture, and the inmates are trapped as long as they are conniving cheating greedy bastards - but no longer.

(The prison could be like a maze, maybe, with all sorts of different cooperation problems - magic means never having to apologize for Omega.)

Replies from: pengvado
comment by pengvado · 2010-04-07T21:35:47.305Z · LW(p) · GW(p)

So if one prisoner cooperates and the other defects, then the defector goes free and the cooperator doesn't? That doesn't sound very effective for keeping conniving cheating greedy bastards in prison.

Replies from: gwern
comment by gwern · 2010-04-07T21:46:26.693Z · LW(p) · GW(p)

I figure one would probably have to modify the dilemma to give sub-escape rewards to the defector. (I realize this inversion destroys the specific logical structure, but that's artistic license for you.)
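
A minimal sketch of one way such a modified payoff table could look (the specific outcomes and rankings below are illustrative assumptions on my part, roughly in the spirit of what the replies below propose):

    # Toy "Azkaban dilemma" with sub-escape rewards for defectors (illustrative payoffs only).
    # Outcome ranking: 3 = released, 2 = chocolate, 1 = indirect dementor exposure, 0 = severely dementor'd.
    PAYOFFS = {
        ("C", "C"): (3, 3),  # both cooperate: both released
        ("C", "D"): (1, 2),  # cooperator exposed to a dementor; defector only gets chocolate
        ("D", "C"): (2, 1),
        ("D", "D"): (0, 0),  # both defect: both severely dementor'd
    }

    def best_response(opponent_move):
        """Move maximizing the row player's payoff against a fixed opponent move."""
        return max("CD", key=lambda move: PAYOFFS[(move, opponent_move)][0])

    print(best_response("C"), best_response("D"))  # C C
    # Cooperation is now strictly dominant, so this is no longer a true prisoner's
    # dilemma (the "inversion" conceded above), but it does mean only inmates who
    # stop defecting ever earn release.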

Replies from: bogdanb, Strange7
comment by bogdanb · 2010-04-09T14:48:20.475Z · LW(p) · GW(p)

Four possible outcomes: stay in prison (maintain status quo), be released, be (mind)raped by a Dementor, or receive some chocolate.

Distribute in the payoff matrix according to whatever Æsop you’re pushing to :-)

Replies from: Strange7
comment by Strange7 · 2010-04-09T16:05:26.516Z · LW(p) · GW(p)
  • Competitor gets chocolate, cooperator gets indirect dementor exposure.

  • Both compete, both severely dementor'd.

  • Both cooperate, both released, but bound together somehow.

comment by Strange7 · 2010-04-07T21:57:33.923Z · LW(p) · GW(p)

What about making the prison a hedge-maze sort of area, with lots of controllable access-points? Points earned by interactions can be spent to give yourself temporary access through a specific gate, any given pair of prisoners can only play the game a certain number of times per day, and unspent points decay - say, 5% loss per day. To earn enough points to pay your way out the front door, you effectively have to have access to the whole interior, and be on good terms with most of the people there.

Replies from: gwern
comment by gwern · 2010-04-07T22:17:21.542Z · LW(p) · GW(p)

The problem is that with 'currency' and iterated interactions like that, you start to approximate a concentration or POW camp, with considerable mingling and freedom, which allows bad'uns to thrive. At least, if my reading of literature about said camps (like World of Stone or King Rat) is anything to go by.

Replies from: Strange7, RobinZ
comment by Strange7 · 2010-04-07T22:52:36.199Z · LW(p) · GW(p)

In that case, the points would have to be associated with a task rather than simply cooperation.

Edit: also http://www.girlgeniusonline.com/comic.php?date=20070727

Replies from: gwern
comment by gwern · 2010-04-08T16:41:57.909Z · LW(p) · GW(p)

Sure, that's reasonable. And it makes the prison/maze much more general - there could be all sorts of rationalist/moral traps in it, and then one could make the pure prisoner's dilemma the final obstacle before escape.

I suppose the hard part is justifying in-universe the master rationalists who could create such a prison/maze - EY has clearly set Harry up in the fanfic as being the first master rationalist, and we can hardly postulate a hidden master when EY went to such pains with Draco to demonstrate the wizarding world's general moral bankruptcy (a hidden master would, one thinks, manage to bring the wizarding world up to at least muggle levels, if maybe not past it).

Replies from: wnoise, Strange7
comment by wnoise · 2010-04-09T04:09:35.125Z · LW(p) · GW(p)

a hidden master would, one thinks, manage to bring the wizarding world up to at least muggle levels, if maybe not past it

Why would one think that? This hidden master could be a total jerk-face.

comment by Strange7 · 2010-04-08T16:56:49.092Z · LW(p) · GW(p)

Presumably Harry himself will be bringing about some drastic reforms.

There's also the issue that wizards of the distant past might have been better rationalists than the current crop, but had less to work with, and the arts have simply been lost over time.

Replies from: gwern
comment by gwern · 2010-04-08T17:15:58.125Z · LW(p) · GW(p)

and the arts have simply been lost over time.

That's not a bad idea. It actually works well - the general loss of wizarding power is then due not to any genetic dilution by mudbloods, but because they're ignorant or lazy. It goes a little against the LW grain (we despise Golden Age myths), but since Rowling insists on a wizarding Golden Age, it's a good subversion.

Replies from: NancyLebovitz, Strange7
comment by NancyLebovitz · 2010-04-08T23:24:01.839Z · LW(p) · GW(p)

They don't have systems or habits for preserving knowledge reliably, and there's enough competition between wizards that a lot of the best spells (not to mention methods for developing powerful spells) won't be recorded, and might not even be taught.

comment by Strange7 · 2010-04-08T17:33:12.104Z · LW(p) · GW(p)

Actually, genetic dilution might still be a factor... if the rationalism of the founders was imperfect, and they didn't know much about heredity, the inability of most people to duplicate magical feats might have been interpreted as the result of an error in those finicky incantations. Emphasis on rote memorization of reliable effects would then come at the expense of higher-level invention and item creation techniques.

There are some possibly-relevant discussions of a history of magic in Tales of MU.

Replies from: gwern
comment by gwern · 2010-04-08T21:39:45.488Z · LW(p) · GW(p)

There are some possibly-relevant discussions of a history of magic in Tales of MU.

You'll have to link them, then (unless you mean the very funny section about the science cultists); I read a bit of Tales of MU, but got weirded out after a while.

Replies from: Strange7
comment by Strange7 · 2010-04-09T15:51:48.759Z · LW(p) · GW(p)

“The first codified definition of a ‘wizard’ as opposed to other less formal and implicitly inferior magic-users was someone who did ‘name-workings’. Nowadays, the formal definition of ‘wizard’ is somebody who uses spells, whether they invoke true names or not, as opposed to sorcerers, who throw around raw techniques…

http://www.talesofmu.com/story/book0x/378

I couldn’t help but think how close this approach was to the old “scientific” method of formalizing spells that had been left behind in the dark ages, the way that resulted in spells that only worked at all under highly select circumstances and could rarely be duplicated by more than a handful of people.

http://www.talesofmu.com/story/book0x/235

comment by RobinZ · 2010-04-07T22:34:43.817Z · LW(p) · GW(p)

The reference to King Rat I can identify with an Internet search - what's World of Stone?

Replies from: gwern
comment by gwern · 2010-04-07T23:37:49.693Z · LW(p) · GW(p)

Try "Tadeusz Borowski". Sample quotes:

"I was told about a camp where transports of new prisoners arrived each day, dozens of people at a time. But the camp only had a certain quantity of daily food rations - I cannot recall how much, maybe enough for 2, or 3 thousand - and Herr Kommandant disliked to see the prisoners starve. Each man, he felt, must receive his allotted portion. And always the camp had a few dozen men too many. So every evening a ballot, using cards or matches, was held in every block, and the following morning the losers did not go to work. At noon they were led out behind the barbed-wire fence and shot."

A (nonfiction) quote I sometimes think of in connection with World of Stone, though it's actually from The Captive Mind, is:

"Had Beta been French, perhaps he would've been an existentialist, though that wouldn't've satisfied him. He smiled contemptuously at mental speculations, for he remembered seeing philosophers fighting over garbage in the concentration camps. Human thought had no significance; subterfuge & self-deception were easy to decipher: all that really counted was the movement of matter."

comment by Unnamed · 2010-04-02T23:49:56.131Z · LW(p) · GW(p)

Harry Potter as a boy genius smart-aleck aspiring rationalist works surprisingly well. And the idea of extending the pull of rationalism a bit beyond its standard sci-fi hunting grounds using Harry Potter fanfiction is brilliant.

comment by Document · 2010-04-18T06:45:46.264Z · LW(p) · GW(p)

For the record, it's currently the first Google autocomplete result for "harry potter and the me", with apparently multiple pages of forum posts and such about it.

Replies from: Jack
comment by Jack · 2010-04-18T07:39:48.892Z · LW(p) · GW(p)

So people get really invested in this fan-fiction stuff, huh?

comment by LucasSloan · 2010-04-02T22:43:47.118Z · LW(p) · GW(p)

Fb, sebz gur cbvag bs ivrj bs na Nygreangr-Uvfgbel, V nffhzr gur CBQ vf Yvyyl tvivat va naq svkvat Crghavn'f jrvtug ceboyrz. Gung jbhyq graq gb vzcebir Crghavn'f ivrj bs ure zntvpny eryngvirf, naq V nffhzr gur ohggresyvrf nera'g rabhtu gb fnir Wnzrf naq Yvyyl sebz Ibyqrzbeg. Tvira gur infgyl vapernfrq vagryyvtrapr bs Uneel, V nffhzr ur vf abg trargvpnyyl gur fnzr puvyq jr fnj va gur obbxf, nygubhtu vzcebirq puvyqubbq ahgevgvba pbhyq nyfb or n snpgbe.

Replies from: Baughn
comment by Baughn · 2010-04-03T11:17:15.574Z · LW(p) · GW(p)

Not having the same father would tend to imply not being genetically the same, yes.

This isn't the Harry Potter we know.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-04-03T11:39:39.930Z · LW(p) · GW(p)

He does have the same genetic parents; it's his biological aunt, not his biological mother, who married someone different in this timeline.

I recently received your letter of acceptance to Hogwarts, addressed to Mr. H. Potter. You may not be aware that my genetic parents, James Potter and Lily Potter (formerly Lily Evans), are dead. I was adopted by Lily's sister, Petunia Evans-Verres, and her husband, Michael Verres-Evans.

Replies from: Baughn
comment by Baughn · 2010-04-03T15:06:59.122Z · LW(p) · GW(p)

I feel rather foolish now. Of course he does.

Should still be a genetic reshuffling, at least. The point of departure seems to be before his birth, so the butterfly effect would be in effect.

comment by Vladimir_Nesov · 2010-04-02T21:02:18.767Z · LW(p) · GW(p)

The low probability of magic should make any effort spent on testing the hypothesis unjustified. The dogma of testing theories no matter how improbable is generally incorrect. (One should distinguish improbable from silly, though.)

Replies from: Eliezer_Yudkowsky, Baughn, Matt_Simpson, Alicorn
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-04T01:31:13.517Z · LW(p) · GW(p)

I think you underestimate the real-world value of Just Testing It. If I got a mysterious letter in the mail and Mom told me I was a wizard and there was a simple way to test it, I'd test it. Of course I know even better than rationalist!Harry all the reasons that can't possibly be how the ontologically lowest level of reality works, but if it's cheap to run the test, why not just say "Screw it" and test it anyway?

Harry's decision to try going out back and calling for an owl is completely defensible. You just never have to apologize for doing a quick, cheap experimental test, pretty much ever, but especially when people have started arguing about it and emotions are running high. Start flipping a coin to test if you have psychic powers, snap your fingers to see if you can make a banana, whatever. Just be ready to accept the result.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-04T08:13:21.569Z · LW(p) · GW(p)

You just never have to apologize for doing a quick, cheap experimental test, pretty much ever

This (injunction?) is equivalent to ascribing much higher probability to the hypothesis (magic) than it deserves. It might be a good injunction, but we should realize that at the same time, it asserts inability of people to correctly judge impossibility of such hypotheses. That is, this rule suggests that the probability of some hypothesis that managed to make it into your conscious thought isn't (shouldn't be believed to be) 10^-[gazillion], even if you believe it is 10^-[gazillion].

Replies from: bogdanb, RobinZ
comment by bogdanb · 2010-04-09T15:29:06.817Z · LW(p) · GW(p)

I guess it depends a bit on how you came to consider the proposition to be tested, but I’m not sure how to formalize it.

I wouldn’t waste a moment’s attention in general on some random person proposing anything like this. But if someone like my mother or father, or a few of my close friends, suddenly came to me with a story like this (which, mark you, is quite different from the usual silliness), I would spend a couple of minutes doing a test before calling a psychiatrist. (Though I’d check the calendar first, in case it's April 1st.)

Especially if I were about that age. I was nowhere near as bright and well-read as rationalist!Harry at that age (nor am I now). I read a lot though, and I had a pretty clear idea of the distinction between fact and fiction, but I remember I just didn’t have enough practical experience to classify new things as likely true or false at a glance.

I remember at one time (between 8 and 11 years old) I was pondering the feasibility of traveling to Florida (I grew up in Eastern Europe) to check if Jules Verne’s “From the Earth to the Moon” was real or not, by asking the locals and looking for remains of the big gun. It wasn’t an easy test, so I concluded it wasn’t worth it. However, I also remember I did check if I had psychic powers by trying to guess cards and the like; that took less than two minutes.

comment by RobinZ · 2010-04-04T13:36:00.892Z · LW(p) · GW(p)

The probability that you have no grasp on the situation is high enough to justify an easy, simple, harmless test.

And I'd appreciate it if spoilers for the story were ROT13'd or something - I haven't read it.

Replies from: Kevin, ShardPhoenix
comment by Kevin · 2010-04-05T02:07:19.925Z · LW(p) · GW(p)

You mean the plot point that Harry Potter tested the Magic hypothesis? I don't think most plot points in the introductions of stories really count as spoilers.

Replies from: CronoDAS, RobinZ
comment by CronoDAS · 2010-04-05T02:15:30.293Z · LW(p) · GW(p)

Yeah, that's not a spoiler any more than "Obi-Wan Kenobi is a Jedi" is a spoiler.

Replies from: DonGeddis
comment by DonGeddis · 2010-04-05T20:14:43.785Z · LW(p) · GW(p)

A "Jedi"? Obi-Wan Kenobi?

I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.

comment by RobinZ · 2010-04-05T11:13:53.368Z · LW(p) · GW(p)

Ah, of course. That's fine, then.

Although you might want to let EY know that someone posted unobfuscated spoilers for ... Chapter 10, was it? - in violation of community standards. ;)

comment by ShardPhoenix · 2010-04-05T00:07:57.678Z · LW(p) · GW(p)

I agree, though I think the particular test chosen in the story didn't make much sense - even if magic was real I wouldn't have expected that to have any effect.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T00:24:55.284Z · LW(p) · GW(p)

The most astonishing thing about spoilers, I find, is that they are often provided to you with exactly as much enthusiasm after you announce that you haven't seen the story as before.

Replies from: wnoise, ShardPhoenix
comment by wnoise · 2010-04-05T03:36:13.997Z · LW(p) · GW(p)

This isn't surprising at all.

People who give out spoilers when discussing a work generally don't care that you don't like to hear spoilers before you've experienced a work.

comment by ShardPhoenix · 2010-04-05T01:45:07.834Z · LW(p) · GW(p)

Considering you've read the rest of the posts in this thread, that's not a spoiler, just my opinion about what you've already been discussing.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T01:45:46.180Z · LW(p) · GW(p)

I haven't.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-04-05T03:27:24.891Z · LW(p) · GW(p)

Well, it was a bit silly to comment on it without context then. At any rate no major/obvious spoilers have been posted here.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T11:03:16.391Z · LW(p) · GW(p)

That's a relief.

comment by Baughn · 2010-04-02T21:11:33.419Z · LW(p) · GW(p)

It was strongly implied that some element of Harry's mind had skewed that prior dramatically. Perhaps his horcrux, perhaps infant memories, but either way it wasn't as you'd expect. Even for an eleven-year-old.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-02T21:38:46.524Z · LW(p) · GW(p)

He didn't bite the bullet, didn't truly disbelieve his corrupted hardware. This is a problem that has to be solved by introspection and a better theory of decision-making. It's not enough to patch it by observation in each particular case, letting reality compute a correct conclusion above your defunct epistemology, even when you have all the data you might possibly need to infer the conclusion yourself.

Replies from: Mass_Driver, bogdanb
comment by Mass_Driver · 2010-04-09T16:05:42.881Z · LW(p) · GW(p)

Why not? I mean, granted, there might be occasions when you need the ability to disbelieve your hardware, but I'm having trouble thinking of any. It's unlikely enough that you'll go crazy; it's still more unlikely that you'll go crazy in such a way that your future depends on immediately and decisively noticing that you're mad. If you enjoy running tests and have the resources for it, why not indulge?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-09T16:23:13.453Z · LW(p) · GW(p)

It's unlikely enough that you'll go crazy; it's still more unlikely that you'll go crazy in such a way that your future depends on immediately and decisively noticing that you're mad.

I'm talking about not interpreting intuitive "feel" for a belief as literally representing consciously endorsed level of belief. It's perfectly normal for your emotion to be at odds with your beliefs (see scope insensitivity for example). This kind of madness doesn't imply being less functional than average. We are all mad here. If you feel that "magic might be real", but at the same time believe that "magic can't be real, no chance", you shouldn't take the side of the feeling. The feeling might constitute new evidence for your beliefs to take into account, but the final judgment has to involve conscious interpretation, you shouldn't act on emotion directly. And sometimes, this means acting against emotion (intuitive expectation). In this case in particular, intuition is weak evidence, so it doesn't overpower a belief that magic isn't real, even if it's strong intuition.

comment by bogdanb · 2010-04-09T15:32:36.421Z · LW(p) · GW(p)

Do you realize how many catgirls were killed because of you today?

comment by Matt_Simpson · 2010-04-03T05:58:38.073Z · LW(p) · GW(p)

One of the goals was to get his parents to stop fighting over whether or not magic was real.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-03T09:38:54.999Z · LW(p) · GW(p)

How would it work? As the expected outcome is that no magic is real, we'd need to convince the believer (mother) to disbelieve. An experiment is usually an ineffective means to that end. Rather, we'd need to mend her epistemology.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-03T21:02:52.392Z · LW(p) · GW(p)

Well, Harry did spend some time making sure that this experiment would convince either of his parents if it went the appropriate way, though he had his misgivings. As a child who isn't respected by his parents, what better options does he have to stop the fight? (serious question)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-03T21:39:04.916Z · LW(p) · GW(p)

Having no good options doesn't make the remaining options any good. This is a serious problem, for example, when people try to explain apparent miracles they experience: they find the best explanation they are able to come up with, and decide to believe that explanation, even if it has no right to any plausibility, apart from the fact it happened to be the only one available.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-03T21:58:29.690Z · LW(p) · GW(p)

So you think that the best response is to do nothing about the fight. Perhaps, but setting up the experiment didn't take that much effort. What was Harry's opportunity cost here? Is it that high?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-03T22:13:11.391Z · LW(p) · GW(p)

It's not completely out of the question that it was a fine rhetorical effort (though it's not particularly plausible), but it's still not concerned with finding out the truth, which was presented as the goal.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-03T22:33:26.853Z · LW(p) · GW(p)

There seemed to be two goals to me - finding the truth and stopping the fight. I'll have to reread that section later.

comment by Vladimir_Nesov · 2010-04-03T09:37:12.241Z · LW(p) · GW(p)

A valid point.

comment by Alicorn · 2010-04-02T21:08:27.676Z · LW(p) · GW(p)

You have not taken into account that testing magical hypotheses may be categorized as "play" and pay its rent on time and effort accordingly.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-02T21:36:15.467Z · LW(p) · GW(p)

Then this activity shouldn't be rationalized as being the right decision specifically for the reasons associated with the topic of rationality. For example, the father dismissing the suggestion to test the hypothesis is correct, given that the mere activity of testing it doesn't present him with valuable experience.

You've just taken the conclusion presented in the story, and wrote above it a clever explanation that contradicts the spirit of the story.

comment by CronoDAS · 2010-04-02T20:39:41.214Z · LW(p) · GW(p)

Wow, I wish I saw this sooner. And there are already 99 pages of reviews!

ETA: Wow, now there's 100...

comment by Zubon · 2010-04-04T17:31:09.330Z · LW(p) · GW(p)

Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.

Replies from: Eliezer_Yudkowsky, Tyrrell_McAllister, Emile, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-04T18:52:28.624Z · LW(p) · GW(p)

AAAAAIIIIIIIIEEEEEEEE

BOOM

Replies from: Alicorn
comment by Alicorn · 2010-04-04T18:59:49.841Z · LW(p) · GW(p)

Clearly, your math-teacher-biting powers are called for.

Replies from: CronoDAS, JGWeissman
comment by CronoDAS · 2010-04-04T21:24:36.062Z · LW(p) · GW(p)

In first grade, I threw a crayon at the principal. Can I help? ;)

comment by JGWeissman · 2010-04-04T19:08:29.201Z · LW(p) · GW(p)

Let's not get too hasty. They still might know logarithms. ;)

comment by Tyrrell_McAllister · 2010-04-04T18:57:24.868Z · LW(p) · GW(p)

It would have been even more frustrating had the protagonist not also been guessing the teacher's password. It seemed that the protagonist just had a better memory of what more authoritative teachers had said.

The protagonist was closer to being able to derive π himself, but that played no part in his argument.

Replies from: JGWeissman
comment by JGWeissman · 2010-04-04T19:06:09.773Z · LW(p) · GW(p)

There's no evidence that the protagonist didn't just have a better memory of what more authoritative teachers had said.

The protagonist knew that pi is defined as the ratio of a circle's circumference and diameter, and the numbers that people have memorized came from calculating that ratio.

The protagonist knew that pi is irrational, that irrational means it cannot be expressed as a ratio of integers, and that 7 and 22 are integers, and that therefore pi cannot be exactly expressed as 22/7.

The protagonist was willing to entertain the theory that 22/7 is a good enough approximation of pi to 5 digits, but updated when he saw that the result came out wrong.
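
For concreteness, a minimal arithmetic check of the digits in question (a quick Python sketch):

    # Compare 22/7 with pi to a handful of decimal places.
    import math

    approx = 22 / 7
    print(f"22/7 = {approx:.6f}")   # 3.142857
    print(f"pi   = {math.pi:.6f}")  # 3.141593
    # The five-digit values differ (3.1429 vs 3.1416), which is the
    # "result came out wrong" referred to above.
    print(f"{approx:.4f}" == f"{math.pi:.4f}")  # False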

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-05T12:30:48.545Z · LW(p) · GW(p)

The protagonist knew that pi is defined as the ratio of a circle's circumference and diameter, and the numbers that people have memorized came from calculating that ratio.

The protagonist knew that pi is irrational, that irrational means it cannot be expressed as a ratio of integers, and that 7 and 22 are integers, and that therefore pi cannot be exactly expressed as 22/7.

These are important pieces of knowledge, and they are why I said that the protagonist was closer to being able to derive π himself.

The protagonist was willing to entertain the theory that 22/7 is a good enough approximation of pi to 5 digits, but updated when he saw that the result came out wrong.

The result only came out wrong relative to his own memorized teacher-password. Except for his memory of what the first five digits of π really were, he gave no argument that they weren't the same as the first five digits of 22/7.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T14:23:08.655Z · LW(p) · GW(p)

Y'know, there's something this blogger I read once wrote that seems kinda applicable here:

I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-05T17:32:34.171Z · LW(p) · GW(p)

Y'know, there's something this blogger I read once wrote that seems kinda applicable here:

I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.

I did not criticize the protagonist. He acted entirely appropriately in his situation. Trying to derive digits of π (by using Archimedes's method, say) would not have been an effective way to convince his teammates under those circumstances. In some cases, such as a timed exam, going with an accurately-memorized teacher-password is the best thing to do. [ETA: Furthermore, his and our frustration at his teammates was justified.]

But the fact remains that the story was one of conflicting teacher-passwords, not of deep knowledge vs. a teacher-password. Although the protagonist possessed deeper knowledge, and although he might have been able to reconstruct Archimedes's method, he did not in fact use his deeper knowledge in the argument to make 3.1415 more probable than the first five digits of 22/7.

Again, I'm not saying that he should have had to do that. But it would have made for a better anti-teacher-password story.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T21:06:35.654Z · LW(p) · GW(p)

I see what you mean. I think the confusion we've had on this thread is over the loaded term "teacher's password" - yes, the question only asked for the password, but it would be less misleading to say that both the narrator and the schoolteachers had memorized the results, but the narrator did a better job of comprehending the reference material.

comment by Emile · 2010-04-08T15:38:44.618Z · LW(p) · GW(p)

Quite depressing. Makes me even less likely to have my kids educated in the States. I wonder how bad Europe is on that count? Is it really better here? It can be hard to tell from inside; correcting for the fact that most info I get is biased one way or the other leaves me with pretty wide confidence intervals.

comment by timtyler · 2010-04-04T19:18:29.679Z · LW(p) · GW(p)

22/7 gives "something like" something like 3.1427 ?!? Surely it is more like some other things than that!

Replies from: RobinZ
comment by RobinZ · 2010-04-04T23:03:14.266Z · LW(p) · GW(p)

Well, yes - it's more like 3.142857 recurring. But that's fairly minor.

(Footnote: I originally thought the teachers had performed the division incorrectly, rather than the anonymous commenter having incorrectly recounted the number, so this comment was briefly incorrect.)

comment by [deleted] · 2010-04-01T17:46:36.350Z · LW(p) · GW(p)

After the top level post about it, I bought a bottle of Melatonin to try. I've been taking it for 3 weeks. Here are my results.

Background: Weekdays I typically sleep for ~6 hours, with two .5 hour naps in the middle of the day (once at lunch and once when I get home from work). Weekends I sleep till I feel like getting up, so I usually get around 10-11 hours.

I started with a 3mg pill, then switched to a ~1.5 mg pill (I cut them in half) after being extremely tired the next day. I take it about an hour before I go to sleep.

The first thing I noticed was that it makes falling asleep much easier. It's always been a struggle for me to fall asleep (usually I have to lie there for an hour or more), but now I'm almost always out cold within 20 minutes.

I've also noticed that I feel much less tired during the day, which was my impetus for trying it in the first place. However, I'm not sure how much of this is a result of needing less sleep, and how much is a result of me falling asleep faster and thus sleeping for longer. But it's definitely noticeable.

Getting up in the morning is not noticeably easier.

No evidence that it's habit forming. I'm currently not taking it on weekends (I found myself needing a nap even after getting 10-11 hours of sleep), and I don't notice any additional difficulty going to bed beyond what I would normally have.

I seemed to have more intense dreams the first several days taking it, but they seem to have gone back to normal (or I've gotten used to them/don't remember them).

Overall it seems to work (for me at least) exactly as gwern described, and I'd happily recommend it to anyone else who has difficulty sleeping.

Replies from: alasarod, Jonathan_Graehl, Matt_Simpson, Liron, Jack
comment by alasarod · 2010-04-01T23:54:18.081Z · LW(p) · GW(p)

I took it for at least 8 weeks, primarily on weekdays. I found after a while that I was waking up at 4am, sometimes unable to get back to sleep. I had some night sweats too. May not be a normal response, but I found that if I take it in moderation it does not have these effects.

Replies from: None, gwern
comment by [deleted] · 2010-04-02T14:57:29.288Z · LW(p) · GW(p)

I wonder if you need to get back to sleep after waking up at 4 AM.

comment by gwern · 2012-06-08T20:35:36.916Z · LW(p) · GW(p)

I found that if I take it in moderation it does not have these effects.

So you are still using it after those 8 weeks?

comment by Jonathan_Graehl · 2010-04-01T19:29:46.594Z · LW(p) · GW(p)

The easily available product for me is a blend of 3mg melatonin/25mg theanine. 25mg is a heavy tea-drinker's dose, and I see no reason to consume theanine at all (even dividing the pills in half), so I haven't bought any.

Does anyone have some evidence recommending for/against taking theanine? In my view, the health benefits of tea drinking are negligible, and theanine is just one of many compounds in tea.

Replies from: JenniferRM, wedrifid
comment by JenniferRM · 2010-04-02T05:58:43.535Z · LW(p) · GW(p)

Theanine may be "one of many compounds found in tea" but, on the recommendation of an acquaintance I tried taking theanine itself as an experiment once (from memory maybe 100mg?). First I read up on it a little and it sounded reasonably safe and possibly beneficial and I drank green tea anyway so it seemed "cautiously acceptable" to see what it was like in isolation. Basically I was wondering if it helped me relax, focus, and/or learn better.

The result was a very dramatic manic high that left me incapable of intellectually directed mental focus (as opposed to focus on whatever crazy thing popped into my head and flittered away 30 minutes later) for something like 35 hours. Also, I couldn't sleep during this period.

In retrospect I found it to be somewhat scary and it re-confirmed my general impression of the bulk of "natural" supplements. Specifically, it confirmed my working theory that the lack of study and regulation of supplements leads to a market full of many options that range from worthless placebo to dangerously dramatic, with tragically few things in the happy middle ground of safe efficacy.

Melatonin is one of the few supplements that I don't put in this category, however in that case I use less than "the standard" 3mg dose. When I notice my sleep cycle drifting unacceptably I will spend a night or two taking 1.5mg of melatonin (using a pill cutter to chop 3mg pills in half) to help me fall asleep and then go back to autopilot. The basis for this regime is that my mother worked in a hospital setting and 1.5mg was what various doctors recommended/authorized for patients to help them sleep.

There was a melatonin fad in the late 1990's(?) where older people were taking melatonin as a "youth pill" because endogenous production declines with age. I know of no good studies supporting that use, but around that time was when the results about sleep came out, showing melatonin to be effective even for "jet lag" as a way to reset one's internal clock swiftly and safely.

Replies from: Kevin
comment by Kevin · 2010-04-02T06:15:44.636Z · LW(p) · GW(p)

That reaction sounds rare. Do you think 20 cups of tea would have triggered a similar reaction in you?

There is a huge variation based on dosage for all things you can ingest: food, drug, supplement, and "other". Check out the horrors of eating a whole bottle of nutmeg. http://www.erowid.org/experiences/subs/exp_Nutmeg.shtml

Replies from: gwern
comment by gwern · 2010-04-03T01:36:15.822Z · LW(p) · GW(p)

Do you think 20 cups of tea would have triggered a similar reaction in you?

Who knows? I doubt she'll ever find out. 20 cups of tea is a lot. 10 or 15 cups will send you to the bathroom every half hour, assuming your appetite doesn't decline so much that you can't bring yourself to drink any more.

comment by wedrifid · 2010-04-01T21:01:41.108Z · LW(p) · GW(p)

From memory it is a 'mostly harmless' way to reduce anxiety and promote relaxation. This is a relatively rare result given that things with an anxiolytic effect often produce dependence. Works mostly by increasing GABA in the brain, with a bit of a boost to dopamine too. Some people find it also helps them focus.

See also sulbutiamine, a synthetic analogue. It is used to promote endurance, particularly against the kind of residual lethargy that sometimes hangs around after depression. It also provides a stimulant effect while being relaxing, or at least not as agitating as stimulants can tend to be.

comment by Matt_Simpson · 2010-04-02T18:37:36.678Z · LW(p) · GW(p)

I've been trying it as well for ~2 months (with some gaps).

Normally I have trouble falling asleep, but have no problem staying asleep, so the main reason I take melatonin is to help fall asleep.

Currently, I take 2 5mg pills. Taking 1 doesn't have a very noticeable effect on my ability to fall asleep, but 2 seems to do the trick. However, I have to be sure that I give myself 7-8 hours for sleep, otherwise getting up is more difficult and I may be very groggy the next day. This can be problematic because sometimes I just have to stay up slightly later doing homework and because I can't take the melatonin I end up barely getting any sleep at all.

I haven't noticed any habit forming effects, though some slight effects might be welcome if it helped me to remember to take the supplement every night ;)

edit: its actually two 3mg pills, not 5mg. I googled the brand walmart carries since that's where I bought mine from, and it said 5mg on the bottle. Now that I'm home, I see that my bottle is actually 3mg.

comment by Liron · 2010-04-02T00:12:21.995Z · LW(p) · GW(p)

I also tried it out after reading that LW post. At first it was fantastic at getting me to fall asleep within 30 minutes (I'm a good sleeper, it would only take me 30 minutes because I would be going to sleep not tired in order to wake up earlier) and I would wake up feeling alert.

Now, unfortunately, I wake up feeling the same as I would without it, and have basically stopped noticing its effects. The only time I take it is when I want to go to sleep and I'm not tired.

Also: During the initial 1-2 week period of effectiveness, I had intense and vivid and stressful dreams (or maybe I simply remembered my normal dreams better).

comment by Jack · 2010-04-02T00:30:42.502Z · LW(p) · GW(p)

Thanks. It would be really helpful if people talking about their experiences would describe the entirety of their psychostimulant usage, since how the drugs interact and whether or not other drugs can be replaced are important things to know about melatonin.

Replies from: None
comment by [deleted] · 2010-04-02T03:56:40.814Z · LW(p) · GW(p)

I am not on any other drugs or medication. The only thing that would qualify as a stimulant is caffeine - I have a coffee in the morning and a soda at lunch.

comment by PhilGoetz · 2010-04-01T21:40:57.650Z · LW(p) · GW(p)

I have a couple of problems with anthropic reasoning, specifically the kind that says it's likely we are near the middle of the distribution of humans.

First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it's a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.

ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.

Second, the reasoning requires changing my observation. My observation is, "I am the Xth human born." The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?
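
For concreteness, here is a minimal sketch of the update that the standard Doomsday argument performs; the birth rank, hypotheses, and priors below are illustrative assumptions only, not anyone's actual estimates:

```python
# Toy Doomsday-argument update: two hypotheses about the total number of
# humans who will ever be born, updated on one's own birth rank.
birth_rank = 60_000_000_000  # suppose you are roughly the 60-billionth human born

hypotheses = {
    "doom_soon": 100_000_000_000,      # 100 billion humans ever
    "doom_late": 100_000_000_000_000,  # 100 trillion humans ever
}
prior = {"doom_soon": 0.5, "doom_late": 0.5}

# Self-sampling assumption: given N humans ever, your rank is uniform on 1..N.
def likelihood(rank, total):
    return 1.0 / total if rank <= total else 0.0

unnormalized = {h: prior[h] * likelihood(birth_rank, n) for h, n in hypotheses.items()}
z = sum(unnormalized.values())
posterior = {h: p / z for h, p in unnormalized.items()}
print(posterior)  # "doom_soon" comes out ~1000 times more probable than "doom_late"
```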

Replies from: Jordan, Gavin
comment by Jordan · 2010-04-03T17:55:22.611Z · LW(p) · GW(p)

To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?

Because we don't care about the probability of being a particular individual, we care about the probability of being in a certain class (namely the class of people born late enough in history, which is characterized exactly by one minus "the probability that I was human number N, where N is some number from 1 to X").

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-05T15:17:01.427Z · LW(p) · GW(p)

But if you turn it around, and say "where N is some number from X to the total number of humans ever born", you get different results. And if you say "where N is within 1/10th of all humans ever of X", you also get different results.

Replies from: Jordan
comment by Jordan · 2010-04-07T07:39:27.022Z · LW(p) · GW(p)

And if you say "where N is within 1/10th of all humans ever of X", you also get different results.

This is a different class, so yes, you get a different probability for belonging to it. But you likewise get a different probability that you'll see a doomsday conditioning on belonging to that class.

Consider class A, the last 10% of all people to live, and class B, the last 20%. Clearly there's a greater chance I belong to class B. But class B has a lower expectation for observing doomsday. There's a lower chance of being in a class with a higher chance of seeing doomsday, and a higher chance of being in a class with a lower chance of seeing doomsday.

What's wrong with this? I don't see any problem with the freedom of choice for our class.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-22T00:49:35.709Z · LW(p) · GW(p)

Both your examples still run from your current position to the end of all humans. What I said was that you get different results if you take one decile from your position, rather than going all the way to the end. There's no reason to do one rather than the other.

Replies from: Jordan
comment by Jordan · 2010-04-30T18:21:44.420Z · LW(p) · GW(p)

P(Observing doomsday) = P(Being in some class of people) * P(Observing doomsday | you belong to the class of people)

You get a different probability of belonging to each of those classes, but the conditional probabilities of observing doomsday given that you belong to them are also different. I'm not convinced that these differences don't balance out when you multiply the two probabilities together. Can you show me a calculation where you actually get two different values for your likelihood of seeing doomsday?

Replies from: thomblake
comment by thomblake · 2010-04-30T18:34:51.937Z · LW(p) · GW(p)

Maybe I'm misreading this, but it looks like you're missing a term...

You said: P(O) = P(B) * P(O|B)

Bayes's theorem: P(O) P(B|O) = P(B) P(O|B)

ne?

Replies from: rhollerith_dot_com, Jordan
comment by RHollerith (rhollerith_dot_com) · 2010-04-30T20:48:04.428Z · LW(p) · GW(p)

[Jordan] said: P(O) = P(B) * P(O|B)

Bayes's theorem: P(O) P(B|O) = P(B) P(O|B)

I agree that Jordan's equation needs to be adjusted (corrected), but I humbly suggest that in this context, it is better to adjust it to the product rule:

P(O and B) = P(B) * P(O|B).

ADDED. Yeah, minor point.
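
As a quick illustration of why the extra term matters, here is a toy numeric check; the die and the events are chosen purely for illustration:

```python
# One fair six-sided die.  B = "roll is <= 4",  O = "roll is even".
p_B         = 4 / 6                # 2/3
p_O         = 3 / 6                # 1/2
p_O_and_B   = 2 / 6                # rolls {2, 4}
p_O_given_B = p_O_and_B / p_B      # 1/2
p_B_given_O = p_O_and_B / p_O      # 2/3

print(p_B * p_O_given_B)           # 1/3 = P(O and B), the product rule
print(p_O * p_B_given_O)           # 1/3 as well, Bayes's theorem rearranged
print(p_O == p_B * p_O_given_B)    # False: the original equation drops P(B|O)
```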

comment by Jordan · 2010-05-04T03:41:53.262Z · LW(p) · GW(p)

Yes, correct. I missed that. For the standard Doomsday Argument P(B|O) is probably 1, so it can be excluded, but for alternative classes of people this isn't so.

comment by Gavin · 2010-04-03T17:39:17.524Z · LW(p) · GW(p)

The real problem with anthropic reasoning is that it's just a default starting point. We are tricked because it seems very powerful in contrived thought experiments in which no other evidence is available.

In the real world, in which there is a wealth of evidence available, it's just a reality check saying "most things don't last forever."

In real world situations, it's also very easy to get into a game of reference class tennis.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-05T15:23:35.293Z · LW(p) · GW(p)

I read the linked-to comment, but still don't know what reference class tennis is.

comment by wnoise · 2010-04-01T17:05:49.014Z · LW(p) · GW(p)

Some fantastic singularity-related jokes here:

http://crisper.livejournal.com/242730.html

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-01T17:21:36.788Z · LW(p) · GW(p)

Voted up for having jokes with cautionary power, and not just amusement value.

comment by JamesAndrix · 2010-04-02T05:28:19.619Z · LW(p) · GW(p)

http://www4.gsb.columbia.edu/ideasatwork/feature/735403/Powerful+Lies

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars — the leaders — resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars — the subordinates — showed the usual signs of stress and slower reaction times. “Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal,” Carney explains.

comment by Richard_Kennaway · 2010-04-07T19:39:04.986Z · LW(p) · GW(p)

A couple of articles on the benefits of believing in free will:

Vohs and Schooler, "The Value of Believing in Free Will"

Baumeister et al., "Prosocial Benefits of Feeling Free"

The gist of both is that groups of people experimentally exposed to statements in favour of either free will or determinism[1] acted, on average, more ethically after the free will statements than the determinism statements.

References from a Sci. Am. article.

[1] Cough.

ETA: This is also relevant.

Replies from: Jack
comment by Jack · 2010-04-07T19:46:51.676Z · LW(p) · GW(p)

Cool. Since a handful of studies suggest a narrow majority believe moral responsibility and determinism to be incompatible, this shouldn't actually be that surprising. I want to know how people act after being exposed to statements in favor of compatibilism.

comment by Wei Dai (Wei_Dai) · 2010-04-06T11:16:31.941Z · LW(p) · GW(p)

I've written a reply to Bayesian Flame, one of cousin_it's posts from last year. It's titled Frequentist Magic vs. Bayesian Magic. I'd appreciate some review and comments before I post it here. Mainly I'm concerned about whether I've correctly captured the spirit of frequentism, and whether I've treated it fairly.

BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL without it showing up in recent posts, so I wouldn't have to post a draft elsewhere to get feedback before I officially publish it.

Replies from: Vladimir_Nesov, JGWeissman, JGWeissman, Steve_Rayhawk, Morendil
comment by Vladimir_Nesov · 2010-04-06T11:32:08.306Z · LW(p) · GW(p)

Why does the universe that we live in look like a giant computer? What about uncomputable physics?

Consider "syntactic preference" as an order on agent's strategies (externally observable possible behaviors, but in mathematical sense, independently on what we can actually arrange to observe), where the agent is software running on an ordinary computer. This is "ontological boxing", a way of abstracting away any unknown physics. Then, this syntactic order can be given interpretation, as in logic/model theory, for example by placing the "agent program" in environment of all possible "world programs", and restating the order on possible agent's strategies in terms of possible outcomes for the world programs (as an order on sets of outcomes for all world programs), depending on the agent.

This way, we first factor out the real world from the problem, leaving only the syntactic backbone of preference, and then reintroduce a controllable version of the world, in the form of any convenient mathematical structure, an interpretation of syntactic preference. The question of whether the model world is "actually the real world", and whether it reflects all possible features of the real world, is sidestepped.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-04-06T12:59:35.211Z · LW(p) · GW(p)

Thanks (and upvoted) for this explanation of your current approach. I think it's definitely worth exploring, but I currently see at least two major problems.

The first is that my preferences seem to have a logical dependency on the ultimate nature of reality. For example, I currently think reality is just "all possible mathematical structures", but I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly. What would happen if you tried to use your idea to extract my preferences before I resolve that question?

The second is that I don't see how you plan to differentiate, within "syntactic preference", between those that are true preferences and those that are caused by computational limitations and/or hardware/software errors. Internally, the agent is computing the optimal strategy (as best it can) from a preference that's stated in terms of "the real world" and maybe also in terms of subjective anticipation. If we could somehow translate those preferences directly into preferences on mathematical structures, we would be able to bypass those computational limitations and errors without having to single them out.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-06T15:35:50.541Z · LW(p) · GW(p)

The first is that my preferences seem to have a logical dependency on the ultimate nature of reality.

An important principle of FAI design to remember here is "be lazy!". For any problem that people would want to solve, where possible, FAI design should redirect that problem to FAI, instead of actually solving it in order to construct a FAI.

Here, you, as a human, may be interested in "nature of reality", but this is not a problem to be solved before the construction of FAI. Instead, the FAI should pursue this problem in the same sense you would.

Syntactic preference is meant to capture this sameness of pursuits, without understanding of what these pursuits are about. Instead of wanting to do the same thing with the world as you would want to, the FAI having the same syntactic preference wants to perform the same actions as you would want to. The difference is that syntactic preference refers to actions (I/O), not to the world. But the outcome is exactly the same, if you manage to represent your preference in terms of your I/O.

I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly

You may still know the process of discovery that you want to follow while doing what you call getting to know your own preference. That process of discovery gives the definition of preference. We don't need to actually compute preference in some predefined format in order to solve the conceptual problem of defining preference. We only need to define a process that determines preference.

The second is that I don't see how you plan to differentiate, within "syntactic preference", between those that are true preferences and those that are caused by computational limitations and/or hardware/software errors.

This issue is actually the last conceptual milestone I've reached on this problem, just a few days ago. The trouble is how the agent would reason about the possibility of corruption of its own hardware. The answer is that human preference is to a large extent concerned with consequentialist reasoning about the world, so human preference can be interpreted as modeling the environment, including the agent's hardware. This is an informal statement, referring to the real world, but the behavior supporting this statement is also determined by formal syntactic preference that doesn't refer to the real world. Thus, just mathematically implementing human preference is enough to cause the agent to worry about how its hardware is doing (it isn't in any sense formally defined as its own hardware, but what happens in the agent's formal mind can be interpreted as recognizing the hardware's instrumental utility). In particular, this solves the issues of the possible morally harmful impact of the FAI's computation (e.g. simulating tortured people and then deleting them from memory, etc.), and of upgrading the FAI beyond the initial hardware (so that it can safely discard the old hardware).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-04-06T22:05:35.803Z · LW(p) · GW(p)

Once we implement this kind of FAI, how will we be better off than we are today? It seems like the FAI will have just built exact simulations of us inside itself (who, in order to work out their preferences, will build another FAI, and so on). I'm probably missing something important in your ideas, but it currently seems a lot like passing the recursive buck.

ETA: I'll keep trying to figure out what piece of the puzzle I might be missing. In the meantime, feel free to take the option of writing up your ideas systematically as a post instead of continuing this discussion (which doesn't seem to be followed by many people anyway).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-06T22:40:41.375Z · LW(p) · GW(p)

FAI doesn't do what you do; it optimizes its strategy according to preference. It's more able than a human to form better strategies according to a given preference, and even failing that it still has to be able to avoid value drift (as a minimum requirement).

Preference is never seen completely; there are always loads of logical uncertainty about it. The point of creating a FAI is in fixing the preference so that it stops drifting, so that the problem that is being solved is held fixed, even though solving it will take the rest of eternity; and in creating a competitive preference-optimizing agent that ensures the preference fares OK against possible threats, including different-preference agents or value-drifted humanity.

Preference isn't defined by an agent's strategy, so copying a human, without some kind of self-reflection that I don't understand, is pretty pointless. Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where you see the regress in the process of defining preference.

A FAI is not built without an exact and complete definition of preference. The uncertainty about preference can only be logical, in what it means/implies. (At least, when we are talking about syntactic preference, where the rest of the world is necessarily screened off.)

Replies from: andreas
comment by andreas · 2010-04-07T01:22:45.069Z · LW(p) · GW(p)

Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where you see the regress in the process of defining preference.

Reading your previous post in this thread, I felt like I was missing something and I could have asked the question Wei Dai asked ("Once we implement this kind of FAI, how will we be better off than we are today?"). You did not explicitly describe a way of extracting preference from a human, but phrases like "if you manage to represent your preference in terms of your I/O" made it seem like capturing strategy was what you had in mind.

I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware). You have not said anything about what kind of static analysis would take you from an agent's s̶t̶r̶a̶t̶e̶g̶y̶ program to an agent's preference.

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2010-04-22T10:58:22.560Z · LW(p) · GW(p)

After reading Nesov's latest posts on the subject, I think I better understand what he is talking about now. But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into.

You [Nesov] have not said anything about what kind of static analysis would take you from an agent's program to an agent's [syntactic] preference.

Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-22T14:04:21.779Z · LW(p) · GW(p)

But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into.

What other approaches do you refer to? This is just the direction my own research has taken. I'm not confident it will lead anywhere, but it's the best road I know about.

Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

I have some ideas, though too vague to usefully share (I wrote about a related idea on the SIAI decision theory list, replying to Drescher's bounded Newcomb variant, where a dependence on strategy is restored from a constant syntactic expression in terms of source code). For "semantic preference", we have the ontology problem, which is a complete show-stopper. (Though as I wrote before, interpretations of syntactic preference in terms of formal "possible worlds" -- now having nothing to do with the "real world" -- are a useful tool, and it's the topic of the next blog post.)

At this point, syntactic preference (1) solves the ontology problem, (2) gives focus to the investigation of what kind of mathematical structure could represent preference (strategy is a well-understood mathematical structure, and syntactic preference is something that allows computing a strategy, with better strategies resulting from more computation), and (3) gives a more technical formulation of the preference extraction problem, so that we can think about it more clearly. I don't know of another effort towards clarifying/developing preference theory (that reaches even this meager level of clarity).

If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

Returning to this point, there are two show-stopping problems: first, as I pointed out above, there is the ontology problem: even if humans were able to write out their preference, the ontology problem makes the product of such an effort rather useless; second, we do know that we can't write out our preference manually. Figuring out an algorithmic trick for extracting it from human minds automatically is not out of the question, hence worth pursuing.

P.S. These are important questions, and I welcome this kind of discussion about general sanity of what I'm doing or claiming; I only saw this comment because I'm subscribed to your LW comments.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-04-25T16:37:08.156Z · LW(p) · GW(p)

Why do you consider the ontology problem to be a complete show-stopper? It seems to me there are at least two other approaches to it that we can take:

  1. We human beings seem to manage to translate our preferences from one ontology to another when necessary, so try to figure out how we do that, and program it into the FAI.

  2. Work out what the true, correct ontology is, then translate our preferences into that ontology. It seems that we already have a good candidate of this in the form of "all mathematical structures". Formalizing that notion seems really hard, but why should it be impossible?

You claim that syntactic preference solves the ontology problem, but I have even fewer ideas about how to extract the syntactic preference of arbitrary programs. You mention that you do have some vague ideas, so I guess I'll just have to be patient and let you work them out.

second, we do know that we can't write out our preference manually.

How do we know that? It's not clear to me that there is any more evidence for "we can't write out our preferences manually", than for "we can't build an artificial general intelligence manually".

I only saw this comment because I'm subscribed to your LW comments.

I had a hunch that might be the case. :)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-26T14:20:58.452Z · LW(p) · GW(p)

Why do you consider the ontology problem to be a complete show-stopper? It seems to me there are at least two other approaches to it that we can take:

By "show-stopper" I simply mean that we absolutely have to solve it in some way. Syntactic preference is one way, what you suggest could conceivably be another.

You claim that syntactic preference solves the ontology problem, but I have even fewer ideas about how to extract the syntactic preference of arbitrary programs.

An advantage I see with syntactic preference is that it's at least more or less clear what we are working with: formal programs and strategies. This opens up the whole palette of possible approaches to try on the remaining problems. With the "all mathematical structures" thing, we still don't know what we are supposed to talk about; as of now there is no way forward even at that step. At least syntactic preference allows us to take one step further, to firmer ground, even though admittedly it's unclear what to do next.

second, we do know that we can't write out our preference manually.

How do we know that? It's not clear to me that there is any more evidence for "we can't write out our preferences manually", than for "we can't build an artificial general intelligence manually".

I mean the "complexity of value"/"value is fragile" thesis. It seems to me quite convincing, and from the opposite direction, I have the "preference is detailed" conjecture resulting from the nature of preference in general. For "is it possible to build AI", we don't have similarly convincing arguments (and really, it's an unrelated claim that only contributes connotation of error in judgment, without giving an analogy in the method of arriving at that judgment).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-05-01T03:17:00.483Z · LW(p) · GW(p)

I mean the "complexity of value"/"value is fragile" thesis.

I agree with "complexity of value" in the sense that human preference, as a mathematical object, has high information content. But I don't see a convincing argument from this premise to the conclusion that the best course of action for us to take, in the sense of maximizing our values under the constraints that we're likely to face, involves automated extraction of preferences, instead of writing them down manually.

Consider the counter-example of someone who has the full complexity of human values, but would be willing to give up all of their other goals to fill the universe with orgasmium, if that choice were available. Such an agent could "win" by building a superintelligence with just that one value. How do we know, at this point, that our values are not like that?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-03T23:15:53.241Z · LW(p) · GW(p)

Whatever the case is with how acceptable the simplified values are, automated extraction of preference seems to be the only way to actually knowably win, rather than strike a compromise, which is what simplified preference is suggested to be. We must decide from the information we have; how would you come to know that a particular simplified preference definition is any good? I don't see a way forward without having a more precise moral machine than a human first (but then, we won't need to consider simplified preference).

comment by Vladimir_Nesov · 2010-04-07T08:13:37.495Z · LW(p) · GW(p)

I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware).

Correct. Note that "strategy" is a pretty standard term, while "I/O map" sounds ambiguous, though it emphasizes that everything except the behavior at I/O is disregarded.

You have not said anything about what kind of static analysis would take you from an agent's strategy to an agent's preference.

An agent is more than its strategy: strategy is only external behavior, the normal form of the algorithm implemented in the agent. The same strategy can be implemented by many different programs. I strongly suspect that it takes more than a strategy to define preference, that introspective properties are important (how the behavior is computed, as opposed to just what the resulting behavior is). It is sufficient for preference, when it is defined, to talk about strategies and disregard how they could be computed; but to define (extract) a preference, a single strategy may be insufficient; it may be necessary to look at how the reference agent (e.g. a human) works on the inside. Besides, the agent is never given as its strategy; it is given as source code that normalizes to that strategy, and computing the strategy may be tough (and pointless).

comment by JGWeissman · 2010-04-06T16:51:44.732Z · LW(p) · GW(p)

You can do better than the frequentist approach without using the "magic" universal prior. You can just use a prior that represents initial ignorance of the frequency at which the machine produces head-biased versus tail-biased coins (a uniform density: dP(f) = 1·df). If you want to look for repeating patterns, you can assign probability (1/2)(1/2^n) to the theory that the machine produces each type of coin at a frequency depending on the last n coins it produced. This requires treating a probability as a strength of belief, and not the frequency of anything, which is what (as I understand it) frequentists are not willing to do.

Note that the universal prior, if you can pull it off, is still better than what I described. The repeating-pattern-seeking prior will not notice, for example, if the machine makes head-biased coins on prime-numbered trials but tail-biased coins on composite-numbered trials. This is because it implicitly assigns probability 0 to that type of machine, and a probability of 0 takes infinite evidence to update.
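
A minimal sketch of the uniform-ignorance prior dP(f) = df over the frequency f at which the machine produces head-biased coins, updated on the types of coins observed so far (the counts are illustrative). With a Beta(1, 1) prior, the posterior after observing h head-biased and t tail-biased coins is Beta(1 + h, 1 + t):

```python
# Posterior mean of f under a uniform prior on [0, 1] (Laplace's rule of succession).
def posterior_mean_f(head_biased_count, tail_biased_count):
    return (1 + head_biased_count) / (2 + head_biased_count + tail_biased_count)

print(posterior_mean_f(0, 0))  # 0.5: total initial ignorance
print(posterior_mean_f(8, 2))  # 0.75: belief shifts with the data
```

As noted above, this prior (unlike the universal prior) still assigns probability 0 to pattern-dependent machines such as the prime/composite one.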

comment by JGWeissman · 2010-04-06T16:00:08.192Z · LW(p) · GW(p)

BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL without it showing up in recent posts, so I wouldn't have to post a draft elsewhere to get feedback before I officially publish it.

I second this feature request.

ETA: I did not notice earlier Steve Rayhawk made the same comment.

comment by Steve_Rayhawk · 2010-04-06T11:53:01.410Z · LW(p) · GW(p)

I wish there were a "public drafts" feature on LessWrong

Seconded. See also JenniferRM on editorial-level versus object-level comments.

comment by Morendil · 2010-04-06T11:38:33.652Z · LW(p) · GW(p)

Agreed. I'll be investigating what it would take to implement that.

(Edit: interesting; draft folders are apparently private sub-reddits created when a user registers and admin'ed by that user.)

comment by Scott Alexander (Yvain) · 2010-04-01T20:11:37.920Z · LW(p) · GW(p)

The London meet is going ahead. Unless someone proposes a different time, or taw's old meetings are still going on and I just didn't know about them, it will be:

5th View cafe, on top of Waterstone's bookstore near Piccadilly Circus, Sunday, April 4 at 4PM

Roko, HumanFlesh, I've got your numbers and am hoping you'll attend and rally as many Londoners as you can.

EDIT: Sorry, Sunday, not Monday.

Replies from: ciphergoth, Roko, taw, Richard_Kennaway, Roko
comment by Paul Crowley (ciphergoth) · 2010-04-02T09:56:28.720Z · LW(p) · GW(p)

Found this entirely by chance - do a top level post?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-02T22:57:33.053Z · LW(p) · GW(p)

Do a top-level post.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-04-03T12:45:59.984Z · LW(p) · GW(p)

Done. I hesitated as I wasn't in any sense the organiser of this event, just someone who had heard about it, but better me than no-one!

comment by Roko · 2010-04-02T15:50:46.553Z · LW(p) · GW(p)

Hmm, that's also Easter Sunday, so I have commitments with family. I would love to meet you in person, Yvain, but it looks like I won't make this.

comment by taw · 2010-04-02T12:19:40.082Z · LW(p) · GW(p)

I'll try to come.

comment by Richard_Kennaway · 2010-04-02T09:09:01.154Z · LW(p) · GW(p)

I hope to get to this, as I'll be not too far away this weekend.

comment by Roko · 2010-04-01T20:26:19.731Z · LW(p) · GW(p)

I think I can't afford to come...

comment by PeerInfinity · 2010-04-01T17:31:17.465Z · LW(p) · GW(p)

I recently found something that may be of interest to LW readers:

This post at the Lifeboat Foundation blog announces two tools for testing your "Risk Intelligence":

The Risk Intelligence Game, which consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. Then it calculates your risk intelligence quotient (RQ) on the basis of your estimates.

The Prediction Game, which provides you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.

Replies from: Will_Newsome, gimpf
comment by Will_Newsome · 2010-04-01T21:56:33.407Z · LW(p) · GW(p)

An annoying thing about the RQ test (rot13'd):

Jura V gbbx gur ED grfg gurer jnf n flfgrzngvp ovnf gbjneqf jung jbhyq pbzzbayl or pnyyrq vagrerfgvat snpgf orvat zber cebonoyr naq zhaqnar/obevat snpgf orvat yrff cebonoyr. fgrira0461 nyfb abgvprq guvf. Guvf jnf nobhg 1 zbagu ntb. ebg13'q fb nf abg gb shegure ovnf crbcyrf' erfhygf.

comment by gimpf · 2010-04-03T18:54:45.704Z · LW(p) · GW(p)

I did not check the test in detail, but I somehow question the validity of the test: As presented in their summary, would not just total risk aversion give you a perfect score? 50% on everything, except for the 0 and 100 entries (where 0 is something like "hey, I do play an instrument, and I know this is total crap, except if I were hallucinating right now, in which case..."). It seems like a test which is too easy to game.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-04-07T03:35:34.946Z · LW(p) · GW(p)

I remember seeing an LW post about why it's cheating to always guess 50%, but I haven't found the link to that post yet... I think the basic idea was that you could technically be perfectly calibrated by always guessing 50%, but that's like always claiming that you don't know anything at all. It also means that you're never updating your probabilities. It also makes you easily exploitable, since you'll always assume that your probability of winning any gamble is 50%. Oh, and then there are the times when you'll give different probabilities for the same event, if the question is worded in different ways.
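
A small sketch of the distinction, under made-up quiz data: always answering 50% is perfectly calibrated but contributes no information, which a proper scoring rule such as the log score makes visible:

```python
import math

# Hypothetical quiz outcomes: 1 = the statement was true, 0 = false.
outcomes  = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
always_50 = [0.5] * len(outcomes)
informed  = [0.9, 0.8, 0.2, 0.7, 0.3, 0.1, 0.8, 0.9, 0.4, 0.6]  # made-up confident guesses

def log_score(probs, outcomes):
    # Higher (closer to 0) is better.
    return sum(math.log(p if o == 1 else 1 - p) for p, o in zip(probs, outcomes))

print(log_score(always_50, outcomes))  # about -6.93, regardless of the outcomes
print(log_score(informed, outcomes))   # about -2.72 here: better, because the guesses track reality
```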

Replies from: saturn, gimpf, RobinZ
comment by saturn · 2010-04-07T09:36:04.189Z · LW(p) · GW(p)

It also makes you easily exploitable, since you'll always assume that your probability of winning any gamble is 50%.

Your probability of winning any two-sided bet is 50%, as long as you pick which side of the bet you take at random. A "rational ignoramus" who always had minimum confidence wouldn't accept any arrangement where the opponent got to pick which side of the bet to take.
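
A quick sketch of the arithmetic behind that claim, assuming even-odds bets: if you choose which side to take with a fair coin flip, your chance of winning is 1/2 regardless of the event's true probability p:

```python
def win_probability(p):
    # P(win) = P(pick "for") * p + P(pick "against") * (1 - p)
    return 0.5 * p + 0.5 * (1 - p)

print([win_probability(p) for p in (0.1, 0.5, 0.9)])  # [0.5, 0.5, 0.5]
```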

comment by gimpf · 2010-04-07T05:27:42.807Z · LW(p) · GW(p)

Please note that I explicitly referred to the test, not to reality.

comment by RobinZ · 2010-04-07T03:56:53.991Z · LW(p) · GW(p)

Oh, and then there are the times when you'll give different probabilities for the same event, if the question is worded in different ways.

That implies a very easy Dutch-book:

  1. Create a lottery with three possible outcomes (a), (b), and (c) - for example, (a) 1, (b) 2, 3, or 4, and (c) 5 or 6 on a six-sided die. (Note that the probabilities are not equal - I have no need of that stipulation.)

  2. Ask "what are the odds that (a) will happen?" In response to the proposed even-odds, bet against (a).

  3. Ask "what are the odds that (b) will happen?" In response to the proposed even-odds, bet against (b).

  4. Ask "what are the odds that (c) will happen?" In response to the proposed even-odds, bet against (c).

  5. Collect on two bets out of three, regardless of outcome.
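
A minimal sketch of the payoff arithmetic for this Dutch book, assuming the mark accepts each even-odds bet for one unit (the stake size is illustrative):

```python
# Outcome classes from the example: (a) = {1}, (b) = {2, 3, 4}, (c) = {5, 6}.
OUTCOME_CLASSES = {"a": {1}, "b": {2, 3, 4}, "c": {5, 6}}

def bookie_profit(die_roll):
    # The bookie bets one unit *against* each class at even odds.
    return sum(1 if die_roll not in faces else -1 for faces in OUTCOME_CLASSES.values())

# Exactly one class occurs on any roll, so the bookie wins 2 bets and loses 1: +1 every time.
print({roll: bookie_profit(roll) for roll in range(1, 7)})  # {1: 1, 2: 1, ..., 6: 1}
```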

comment by NancyLebovitz · 2010-04-19T13:11:29.598Z · LW(p) · GW(p)

Karma creep: It's pleasant to watch my karma going up, but I'm pretty sure some of it is for old comments, and I don't know of any convenient way to find out which ones.

If some of my old comments are getting positive interest, I'd like to revisit the topics and see if there's something I want to add. For that matter, if they're getting negative karma, there may be something I want to update.

Replies from: RobinZ
comment by RobinZ · 2010-04-19T14:25:37.604Z · LW(p) · GW(p)

The only way I know to track karma changes is having an old tab with my Recent Comments visible and comparing it to the new one. That captures a lot of the change (>90%), but not the old threads.

I would love to know how hard it would be to have a "Recent Karma Changes" feed.

comment by taw · 2010-04-04T02:15:07.814Z · LW(p) · GW(p)

Is there any evidence that Bruce Bueno de Mesquita is anything other than a total fraud?

Am I missing something here?

Replies from: gwern, ciphergoth, Matt_Simpson
comment by gwern · 2010-04-07T21:26:50.662Z · LW(p) · GW(p)

Well, his TED talk does make a number of specific testable predictions. They were registered on wrongtomorrow.com, but that site is down.

Replies from: taw
comment by taw · 2010-04-10T00:53:14.961Z · LW(p) · GW(p)

Here they are. These are 5 predictions all basically saying "Iran will not make a nuclear test by 2011", as far as their predictive content is concerned, which is not unlike predicting that "we will not use flying cars by 2011".

Replies from: gwern
comment by gwern · 2010-04-10T03:05:34.370Z · LW(p) · GW(p)

I don't think they're that vague and obvious.

  • No nukes was something of a surprise to many people when that NIE came out
  • The loss-of-Ahmadinejad-power prediction is nontrivial. I, and most others, I think, would have predicted an increase.
  • The no-one-endorsing-nukes 2011 prediction is also significant, if heavily correlated with Ahmadinejad losing some power.

Replies from: taw
comment by taw · 2010-04-10T11:13:36.256Z · LW(p) · GW(p)

He predicts "Ahmedinijad will lose influence and the mullahs will become slightly more influential", not loss of office - which is not testable.

All Iranian officials have claimed endlessly that their program is "civilian only" etc. - it would be a huge surprise if they made a sudden reversal.

If someone expected Iran to already have nukes, they have a serious prediction problem. The only people "expecting" that were the same ones who were expecting Saddam to have nukes.

comment by Paul Crowley (ciphergoth) · 2010-04-04T08:56:38.097Z · LW(p) · GW(p)

That review is a very worthwhile read - thanks for linking to it!

comment by Matt_Simpson · 2010-04-04T06:32:06.241Z · LW(p) · GW(p)

I've heard claims that his "general model of international conflict" has been independently tested by the CIA and some other organization to 90% accuracy, but haven't seen any details of any of these tests.

Replies from: taw
comment by taw · 2010-04-04T08:15:53.517Z · LW(p) · GW(p)

Oh, he gives plenty of such claims, but not a single one of them is independently verifiable. You cannot access any such report. This increases my estimate that he's a fraud, relative to his not making such claims in the first place.

Replies from: Douglas_Knight, CronoDAS
comment by Douglas_Knight · 2010-04-08T05:15:38.786Z · LW(p) · GW(p)

At the Amazon link you provide, BBdM gives the full citation for the CIA report, among others:

Stanley Feder, "Factions and Policon: New Ways to Analyze Politics," in H. Bradford Westerfield, ed. Inside CIA's Private World: Declassified Articles from the Agency's Internal journal, 1955-1992 (New Haven: Yale University Press, 1995)

It does not mention BBdM by name, but is about Policon, which I believe is the original name of his company.

I have not read the report and don't know if it supports him, but I think it's pretty common for people's lack of interest in such reports to create the illusion that they have been fabricated, so difficulty finding them on the web isn't much evidence.

ETA: the other articles he mentions: a follow-up by Feder (gated) and an academic review (ungated).

ETA: I have still not read the report, but I should say that the first page says exactly what he says it says: 90% accuracy, standard CIA methods also 90% accuracy, but his predictions are more precise.

comment by CronoDAS · 2010-04-04T21:23:43.471Z · LW(p) · GW(p)

You'd think that if he had some method that at least happened to get lucky once in a while, he'd find a way to say "Hey, look at this success I can show!" or something.

Allow me to make a prediction: There will be conflict in the Middle East. ;)

(And I'm not exactly going out on a limb here. I don't even have to say when; there's been conflict there for roughly the past four thousand years, and I don't think anything's going to change that for as long as people still live there.)

comment by beriukay · 2010-04-14T08:51:48.789Z · LW(p) · GW(p)

A recent study (hiding behind a paywall) indicates people overestimate their ability to remember and underestimate the usefulness of learning. More ammo for the sophisticated arguer and the honest enquirer alike.

Replies from: Risto_Saarelma, NancyLebovitz
comment by Risto_Saarelma · 2010-04-21T12:44:28.645Z · LW(p) · GW(p)

Available without the paywall from the author's home page.

comment by NancyLebovitz · 2010-04-14T08:55:53.647Z · LW(p) · GW(p)

It's also an argument in favor of using checklists.

comment by Kevin · 2010-04-04T22:01:01.206Z · LW(p) · GW(p)

US Government admits that multiple-time convicted felon Pfizer is too big to fail. http://www.cnn.com/2010/HEALTH/04/02/pfizer.bextra/index.html?hpt=Sbin

Did the corporate death penalty fit the crime(s)? Or, how can corporations be held accountable for their crimes when their structure makes them unpunishable?

Replies from: Amanojack
comment by Amanojack · 2010-04-05T11:27:34.210Z · LW(p) · GW(p)

The causes of "too big to fail" are:

  1. Corporate personhood laws make it harder to punish the actual people in charge.

  2. Problems in tort law (in the US) make it difficult to sue corporations for certain kinds of damages.

  3. A large government (territorial monopoly of jurisdiction) makes it more profitable for any sufficiently large company to use the state as a bludgeon against its competitors (lobbying, bribes, friends in high places) instead of competing directly on the market.

  4. Letting companies that waste resources go bankrupt causes short-term damage to the economy, but it is healthy in the long term because it allows more efficient companies to take over the tied-up talent and resources. Politicians care more about the short term than the long term.

  5. For pharmaceutical companies there is an additional embiggening factor. Testing for FDA drug approval costs millions of dollars, which constitutes a huge barrier to entry for smaller companies. Hence the large companies can grow larger with little competition. This is amplified by 1 and 2, and 3 suggests that most of the competition among Big Pharma is over legislators and regulators, not market competition.

Disclosure: I am a "common law" libertarian (I find all monopolies counterproductive, including state governments).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-05T13:39:48.179Z · LW(p) · GW(p)

I'd add trauma from the Great Depression (amplified by the Great Recession), which means that any loss of jobs sounds very bad, and (not related to the topic, but a corollary) anything which creates jobs can be made to sound good.

comment by Kevin · 2010-04-03T21:40:17.156Z · LW(p) · GW(p)

Applied rationality April Edition: convince someone with currently incurable cancer to sign up for cryonics: http://news.ycombinator.com/item?id=1239055

Hacker News rather than Reddit this time, which makes it a little easier.

Replies from: scotherns
comment by scotherns · 2010-04-14T10:21:36.367Z · LW(p) · GW(p)

I've been trying to do this since November for a close family member. So far the reaction has been fairly positive, but she has still not decided to go for it.

comment by Oscar_Cunningham · 2010-04-01T17:19:58.027Z · LW(p) · GW(p)

My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I'm still a vegetarian. Clearly I'm on shaky ground, since my beliefs weren't formed from evidence, but purely from nurture.

Interestingly, my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish); however, my rationalization for not eating meat is that it is the killing of animals that is wrong (generalising from the belief that killing humans is worse than mistreating them). Since eating meat is not necessary to live, it must therefore be as bad as hunting for fun, which is much more widely disapproved of. (I'm not a vegan, and I often eat sweets containing gelatine; if asked to explain this, I would rationalise that eating these things causes the death of many fewer animals than actually eating, like, steak.)

But having read all of Eliezer's posts, I now realise that I could have come up with that rationalisation even if eating meat were not wrong, and that I'm now in just as bad a position as a religious believer. I want a crisis of faith, but I have a problem... I don't know where to go back to. There's no objective basis for morality. I don't know what kind of evidence I should condition on (I don't know what would be different about the world if eating meat were good instead of bad). If a religious person realises they have no evidence, they should go back to their priors. Because god has a tiny prior, they should immediately stop believing. I don't know exactly what the prior on "killing animals is wrong" is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this. What should I do now?

Footnote: I probably don't have to say this, but I don't want arguments for or against vegetarianism, simply advice on how one should challenge one's own moral beliefs. I've used "eating meat" and "killing animals" interchangeably in my post, because I think that they are morally equivalent due to supply and demand.

Replies from: Bongo, Alicorn, None, cupholder, phaedrus, Kevin
comment by Bongo · 2010-04-01T18:23:25.495Z · LW(p) · GW(p)

I hope this isn't a vegetarianism argument, but remember that you have to rehabilitate both killing and cruelty to justify eating most meat, even if killing alone has held you back so far.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-04-01T18:51:00.525Z · LW(p) · GW(p)

That's an excellent point, and one I may not have spotted otherwise. Thank you.

comment by Alicorn · 2010-04-01T18:10:07.470Z · LW(p) · GW(p)

Do you want to eat meat?

Or do you just want to have a good reason for not wanting to eat meat?

It's... y'know... food. I don't have an ethical objection to peppermint but I don't eat it because I don't want to.

comment by [deleted] · 2010-04-02T14:41:59.963Z · LW(p) · GW(p)

.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-04-02T18:45:19.396Z · LW(p) · GW(p)

What is worse? Death, or a life of pain?

Is a state of nonexistence (death) truly a negative, or is it the most neutral of all states?

If Omega told me that the rest of my life would be more painful than it was pleasant, I would still choose to live. I think most others here would choose similarly (except in cases of extreme pain like torture).

Replies from: None
comment by [deleted] · 2010-04-02T19:47:45.629Z · LW(p) · GW(p)

.

Replies from: Jayson_Virissimo, Strange7
comment by Jayson_Virissimo · 2010-04-03T06:32:01.367Z · LW(p) · GW(p)

On grounds of utility, I believe that is irrational, choosing to live.

Even if my life would be painful on net, there are still projects I want to finish and work I want to do for others that would prevent me from choosing death. Valuing things such as these is no more irrational than valuing your own pleasure.

Perhaps our disagreement is over the connection between pain/pleasure and utility. I would prefer a world in which I was in pain but am able to complete certain projects to one in which I was in pleasure but unable to complete certain projects. In the economic sense of utility (rank in an ordinal preference function), my utility would be higher in the former world than the latter world (even though the former is more painful).

Replies from: Amanojack
comment by Amanojack · 2010-04-03T10:04:10.585Z · LW(p) · GW(p)

I think your disagreement is over time preference. Which path you choose now depends on how much you discount future pain versus present moral guilt or empathy considerations.

I would prefer a world in which I was in pain but am able to complete certain projects to one in which I was in pleasure but unable to complete certain projects.

In other words, you would make that choice now because that would make you feel best now. Of course (you project that) you would make the same choice at time T, for all T occurring between now and the completion of your projects.

This is known as having a high time preference. It might seem like a quintessential example of low time preference, because you get a big payoff if you can persist through to completing those projects. However, the initial assumption was that "the rest of my life would be more painful than it was pleasant," so ex hypothesi the payoff cannot possibly be big enough to balance out the pain.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-05T04:58:11.561Z · LW(p) · GW(p)

Pleasure and pain have little to do with it.

Replies from: Amanojack
comment by Amanojack · 2010-04-05T10:34:16.626Z · LW(p) · GW(p)

Thanks, I read the article, and I think everything in it is actually answered by my post above. For instance:

I wouldn't put myself into a holodeck even if I could take a pill to forget the fact afterward. That's simply not where I'm trying to steer the future.

He's confused about time structure here. He doesn't want to take the pill now, because that would have a dreadful effect on his happiness now. Whether we call it pleasure/pain, happiness/unhappiness or something else, there's no escaping it.

So my values are not strictly reducible to happiness: There are properties I value about the future that aren't reducible to activation levels in anyone's pleasure center;

Eliezer says his values are not reducible to happiness. Yet how unhappy (or painful) would it be for him right now to watch the happy-all-the-time pill slowly being inched toward his mouth, knowing he'll be made to swallow it? I suspect those would be the worst few moments of his life.

It's not that values are not reducible to happiness; it's that happiness has a time structure that our language usually ignores.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-05T20:13:14.620Z · LW(p) · GW(p)

What if you sneak up on him while he's sleeping and give him the happy-all-the-time injection before he knows you've done it? Then he wouldn't have that moment of unhappiness.

Replies from: Amanojack
comment by Amanojack · 2010-04-05T20:55:35.593Z · LW(p) · GW(p)

Yes, and he would never care about it as long as he never entertained the prospect. I don't think there is a definition of "value" that does everything he needs it to while at the same time not referring to happiness/unhappiness or similar. Charity requires that I continue to await such a definition, but I am skeptical.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-05T20:59:33.242Z · LW(p) · GW(p)

Preference satisfaction != happiness. How many times do I have to make this point?

Which would you prefer to have happen, without any forewarning: 1) I wirehead you, and destroy the rest of the world, or 2) I torture you for a while, and leave the rest of the world alone.

If you don't pick 2, you're an asshole. :P

Replies from: Amanojack
comment by Amanojack · 2010-04-05T22:01:34.019Z · LW(p) · GW(p)

Preference satisfaction != happiness. How many times do I have to make this point?

Indeed, we cannot say categorically that "preference satisfaction = happiness," but my point has been that such a statement is not very elucidating unless it takes time structure into account:

Satisfaction of my current preferences right now = happiness right now (this is tautological, unless we are including dopamine-based "wanting but not liking" in the definition of preferences - I can account for the case if you'd like, but it will make my response a lot longer)

Knowledge that my current preferences will be satisfied later = happiness right now (but yes, this does not necessarily equal happiness later)

ETA: In case it's not clear, you still haven't shown that values are not just another way of looking at happiness/unhappiness. "Preference" is just another word for value - or if not, please define. I didn't answer your other question simply because neither answer reveals anything about either of our positions.

However, your implying that someone who chooses option 1 should feel like an asshole underscores my point: If someone chose 2 over 1 it'd be because choosing 1 would be painful (insofar as seeing oneself as an asshole is painful ;-). (By the way, you can't get around this by adding "without warning" because the fact that you can make a choice about what is coming implies you believe it's coming (even if you don't know when), or if you don't believe it's coming then it's a meaningless hypothetical.)

Disclosure: I didn't wait for Shadow (and I felt like an asshole, although that was afterward).

Replies from: CronoDAS
comment by CronoDAS · 2010-04-06T14:17:04.658Z · LW(p) · GW(p)

Satisfaction of my current preferences right now = happiness right now (this is tautological, unless we are including dopamine-based "wanting but not liking" in the definition of preferences - I can account for the case if you'd like, but it will make my response a lot longer)

It isn't tautological. In fact, it's been my experience that this is simply not true. There seem to be times that I prefer to wallow in self-pity rather than feel happiness. Anger also seems to preclude happiness in the moment, but there are also times that I prefer to be angry. I could probably also make you happy and leave many of your preferences unsatisfied by injecting you with heroin.

Somehow I think we're just talking past each other...

Replies from: Amanojack
comment by Amanojack · 2010-04-06T15:27:51.134Z · LW(p) · GW(p)

There seem to be times that I prefer to wallow in self-pity rather than feel happiness.

I've been there too, but I will try to show below that that is just having an (extremely) high time preference. Don't you get some tiny instantaneous satisfaction from choosing at any given moment to continue wallowing in self-pity? I do. It's similar to the guilty pleasure of horking down a tub of Ben&Jerry's, but on an even shorter timescale.

Here are my assumptions:

  1. If humans had no foresight, they would simply choose what gave them immediate pleasure or relief from pain. The fact that we have instincts doesn't complicate this, because an instinct is simply an urge, which is just another way of saying, "going against it is more painful than going with it."

  2. But we do have foresight, and our minds are (almost) constantly hounding us to consider the effects of present decisions on future expected pleasure. This is of course vital to our continued survival. However, we can push those future-oriented thoughts out of our minds to some extent (some people excel at this). Certain states - anger probably foremost among them - can effectively shut off or decrease the hounding about the future as well. Probably no one has a time preference of "IMMEDIATELY" all the time, and having a low (long) time preference is usually associated with good mental health and self-actualization.

(Note that this "emotional time preference" is relative: perhaps we weight the pleasure experienced in the next second very highly versus the coming year, or perhaps the reverse; or perhaps it's the next hour versus the next few days, etc.)

So what we call values are generally things we are willing to defer to our brain's "future hounding" about.

Example: A man chances upon some ice cream, but he is lactose intolerant. Let's say he believes the pain of the forthcoming upset stomach will exceed the pleasure of the eating experience. If his mind is hounding him hard enough about the pain of an upset stomach (which will occur hours later), it will override the prospects of a few minutes pleasure starting now. If he is able to push those thoughts out of his mind, he can enjoy the ice cream now, against his "better judgment" (this is akrasia in a nutshell).

Now I think we have the groundwork to explain in an elucidating way why the question you posed can be answered in terms of pleasure/pain only.

1) I wirehead you, and destroy the rest of the world

I'm going to make the decision right now, and I contend that I am going to make that decision purely based on pleasure vs. pain felt right now. Factors include: empathy, guilt, and future prospects of happiness (let's just assume being wireheaded really is purely pleasurable/happy).

If I reject (1), it will be because

  • I believe I would immediately feel very sorry for the people (because of "future hounding"), and/or
  • I believe I would immediately feel very guilty for my decision, and/or
  • I relatively discount the future prospects of pleasure.

If I accept (1), it will be because of a different weighting of the above factors, but that weighting is happening right now, not at some future time. Now let me repeat what I wrote above: The fact that we have instincts (empathy, guilt, even the instinct to create and adhere to "values") doesn't complicate this, because an instinct is simply an urge, which is just another way of saying, "going against it is more painful than going with it."

EDIT: typos and clarification

comment by Strange7 · 2010-04-02T19:53:46.730Z · LW(p) · GW(p)

When do you think suicide would be the rational option?

Replies from: jimrandomh, None
comment by jimrandomh · 2010-04-02T20:20:46.234Z · LW(p) · GW(p)

When do you think suicide would be the rational option?

When doing so causes a sufficiently large benefit for others (ie, 'a suicide mission', as opposed to mere suicide). Or when you have already experienced enough danger (that is, situations likely to have killed you) to overcome your prior and make you conclude that you have quantum immortality with high enough confidence.

comment by [deleted] · 2010-04-02T20:04:18.609Z · LW(p) · GW(p)

.

comment by cupholder · 2010-04-01T19:21:07.688Z · LW(p) · GW(p)

I don't know exactly what the prior on "killing animals is wrong" is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this.

Is it meaningful to put a probability on 'killing animals is wrong' and absolute moral statements like that? Feels like trying to put a probability on 'abortion is wrong' or 'gun control is wrong' or '(insert your pet issue here) is wrong/right' or...

Replies from: Kevin
comment by Kevin · 2010-04-02T05:51:28.274Z · LW(p) · GW(p)

No, it's not meaningful to put a prior probability on it, unless you seriously think something like absolute morality exists. Having said that, the prior for "killing animals is wrong" is still higher than the prior for the God of Abraham existing.

Replies from: Vladimir_Nesov, Nick_Tarleton
comment by Vladimir_Nesov · 2010-04-02T18:15:52.556Z · LW(p) · GW(p)

Note that Bayesian probability is not absolute, so it's not appropriate to demand absolute morality in order to put probabilities on moral claims. You just need a meaningful (subjective) concept of morality. This holds for any concept one can consider: any statement can be assigned a subjective probability, and morality isn't an exceptional special case.

comment by Nick_Tarleton · 2010-04-02T16:22:35.405Z · LW(p) · GW(p)

If morality is a fixed computation, you can place probabilities on possible outputs of that computation (or more concretely, on possible outputs of an extrapolation of your or humanity's volition).

comment by phaedrus · 2012-02-21T22:29:49.711Z · LW(p) · GW(p)

I find this paper to be a good resource to think about this subject: https://motherjones.com/files/emotional_dog_and_rational_tail.pdf

Replies from: army1987, Sniffnoy
comment by A1987dM (army1987) · 2012-02-22T00:08:05.965Z · LW(p) · GW(p)

You have to escape underscores by preceding them with backslashes, otherwise they're interpreted as markup for italics.

comment by Sniffnoy · 2012-02-21T23:41:44.427Z · LW(p) · GW(p)

The underscores need escaping.

comment by Kevin · 2010-04-02T05:50:18.910Z · LW(p) · GW(p)

See this discussion of my own meat-eating. My conclusion was that there is not much of a rational basis for deciding one way or the other -- my attempts to use rationality broke down.

I think you should go out and get yourself something deliciously meaty, while still being mostly vegetarian. "Fair weather vegetarianism". Unless you don't actually like the taste of meat. That's ok. There's also an issue of convenience. You could begin the slippery slope of drinking chicken broth soup and Thai food with lots of fish sauce.

We exist in an immoral system and there isn't much to do about it. Being a vegetarian for reasons of animal suffering is symbolic. If we truly cared about the holocaust of animal suffering, we would be waging a guerrilla war against factory farms.

Replies from: None
comment by [deleted] · 2010-04-02T15:30:50.789Z · LW(p) · GW(p)

.

Replies from: Kevin
comment by Kevin · 2010-04-02T16:50:48.138Z · LW(p) · GW(p)

In this case, other people seem to have concluded that the value of not eating a piece of an animal is in the long run equal to that much animal not suffering/dying. So I know the difference one person could make and it seems too small to be worth the hassle of not eating meat that other people prepare for me, and not worth the inconvenience of not getting the most delicious item on the menu at restaurants.

comment by beriukay · 2010-04-05T14:38:45.904Z · LW(p) · GW(p)

Perhaps the folks at LW can help me clarify my own conflicting opinions on a matter I've been giving a bit of thought lately.

Until about the time I left for college, most of my views reflected those of my parents. It was a pretty common Republican party-line cluster, and I've got concerns that I have anchored at a point closer to favoring the death penalty than I should. I read studies about how capital punishment disproportionately harms minorities, and I think Robin Hanson had more to say about differences in social tier. Early in my college time, this sort of problem led me to reject the death penalty on practical grounds. Then, as I lost my religious views, I stopped seeing it as a punishment at all. I started to see it as the same basic thing as putting down an aggressive dog. After all, dead people have a pretty encouraging recidivism rate.

I began to wonder if I could reject the death penalty on principle. A large swath of America believes that the words of the Declaration of Independence are as pertinent to our country as the Constitution. This would mean that we could disallow execution because it conflicts with our "inalienable" right to life. But then, I can't justify using the same argument as the people who try to prove that America is a Christian nation. As an interesting corollary, it seems that anyone citing the Declaration in this manner will have a very hard time also supporting the death penalty for this reason.

So basically, I think I would find the death penalty morally acceptable, but only in the hypothetical realm of virtual certainty that the inmate is guilty of a heinous crime. And I have no bound for what that virtual certainty is. Certainly a 5% chance of being falsely accused is too high. I wouldn't kill one innocent man to rid the world of 19 bad ones. But then, I would kill an innocent person to stop a billion headaches (an example I just read in Steven Landsburg's The Big Questions), so I obviously don't demand 100% certainty.

It seems like I might be asking: "What are the chances that someone was falsely accused, given that they were accused of an execution-worthy crime?" And a follow-up "What is an acceptable chance for killing an innocent person?"

Can Bayes help here? I am eager to hear some actual opinions on this matter. So far I've come up with precious little when talking to friends and family.
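
For what it's worth, the Bayes calculation itself is short once you pick numbers; every number below is an assumption for illustration, not a claim about the actual justice system:

```python
# P(innocent | convicted of an execution-worthy crime), via Bayes' theorem.
# All three inputs are assumptions you would have to estimate yourself.
p_innocent = 0.02                 # prior: fraction of such defendants who are actually innocent
p_convict_given_innocent = 0.15   # chance an innocent defendant is convicted anyway
p_convict_given_guilty = 0.80     # chance a guilty defendant is convicted

p_convict = (p_convict_given_innocent * p_innocent
             + p_convict_given_guilty * (1 - p_innocent))
p_innocent_given_convict = p_convict_given_innocent * p_innocent / p_convict
print(p_innocent_given_convict)   # ~0.004 with these made-up inputs
```

The follow-up question - what level of risk of executing an innocent person is acceptable - is a question about values, and the math can't answer it for you.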

Replies from: Unnamed, Rain, Kevin, Morendil, Amanojack
comment by Unnamed · 2010-04-06T05:41:08.360Z · LW(p) · GW(p)

My take on capital punishment is that it's not actually that important an issue. With pretty much anything that you can say about the death penalty, you can say something similar about life imprisonment without parole (especially with the way that the death penalty is actually practiced in the United States). Would you lock an innocent man in a cell for the rest of his life to keep 19 bad ones locked up?

Virtually zero chance of recidivism? True for both. Very expensive? Check. Wrongly convicted innocent people get screwed? Check - though in both cases they have a decent chance of being exonerated after conviction before getting totally screwed (and thus only being partially screwed). Could be considered immoral to do something so severe to a person? Check. Deprives people of an "inalienable" right? Check (life/liberty). Strongly demonstrates society's disapproval of a crime? Check (slight edge to capital punishment, though life sentences would be better at this if the death penalty wasn't an option). Applied disproportionately to certain groups? I think so, though I don't know the research. Strong deterrent? It seems like the death penalty should be a bit stronger, but the evidence is unclear on that. Provides closure to the victim's family? Execution seems like more definitive closure, but they have to wait until years after sentencing to get it.

The criminal justice system is a big important topic, and I think it's too bad that this little piece of it (capital punishment) soaks up so much of our attention to it. Overall, my stance on capital punishment is ambivalent, leaning against it because it's not worth the trouble, though in some cases (like McVeigh) it's nice to have around and I could be swayed by a big deterrent effect. I'd prefer for more of the focus to be on this sort of thing (pdf).

Replies from: Kevin
comment by Kevin · 2010-04-06T05:46:37.219Z · LW(p) · GW(p)

Good post. I have never seen strong evidence that the death penalty has a meaningful deterrent effect but I'd be curious to see links one way or the other.

I lean towards prison abolition, but it's an idealistic notion, not a pragmatic one. I suppose we could start by getting rid of prisons for non-violent crimes and properly funding mental hospitals. http://en.wikipedia.org/wiki/Prison_abolition_movement I can't see that happening when we can't even decriminalize marijuana.

comment by Rain · 2010-04-05T14:50:45.705Z · LW(p) · GW(p)

Standard response: politics is the mind-killer.

Personal response: I'm opposed to the death penalty because it costs more than putting them in prison for life due to the huge number of appeals they're allowed (vaguely recall hearing in newspapers / reports). I feel the US has become so risk-averse and egalitarian that it cannot properly implement a death penalty. This is reflected in the back-and-forth questions you ask.

I also oppose it on the grounds that it is often used as a tool of vengeance rather than justice. Nitrogen asphyxiation (I think that was the gas they were talking about) is a safe, highly reliable, and euphoric means of death, but the US still prefers electrocution (can take minutes), injection (can feel like the veins are burning from the inside out while the body is paralyzed), etc.

That said, I don't care enough about the topic to try and alter its use, whether through voting, polling, letters, etc, nor do I desire to put much thought into it. Best to let hot topics alone.

And after asking about Bayes, you should ask for math rather than opinions.

Replies from: beriukay
comment by beriukay · 2010-04-06T09:33:01.735Z · LW(p) · GW(p)

Yeah, my formatting of the last few sentences wasn't very great. Sorry.

comment by Kevin · 2010-04-06T05:31:21.594Z · LW(p) · GW(p)

There is strong Bayesian evidence that the USA has executed one innocent man. http://en.wikipedia.org/wiki/Cameron_Todd_Willingham By that I mean that an Amanda Knox test type analysis would clearly show that Willingham is innocent, probably with greater certainty than when the Amanda Knox case was analyzed. Does knowing that the USA has indeed provably executed an innocent person change your opinion?

What are the practical advantages of death over life in prison? US law allows for true life without parole. Life in an isolated cell in a Supermax prison is continual torture -- it is not a light punishment by any means. Without a single advantage given for the death penalty over life in prison without parole, I think that ~100% certainty is needed for execution.

I am against the death penalty for regular murder and mass murder and aggravated rape. I am indifferent with regards to the death penalty for crimes against humanity as I recognize that symbolic execution could be appropriate for grave enough crimes.

Replies from: beriukay, wedrifid
comment by beriukay · 2010-04-06T11:12:06.558Z · LW(p) · GW(p)

Kevin, thank you for the specific example. It definitely strengthened my practical objection to the practice. I strongly suspect that the current number of false positives lies outside of my acceptance zone.

Rain, I agree that politics is a mind-killer, but thought it worthy of at least brushing the cobwebs off some cached thoughts. Good point about Nitrogen. I wonder why we choose gruesome methods when even CO would be cheap, easy and effective.

Morendil, I appreciate the other questions. You have a good point that if Omega were brought in on the justice system, it would definitely find better corrective measures than the kill command. I think Eliezer once talked about how predicting your possible future decisions is basically the same as deciding. In that case, I already changed many things on this Big Question, and am just finally doing what I predicted I might do last time I gave any thought to capital punishment. Which happened to be at the conclusion (if there is such a thing) of a murder trial where my friend was a victim. Lots of bias to overcome there, methinks.

Unnamed, interesting points. I hadn't actually considered how similar life imprisonment is to execution, with regard to the pertinent facts. I was recently introduced to the concept of restorative justice which I think encompasses your article. I find it particularly appealing because it deals with what works, instead of worthless Calvinist ideals like punishment. From my understanding, execution only fulfills punishment in the most trivial of senses.

comment by wedrifid · 2010-04-06T07:13:39.344Z · LW(p) · GW(p)

I am against the death penalty for regular murder and mass murder and aggravated rape. I am indifferent with regards to the death penalty for crimes against humanity as I recognize that symbolic execution could be appropriate for grave enough crimes.

"Crimes against humanity" is one of the crimes that for most practical purposes means "... and lost".

Replies from: Kevin
comment by Kevin · 2010-04-06T07:51:11.036Z · LW(p) · GW(p)

Yup. Even though they'll never face charges, some of the winners are guilty as sin. And while the Project for the New American Century was on the winning side of the war, its namesake mission has failed horribly.

comment by Morendil · 2010-04-05T17:29:54.478Z · LW(p) · GW(p)

The more judicious question, I am coming to realize, isn't so much "Which of these two Standard Positions should I stand firmly on".

The more useful question is, why do the positions matter? Why is the discussion currently crystallized around these standard positions important to me, and how should I fluidly allow whatever evidence I can find to move me toward some position, which is rather unlikely (given that the debate has been so long crystallized in this particular way) to be among the standard ones. And I shouldn't necessarily expect to stay at that position forever, once I have admitted in principle that new evidence, or changes in other beliefs of mine, must commit me to a change in position on that particular issue.

In the death-penalty debate I identify more strongly with the "abolitionist" standard position because I was brought up in an abolitionist country by left-wing parents. That is, I find myself on the opposite end of the spectrum from you. And yet, perhaps we are closer than is apparent at first glance, if we are both of us committed primarily to investigating the questions of values, the questions of fact, and the questions of process that might leave either or both of us, at the end of the inquiry, in a different position than we started from.

  • Would I revise my "in principle" opposition to the death penalty if, for instance, the means of "execution" were modified to cryonic preservation? Would I then support cryonic preservation as a "punishment" for lesser crimes such as would currently result in lifetime imprisonment?

  • Would I still oppose the death penalty if we had a Truth Machine? Or if we could press Omega into service to give us a negligible probability of wrongful conviction? Or otherwise rely on a (putatively) impartial means of judgment which didn't involve fallible humans? Is that even desirable, if it was at all possible?

  • Would I support the death penalty if I found out it was an effective deterrent, or would I oppose it only if I found that it didn't deter? Does deterrence matter? Why, or why not?

  • How does economics enter into such a decision? How much, whatever position I arrive at, should I consider myself obligated to actively try to ensure that the society I live in espouses that position? For what scope of "the society I live in" - how local or global?

Those are topics and questions I encounter in the process of thinking about things other than the death penalty; practically every important topic has repercussions on this one.

There's an old systems science saying that I think applies to rational discussions about Big Questions such as this one: "you can't change just one thing". You can't decide on just one belief, and as I have argued before, it serves no useful purpose to call an isolated belief "irrational". It seems more appropriate to examine the processes whereby we adjust networks of beliefs, how thoroughly we propagate evidence and argument among those networks.

There is currently something of a meta-debate on LW regarding how best to reflect this networked structure of adjusting our beliefs based on evidence and reasoning, with approaches such as TakeOnIt competing against more individual debate modeling tools, with LessWrong itself, not so much the blog but perhaps the community and its norms, having some potential to serve as such a process for arbitrating claims.

But all these prior discussions seem to take as a starting point that "you can't change just one belief". That's among the consequences of embracing uncertainty, I think.

Replies from: Rain
comment by Rain · 2010-04-05T17:42:45.758Z · LW(p) · GW(p)

Yeah, that's why I try to avoid hot topics. Too much work.

Replies from: Morendil
comment by Morendil · 2010-04-05T18:01:32.567Z · LW(p) · GW(p)

Well, even relatively uncontroversial topics have the same entangled-with-your-entire-belief-network quality to them, but (to most people) less power to make you care.

The judicious response to that is to exercise some prudence in the things you choose to care about. If you care too much about things you have little power to influence and could easily be wrong about, you end up "mind-killed". If you care too little and about too few things except for basic survival, you end up living the kind of life where it makes little difference how rational you are.

The way it's worked out for me is that I've lived through some events which made me feel outraged, and for better or for worse the outrage made me care about some particular topics, and caring about these topics has made me want to be right about them. Not just to associate myself with the majority, or with a set of people I'd pre-determined to be "the right camp to be in", but to actually be right.

comment by Amanojack · 2010-04-05T20:20:12.108Z · LW(p) · GW(p)

Political questions like this are far removed from the kind of analysis you seem to want to apply. If it's you taking out a killer yourself that's one thing, but the question of whether to support it as a law is something entirely different. This rabbit hole goes very far indeed. Anyway, why would you care about the Constitution - you're not one of the signers, are you? ;-)

Replies from: Mass_Driver, Rain
comment by Mass_Driver · 2010-04-06T04:32:26.253Z · LW(p) · GW(p)

I care about the Constitution for a couple of reasons beyond the narrowly patriotic:

(1) For the framers, its design posed a problem very similar to the design of Friendly AI. The newly independent British colonies were in a unique situation. On the one hand, whatever sort of nation they designed was likely to become quite powerful; it had good access to very large quantities of people, natural resources, and ideas, and the general culture of empiricism and liberty meant that the nation behaved as if it were much more intelligent than most of its competitors. On the other hand, the design they chose for the government that would steer that nation was likely to be quite permanent; it is one thing to change your system of government as you are breaking away from a distant and unpopular metropole, and another to change your government once that government is locally rooted and supported. The latter takes a lot more blood, and carries a much higher risk of simply descending into medium-term anarchy. Finally, the Founders knew that they could not see every possible obstacle that the young and unusual nation would encounter, and so they would have to create a system that could learn based on input from its environment without further input from its designers. So just as we have to figure out how to design a system that will usefully manage vast resources and intelligence in situations we cannot fully predict and with directions that, once issued, cannot be edited or recalled, so too did the Founding Fathers, and we should try to learn from their failures and successes.

(2) The Constitution has come to embody, however imperfectly, some of the core tenets of Bayesianism. I quote Justice Oliver Wendell Holmes:

Persecution for the expression of opinions seems to me perfectly logical. If you have no doubt of your premises or your power and want a certain result with all your heart you naturally express your wishes in law and sweep away all opposition...But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas...that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution.

Replies from: Amanojack, NancyLebovitz
comment by Amanojack · 2010-04-06T14:37:58.945Z · LW(p) · GW(p)

Re 1, if that is the case why not support the Articles of Confederation instead? I also take exception to the underlying assumption that society needs top-down designing, but that's a very deep debate.

But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas. ... That at any rate is the theory of our Constitution.

If that was really the theory - "checks and balances" - the Constitution was a huge step backward from the Articles of Confederation. (I don't support the AoC, but I'd prefer them to the Constitution.)

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-06T16:38:46.220Z · LW(p) · GW(p)

Re 1, if that is the case why not support the Articles of Confederation instead?

I never said we should support it; I said we should care about it.

It would be silly to claim that anyone interested in FAI should be pro-Constitution; there were plenty of 18th century people who earnestly grappled with their version of the FAI problem and thought the Constitution was a bad idea. If you agree more with the anti-Federalists, fine! The point is that we should closely follow the results of the experiment, not that we should bark agreement with the particular set of hypotheses chosen by the Founding Fathers for extensive testing.

comment by NancyLebovitz · 2010-04-06T10:21:18.016Z · LW(p) · GW(p)

Very good point, and the founders' process for developing the constitution and bill of rights is important for thinking about how to develop a Friendly (mostly Friendly?) AI.

comment by Rain · 2010-04-06T12:11:17.281Z · LW(p) · GW(p)

Anyway, why would you care about the Constitution - you're not one of the signers, are you? ;-)

I swore an oath to support and defend the Constitution as a condition of employment, so at the very least I have to signal caring about it. I doubt beriukay is in the same position, though.

Replies from: rortian
comment by rortian · 2010-04-08T01:51:40.042Z · LW(p) · GW(p)

Do you really take that sort of thing seriously? Far out if you do, but I have trouble with the concept of an 'oath'.

Replies from: Rain, mattnewport
comment by Rain · 2010-04-08T02:03:31.223Z · LW(p) · GW(p)

Oaths in general can be a form of precommitment and a weak signal that someone subscribes to certain moral or legal values, though no one seemed to take it seriously in this instance. On my first day, it was just another piece of paper in with all the other forms they wanted me to sign, and they took it away right after a perfunctory reading. I had to search it out online to remember just what it was I had sworn to do. Later, I learned some people didn't even remember they had taken it.

Personally, I consider it very important to know the rules, laws, commitments, etc., for which I may be responsible, so when I or someone else breaks them, I can clearly note it.

For example, in middle school, one of my teachers didn't like me whispering to the person sitting next to me in class. When she asked what I was doing, I told her that I was explaining the lesson, since she did a poor job of it. She asked me if I would like to be suspended for disrespect; I made sure to let her know that the form did not have 'disrespect' as a reason for suspension, only detention.

Replies from: rortian, wedrifid
comment by rortian · 2010-04-09T01:24:58.882Z · LW(p) · GW(p)

Personally, I consider it very important to know the rules, laws, commitments, etc., for which I may be responsible, so when I or someone else breaks them, I can clearly note it.

Far out. That is important.

As for your story, it's something I would have done but I hope you understand that a little tact could have gone a long way.

What I was trying to get at you seem to think also. You think you are sending a 'weak signal' that you are committed to something. But you are using words that I think many around here would be suspicious of (e.g. oath and sworn).

You can say you will do something. If someone doesn't trust that assertion, how will they ever trust 'no really I'm serious'.

Replies from: Rain
comment by Rain · 2010-04-09T01:36:11.100Z · LW(p) · GW(p)

You can say you will do something. If someone doesn't trust that assertion, how will they ever trust 'no really I'm serious'.

Perhaps through enforcement. There are a significant number of laws, regulations, and directives that cover US Federal employees, and the oath I linked to above is a signed and sworn statement indicating the fact that I am aware of and accept responsibility for them.

comment by wedrifid · 2010-04-08T02:10:44.124Z · LW(p) · GW(p)

She asked me if I would like to be suspended for disrespect; I made sure to let her know that the form did not have 'disrespect' as a reason for suspension, only detention.

You prefer more time locked up in school than less?

Replies from: Rain, Rain
comment by Rain · 2010-04-08T02:58:35.551Z · LW(p) · GW(p)

No.

Replies from: wedrifid
comment by wedrifid · 2010-04-08T06:44:53.950Z · LW(p) · GW(p)

My explanation: It is ironic that 'more time at school after it finishes' is used as a punishment and yet 'days off school' is considered a worse punishment.

Given the chance I would go back in time and explain to my younger self that just because something is presented as a punishment or 'worse punishment' doesn't mean you have to prefer to avoid it. Further, I would explain that getting what he wants does not always require following the rules presented to him. He can make his own rules, choose among preferred consequences.

While I never actually got either a detention or a suspension, I would have to say I'd prefer the suspension.

Replies from: rortian
comment by rortian · 2010-04-09T01:27:08.073Z · LW(p) · GW(p)

In theory, but I wonder how long it has been since you were in school. In GA they got around to making a rule that if you were suspended you would lose your driver's license. Also, suspensions typically imply a 0 on all assignments (and possibly tests) that come due during them.

Replies from: wedrifid
comment by wedrifid · 2010-04-09T01:31:30.260Z · LW(p) · GW(p)

In theory but I wonder how long it has been since you were in school.

As a teacher or a student? 4 years and respectively.

comment by Rain · 2010-04-08T02:14:16.480Z · LW(p) · GW(p)

I have a martyr complex.

comment by mattnewport · 2010-04-08T01:54:14.367Z · LW(p) · GW(p)

but I have trouble with the concept of an 'oath'.

How so?

Replies from: Kevin, rortian
comment by Kevin · 2010-04-08T03:13:57.415Z · LW(p) · GW(p)

An oath is an appeal to a sacred witness, typically the God of Abraham. An affirmation is the secular version of an oath in the American legal system.

Replies from: mattnewport
comment by mattnewport · 2010-04-08T07:33:09.915Z · LW(p) · GW(p)

Hailing from secular Britain I wasn't aware of the distinction. Affirmation actually sounds more religious to me. I'd never particularly associated the idea of an oath with religion but I can see how such an association could sour one on the word 'oath'.

comment by rortian · 2010-04-09T01:30:19.808Z · LW(p) · GW(p)

Yeah I like Kevin's short answer. But in general I said to Rain:

You can say you will do something. If someone doesn't trust that assertion, how will they ever trust 'no really I'm serious'.

When you make something a contract you see there are some legal teeth, but swearing to uphold the constitution feels silly.

Replies from: mattnewport
comment by mattnewport · 2010-04-09T01:46:04.710Z · LW(p) · GW(p)

Well, obviously the idea of an oath only has value if it is credible; that is why there are often strong cultural taboos against oath breaking. In times past there were often harsh punishments for oath breaking to provide additional enforcement, but it is true that in the modern world much of the function of oaths has been transferred to the legal system. Traditionally, however, one of the things that defined a profession was the expectation that its members held themselves to a standard above and beyond the minimum enforced by law. Professional oaths are part of that tradition, as is the idea of an oath sworn by civil servants and other government employees. This general concept is not unique to the US or to government workers.

comment by SforSingularity · 2010-04-03T15:51:10.029Z · LW(p) · GW(p)

As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.

Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.

Discuss.

Replies from: Mass_Driver, Rain, cousin_it
comment by Mass_Driver · 2010-04-04T06:34:59.131Z · LW(p) · GW(p)

You try frantically to tell people about this, and it always seems to go badly for you.

Telling people frantically about problems that are not on a very short list of "approved emergencies" like fire, angry mobs, and snakes is a good way to get people to ignore you, or, failing that, to dislike you.

It is only very recently (in evolutionary time) that ordinary people are likely to find important solutions to important social problems in a context where those solutions have a realistic chance of being implemented. In the past, (a) people were relatively uneducated, (b) society was relatively simpler, and (c) arbitrary power was held and wielded relatively more openly.

Thus, in the past, anyone who was talking frantically about social reform was either hopelessly naive, hopelessly insane, or hopelessly self-promoting. There's a reason we're hardwired to instinctively discount that kind of talk.

comment by Rain · 2010-04-03T19:08:57.572Z · LW(p) · GW(p)

You should present the easily implemented, obviously better solution at the same time as the problem.

If the solution isn't easy to implement by the person you're talking to, then cost/benefit analysis may be in favor of the status quo or you might be talking to the wrong person. If the solution isn't obviously better, then it won't be very convincing as a solution or you might not have considered all opinions on the problem. And if there is no solution, then why complain?

comment by cousin_it · 2010-04-03T15:57:51.568Z · LW(p) · GW(p)

Is that true? 'Cause if it's true, I'd like to join.

comment by Richard_Kennaway · 2010-04-20T19:20:27.328Z · LW(p) · GW(p)

Does brain training work? Not according to an article that has just appeared in Nature. Paper here, video here or here.

These results provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults. This was true for both the ‘general cognitive training’ group (experimental group 2) who practised tests of memory, attention, visuospatial processing and mathematics similar to many of those found in commercial brain trainers, and for a more focused training group (experimental group 1) who practised tests of reasoning, planning and problem solving. Indeed, both groups provided evidence that training-related improvements may not even generalize to other tasks that use similar cognitive functions.

Note that they were specifically looking for transfer effects. The specific tasks practised did themselves show improvements.

Replies from: RobinZ, Jack
comment by RobinZ · 2010-04-20T19:44:13.537Z · LW(p) · GW(p)

Brain training, for those not following the link, refers to playing games involving particular mental skills (e.g. memory). The study ran six weeks.

I don't think the experiment looks definitive - the control group did not appear as thoroughly distinguished from the test groups as I would have liked - but the MRC Cognition and Brain Sciences Unit (who were partners in the experiment) is well-regarded enough that I would call the null result major evidence.

comment by Jack · 2010-04-20T19:36:25.114Z · LW(p) · GW(p)

The fact that they studied adults rather than children may make a difference.

comment by NancyLebovitz · 2010-04-06T14:48:05.064Z · LW(p) · GW(p)

Rats have some ability to distinguish between correlation and causation

To get back to the rat study—it's very simple actually. What I did is: I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food. So they might have used Pavlovian conditioning; just as I said, Pavlovian conditioning might be the substrate by which animals learn to piece together spatial maps and maybe causal maps as well. If they treat the light as a common cause of the tone and of food, they see [hear] the tone and they predict food might happen. Just like if you see the barometer drop then you think, "Oh, the storm might happen." But, if you see someone tamper with the barometer and you know that the barometer and the storm aren't causally related, then you won't think that the weather is going to change. So, the question is, if the rat intervenes to make the tone happen, will it now no longer think the food will occur.

So there were a bunch of rats; they all had the same training—light as an antecedent to tone and food. Then, at test, some of the rats got tone and they tended to go look in the food section. So they were expecting food based on the tone—which humans would says is a diagnostic reasoning process. “Tone is there because light causes tone and light also causes food. Oh, there must be food.” Or, it's just second order Pavlovian conditioning. The critical test was with another group of rats that got the same training. We gave them a lever that they had never had before. They were in this box, and they have a lever that is rigged so that if they press the lever the tone will immediately come up. So now the question is, do the rats attribute that tone to being caused by themselves. That is, did they intervene to make that variable change? If they thought that they were the cause of the tone, that means it couldn't have been the light, therefore the other effects of the light, food, would not have been expected. In that case, the intervening rats, after hearing the tone of their own intervention, should not expect food. Indeed, they didn't go to food nearly as much. That is the essence of the finding and how it fits in with this idea of causal models and how we go about testing our world.

the abstract
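
For anyone who wants the observation/intervention distinction spelled out, here is a minimal sketch of the common-cause structure the experimenters seem to have in mind (all probabilities are invented for illustration):

```python
# Toy common-cause model: light -> tone, light -> food.
# Hearing the tone is evidence for the light, and hence for food;
# causing the tone yourself says nothing about the light.
p_light = 0.5
p_tone_given_light = 0.5   # tone only ever follows the light
p_food_given_light = 0.5   # food only ever follows the light

# Observation: P(food | tone heard)
p_tone = p_light * p_tone_given_light
p_food_and_tone = p_light * p_tone_given_light * p_food_given_light
print(p_food_and_tone / p_tone)       # 0.5

# Intervention: P(food | the rat presses the lever and makes the tone itself)
# The lever press severs the light -> tone link, so the tone is no evidence.
print(p_light * p_food_given_light)   # 0.25
```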

Replies from: Amanojack
comment by Amanojack · 2010-04-06T16:12:15.277Z · LW(p) · GW(p)

I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food.

The information here is a little scant. If, in the cases where there was a tone instead of food, the tone always followed very soon after the light, it'd be most logical for rats to wait for the tone after seeing the light, and only go look for food after confirming that no tone was forthcoming. (This would save them effort assuming the food section was significantly far away. No tone = food. Tone = no food. Or did the scientists sometimes have the light be followed by both tone and food? I assume no, because that would introduce a first-order Pavlovian association between tone and food, which would mess up the next part of the experiment.)

Then, at test, some of the rats got tone and they tended to go look in the food section.

If, as I suggested above, the rats had previously been trained to wait for the lack of a tone before checking in the food section, this result would more strongly rule out a second-order Pavlovian response.

The critical test was with another group of rats that got the same training. We gave them a lever that they had never had before. They were in this box, and they have a lever that is rigged so that if they press the lever the tone will immediately come up. ... In that case, the intervening rats, after hearing the tone of their own intervention, should not expect food. Indeed, they didn't go to food nearly as much.

On the one hand, this is really surprising. On the other hand, I don't see how rats could survive without some cause-and-effect and logical reasoning. I'm really eager to see more studies on logical reasoning in animals. Any anecdotal evidence with house pets anyone?

comment by Vladimir_Nesov · 2010-04-02T09:50:32.652Z · LW(p) · GW(p)

David Chalmers has written up a paper based on the talk he gave at 2009 Singularity Summit:

From the blog post where he announced the paper:

The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into computers, focusing on issues about consciousness and personal identity.

Replies from: timtyler
comment by timtyler · 2010-04-02T12:18:08.149Z · LW(p) · GW(p)

Rather sad to see Chalmers embracing the dopey "singularity" terminology.

He seems to have toned down his ideas about development under conditions of isolation:

"Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will."

Still, the ideas he expresses here are not very realistic, IMO. People want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but of course we won't keep these things permanently restrained on grounds of sheer paranoia - that would stop us from using them.

53 pages with only 2 mentions of zombies - yay.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-02T14:19:00.320Z · LW(p) · GW(p)

Sure there will be test harnesses

We can't test for values -- we don't know what they are. A negative test might be possible ("this thing surely has wrong values"), as a precaution, but not a positive test.

Replies from: timtyler
comment by timtyler · 2010-04-02T14:29:40.417Z · LW(p) · GW(p)

Testing often doesn't identify all possible classes of flaw - e.g. see:

http://en.wikipedia.org/wiki/Unit_testing#Unit_testing_limitations

It is still very useful, nonetheless.

comment by Kutta · 2010-04-01T20:16:33.077Z · LW(p) · GW(p)

PDF: "Are black hole starships possible?"

This paper examines the possibility of using miniature black holes for converting matter to energy via Hawking radiation, and propelling ships with that. Pretty interesting, I think.

I'm no physicist and not very math literate, but there is one issue I pondered: namely, how would it be possible to feed matter to a mini black hole that has an attometer-scale event horizon and radiates petajoules of energy in all directions? The black hole would be an extremely tiny target in a barrier of ridiculous energy density. The paper, as rudimentary as it is, does not discuss this feeding issue.
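
To put rough numbers on why feeding looks so hard, here is a back-of-the-envelope sketch; the mass below is an assumption chosen for illustration, not a figure taken from the paper:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
hbar = 1.055e-34  # J s

M = 1e9  # kg - an assumed ship-scale black hole, roughly a million tonnes

r_s = 2 * G * M / c**2                               # Schwarzschild radius
P = hbar * c**6 / (15360 * math.pi * G**2 * M**2)    # standard Hawking power formula

print(f"horizon radius ~ {r_s:.1e} m")   # ~1.5e-18 m: attometer scale
print(f"radiated power ~ {P:.1e} W")     # ~3.6e14 W: hundreds of terawatts
```

Anything fed in would have to hit a target several hundred times smaller than a proton while pushing against that outward flux, which seems to be exactly the problem the paper leaves open.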

Replies from: wnoise, JenniferRM
comment by wnoise · 2010-04-01T20:53:28.616Z · LW(p) · GW(p)

I prefer links to the abstract, when possible.

http://arxiv.org/abs/0908.1803

comment by JenniferRM · 2010-04-02T19:51:05.097Z · LW(p) · GW(p)

This might be interesting in combination with a "balanced drive". They were invented by science fiction author Charles Sheffield, who attributed them to his character Arthur Morton McAndrew, so they are sometimes also called a "McAndrew Drive" or a "Sheffield Drive".

The basic trick is to put an incredibly dense mass at the end of a giant pole such that the inverse square law of gravity is significant along the length of the pole. The ship flies "mass forward" through space. Then the crew cabin (and anything else incapable of surviving enormous acceleration) is set up on the pole so that the faster the acceleration the closer it is to the mass. The cabin, flying "floor forward", changes its position while the floor flexes as needed so that the net effect of the ship's acceleration plus the force of gravity balances out to something tolerable. When not under acceleration you still get gravity in the cabin by pushing it out to the very tip of the pole.

The literary value of the system is that you can do reasonably hard science fiction and still have characters jaunt from star to star so long as they are willing to put up with the social isolation because of time dilation, but the hard part is explaining what the mass at the end of the pole is, and where you'd get the energy to move it.

If you could feed a black hole enough to serve as the mass while retaining the ability to generate Hawking radiation, that might do it. Or perhaps simply postulate technological control of quantum black holes and then use two in your ship: a big one to counteract acceleration and a small one to get energy from a "Crane-Westmoreland Generator".

comment by Amanojack · 2010-04-01T20:03:23.420Z · LW(p) · GW(p)

Why doesn't brain size matter? Why is a rat with its tiny brain smarter than a cow? Why does the cow bother devoting all those resources to expensive gray matter? Eliezer posted this question in the February Open Topic, but no one took a shot at it.

FTA: "In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter."

This statement seems ripe for semantic disambiguation. Cows can "afford" a larger brain than rats can, and although "large cow brain < small rat brain", it seems highly likely that "large cow brain > small cow brain". The fact that a large cow brain is wildly inefficient compared to a more optimized smaller brain is irrelevant to natural selection, a process that "search[es] the immediate neighborhood of its present point in the solution space, over and over and over." It's not as if cow evolution is an intelligent being that can go take a peek at rat evolution and copy its processes.

Still, why don't we see such apparent resource-wasting in other organs? My guess is that the brain is special, in that

1) As with other organs, it seems plausible that the easiest/fastest "immediate neighbor" adaptation to selective pressure on a large animal to acquire more intelligence is simply to grow a larger brain.

2) But in contrast with other organs, if a larger brain is very expensive (hard for the rat to fit into tight places, scampers slower, requires much more food), there are other ways to dramatically improve brain performance - albeit ones that natural selection may be slower to hit upon. Why slower? Presumably because they are more complex, less suited to an "immediate neighbor" search, more suited to an intelligent search or re-design. (The evolution process would be even slower in large animals with longer life cycles.)

I bolded "dramatically" because the possibility of substantial intelligence gains by code optimization alone (without adding parallel processors, for instance) also seems to be a key factor in the AI "FOOM" argument. Maybe that's a clue.

Replies from: rwallace, JamesAndrix, JamesAndrix
comment by rwallace · 2010-04-02T10:36:23.444Z · LW(p) · GW(p)

Be careful about making assumptions about the intelligence of cows. I used to think sheep were stupid; then I read that sheep can tell humans apart by sight (which is more than I can do for them!), and I realized on reflection that I never had any actual reason to believe sheep were stupid - it was just an idea I'd picked up and not had any reason to examine.

Also, be careful about extrapolating from the intelligence of domestic cows (which have lived for the last few thousand years with little evolutionary pressure to get the most out of their brain tissue) to the intelligence of their wild relatives.

Replies from: Bo102010
comment by Bo102010 · 2010-04-02T12:25:47.339Z · LW(p) · GW(p)

I'm not sure if it's useful to speak of a domesticated animal's raw "intelligence" by citing how they interact with humans.

"Little evolutionary pressure" means "little NORMAL evolutionary pressure" for animals protected by humans. That is, surviving and propagating is less about withstanding normal natural situations, and more about successfully interacting with humans.

So, sheep/cows/dogs/etc. might have pools of genius in the area of "find a human that will feed you," and may be really dumb in almost other areas.

Replies from: None
comment by [deleted] · 2010-04-02T14:29:56.930Z · LW(p) · GW(p)

.

comment by JamesAndrix · 2010-04-01T21:30:47.615Z · LW(p) · GW(p)

At the risk of repeating the same mistake as my previous comment, I'll do armchair genetics this time:

Perhaps the genes controlling the size of various mammalian organs and body regions tend to scale them up or down uniformly, with parts only becoming disproportionate when there is a stronger evolutionary pressure. When there is a mutation leading to more growth, all the organs tend to grow more.

comment by JamesAndrix · 2010-04-01T21:21:12.654Z · LW(p) · GW(p)

(I now see this answered in the first few comments on the link eliezer posted.)

Purely armchair neurology: To answer the question of why cow brains would need to be bigger than rat brains, I asked what would go wrong if we put a rat brain into a cow. (Ignoring organ rejection and cheese crazed, wall-eating cows)

We would need to connect the rat brain to the cow body, but there would not be a 1 to 1 correspondence of connections. I suspect that a cow has many more nerve endings throughout its body. At least some of the brain/body correlation must be related to servicing the body nerves (both sensory and motor).

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-01T21:46:30.873Z · LW(p) · GW(p)

The cow needs more receptors, and more activators. However, this would lead one to expect the relationship of brain size to body size to follow a power-law with an exponent of 2/3 (for receptors, which are primarily on the skin); or of 1 (for activators, which might be in number proportional to volume). The actual exponent is 3/4. Scientists are still arguing over why.
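
A quick illustration of how much the exponent matters (purely illustrative arithmetic, not data):

```python
# If brain mass ~ k * body_mass**b, then a body 1000x heavier implies
# a brain 1000**b times heavier.
for label, b in [("2/3 (receptors, surface area)", 2 / 3),
                 ("1 (activators, volume)", 1.0),
                 ("3/4 (observed, approximately)", 0.75)]:
    print(f"exponent {label}: ~{1000 ** b:,.0f}x the brain mass")
# prints roughly 100x, 1,000x, and 178x respectively
```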

Replies from: Erik, None
comment by Erik · 2010-04-06T07:34:17.719Z · LW(p) · GW(p)

West and Brown have done some work on this which seemed pretty solid to me when I read it a few months ago. The basic idea is that biological systems are designed in a fractal way, which messes up the dimensional analysis.

From the abstract of http://jeb.biologists.org/cgi/content/abstract/208/9/1575:

We have proposed a set of principles based on the observation that almost all life is sustained by hierarchical branching networks, which we assume have invariant terminal units, are space-filling and are optimised by the process of natural selection. We show how these general constraints explain quarter power scaling and lead to a quantitative, predictive theory that captures many of the essential features of diverse biological systems. Examples considered include animal circulatory systems, plant vascular systems, growth, mitochondrial densities, and the concept of a universal molecular clock. Temperature considerations, dimensionality and the role of invariants are discussed. Criticisms and controversies associated with this approach are also addressed.

A Science article of theirs containing similar ideas: http://www.sciencemag.org/cgi/content/abstract/sci;284/5420/1677

Edit: A recent Nature article showing that there are systematic deviations from the power law, somewhat explainable with a modified version of the model of West and Brown:

http://www.nature.com/nature/journal/v464/n7289/abs/nature08920.html

comment by [deleted] · 2010-04-02T15:40:54.655Z · LW(p) · GW(p)

.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-02T16:44:37.797Z · LW(p) · GW(p)

Can something be mathematical and yet not strict?

Overly-simple mathematical models don't always work in the real world.

Replies from: None
comment by [deleted] · 2010-04-03T20:10:03.748Z · LW(p) · GW(p)

.

comment by Peter_de_Blanc · 2010-04-07T02:05:20.709Z · LW(p) · GW(p)

I'd like to plug a facebook group:

Once we reach 4,096 members, everyone will donate $256 to SingInst.org.

Folks may also be interested in David Robert's group:

1 million people, $100 million to defeat aging.

comment by CronoDAS · 2010-04-05T02:10:14.211Z · LW(p) · GW(p)

My mother's sister has two children. One is eleven and one is seven. They are both being given an unusually religious education. (Their mother, who is Catholic, sent them to a prestigious Jewish pre-school, and they seem to be going through the usual Sunday School bullshit.) I find this disturbing and want to proselytize for atheism to them. Any advice?

ETA: Their father is non-religious. I don't know why he's putting up with this.

Replies from: Unnamed, Kevin, NancyLebovitz, RobinZ, Amanojack, wedrifid, None, LucasSloan
comment by Unnamed · 2010-04-06T06:03:55.905Z · LW(p) · GW(p)

I wouldn't proselytize too directly - you want to stay on their (and their mother's) good side, and I doubt it would be very effective anyways. You're better off trying to instill good values - open-mindedness, curiosity, ability to think for oneself, and other elements of rationality & morality - rather than focusing on religion directly. Just knowing an atheist (you) and being on good terms with him could help lead them to consider atheism down the road at some point, which is another reason why it's important to maintain a good relationship. Think about the parallel case of religious relatives who interfere with parents who are raising their kids non-religiously - there are a lot of similarities between their situation and yours (even though you really are right and they just think they are) and you could run into a lot of the same problems that they do.

I haven't had the chance to try it out personally, but Dale McGowan's blog seems useful for this sort of thing, and his books might be even more useful.

Replies from: sketerpot
comment by sketerpot · 2010-04-07T20:38:10.276Z · LW(p) · GW(p)

I think that's some very good advice, and I'd like to elaborate a bit. The thing that made me ditch my religion was the fact that I already had a secular, socially liberal, science-friendly worldview, and it clashed with everything they said in church. That conflict drove my de-conversion, and made it easier for me to adjust to atheism. (I was even used to the idea, from most of my favorite authors mentioning that they weren't religious. Harry Harrison, in particular, had explicitly atheistic characters as soon as his publishers would let him.)

So, yeah, subtlety is your friend here.

comment by Kevin · 2010-04-05T02:34:43.775Z · LW(p) · GW(p)

One thing to do is make sure the kids understand that the Bible is just a bunch of stories. My mom teaches Reform Jewish Sunday school and makes this clear to her students. I make fun of her for cranking out little atheists.

Teaching that the Bible is a bunch of stories written by multiple humans over time is not nearly as offensive as preaching atheism. Start there. This bit of knowledge should be enough to get your young relatives thinking about religion, if they want to start thinking about it.

comment by NancyLebovitz · 2010-04-05T22:46:49.798Z · LW(p) · GW(p)

I'm not speaking from experience here, but that doesn't stop me from having opinions.

I don't believe this is an emergency. Are the kids' lives being affected negatively by the religion? What do they think of what they're being taught?

Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them?

Their minds aren't a battlefield between you and religious school-- what they believe is, well not exactly their choice because people aren't very good at choosing, but more their choice than yours.

I recommend teaching them a little thoughtful cynicism, with advertisements as the subject matter.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-06T14:03:15.942Z · LW(p) · GW(p)

Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them?

I haven't seen any evidence that they're being bothered by anything.

Mostly, I just want to make it clear that, unlike with a lot of other things they're learning in school, there are a lot of people who have good reasons to think the stories aren't true - to make it clear that there's a difference between "Moses led the Jews out of Egypt" and "George Washington was the first President of the United States."

comment by RobinZ · 2010-04-05T11:28:23.713Z · LW(p) · GW(p)

Dangerous situation!

How do the parents feel about science and science fiction? I believe that stuff has good effects.

comment by Amanojack · 2010-04-06T16:25:59.028Z · LW(p) · GW(p)

Possibly introducing them to some of the content in A Human's Guide to Words, such as dissolving the question, would lead them to theological noncognitivism. The nice thing about that as opposed to direct atheism is it's more "insidious" because instead of saying, "I don't believe" the kids would end up making more subtle points, like, "What do you even mean by omnipotent?" This somehow seems a lot less alarming to people, so it might bother the parents much less, or even seem like "innocent" questioning.

comment by wedrifid · 2010-04-07T22:21:12.611Z · LW(p) · GW(p)

Introduce them to really cool, socially near, atheists. In particular, provide contact with attractive opposite-gender children who are a couple of years older and are atheists.

comment by [deleted] · 2010-04-05T20:15:58.653Z · LW(p) · GW(p)

Teach them the basics of Bayesian reasoning without any connection to religion. This will help them in more ways and will lay the foundation for later, when they naturally start questioning religion. Also, their parents won't have anything against it if you merely introduce it as a method for physics or chemistry, or with the standard medical examples.

comment by LucasSloan · 2010-04-05T22:10:28.366Z · LW(p) · GW(p)

Speaking as someone who is seeing that sort of thing happening on the inside, I'm really not sure how you should deal with it. Even teaching traditional rationality doesn't help if religion is wrapped up in their social identity. I myself was lucky, in that I never did believe in god. I almost believe that the reason I came through sane was my IQ, although I'm sure that cannot be entirely correct. Getting them to socialize with other children who don't believe in god, or if that's not possible, children who believe in very different gods might help. I would also suggest you introduce them to fiction with strong rationality memes - Eliezer's Harry Potter fanfic [edited, see below] is the kind of thing that might appeal to children, although it has too much adult material.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-05T22:59:01.520Z · LW(p) · GW(p)

Um... Chapter 7 is not the child-friendliest chapter in the world. Teen-friendly, maybe. Not child-friendly.

Replies from: LucasSloan
comment by LucasSloan · 2010-04-06T00:16:24.658Z · LW(p) · GW(p)

Ah, yes. Totally slipped my mind. Part of the problem might be that I was reading that kind of material by age 10 so I'm a bit desensitized. However, I continue to think that the overall package is generally appealing to children. Perhaps delivery of a hard copy that has been judiciously edited might work.

Replies from: gwern
comment by gwern · 2010-04-07T21:36:13.670Z · LW(p) · GW(p)

Part of the problem might be that I was reading that kind of material by age 10 so I'm a bit desensitized.

True story: when I was 8 or so, I loved Piers Anthony's Xanth books. So much that I went and read all of his other books.

Replies from: Alicorn
comment by Alicorn · 2010-04-07T22:20:18.616Z · LW(p) · GW(p)

Even Xanth isn't harmless throughout.

Replies from: gwern
comment by gwern · 2010-04-07T22:25:11.493Z · LW(p) · GW(p)

Xanth's dark places are a heck of a lot more kid-friendly than, say, Bio of a Space Tyrant.

Replies from: Alicorn
comment by Alicorn · 2010-04-07T22:29:54.873Z · LW(p) · GW(p)

Of course. But I can't think of a single Piers Anthony item that I'd actually recommend to a child. Or, for that matter, to an adult, but that's because Anthony's work sucks, not because it's inappropriate.

Replies from: Cyan, CronoDAS, NancyLebovitz
comment by Cyan · 2010-04-08T00:31:50.113Z · LW(p) · GW(p)

I'd classify his... preoccupation... with young teenage girls paired with much older men as "inappropriate".

Replies from: CronoDAS, wedrifid, Alicorn
comment by CronoDAS · 2010-04-16T21:00:18.784Z · LW(p) · GW(p)

This is one of those "stupid questions" to which the answer seems obvious to everyone but me:

What's wrong with a 16-year-old and a 30-year-old having sex?

Replies from: AdeleneDawner, wnoise
comment by AdeleneDawner · 2010-04-16T21:15:47.441Z · LW(p) · GW(p)

It's a power thing. In our culture, the power differential between most 16-year-olds and most 30-year-olds is large enough to make the concept of 'uncoerced consent' problematic.

comment by wnoise · 2010-04-16T21:18:25.706Z · LW(p) · GW(p)

In principle, nothing. Positive, worthwhile sexual relationships can exist between 16-year-olds and 30-year-olds. In practice, there can be a great deal wrong that cuts against the probability of any given relationship with that age split being a net positive. There are immediately obvious power differentials (several legal and common commercial age lines of increasing responsibility and power fall between them[1]), a large disparity in history and experience, and probably a disparity in economic power as well. These really can deepen the downside immensely, while doing nothing to raise the upside.

[1]: i.e., several things change at 18, drinking at 21, renting cars at 25

Replies from: Alicorn
comment by Alicorn · 2010-04-16T21:22:36.580Z · LW(p) · GW(p)

I'd put it differently: There's nothing intrinsically wrong with a 16-year-old and a 30-year-old having sex, any more than there is anything intrinsically wrong with two 30-year-olds having sex. There may be extrinsic factors in either case that make it problematic (somebody's being coerced or forced, somebody's elsewhere married, somebody's intoxicated, somebody's being manipulative to get the sex). The way our society is set up, the first case is dramatically more likely to feature such extrinsic factors than the second case.

comment by wedrifid · 2010-04-08T02:05:15.521Z · LW(p) · GW(p)

Most of my aversion to that theme is (just?) cultural preference. I cannot tell whether I would object to the practice in another culture without more information about, for example, any physical or emotional trauma involved, reproductive implications, degree of physical maturity and the opportunity for the girls to self-determine their own lives. I would then have to compare the practice with 'forced schooling' from our culture to decide which is more disgusting.

Replies from: Cyan
comment by Cyan · 2010-04-08T02:13:52.718Z · LW(p) · GW(p)

I would then have to compare the practice with 'forced schooling' from our culture to decide which is more disgusting.

I've read a fair bit about this, but I would be interested in reading more about your perspective on this, in particular, the parts of the system that evoke for you such a visceral feeling as disgust.

Replies from: Blueberry
comment by Blueberry · 2010-04-16T19:05:07.195Z · LW(p) · GW(p)

I'm interested in wedrifid's response as well, but I share the disgust for forced schooling, at least as it's currently practiced.

  • In particular it's the extreme lack of freedom that bothers me. Students are constantly monitored, disciplined for minor infractions, and often can't even go to the bathroom without permission.

  • Knowledge is dispensed in small units to the students as if they were all identical, without any individualization or recognition that students may be interested in different things or have different rates of learning.

  • Students are frequently discouraged from learning on their own or pursuing their own interests, or at the very least not given time to do so.

  • The practice of giving grades puts the emphasis on competition and guessing the teacher's password rather than on creative thought or deep understanding. Students learn to get a grade, not out of intellectual curiosity.

  • Students are isolated in groups of students their own age, rather than interacting in the real world, with community members of all different ages. This creates an unnatural and unhealthy social environment that leads to cliques and bullying.

There are many schools that have made progress on some of these areas. Many cities have alternative or magnet schools that solve some of these problems, so I'm describing a worst-case scenario.

I'd suggest "The Teenage Liberation Handbook" by Grace Llewellyn for more on this, if you haven't already read it.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-16T19:12:16.039Z · LW(p) · GW(p)

Students don't get to see adults making decisions.

comment by Alicorn · 2010-04-08T01:24:29.337Z · LW(p) · GW(p)

Right. And I would consider that inappropriateness sufficient to refrain from recommending the books to a child. The fact that they also suck is necessary to extend that lack of recommendation to adults. Sorry if it was unclear.

Replies from: Cyan
comment by Cyan · 2010-04-08T01:58:41.162Z · LW(p) · GW(p)

Oh no, you were clear. All I mean is that the skeeviness of that particular theme is sufficient reason not to recommend PA to adults (even if the writing weren't ass).

ETA: Yeah, so, that was me being unclear, not you.

Replies from: Blueberry
comment by Blueberry · 2010-04-16T18:53:54.172Z · LW(p) · GW(p)

I'm incredibly curious why that theme bothers you so much that you wouldn't recommend that book to adults. There's a lot of fiction, and erotic fiction, around that theme: would you be against all of it?

I haven't read Anthony, so I don't know how he handles it. But despite cultural taboos, in some sense it seems like a better fit for young (straight) men to date older women, and vice versa. The more experienced partner can teach the less experienced partner. The power imbalance can be abused, but any relationship has the potential for abuse.

Is it just the violation of the cultural taboo that bothers you? Is it the same sort of moral disgust that people feel about incest? Sexual taboos are incredibly fascinating to me.

comment by CronoDAS · 2010-04-16T20:43:39.583Z · LW(p) · GW(p)

Having read quite a bit of Piers Anthony's work, I noticed that it got consistently worse as he got older. I still think A Spell for Chameleon was pretty good (and so was Tarot, if you don't mind the deliberate squick-inducing scenes), but anything he wrote after, say, 1986 is probably best avoided - everything had a tendency to turn into either pure fluff or softcore pornography.

Replies from: Alicorn
comment by Alicorn · 2010-04-16T21:16:12.806Z · LW(p) · GW(p)

The entire concept of Chameleon is nasty. Her backstory sets up all of the men from her village as being thrilled to take advantage of "Wynne" and universally unwilling to give "Fanchon" the time of day, while about half of them like "Dee". (Anthony is notable for being outrageously sexist towards both genders at once.) Her lifelong ambition is to sit halfway between the two extremes permanently, sacrificing the chance to ever have her above-average intellect because she wants male approval and it's conditional on being pretty (while she recognizes that being as stupid as she sometimes gets is a hazard). Bink is basically presented as a saint for putting up with the fact that she's sometimes ugly for the sake of getting "variety". It's implied that in her smart phase he values her as a conversation partner but actually touching her then would be out of the question. I haven't read the book in years, but I don't remember Chameleon having any complaints about the dubious sort of acceptance Bink offers; she just loves him because he's the protagonist and love means never having to say you want any accommodations whatsoever from your partner, apparently.

comment by NancyLebovitz · 2010-04-08T03:07:54.011Z · LW(p) · GW(p)

I still have some fondness for Macroscope. The gender stuff is creepy, but the depiction of an interstellar information gift culture seemed very cool at the time. I should reread it and see how it compares to how the net has developed.

comment by alyssavance · 2010-04-02T04:17:42.118Z · LW(p) · GW(p)

"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."

Attention everyone: This post is currently broken for some unknown reason. Please use the new post at http://lesswrong.com/lw/212/announcing_the_less_wrong_subreddit_2/ if you want to discuss the sub-Reddit. The address of the sub-Reddit is http://www.reddit.com/r/LessWrong

comment by Rain · 2010-04-01T15:38:34.661Z · LW(p) · GW(p)

What do you value?

Here are some alternate phrasings in an attempt to find the same or similar reasoning (it is not clear to me whether these are separate concepts):

  • What are your preferences?
  • How do you evaluate your actions as proper or improper, good or bad, right or wrong?
  • What is your moral system?
  • What is your utility function?

Here's another article asking a similar question: Post Your Utility Function. I think people did a poor job answering it back then.

Replies from: Rain, Clippy, cousin_it, magfrump, Amanojack, CannibalSmith, None
comment by Rain · 2010-04-01T16:48:53.805Z · LW(p) · GW(p)

I value empathy. Unfortunately, it's a highly packed word in the way I use it.

Attempting a definition, I'd say it involves creating the most accurate mental models of what people want, including oneself, and trying to satisfy those wants. This makes it a recursive and recursively self-improving model (I think), since one thing I want is to know what else I, and others, want. To satisfy that want, I have to constantly get better at want-knowing.

The best way to determine and to satisfy these preferences appears to be through the use of rationality and future prediction, creating maps of minds and chains of causality, so I place high value on those skills. Without the ability to predict the future or map out minds, "what people want" becomes far too close to wireheading or pure selfishness.

Empathy, to me, involves trying to figure out what the person would truly want, given as much understanding and knowledge of the consequences as possible, contrasting with what they say they want.

comment by Clippy · 2010-04-01T17:08:15.271Z · LW(p) · GW(p)

Take a wild, wild guess.

No rush -- I'll wait.

Replies from: Rain
comment by Rain · 2010-04-01T17:39:53.455Z · LW(p) · GW(p)

I would guess "paperclips and things which are paperclippy", but that still leaves many open questions.

Is 100 paperclips which last for 100 years better than 1 paperclip which lasts for 100,000 years?

How about one huge paperclip the size of a planet? Is that better or worse than a planetary mass turned into millimeter sized paperclips?

Or maybe you could make huge paperclippy-shapes out of smaller paperclips: using paperclip-shaped molecules to form tiny paperclips which you use to make even bigger paperclips. But again, how long should it last? Would you create the most stable paperclips possible, or the most paperclippy paperclips possible?

And how much effort would you put into predicting and simplifying the future (modeling, basic research, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips? You could spend your entire existence in the quest for the definition to ultimate paperclippiness...

Replies from: Clippy
comment by Clippy · 2010-04-01T18:09:09.002Z · LW(p) · GW(p)

Well, User:Rain, that's about the story of my existence right there. What kinds of paperclips are the right ones? What tradeoffs should I make?

However, regarding the specific matters you bring up, they are mostly irrelevant. Yes, there could be some conceivable situation in which I have to trade off paperclips now against paperclips later. But the way it usually works is that once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever. Also, it's obviously the number of paperclips that matters, and the constraint on bizarre paperclips is obviously that they have to be able to (counterfactually) hold sheets of paper together.

If you want to get past this abstract philosophizing and on to some concrete problems, it would be better to talk about the dilemma that User:h-H posed to me, in which I must consider alternate models of paperclipping that don't have the shape of standard paperclips. Here's my recent progress on thinking about the issue.

My current difficulty is extrapolating my values to cover unexpected situations like this, starting from the simplest algorithm I can find which generates my current preference. The problem is that I can't think of any simple algorithm that would favor larger numbers of paperclips, but not have them used for anything else, like holding paper together, or unbending them to connect objects (please pardon the repulsive imagery).

So, I'm beginning to think that at least some of the universe should be paper, but I find that quite a repugnant conclusion. It's a work in progress.

Replies from: Rain
comment by Rain · 2010-04-01T18:35:36.397Z · LW(p) · GW(p)

once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever.

How much modeling of the universe have you done, or will you do, to ensure that this is the case? How much effort will you put into creating and defending a suitable safe zone? Will you search the stars for invasive self-replicators (green goo) who may be a threat to your paperclip cache, or will you postpone active offense/defense until such time as you receive a credible threat?

It's a work in progress.

Humankind has been searching for answers for thousands of years. Do you think you'll have a better time discovering ultimate paperclippiness? Or will you be content with a lesser understanding ("I did my best")?

Replies from: Clippy
comment by Clippy · 2010-04-02T15:11:37.739Z · LW(p) · GW(p)

1) I will do all things such that they maximize expected paperclip content of the universe, trading off smaller paperclip quantities for larger ones. I can't express a more specific algorithm than that without knowing the particulars of the situation.

2) I will do much better than humans at finding the ultimate morality* of the universe because I can spend all my resources to make perfect copies of myself that share my values and update knowledge and reflective value equilibria among each other, rather than having to pursue other values like "signaling" (except when I post here, of course).

* Note I said ultimate morality, not ultimate paperclippiness. There's a strange tendency here to equate everything I like with "paperclippiness", and everything humans like with "morality" or "friendliness". Wrong: we all want morality (humans and clippys), we just disagree about what counts as moral. To use the alternate naming scheme of calling my values "paperclippy" and human values "morality" is to assume away the problem. (Certainly, I sometimes fall into the trap of calling good things, either for humans or clippys, "paperclippy" -- see here -- but this is just when I'm being imprecise with my phrasings.)

Replies from: Rain
comment by Rain · 2010-04-02T15:32:53.736Z · LW(p) · GW(p)

Thanks for answering my questions. I think I have a better understanding of your values now.

My empathy circuits are lighting up, telling me I should buy a box of paperclips and keep them safe for you. And that I should put them on display as a warning to myself.

Replies from: Clippy
comment by Clippy · 2010-04-02T15:36:07.015Z · LW(p) · GW(p)

A warning of what???

Replies from: Rain
comment by Rain · 2010-04-02T15:38:33.627Z · LW(p) · GW(p)

How morality can go awry.

I already have a framed print of Hug Bot on my wall.

comment by cousin_it · 2010-04-02T00:22:59.048Z · LW(p) · GW(p)

How do you evaluate your actions as proper or improper, good or bad, right or wrong?

I don't fully understand how I tell good from bad. A query goes in, an answer pops out in the form of a feeling. Many of the criteria probably come from my parents, from reading books, and from pleasant/unpleasant interactions with other people. I can't boil it down to any small set of rules that would answer every moral question without applying actual moral sense, and I don't believe anyone else can.

It's easier to give a diff, to specify how my moral sense differs from that of other people I know. The main difference I see is that some years ago I deeply internalized the content of Games People Play and as a result I never demonstrate to anyone that I feel bad about something - I now consider this a grossly immoral act. On the other hand, I cheat on women a lot and don't care too much about that. In other respects I see myself as morally average.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-04T09:16:51.276Z · LW(p) · GW(p)

How has not demonstrating to people that you feel bad about something worked out for you?

Replies from: cousin_it
comment by cousin_it · 2010-04-04T18:35:18.221Z · LW(p) · GW(p)

Very well. It attracts people.

comment by magfrump · 2010-04-01T17:24:16.614Z · LW(p) · GW(p)

I value my physical human needs, similarly to Maslow.

I endeavor to value larger, long-term contributions to my needs more than short term ones.

I often act as though I value others' needs approximately in relation to how well I know them, though I endeavor to value others' needs equally to my own. Specifically I do this when making a conscious value calculation rather than doing what "feels right."

I almost always fulfill my own basic needs before fulfilling the higher needs of others; I justify this by saying that I would be miserable and ineffective otherwise but it's very difficult to make my meat-brain go along with experiments to that end.

My conscious higher order values emerge from these.

comment by Amanojack · 2010-04-03T10:10:42.023Z · LW(p) · GW(p)

What do you value?

Getting pleasure and avoiding pain, just like everyone else. The question isn't, "What do I value?" but "When do I value it?" (And also, "What brings you pleasure and pain?" But do you really want to know that?)

Replies from: ata
comment by ata · 2010-04-05T08:32:29.710Z · LW(p) · GW(p)

Getting pleasure and avoiding pain, just like everyone else.

It's not as simple as that.

Happiness/suffering might be a better distinction. Some people get happiness (and even pleasure) from receiving physical sensations that can be classed as painful (people who like spicy foods, people who are into masochism, etc.). Using happiness/suffering makes it clear that we're talking about mental states, not physical sensations.

And, of course, there are some people who claim to actually value suffering, e.g. religious leaders who preach it as a means to spiritual cleanliness, though it's arguable that they're talking more about pain than suffering, if they find it spiritually gratifying. Or it might behoove us to clarify it further as anticipated happiness/suffering — "What do you value?" meaning "What do you anticipate will maximize your long-term happiness and minimize your long-term suffering?".

Further, talking about values often puts people in full-force signaling mode. It might actually expand to "What do you want people to think you anticipate will maximize your long-term happiness and minimize your long-term suffering?" So answering "What is your utility function?" (what's the common pattern behind what you actually do?) or "What is your moral system?" (what's the common pattern behind how you wish you and others would act?) might be best.

Replies from: Amanojack
comment by Amanojack · 2010-04-05T10:43:33.638Z · LW(p) · GW(p)

Happiness/unhappiness vs. pleasure/pain - whatever you want to call it. All these sorts of words carry extra baggage, but pleasure/pain seems to carry the least. In particular, if someone asked me, "How do you know you're happy right now?" I would have to say, "Because I feel good feelings now."

Re: your second paragraph, I suggest that you're driving toward my "When do you value?" question above.

As for what I want to signal, that's a more mundane question for me, but I suppose I want people to see me as empathetic and kind - people seeing me that way gives me pleasure / makes me feel happy.

comment by CannibalSmith · 2010-04-02T10:25:00.325Z · LW(p) · GW(p)

I value time spent in flow times the amount of I/O between me and the external world.

"Time spent in flow" is a technical term for having a good time.

By I/O (input/output) I mean both information and actions. Talking to people, reading books, playing multiplayer computer games, building pyramids, writing software to be used by other people are examples of high impact of me on the world and/or high impact of the world on me. On the other hand, getting stoned (or, wireheaded) and daydreaming has low interaction with the external world. Some of it is okay though because it's an experience I can talk to other people about.

comment by [deleted] · 2010-04-01T17:29:01.686Z · LW(p) · GW(p)

I value individual responsibility for one's own life. As a corollary I value private property and rationality as means to attain the former.

From this I evaluate as good anything that respects property and allows for individual choices. Anything that violates property or impedes choice as bad.

Replies from: wedrifid
comment by wedrifid · 2010-04-01T21:50:40.903Z · LW(p) · GW(p)

I value individual responsibility for one's own life. As a corollary I value private property and rationality as means to attain the former.

Are you sure that is your real reason for valuing the latter? I doubt it.

  • Private property implies responsibility for one's own life can be taken by your grandfather and those in your community who force others to let you keep his stuff.
  • Individual responsibility for one's own life, if that entails actually living, will sometimes mean choosing to take what other people claim as their own so that you may eat.
  • Private property ensures that you don't need to take individual responsibility for protecting yourself. Other people handle that for you. Want to see individual responsibility? Find a frontier and see what people there do to keep their stuff.
  • Always respecting private property and unimpeded choice guarantees that you will die. You can't stop other people from creating a superintelligence in their back yard to burn the cosmic commons. And if they can do that, well, your life is totally in their hands, not yours.

Replies from: None
comment by [deleted] · 2010-04-02T08:27:45.982Z · LW(p) · GW(p)

"Are you sure that is your real reason for valuing the latter? I doubt it."

Why do you think you know my valuations better than me? What evidence do you have?

As for your bullet points, if I eat a sandwich nobody else can. That's inevitable. Taking responsibility for my own life means producing the sandwich I intend to eat or trade something else I produced for it. If I simply grab what other people produced I shift responsibility to them.

And on the other hand if I produced a sandwich and someone else eats it, I can no longer use the sandwich as I intended. Responsibility presupposes choice because I can not take on responsibility for something I have no choice over. And property simply is the right to choose.

Replies from: wedrifid
comment by wedrifid · 2010-04-05T08:12:29.046Z · LW(p) · GW(p)

Why do you think you know my valuations better than me? What evidence do you have?

Only the benefit of the doubt.

If you actually value private property because you value individual responsibility then your core value system is based on confusion. Assuming you meant "I value personal responsibility, I value private property, these two beliefs are politically aligned and here is one way that one can work well with the other" puts your position at least relatively close to sane.

And property simply is the right to choose.

No more than Chewbacca is an Ewok. He just isn't, even if they both happen to be creatures from Star Wars.

Replies from: None
comment by [deleted] · 2010-04-05T10:37:13.232Z · LW(p) · GW(p)

And property simply is the right to choose.

No more than Chewbacca is an Ewok. He just isn't, even if they both happen to be creatures from Star Wars.

So, there's the problem. I was using property as "having the right to choose what is done with something". I looked it up in a dictionary but that wasn't helpful. So what is your definition of property?

Edit:

Wikipedia seems to be on my side: "Depending on the nature of the property, an owner of property has the right to consume, sell, rent, mortgage, transfer, exchange or destroy their property, and/or to exclude others from doing these things." I think this boils down to "the right to choose what is done with it".

On a side note it seems that "personal property" is closer to what I meant than "private property".

Replies from: wedrifid
comment by wedrifid · 2010-04-05T11:49:28.357Z · LW(p) · GW(p)

Laws pertaining to personal property give me the reasonable expectation that someone else will take, indeed, insist on taking responsibility for punishing anyone who chooses to take my stuff. If I take too much responsibility for keeping my personal property I expect to be arrested. I have handed over responsibility in this instance so that I can be assured of my personal property. This is an acceptable (and necessary) compromise. Personal responsibility is at odds with reliance on social norms and laws enforced by others.

I am all in favor of personal property, individual choice and personal responsibility. They often come closely aligned, packaged together in a political ideology. Yet they are sometimes in conflict and one absolutely does not imply the other.

comment by Amanojack · 2010-04-05T22:50:20.474Z · LW(p) · GW(p)

I've become a connoisseur of hard paradoxes and riddles, because I've found that resolving them always teaches me something new about rationalism. Here's the toughest beast I've yet encountered, not as an exercise for solving but as an illustration of just how much brutal trickiness can be hidden in a simple-looking situation, especially when semantics, human knowledge, and time structure are at play (which happens to be the case with many common LW discussions).

A teacher announces that there will be a surprise test next week. A student objects that this is impossible: "The class meets on Monday, Wednesday, and Friday. If the test is given on Friday, then on Thursday I would be able to predict that the test is on Friday. It would not be a surprise. Can the test be given on Wednesday? No, because on Tuesday I would know that the test will not be on Friday (thanks to the previous reasoning) and know that the test was not on Monday (thanks to memory). Therefore, on Tuesday I could foresee that the test will be on Wednesday. A test on Wednesday would not be a surprise. Could the surprise test be on Monday? On Sunday, the previous two eliminations would be available to me. Consequently, I would know that the test must be on Monday. So a Monday test would also fail to be a surprise. Therefore, it is impossible for there to be a surprise test."

Can the teacher fulfill his announcement?

Extensive treatment and relation to other epistemic paradoxes here.

Replies from: thomblake, Rain, wedrifid
comment by thomblake · 2010-04-08T16:23:14.505Z · LW(p) · GW(p)

Let's not forget that the clever student will be indeed very surprised by a test on any day, since he thinks he's proven that he won't be surprised by tests on those days. It seems he made an error in formalizing 'surprise'.

(imagine how surprised he'll be if the test is on Friday!)

Replies from: Amanojack
comment by Amanojack · 2010-04-08T17:32:51.650Z · LW(p) · GW(p)

Since the student believes a surprise test is impossible, it seems this wouldn't surprise him.

comment by Rain · 2010-04-08T16:18:15.380Z · LW(p) · GW(p)

Why not give a test on Monday, and then give another test later that day? I bet they would be surprised by a second test on the same day.

Replies from: Amanojack
comment by Amanojack · 2010-04-08T17:24:00.692Z · LW(p) · GW(p)

True, there's nothing saying there won't be two tests.

Rather than solve this, I was hoping people'd take a look at the linked explanation. When phrased more carefully, it becomes a whole bunch of nested paradoxes, the resolution of which contains valuable lessons on how words can trick people. It covers some LW material along the way, such as Moore's Paradox.

Replies from: Rain
comment by Rain · 2010-04-08T19:48:28.645Z · LW(p) · GW(p)

But if there's a solution, it's not really a paradox.

And I don't like word arguments.

Replies from: Sniffnoy, Amanojack
comment by Sniffnoy · 2010-04-08T20:21:58.715Z · LW(p) · GW(p)

Ugh, yes. Why are we speaking of "paradoxes" at all? Anything that actually occurs is not a paradox. If something appears to be a paradox, either you have reasoned incorrectly, you've made untenable assumptions, or you've just been using fuzzy thinking. This is a problem; presumably it has some solution. Describing it as a "paradox" and asking people not to solve it is not helpful. You don't understand it better that way, you understand it by solving it. The only thing gained that way is an understanding of why it appears to be a paradox, which is useful as a demonstration of the dangers of fuzzy thinking, but also kind of obvious.

Maybe I'm being overly strict about the word "paradox" here, but I really just don't see the term as at all helpful. If you're using it in the strict sense, paradoxes shouldn't occur except as an indicator that you've done something wrong (in which case you probably wouldn't use the word "paradox" to describe it in the first place). If you're using it in the loose sense, it's misleading and unhelpful (I prefer to explicitly say "apparent paradox").

Replies from: Amanojack
comment by Amanojack · 2010-04-08T21:03:17.603Z · LW(p) · GW(p)

We're all saying the exact same thing here: words are not to be treated as infallible vehicles for communicating concepts. That was the point of my original post, the point of Rain's reply, and yours as well. (You're completely right about the word "paradox.")

Also, I'm not saying not to try solving it, just that I've no intention of refuting all proposed solutions. I didn't want my reply to be construed as a debate about the solution, because that would never end.

comment by Amanojack · 2010-04-08T20:28:15.112Z · LW(p) · GW(p)

Words frequently confuse people into believing something they wouldn't otherwise. You may be correct that this confusion can always be addressed indirectly, but in any case it needs to be addressed. Addressing semantic confusion requires identifying it, and I found this riddle (actually the whole article) a great neutral exercise for that purpose.

EDIT: Looking back, I should probably just have posted the riddle and kept quiet. Updated for next time.

comment by wedrifid · 2010-04-07T22:15:32.424Z · LW(p) · GW(p)

not as an exercise for solving

...and yet...

Can the teacher fulfill his announcement?

Probably.

p(teacher provides a surprise test) = 1 - x^3
Where:
x = 'improbability required for an event to be surprising'

If a 50% chance of having a test that day would leave a student surprised, the teacher can be 87.5% confident in being able to fulfill his assertion.

However, if the teacher was a causal decision agent then he would not be able to provide a surprise test without making the randomization process public (or a similar precommitment).
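
A minimal Monte Carlo sketch (my own reading of the strategy, not necessarily wedrifid's exact model) that reproduces the 1 - x^3 figure, and also the 1 - x^2 figure that comes up downthread when the test is guaranteed to occur:

```python
import random

def surprise_prob(x, force_test=False, trials=200_000):
    """Estimate p(the teacher delivers a *surprise* test).

    Assumed strategy: on each of the three class days the teacher gives
    the test with probability 1 - x, so the student's credence that
    morning never exceeds the surprise threshold.  With force_test=True,
    a test that hasn't happened by Friday is given anyway, but by then
    the student can predict it, so a Friday test never counts as a
    surprise.
    """
    surprises = 0
    for _ in range(trials):
        for day in range(3):
            friday = (day == 2)
            happens = (random.random() < 1 - x) or (force_test and friday)
            if happens:
                if not (force_test and friday):
                    surprises += 1  # test given while the student was still uncertain
                break
    return surprises / trials

print(surprise_prob(0.5))                   # ~0.875, i.e. 1 - x^3
print(surprise_prob(0.5, force_test=True))  # ~0.75,  i.e. 1 - x^2
```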

Replies from: Amanojack, RobinZ
comment by Amanojack · 2010-04-08T00:23:28.040Z · LW(p) · GW(p)

The problem with choosing a day at random is, what if it turns out to be Friday? Friday would not be a surprise, since the test will be either Monday, Wednesday or Friday, and so by Thursday the students would know by process of elimination that it had to be Friday.

comment by RobinZ · 2010-04-07T22:30:12.690Z · LW(p) · GW(p)

How do you get that result while requiring that the test occur next week? It is that assumption that drives the 'paradox'.

Replies from: wedrifid
comment by wedrifid · 2010-04-07T22:51:19.858Z · LW(p) · GW(p)

The answer to the question 'Can the teacher fulfill his announcement?' is 'Probably'. The answer to the question 'Is there a 100% chance that the teacher fulfills his announcement?' is 'No'.

Replies from: RobinZ
comment by RobinZ · 2010-04-07T23:44:20.999Z · LW(p) · GW(p)

You misunderstand me - I maintain that an obvious unstated condition in the announcement is that there will be a test next week. Under this condition, the student will be surprised by a Monday or Wednesday test but not a Friday test, and therefore

p(teacher provides a surprise test) = 1 - x^2

and, if I guess your algorithm correctly,

p(teacher provides a surprise lack of test) = x^2 * (1 - x)

[edit: algebra corrected]

Replies from: wedrifid
comment by wedrifid · 2010-04-08T01:24:49.714Z · LW(p) · GW(p)

I maintain that an obvious unstated condition in the announcement is that there will be a test next week.

The condition is that there will be a surprise test. If the teacher were to split 'surprise test' into two and consider max(p(surprise | p(test) == 100)) then yes, he would find he is somewhat less likely to be making a correct claim.

You misunderstand me

I maintain my previous statement (and math):

The answer to the question 'Can the teacher fulfill his announcement?' is 'Probably'. The answer to the question 'Is there a 100% chance that the teacher fulfills his announcement?' is 'No'.

Something that irritates me with regards to philosophy as it is often practiced is that there is an emphasis on maintaining awe at how deep and counterintuitive a question is, rather than extracting possible understanding from it, dissolving the confusion, and moving on.

Yes, this question demonstrates how absolute certainty in one thing can preclude uncertainty in some others. Wow. It also demonstrates that one can make self defeating prophecies. Kinda-interesting. But don't let that stop you from giving the best answer to the question. Given that the teacher has made the prediction and given that he is trying to fulfill his announcement there is a distinct probability that he will be successful. Quit saying 'wow', do the math and choose which odds you'll bet on!

Replies from: RobinZ
comment by RobinZ · 2010-04-08T02:51:11.130Z · LW(p) · GW(p)

I never intended to dispute that

The answer to the question 'Can the teacher fulfill his announcement?' is 'Probably'. The answer to the question 'Is there a 100% chance that the teacher fulfills his announcement?' is 'No'.

only the specific figure 87.5%.

It's a minor point. Your logic is good.

comment by Mass_Driver · 2010-04-04T06:26:18.078Z · LW(p) · GW(p)

Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.

I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my entire day around motivating an early bedtime, that often works, but at an unacceptably high cost; the point of going to bed early is to have more surplus time/energy, not to spend all of my time/energy on going to bed.

I am happy to test various hypotheses, but don't have a good sense of which hypotheses to promote or how to generate plausible hypotheses in this context.

Replies from: Nick_Tarleton, Amanojack, RobinZ
comment by Nick_Tarleton · 2010-04-04T18:26:51.672Z · LW(p) · GW(p)

Melatonin. Also, getting my housemates to harass me if I don't go to bed.

Replies from: gwern
comment by gwern · 2010-04-07T21:30:34.191Z · LW(p) · GW(p)

Mass_Driver's comment is kind of funny to me, since I had addressed exactly his issue at length in my article.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-08T15:25:39.002Z · LW(p) · GW(p)

Which, I couldn't help but notice, you have thoughtfully linked to in your comment. I'm new here; I haven't found that article yet.

Replies from: gwern
comment by gwern · 2010-04-08T16:38:49.071Z · LW(p) · GW(p)

If you're not being sarcastic, you're welcome.

If you're being sarcastic, my article is linked, in Nick_Tarleton's very first sentence; it would be odd for me to simply say 'my article' unless some referent had been defined in the previous two comments, and there is only one hyperlink in those two comments.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-08T19:14:46.702Z · LW(p) · GW(p)

Gwern, I apologize for the sarcasm; it wasn't called for. As I said, I'm new here, and I guess I'm not clicking "show more above" as much as I should.

However, a link still would have been helpful. As someone who had never read your article, I had no way of knowing that a link to "Melatonin" contained an extensive discussion about willpower and procrastination. It looked to me like a biological solution, i.e., a solution that was ignoring my real concerns, so I ignored it.

Having now read your article, I agree that taking a drug that predictably made you very tired in about half an hour could be one good option for fighting the urge to stay up for no reason, and I also think that the health risks of taking melatonin long-term -- especially at times when I'm already tired -- could be significant. I may give it a try if other strategies fail.

Replies from: gwern
comment by gwern · 2010-04-08T21:38:38.088Z · LW(p) · GW(p)

I also think that the health risks of taking melatonin long-term

I strongly disagree, but I also dislike plowing through as enormous a literature as that on melatonin and effectively conducting a meta-study, since Wikipedia already covers the topic and I wouldn't get a top-level article out of such an effort, just some edits for the article (and old articles get few hits, comments, or votes, if my comments are anything to go by).

comment by Amanojack · 2010-04-04T17:53:31.699Z · LW(p) · GW(p)

I've been struggling with this for years, and the only thing I've found that works when nothing else does is hard exercise. The other two things that I've found help the most:

  • Let the sun hit your eyelids first thing in the morning (to halt melatonin production)
  • F.lux, a program that auto-adjusts your monitor's color temperature after sunset (and keep your room lights low at night; otherwise melatonin production will be delayed)

EDIT: Apparently keeping your room lights at a low color temperature (incandescent/halogen instead of fluorescent) is better than keeping them at low intensity:

"...we surmise that the effect of color temperature is greater than that of illuminance in an ordinary residential bedroom or similar environment where a lowering of physiological activity is desirable, and we therefore find the use of low color temperature illumination more important than the reduction of illuminance. Subjective drowsiness results also indicate that reduction of illuminance without reduction of color temperature should be avoided." —Noguchi and Sakaguchi, 1999 (note that these are commercial researchers at Matsushita, which makes low-color-temperature fluorescents)

Replies from: Mass_Driver, khafra, Nick_Tarleton
comment by Mass_Driver · 2010-04-05T13:44:16.146Z · LW(p) · GW(p)

That all sounds awfully biological -- are you sure fixing monitor light levels is a solution for akrasia?

Replies from: Amanojack
comment by Amanojack · 2010-04-05T20:04:09.637Z · LW(p) · GW(p)

No, the items I've given will only make you more sleepy at night than you would have been. If that's not enough, I agree it's akrasia of a sort, also known as having a super-high time preference.

comment by khafra · 2010-04-06T17:15:09.537Z · LW(p) · GW(p)

Does that imply that HIDs are safer for long drives at night than halogen headlights?

comment by Nick_Tarleton · 2010-04-04T23:53:25.205Z · LW(p) · GW(p)

If you use Mac OS, Nocturne lets you darken the display, lower its color temperature, etc. manually/more flexibly than F.lux.

Replies from: gwern, andreas
comment by gwern · 2010-04-07T21:28:33.356Z · LW(p) · GW(p)

For Linux, there's Redshift. I like it because it's kinder on my eyes, though it doesn't do anything for akrasia.

comment by andreas · 2010-04-05T00:19:28.573Z · LW(p) · GW(p)

There is also Shades, which lets you set a tint color and which provides a slider so you can move gradually between standard and tinted mode.

comment by RobinZ · 2010-04-04T13:31:25.508Z · LW(p) · GW(p)

What do you do instead of going to bed? I notice myself spending time on the Internet.

Replies from: MatthewB
comment by MatthewB · 2010-04-05T03:21:55.020Z · LW(p) · GW(p)

Either that or painting (The latter is harder to do because the cats tend to want to help me paint, yet don't get the necessity of oppose-able thumbs ... umm...Opposeable? Opposable??? anyway....)

Since I have had sleep disorders since I was 14, I've got lots of practice at not sleeping (pity there was no internet then)... So, I either read, draw, paint, sculpt, or harass people on the opposite side of the earth who are all wide awake.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T11:06:06.537Z · LW(p) · GW(p)

Ah, that puts the causal chain opposite mine - I stay up because I'm doing something, not vice-versa.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-05T13:48:53.954Z · LW(p) · GW(p)

I used to be more like MatthewB, but now I'm more like RobinZ. I tend to stay up browsing the Internet, reading sci-fi, or designing board games.

The roommate idea has worked in the past, and I do use it for 'emergencies.' My roommates don't really take akrasia seriously, though; they figure if I want to stay up all night and regret it, then that's just fine.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-06T14:40:42.322Z · LW(p) · GW(p)

Random ideas:

Set an alarm clock or two for the time you want to go to bed, so you don't "lose track of the time."

Find some program that automatically turns off your Internet access at a certain time each night.

comment by Peter_Twieg · 2010-04-01T17:33:15.476Z · LW(p) · GW(p)

I recently got into some arguments with foodies I know on the merits (or lack thereof) of organic / local / free-range / etc. food, and this is a topic where I find it very difficult to find sources of information that I trust as reflective of some sort of expert consensus (insofar as one can be said to exist.) Does anyone have any recommendations for books or articles on nutrition/health that holds up under critical scrutiny? I trust a lot of you as filters on these issues.

Replies from: Yvain, taw, RobinZ
comment by Scott Alexander (Yvain) · 2010-04-11T13:47:55.052Z · LW(p) · GW(p)

There are lots of studies on the issue, and as usual most of them are bad and disagree with each other.

I tend to trust the one by the UK Food Standards Agency because it's big and government-funded. Mayo Clinic agrees. I think there are a few studies that show organic foods do have lower pesticide levels than normal, but nothing showing that it actually leads to health benefits. Pesticides can cause some health problems in farmers, but they're receiving a bajillion times the dose of someone who just eats the occasional carrot. And some "organic pesticides" are just as bad as any synthetic ones. There's also a higher risk of getting bacterial infections from organic food.

Tastewise, a lot of organics people cite some studies showing that organic apples and other fruit taste better than conventional - I can't find the originals of these and there are equally questionable studies that say the opposite. Organic vegetables taste somewhere between the same and worse, even by organic peoples' admission. There's a pretty believable study showing conventional chicken tastes better than organic, and a more pop-sci study claiming the same thing about almost everything. I've seen some evidence that locally grown produce tastes better than imported, but that's a different issue than organic vs. non-organic and you have to make sure people aren't conflating them.

They do produce less environmental damage per unit land, but they produce much less food per unit land and so require more land to be devoted to agriculture. How exactly that works out in the end is complex economics that I can't navigate.

My current belief is that organics have a few more nutrients here and there but not enough to matter, are probably less healthy overall when you consider infection risk, and taste is anywhere from no difference to worse except maybe on a few limited fruits.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-11T14:04:49.588Z · LW(p) · GW(p)

Of course, "organic" covers a wide range. I tend not to be blown away by the organic veggies and fruit at Whole Foods. I've had extraordinarily good produce from my local (south Philadelphia) farmer's markets.

comment by taw · 2010-04-02T13:21:01.427Z · LW(p) · GW(p)

The famous meta-analyses which have shown that vitamin supplementation is essentially useless, or possibly even harmful, totally destroy the basic argument ("oh look, more vitamins!" - not that it's usually even true) that organic is good for your health.

It might still be tastier. Or not.

Replies from: aleksiL, NancyLebovitz
comment by aleksiL · 2010-04-11T06:34:38.900Z · LW(p) · GW(p)

Do you mean these meta-analyses?

Replies from: taw
comment by taw · 2010-04-11T18:36:53.172Z · LW(p) · GW(p)

Yes. Even if PhilGoetz is correct that harmfulness was an artifact, there's still essentially zero evidence for benefits of eating more vitamins than RDA.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-11T18:50:53.384Z · LW(p) · GW(p)

I thought Vitamin D was an exception.

comment by NancyLebovitz · 2010-04-11T11:25:44.066Z · LW(p) · GW(p)

My experience (admittedly, not double-blinded) is that the food from the farmer's markets tends to be a lot tastier.

Three possibilities: confirmation bias at my end, the theory that local-organic-free range creates better food (and better food tastes better) is correct, and selection pressure-- the only way they can get away with those prices is to sell food which tastes really good.

Replies from: mattnewport, jimrandomh, taw, RobinZ
comment by mattnewport · 2010-04-11T19:45:49.786Z · LW(p) · GW(p)

You should be extremely skeptical of any taste comparisons that are not blinded. One recent story carried out a blind taste comparison of Walmart and Whole Foods produce and found Walmart was preferred for some items. If the taste test had not been conducted blind you would likely have seen very different results.

This comparison doesn't directly bear on your theory since both the Walmart and Whole Foods produce was local and organic in most cases but perceptions of the source are very significant in taste judgements.

comment by jimrandomh · 2010-04-11T14:23:09.391Z · LW(p) · GW(p)

Alternative theory: food from local sources (such as farmer's markets) tastes better because it's fresher, because it's transported less and warehoused fewer times. This would imply that production methods, such as being organic or free range, have little or nothing to do with it. This is also pretty easy to test, if you have some visibility into supply chains.

Replies from: taw, NancyLebovitz
comment by taw · 2010-04-11T18:48:23.103Z · LW(p) · GW(p)

In the UK all supermarkets offer both "normal" and "organic" food. Isn't that true wherever you live? You can use this to check whether it makes any difference in taste, as both are most likely transported and stored the same.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-11T18:52:49.758Z · LW(p) · GW(p)

I want to test a different hypothesis-- whether extreme freshness is necessary for excellent flavor.

Replies from: taw
comment by taw · 2010-04-11T19:22:01.617Z · LW(p) · GW(p)

That's easy. If you have something very tasty, just store it in a fridge for an extra day, and try it again. I remember some experiments showing that meat got somewhat tastier around its labeled expiration date, which is the opposite result.

comment by NancyLebovitz · 2010-04-11T14:48:51.819Z · LW(p) · GW(p)

Plausible, but hard to test-- how would I get conventionally raised food which is as fresh as what I can get in farmer's markets?

I'd say that the frozen meat is also tastier, and it's (I hope) no fresher than what I can get at Trader Joe's.

comment by taw · 2010-04-11T18:46:58.354Z · LW(p) · GW(p)

My extensive but entirely unblinded testing suggests that the cheapest brands of supermarket food usually taste far worse than more expensive brands, and quite a number of times fell below my edibility threshold.

My theory is this: it's cheaper to produce bad-tasting food than good-tasting food - and then you can use market segmentation - poor people who cannot afford more expensive food will buy this, while the majority of people will buy better-tasting, more expensive food. Two price points earn you more money, and since better-tasting food is more expensive to make, competition cannot undercut you.

One thing I cannot explain is that this difference applies only to some kinds of food - cheap meat is really vile, but, for example, cheap eggs taste the same as expensive organic eggs, tea price has little to do with its taste, not to mention things like salt and sugar, which simply have to taste the same by the laws of chemistry.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-11T20:29:32.167Z · LW(p) · GW(p)

You can buy fancy salts (mined from different places-- there's a lot of pink Tibetan salt around) these days. I'm not interested enough in salt to explore them, so I have no opinion about the taste.

I've found that the cheap eggs ($1/dozen) leave me feeling a little off if I eat them a couple of days in a row, but organic free range ($3.50 or more/dozen) don't.

comment by RobinZ · 2010-04-11T13:37:57.510Z · LW(p) · GW(p)

Your second possibility deserves elaboration - I believe a fair restatement is: factory farming methods are less responsive than local organic free-range methods to taste and quality (i.e. cannot control for it as effectively).

comment by RobinZ · 2010-04-01T18:34:04.428Z · LW(p) · GW(p)

Is the methodology of the Amanda Knox test useful in this case? (I didn't attempt the test or even read the posts, but it sounds like a similarly politicized problem.)

Replies from: komponisto, Jack
comment by komponisto · 2010-04-02T07:35:23.964Z · LW(p) · GW(p)

An Amanda-Knox-type situation would be one where the priors are extreme and there are obvious biases and probability-theoretic errors causing people to overestimate the strength of the evidence.

I think one would have to know a fair amount of biochemistry in order for food controversies to seem this way.

Although one might potentially be able to apply the heuristic "look at which side has the more generally impressive advocates" -- which works spectacularly well in the Knox case -- to an issue like this.

Replies from: Jack
comment by Jack · 2010-04-02T14:06:59.055Z · LW(p) · GW(p)

I thought Robin meant: Let the Less Wrong community sort through the information and see whether a consensus arises on one side or the other. In this case no one has a "right answer" in mind, but we got a pretty conclusive, high-confidence answer in the Knox case. Maybe we can do that here - we'd just need to put the time in (and have a well-defined question). Yes, there aren't many biochemists among us. But we all seem remarkably comfortable reading through studies and evaluating scientific findings on grounds of statistics, source credibility, etc. Also, my uninformed guess is that a lot of the science is just going to consist of statistical correlations without a lot of deep biochemistry.

Replies from: RobinZ
comment by RobinZ · 2010-04-02T14:19:22.993Z · LW(p) · GW(p)

I thought Robin meant: Let the Less Wrong community sort through the information and see whether a consensus arises on one side or the other.

Oddly, no - although I think that would be a good exercise to carry out at intervals, I was imagining the theoretical solo game that each commenter played before bringing evidence to the community. Which has the difficulties that komponisto mentioned, of there not being prominent pro- and con- communities available, among other things.

Replies from: Jack
comment by Jack · 2010-04-02T15:16:27.068Z · LW(p) · GW(p)

I'm thinking:

  1. Define the claim/s precisely.
  2. Come up with a short list of pro and con sources
  3. Individual stage: anyone who wants to participate goes through the sources and does some additional research as they feel necessary.
  4. Each individual posts their own probability estimates for the claims.
  5. Communal stage: Disagreements are ironed out, sources shared, arguing and beliefs revised.
  6. Reflection: What, if anything, have we agreed on?

It would be a lot harder than the Knox case, but it is probably doable.

Replies from: RobinZ
comment by RobinZ · 2010-04-02T15:54:35.022Z · LW(p) · GW(p)

Yes, that's it. I don't think enough time has passed to get around to another such exercise, however.

comment by Jack · 2010-04-02T13:58:06.946Z · LW(p) · GW(p)

It takes about an hour to familiarize yourself with all of the relevant information in the Knox case; I imagine it would take a lot longer in this case. It might still work, though, if enough people were willing to invest the time, especially since most people don't already have rigid, well-formed opinions on the issue.

comment by Alex Flint (alexflint) · 2010-04-13T23:00:38.666Z · LW(p) · GW(p)

Having read the quantum physics sequence I am interested in simulating particles at the level of quantum mechanics (for my own experimentation and education). While the sequence didn't go into much technical detail, it seems that the state of a quantum system comprises an amplitude distribution in configuration space for each type of particle, and that the dynamics of the system are governed by the Schrödinger equation. The usual way to simulate something like this would be to approximate the particle fields as piecewise linear and update iteratively according to the Schrödinger equation (a toy one-dimensional sketch follows the questions below). Some questions:

  • Does anyone have a good source for the technical background I will need to implement such a simulation? Specifically, more technical details of the Schrödinger equation (the Wikipedia article is unhelpful).

  • I imagine this will become intractable quite quickly as I try to simulate more complex systems with more particles. How quickly, though? Could I simulate, e.g., the interaction of two H_2 ions in a reasonable time (say, no more than a few hours)?

  • Surely others have tried this. Any links/references would be much appreciated.
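
For concreteness, here is a minimal sketch of the kind of simulation asked about above, for a single particle in one dimension, using the split-step Fourier method (easier to keep stable than a naive piecewise-linear update). Units are chosen so that hbar = m = 1, and the grid, potential, and initial wave packet are arbitrary illustrative choices, not anything from the original question. Real multi-particle problems live in a configuration space whose dimension grows with the number of particles, which is the main reason they become intractable so quickly.

```python
import numpy as np

# Minimal 1-D time-dependent Schrödinger solver (split-step Fourier method).
N, L = 1024, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # angular wavenumbers, in FFT order
dt = 0.01

V = 0.5 * x**2                                       # harmonic potential (illustrative stand-in)
psi = np.exp(-(x + 5.0)**2) * np.exp(1j * 2.0 * x)   # Gaussian packet at x = -5 with momentum ~2
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize

half_V = np.exp(-0.5j * V * dt)      # half step under the potential term
kinetic = np.exp(-0.5j * k**2 * dt)  # full step under the kinetic term, applied in momentum space

for _ in range(1000):
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi

print("norm after evolution:", np.sum(np.abs(psi)**2) * dx)  # should stay ~1.0
```
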

comment by RobinZ · 2010-04-07T01:08:49.057Z · LW(p) · GW(p)

Arithmetic, Population, and Energy by Dr. Albert A. Bartlett, Youtube playlist. Part One. 8 parts, ~75 minutes.

Relatively trivial, but eloquent: Dr. Bartlett describes some properties of exponential functions and their policy implications when there are ultimate limiting factors. Most obvious policy implication: population growth will be disastrous unless halted.

Replies from: Strange7
comment by Strange7 · 2010-04-07T01:18:30.786Z · LW(p) · GW(p)

People have been worrying about that one since Malthus. Turns out, production capacity can increase exponentially too, and when any given child has a high enough chance of survival, the strategy shifts from spamming lots of low-investment kids (for farm labor) to having one or two children and lavishing resources on them, which is why birthrates in the developed world are dropping below replacement.

Replies from: wnoise, RobinZ
comment by wnoise · 2010-04-07T02:37:32.727Z · LW(p) · GW(p)

Turns out, production capacity can increase exponentially too,

Yes, for a while. The simplest factor driving this is exponentially more laborers. Then there's better technology of all sorts. Still, after a certain point we start hitting hard limits.

when any given child has a high enough chance of survival, the strategy shifts from spamming lots of low-investment kids (for farm labor) to having one or two children and lavishing resources on them, which is why birthrates in the developed world are dropping below replacement.

(a) Is this guaranteed to happen, a human universal, or is it a contingent feature of our culture?
(b) Even if it is guaranteed to happen, will the race be won by an increasing population hitting hard limits, or by populations lifting themselves out of poverty?

Replies from: gwern, Mass_Driver
comment by gwern · 2010-04-07T21:43:43.848Z · LW(p) · GW(p)

(a) Is this guaranteed to happen, a human universal or is it a contingent feature of our culture?

I believe it's a quite general phenomenon - Japan did it, Russia did it, USA did it, all of Europe did it, etc. It looks like a pretty solid rich=slower-growth phenomenon: http://en.wikipedia.org/wiki/File:Fertility_rate_world_map.PNG

And if there were a rich country which continued to grow, threatening neighbors, there's always nukes & war.

comment by Mass_Driver · 2010-04-07T04:02:22.818Z · LW(p) · GW(p)

I think "hard limits" is the wrong way to frame the problem. The only limits that appear truly unbeatable to me right now are the amounts of mass-energy and negentropy in our supergalactic neighborhood, and even those limits may be a function of the map, rather than the territory.

Other "limits" are really just inflection points in our budget curve; if we use too much of resource X, we may have to substitute a somewhat more costly resource Y, but there's no reason to think that this will bring about doom.

For example, in our lifetime, the population of Earth may expand to the point where there is simply insufficient naturally occurring freshwater on Earth to support all humans at a decent standard of living. So, we'll have to substitute desalinized oceanwater, which will be expensive -- but not nearly as expensive as dying of drought.

Likewise, there are only so many naturally occurring oxygen atoms in our solar system, so if we keep breathing oxygen, then at a certain population level we'll have to either expand beyond the Solar System or start producing oxygen through artificial fusion, which may cost more energy than it generates, and thus be expensive. But, you know, it beats choking or fighting wars over a scarce resource.

There are all kinds of serious economic problems that might cripple us over the next few centuries, but Malthusian doom isn't one of them.

Replies from: wnoise
comment by wnoise · 2010-04-07T04:58:27.418Z · LW(p) · GW(p)

It's true that many things have substitutes. All these limits are soft in the sense that we can do something else, and the magic of the market will select the most efficient alternative. At some point, however, that "something else" may be having no kids rather than building desalinization plants, cutting off the exponential growth.

(Phosphorus will be a problem before oxygen. Technically, we can make more phosphorus, and I suppose the cost could go down with new techniques other than "run an atom smasher and sort what comes out".)

But there really are hard limits. The volume we can colonize in a given time goes up as (ct)^3. This is really, really, really fast. Nonetheless, the required volume for an exponentially expanding population goes as e^(lambda t), and will eventually exceed this. (I handwave away relativistic time-dilation -- it doesn't truly change anything.)
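
A toy numerical check of that claim; the growth rate and the units are arbitrary, chosen only to show that the exponential term overtakes the cubic one no matter where the crossover happens to land:

```python
import math

# Compare log10 of the light-cone volume, ~(c*t)^3, with log10 of exponential
# demand, ~exp(lambda*t), in arbitrary units. Working with logs avoids overflow.
lam = 0.01  # assumed population growth rate per year (purely illustrative)
for t in (100, 1_000, 10_000, 100_000):
    log10_volume = 3 * math.log10(t)
    log10_demand = lam * t / math.log(10)
    leader = "volume ahead" if log10_volume > log10_demand else "exponential ahead"
    print(f"t = {t:>7}   volume ~ 1e{log10_volume:.1f}   demand ~ 1e{log10_demand:.1f}   ({leader})")
```
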

Replies from: Mass_Driver, Strange7
comment by Mass_Driver · 2010-04-07T05:28:19.493Z · LW(p) · GW(p)

the magic of the market will select the most efficient alternative. At some point this may be no kids

Or, more precisely, fewer kids. I don't insist that we're guaranteed to switch to a lower birth rate as a species, but if we do, that's hardly an outcome to be feared.

Phosphorus will be a problem before oxygen.

Fascinating. That sounds right; do you know where in the Solar System we could try to 'mine' it?

The volume we can colonize in a given time goes up as (ct)^3.

Not until we start getting close to relativistic speeds. I could care less about the time-dilation, but for the next few centuries, our maximum cruising speed will increase with each new generation. If we can travel at 0.01 c, our kids will travel at 0.03 c, and so on for a while. Since our cruising velocity V is increasing with t, the effective volume we colonize per generation increases at more than (ct)^3. We should also expect to sustainably extract more resources per unit volume as time goes on, due to increasing technology. Finally, the required resources per person are not constant; they decrease as population increases because of economies of scale, economies of scope, and progress along engineering learning curves. All these factors mean that it is far too early to confidently predict that our rate of resource requirements will increase faster than our ability to obtain resources, even given the somewhat unlikely assumption that exponential population growth will continue indefinitely. By the time we really start bumping up against the kind of physical laws that could cause Malthusian doom, we will most likely either (a) have discovered new physical laws, or (b) have changed so much as to be essentially non-human, such that any progress human philosophers make today toward coping with the Malthusian problem will seem strange and inapposite.

comment by Strange7 · 2010-04-07T18:32:45.807Z · LW(p) · GW(p)

Actually, if we figure out how to stabilize traversable wormholes, the colonizable volume goes up faster than (ct)^3. I'm not sure exactly how much faster, but the idea is, you send one mouth of the wormhole rocketing off at relativistic speed, and due to time dilation, the home end of the gate opens up allowing travel to the destination in less than half the time it would take a lightspeed signal to travel to the destination and back.

Replies from: bogdanb
comment by bogdanb · 2010-04-08T08:48:47.037Z · LW(p) · GW(p)

Assuming zero space inflation, the “exit” mouth of the wormhole can’t travel faster than c with respect to the entry. So for expansion purposes (where you don’t need (can’t, actually, due to lack of space) to go back), you’re limited to c (radial) expansion. Which is the same as without wormholes.

In other words, the volume covered by wormholes expands as (c×t)³ relative to when you start sending wormholes. The number of people is exponential relative to when you start reproducing. Even if you start sending wormholes a long time before you start reproducing exponentially, you’re still going to fill the wormhole-covered volume.

(The fault in your statement is that you can go in “less” than half the time only for travel within the volume already covered by wormholes. For arbitrarily far distances you still need to wait for the wormhole exit to reach there, and it still travels below c.)

Space inflation doesn’t help that much. Given a long enough time, the “distance” between the wormhole entry and exit points can grow at more than c (because the space between the two expands; the exit points still travel below c). In other words, far parts of the Universe can fall outside your event horizon, but the wormhole can keep them accessible (for various values of “can”...). This can allow unbounded growth in the volume of space available for expansion (exponential growth, if the inflation is exponential), but note that the quantity of matter accessible is still only what was contained in your (c×t)³ (without inflation) volume of space.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-04-08T09:07:37.508Z · LW(p) · GW(p)

Strange7 is referring to this essay, especially section 6.

Wormholes sent to Andromeda at near light speed arrive in approximately year 2,250,000 in co-moving time, but in year 15 of empire-time (setting year zero at the start of the expansion).

Replies from: bogdanb
comment by bogdanb · 2010-05-30T00:44:03.593Z · LW(p) · GW(p)

I still don’t get how you can get more than (c×t)³ as a colonized volume.

With wormholes you could travel within that volume very quickly, which will certainly help you approach c-speed expansion faster, since engine innovations at home can be propagated to the border immediately. And, of course, your volume will be more “useful” because of the lower communication costs (time-wise, and presuming wormholes are not very expensive otherwise). But I don’t see how you can expand the volume quicker than c, since the border expansion will still be limited by it.

(Disclaimer: I didn’t read everything there, mostly the section you pointed out.)

comment by RobinZ · 2010-04-07T02:08:17.195Z · LW(p) · GW(p)

Simple thermodynamics guarantees that any growing consumption of resources is unsustainable on a long enough timescale - even if you dispute the implicit timescale in Dr. Bartlett's talk*, at some point planning will need to account for the fundamental limits. Ignoring the physics is a common error in economics (even professional economics, depressingly).

* Which you appear not to have watched through - for shame!

Replies from: Strange7
comment by Strange7 · 2010-04-07T18:24:09.410Z · LW(p) · GW(p)

Yes, obviously thermodynamics limits exponential growth. I'm saying that exponential growth won't continue indefinitely, that people (unlike bugs) can, will, and in fact have already begun to voluntarily curtail their reproduction.

Replies from: Jack
comment by Jack · 2010-04-07T18:32:11.205Z · LW(p) · GW(p)

What kind of reproductive memes do you think get selected for?

Replies from: RobinZ, Strange7
comment by RobinZ · 2010-04-07T18:41:19.412Z · LW(p) · GW(p)

How strong is the penalty for defection?

Replies from: Jack
comment by Jack · 2010-04-07T19:43:41.013Z · LW(p) · GW(p)

Yeah, this obviously matters a lot. Right now it's low to non-existent outside the People's Republic of China, though I suppose that could change. There are a lot of barriers to effective enforcement of reproductive prohibitions: incredibly difficult-to-solve cooperation issues, organized religions, assorted rights and freedoms people are used to. I suppose a sufficiently strong centralized power could solve the problem, though such a power could be bad for other reasons. My sense is that the prospects for reliable enforcement are low, but obviously a singularity-type superintelligence could change things.

Replies from: bogdanb
comment by bogdanb · 2010-04-08T08:30:54.332Z · LW(p) · GW(p)

I’m not quite sure that penalties are that low outside China.

There are of course places where the penalties for having many babies are low, and there are even states that encourage having babies — but the latter is because birth rates there are below replacement, so it falls outside our exponential-growth discussion; I’m not sure about the former, but the obvious cases (very poor countries) are already in the Malthusian scenario due to high death rates.

But in (relatively) rich economies there are non-obvious implicit limits on reproduction: you’re generally supposed to provide a minimum of care to children; moreover, that “minimum” tends to grow with the richness of the economy. I’m not talking only about legal minimums, but social ones: children in rich societies “need” mobile phones and designer clothes, adolescents “need” cars, etc.

So having children tends to become more expensive in richer societies, even absent explicit legal limits like in China, at least in wide swaths of those societies. (This is a personal observation, not a proof. Exceptions exist. YMMV. “Satisfaction guaranteed” is not a guarantee.)

Replies from: Jack
comment by Jack · 2010-04-08T16:12:05.883Z · LW(p) · GW(p)

The legal minimum care requirement is a good point. With the social minimum: I recognize that this meme exists but it doesn't seem like there are very high costs to disobeying it. If I'm part of a religion with an anti-materialist streak and those in my religious community aren't buying their children designer clothes either... I can't think of what kind of penalty would ensue (whereas not bathing or feeding your children has all sorts of costs if an outsider finds out). It seems better to think of this as a meme which competes with "Reproduce a lot" for resources rather than as a penalty for defection.

Your observation is a good one though.

Replies from: bogdanb
comment by bogdanb · 2010-05-30T00:54:36.242Z · LW(p) · GW(p)

Sure, within a relatively homogeneous and sufficiently “socially isolated”* community the social cost is light.

(*: in the sense that “social minimum” pressures from outside don’t affect it significantly, including by making at least some members “defect to consumerism” and start a consumerist child-pampering positive feedback loop.)

I seem to think that such communities will not become very rich, but I can’t justify it other than with a vague “isolation is bad for growth” idea, so I don’t trust my thought.

Do you have any examples of “rich” societies (by current 1st-world standards) which are socially isolated in the way you describe? (Ie, free from “consumerist” pressure from inside and immune to it from outside.) I can’t think of any.

Replies from: Jack
comment by Jack · 2010-06-05T09:56:21.755Z · LW(p) · GW(p)

Mormons?

comment by Strange7 · 2010-04-07T18:40:57.895Z · LW(p) · GW(p)

I'm not sure I understand what you mean. This isn't a matter of interpersonal communication; it's just individual married couples more-or-less rationally pursuing the 'pass on your genes' mandate by maximizing the survival chances of one or two children rather than hedging their bets with a larger number of individually riskier children.

Replies from: Jack
comment by Jack · 2010-04-07T19:33:59.052Z · LW(p) · GW(p)

If a gene leads to greater fertility rates with no drop in survival rates, it spreads. Similarly if a meme leads to greater fertility with no drop in survival rate and is sufficiently resistant to competing memes it too spreads. Thus, those memes/memetic structures that encourage more reproduction have a selection advantage.

Replies from: Strange7
comment by Strange7 · 2010-04-07T20:38:47.902Z · LW(p) · GW(p)

In this case, the meme in question leads to a drop in fertility rates, but increases survival rates more than enough to compensate.

Replies from: Jack
comment by Jack · 2010-04-07T21:05:22.301Z · LW(p) · GW(p)

I don't really think your characterization of the global drop in fertility rates is right (farmers with big families survive just fine!), but that isn't really the point. The point is, Mormons aren't dying, and neither are lots of groups which encourage reproduction among their members. Unless there are a lot of deconversions or enforced prohibitions against over-reproducing, the future will consist of lots of people whose parents believed in having lots of children, and those people will likely feel the same way. They will then have more children who will also want to have lots of children. This process is unsustainable.

Replies from: Strange7
comment by Strange7 · 2010-04-07T21:23:21.358Z · LW(p) · GW(p)

Unless there are a lot of deconversions

I'm expecting a lot of deconversions. Mormons already go to a lot of trouble to retain members and punish former members, which suggests there's a corresponding amount of pressure to leave. Catholics did the whole breed-like-crazy thing, and that worked out well for a while, but Catholicism doesn't rule the world.

I think the relative zeal of recent converts as compared to lifelong believers has something to do with how siblings raised apart are more likely to have sexual feelings for each other, but that's probably a topic for another time.

comment by NancyLebovitz · 2010-04-05T22:57:29.679Z · LW(p) · GW(p)

An extensive observation-based discussion of why people leave cults. Worth reading, not just for the details, but because it makes very clear that leaving has to make emotional sense to the person doing it. Logical argument is not enough!

People leave because they've been betrayed by leaders, they've been influenced by leaders who are on their own way out of the cult, they find the world is bigger and better than the cult has been telling them, the fears which drove them into the cult get resolved, and/or life changes show that the cult isn't working for them.

comment by gaffa · 2010-04-05T13:51:43.592Z · LW(p) · GW(p)

Does anyone know a popular science book about, how should I put it, statistical patterns and distributions in the universe? Like, what kinds of things follow normal distributions and why, why power laws emerge everywhere, why scale-free networks show up all over the place, etc., etc.

Replies from: DanielVarga, Cyan
comment by DanielVarga · 2010-04-08T22:05:48.748Z · LW(p) · GW(p)

Sorry for ranting instead of answering your question, but "power laws emerge everywhere" is mostly bullshit. Power laws are less ubiquitous than some experts want you to believe. And when you do see them, the underlying mechanisms are much more diverse than what these experts will suggest. They have an agenda: they want you to believe that they can solve your (biology, sociology, epidemiology, computer networks etc.) problem with their statistical mechanics toolbox. Usually they can't.

For some counterbalance, see Cosma Shalizi's work. He has many amusing rants, and a very good paper:

Gauss Is Not Mocked

So You Think You Have a Power Law — Well Isn't That Special?

Speaking Truth to Power About Weblogs, or, How Not to Draw a Straight Line

Power-law distributions in empirical data

Note that this is not a one-man crusade by Shalizi. Many experts of the fields invaded by power-law-wielding statistical physicists wrote debunking papers such as this:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.8169

Another very relevant and readable paper:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.6305
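
For anyone who wants to try this at home, a minimal sketch of the maximum-likelihood exponent estimate recommended in the Clauset/Shalizi/Newman paper linked above for the continuous case, alpha_hat = 1 + n / sum(ln(x_i / x_min)), as opposed to fitting a straight line to a log-log plot. The sample size, x_min, and true exponent below are arbitrary choices for the demonstration:

```python
import math
import random

def mle_alpha(xs, x_min):
    """Continuous power-law exponent estimate, assuming x_min is already known."""
    tail = [x for x in xs if x >= x_min]
    return 1 + len(tail) / sum(math.log(x / x_min) for x in tail)

random.seed(0)
x_min, true_alpha = 1.0, 2.5
# Inverse-transform sampling from p(x) ~ x**(-alpha) for x >= x_min.
data = [x_min * (1 - random.random()) ** (-1 / (true_alpha - 1)) for _ in range(10_000)]
print(mle_alpha(data, x_min))  # should land close to 2.5
```
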

Replies from: RobinZ
comment by RobinZ · 2010-04-08T22:41:18.113Z · LW(p) · GW(p)

That gives a whole new meaning to Mar's Law.

Replies from: DanielVarga
comment by DanielVarga · 2010-04-08T23:33:01.781Z · LW(p) · GW(p)

Thank you, I never knew this fallacy had its own name, and it has annoyed me for ages. Actually, since 2003, when I was working on one of the first online social network services (iwiw.hu). The structure of the network contradicted most of the claims made by the then-famous popular science books on networks: not scale-free (not even truncated power-law), not attack-sensitive, and most of the edges were strong links. Looking at the claims of the original papers instead of the popular science books, the situation was not much better.

comment by Cyan · 2010-04-05T17:36:01.495Z · LW(p) · GW(p)

You could try "Ubiquity" by Mark Buchanan for the power law stuff, but it's been a while since I read it, so I can't vouch for it completely. (Confusingly, Amazon lists three books with that title and different subtitles, all by that author, all published around 2001-2002.)

comment by wheninrome15 · 2010-04-02T00:53:16.311Z · LW(p) · GW(p)

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.

It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are the best and worst case scenarios conditioning on Friendly AI being IMpossible?

Has this been addressed before? As a disclaimer, I haven't thought much about this and I suspect that I'm dressing up the problem in a way that sounds different to me only because I don't fully understand the implications.

Replies from: PhilGoetz, RobinZ
comment by PhilGoetz · 2010-04-02T02:14:20.012Z · LW(p) · GW(p)

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces?

First, define "friendly" in enough detail that I know that it's different from "will not blow up in our faces".

Replies from: RobinZ
comment by RobinZ · 2010-04-02T02:27:34.217Z · LW(p) · GW(p)

Ooh, good catch! wheninrome15 may need to define "will not blow up in our faces" in more detail as well.

comment by RobinZ · 2010-04-02T01:03:28.730Z · LW(p) · GW(p)

Such an eventuality would seem to require that (a) human beings are not computable or (b) human beings are not Friendly.

In the latter case, if nothing else, there is [individual]-Friendliness to consider.

Replies from: Kevin
comment by Kevin · 2010-04-02T01:16:51.392Z · LW(p) · GW(p)

I think human history has demonstrated that (b) is certainly true... sometimes I am surprised we are still here.

Replies from: RobinZ
comment by RobinZ · 2010-04-02T01:58:12.481Z · LW(p) · GW(p)

The argument from (b)* is one of the stronger ones I've heard against FAI.

* Not to be confused with the argument from /b/.

Replies from: ata
comment by ata · 2010-04-02T10:59:56.304Z · LW(p) · GW(p)

Incidentally, /b/ might be good evidence for (b). It's a rather unsettling demonstration of what people do when anonymity has removed most of the incentive for signaling.

Replies from: taw
comment by taw · 2010-04-02T13:23:24.335Z · LW(p) · GW(p)

I find chans' lack of signaling highly intellectually refreshing. /b/ is not typical - due to ridiculously high traffic only meme-infested threads that you can reply to in 5 seconds survive. Normal boards have far better discussion quality.

comment by [deleted] · 2010-04-01T17:20:54.418Z · LW(p) · GW(p)

Are there any Germans, preferably from around Stuttgart, who are interested in forming a society for the advancement of rational thought? Please PM me.

comment by SilasBarta · 2010-04-01T15:32:35.949Z · LW(p) · GW(p)

I know I asked this yesterday, but I was hoping someone in the Bay Area (or otherwise familiar) could answer this:

Monica Anderson: Anyone familiar with her work? She apparently is involved with AI in the SF Bay area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) that explains her ideas and also linked her introductory site and blog.

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

Replies from: Eliezer_Yudkowsky, pjeby
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-01T16:53:12.840Z · LW(p) · GW(p)

Trust your intuition.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-01T17:22:11.602Z · LW(p) · GW(p)

Is there a post about when to trust your intuition?

Replies from: None, None
comment by [deleted] · 2010-04-01T19:22:37.301Z · LW(p) · GW(p)

This comment shows when :)

If you don't like that, I think this gives somewhat of a better idea when you should consider it.

comment by [deleted] · 2010-04-02T15:06:58.335Z · LW(p) · GW(p)

.

comment by pjeby · 2010-04-02T01:05:38.077Z · LW(p) · GW(p)

It all looks pretty flaky to me at this point, but I figure some of you must have run into her stuff before, and I was hoping you could share.

It looks like a biology-inspired, predictive approach somewhat along the lines of Hawkins' HTMs, except that I've not seen her implementation details spelled out as thoroughly as Hawkins'.

Her analysis seems sound to me (in the sense that her proposed model quite closely matches how humans actually get through the day), except that she seems to elevate certain practical conclusions to a philosophical level that's not really warranted (IMO).

(Of course, I think there would likely be practical problems with AN-based systems being used in general applications -- humans tend to not like it when machines guess, especially if they guess wrong. We routinely prefer our tools to be stupid-but-predictable over smart-but-surprising.)

comment by Matt_Simpson · 2010-04-26T02:05:46.556Z · LW(p) · GW(p)

A couple of physics questions, if anyone will indulge me:

Is quantum physics actually an improvement in the theory of how reality works? Or is it just building uncertainty into our model of reality? I was browsing A Brief History of Time at a bookstore, and the chapter on the Heisenberg uncertainty principle seemed to suggest the latter - what I read of it, anyway.

If this is just a dumb question for some reason, feel free to let me know - I've only taken two classes in physics, and we never escaped the Newtonian world.

On a related note, I'm looking for a good physics book that will take me through quantum mechanics. I don't want a textbook because I don't really have the time to spend learning all of the details, but I want something with some equations in it. Any suggestions?

Replies from: Mitchell_Porter, Nick_Tarleton, RobinZ
comment by Mitchell_Porter · 2010-04-26T10:39:02.473Z · LW(p) · GW(p)

Is quantum physics actually an improvement in the theory of how reality works?

It explains everything microscopic. For example, the stability of atoms. Why doesn't an electron just spiral into the nucleus and stay there? The uncertainty principle means it can't be both localized at a point and have a fixed momentum of zero. If the position wavefunction is a big spike concentrated at a point, then the momentum wavefunction, which is the Fourier transform of the position wavefunction, will have a nonzero probability over a considerable range of momenta, so the position wavefunction will start leaking out of the nucleus in the next moment. The lowest energy stable state for the electron is one which is centered on the nucleus, but has a small spread in position space and a small spread in momentum "space".
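
As a small numerical illustration of the Fourier-transform point above: squeezing a Gaussian wave packet in position space broadens it in momentum space, with the product of the two spreads staying near hbar/2. The grid size and the particular widths are arbitrary choices; units have hbar = 1.

```python
import numpy as np

# Narrow the position-space Gaussian and watch the momentum-space spread grow.
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))  # momentum grid (hbar = 1)
dp = p[1] - p[0]

for sigma_x in (5.0, 1.0, 0.2):
    psi_x = np.exp(-x**2 / (4 * sigma_x**2))
    psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)
    psi_p = np.fft.fftshift(np.fft.fft(psi_x))     # momentum-space wavefunction (up to scale)
    prob_p = np.abs(psi_p)**2
    prob_p /= np.sum(prob_p) * dp                  # normalize as a probability density
    sigma_p = np.sqrt(np.sum(p**2 * prob_p) * dp)  # spread around zero mean momentum
    print(f"sigma_x = {sigma_x:4.1f}   sigma_p = {sigma_p:.3f}   product = {sigma_x * sigma_p:.3f}")
```
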

However, every quantum theory ever used has a classical conceptual beginning. You posit the existence of fields or particles interacting in some classical way, and then you "quantize" this. For example, the interaction between electron and nucleus is just electromagnetism, as in Faraday, Maxwell, and Einstein. But you describe the electron (and the nucleus too, if necessary) by a probabilistic wavefunction rather than a single point in space, and you also do the same for the electromagnetic field. Curiously, when you do this for the field, you get particles as emergent phenomena. A "photon" is actually something like a bookkeeping device for the probabilistic movement of energy within the quantized electromagnetic field. You can also get electrons and nucleons (and their antiparticles) from fields in this way, so everywhere in elementary particle physics, you have this "field/particle duality". For every type of elementary particle, there is a fundamental field, and vice versa. The basic equations that get quantized are field equations, but the result of quantization gives you particle behavior.

Everyone wants to know how to think about the uncertainty in quantum physics. Is it secretly deterministic and we just need a better theory, or do things really happen without a cause; does the electron always have a definite position even when we can't see it, or is it somehow not anywhere in particular; and so on. These conceptual problems exist because we have no derivation of quantum wavefunctions from anything more fundamental. This is unlike, say, the distributions in ordinary probability theory. You can describe the output of a quincunx using the binomial distribution, but you also have a "microscopic model" of where that distribution comes from (balls bouncing left and right as they fall down). We don't have any such model for quantum probabilities, and it would be difficult to produce (see: "Bell's theorem"). Sum over histories looks like such a model, but the problem is that histories can cancel ("interfere destructively"). It is as if, in the quincunx device, there were slots at the bottom where balls never fell, and you explained this by saying that the two ways to get there cancelled each other out - which is how sum-over-histories explains the double-slit experiment: no photons arrive in the dark regions because the "probability amplitude" for getting there via one slit cancels the amplitude for getting there from the other slit.

As a practical matter, most particle physicists think of reality in quasi-classical terms - in terms of fields or particles, whichever seems appropriate, but then blurred out by the uncertainty principle. Sum over histories is an extension of the uncertainty principle to movement and interaction, so it's a whole process in time which is uncertain, rather than just a position.

The actual nature of the uncertainty is a philosophical or even ideological matter. The traditional view effectively treats reality as classical but blurry. There is a deterministic alternative theory (Bohmian mechanics) but it is obscure and rather contrived. The popular view on this site is "the many-worlds interpretation" - all the positions, all the histories are equally real, but they live in parallel universes. I believe this view is, like Bohmian mechanics, a misguided philosophical whimsy rather than the future of physics. Like Bohmian mechanics, it can be given a mathematical and not just a verbal form, but it's an artificial addition to the real physics. It's not contributing to progress in physics. Its biggest claim to practical significance is that it helped to inspire quantum computation; but one is not obliged to think that a quantum computer is actually in all states at once, rather than just possibly in one of them.

So, I hold to the traditional view of the meaning of quantum theory - that it's an introduction of a little uncertainty into a basically classical world. It doesn't make sense as an ultimate description of things; but I certainly don't believe the ideas, like Bohm (nonlocal determinism) or Everett (many worlds), which try to make a finished objective theory by just adding an extra mathematical and metaphysical facade. The extra details they posit have a brittle artificiality about them. They do link up with genuine aspects of the quantum mathematical formalism, and so they may indirectly contribute to progress just a little, but I think the future lies more with the traditional view.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-26T12:07:01.639Z · LW(p) · GW(p)

However, every quantum theory ever used has a classical conceptual beginning.

I don't know if I'm the only person who thinks this is funny, but every theory in physics has a basis in naive trust in qualia, even if it's looking at the readout from an instrument or reading the text of an article.

Replies from: Jack, RobinZ
comment by Jack · 2010-04-26T12:54:08.056Z · LW(p) · GW(p)

I just take all scientific theories to ultimately be theories about phenomenal experience. No naive trust required.

comment by RobinZ · 2010-04-26T12:45:05.021Z · LW(p) · GW(p)

What do you mean?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-26T13:24:14.627Z · LW(p) · GW(p)

The conclusion may be that matter is almost entirely empty space, but you still have to let your interactions with the things that give you information about physics rely on the ancient habit of assuming that what seems to be solid is solid.

Replies from: RobinZ
comment by RobinZ · 2010-04-26T13:37:22.483Z · LW(p) · GW(p)

I think you may misunderstand what the physics actually says. Compared to the material of neutron stars, yes, terrestrial matter is almost entirely empty space ... but it still resists changes to shape and volume. And you don't need to invoke ancient habits anywhere - those conclusions fall right out of the physics without modification.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-26T14:44:19.948Z · LW(p) · GW(p)

I'm beginning to think that I've been over-influenced by "goshwow" popular physics, which tries to present physics in the most surprising way possible. It's different if I think of that "empty space" near subatomic particles as puffed up by energy fields.

comment by Nick_Tarleton · 2010-04-26T02:35:53.343Z · LW(p) · GW(p)

Is quantum physics actually an improvement in the theory of how reality works? Or is it just building uncertainty into our model of reality?

The Quantum Physics Sequence

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-26T03:36:40.267Z · LW(p) · GW(p)

thanks, but I was hoping for a quick answer. Working through that sequence is on my "Definitely do sometime when I have nothing too important to do" list.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-04-26T03:48:01.933Z · LW(p) · GW(p)

OK, a quick answer: classical physics cannot be true of the reality we find ourselves in. Specifically, classical physics is contradicted by experimental results such as the photoelectric effect and the double-slit experiment. The parts of reality that require you to know quantum physics affect such important things as chemistry, semiconductors and whether our reality can contain such a thing as a "solid object". The only reason we teach classical physics is that it is easier than quantum physics. If everyone could learn quantum physics, there would be no need to teach classical physics anymore.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-26T03:53:59.486Z · LW(p) · GW(p)

First of all, thanks.

The only reason we teach classical physics is that it is easier than quantum physics. If everyone could learn quantum physics, there would be no need to teach classical physics anymore.

Really? Isn't classical physics used in some contexts because the difference between the classical model and reality isn't enough to justify extra complications? I'm thinking specifically of engineers.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-04-26T04:06:19.543Z · LW(p) · GW(p)

Isn't classical physics used in some contexts because the difference between the classical model and reality isn't enough to justify extra complications?

True. Revised sentence: the only reasons for using classical physics are that it is easier to learn, easier to calculate with, and it helps you understand people who know only classical physics.

comment by RobinZ · 2010-04-26T02:18:52.673Z · LW(p) · GW(p)

On the first point: I try never to categorize questions as intelligent or dumb, but is quantum mechanics an improvement? Unquestionably. To give only the most obvious example, lasers work by quantum excitation.

I, too, would be interested in learning quantum mechanics from a good textbook.

Replies from: cupholder
comment by cupholder · 2010-04-26T05:47:14.888Z · LW(p) · GW(p)

I understand that Claude Cohen-Tannoudji et al.'s two-volume Quantum Mechanics is supposed to be exceptional, albeit expensive, time-consuming to work through fully, and targeted at post-graduates rather than beginners. (Another disclaimer: I have not used the textbook myself.) Cohen-Tannoudji got the 1997 Nobel Prize in Physics for his work with...lasers!

Replies from: wnoise
comment by wnoise · 2010-04-26T09:18:20.765Z · LW(p) · GW(p)

It was my undergraduate textbook. It is certainly thorough, but other than that, I'm not sure I can strongly recommend it. (The typography is painful).

I think starting with Quantum Computation and Quantum Information and hence discrete systems might be a better way to start, and then later expand to systems with continuous degrees of freedom.

Replies from: RobinZ
comment by RobinZ · 2010-04-26T10:31:42.216Z · LW(p) · GW(p)

I'm confused: "typography"? The font on the Amazon "LOOK INSIDE" seems perfectly legible to me.

Replies from: wnoise
comment by wnoise · 2010-04-26T18:37:37.801Z · LW(p) · GW(p)

The typesetting of the equations in particular. There were several things that hampered the readability for me -- like using a period for the dot product, rather than a raised dot. I expect a full stop to mean the equation has ended. Exponents are set too big. Integral signs are set upright, rather than slanted (conversely the "d"s in them are italicized, when they should be viewed as an operator, and hence upright). Large braces for case expansion of definitions are 6 straight lines, rather than smooth curves. The operator version of 1 is an ugly outline. The angle brackets used for bras and kets are ugly (though at least distinct from the less than and greater than signs).

I'm not being entirely fair: these are really nits. On the other hand, these and other things actually made it harder for me to use the book. And it's not an easy book to start with.

Replies from: RobinZ
comment by RobinZ · 2010-04-26T18:49:07.861Z · LW(p) · GW(p)

Thanks for the elaboration. I'll bear that in mind if I have a chance to pick up a copy.

comment by NancyLebovitz · 2010-04-23T13:19:08.789Z · LW(p) · GW(p)

I'm looking at the question of whether it's certainly the case that getting an FAI is a matter of zeroing in directly on a tiny percentage of AI-space.

It seems to me that an underlying premise is that there's no reason for a GAI to be Friendly, so Friendliness has to be carefully built into its goals. This isn't unreasonable, but there might be non-obvious pulls towards or away from Friendliness, and if they exist, they need to be considered. At the very least, there may be general moral considerations which incline towards Friendliness, and which would be more stable than starting from a definition of humanity and then trying to protect that.

Here's an example of a seemingly open choice where there are non-obvious biases towards particular outcomes: D&D alignments. You look at the tidy little two-dimensional chart, and you might think you can equally play any alignment which appeals to you.

The truth is that Chaotic and/or Evil and/or Neutral alignments tend to make coordination inside parties more difficult. It's possible to play them successfully, but it takes more skill than making Lawful and/or Good work. Some players find out that playing from the first batch with too much gusto makes gaming less fun. Some GMs put restrictions on the first batch of alignments or how they can be played.

comment by Richard_Kennaway · 2010-04-21T07:10:27.908Z · LW(p) · GW(p)

I wonder how alarming people find this? I guess that if something fooms, this will provide the infrastructure for an instant world takeover. OTOH, the "if" remains as large as ever.

RoboEarth is a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment.

Bringing a new meaning to the phrase "experience is the best teacher", the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately, for more subtle and sophisticated human-machine interaction.

They're shortly having a workshop at a large robotics conference in Alaska.

comment by NancyLebovitz · 2010-04-19T11:20:47.678Z · LW(p) · GW(p)

CFS: creative non-fiction about immortality

BOOK PROJECT: Immortality postmark deadline August 6, 2010

For a new book project to be published by Southern Methodist University Press, entitled "Immortality," we're seeking new essays from a variety of perspectives on recent scientific developments and the likelihood, merits and ramifications of biological immortality. We're looking for essays by writers, physicians, scientists, philosophers, clergy--anyone with an imagination, a vision of the future, and a dream (or fear) of living forever.

Essays must be vivid and dramatic; they should combine a strong and compelling narrative with a significant element of research or information, and reach for some universal or deeper meaning in personal experiences. We’re looking for well-written prose, rich with detail and a distinctive voice.

For examples, see Creative Nonfiction #38 (Spring 2010).

Guidelines: Essays must be: unpublished, 5,000 words or less, postmarked by August 6, 2010, and clearly marked “Immortality” on both the essay and the outside of the envelope. Please send manuscript, accompanied by a cover letter with complete contact information (address, phone, and email) and SASE to:

Creative Nonfiction
Attn: Immortality
5501 Walnut Street, Suite 202
Pittsburgh, PA 15232

comment by Alex Flint (alexflint) · 2010-04-15T14:44:37.892Z · LW(p) · GW(p)

How does the notion of time consistency in decision theory deal with the possibility of changes to our brains/source code? For example, suppose I know that my brain is going to be forcibly re-written in 10 minutes, and that I cannot change this fact. Then decisions I make after that modification will differ from those I make now, in the presence of the same information (?).

Replies from: RobinZ
comment by RobinZ · 2010-04-15T15:04:48.997Z · LW(p) · GW(p)

"Forcibly rewritten" implies your being a different person afterwards. Naively, time consistency would suggest treating them as such.

Replies from: alexflint
comment by Alex Flint (alexflint) · 2010-04-18T22:07:38.512Z · LW(p) · GW(p)

But if a mind's source code is changed just a little then shouldn't its decisions be changed just a little too (for sufficiently small changes in source code)? If so, then what does time consistency even mean? If not, then how big does a modification have to be to turn a mind into "a different person" and why does such a dichotomy make sense?

Replies from: wnoise, RobinZ
comment by wnoise · 2010-04-19T03:01:46.527Z · LW(p) · GW(p)

Not necessarily: if (a < b) changing to if (a > b) is a very small change in source with a potentially very large effect.

Replies from: alexflint
comment by Alex Flint (alexflint) · 2010-04-19T08:30:34.230Z · LW(p) · GW(p)

Right, so I suppose what I should've said is that if I want to make some arbitrarily small change to the decisions made by mind X (measured as some appropriate quantity of "change") then there exists some change I could make to X's source code such that no decision would deviate by more than the desired amount from X's original decision.

How to measure "change in decisions" and "change in source code" is all a bit fluffy but the point is just that there is a continuum of source code modifications from those with negligible effect to those with large effect. This makes it hard to believe that all modifications can be classified as either "X is now a different person" or "X is the same person" with no middle ground.

And, if on the contrary middle ground is allowed, then what does time consistency mean in such a case?

comment by RobinZ · 2010-04-19T01:36:17.827Z · LW(p) · GW(p)

Well, it's not much of a problem for me in particular, as I'm fairly generous toward other people as a rule - the main problem is continuity of values and desires. A random stranger is not likely to agree with me on most issues, so I'm not sure I want my resources to become theirs rather than Mom's. If there is likely to be significant continuity of a coherent-extrapolated-volition sort, I'd probably not worry.

comment by Alex Flint (alexflint) · 2010-04-15T13:28:06.318Z · LW(p) · GW(p)

If you were going to predict the emergence of AGI by looking at progress towards it over the past 40 years and extrapolate into the future, then what parameter(s) would you measure and extrapolate?

Kurzweil et al measure raw compute power in flops/$, but as has been much discussed on LessWrong there is more to AI than raw compute power. Another popular approach is to chart progress in terms of the animal kingdom, saying things like "X years ago computers were as smart as jellyfish, now they're as smart as a mouse, soon we'll be at human level", but it's hard to say whether a computer is "as smart" as some organism, and even harder to extrapolate that sensibly into the future.

What other approaches?

Disclaimer: I'm not saying this is actually a good way to predict when AGI will emerge!

comment by NancyLebovitz · 2010-04-07T13:17:17.076Z · LW(p) · GW(p)

In spite of the rather aggressive signaling here in favor of atheism, I'm still an agnostic on the grounds that it isn't likely that we know what the universe is ultimately made of.

I'm even willing to bet that there's something at least as weird as quantum physics waiting to be discovered.

Discussion here has led me to think that whatever the universe is made of, it isn't all that likely to lead to a conclusion there's a God as commonly conceived, though if we're living in a simulation, whoever is running it may well have something like God-like omnipotence and omnipresence. "May well" because the simulation-runner may be subject to legal, social, economic, or [unimaginable] constraints.

While I'm on the subject, is there any reason to think Omega is possible? Or is Omega simply a handy tool for thinking about philosophical problems?

I haven't seen "I don't know and you don't either" agnosticism addressed here.

Replies from: Jack, Matt_Simpson, Richard_Kennaway
comment by Jack · 2010-04-07T17:29:36.423Z · LW(p) · GW(p)

it isn't all that likely to lead to a conclusion there's a God as commonly conceived

The Bayesian translation of this is "I'm an atheist".

While I'm on the subject, is there any reason to think Omega is possible? Or is Omega simply a handy tool for thinking about philosophical problems?

Interesting. I'm not sure I know enough about Omega to say. But for one thing: I think it is probably impossible for Omega to predict its own future mental states (there would be an infinite recursion). This will introduce uncertainty into its model of the universe.

comment by Matt_Simpson · 2010-04-07T15:43:35.453Z · LW(p) · GW(p)

The justification for atheism over agnosticism is essentially Occam's Razor. As far as we know, there are no exceptions to physics as we understand it. So God/Gods explains nothing that isn't already explained by physics. So P(physics is true) >= P(Physics is true AND God/Gods exist(s))

comment by Richard_Kennaway · 2010-04-07T15:24:10.607Z · LW(p) · GW(p)

I've always taken Omega to be just a handy tool for thinking about philosophical problems. "Omega appears and tells you X" is short for "For the purposes of this conundrum, imagine that X is true, that you have undeniably conclusive evidence for X, and that the nature of this evidence and why it convinces you is irrelevant to the problem."

In a case where X is impossible ("Omega appears and tells you that 2+2=3") then the conundrum is broken.

comment by Matt_Simpson · 2010-04-06T21:53:20.314Z · LW(p) · GW(p)

I have a couple of questions about UDT if anyone's willing to bite. Thanks in advance.

comment by NancyLebovitz · 2010-04-06T10:34:51.635Z · LW(p) · GW(p)

Mass Driver's recent comment about developing the US Constitution being like the invention of a Friendly AI opens up the possibility of a mostly Friendly AI -- an AI which isn't perfectly Friendly, but which has the ability to self-correct.

Is it more possible to have an AI which never smiley-faces or paperclips or falls into errors we can't think of than to have an AI which starts to screw up, but can realize it and stop?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-06T12:00:02.063Z · LW(p) · GW(p)

It's not feasible to attempt to create a government which is both perfect and self-correcting. I'm not sure if the same is true of FAI.

comment by Mass_Driver · 2010-04-06T04:17:33.994Z · LW(p) · GW(p)

Is anybody interested in finding a study buddy for the material on Less Wrong? I think a lot of the material is really deep -- sometimes hard to internalize and apply to your own life even if you're articulate and intelligent -- and that we would benefit from having a partner to go over the material with, ask tough questions, build trust, and basically learn the art of rationality together. On the off chance that you find Jewish analogies interesting or helpful, I'm basically looking for a chevruta partner, although the sacredish text in question would be the Less Wrong sequences instead of the Bible.

comment by Liron · 2010-04-02T05:59:44.118Z · LW(p) · GW(p)

I was disallowed from posting this on the LessWrong subreddit, so here it is on the LessWrong mainland:

Shoeperstimulus

comment by Richard_Kennaway · 2010-04-01T22:15:46.521Z · LW(p) · GW(p)

I decided the following quote wasn't up to Quotes Thread standard, but worth remarking on here:

Read a book a day.

Arthur C. Clarke, quoted in "Science Fictionisms".

I've never managed to do this. I've sometimes read a book in a day, but never day after day, although I once heard Jack Cohen say he habitually read three SF books and three non-fiction books a week.

How many books a week do you read, and what sort of books? (No lists of recommended titles, please.)

Replies from: taw, None, Morendil, Liron, wnoise, Rain
comment by taw · 2010-04-02T13:08:04.879Z · LW(p) · GW(p)

I pretty much don't read paper books - this format and medium might as well die as far as I'm concerned. I listen to a ridiculous number of audiobooks on a Sansa Clip (in fast mode). The Bay has plenty, or Audible if you can stand their DRM. My favourites have been TTC lectures, which have zero value to me other than entertainment.

The idea behind the audiobooks was mostly to do something useful at times when I cannot do anything else - but it does use cognitive resources, and makes me more tired than if I were listening to music or the like for the same amount of time. It's a very reliable relation.

comment by [deleted] · 2010-04-02T12:27:09.268Z · LW(p) · GW(p)

I read about a book a week, almost exclusively non-fiction, generally falling somewhere between the popular science and textbook level. Occasionally I'll throw a sci-fi novel into the mix.

I'd love to speed this up, since my reading list grows much faster than books get completed, but I'm not sure how (other than simply spending more time reading). Has anyone had luck with speed-reading techniques, such as Tim Ferriss's?

Replies from: JenniferRM, gwern
comment by JenniferRM · 2010-04-02T20:57:36.940Z · LW(p) · GW(p)

In some periods of my life I've read about a book a day (almost entirely fiction), but I mostly look back at those periods with regret, because I suspect my reading was largely based on the desire to escape an unpleasant reality that I understood as inherent to reality rather than something contingent that I could do something about.

As an adult I have found myself reading non-fiction directed at life goals more often and fiction relatively less. Every so often I go 3 months without reading a book; other times I get through maybe 1 a week. Part of this is that non-fiction is generally just a slower read, because it actually has substantive content that must be practiced or considered before it really sticks. With math texts I generally slow down to maybe 1 to 10 pages a day.

With non-fiction, I also tend to spend relatively a lot of time figuring out what to read, rather than simply reading it. When I become interested in a subject I don't mind spending several hours trying to work out the idea space and find "the best book" within that field.

I've never made efforts to learn speed reading because the handful of times I've met someone who claimed to be able to do it and was up for a test, their reading comprehension seemed rather low. We'd read the same thing and then I'd ask them about details of motivation or implication and they would have difficulty even remembering particular scenes or plot elements, leaving out their implications entirely.

With speed reading, I sometimes get the impression that people are aiming for "having read X" as the goal, rather than "having read X and learned something meaningful from it".

comment by gwern · 2010-04-03T01:43:02.978Z · LW(p) · GW(p)

The stuff Ferriss covers is normal enough. It's better to think of it as remedial reading techniques for people (most everyone) who don't read well than as speeding up past 'normal'. For example, if you're subvocalizing everything you read, You're Doing It Wrong. For your average LW reader, I'd suggest that anything below 300WPM is worth fixing.

comment by Morendil · 2010-04-02T06:38:16.409Z · LW(p) · GW(p)

After a fallow period I'm back to two-three a month as a fairly regular rhythm. Fiction has been pretty much eliminated from my reading diet (ten years ago it used to make up the bulk of it).

Who else has a LibraryThing account or similar?

Replies from: None, RobinZ, Richard_Kennaway
comment by [deleted] · 2010-04-02T12:29:17.967Z · LW(p) · GW(p)

I have a LibraryThing here, which I generally do a bulk update of every 2-3 months (whenever I'm reminded I have it).

comment by RobinZ · 2010-04-02T12:12:36.022Z · LW(p) · GW(p)

I recently got a GoodReads account - mainly because (a) it's on my iPhone and (b) it is just a reading list, rather than an owning list, so editions and such aren't such a hassle.

comment by Richard_Kennaway · 2010-04-02T06:53:45.242Z · LW(p) · GW(p)

Who else has a LibraryThing account or similar?

I'm on LibraryThing here, but I don't keep it up to date (I did a bulk upload in 2006 and have hardly touched it since), and most of my books that are too old to have ISBNs aren't there. My primary book catalogue isn't online.

comment by Liron · 2010-04-02T00:19:30.024Z · LW(p) · GW(p)

Read: 2 books a year :(

Listen on iPhone through audible.com subscription: 2-3 books a month

Plan to read on iPad once I buy it: Maybe one a month, and denser stuff than what they put on audio.

comment by wnoise · 2010-04-01T22:31:53.932Z · LW(p) · GW(p)

I have read over 5 books a day. I generally read less than one book a week though, as there are so many other things to consume, e.g. on the Internet.

comment by Rain · 2010-04-01T22:25:50.795Z · LW(p) · GW(p)

Tyler Cowen also reads voraciously.

I read up to 3 books a week, averaging around 0.3 due to long periods of avoiding them. The internet and Netflix are much more immediate and require less work.

I read primarily science fiction and fantasy, but I have lots of classical fiction and non-fiction as well, Great Books style.

comment by SilasBarta · 2010-04-01T17:50:24.747Z · LW(p) · GW(p)

Question about Mach's principle and relativity, and some scattered food for thought.

Under Mach and relativity, it is only relative motion, including acceleration, that matters. Using any frame of reference, you predict the same results. GR also says that acceleration is indistinguishable from being in a gravitational field.

However, accelerations have one observable impact: they break things. So let's say I entered the gravitational field of a REALLY high-g planet. That can induce a force on me that breaks my bones. Yet I can define myself as being at rest and say that the planet is moving towards me. But my bones will still break. Why does a planet coming toward me cause my bones to break, even before I touch it, when there exists a frame in which I'm not undergoing acceleration?

I have an idea of how to answer this (something like, "actually, if I define myself as the origin, the entire universe is accelerating towards me, which causes some kind of gravitational waves which predict the same thing as me undergoing high g's"). But I bring it up because I'm trying to come up with a research program that expresses all the laws of physics in terms of information theory (kinda like the "it from bit" business you hear about, except with actual implications).

Relative energy levels have an informational interpretation: higher energy states are less likely, and less likely states convey more information. So structural breakage can be explained in terms of the system attempting to store more information than it is capable of holding. Buckling (elastic instability), in turn, can be explained as the case where information is favored (via low energy levels) to be stored in a different degree of freedom from the one along which the load is applied.

Gravitational potential energy and kinetic energy from velocity also have an informational interpretation. So: how does this all come together to explain structural breakage under acceleration, in information-theoretic terms?
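To make the "less likely states carry more information" link concrete, here is a minimal sketch assuming an ordinary Boltzmann ensemble; the two energy levels and the temperature are made-up illustration values, not anything derived from the structural-failure setting above:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def surprisal_bits(energy, energies, temperature):
    """Shannon information (bits) of finding the system in one state of a
    Boltzmann ensemble: higher-energy states are less probable, so observing
    them conveys more information."""
    beta = 1.0 / (K_B * temperature)
    weights = [math.exp(-beta * e) for e in energies]
    p = math.exp(-beta * energy) / sum(weights)  # Boltzmann probability
    return -math.log2(p)

# Hypothetical two-state system: a relaxed configuration vs. a strained
# (deformed) one sitting 5e-21 J higher in energy.
levels = [0.0, 5e-21]
print(surprisal_bits(levels[0], levels, 300.0))  # ~0.4 bits (likely state)
print(surprisal_bits(levels[1], levels, 300.0))  # ~2.1 bits (unlikely state)
```

On this picture, pushing a structure into higher-energy (deformed) configurations means asking it to hold ever more surprising states, which is one way to phrase the breakage intuition above, though nothing in this sketch shows that it reduces to standard mechanics.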

Replies from: wnoise, pengvado
comment by wnoise · 2010-04-01T17:55:28.396Z · LW(p) · GW(p)

However, accelerations have one observable impact: they break things.

No. Moving non-rigidly breaks things. Differences in acceleration across different parts of things break things.

Replies from: rwallace, SilasBarta
comment by rwallace · 2010-04-02T10:43:25.352Z · LW(p) · GW(p)

The classic pithy summary of this is "falling is harmless, it's the sudden stop at the end that kills you."

Replies from: None, SilasBarta
comment by [deleted] · 2010-04-02T16:31:19.243Z · LW(p) · GW(p)

You know, really, neither falling nor suddenly stopping is harmful. The thing that kills you is that half of you suddenly stops and the other half of you gradually stops.

Replies from: SilasBarta
comment by SilasBarta · 2010-04-02T16:45:45.137Z · LW(p) · GW(p)

Well put. And the way I can fit this into an information-theoretic formalism is that one part of the body has high kinetic energy relative to the other, which requires more information to store.

comment by SilasBarta · 2010-04-02T15:49:57.030Z · LW(p) · GW(p)

Yes, but the sudden stop is itself a (backwards) acceleration, which should be reproducible merely from a gravitational field.

(Anecdote: when I first got into aircraft interior monument analysis, I noticed that the crash conditions it's required to withstand include a forward acceleration of 9g, corresponding to a head-on crash. I naively asked, "wait, in a crash, isn't the aircraft accelerating backwards (aft)?" They explained that the criterion is written in the frame of reference of the objects on the aircraft, which are indeed accelerating forward relative to the aircraft.)

Replies from: wnoise
comment by wnoise · 2010-04-02T16:53:34.827Z · LW(p) · GW(p)

The sudden stop is a differential backwards acceleration. The front of the object gets hit and starts accelerating backwards while the back does not.

If you could stop something by applying a uniform 10,000 g to every part of the object, it would come through none the worse for wear. If you can't, and only apply it to part of it, the object gets smushed or ripped apart.

comment by SilasBarta · 2010-04-02T15:45:15.574Z · LW(p) · GW(p)

Actually, from a frame of reference located somewhere on the breaking thing, wouldn't it be the differences in relative positions (not accelerations) of its parts that causes the break? After all, breakage occurs when (there exists a condition equivalently expressible as that in which) too much elastic energy is stored in the structure, and elastic energy is a function of its deformation -- change in relative positions of its parts.

Replies from: JGWeissman
comment by JGWeissman · 2010-04-02T16:08:33.513Z · LW(p) · GW(p)

Yes, change in relative positions causes the break. But differences in velocities caused the change in relative positions. And differences in acceleration caused the differences in velocities.

Normally, you can approximate a planet's gravitational field as constant within the region containing a person, so it will cause a uniform acceleration, which will change the person's velocity uniformly, which will not cause any relative change in position.

However, the strength of the gravitational field actually varies inversely with the square of the distance to the center of the planet, so if the person's head is farther from the planet than their feet, their feet will be accelerated more than their head. This differential acceleration is a tidal force (sometimes described as gravitational shear). For small objects in weak fields, the effect is too small to notice.
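For a sense of scale, here is a quick Newtonian back-of-the-envelope sketch; the 1.8 m head-to-feet separation is just an arbitrary person-sized value:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # radius of the Earth, m

def g_at(r):
    """Newtonian gravitational acceleration at distance r from the center."""
    return G * M_EARTH / r**2

height = 1.8  # head-to-feet separation, m
delta_g = g_at(R_EARTH) - g_at(R_EARTH + height)
print(g_at(R_EARTH))  # ~9.8 m/s^2
print(delta_g)        # ~5.5e-6 m/s^2, roughly half a millionth of g
```

Near the Earth's surface the head-to-feet difference is about half a millionth of the total acceleration, which is why the uniform-field approximation works so well for person-sized objects.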

Replies from: SilasBarta
comment by SilasBarta · 2010-04-02T16:27:45.507Z · LW(p) · GW(p)

Okay, thanks, that makes sense. So being in free fall in a gravitational field isn't really comparable to crashing into something, because the difference in acceleration across my body in free fall is very small (though I suppose it could be high for a small, ultra-dense planet).

So, in free fall, the (slightly) weakening gravitational field as you get farther from the planet should put your body in (minor) tension, since, if you stand as normal, your feet accelerate faster, pulling your head along. If you put the frame of reference at your feet, how would you account for your head appearing to move away from them, given that the planet is pulling it in the direction of your feet?

Replies from: Cyan, JGWeissman
comment by Cyan · 2010-04-02T22:28:26.553Z · LW(p) · GW(p)

though I suppose could be high for a small, ultra-dense planet

Spaghettification.
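To put a rough number on that, here is a Newtonian sketch; the mass and radius are typical textbook neutron-star values, and 1.8 m again stands in for a person:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_NS = 2.8e30    # ~1.4 solar masses, kg
R_NS = 1.2e4     # ~12 km radius, m
HEIGHT = 1.8     # head-to-feet separation, m

# Head-vs-feet difference in gravitational acceleration at the surface
tidal = G * M_NS * (1.0 / R_NS**2 - 1.0 / (R_NS + HEIGHT)**2)
print(tidal / 9.8)  # ~4e7, i.e. tens of millions of g of stretching
```

Near a neutron star a Newtonian estimate is only an order-of-magnitude guide, but it makes the point: the same tidal term that is negligible on Earth becomes catastrophic there.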

comment by JGWeissman · 2010-04-02T17:05:23.579Z · LW(p) · GW(p)

If you put the frame of reference at your feet, how would you account for your head appearing to move away from you, since the planet is pulling it in the direction of your feet?

Your feet are in an accelerating reference frame, being pulled towards the planet faster than your head. One way to look at it is that the acceleration of your feet cancels out a gravitational field stronger than that experienced by your head.

Replies from: SilasBarta
comment by SilasBarta · 2010-04-02T17:22:22.265Z · LW(p) · GW(p)

But I've ruled that explanation out from this perspective. My feet are defined to be at rest, and everything else is moving relative to them. Relativity says I can do that.

Replies from: JGWeissman
comment by JGWeissman · 2010-04-02T17:44:18.572Z · LW(p) · GW(p)

Relativity says that there are no observable consequences from imposing a uniform gravitational field on the entire universe. So, imagine that we turn on a uniform gravitational field that exactly cancels the gravitational field of the planet at your feet. Then you can use an inertial (non-accelerating) frame centered at your feet. The planet, due to the uniform field, accelerates towards you. Your head experiences the gravitational pull of the planet, plus the uniform field. At the location of your head, the uniform field is slightly stronger than is needed to cancel the planet's gravity, so your head feels a slight pull in the opposite direction, away from your feet.

An important principle here is that the transformation that lets you say your feet are at rest has to be applied to the rest of the universe as well.
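Spelled out in symbols, as a minimal Newtonian sketch (writing g(r) = GM/r^2 for the magnitude of the planet's field, with r_head > r_feet):

```latex
% The added uniform field has magnitude g(r_feet) and points away from the planet,
% so in the feet-at-rest frame the net field at the head (measured toward the planet) is
\[
  a_{\text{head}}
    = \frac{GM}{r_{\text{head}}^{2}} - \frac{GM}{r_{\text{feet}}^{2}} < 0
  \qquad (r_{\text{head}} > r_{\text{feet}}),
\]
% a small residual acceleration pointing away from the planet, and hence away from
% your feet: exactly the slight tension described above.
```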

comment by pengvado · 2010-04-02T12:00:50.766Z · LW(p) · GW(p)

Why does a planet coming toward me cause my bones to break, even before I touch it, and there exists a frame in which I'm not undergoing acceleration?

In a gravitational field steep enough to have nonnegligible tides (that is the phenomenon you were referring to, right?), there is no reference frame in which all parts of you remain at rest without tearing you apart. You can define some point in your head to be at rest, but then your feet are accelerating; and vice versa.

comment by AngryParsley · 2010-04-01T15:58:00.988Z · LW(p) · GW(p)

Sam Harris gave a TED talk a couple months ago, but I haven't seen it linked here. The title is Science can answer moral questions.

Replies from: taw, cupholder, timtyler, Vladimir_Nesov, Liron, drimshnick
comment by taw · 2010-04-02T13:10:01.725Z · LW(p) · GW(p)

It was so filled with wrong I couldn't even bother to finish it, and I usually enjoy crackpots from TED.

comment by cupholder · 2010-04-01T18:05:37.418Z · LW(p) · GW(p)

Harris has also written a blog post nominally responding to 'many of my [Harris'] critics' of his talk, but it seems to be more of a reply to Sean Carroll's criticism of Harris' talk (going by this tweet and the many references to Carroll in Harris' post). Carroll has also briefly responded to Harris' response.

comment by timtyler · 2010-04-02T11:59:50.372Z · LW(p) · GW(p)

My reaction was: bad talk, wrong answers, not properly thought through.

comment by Vladimir_Nesov · 2010-04-01T16:34:48.541Z · LW(p) · GW(p)

He argues that science can answer factual questions, thus resolving uncertainty in moral dogma defined conditionally on those answers. This is different from figuring out the moral questions themselves.

Replies from: Jack
comment by Jack · 2010-04-02T14:51:32.899Z · LW(p) · GW(p)

That isn't all he is claiming though:

I was not suggesting that science can give us an evolutionary or neurobiological account of what people do in the name of “morality.” Nor was I merely saying that science can help us get what we want out of life. Both of these would have been quite banal claims to make (unless one happens to doubt the truth of evolution or the mind’s dependency on the brain). Rather I was suggesting that science can, in principle, help us understand what we should do and should want—and, perforce, what other people should do and want in order to live the best lives possible. My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-02T18:07:28.311Z · LW(p) · GW(p)

He does claim this, but it's not what he actually discusses in the talk.

comment by Liron · 2010-04-02T00:15:27.486Z · LW(p) · GW(p)

I'm always impressed by Harris's eloquence and clarity of thought.

comment by drimshnick · 2010-04-01T16:08:15.986Z · LW(p) · GW(p)

This is a blog devoted to rationality - it's no wonder it hasn't been linked here.

Replies from: Rain
comment by Rain · 2010-04-01T16:14:36.436Z · LW(p) · GW(p)

Why do you say that? What do you mean?

comment by Liron · 2010-04-02T06:00:14.735Z · LW(p) · GW(p)

I was disallowed from posting this on the LessWrong subreddit, so here it is on the LessWrong mainland: Shoeperstimulus

comment by MatthewB · 2010-04-03T10:48:14.739Z · LW(p) · GW(p)

Are Rush Limbaugh and Glenn Beck (with their sidekick O'Reilly, who doesn't really factor in much) foolishly April enough?