[SEQ RERUN] Prices or Bindings?

post by MinibearRex · 2012-09-30T05:07:14.754Z · 11 comments

Today's post, Prices or Bindings?, was originally published on 21 October 2008. A summary (taken from the LW wiki):

Are ethical rules simply actions that have a high cost associated with them? Or are they bindings, expected to hold in all situations, no matter the cost otherwise?

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Ethical Injunctions, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

11 comments

Comments sorted by top scores.

comment by DanielLC · 2012-09-30T05:15:17.139Z

You (Eliezer) would break your word to save the world. It's just that if someone sufficiently rational came to you and told you about their biotech concoction, keeping your word would be your best shot at saving the world under UDT (updateless decision theory).

Of course, the guy you're quoting didn't understand UDT, so it would be perfectly understandable for him to use much the same reasoning and call it "honor".

Replies from: MixedNuts
comment by MixedNuts · 2012-09-30T14:56:12.433Z

When do UDT and honor yield different results?

Replies from: Manfred, DanielLC
comment by Manfred · 2012-09-30T17:54:45.724Z

You'll break your word to people not smart enough to realize you'll break your word to people who aren't smart enough.

Replies from: MinibearRex
comment by MinibearRex · 2012-09-30T21:13:02.249Z

If you try to add to that category people who know that, but think that they are smart enough, then it gets tricky. How do I know whether I actually am smart enough, or whether I just think I'm smart enough?

Replies from: Manfred
comment by Manfred · 2012-09-30T23:31:28.299Z

Hm, not sure. Obviously on the object level you can just prove what the UDT agent will do. But not being able to do that is presumably why you're uncertain in the first place.

Still, I think people should usually just trust themselves. "I don't think I'm a rock, and a rock doesn't think it's a rock, but that doesn't mean I might be a rock."

Replies from: MinibearRex
comment by MinibearRex · 2012-10-01T04:22:02.234Z

I tried to solve it on my own, but haven't been able to so far. I haven't been able to figure out what sort of function someone who knows that I'm using UDT will use to predict my actions, and how my own decisions affect that. If someone knows that I'm using UDT, and I think that they think that I will cooperate with anyone who knows I'm using UDT, then I should break my word. But if they know that...
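
Here's a toy sketch of the regress as I see it (the depth cutoff and the fallback answer are arbitrary inventions of mine, and this is nothing like real UDT): each of us predicts the other by simulating one level deeper, and whatever gets assumed at the bottom propagates back up, flipping the answer at each level.

```python
# Toy model of "they think that I think that they think ...": each level
# predicts the next by simulating one level deeper, up to a cutoff.
# The cutoff and the fallback answer are arbitrary; this is NOT real UDT.

def predict(depth, max_depth):
    if depth >= max_depth:
        return "cooperate"  # arbitrary prior once we stop simulating
    their_guess = predict(depth + 1, max_depth)
    # If I expect them to expect cooperation, defection looks tempting,
    # and vice versa - so the answer flips at every level.
    return "defect" if their_guess == "cooperate" else "cooperate"

for cutoff in range(1, 7):
    print(cutoff, predict(0, cutoff))
# Prints alternating answers: the conclusion depends entirely on the
# arbitrary cutoff, which is exactly why "but if they know that..."
# never settles.
```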

In general, I'm rather suspicious of the "trust yourself" argument. The Lake Wobegon effect would seem to indicate that humans don't do it well.

Replies from: Manfred
comment by Manfred · 2012-10-01T08:00:40.412Z

If you're so smart, why ain't you a rock? :P

And yeah, at some level you have to check whether you have actually proved what the UDT agent will do - if you prove it you're safe, and if you don't you're not. The trouble is that the check for the proof can contain all the steps of the proof itself, in which case you might get things wrong because your search wasn't checking itself! So one way is to check for the proof in a way that doesn't correlate with the specific proof: "Did I check any proofs? No? Better not trust anyone."
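
To make that concrete, here's a minimal sketch (mine, not anything from the thread; the theorem table and the budget parameter are invented stand-ins): the trust decision looks only at whether a verified proof object came back, never at the contents of the search, so it defaults to distrust whenever the search came up empty or was skipped.

```python
# Toy version of "trust only if you actually finished a proof".
# bounded_proof_search is a hypothetical stand-in for a real bounded
# proof search; the trust rule is decoupled from how the search works.

def bounded_proof_search(claim, budget):
    """A real search would enumerate candidate proofs up to `budget`
    steps; here we just look the claim up in a tiny 'theorem' table."""
    known_theorems = {"2 + 2 = 4"}
    return ("proof-object", claim) if claim in known_theorems else None

def trust(claim, budget=100):
    proof = bounded_proof_search(claim, budget)
    # The decision only asks whether a proof object exists:
    # "Did I check any proofs? No? Better not trust anyone."
    return proof is not None

print(trust("2 + 2 = 4"))                     # True
print(trust("the UDT agent keeps its word"))  # False: no proof found
```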

comment by DanielLC · 2012-09-30T22:21:20.145Z

"Honor" is not well-defined. He could have meant UDT. He also could have meant something closer to always keeping his promises. Someone using UDT wouldn't keep a stupid promise. They wouldn't make a stupid promise either, but they might not always have been using UDT. They also might not keep a promise to someone insufficiently rational, though that could cause problems with people who aren't sure they're sufficiently rational, people who find out about the broken promise later, etc.

Also, the quote makes it sound like he's choosing honor over the world. The way Eliezer sees it, he's choosing to have the opportunity to save the world.

Replies from: MixedNuts
comment by MixedNuts · 2012-10-01T09:46:02.879Z

I meant "honor" as shorthand for "never ever break your word", yes.

Should one break stupid promises? I don't want to add a clause to every promise saying "unless my future self decides the promise is stupid"; this seems to way underpower oaths.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-01T14:50:09.905Z

It seems reasonable to me that if I make stupid promises, my subsequent choices are to behave stupidly or behave (as the word is being used here) dishonorably. Those aren't great choices, but that shouldn't surprise me: stupid acts often result in not-so-great consequences.

Replies from: MixedNuts
comment by MixedNuts · 2012-10-01T20:07:25.385Z

Recovering from stupid choices is a practical question for many of us.