Comments

Comment by Ultima on Rationality Quotes: January 2011 · 2011-01-28T14:54:38.706Z · LW · GW

Good point.

Comment by Ultima on Rationality Quotes: January 2011 · 2011-01-27T02:27:05.242Z · LW · GW

Isn't that exactly what we do here (and on other forums)?

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-26T16:55:45.754Z · LW · GW

Wasn't that line (the one saying that the hangman comes for him on Wednesday) just supposed to be an example? I didn't think that the problem required the hangman to come on Wednesday; I thought that it left open when he would actually come.

(And, no, I'm definitely not making fun of you.)

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-26T01:23:00.411Z · LW · GW

Wait, why on Wednesday?

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T22:56:33.173Z · LW · GW

What if the prisoner is still alive on Thursday afternoon?

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T22:18:50.636Z · LW · GW

the warden was correct in the end

Where?

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T22:10:09.151Z · LW · GW

Oh okay.

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T22:04:03.673Z · LW · GW

No, but I can accurately predict that a coin will come up heads or tails.

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T22:02:17.646Z · LW · GW

Oh, I see.

At first, I missed the significance of this passage:

The warden's statement is then false and unparadoxical. This is similar to the one-day analogue, where the warden says "You will be executed tomorrow at noon, and will be surprised" and the prisoner says "wtf?".

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T21:45:11.869Z · LW · GW

Then here's an analogous "paradox":

  • There were two men standing in front of me. One said that the ground was red, and the other said that it was blue. Neither of them is ever wrong.

So, yeah, that's why I said that it sounds like a garden-variety contradiction.

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T21:08:05.811Z · LW · GW

Sounds like a garden-variety contradiction.

(If it's past noon on Thursday, obviously he would know that it's going to come the next day at noon; the warden simply would have been wrong.)

Or am I still misunderstanding it?

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T20:59:03.684Z · LW · GW

For the purposes of the problem, to be surprised just means that something happened to you which you didn't predict beforehand.

Okay, I understand that.

It's not the usual definition (among other things, it implies I should be 'surprised' if a coin I flip comes up heads), but presumably whoever first came up with the paradox couldn't think of a better word to express whatever they meant.

But I don't understand that.

I mean, why couldn't I simply predict that it would be heads or tails?

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T20:53:09.069Z · LW · GW

Perfect!

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T20:49:21.546Z · LW · GW

Then wouldn't the prisoner be surprised no matter what?

But, wait, when exactly are we judging whether he's surprised?

Let's say that it's Thursday afternoon, and he's sitting around saying to himself, "I'm totally surprised that it's going to come on Friday. I was online reading about this exact situation, and I thought that it couldn't come on Friday, because it would be the last available day, and I would know that it would be coming." Or are we waiting for that surprise to dissipate and turn into "well, I guess that I'm going to die tomorrow"?

From what I can see at this point, I think that the "paradox" comes from an equivocation between those two situations (being surprised right after it doesn't happen, and then having that surprise dissipate into expectation). But I could be wrong.

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T20:31:16.522Z · LW · GW

Sure, you would be surprised that you were about to die now.

But would you also be surprised that your life didn't end up being eternal? No, because you know that you will die someday.

But what's the significance of this distinction for this problem? Well, I don't understand how the prisoner could think anything other than, "I guess that I'm going to end up dead one of these days around noon (Monday, Tuesday, Wednesday, Thursday, or Friday)." It's not like he has any reason to think that it would be more likely to happen on one of the days rather than another. But, in your example, you do have a reason for that (dying now would be less likely than dying later).

But, wait, isn't that the whole issue in contention (whether he has any reason to think that it would be more likely to happen on one of the days rather than another)? Yeah, so let me get back to that.

Let's say that the hangman shows up on the first day at noon (Monday). Would the prisoner be "surprised" that it was Monday rather than one of the other days? Why would he? He wouldn't have any information besides that it would be on one of those days. Or let's say that the hangman shows up on the second day at noon (Tuesday). Would the prisoner be "surprised" that it was Tuesday instead of one of the other days? I mean, why would he? He wouldn't have any knowledge except that it would be on one of the remaining four days.
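
(To make that concrete, here's a minimal sketch of the prisoner's epistemic situation. The uniform prior and the update-by-surviving-each-noon rule are my assumptions for illustration, not anything from the post.)

    # Toy sketch (my assumption, not the post's): the prisoner starts with
    # a uniform prior over the five days and updates only by conditioning
    # on having survived each noon so far.
    days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

    for i, day in enumerate(days):
        remaining = len(days) - i  # days still possible at this noon
        p = 1.0 / remaining        # P(executed today | alive so far)
        print(f"{day}: P(executed at noon) = 1/{remaining} = {p:.2f}")

On this picture, every day but Friday leaves him genuinely uncertain beforehand; only on Friday, having survived Thursday, does the probability hit 1, which is the same point as the Thursday-afternoon case above.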

I'm completely confused by this "paradox".

Maybe you could help me out?

Comment by Ultima on Resolving the unexpected hanging paradox · 2011-01-25T19:57:53.886Z · LW · GW

I'm totally confused.

Why would anybody in that situation ever be surprised?

I mean, they would know that somebody will execute them at noon on one of the days (Monday, Tuesday, Wednesday, Thursday, or Friday). No matter what day it comes on, why would they be surprised? If it comes at noon on Monday, they would think, "Oh, it's noon on Monday, and I'm about to die; nothing surprising here." If it doesn't come at noon on Monday, they would think, "Oh, it's past noon on Monday, and I'm not about to die; nothing surprising here (I guess that it will come on one of the other days)." Or whatever.

(Assuming that the warden told the truth, and that the prisoner believed him.)

Comment by Ultima on Some rationalistic aphorisms · 2011-01-25T19:40:43.057Z · LW · GW

Instead of jumping to such a reckless conclusion, why don't you just ask him what he means by "ontologically a verb"?

Comment by Ultima on Perfectly Friendly AI · 2011-01-24T20:23:32.845Z · LW · GW

Solid answer, as far as I can see right now.

Comment by Ultima on Perfectly Friendly AI · 2011-01-24T20:18:41.884Z · LW · GW

That's my understanding of what I value, at least.

Well, I'm not so sure that those words (the ones that I used to summarize your position) even mean anything.

How could you value their wanting to kill you somewhat (which would be you feeling some desire while cycling through a few different instances of imagining them doing something that leads to your death), but also value your not dying even more (which would be you feeling even more desire while moving through a few different instances of imagining yourself being alive)?

It would be like saying that you value going to the store somewhat (which would be you feeling some desire while cycling through a few different instances of imagining yourself traveling to the store and getting there), but value not actually being at the store even more (which would be you feeling even more desire while moving through a few different instances of imagining yourself not being at the store). But would that make sense? Do those words (the ones making up the first sentence of this paragraph) even mean anything? Or are they just nonsense?

Simply put, would it make sense to say that somebody could value X+Y (where the addition sign refers to adding the first event to the second in a sequence) but not value Y (which is a part of the X+Y that the person apparently values)?

Comment by Ultima on Perfectly Friendly AI · 2011-01-24T20:01:06.757Z · LW · GW

I assign utility to their values even if they conflict with mine to such a great degree, but I have to measure that against the negative utility they impose on me.

So, as to the example, you would value their wanting to kill you somewhat, but you would value not dying even more?
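
(Here's a toy sketch, purely to pin the question down; the weighting scheme, the function name, and all the numbers are invented for illustration, and neither of us has committed to any of them.)

    # Toy model of "I assign utility to their values, but I have to measure
    # that against the negative utility they impose on me." The altruism
    # weight and the payoffs are invented for illustration only.
    def net_utility(own_payoff, their_satisfaction, cost_imposed_on_me, weight=0.1):
        # Own payoff, plus a discounted term for their satisfied desires,
        # minus whatever their pursuing those desires costs me.
        return own_payoff + weight * their_satisfaction - cost_imposed_on_me

    # Their desire to kill me counts for a little (+0.1 * 10 = 1),
    # but my dying counts against that far more (-1000):
    print(net_utility(own_payoff=0, their_satisfaction=10, cost_imposed_on_me=1000))
    # prints -999.0

The arithmetic goes through, sure; my question is whether that first term (valuing their wanting to kill you "somewhat") actually means anything.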

Comment by Ultima on Perfectly Friendly AI · 2011-01-24T19:52:46.734Z · LW · GW

I suspect that quite a few humans (including me) would be horrified by the actual implications of their own values being maximally realized.

How exactly could you be "horrified" about that unless you were comparing some of your own values being maximally realized with some of your other values not being maximally realized?

In other words, it doesn't make sense (doesn't even mean anything!) to say that you would be horrified (isn't that a bad thing?) to have your desires fulfilled (isn't that a good thing?), unless you're really just talking about some of your desires conflicting with some of your other desires.

Comment by Ultima on Perfectly Friendly AI · 2011-01-24T19:42:45.902Z · LW · GW

Things like me getting killed in the course of satisfying their utility functions, as you mentioned above, would be a big one.

So basically your "utility function assigns value to the desires of beings whose values conflict with your own" unless they really conflict with your own (such as getting you killed in the process)?

Comment by Ultima on Perfectly Friendly AI · 2011-01-24T19:21:29.555Z · LW · GW

Note: I get super redundant after like the first reply, so watch out for that. I'm not trying to be an asshole or anything; I'm just attempting to respond to your main point from every possible angle.

For the purposes of discussion on this site, a Friendly AI is assumed to be one that shares our terminal values.

What's a "terminal value"?

My utility function assigns value to the desires of beings whose values conflict with my own.

Even for somebody trying to kill you for fun?

I can't allow other values to supersede mine, but absent other considerations, I have to assign negative utility in my own function for creating negative utility in the functions of other existing beings.

What exactly would those "other considerations" be?

I have to assign negative utility in my own function for creating negative utility in the functions of other existing beings.

Would you be comfortable being a part of putting somebody in jail for murdering your best friend (whoever that is)?

I'm skeptical that an AI that would impose catastrophe on other thinking beings is really maximizing my utility.

What if somebody were to build an AI for hunting down and incarcerating murderers?

Would that "maximize your utility", or would you be uncomfortable with the fact that it would be "imposing catastrophe" on beings "whose desires conflict with [your] own"?

It seems to me that to truly maximize my utility, an AI would need to have consideration for the utility of other beings.

What if the "terminal values" (assuming that I know what you mean by that) of those beings made killing you (for laughs!) a great way to "maximize their utility"?

Perhaps my utility function gives more value than most to beings that don't share my values

But does that extraordinary consideration stretch to the people bent on killing other people for fun?

However, if an AI imposes truly catastrophic fates on other intelligent beings, my own utility function takes such a hit that I cannot consider it friendly.

Would your utility function take that hit if an AI saved your best friend from one of those kinds of people (the ones who like to kill other people for laughs)?

Comment by Ultima on Some rationalistic aphorisms · 2011-01-24T19:11:21.933Z · LW · GW

I don't know about everybody else, but I'm totally confused.

I mean, if it's visual, why don't you just make us a picture or a video, instead of relying on words (which aren't very good for this purpose)?

Comment by Ultima on Some rationalistic aphorisms · 2011-01-24T19:01:21.319Z · LW · GW

Something like this?

(Where each of the shaded rectangles is one of the candy bars.)

And, as to whether there's an analogue of this in actual sight, of course there's not (if I know what you mean), but that doesn't mean that it's an uncommon thing. Just look out into your room (or wherever you are) and imagine that something (a dog, say) is there. What's the difference between the actual scene and the imagined scene? Well, the actual one is much more vivid, and the imagined one much less so. It's not that you're only seeing the real situation and not seeing the imagined one; it's simply that the real one is much more forceful to your mind than the imagined one. The imagined one is "superimposed" over the other; you can see both.

So is that how the candy bars were stacked on top of each other on your visual field?

Comment by Ultima on Politics is a fact of life · 2011-01-23T18:08:20.731Z · LW · GW

remaining politics-free keeps us more level-headed about other issues, and because it avoids driving away the rather sizable population of rationalists who hate politics.

Is everybody here interested in everything here?

Probably not, so why shouldn't the people who don't like politics just avoid the political stuff (the same way that people not interested in AI ignore the AI posts)?

Comment by Ultima on Politics is a fact of life · 2011-01-23T18:01:56.239Z · LW · GW

How could the LWers learn to discuss it rationally if you keep it out right from the outset? Isn't talking about something rationally a skill that you acquire? Sure, most people have no idea how to discuss politics rationally (and furthermore have no idea that they have no idea how to talk about it in a rational way), but that doesn't mean that they can't learn. If you keep your kid on a leash all day because you don't trust him to make the right decisions, what exactly is bound to happen once you get distracted and drop the leash for a few minutes?