Comments

Comment by Stephen_Weeks on Nonperson Predicates · 2008-12-27T20:16:00.000Z · LW · GW

It's not so much the killing that's an issue as the potential mistreatment. If you want to discover whether people like being burned, "Simulate EY, but on fire, and see how he responds" is just as bad an option as "Duplicate EY, ignite him, and see how he responds". This is a tool that should be used sparingly at best, and one that a successful AI shouldn't need.

Comment by Stephen_Weeks on Inner Goodness · 2008-10-24T02:03:23.000Z · LW · GW

I'm entertained to remember that one of the last things you said to me at Penguicon was that I'm evil. This post reminded me of that.

This is a really interesting post. It touches on one of my major disagreements with my church back when I was religious.

Comment by Stephen_Weeks on Ethics Notes · 2008-10-22T00:04:13.000Z · LW · GW

As for the idea of competing AIs, if they can modify each other's code, what's to keep one from just deleting the other?

Or, for that matter, from modifying the other AI so as to change the values and goals that govern how it modifies itself? A sort of indirect self-modification?

This problem seems rather harder than directly implementing a FAI.

Comment by Stephen_Weeks on Ethics Notes · 2008-10-22T00:02:26.000Z · LW · GW

Maybe Weeks is referring to "not wanting" in terms of not finally deciding to do something he felt was wrong, rather than not being tempted?

Not so. Back when I was religious, there were times when I wanted to do things that went against my religious teachings, but I refrained from them out of the belief that they would be harmful to me in some undefined-but-compelling way, not because they seemed wrong to me.

I've certainly felt tempted about many things, but the restraining factor is possible negative consequences, not ethical or moral feelings.

I don't recall ever wanting to do something I felt was wrong, or feeling wrong about something I wanted to do. At most I've felt confused or uncertain about whether the benefits would be greater than the possible harm.

The feeling of "wrong", to me, means "bad, damaging, negative consequences, harmful to myself or those I care about". The idea of wanting to do something with those qualities seems contradictory, but it's well established that many people do feel that way about things they want to do, so that part wasn't surprising to me.

Comment by Stephen_Weeks on Ethical Inhibitions · 2008-10-20T05:08:50.000Z · LW · GW

Psy-Kosh: I don't think I have, but I'm not very sure on that point. I don't remember ever wanting to do something that I felt was wrong but that would otherwise have no negative consequences. The part that was particularly unusual to me was the idea of something not only being "wrong", but universally unacceptable, as in:

If you have the sense at all that you shouldn't do it, you have the sense that you unconditionally shouldn't do it.

Comment by Stephen_Weeks on Ethical Inhibitions · 2008-10-20T04:04:11.000Z · LW · GW

This entire post is kind of surreal to me, as I'm pretty confident I've never felt the emotion described here. I guess this makes some behavior I've seen seem more understandable, but it's still strange to see this described as a human universal when I don't seem to have that response.

Is there a standard term for this that I could use to research it? I did some searching on Wikipedia with phrases used in the post, but I couldn't find anything.

Comment by Stephen_Weeks on Fighting a Rearguard Action Against the Truth · 2008-09-24T04:47:44.000Z · LW · GW

Are there any sources of more information on this convulsive effort that adult religionists go through upon noticing the lack of God?

Comment by Stephen_Weeks on Qualitative Strategies of Friendliness · 2008-08-30T07:23:15.000Z · LW · GW

Ian: the issue isn't whether it could determine what humans want, but whether it would care. That's what Eliezer was talking about with the "difference between chess pieces on white squares and chess pieces on black squares" analogy. There are infinitely many computable quantities that don't affect your utility function at all. The important job in FAI is determining how to create an intelligence that will care about the things we care about.

Certainly it's necessary for such an intelligence to be able to compute what humans want, but it's not sufficient.

Comment by Stephen_Weeks on Can Counterfactuals Be True? · 2008-07-24T21:29:39.000Z · LW · GW

You could always just juxtapose a box and an arrow: □→
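
For what it's worth, a minimal LaTeX sketch of that juxtaposition (assuming amssymb for \Box; the \boxarrow macro name is just an illustrative choice, not a standard command):

    \documentclass{article}
    \usepackage{amssymb}  % provides \Box
    % Juxtapose a box and an arrow as a binary "counterfactual" operator.
    \newcommand{\boxarrow}{\mathbin{\Box\mkern-1mu\rightarrow}}
    \begin{document}
    If the match had been struck, it would have lit: $S \boxarrow L$.
    \end{document}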

Comment by Stephen_Weeks on Thou Art Physics · 2008-06-06T14:54:48.000Z · LW · GW

So, can someone please explain exactly what "free will" is, such that the question of whether or not I have it has meaning? Every time I see people asking this question, it's presented as some intuitive, inherently obvious property, but I can't see how the world would be different if I do have free will or if I don't. I don't quite understand what the discussion is about.