[SEQ RERUN] Heading Toward Morality

post by MinibearRex · 2012-06-09T05:41:45.225Z · LW · GW · Legacy · 18 comments


Today's post, Heading Toward Morality, was originally published on 20 June 2008. A summary (taken from the LW wiki):

 

A description of the last several months of sequence posts that identifies the topic Eliezer actually wants to explain: morality.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was LA-602 vs RHIC Review, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

18 comments

Comments sorted by top scores.

comment by Lukas_Gloor · 2012-06-13T23:07:18.428Z · LW(p) · GW(p)

What irritates me about this post is that Yudkowsky just seems to assume without questioning (at least not in that article and related ones) that we ought to be concerned about human morality. In "Fake Utility Functions", he argues that hedonistic utilitarianism fails to do justice to all the complex human values. But that's not the goal utilitarians wanted to achieve; that's not their view of ethics. Ethics should be independent of the evolutionary psychology of Homo sapiens. Self-aware beings could have ended up with different values. What are the meta-criteria by which we should decide what values to have in the first place? Hedonistic utilitarians answer that what matters, ultimately, can only be conscious experience. Yudkowsky seemed to assume that hedonistic utilitarians thought that humans must want to be hedonistic utilitarians deep down. But that doesn't need to be the case at all. Human ethical intuitions could well be more misguided than Yudkowsky acknowledges anyway (e.g. many people have strong intuitions against some of the consequences of consequentialism). Yudkowsky's dismissal of the One Great Moral Principle thus seems hasty. Toby Ord made a similar point in the comments to "Fake Utility Functions". (I don't want to advocate classical utilitarianism here, because I think there are reasons that speak against happiness being the relevant criterion; I just wanted to point out that more thought should be given to this foundational issue of ethics.)

comment by ZZZling · 2012-06-09T08:39:46.865Z · LW(p) · GW(p)

Don't tell me you want to figure out how to "program" moral behavior :)

Replies from: Manfred
comment by Manfred · 2012-06-09T16:07:22.936Z · LW(p) · GW(p)

No, no. We want to figure out how to program moral behavior. "Programming" it would be much harder.

Replies from: ZZZling
comment by ZZZling · 2012-06-10T08:40:16.177Z · LW(p) · GW(p)

Are you serious? Do you really think that morality can be programmed on computers? Good luck then. Pursuing even unrealistic goals can yield useful results. At the least, your effort will mark more clearly the boundaries and limitations of the computer-programming approach to the AI problem.

Replies from: wedrifid
comment by wedrifid · 2012-06-10T19:54:57.366Z · LW(p) · GW(p)

Required reading: The Generalized Antizombie Principle

Replies from: ZZZling
comment by ZZZling · 2012-06-10T22:39:04.497Z · LW(p) · GW(p)

I think you misunderstood my point here.

But first: yes, I skimmed through the recommended article, but I don't see how it fits in here. It's an old, familiar dispute about philosophical zombies. My take on this: the idea of such zombies is rather artificial. I think it is advocated by people who have problems understanding the mind/body connection. These people are dualists, even if they don't admit it.

Now about morality. There is a good expression in the article you referenced: high-level cognitive architectures. We don't know yet what this architecture is, but this is the level that provides the categories and the language one has to understand and adopt in order to understand high-level mind functionality, including morality. Programming languages are way below that level and not suitable for the purpose. As an illustration, imagine that we have a complex expert system that performs extensive database searches and sophisticated logical inferences, and then we try to understand how it works in terms of the gates, transistors, and capacitors that operate on a microchip. It will not work! The same goes for trying to program morality. How would one do this? Write a function like bool isMoral(...)? You pass parameters that represent a certain life situation and it returns true or false for moral/immoral? That seems absurd to me. The best use of programming for AI that I can think of is to write software that models the behavior of neurons. From there it is still a long way up to high-level cognitive architectures, and only then, morality.
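For concreteness, the strawman ZZZling is mocking can be written down as a sketch. Every name and situation string below is hypothetical; the point of the sketch is only to make visible why a direct, lookup-style isMoral cannot scale to open-ended life situations.

```python
# A literal reading of the rhetorical "bool isMoral(...)": a lookup
# table over hand-written situation descriptions. All entries are
# hypothetical; enumerating real-life situations this way is hopeless.
MORAL_VERDICTS = {
    "return a lost wallet to its owner": True,
    "keep a lost wallet you could return": False,
}

def is_moral(situation: str) -> bool:
    """Return a verdict by table lookup; fail on anything not enumerated."""
    if situation not in MORAL_VERDICTS:
        raise NotImplementedError(f"no verdict encoded for {situation!r}")
    return MORAL_VERDICTS[situation]

print(is_moral("return a lost wallet to its owner"))  # True
```

The sketch fails in exactly the way ZZZling predicts: any situation outside the hand-coded table raises an error, and no finite table covers the space of situations a mind actually encounters.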

Replies from: wedrifid
comment by wedrifid · 2012-06-10T23:54:40.018Z · LW(p) · GW(p)

I think you misunderstood my point here.

I was responding directly to this claim:

Are you serious? Do you really think that morality can be programmed on computers? Good luck then.

... which I would not make due to the violation of GAP.

Regarding the somewhat weaker claim "programming morality into computers would be very hard" we may have less disagreement. My expectation is that even with the best human minds dedicated to 'programming morality into computers', after first spending decades of research into those 'high-level architectures', they are still quite likely to make a mistake and thereby kill us all.

Replies from: ZZZling
comment by ZZZling · 2012-06-11T00:40:11.255Z · LW(p) · GW(p)

I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack. I think you understand my point now (at least partially) and can see how weird ideas such as programming morality look to me. I now realize there may be many people here who take these ideas seriously.

Replies from: wedrifid
comment by wedrifid · 2012-06-11T00:46:51.962Z · LW(p) · GW(p)

I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack.

GAP: The Generalized Antizombie Principle, as mentioned in the preceding comments. (Perhaps I should have included the 'Z'.) You have made no social violation and there is nothing personal here; it is just a factual claim dismissed due to a commonly understood principle.

Replies from: ZZZling
comment by ZZZling · 2012-06-11T01:11:24.790Z · LW(p) · GW(p)

I think I understand now why you keep mentioning GAP. You thought that I objected to the idea of morality programming due to the zombie argument. Sort of: we would create only a morality-imitating zombie, rather than a real moral mind, etc. No, my objection is not about this. I don't take zombies seriously and don't care about them. My objection is about a hierarchy violation: programming languages are not the right means to describe/implement high-level cognitive architectures, which will be the basis for morality and other high-level phenomena of mind.

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2012-06-11T01:19:33.638Z · LW(p) · GW(p)

Hm.
If I implement a neural-networking algorithm on my computer and present it with a set of prototypical images until it reliably recognizes pictures of rabbits, would you say I have not programmed my computer to recognize rabbits?
If so, what verb would you use to describe what I've done?
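TheOtherDave's rabbit example can be sketched as a minimal perceptron. The four features standing in for image pixels are entirely hypothetical (say: long ears, fluffy tail, whiskers, barks); the sketch only shows the shape of "present examples until it reliably recognizes rabbits".

```python
# Toy stand-ins for "images": hypothetical feature vectors.
RABBITS = [(1, 1, 1, 0), (1, 0, 1, 0), (1, 1, 0, 0)]
NON_RABBITS = [(0, 0, 1, 1), (0, 1, 0, 1), (0, 0, 0, 1)]

def train(positives, negatives, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights after each mistake."""
    weights = [0.0] * len(positives[0])
    bias = 0.0
    examples = [(x, 1) for x in positives] + [(x, 0) for x in negatives]
    for _ in range(epochs):
        for x, label in examples:
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            error = label - (1 if activation > 0 else 0)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def looks_like_rabbit(x, weights, bias):
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0

weights, bias = train(RABBITS, NON_RABBITS)
print(all(looks_like_rabbit(x, weights, bias) for x in RABBITS))      # True
print(any(looks_like_rabbit(x, weights, bias) for x in NON_RABBITS))  # False
```

Notice that the programmer writes only the update rule; the weights themselves are shaped by the labelled examples, which is exactly the ambiguity the following comments wrestle with.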

Replies from: ZZZling
comment by ZZZling · 2012-06-11T01:59:12.101Z · LW(p) · GW(p)

You've implemented a neural network (rather simple) and made it to self-organize to recognize rabbits. It self-organized following outside sensory input. (That is only one direction of information flow; the other direction would be sending controlling impulses to the network's output, so that those impulses affect what kind of input the network receives.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-11T02:55:32.990Z · LW(p) · GW(p)

OK.

Now, suppose I want a term that isn't quite so specific to a particular technology, a particular technique, a particular style of problem solving. That is, suppose I want a term that refers to a large class of techniques for causing my computer to perform a variety of cognitive tasks, including but not limited to recognizing rabbits.

If I'm understanding you correctly, you reject the phrase "I program the computer to perform various cognitive tasks" but might endorse "I made the computer self-organize to perform various cognitive tasks."

Have I understood you correctly?

Replies from: ZZZling
comment by ZZZling · 2012-06-11T03:56:42.596Z · LW(p) · GW(p)

Well, it's not that I made it to self-organize; it is information coming from the real world that did the trick. I only used a conventional programming language to implement a mechanism for such self-organization (a neural network). But I'm not programming the way this network is going to function. It is rather "programmed" by reality itself. Reality can be considered a giant supercomputer constantly generating consistent streams of information. Some of that information is fed to a network and makes it self-organize.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-11T13:16:41.616Z · LW(p) · GW(p)

Fair enough... so, OK, "I made it to self-organize" isn't right either.

That said, I'll point out that that was your own choice of words ("You've implemented a neural network [..] and made it to self-organize").

I mention this, not to criticize your choice of words, but to point out that you have experience with the dynamic that causes people to choose a brief not-quite-right phrase that more or less means what they want to express, rather than a paragraph of text that is more precise.

Which is exactly what's going on when people talk about programming a computer to perform cognitive tasks.

I could have challenged your word choice when you made it (just like you did now, when I echoed it back), but I more or less understood what you meant, and I chose to engage with your approximate meaning instead. Sometimes that's a helpful move in conversations.

Replies from: ZZZling
comment by ZZZling · 2012-06-12T03:12:38.461Z · LW(p) · GW(p)

Yes, there is some ambiguity in the use of words; I noticed it myself yesterday. I can only say that you understood it correctly and made the right move! OK, I'll try to be more accurate in using words (sometimes it is not simple, and requires time and effort).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-12T14:06:23.273Z · LW(p) · GW(p)

I agree completely that it's not simple and requires time and effort.
I am, as I said explicitly, not criticizing your choice of words.
I'm criticizing your listening skills.

This whole thread got started because you chose to interpret "programming morality" in a fairly narrow way to mean something unreasonable, and then chose to criticize that unreasonable thing.

I am suggesting that next time around, you can profitably make more of an effort as a listener to meet the speaker halfway and think about what reasonable thing they might have been trying to express, rather than interpret their words narrowly to suggest something unreasonable.
Just as you value others doing the same for you.

comment by wedrifid · 2012-06-11T01:30:45.810Z · LW(p) · GW(p)

I think I understand now why you keep mentioning GAP.

Did you correctly infer that it is primarily because that post and the surrounding posts in the associated sequence appeared in my playlist while I was at the gym today? That would have been impressive.

(If I hadn't been primed I might have ignored your comment rather than replying with the relevant link.)

You thought that I objected to the idea of morality programming due to the zombie argument.

The other direction. Your objection (as it was then made) was a violation of the aforementioned GAZP, so I rejected it.