Levels of Intelligence

post by draq · 2010-10-26T11:57:22.948Z · LW · GW · Legacy · 82 comments

Contents

  Level 1: Algorithm-based Intelligence
  Level 2: Goal-oriented Intelligence 
  Level 3: Philosophical Intelligence
82 comments

Level 1: Algorithm-based Intelligence

An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms. 

Level 2: Goal-oriented Intelligence

An intelligence of level 2 has an innate goal. It develops and finds new algorithms to solve a problem. For example, the paperclip maximizer is a level-2 intelligence.

Level 3: Philosophical Intelligence

An intelligence of level 3 has neither preset algorithms nor preset goals. It looks for goals, and for algorithms to achieve them. Ethical questions are applicable only to intelligences of level 3.
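
A minimal sketch of the first two levels (hypothetical Python agents, illustrative only and not part of the original post): a level-1 intelligence executes a fixed, innate rule, while a level-2 intelligence has a fixed goal and searches over strategies for reaching it. Level 3, an agent that must also choose the goal itself, is exactly the case that resists this kind of sketch, which is what the comments below debate.

    # Hypothetical illustration of levels 1 and 2 (not from the original post).

    def level1_agent(observation):
        """Level 1: a fixed, innate rule. No goal is represented anywhere."""
        if observation == "nutrient detected":
            return "move forward"
        return "tumble randomly"

    def level2_agent(state, strategies, count_paperclips):
        """Level 2: an innate goal (here, paperclips) plus a search over
        strategies. The goal is fixed; only the means are discovered."""
        return max(strategies, key=lambda s: count_paperclips(s(state)))

    # Toy usage: the level-2 agent picks whichever strategy yields more clips.
    print(level1_agent("nutrient detected"))                  # move forward
    best = level2_agent(0, [lambda n: n + 1, lambda n: n + 5],
                        count_paperclips=lambda n: n)
    print(best(0))                                            # 5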

82 comments

Comments sorted by top scores.

comment by nhamann · 2010-10-26T16:46:06.961Z · LW(p) · GW(p)

An intelligence of level 1 acts on innate algorithms, like a bacterium that survives using inherited mechanisms.

This suggestion seems disengaged from the biological literature. It has become known in recent years, for instance, that bacteria live very complicated social lives. From The Social Lives of Microbes:

It used to be assumed that bacteria and other microorganisms lived relatively independent unicellular lives, without the cooperative behaviors that have provoked so much interest in mammals, birds, and insects. However, a rapidly expanding body of research has completely overturned this idea, showing that microbes indulge in a variety of social behaviors involving complex systems of cooperation, communication, and synchronization.

See Quorum sensing for more details.

Also, I'm not sure we should call what bacteria do a level of "intelligence," even though they have wonderful and complicated social behavior. "Intelligence" in the context of AI typically is reserved for "cross-domain optimization power" or something like that, and bacteria seem to be lacking that.

Replies from: draq
comment by draq · 2010-10-26T16:50:32.184Z · LW(p) · GW(p)

My post isn't supposed to be biologically accurate. Bacteria make up a vast majority of organisms, and I do them wrong if I depict them as crude and simple. As part of my apology tour, I will start with my gut flora.

Replace "bacteria" with "secure hash algorithm".

Replies from: nhamann
comment by nhamann · 2010-10-26T17:14:36.028Z · LW(p) · GW(p)

My post isn't supposed to be biologically accurate.

My point is, if you're going to talk about bacteria in a way that characterizes them incorrectly, why talk about bacteria at all?

Replace "bacteria" with "secure hash algorithm".

You should do this in the post?

Replies from: draq
comment by draq · 2010-10-26T19:49:52.297Z · LW(p) · GW(p)

So your point is that I am wrong about bacteria. I agree; let's move on.

Replies from: nhamann
comment by nhamann · 2010-10-26T20:36:50.407Z · LW(p) · GW(p)

Agreed. I'm not sure there's much to gain from a taxonomy like yours, because there are too many details that have been abstracted away. Understanding intelligence is a difficult scientific problem, and we need a technical explanation of intelligence. It is not clear to me how one would extend what you've written into such an explanation.

comment by [deleted] · 2010-10-26T13:10:16.195Z · LW(p) · GW(p)

I have the sense that this may be too simple.

Are humans structurally distinguishable from paperclip maximizers?

Are "innate algorithms" and "finds new algorithms" really qualitatively different?

Replies from: draq
comment by draq · 2010-10-26T14:43:26.066Z · LW(p) · GW(p)

Well, a paperclip maximizer has an identifiable goal. What is the identifiable goal of humans?

Well, "finding new algorithms" aka learning may itself be a kind of algorithm, but certainly of a higher-level than a simple algorithms aka instinct or reflex. I think there is a qualitative difference between an entity that cannot learn and an entity that can.

comment by David_Allen · 2010-10-26T14:15:05.772Z · LW(p) · GW(p)

I sometimes consider this topic. I would phrase it "How can intelligence generally be categorized?" Ideally we would be able to measure and categorize the intelligence level of anything: for example, rocks, bacteria, ecosystems, suns, algorithms (AI), and aliens that are smarter than humans.

Intelligence appears to be related to the level of abstraction that can be managed. This is roughly what is captured in the OP's list. Higher levels of abstraction allow an intelligence to integrate input from broader or more complex contexts, to model and to respond to those contexts.

The level of intelligence will be very context dependent. There may not be a single way to rank intelligence, but many, each focused on different specific contexts.

When tool use comes into play, it may be hard to separate the intelligence built into the tool, from the intelligence of the tool user. It may not always be clear where the tool ends and the tool user begins.

Replies from: draq
comment by draq · 2010-10-26T14:57:35.843Z · LW(p) · GW(p)

I fully agree. There are many aspects of intelligence.

The reason I chose this categorization, assuming it is valid, is to highlight the aspect of intelligence that is relevant to ethics.

I think only a level-3 intelligence can be a moral agent. An intelligence that has an innate goal does not need to and cannot bother itself with moral questions.

comment by Emile · 2010-10-26T14:05:10.829Z · LW(p) · GW(p)

It looks for goals and algorithms to achieve the goald.

What criterion should it use to choose between goals?

(also, there's a typo)

Replies from: draq
comment by draq · 2010-10-26T14:50:56.923Z · LW(p) · GW(p)

Well, that's the point. The intelligence itself defines the criterion. Choosing goals presumes a degree of self-reflection that a paperclip maximizer does not have.

If a paperclip maximizer starts asking why it does what it does, then there are two possible outcomes. Either it realises that maximizing paperclips is required for a greater good, in which case it is not really a paperclip maximizer, but a "greater good" maximizer, and paperclip maximising isn't an end in itself.

Or it realises that paperclip maximising is absolutely pointless and there is something better to do. In that case, it stops being a paperclip maximiser.

So, to be and to stay a paperclip maximiser, it must not question the end of its activity. And that's slightly different from human beings, who are often asking about the meaning of life.

Replies from: Emile
comment by Emile · 2010-10-26T15:56:18.588Z · LW(p) · GW(p)

If a paperclip maximizer starts asking why it does what it does, then there are two possible outcomes. Either it realises that maximizing paperclips is required for a greater good, in which case it is not really a paperclip maximizer, but a "greater good" maximizer, and paperclip maximising isn't an end in itself.

In other words, if a paperclip maximizer isn't a paperclip maximizer, then it isn't a paperclip maximizer.

Or it realises that paperclip maximising is absolutely pointless and there is something better to do. In that case, it stops being a paperclip maximiser.

According to what criterion would it determine what constitutes "better"?

What you're describing isn't an agent that doesn't have a goal and decides on one. It's an agent that has something like a goal / a utility function / a criterion for "better" / morals (those are roughly equivalent here), and uses that to decide on sub-goals.
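
To make the point concrete, here is a toy sketch (hypothetical Python, not from the comment): an agent that appears to "choose its own goal" still needs some fixed criterion to rank candidate goals, and that criterion does exactly the work of a top-level utility function.

    # Hypothetical sketch: "goal selection" presupposes an unchosen criterion.

    def choose_goal(candidate_goals, criterion):
        """Pick the "best" goal -- but "best" is defined only by `criterion`,
        which plays the role of a fixed, top-level utility function."""
        return max(candidate_goals, key=criterion)

    # The maximizer that "realises" paperclips are pointless and switches to
    # something "better" was, all along, ranking goals by some other value.
    scores = {"maximize paperclips": 0.1, "maximize knowledge": 0.9}
    print(choose_goal(scores.keys(), criterion=scores.get))   # maximize knowledge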

I strongly recommend reading the metaethics sequence (if you haven't already).

Replies from: draq, draq
comment by draq · 2010-10-26T16:26:27.564Z · LW(p) · GW(p)

I think the problem is that while I believe in and presumed an absolute moral system, you don't.

comment by draq · 2010-10-26T16:24:30.402Z · LW(p) · GW(p)

I believe the problem is that while I believe in and presumed an absolute moral system, you don't.

Let's agree on a definition of morality/ethics, that it is what we should do to reach a desirable state or value, given that we both understand what "value" or "should" mean.

I think that morality exists as much as the physical world exists. If you believe that physical reality is absolute, then there is no reason to doubt that there is a consistent, absolute moral system. In our everyday life, we don't question the reality of the physical world, just as we always uphold a moral system (unless we are psychopaths). We have moral perception as much as we have physical perception.

Of course, concerning the physical world, we have established a methodology that is agreed upon by the vast majority of people. That is, we have a method by which we can determine what is false, if not what is true. So far, we do not have anything comparable in morality that is as easily understandable as the scientific method. That only means we cannot determine the moral system as precisely as the physical system we live in.

In summary, I believe that the moral world is as real as the physical world. However, I don't know the moral world completely, just as I don't understand the physical world completely. So I don't know what constitutes "better" in every possible situation, just as I don't know what constitutes "real" in every possible situation.

But I believe that there is one single right answer. Otherwise, it becomes quite confusing.

Replies from: AlexMennen, David_Allen, Pavitra, Emile
comment by AlexMennen · 2010-10-26T17:30:52.308Z · LW(p) · GW(p)

What would it mean for there to be an absolute moral system? Sure, we have moral perception, but it's primarily instinct that humans evolved to make cooperation easier. Your level 2 and level 3 intelligences are not different.

Replies from: draq
comment by draq · 2010-10-26T19:52:52.545Z · LW(p) · GW(p)

The absolute moral system I am talking about is as "absolute" as the physical world. Our perception of reality ("the absolute physical world") is also primarily an instinct that humans evolved to make life easier.

The difference between level 2 and level 3 intelligence is, using an analogy, like the difference between an intelligence that acts on postulated theories of the physical world and an intelligence that discovers new physical theories.

Replies from: AlexMennen
comment by AlexMennen · 2010-10-26T21:51:38.882Z · LW(p) · GW(p)

So are you defining morality as the behavior most conducive to cooperation?

Replies from: draq
comment by draq · 2010-10-26T22:02:05.764Z · LW(p) · GW(p)

If I understand correctly what you are saying, then the answer is no.

Morality is the system of normative rules, in contrast to the system of descriptive theories that we use to understand our physical world.

Replies from: AlexMennen
comment by AlexMennen · 2010-10-26T23:13:16.036Z · LW(p) · GW(p)

But there are many valid systems of normative rules. If there is an absolute morality, that means that one such system must be identified as special in some way. The thing that makes correct physics special over other possible descriptive theories is that, in this universe, it accurately predicts events. What about absolute morality makes it special as compared to other systems of normative rules?

Replies from: draq
comment by draq · 2010-10-27T15:38:52.611Z · LW(p) · GW(p)

As you know, there are different "valid" sets of theories regarding physical reality: the biblical view, the theories underlying TCM, the theories underlying homeopathy, the theories underlying chiropractic, and the scientific view. The scientific view is well established because there is an intersubjective consensus on the usefulness of its methodology.

The methods used in moral discussions are not nearly so rigidly defined as in science; they are called civil discourse. The arguments must be logically consistent, and the outcomes and conclusions of the normative theory must face the empirical challenge: if you can derive from your moral system that it is permissible to kill innocent children without any benefit, then there is probably something wrong.

Replies from: AlexMennen
comment by AlexMennen · 2010-10-27T23:16:16.316Z · LW(p) · GW(p)

What is it about killing innocent children without any benefits that means that a correct moral system cannot permit it? If it is a matter of opinion, then the moral system is not absolute. If it is something other than opinion, you have not identified what that thing is.

Replies from: draq
comment by draq · 2010-10-28T17:03:10.675Z · LW(p) · GW(p)

I feel that killing innocent children without any benefit is wrong. I reason about it, and within my normative system, it makes sense to believe that this is an absolute moral fact, and not mere opinion.

I see through a telescope a bright spot in the sky. I think it is the planet Saturn. I reason about it, and within my system of physical theories, it makes sense to believe that this is an absolute physical fact, and not mere opinion.

comment by David_Allen · 2010-10-28T15:31:47.182Z · LW(p) · GW(p)

But I believe that there is one single right answer. Otherwise, it becomes quite confusing.

There is no one single right answer, and yes it is quite confusing.

The simple reason for this is that everything operates within a context. Context creates meaning; in the absence of context, there is no meaning. This is the context principle.

Let's agree on a definition of morality/ethics, that it is what we should do to reach a desirable state or value, given that we both understand what "value" or "should" mean.

The meanings for "should" and "desirable state/value" will have to be established within a context. Outside of that context those terms may have different meanings, or may be meaningless.

By saying "Let's agree on a definition of morality/ethics" and "given that we both understand" you are attempting to establish a common context with the other commenters on LW. A common context provides shared meaning and opens a path for communication between disparate domains.

You say:

I believe in and presumed an absolute moral system

To me this implies that you believe in a moral system that can be applied to all contexts.

Given your rough definition of morality:

it is what we should do to reach a desirable state or value

I can think of contexts where morality is meaningless. For example electrons don't have desires and don't respond to the idea of should.

So morality can't be applied to all contexts, and so in that sense it can't be absolute.

In a previous post you seem to realize this to some extent:

You say:

I think only a level-3 intelligence can be a moral agent.

By this, level-1 and level-2 intelligences operate in morality-free contexts. They can't be moral or not-moral.

If you observe a paperclip maximizer engaged in not-moral behavior, you are labeling the behavior as not-moral from within your context. The paperclip maximizer's behavior does not have an inherent quality of moral or not-moral.

So what is your context for the "one single right answer"?

Replies from: draq
comment by draq · 2010-10-28T17:13:54.036Z · LW(p) · GW(p)

Is there anything absolute according to your definition?

Are numbers absolute? I can think of a context where numbers are meaningless. E.g. if I am talking about Picasso.

Is the physical reality absolute? I can think of a context where the physical reality isn't absolute. For example, if I am thinking of numbers.

Replies from: David_Allen
comment by David_Allen · 2010-10-28T19:04:21.731Z · LW(p) · GW(p)

Is there anything absolute according to your definition?

I'm not sure how to answer this. What do you mean by "absolute"?

Are numbers absolute? I can think of a context where numbers are meaningless. E.g. if I am talking about Picasso.

Numbers are symbols defined within some context. Certainly we have words for numbers, and so while talking about Picasso you could say "Picasso is three." From the context of the speaker, the word "three" gains meaning from its definition in the English language, from its position in the sentence, from the conversation as a whole, and from the prior experiences of the speaker. A listener may come away with a completely different meaning when she hears that sentence; it would depend on her context.

I would like to be clear on what you are asking. Perhaps you are thinking about numbers in terms like Plato's Theory of Forms?

Is the physical reality absolute? I can think of a context where the physical reality isn't absolute. For example, if I am thinking of numbers.

Our physical reality appears to be the common context that everything shares within our universe. For something to exist in our universe it must be physically manifest in some form.

The numbers you are thinking of are physically manifest. If you are thinking about numbers, the meaning of the numbers exists within the context of your mind's consciousness. Your consciousness is an abstraction running within the context of your brain. Your brain is implemented within the context of our physical reality. Ultimately some set of quarks in specific energy configurations is attributable to the numbers you are thinking of, but the direct relationship would be hard to pin down.

Context and content can be split arbitrarily, creating layers of abstractions.

If you are performing calculations on numbers in your mind, from within the context of the math abstraction you are using, it doesn't matter if you are performing the calculation or a computer is. Meaning within that context is substrate independent. But that meaning still will ultimately need a physical representation for it to exist.

Does this make physical reality absolute to you?

Replies from: draq
comment by draq · 2010-10-28T19:23:28.691Z · LW(p) · GW(p)

So morality can't be applied to all contexts, and so in that sense it can't be absolute.

I'm not sure how to answer this. What do you mean by "absolute"?

In the same sense you used to deny the existence of absolute morality.

Does this make physical reality absolute to you?

Using that definition, morality isn't as absolute as physical reality. Morality then only applies to self-reflective level-3 intelligences (cf. that comment of mine).

But why do you believe that everything happens within the context of physical reality?

Let me present you the Cartesian view (cf Mind-body dichotomy):

Mental phenomena and physical phenomena are in two different domains. Human beings exist in both due to "God" (who, for our purpose, does nothing else, so there is no way to test God empirically). In this view, God is the absolute context, while the physical reality isn't.

So is there any convincing reason why I should think that physical reality, rather than God, is absolute, other than the fact that many clever people think that way? I don't want to believe in an absolute system based on majority opinion.

Replies from: David_Allen
comment by David_Allen · 2010-10-29T17:31:59.306Z · LW(p) · GW(p)

Is there anything absolute according to your definition?

I'm not sure how to answer this. What do you mean by "absolute"?

In the same sense you used to deny the existence of absolute morality.

In the sense of "deny" as in "refuse to accept the truth of it", I did not deny the existence of absolute morality; I disproved it under a certain meaning of absolute. You have yet to show flaws in my reasoning or to counter with an alternate meaning of absolute under which absolute morality is valid.

Your original question was "Is there anything absolute according to your definition?" I need to rephrase this in terms of my definition of absolute: "Is there anything that has meaning in all contexts?" This avoids the confounding alternate senses of the word absolute.

The answer is no. This is because I can always find or generate a specific context that does not provide meaning to anything proposed to have meaning in all contexts. For most cases I could simply use electrons as the context. For example, electrons don't have property X, or are not influenced by X. X is meaningless to electrons. For the few cases where this fails I could use algebra. Algebra doesn't contain a meaning for X.

... Morality then only applies to self-reflective level-3 intelligences (cf. that comment of mine).

This allows us to get rid of the word absolute and to rephrase the problem as "Can the same morality be applied to all possible cases of level-3 intelligences?"

For the common meaning of morality I think that this simply can't be done. As I've been saying, it's all about context.

Eliezer Yudkowsky's Baby Eating Aliens highlights clashing moralities.

But why do you believe that everything happens within the context of physical reality?

Ideally I don't hold beliefs about anything that happens outside of physical reality. If you notice beliefs of that nature point them out and I'll reconsider them. You should feel free to always assume that I don't believe that my claims apply outside the physical universe.

So is there any convincing reason why I should think that physical reality, rather than God, is absolute...

I can't answer this question directly for several reasons. I don't know what would convince you. In your current context you may simply be unconvincible. Also, I've actually argued that nothing is absolute for a specific meaning of absolute, so I'm not inclined to now argue that physical reality is absolute.

However, I will try to say something about the belief in God so that I may learn something from your response.

The God hypothesis is indistinguishable from other stories that people have made up and could make up. This leads to the conclusion that God exists in the same way that Sherlock Holmes exists.

For example you might say "God created the universe. The existence of the universe is proof of God." I will respond, "Frud is a tuna sandwich I once made that had a special property, it created the universe, past and future. The existence of the universe is proof of Frud."

Every claim you make about God, I can make about a not-God. I can also state my counter claim in a way that makes it about an innumerable number of not-Gods. Every additional claim that you make about God leaves you with a single God, but allows me to multiply the innumerable not-Gods that collectively satisfy the same conditions.

Every piece of evidence that supports God also supports not-God, but supports vastly more not-Gods than God. For any level of precision the likelihood of God being true rounds to 0, and the likelihood of not-God rounds to 1. This is a general problem with non-scientific hypotheses.

So if you wish to believe in God, you will need to do so in the absence of evidence. In your context you might even find practical benefits from such a belief.

A rough answer to your original question: you should believe in physical reality over God, because God appears to exist as a story, and physical reality appears to actually exist.

Replies from: draq
comment by draq · 2010-10-29T17:59:57.603Z · LW(p) · GW(p)

Using that definition, morality isn't as absolute as physical reality.

Again, as I said, under your definition of absolute, which is that reality is absolute, I agree with your disapproval of my belief in absolute morality since morality is of a different quality than reality.

Our physical reality appears to be the common context that everything shares within our universe.

Your definition of absolute is plausible, but I do not share it. I think that mental phenomena exist independently from the physical world.

What makes me believe it? If I believe that mental phenomena vanish without the natural world, I could equally believe that the natural phenomena vanish without my mind (or "mental world"). To believe that one provides the context for the other is, I believe, an arbitrary choice. Therefore, I believe in their independent existence.

Concerning God: for many people, the God hypothesis is more than just the belief that the universe was created by some distant creator who does nothing else. God also intervenes in the world. So it is possible to test God's existence empirically. And for many Christians, this is apparently happening. Spend enough time with them, and they will tell you fantastic stories.

Personally, I don't believe in God.

Replies from: David_Allen
comment by David_Allen · 2010-10-29T19:50:42.430Z · LW(p) · GW(p)

Again, as I said, under your definition of absolute, which is that reality is absolute, I agree with your disapproval of my belief in absolute morality since morality is of a different quality than reality.

We are not connecting entirely on these points. I have in fact claimed explicitly that nothing is "absolute". I also said in my previous post:

Also, I've actually argued that nothing is absolute for a specific meaning of absolute, so I'm not inclined to now argue that physical reality is absolute.

So I'm surprised that this point did not come across.

To clarify, I am saying that physical reality appears to be the common context that everything shares within our universe. This context however only has meaning at a specific level of abstraction, the physical level. There are other levels of abstraction (contexts) for which reality has no meaning, just as there are contexts for which morality has no meaning.

Our physical reality appears to be the common context that everything shares within our universe.

Your definition of absolute is plausible, but I do not share it. I think that mental phenomena exist independently from the physical world.

This is not the definition of absolute that I have been working with. I will restate it. Something that is "absolute" has meaning in all contexts. I don't think physical reality qualifies. If you reread my previous comments with that in mind we might come closer to a common understanding of this conversation.

What makes me believe it? If I believe that mental phenomena vanish without the natural world, I could equally believe that the natural phenomena vanish without my mind (or "mental world"). To believe that one provides the context for the other is, I believe, an arbitrary choice. Therefore, I believe in their independent existence.

This is interesting; let's split this out to a new thread. I'll post a specific reply later.

For many people, the God hypothesis is more than just the belief that the universe was created by some distant creator who does nothing else. God also intervenes in the world.

My argument holds for that case as well.

So it is possible to test God's existence empirically.

For this to be done, the God hypothesis will need to be updated to make specific predictions and to exclude alternate explanations. As it stands the current hypothesis can't be tested scientifically.

And for many Christians, this is apparently happening. Spend enough time with them, and they will tell you fantastic stories.

I have spent that time and I am familiar with the stories. However, attribution of specific outcomes to God in retrospect falls prey to the argument that I made. For example I can attribute the same outcomes to Frud.

Replies from: draq
comment by draq · 2010-10-29T21:41:44.341Z · LW(p) · GW(p)

What I meant to say is "morality is as absolute as reality." I hope that clears everything up.

Given that I experience God or anything supernatural empirically, and I can reasonably exclude that I am suffering from hallucinations, then it is more probable for me to believe that the phenomenon was supernatural rather than an improbable quantum-mechanical phenomenon. Maybe what I call God is actually Frud. Maybe God "is a tuna sandwich I once made that had a special property, it created the universe, past and future." I don't expect to realise all of God's properties from a single experience.

Predictive power is not always required. Historians have quite a problem predicting things based on what they read about Caesar. You can't therefore say that there are no historical facts (facts as factual as in "objective" news reporting).

You point out a context that does not require predictive power, but you have not shown that this context is equivalent to testing for God's existence empirically. Without a common context, your example is irrelevant to the issue.

I don't get you. What is your understanding of "testing for God's existence empirically?"

Replies from: David_Allen
comment by David_Allen · 2010-10-31T05:49:52.465Z · LW(p) · GW(p)

Well draq, I'm removing myself from all threads of our conversation both current and proposed. I don't see enough benefit for me to go on.

I can't be sure that you read my comments with enough care for you to understand and appropriately respond. You missed points that I made repeatedly and explicitly.

I've pulled apart several of your ideas and demonstrated their problems. You didn't respond directly to my claims, and you didn't show flaws in my reasoning. Generally, in debate, this means you have conceded the point.

You seldom answered my direct questions. This made it very difficult for me to understand your point of view.

Instead, your responses were indirect and included new claims or questions. These responses led away from the topic at hand. You appeared to avoid any conversation that would actually shed light on your beliefs and allow them to be sorted into true/useful or false/harmful.

From our conversation and your other comments I have concluded that you generally aren't distinguishing between stories and reality, and are resistant to doing so. This gives me low confidence in your ability to come to sound conclusions.

comment by Pavitra · 2010-10-26T21:54:58.369Z · LW(p) · GW(p)

I believe in and presumed an absolute moral system

What form of evidence or argument would persuade you to change your mind?

Replies from: draq
comment by draq · 2010-10-26T22:00:33.487Z · LW(p) · GW(p)

What form of evidence or argument would persuade you to change your mind on the usefulness/validity of falsification?

What form of evidence or argument would persuade you to change your mind on your understanding of the physical reality?

Replies from: Pavitra, AlexMennen
comment by Pavitra · 2010-10-27T18:56:08.318Z · LW(p) · GW(p)

What form of evidence or argument would persuade you to change your mind on the usefulness/validity of falsification?

If the people around me that I consider intelligent and respectable said consistently that ideas don't need to be falsifiable, and if the people who rejected the falsification criterion could do useful and miraculous things like inventing telephones far more often than the pro-falsification-ists could, then I would conclude that falsificationism was bunk.

What form of evidence or argument would persuade you to change your mind on your understanding of the physical reality?

I don't understand the question. How is changing my mind on my understanding of the physical reality distinct from just changing my mind about any question at all?

Replies from: draq
comment by draq · 2010-10-27T19:59:58.709Z · LW(p) · GW(p)

I believe in an absolute moral system as much as I believe in the rules of mathematics and other ideas. We can debate whether ideas (or the physical reality for that matter) exist in the absence of a mind, but I guess that is not the point.

As long as we have values, desires, and dislikes and make judgements (which all of us do, and which is maybe a defining characteristic of the human being beyond the biological basics), and if we want to put these values into a logically consistent system, we have an absolute moral system.

So if I stop having any desires and stop making any judgements, then I may still believe in a moral system, as much as an agnostic won't deny the existence of God, but it would be totally irrelevant to me.

Replies from: Pavitra, jimrandomh
comment by Pavitra · 2010-10-28T20:57:31.772Z · LW(p) · GW(p)

Relevance is the right question. When dealing with purely abstract concepts like mathematics, it's useless to ask whether they exist. It's extraordinarily unlikely that any empirical evidence could persuade me that 1+1 does not equal 2, but I can realistically doubt whether the addition of natural numbers is a good model for counting clouds.

Similarly, the question should not be whether the absolute moral system you believe in is true or valid or genuinely universal, but rather whether it accurately and precisely models how you judge and desire.

Since you could stop having desires and making judgments without damaging your belief in your absolute moral system, it seems reasonable that you could alter them as well, or even that you have already done so. How sure are you that what you believe to be fundamentally morally right matches what you actually want?

Replies from: draq
comment by draq · 2010-10-29T15:30:55.038Z · LW(p) · GW(p)

Relevance is a good point.

Changing or ceasing to have desires damages my belief in an absolute morality as much as changing or ceasing to have sensory perception damages my belief in an absolute reality.

My belief in an absolute morality is as strong or as weak as my belief in an absolute reality. What matters is not whether morality or reality really exists, but that we treat them similarly. It is slightly dissonant to conduct science as if reality exists, but to become a relativist when arguing about morality.

In the end, it is not about what we should believe, but about how our thinking works. When thinking about anything normative, we automatically presume an absolute morality. At the least, we believe that arguments have to be logically consistent, and even if that is the only absolute thing we believe in, it would be an absolute morality. Otherwise, we are nihilists, which is certainly a tenable position.

Concerning relevance: using the same line of argument, there are also "absolute cuteness", "absolute beauty" and other "absolute things" (if we have a perception of them and there is some intersubjective consensus). They are probably somehow related to absolute morality; they may be subsets of a bigger system, since they are all mental phenomena. They are relevant to varying degrees, while morality and reality are two absolute things that matter to us a lot, unless we are nihilists.

Replies from: Pavitra
comment by Pavitra · 2010-10-29T19:03:51.684Z · LW(p) · GW(p)

Ah, I see where you're coming from.

My thesis (and, I think, the general consensus position on this site) is this: One's morality is a feature of one's individual brain, rather than of physics. In particular, one should not expect that other people -- and, especially, nonhuman other minds -- will deduce the same absolute morality that you believe in, no matter how intelligent they are. (A sufficiently intelligent mind might deduce "Draq believes that the absolute morality is X", but not "the absolute morality is X".)

Have you read No Universally Compelling Arguments?

Replies from: draq
comment by draq · 2010-10-29T19:47:07.810Z · LW(p) · GW(p)

A sufficiently intelligent mind might deduce "Draq believes that the absolute morality is X", but not "the absolute morality is X".

Would you still agree with the argument if you substitute "morality" with "reality"?

As I repeatedly said, morality is as absolute or relative as reality. So if you don't believe in an absolute reality either, then I can't convince you, nor do I want to, since relativism/nihilism is a perfectly tenable position.

I just think that it is very arbitrary to say one exists and the other one is made up.

And that is not how our everyday life works. We live in a world where we subconsciously accept the world around us as (absolutely) real, and we live in a world where we subconsciously accept values as (absolutely) real. If we value something, say "Pancakes are tasty/desirable", then we automatically think "It matters, what we like", which is itself a value.

Even if "it matters" is the only "moral" or mental perception we accept as absolute, then there is an absolute system.

"Something matters" cannot be explained descriptively (it does not have a meaning in physical terms), but has to be referred to within the value system. Therefore, the value system is self-referring and you cannot reduce it to sensory perception or scientific explanations.

Since we perceive both values and physical phenomena, I wonder why we regard one as absolute and the other one as relative.

Ah, I see where you're coming from.

By the way, where am I coming from?

Replies from: Pavitra
comment by Pavitra · 2010-10-30T03:17:15.686Z · LW(p) · GW(p)

Would you still agree with the argument if you substitute "morality" with "reality"?

No. As I said, I believe that the core of our disagreement is that, where you believe that there is a single fundamental morality to the entire universe, I believe that there is a separate arguably-fundamental-ish morality to each person (glossing over people with incoherent preferences, etc.).

I just think that it is very arbitrary to say one exists and the other one is made up.

I realize that I haven't really given any particular arguments for my position, so (assuming you didn't read the article I linked to) it's somewhat reasonable for you to think that we have a mere disagreement from first principles, with no particular reason to choose one position over the other.

Suppose someone disagrees with you about what's morally right. Does that mean they're wrong? Do you think that your moral conception is fundamentally valid and their moral conception is merely the product of a malfunctioning brain? I just think that it is very arbitrary to say one has a fundamental existence and the other one is made up.

How does your belief in an absolute morality constrain your anticipation; or, alternatively, what consequences does it have on your ethical decision-making? What evidence turns out differently, or what decisions will you make differently, as a consequence of morality being absolute rather than relative?

By the way, where am I coming from?

I can't explain you to you. Point at your feet and say aloud, "You are here."

I've tried to explain me to you; maybe that will help, if I'm sufficiently good at explaining and you're sufficiently good at understanding.

Replies from: draq
comment by draq · 2010-10-30T19:51:19.704Z · LW(p) · GW(p)

I did read the article "No Universally Compelling Arguments" you linked to and couldn't find any convincing rebuttal to my arguments.

I just read "Making Beliefs Pay Rent", and if I got it right, it says that science is good (and absolute) because it can predict things, while normative theories can't. That is a good point.

My belief in an absolute morality gives me the foundation to enquire into moral problems. I'll try to figure out what the "absolute good" is and try to live my life according to that.

We can predict and explain the "decision-making" of inanimate objects using scientific theories. We can understand the decision-making of moral agents (humans) using normative theories (we might be able to predict their actions using scientific theories, but we won't understand or *explain* it without normative theories).

What about alien intelligence? If we can establish an intersubjective consensus with them and we realise that they have a value system that we can understand, then we can use our own system of normative theories to understand and explain their "decision-making".

If we can't establish an intersubjective consensus with them, then we might be able to predict their actions using scientific theories, but we won't be able to understand their "motives". They would act according to an absolute AI-morality, to which we have no access lacking the intersubjective consensus with them.

To recap and rectify my argument: Intersubjective consensus about the physical world leads us to believe in an absolute physical reality. Intersubjective consensus about the moral/value world leads us to believe in an absolute morality. No intersubjective consensus -- no belief in anything absolute.

Maybe, and I believe, the moral world is an emergent property of the physical world. Thus, we might be able to use physical theories to predict the actions of moral agents within the physical world, but we won't be able to fully understand them using only physical theories, since these don't capture the emergent properties (values, desires, dislikes, et cetera).

Therefore, morality is not as absolute as reality, but it is analogously absolute. (That is/might be a correction to my current position.)

So, if alien intelligence has a value system that we can understand, then we live within the same absolute morality. If alien intelligence acts based on some other emergent properties we cannot understand, then well, bad luck. (Another addition to my current position, thanks to this discussion.)

I can't explain you to you. Point at your feet and say aloud, "You are here."

That's unfortunate, I thought you saw where I was coming from.

Replies from: Pavitra
comment by Pavitra · 2010-10-30T19:59:47.411Z · LW(p) · GW(p)

I can't explain you to you. Point at your feet and say aloud, "You are here."

That's unfortunate, I thought you saw where I was coming from.

Just because I can't explain an idea doesn't mean I don't understand it. I usually try to explain things by providing a diff between the mind I'm trying to explain to and the idea I'm trying to explain; the diff between your mind and itself is null, so I don't have anything to offer as an explanation.

we might be able to predict their actions using scientific theories, but we won't understand or *explain* it without normative theories

What's the difference between being able to predict something and understanding it?

Replies from: draq
comment by draq · 2010-10-30T20:35:27.560Z · LW(p) · GW(p)

You walk up to the fridge, get out a banana and eat it.

If I were Laplace's demon, I might be able to predict your actions (or not). But science does not explain what hunger and desire are; it can describe them using its own language, but the scientific language does not include any words to describe values. Hunger and desire have more qualities than just neuronal processes.

Anyway, the difference might be pointless, because Laplace's demon does not exist and we can't, even in principle, predict anything more complicated than a dozen atoms unless we have a fundamentally new theory of physics. In that case, the only things we have left are normative/value theories that help us to predict someone's actions.

Replies from: Pavitra
comment by Pavitra · 2010-10-30T21:07:37.683Z · LW(p) · GW(p)

This is a multiple-choice reply. Does explaining what hunger and desire is have any external consequences -- can we do anything with that understanding that we couldn't otherwise -- or is the benefit purely a state of mind?

1.

Well, I guess not. If you're the kind of soulless consequentialist that doesn't see any difference between a person and a p-zombie, then you're not capable of distinguishing real understanding from mere prediction.

I'm okay with that.

2.

Sure -- it allows us to actually predict the behavior of other humans, in a reasonable amount of thinking time, rather than having to do the ridiculous amounts of math to model their entire brain-states up from the quantum level, which would require a computer the size of Jupiter.

A high-level model still counts as a scientific explanation. As long as the hypothesis is falsifiable -- it predicts that certain things will happen, and certain things will not happen -- it doesn't have to be physics to be science.

Replies from: draq
comment by draq · 2010-10-31T21:19:58.599Z · LW(p) · GW(p)

Sorry, I am developing my ideas in the process of the discussion and I probably have amended and changed my position several times thanks to the debate with the LW community. The biggest problem is that I haven't defined a clear set of vocabulary (because I haven't had a clear position yet), so there is a lot of ambiguity and misunderstanding which is solely my fault.

Here is a short summary of my current positions. They may not result in a coherent system. I'm working on that.

1. Value system / morality is science

Imagine an occult Pythagorean who believes that only mathematical objects exist. So he/she wouldn't understand the meaning of electrons and gravitational forces because they cannot be fully expressed in mathematics. He/she would understand Coulomb's law and Newton's law of gravitation, but a physicist needs more than these mathematical equations to understand physics.
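
For reference, the two laws mentioned do have purely mathematical statements (standard textbook forms, added here only as an illustration; the commenter's claim is that a physicist needs more than these equations):

    \[
      F = k_e \frac{q_1 q_2}{r^2}  \quad\text{(Coulomb's law)}
      \qquad
      F = G \frac{m_1 m_2}{r^2}    \quad\text{(Newton's law of gravitation)}
    \]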

That is the difference between physicists and chemists on one side and mathematicians and string theorists (I have not the slightest idea about string theory, so regard this part as my modest attempt at humour) on the other side.

Analogously, you need to understand the value system to understand and possibly predict the actions of value agents (humans, animals, maybe AIs). Maybe the value system can be mathematised, or not.

But it would be a scientific explanation. I agree with you.

2. Something matters to me

We all have values. You asked whether the understanding of the value system has any external consequences or whether the benefit is purely a state of mind. I wonder why it matters to you to know the difference.

You may answer that thinking of these problems makes you biologically fitter, and that if you don't ask these questions, your kind will die out and those questions won't be asked.

But when you asked the question, you did not consider your biological fitness. And if you considered your biological fitness, then why does biological fitness matter to you? There is at least one thing that matters to you (assuming you are not a p-zombie), so at least the desire, "something matters to me", is real, as real as your knowledge of the world.

Assuming you are not a psychopath, your only desire is not your own survival; being empathetic, you also desire the well-being of your fellow animals, humans and sentient beings. And you know that your fellow human beings are empathetic (or act as if they are empathetic) as well. Ergo you can establish an intersubjective consensus and some common ground on what the good is.

3. Epistemology

Mental phenomena are of different qualities than natural phenomena. A desire is more than neuronal processes. You may read all the books on neurobiology, but you may learn more about desires by reading a single book by Nabokov. (If you think that you don't care, then please go back to point 2.) From here continue with the text diagram.

P.S. The computer that you would need to model the quantum states of a brain would be bigger than the universe; see [Kohn's Nobel Lecture](http://nobelprize.org/nobel_prizes/chemistry/laureates/1998/kohn-lecture.pdf).

Replies from: Pavitra
comment by Pavitra · 2010-11-01T12:45:51.714Z · LW(p) · GW(p)

1.

Imagine an occult Pythagorean who believes that only mathematical objects exist. So he/she wouldn't understand the meaning of electrons and gravitational forces because they cannot be fully expressed in mathematics.

Are we assuming this hypothetical occult Pythagorean is aware of historically post-Pythagorean pure-mathematical concepts like Conway's Game of Life?

It seems to me that electrons and gravitational forces can be fully expressed in mathematics. Since the equations describing their behavior are known, can't the Pythagorean simply consider a mathematical object whose behavior is defined by those formulas?

I suspect that we have different definitions of "understand", and that that is at the core of the debate. To me, understanding something is the same as being able to predict its behavior; you seem to have something additional in mind -- some sort of qualia, perhaps -- but I'm not sure what it is.

But it would be a scientific explanation. I agree with you.

I'm sorry, I've lost track of context on this one, and I can't figure it out even on rereading. What would be a scientific explanation, of what, under what circumstances?

2.

It may interest you to know that I consider p-zombies as capable of having things that matter to them. This suggests a mismatch of definition, probably of "matter".

You asked whether the understanding of the value system has any external consequences or whether the benefit is purely a state of mind. I wonder why it matters to you to know the difference.

I asked in order to determine whether, when you discussed whether a mind has such an understanding, you were making a distinction that mattered in reality, or whether you were just talking about qualia. I don't believe in qualia, you see.

you can establish an intersubjective consensus and some common ground on what the good is.

This is, I think, the strongest and most interesting point you've put forward. It should be possible to establish a science of preference that can predict how people will choose on moral questions; once the models are sufficiently accurate, humanity should be able to formalize our definition of "good". (Assuming, of course, that an intersubjective consensus exists. It could be that we just disagree about a bunch of stuff.)

3.

Mental phenomena are of different qualities than natural phenomena. A desire is more than neuronal processes.

No they're not.

You may read all the books on neurobiology, but you may learn more about desires by reading a single book by Nabokov.

Agreed.

From here continue with the text diagram.

I think there's an important difference between sensory perception and moral perception that you're glossing over: sensory perception is perceiving something out there that exists to be perceived, while moral perception reports only on itself. If sensory perception is a window, moral perception is a painting.

 +-------+   senses   +---------------------+
 |reality|----------->|perception of reality|
 +-------+            +---------------------+

+---------+  moral
|evolution| intuition +----------------------+
| of the  |---------->|perception of morality|
|  brain  |           +----------------------+
+---------+

Replies from: draq
comment by draq · 2010-11-02T17:04:03.645Z · LW(p) · GW(p)

1a.

An electron is not a mathematical object. If it were, then we wouldn't need chemists and physicists, but only mathematicians. A mathematical object does not have any behaviour, just as a word in a language does not have any behaviour.

Mathematics and logic are tautological systems with defined symbols and operations. We use mathematics to describe the physical world as much as we use language to describe the moral world (value system), e.g. in behavioural biology and psychology.

Would you agree that the value system is as absolute as the physical world if we can mathematise our normative theories as much as we have mathematised our physical theories?

1b.

An electron is not a mathematical object. Let's say, equation (1) describes the behaviour of an electron according to our current knowledge. Then you might say, the electron is a "mathematical object" contained in (1).

But what if an equation (2) is found that better describes an electron's behaviour? What happens to the "mathematical object"?

2.

I consider p-zombies as capable of having things that matter to them

What is your definition of "something matters"? As in "it matters to a stone up in the air to fall down to earth"? In that case, our definitions vary.

You seem to be a logical positivist, which is an incomplete world view. If your mind works the same way as mine does, then you should know that qualia exist. It is like walking up to a tree, saying "No, there is no tree in front of me", and then sidestepping it.

3.

I would in principle agree with your diagram of moral intuition. Let me present two models:

++++++++++++++++++++++++++++++

A. reality -- senses -- perception of reality

morality -- moral intuition -- perception of morality

B. evolution of the brain -- senses -- perception of reality

evolution of the brain -- moral intuition -- perception of morality

+++++++++++++++++++++++++++++

Why do you cherry-pick from the two categories? Is it because science is more mathematical, has a methodology that is more precise, and has a greater intersubjective consensus? Why do any of these make reality real and morality relative?

For more information on model 3B, look up "evolutionary epistemology".

Application case

I want to apply my theory of absolute morality to the design of Friendly AI.

Unless we can mathematise our value system, how can we make an AI friendly? We know Asimov's Laws of Robotics, but these laws are in the imprecise formulation of natural language. What do "injury", "human being", "robot", "harm", "obey", "protection" and other words mean? The outcome of such ambiguities is the defining plot element of The Metamorphosis and "I, Robot" (2004 film).

My solution: Design AIs with empathy and access to our intersubjective consensus of morality. If our current normative theories aren't completely wrong, then the absolute good does not require the annihilation of the human species.

You might say that having empathy does not automatically make an AI good, because it may have a wrong normative theory.

Therefore, make many AIs, let them evolve and battle it out. The good will win vs. evil, because being moral, that is having better normative theories, increases biological fitness. So the more moral an AI is, the greater its chance to survive.

Replies from: Pavitra, Pavitra, jimrandomh, Sniffnoy
comment by Pavitra · 2010-11-02T23:20:05.360Z · LW(p) · GW(p)

This is redundant, but the point is important and I don't want it to be overlooked because it's buried at the bottom of a long comment.

Therefore, make many AIs, let them evolve and battle it out. The good will win vs. evil, because being moral, that is having better normative theories, increases biological fitness. So the more moral an AI is, the greater its chance to survive.

If you do that, everyone will die.

comment by Pavitra · 2010-11-02T23:18:23.723Z · LW(p) · GW(p)

An electron is not a mathematical object. If it were, then we wouldn't need chemists and physicists, but only mathematicians.

Chemists and physicists tell us which mathematical objects we're made out of. They used to think it was integers, but it turns out it wasn't.

A mathematical object does not have any behaviour, just as a word in a language does not have any behaviour.

It has mathematical behavior. Words are not required to be well-defined. What distinction are you trying to make here?

Would you agree that the value system is as absolute as the physical world if we can mathematise our normative theories as much as we have mathematised our physical theories?

No, what follows from the hypothetical is that it would be possible to hold meaningful discussions about our normative theories, rather than just saying words. A theory can be rigorously well-defined and also wrong.

An electron is not a mathematical object. Let's say, equation (1) describes the behaviour of an electron according to our current knowledge. Then you might say, the electron is a "mathematical object" contained in (1).

The electron seems to be the mathematical object contained in (1). We will later discover that this is wrong.

But what if an equation (2) is found that better describes an electron's behaviour? What happens to the "mathematical object"?

(1) still "exists" (to the extent that mathematical objects "exist" independently in the Platonic World of Forms, which they don't, but it's a fairly useful approximation most of the time), but (1) is less useful to us now, so we don't spend as much time talking and thinking about it.

What is your definition of "something matters"? As in "it matters to a stone up in the air to fall down to earth"? In that case, our definitions vary.

It has a utility function; that is, it acts so as to optimize some variable. A rock isn't a very clever faller; it doesn't really optimize in any meaningful sense. For example, a rock won't roll up a two-foot ridge in order to be able to fall two hundred feet down the cliff on the other side.
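
A small illustration of that distinction (a hypothetical Python sketch over a one-dimensional terrain; none of it is from the comment): a rock only ever moves downhill locally, while even a crude optimizer will accept a short climb if that opens up a much larger drop.

    # Hypothetical sketch: a greedy "faller" (the rock) vs. a lookahead optimizer.
    terrain = [100, 102, 0, 100]   # heights; a 2-unit ridge hides a 102-unit drop
    start = 0

    def rock(pos):
        """Moves to an adjacent cell only if it is strictly lower; never climbs."""
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(terrain)]
        lower = [p for p in neighbors if terrain[p] < terrain[pos]]
        return min(lower, key=lambda p: terrain[p]) if lower else pos

    def optimizer(pos):
        """Picks the neighbor from which the lowest height is reachable within
        one more step, even if the first step goes uphill."""
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(terrain)]
        return min(neighbors,
                   key=lambda p: min(terrain[q]
                                     for q in (p - 1, p, p + 1)
                                     if 0 <= q < len(terrain)))

    print(rock(start))       # 0: stays put rather than climb the ridge
    print(optimizer(start))  # 1: climbs the ridge to reach the big drop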

You seem to be a logical positivist, which is an incomplete world view.

Not quite, but a near miss. I'm a reductionist.

If your mind works the same way as mine does, then you should know that qualia exist. It is like walking up to a tree, saying "No, there is no tree in front of me", and then sidestepping it.

Sure, qualia exist, in the same way that a car or a computer program exists. Qualia just don't have a separate, fundamental existence independent of the mere atomic mechanism of the neurons in the brain of the person experiencing the qualia.

After the components of a car have been assembled, you don't need to perform a ritual blessing over it to infuse the mere mechanism of engine and drive shaft and so forth with the ineffable essence of car-ness. It's already fully possessed of car-ness, simply by virtue of the physical mechanisms that make it up.

Likewise, I don't need an additional element -- spirit, soul, elan vital, ontologically intrinsic morality, whatever -- in order to infuse my brain with qualia. It's already fully possessed of qualia, simply by virtue of the physical mechanisms that make it up. If I survive long enough to get uploaded, I fully expect my uploaded copies to have their own qualia.

Why do you cherry-pick from the two categories?

...no.

Let me try again:

 +-------+    +------+     +---------------------+
 |reality|--->|senses|---->|perception of reality|
 +-------+    +------+     +---------------------+

            +---------+    +----------------------+
            |  moral  |--->|perception of morality|
            |intuition|    +----------------------+
            +---------+

Unless we can mathematise our value system, how can we make an AI friendly? We know Asimov's Laws of Robotics, but these laws are formulated in imprecise natural language. What do "injury", "human being", "robot", "harm", "obey", "protection" and other words mean? The outcome of such ambiguities is the defining plot element of The Metamorphosis of Prime Intellect and "I, Robot" (2004 film).

These are good and important questions. The correct answer is almost certainly "we'd better mathematize our value system, and we'd better get it right."

My solution: Design AIs with empathy and access to our intersubjective consensus of morality. If our current normative theories aren't completely wrong, then the absolute good does not require the annihilation of the human species.

Have you read "Coherent Extrapolated Volition"?

Therefore, make many AIs, let them evolve and battle it out. Good will win over evil, because being moral, that is, having better normative theories, increases biological fitness. So the more moral an AI is, the greater its chance to survive.

No. If you do that, everyone will die. "Good at killing other AIs" does not even remotely imply "morally good according to human values". Morality is harder than that.

Replies from: draq, draq
comment by draq · 2010-11-03T21:05:25.812Z · LW(p) · GW(p)

What if the coherent extrapolated volition is the death of all people, that is, the end of all volitions?

Replies from: Pavitra
comment by Pavitra · 2010-11-03T21:39:25.789Z · LW(p) · GW(p)

I guess we should do that then? I strongly expect that it won't turn out that that's the right thing to do, though, and it's not what I had in mind when I said you'd kill everyone. I meant that the AI will care about the wrong thing, ignore human morality completely, and eat the world (killing everyone as an incidental side effect) even though it's wrong according to human morals.

Replies from: draq
comment by draq · 2010-11-04T15:54:39.358Z · LW(p) · GW(p)

When I use the word morality, I certainly don't mean any rules of conduct.

What is your definition of human morality?

Replies from: Pavitra
comment by Pavitra · 2010-11-04T17:54:55.589Z · LW(p) · GW(p)

Often, when I stop to think about a decision, I find that my desire changes upon reflection. The latter desire generally seems more intellectually coherent(*), and across multiple instances, the initial desires on various occasions are generally more inconsistent with one another while the after-reflection desires are generally more consistent with one another. From this I infer the existence of a (possibly only vague, partially specified, or partially consistent) common cause to the various instances' after-reflection desires. This common cause appears to roughly resemble a bundle of heuristics that collectively approximate some sort of optimization criteria. I call the bundle of heuristics my "moral intuition" and the criteria they approximate my "morality".

I suspect that other humans' minds are broadly similar to mine in this respect, and that their moral intuitions are broadly similar to mine. To the extent they correlate, we might call the set of common trends "human morality" or "humaneness".

(*) An example of intellectual coherence vs. incoherence: Right now, I'd like to go get some ice cream from the freezer. However, on reflection, I remember that there isn't any ice cream in the freezer at the moment, so walking over to the freezer would not satisfy the impulse that motivated the action.

Replies from: draq
comment by draq · 2010-11-04T17:58:16.473Z · LW(p) · GW(p)

What about the Baby-Eaters and the Super Happy People in the story Three Worlds Collide? Do they have anything you would call "humaneness"?

Replies from: Pavitra
comment by Pavitra · 2010-11-04T18:21:07.574Z · LW(p) · GW(p)

No.

Edit: Well, sort of. Some of their values partially coincide with ours. But one of the major themes of the story is that we should expect aliens to have inhumane value systems.

comment by draq · 2010-11-03T20:52:08.632Z · LW(p) · GW(p)

Physical and mathematical objects

Chemists and physicists tell us which mathematical objects we're made out of. They used to think it was integers, but it turns out it wasn't.

If the physical world can be fully reduced to mathematics, we don't need chemists and physicists to tell us which mathematical objects we're made out of. A mathematician would know that, unless there is something about an electron that cannot be fully reduced to mathematics.

We use mathematics to describe physical objects, but physical objects are not mathematical objects. We use languages to describe physical objects, but physical objects are not words. Why are things mathematical and not linguistic? Is it because the mathematical description yields better predictions?

Theories and what theories describe

Would you agree that our value system is as absolute as the physical world if we can mathematise our normative theories as much as we have mathematised our physical theories?

No, what follows from the hypothetical is that it would be possible to hold meaningful discussions about our normative theories, rather than just saying words. A theory can be rigorously well-defined and also wrong.

I think you are missing the point. A physical theory can be wrong; that, I guess, does not shake your belief in an absolute reality. A normative theory, even mathematised, can also be wrong, but why should it shake my belief in an absolute morality?

Reductionism

Not quite, but a near miss. I'm a reductionist.

I am fine with that. As long as you believe in qualia as you believe in cars and trees, we have a base we can work from, without bothering about the foundations too much. I think reductionism is wrong, but that's not the discussion here.

From a reductionist point of view, the absolute morality would be part of the absolute reality, with the difference that values have different qualities (no spatial extension, for example) from cars and trees.

Two models

Let me try again:

| reality | ---> | senses | ----> | perception of reality |

| morality | --> | moral intuition | ---> | perception of morality |

or

| senses | ----> | perception of reality |

| moral intuition | ---> | perception of morality |

Again, why is one model better than the other one?

We will die anyway.

No. If you do that, everyone will die. "Good at killing other AIs" does not even remotely imply "morally good according to human values". Morality is harder than that.

It is not necessarily the case that evolution gets us better physical theories or normative theories. I was simply optimistic. It is possible that people believing in a spaghetti monster will kill all rational people, just as it is possible that an AI with a wrong normative theory will kill all human beings. Or the absolute morality demands our death. Or maybe the LHC will create a black hole that kills us within 24 hours. In all cases, bad luck. We will die anyway. In the long run, the chance of us irreversibly dying at some point is greater than the chance of us living forever.

Concerning Coherent Extrapolated Volition

I would probably have saved a lot of discussion, had I read the article first (and learned of the rationalist taboo). :)

I think what Eliezer calls "coherent extrapolated volition" is what I call "absolute morality". The "ability to extrapolate volition" is what I call "empathy". I don't agree with his goal that the "initial dynamic should implement the coherent extrapolated volition of humankind", though. First, what is the definition of humankind? This is a core problem for Prime Intellect in The Metamorphosis of Prime Intellect.

I think the goal of the initial dynamic should be "to extrapolate the volition of all entities that have or can express volitions."

Replies from: jimrandomh, Pavitra
comment by jimrandomh · 2010-11-03T21:16:46.798Z · LW(p) · GW(p)

If the physical world can be fully reduced to mathematics, we don't need chemists and physicists to tell us which mathematical objects we're made out of. A mathematician would know that, unless there is something about an electron that cannot be fully reduced to mathematics.

Mathematics is a broad field, with many specialties. A mathematician could only know which mathematical objects correspond to electrons if they studied that particular question. And our name for a mathematician who specializes in studying the question of which mathematical objects correspond to electrons is... Particle Physicist.

A physical theory can be wrong; that, I guess, does not shake your belief in an absolute reality. A normative theory, even mathematised, can also be wrong, but why should it shake my belief in an absolute morality?

It shouldn't, because this is a straw man, not the argument that leads us to conclude that there isn't a single absolute morality.

Replies from: draq
comment by draq · 2010-11-04T16:09:39.280Z · LW(p) · GW(p)

If you read a physics or chemistry textbook, you'll find a lot of words and only a few equations, whereas a mathematics textbook has many more equations, and its words are there to explain the equations. The words in a physics book explain not only the equations but also the issues the equations describe.

However, I haven't fully thought about reductionism, so do you have any recommendations for what I should read?

My current two objections:

1. Computational

According to our current physical theories, it is computationally infeasible to predict exactly the behaviour of any system larger than a dozen or so atoms; see Walter Kohn's Nobel Lecture. We could eventually have a completely new theory, but that would be an optimistic hope. (A rough numerical sketch of this point follows after the second objection.)

2. Ontological

Physical objects have qualities that mathematical objects do not, and values have qualities that physical objects do not. Further elaboration is needed.
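
On the computational objection, here is a back-of-the-envelope sketch of the "exponential wall" Kohn describes (the grid resolution p = 10 is an assumed, illustrative number, not taken from his lecture): resolving each of the 3N coordinates of an N-electron wavefunction with p points requires storing p**(3N) amplitudes.

    # Rough illustration of the 'exponential wall': a full N-electron
    # wavefunction on a grid with p points per coordinate needs p**(3*N)
    # amplitudes. p = 10 is an arbitrary, illustrative choice.
    p = 10

    for n_electrons in (1, 10, 100):
        amplitudes = p ** (3 * n_electrons)
        print(f"{n_electrons:>3} electrons: about {amplitudes:.0e} amplitudes")

    #   1 electrons: about 1e+03
    #  10 electrons: about 1e+30
    # 100 electrons: about 1e+300  (far beyond any conceivable memory)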

It shouldn't, because this is a straw man, not the argument that leads us to conclude that there isn't a single absolute morality.

It is not a straw man, because I am not attacking any position. I think I was misunderstood, as I said.

comment by Pavitra · 2010-11-03T21:32:55.512Z · LW(p) · GW(p)

I usually try not to say this, but...

From the FAQ:

I strongly disagree with the Less Wrong consensus on an issue. Is it okay to write a top-level article about it?

Absolutely! Just make sure you know why it's the consensus position, first. Before posting, read what has already been written on the subject to ensure that you are saying something new and not just retracing covered ground. If you aren't sure why the consensus position is the consensus position, feel free to ask in an open thread. Being aware of what has been said about a subject in the past is especially important if you want to argue for the existence of God, claim a universally compelling morality, or suggest a really easy way to make friendly AI without going through all that complicated coherent extrapolated volition stuff. Before tackling the Less Wrong consensus on these issues you may want to first acquire an extraordinary familiarity with the sequences, the arguments against your position, and the Less Wrong norms on the issue.

(Emphasis mine.)

You need to go read the sequences, and come back with specific counterarguments to the specific reasoning presented therein on the topics that you're discussing.

.

.

A mathematician would know that, unless there is something about an electron that cannot be fully reduced to mathematics.

Choice of axioms?

Again, why is one model better than the other one?

I can run controlled experiments to show that my perception of reality and your perception of reality have a common cause. I can close a box, and we will both report seeing it change state from open to closed. There is no such evidence of a common thing-that-morality-intuition-observes. If we imagine our minds as rooms, our reality-senses are windows overlooking a common garden; we can see each other's windows, and confirm that we see the same trees and flowers. But our morality-senses need not be true windows; for all we know, they might as well be trompe l'oeil.

             /---> my senses ---> my perception of reality ---\
reality --->|                                                  |---> consensus
             \--->your senses--->your perception of reality---/


               /--- my morals ---\
evolution --->|                   |---> weak consensus
               \---your morals---/

I was simply optimistic.

Optimism and pessimism are incompatible with realism. If you're not willing to believe that the universe works the way that it does in fact work, then you're not qualified to work on potentially-world-destroying projects.

I think what Eliezer calls "coherent extrapolated volition" is what I call "absolute morality".

And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?

(Incidentally, if you've been reading claims from Clippy that humane and paperclip-maximizing moralities are essentially compatible, then you should realize that e may have ulterior motives and may be arguing disingenuously. Sorry, Clippy.)

Replies from: draq
comment by draq · 2010-11-04T16:23:54.112Z · LW(p) · GW(p)

Universal morality

You need to go read the sequences, and come back with specific counterarguments to the specific reasoning presented therein on the topics that you're discussing.

I don't think there is an easy way to make FAI.

Absolute morality is the coherent extrapolated volition of all entities with volition. Morality is based on values. In a universe where there are only insentient stones, there is no morality, and even if there were moral rules, they would be meaningless. Morality exists only where there are values (things that we either like or dislike), or "volition".

Reality and Morality

So the reason you think there is a reality is that there is a strong consensus, and the reason you think there is no morality is that there is no strong consensus?

Optimism and pessimism are incompatible with realism. If you're not willing to believe that the universe works the way that it does in fact work, then you're not qualified to work on potentially-world-destroying projects.

I don't see what optimism or pessimism has to do with willingness to believe in an absolute reality. I only know that my knowledge is restricted, and within the boundaries of my ignorance, I can hope for the better or believe in the worse. If I were omniscient, I would be neither optimistic nor pessimistic. We are optimistic because we are ignorant, not the other way around, at least in my case.

And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?

To be absolute, it has to apply to all minds that have volition.

(Incidentally, if you've been reading claims from Clippy that humane and paperclip-maximizing moralities are essentially compatible, then you should realize that e may have ulterior motives and may be arguing disingenuously. Sorry, Clippy.)

That is why I evaluate arguments based on things other than someone's ulterior motives.

Replies from: Pavitra, jimrandomh
comment by Pavitra · 2010-11-04T18:18:50.896Z · LW(p) · GW(p)

Absolute morality is the coherent extrapolated volition of all entities with volition.

This sounds like a definition, so let's gensym it and see if it still makes sense.

G695 is the coherent extrapolated volition of all entities with volition.

Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?
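
For readers who haven't met the trick: "gensym" is borrowed from Lisp, where it manufactures a fresh, meaning-free symbol. A minimal Python sketch of the move being made here (the starting number 695 and the helper names are invented for illustration):

    import itertools

    _counter = itertools.count(695)

    def gensym():
        """Return a fresh symbol such as 'G695' that carries no connotations."""
        return f"G{next(_counter)}"

    definitions = {}

    def define(description):
        """Bind a description to a fresh symbol. Arguments about the symbol must
        now appeal to the description alone, not to the emotional pull of a
        loaded name like 'absolute morality'."""
        sym = gensym()
        definitions[sym] = description
        return sym

    g = define("the coherent extrapolated volition of all entities with volition")
    print(g, "=", definitions[g])  # G695 = the coherent extrapolated volition ...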

So the reason you think there is a reality is that there is a strong consensus, and the reason you think there is no morality is that there is no strong consensus?

No, that's my reason for breaking symmetry between them, for discarding the assumption that the explanation of the two phenomena should be essentially isomorphic. I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.

within the boundaries of my ignorance, I can hope for the better or believe in the worse.

There is a very great difference between hoping for the better and believing in the better. Nor are "better" or "worse" the only two options.

Suppose you're getting into a car, and you're wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.

To be absolute, it has to apply to all minds that have volition.

You're constructing a universal CEV. It's not an already-existing ontologically fundamental entity. It's not a thing that actually exists.

That is why I evaluate arguments based on things other than someone's ulterior motives.

Consciously, sure. I just wanted to warn you against the human credulity bias.

Replies from: draq
comment by draq · 2010-11-04T18:26:23.193Z · LW(p) · GW(p)

Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?

So your point is there is no point in caring for anything. Do you call yourself a nihilist?

I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.

Would you call yourself a naive realist? What about people on LSD, schizophrenics, and religious people who see their Almighty Lord Spaghetti Monster in what you would call clouds? You surely mean that there is one reality between all humans that are "sane".

Suppose you're getting into a car, and you're wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.

I would say the optimistic view says "There is probably/hopefully no crash". But let us not fight over words.

You're constructing a universal CEV. It's not an already-existing ontologically fundamental entity. It's not a thing that actually exists.

Does the CEV of humankind exist?

Replies from: Pavitra
comment by Pavitra · 2010-11-04T18:43:42.542Z · LW(p) · GW(p)

So your point is there is no point in caring for anything. Do you call yourself a nihilist?

No, I care about things. It's just that I don't think that G695 (assuming it's defined -- see below) would be particularly humane or good or desirable, any more than (say) Babyeater morality.

Would you call yourself a naive realist?

Certainly not -- hence "eventually". Science requires interpreting data.

Edit: oh, sorry, forgot to address your actual point.

At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.

I would say, the optimistic view is saying "There is probably/hopefully no crash". But don't let us fight over words.

Very well. Let us assume that (warning: numbers just made up) one in every 100,000 car trips results in a crash. The G698 view says "The chances of a crash are low." The G699 view says "The chances of a crash are high." The G700 view says "The chances of a crash are 1/100,000." I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.

Does the CEV of humankind exist?

I personally don't think the extrapolated volition of humanity coheres, but I have the impression that others disagree with me.

I would be very surprised, however, if the extrapolated volition of all volitional entities cohered and the extrapolated volition of all volitional humans did not.

Replies from: draq
comment by draq · 2010-11-04T19:07:51.330Z · LW(p) · GW(p)

I like gensyms.

G101: Pavitra (me) cares about something.

What is the point in caring for G101?

At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.

What if you can't predict?

I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.

That is not how your brain works (a rough guess). Your brain thinks either G698 or G699 and then comes to a decision about whether to drive or not. This heuristic process is called optimism or pessimism.

Replies from: Pavitra
comment by Pavitra · 2010-11-04T19:22:22.543Z · LW(p) · GW(p)

G101: Pavitra (me) cares about something.

What is the point in caring for G101?

Since I'm Pavitra, it doesn't really matter to me if G101 has a point; I care about it anyway.

What if you can't predict?

Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might require looking into more carefully.

That is not how your brain works (a rough guess).

Not natively, no. That's why it requires advocacy.

Replies from: draq
comment by draq · 2010-11-04T19:32:12.745Z · LW(p) · GW(p)

Since I'm Pavitra, it doesn't really matter to me if G101 has a point; I care about it anyway.

So there is no normative rule that Pavitra (you) should care about G101. It just happens, it could also be different, and it does not matter. That is what I call (moral) nihilism.

Don't you ever ask why you should care (about anything, incl. yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so... hm... I still won't)

Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might require looking into more carefully.

A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?

Not natively, no. That's why it requires advocacy.

You care about things. I assume you care about your health. In that case, you don't want to be in a crash. So you'll evaluate whether you should get into a car. If you get into the car, you are an optimist, if not, you are a pessimist.

Again, why is it important to advocate anything? -- Because you care about it. -- So what?

Replies from: Pavitra
comment by Pavitra · 2010-11-04T21:26:28.994Z · LW(p) · GW(p)

So there is no normative rule that Pavitra (you) should care about G101. It just happens, it could also be different, and it does not matter. That is what I call (moral) nihilism.

Don't you ever ask why you should care (about anything, incl. yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so... hm... I still won't)

Again, it's not that I don't care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don't feel arbitrary to me -- after all, I care about them a great deal! -- but I didn't choose to care about them. I just do.

A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?

Sure, and those are the claims I take the time to evaluate and debunk.

If you get into the car, you are a G701, if not, you are a G702.

Please explain the relationship between G701-702 and G698-700.

Replies from: draq
comment by draq · 2010-11-05T18:47:41.883Z · LW(p) · GW(p)

Again, it's not that I don't care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don't feel arbitrary to me -- after all, I care about them a great deal! -- but I didn't choose to care about them. I just do.

And you believe that other minds have different core beliefs?

Sure, and those are the claims I take the time to evaluate and debunk.

I think we should close the discussion and take some time thinking.

Please explain the relationship between G701-702 and G698-700.

"chance is low" or "chance is high" are not mere descriptive, they also contain values. chance is low --> probably safe to drive, high --> probably not, based on the more fundamental axiom that surviving is good. And "surviving is good" is not descriptive, it is normative because good is a value. you can also say instead: "you should survive", which is a normative rule.

Replies from: Pavitra
comment by Pavitra · 2010-11-06T16:04:05.412Z · LW(p) · GW(p)

And you believe that other minds have different core beliefs?

"Belief" isn't quite right; it's not an anticipation of how the world will turn out, but a preference of how the world will turn out. But yes, I anticipate that other minds will have different core preferences.

I think we should close the discussion and take some time thinking.

Yes, okay.

comment by jimrandomh · 2010-11-04T17:58:37.650Z · LW(p) · GW(p)

And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?

To be absolute, it has to apply to all minds that have volition.

No Universally Compelling Arguments contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.

Replies from: draq
comment by draq · 2010-11-04T18:09:18.707Z · LW(p) · GW(p)

What do you think of Eliezer's idea of "coherent extrapolated volition of humankind" and his position that FAI should optimise it?

Replies from: jimrandomh
comment by jimrandomh · 2010-11-04T21:19:57.197Z · LW(p) · GW(p)

I think it is insufficiently detailed to identify a unique utility function - it needs to have specific extrapolation and reconciliation procedures filled in, the details of those procedures are important and affect the result, and a bad extrapolation procedure could produce arbitrary results.

That said, programming an AI with any value system that didn't match the template of CEV (plus details) would be a profoundly stupid act. I have seen so many disastrously buggy attempts to define what human values are that I doubt it could be done correctly without the aid of a superintelligence.

Replies from: draq
comment by draq · 2010-11-05T18:57:43.017Z · LW(p) · GW(p)

No Universally Compelling Arguments contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.

There is no universally compelling argument for morality, just as there is no universally compelling argument for reality. You can change the physical perception as well. But it does not necessarily follow that there is no absolute reality.

I also have to correct my position: CEV is not absolute morality. Volition is rather a "receptor" or "sensor" of morality. I made a conceptual mistake.

Can you formulate your thoughts value-free, that is, without words like "profoundly stupid" or "important"? Because these words suggest that we should do something. If there is no universal morality, why do you postulate anything normative? Other than for fun.

PS: I have to stop posting. First, I have to take time to think. Second, this temporary block is driving me insane.

comment by jimrandomh · 2010-11-02T23:39:05.519Z · LW(p) · GW(p)

You keep using that phrase, "intersubjective consensus". What does it mean, and how do you know that there is one with respect to morality?

comment by Sniffnoy · 2010-11-02T22:46:46.147Z · LW(p) · GW(p)

An electron is not a mathematical object. Let's say, equation (1) describes the behaviour of an electron according to our current knowledge. Then you might say, the electron is a "mathematical object" contained in (1).

But what if equation (2) is found, that better describes an electron's behaviour? What happens with the "mathematical object"?

So you incorrectly identified what sort of mathematical object it is. That doesn't mean it isn't one, just that you made an identification prematurely (and perhaps were insufficiently careful with your language); you'll need to split off the concepts of actual-but-still-unknown-electron and previously-hypothesized-electron.

Replies from: draq
comment by draq · 2010-11-03T19:54:28.977Z · LW(p) · GW(p)

In that sense, everything could be a mathematical object, including qualia. We just haven't identified it.

Also, the concept of actual-but-still-unknown-X and previously-hypothesized-X can be applied to morality in terms of actual-but-still-unknown-norm and previously-hypothesized-norm.

comment by jimrandomh · 2010-10-29T02:04:41.788Z · LW(p) · GW(p)

As long as we have values, desires and dislikes and make judgments (which all of us do, and which maybe is a defining characteristic of the human being beyond the biological basics), and if we want to put these values into a logically consistent system, we have an absolute moral system.

No, we have an absolute moral system per person. You can then take groups of those moral systems and combine them in various different ways, such as simulating what they would decide if they voted. However, you will get different results depending on which combining procedure you use, and what sort of people you put into the combining procedure.
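
A minimal sketch of why the combining procedure matters (the preference orderings and option names are invented for illustration): the same seven per-person rankings, aggregated by a plurality vote versus a Borda count, crown different options.

    from collections import Counter

    # Seven invented per-person preference orderings over options A, B, C.
    ballots = (
        3 * [["A", "B", "C"]]    # three people rank A > B > C
        + 2 * [["B", "C", "A"]]  # two rank B > C > A
        + 2 * [["C", "B", "A"]]  # two rank C > B > A
    )

    def plurality(ballots):
        """Each person 'votes' for their top option; most votes wins."""
        return Counter(b[0] for b in ballots).most_common(1)[0][0]

    def borda(ballots):
        """Each person awards 2/1/0 points by rank; the highest total wins."""
        scores = Counter()
        for b in ballots:
            for rank, option in enumerate(b):
                scores[option] += len(b) - 1 - rank
        return scores.most_common(1)[0][0]

    print(plurality(ballots))  # A  (3 first-place votes against 2 and 2)
    print(borda(ballots))      # B  (scores: A=6, B=9, C=6)

Same individual value systems, different combining procedure, different winner.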

Replies from: draq
comment by draq · 2010-10-29T15:14:33.501Z · LW(p) · GW(p)

Substitute "moral system" with "reality". Would you still agree with it?

comment by AlexMennen · 2010-10-27T00:03:55.832Z · LW(p) · GW(p)

Good point, actually. In matters of epistemology, it takes reasoning rather than physical evidence to evaluate hypotheses, except when physical evidence can help people see flaws in their reasoning.

Replies from: LucasSloan
comment by LucasSloan · 2010-10-27T00:16:39.879Z · LW(p) · GW(p)

it takes reasoning rather than physical evidence

Your mind exists in the universe. There isn't a hard barrier between "reasoning" and "physical evidence."

comment by Emile · 2010-10-26T18:41:48.141Z · LW(p) · GW(p)

Depends in which sense you mean a moral system to be "absolute".

I would agree that there is probably an "absolute moral system" that all humans would agree on, even if we may not be able to precisely formulate it right now (or at least, a system that most non-pathological humans could be convinced they agree with).

However, that doesn't mean that any intelligence (AI or alien) would eventually settle on those morals.

But I believe that there is one single right answer. Otherwise, it becomes quite confusing.

That doesn't sound like a very good reason to believe something.

(I would agree that there is probably a single right answer for humans)

Replies from: draq
comment by draq · 2010-10-26T20:07:19.164Z · LW(p) · GW(p)

Well, the absolute moral system I meant does encompass everything, incl. AI and alien intelligence. It is true that different moral problems require different solutions; that is also true of physics. Objects in a vacuum behave differently than in the atmosphere. Water behaves differently than ice, but they are all governed by the same physics, or so I assume.

A similar problem may have a different solution if the situation is different. An Edo-era samurai and a Wall Street banker may both behave perfectly morally even if they respond differently to the same problem due to their social environments.

Maybe it is perfectly moral for AIs to kill and annihilate all humans, just as it is perfectly possible that 218 of Russell's teapots are revolving around Gliese 581 g.

That doesn't sound like a very good reason to believe something.

Well, I formulated it wrongly. I meant that all answers are logically consistent. There might be more than one answer, but they do not contradict each other. So there is only one set of logically consistent answers. Otherwise, it becomes absurd.