Comments

Comment by GBM on Abstracted Idealized Dynamics · 2008-08-12T04:48:19.000Z · LW · GW

Eliezer, this explanation finally puts it all together for me in terms of the "computation". I get it now, I think.

On the other hand, I have a question. Maybe this indicates that I don't truly get it; maybe it indicates that there's something you're not considering. In any case, I would appreciate your explanation, since I feel so close to understanding what you've been saying.

When I multiply 19 and 103, whether in my head, or using a pocket calculator, I get a certain result that I can check: In theory, I can gather a whole bunch of pebbles, lay them out in 103 rows of 19, and then count them individually. I don't have to rely on my calculator - be it internal or electronic.

When I compute morality, though, the only thing I have to examine is my calculator and a bunch of other ones. I can easily see that most calculators I come across will give the same answer to a moral question, at least to a limited number of decimal places. But I have no way of knowing whether those calculators are accurate representations of the world - that is, perhaps all of those calculators were created in a way that didn't reflect reality, and add ten to any result they calculate.

If 90% of my calculators say 19 times 103 is equal to 1967, how do I determine that they are incorrect, without having the actual pebbles to count?
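
For the arithmetic case, at least, the check is possible in principle. Here is a rough sketch of what I mean, in Python (the function names are placeholders of my own, not anything from the post):

    # Purely illustrative: checking 19 x 103 against the pebbles
    # versus trusting a population of possibly-biased calculators.

    def count_pebbles(rows, per_row):
        # Ground truth: lay the pebbles out and count them one at a time.
        total = 0
        for _ in range(rows):
            for _ in range(per_row):
                total += 1
        return total

    def biased_calculator(a, b):
        # A calculator built in a way that doesn't reflect reality:
        # it adds ten to every result.
        return a * b + 10

    print(count_pebbles(103, 19))      # 1957 -- what the pebbles say
    print(biased_calculator(19, 103))  # 1967 -- what 90% of the calculators say

For morality, I seem to have only the second kind of check available.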

Comment by GBM on Inseparably Right; or, Joy in the Merely Good · 2008-08-09T01:12:20.000Z · LW · GW

Eliezer, thank you for this clear explanation. I'm just now making the connection to your calculator example, which struck me as relevant if I could only figure out how. Now it's all fitting together.

How does this differ from personal preference? Or is it simply broader in scope? That is, if an individual's calculation includes "self-interest" and weighs it heavily, personal preference might be the result of the calculation, which fits inside your metamoral model, if I'm reading things correctly.

Comment by GBM on The Meaning of Right · 2008-07-29T05:29:44.000Z · LW · GW

I'm going to need some help with this one.

It seems to me that the argument goes like this, at first:

  • There is a huge blob of computation; it is a 1-place function; it is identical to right.
  • This computation balances various values.
  • Our minds approximate that computation.

Even this little bit creates a lot of questions. I've been following Eliezer's writings for the past little while, although I may well have missed some key point.

Why is this computation a 1-place function? Eliezer says at first, "Here we are treating morality as a 1-place function," and then jumps to "Since what's right is a 1-place function..." without justifying that status.

What values does this computation balance? Why those values?

What reason do we have to believe that our minds approximate that computation?

Sorry if these are extremely basic questions that have been answered in other places, or even in this article - I'm trying and having a difficult time with understanding how Eliezer's argument goes past these issues. Any help would be appreciated.

Comment by GBM on Changing Your Metaethics · 2008-07-27T21:20:32.000Z · LW · GW

Richard, I don't know anything about moral theorists, but this series of posts has helped me understand my own beliefs better than anything I've ever read, and they've coalesced mostly while reading this post. "Meta" was a concept missing from my toolbox, at least in the case of morality, and Eliezer's pointing it out has been immensely productive for me.

behemoth, I think the point you make about the second generation is an important one. Because children are both irrational and bad at listening to their intuitions when it's inconvenient to do so, having some form of metamorality is useful to serve as a vessel for morality. The problem is, in doing that, people bind the vessel and its contents, and can't pour the contents into some other vessel if theirs turns out to be leaky. Which is why rationalism is important.

Comment by GBM on Does Your Morality Care What You Think? · 2008-07-27T02:20:10.000Z · LW · GW

Not hard at all, Caledonian.

Also, stop trolling. Offer some insight, or go away.

Comment by GBM on Does Your Morality Care What You Think? · 2008-07-26T07:49:00.000Z · LW · GW

Another way to look at this idea of math being a tool that exists only in the mind has occurred to me:

Does addition happen outside the mind? What is something "plus" something else? If we've got a quantity of two sheep, and a quantity of three sheep, and they're standing next to each other, then we can consider the two quantities together, and count five sheep. But let's say a quantity of two sheep wander through a meadow until they come across a quantity of three sheep, and then stop. Where did the actual addition happen? Outside the mind, there are only quantities.

Comment by GBM on Does Your Morality Care What You Think? · 2008-07-26T07:35:03.000Z · LW · GW

I think the problem I have with the math example, and it may be that this is extensible to morality, is this:

If I have a certain quantity of apples, or sheep, or whatever, my mind has a tool (a number) ready to identify some characteristic about that quantity (how many it is). But that's all that number is: a tool. A reference.

Eliezer is right in saying that the teacher's teaching "2+3=5" doesn't make it true any more than the teacher's teaching "2+3=6" makes it true. But that's not because two plus three "actually" equals five. It's because we, as learning animals, have learned definitions of these concepts, and we conceive of them as being fundamental. We think of math as a fundamental part of reality, when it is in fact a low-level, extremely useful, but all-in-the-mind tool used to manipulate our understanding of reality. We're confusing the map with the territory.

Taking this over to morality:

"Killing is wrong" isn't true because someone told us it's true, any more than "Killing is right" would be true if someone were to tell us that. But that's not because killing another human being "actually" is wrong. It's because we, as learning animals, have learned (or evolved the low-level emotions that serve as a foundation for this rule) definitions of right and wrong, and we conceive of them as being fundamental. We think of morality as a fundamental part of reality, when it is in fact, an all-in-the mind tool. Should we throw it out because it's merely evolved? No. It's useful (at least for the species). But we shouldn't confuse the map with the territory.

This is still pretty fuzzy in my mind; please criticize, especially if I've made some fundamental error.

Comment by GBM on The Moral Void · 2008-06-30T12:18:40.000Z · LW · GW

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic - anywhere you care to put it - then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?

Well, Eliezer, since I can't say it as eloquently as you:

"Embrace reality. Hug it tight."

"It is always best to think of reality as perfectly normal. Since the beginning, not one unusual thing has ever happened."

If we find that Stone Tablet, we adjust our model accordingly.

Comment by GBM on Probability is in the Mind · 2008-03-13T01:29:00.000Z · LW · GW

Z. M. Davis: Thank you. I get it now.

Comment by GBM on Probability is in the Mind · 2008-03-12T19:13:39.000Z · LW · GW

Roland and Ian C. both help me understand where Eliezer is coming from. And PK's comment that "Reality will only take a single path" makes sense. That said, when I say a die has a 1/6 probability of landing on a 3, that means: Over a series of rolls in which no effort is made to systematically control the outcome (e.g. by always starting with 3 facing up before tossing the die), the die will land on a 3 about 1 in 6 times. Obviously, with perfect information, everything can be calculated. That doesn't mean that we can't predict the probability of a specific event.
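
To make that concrete, here is a quick sketch (Python, nothing authoritative) of what I mean by "about 1 in 6 times" over a series of uncontrolled rolls:

    import random

    rolls = 100000
    threes = sum(1 for _ in range(rolls) if random.randint(1, 6) == 3)
    print(threes / rolls)  # comes out close to 1/6, about 0.167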

Also, I didn't get a response to the Gomboc (http://tinyurl.com/2rffxs) argument. I would say that it has an inherent 100% probability of righting itself. Even if I knew nothing about the object, the real probability of it righting itself is 100%. Now, I might not bet on those odds, without previous knowledge, but no matter what I know, the object will right itself. How is this incorrect?

Comment by GBM on Probability is in the Mind · 2008-03-12T06:19:36.000Z · LW · GW

It seems to me you're using "perceived probability" and "probability" interchangeably. That is, you're "defining" probability as the probability that an observer assigns based on certain pieces of information. Is it not true that when one rolls a fair 1d6, there is an actual 1/6 probability of getting any one specific value? Or using your biased coin example: our information may tell us to assume a 50/50 chance, but the man may be correct in saying that the coin has a bias--that is, the coin may really come up heads 80% of the time, but we must assume a 50% chance to make the decision, until we can be certain of the 80% chance ourselves. What am I missing? I would say that the Gomboc (http://tinyurl.com/2rffxs) has a 100% chance of righting itself, inherently. I do not understand how this is incorrect.
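
Here is a rough sketch of the distinction I'm drawing, if it helps (Python, with the 80% figure taken from the coin example; purely illustrative):

    import random

    true_heads_rate = 0.8  # the bias the man claims; unknown to us at first
    estimate = 0.5         # the 50% we must assume before seeing any flips
    heads = 0

    for n in range(1, 1001):
        if random.random() < true_heads_rate:
            heads += 1
        estimate = heads / n  # our estimate after watching n flips

    print(estimate)  # drifts toward 0.8 as the flips accumulate

The 50% seems to describe our information, while the 80% seems to describe the coin.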