Comments

Comment by Constant2 on The Evolutionary-Cognitive Boundary · 2009-02-12T17:57:14.000Z · LW · GW

I'm getting two things out of this.

1) Evolutionary cynicism produces different predictions from cognitive cynicism, e.g. because the current environment is not the ancestral environment.

2) Cognitive cynicism glooms up Eliezer's day but evolutionary cynicism does not.

(1) is worth keeping in mind. I'm not sure what significance (2) has.

However, we may want to develop a cynical account of a certain behavior while suspending judgment about whether the behavior was learned or evolved. Call such cynicism "agnostic cynicism", maybe. So we have three types of cynicism: evolutionary cynicism, cognitive cynicism, and agnostic cynicism.

A careful thinker will want to avoid jumping to conclusions, and because of this, he may lean toward agnostic cynicism.

Comment by Constant2 on Value is Fragile · 2009-01-29T22:46:12.000Z · LW · GW

it would seem easier to build (or mutate into) something that keeps going forever than it is to build something that goes for a while then stops.

On reflection, I realize this point might be applied to repetitive drudgery. But I was applying it to the behavior "engage in just so much efficient exploration." My point is that it may be easier to mutate into something that explores and explores and explores, than it would be to mutate into something that explores for a while then stops.

Comment by Constant2 on Value is Fragile · 2009-01-29T22:40:39.000Z · LW · GW

the vast majority of possible expected utility maximizers, would only engage in just so much efficient exploration, and spend most of its time exploiting the best alternative found so far, over and over and over.

I'm not convinced of that. First, "vast majority" needs to use an appropriate measure, one that is applicable to evolutionary results. If, when two equally probable mutations compete in the same environment, one of those mutations wins, making the other extinct, then the winner needs to be assigned the far greater weight. So, for example, if humans were to compete against a variant of human without the boredom instinct, who would win?

Second, it would seem easier to build (or mutate into) something that keeps going forever than it is to build something that goes for a while then stops. Cancer, for example, just keeps going and going, and it takes a lot of bodily tricks to put a stop to that.
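
A toy simulation of the measure point (my own illustration; the fitness numbers are made up): give one of two equally probable mutations even a small fitness edge and it drives the other extinct, so a measure weighted by evolutionary outcomes puts nearly all its mass on the winner.

```python
# Hypothetical fitness values, purely illustrative: two equally
# probable mutations compete; a 5% edge is enough for one variant
# to take over the whole population.
def compete(fitness_a=1.05, fitness_b=1.00, generations=500):
    a, b = 1.0, 1.0  # equal initial population shares
    for _ in range(generations):
        a, b = a * fitness_a, b * fitness_b  # one generation of growth
        total = a + b
        a, b = a / total, b / total  # renormalize to population shares
    return a, b

share_a, share_b = compete()
print(f"variant A: {share_a:.10f}, variant B: {share_b:.10f}")
# Variant A ends up with essentially the entire population, even
# though both mutations were equally probable a priori.
```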

Comment by Constant2 on OB Status Update · 2009-01-27T21:43:44.000Z · LW · GW

Seconding the expectation of a "useless clusterf--k."

The hope is that the shared biases will be ones that the site owner considers valuable and useful

The obvious way to do that is for the site owner to make some users more equal than others.

the Hacker News website seems to be doing fine.

Security through obscurity. Last I checked, it confirmed my impression, gathered from Digg and Reddit, that as long as the site remains sufficiently unpopular, it will not deteriorate.

Comment by Constant2 on The Complete Idiot's Guide to Ad Hominem · 2008-11-26T14:41:05.000Z · LW · GW

Two separate issues:

1) Is it a good (legitimately persuasive) argument?

2) If not then after all the hairsplitting is done, what sort of bad argument is it?

The more important issue is (1). A few points:

a) Quibbling over the categorization of the fallacy is sometimes used to mask the fact that it's a bad argument.

b) There are plenty of people who can recognize bad arguments without knowing anything about the names of the fallacies, which leads to

c) We learn the names of the fallacies, not in order to learn to spot bad arguments, but as a convenience so that we don't have to explain at length to the other guy why the argument is bad.

d) Often perfectly legitimate arguments technically fall into one of the categories of fallacy. Technically matching a classical fallacy is no guarantee that an argument is actually fallacious, and there are counterexamples.

In short, the classical fallacies are a convenient timesaver. But you don't need to have learned them to avoid being an idiot, and learning them will not stop you from being an idiot, and taking them too seriously can make you into an idiot.

Comment by Constant2 on Why Does Power Corrupt? · 2008-10-14T14:20:50.000Z · LW · GW

Morality is an aspect of custom. Custom requires certain preconditions: it is an adaptation to a certain environment. Great political power breaks a key component of that environment.

More specifically, morality is a spontaneously arising system for resolving conflict among people with approximately equal power, such that adherence to morality is an optimal strategy for a typical person. A person with great power has less need to compromise and so his optimal strategy is probably a mix of compromise and brute force - i.e., corruption.

This does not require very specific human psychology. It is likely to describe any set of agents where the agents satisfy certain general conditions. Design two agents (entities with preferences and abilities) and in certain areas those entities are likely to have conflicting desires and are likely, therefore, to come into conflict and to need a system for resolving conflict (a morality) - regardless of their psychology. But grant one of these entities sufficiently great power, and it can resolve conflict by pushing aside the other agent, thereby dispensing with morality, thereby being corrupted by power.

Comment by Constant2 on My Bayesian Enlightenment · 2008-10-05T18:50:39.000Z · LW · GW
Someone had just asked a malformed version of an old probability puzzle [...] someone said to me, "Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3." [...] That was when I discovered that I was of the type called 'Bayesian'.

I think a more reasonable conclusion is: yes indeed it is malformed, and the person I am speaking to is evidently not competent enough to notice how this necessarily affects the answer and invalidates the familiar answer, and so they may not be a reliable guide to probability and in particular to what is or is not "orthodox" or "bayesian." What I think you ought to have discovered was not that you were Bayesian, but that you had not blundered, whereas the person you were speaking to had blundered.

Comment by Constant2 on Excluding the Supernatural · 2008-09-12T06:25:23.000Z · LW · GW

Aaron - yes, I know that. It's beside the point.

Comment by Constant2 on Excluding the Supernatural · 2008-09-12T02:39:54.000Z · LW · GW

My point was that vampires were by definition not real - or at least, not understandable - because any time we found something real and understandable that met the definition of a vampire, we would change the definition to exclude it.

But the same exchange might have occurred with something entirely real. We are not in the habit of giving fully adequate definitions, so it is often possible to find counterexamples to the definitions we give, which might prompt the other person to add to the definition to exclude the counterexample. For example:

A: What is a dog?

B: A dog is a four-footed animal that is a popular pet.

A: So a cat is a dog.

B: Dogs bark.

A: So if I teach a cat to bark, it will become a dog.

etc.

Comment by Constant2 on Moral Error and Moral Disagreement · 2008-08-13T17:01:00.000Z · LW · GW

Time - Philip Johnson is not just a Christian but a creationist. Do you mean, "if there are smart creationists out there..."? I don't really pay much attention to the religious beliefs of the smartest mathematicians and scientists and I'm not especially keen on looking into it now, but I would be surprised if all top scientists without exception were atheists. This page seems to suggest that many of the best scientists are something other than atheist, many of those Christian.

Comment by Constant2 on Contaminated by Optimism · 2008-08-06T18:06:42.000Z · LW · GW

Whoever is censoring Caledonian: can it be done without adding the content-free nastiness (such as "bizarre objection", "illogic", and "gibberish")?

Comment by Constant2 on The Meaning of Right · 2008-07-30T17:57:00.000Z · LW · GW

Any two AIs are likely to have a much vaster difference in effective intelligence than you could ever find between two humans (for one thing, their hardware might be much more different than any two working human brains). This likelihood increases further if (at least) some subset of them is capable of strong self-improvement. With enough difference in power, cooperation becomes a losing strategy for the more powerful party.

I read stuff like this and immediately my mind thinks, "comparative advantage." The point is that it can be (and probably is) worthwhile for Bob and Bill to trade with each other even if Bob is better at absolutely everything than Bill. And if it is worthwhile for them to trade with each other, then it may well be in the interest of neither of them to (say) eliminate the other, and it may be a waste of resources to (say) coerce the other. It is worthwhile for the state to coerce the population because the state is few and the population are many, so the per-person cost of coercion falls below the benefit of coercion; it is much less worthwhile for an individual to coerce another (slavery generally has the backing of the state - see for example the fugitive slave laws). But this mass production of coercive fear works in part because humans are similar to each other and so can be dealt with more or less the same way. If AIs are all over the place, then this does not necessarily hold. Furthermore if one AI decides to coerce the humans (who are admittedly similar to each other) then the other AIs may oppose him in order that they themselves might retain direct access to humans.
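
A worked example of the comparative-advantage point, with made-up numbers (a sketch, not anything from the original exchange):

```python
# Bob out-produces Bill at everything, yet trade still benefits both.
bob  = {"food": 10, "tools": 10}  # units Bob can make per day
bill = {"food": 3,  "tools": 1}   # units Bill can make per day

# Opportunity cost of one tool, measured in food forgone:
bob_cost  = bob["food"]  / bob["tools"]   # 1 food per tool
bill_cost = bill["food"] / bill["tools"]  # 3 food per tool

# Any price between the two opportunity costs leaves both better off.
price = 2  # food per tool
print(f"Bob gains {price - bob_cost:g} food per tool he sells")
print(f"Bill gains {bill_cost - price:g} food per tool he buys")
```

Since a mutually beneficial price exists whenever the opportunity costs differ, eliminating or coercing the weaker party can be a strictly worse use of resources than trading with him.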

The AIs might agree that they'd all be better off if they took the matter currently in use by humans for themselves, dividing the spoils among each other.

Maybe but maybe not. Dividing the spoils paints a picture of the one-time destruction of the human race, and it may well be to the advantage of the AIs not to kill off the humans. After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.

You definitely don't want an FAI to unpredictably change its terminal values. Figuring out how to reliably prevent this kind of thing from happening, even in a strongly self-modifying mind (which humans aren't), is one of the sub-problems of the FAI problem.

The FAI may be an unsolvable problem, if by FAI we mean an AI into which certain limits are baked. This has seemed dubious ever since Asimov. The idea of baking in rules of robotics has long seemed to me to fundamentally misunderstand both the nature of morality and the nature of intelligence. But time will tell.

Comment by Constant2 on The Meaning of Right · 2008-07-30T16:10:00.000Z · LW · GW

An AI can indeed have preferences that conflict with human preferences, but if it doesn't start out with such preferences, it's unclear how it comes to have them later.

We do not know very well how the human mind does anything at all. But that the human mind comes to have preferences it did not have initially cannot be doubted. For example, babies do not start out preferring Bach to Beethoven or Beethoven to Bach, but adults are able to develop that preference, even if it is not clear at this point how they come to do so.

If you could do so easily and with complete impunity, would you organize fights to death for your pleasure?

Voters have the ability to vote for policies and to do so easily and with complete impunity (nobody retaliates against a voter for his vote). And, unsurprisingly, voters regularly vote to take from others to give unto themselves - which is something they would never do in person (unless they were criminals, such as muggers or burglars). Moreover humans have an awe-inspiring capacity to clothe their rapaciousness in fine-sounding rhetoric.

Moreover, humans are often tempted to do things they know they shouldn't, because they also have selfish desires. AIs don't if you don't build it into them.

Conflict does not require selfish desires. Any desire, of whatever sort, could potentially come into conflict with another person's desire, and when there are many minds each with its own set of desires then conflict is almost inevitable. So the problem does not, in fact, turn on whether the mind is "selfish" or not. Any sort of desire can create the conflict, and conflict as such creates the problem I described. In a nutshell: evil men need not be selfish. A man such as Pol Pot could indeed have wanted nothing for himself and still ended up murdering millions of his countrymen.

Comment by Constant2 on The Meaning of Right · 2008-07-30T13:34:00.000Z · LW · GW

A tendency to become corrupt when placed into positions of power is a feature of some minds.

Morality in the human universe is a compromise between conflicting wills. The compromise is useful because the alternative is conflict, and conflict is wasteful. Law is a specific instance of this, so let us look at property rights: a property right is a decision-making procedure for deciding between conflicting desires concerning the owned object. There really is no point in even having property rights except in the context of the potential for conflict. Remove conflict, and you remove the raison d'être of property rights, and more generally the raison d'être of law, and more generally the raison d'être of morality. Give a person power, and he no longer needs to compromise with others, and so for him the raison d'être of morality vanishes and he acts as he pleases.

The feature of human minds that renders morality necessary is the possibility that humans can have preferences that conflict with the preferences of other humans, thereby requiring a decision-making procedure for deciding whose will prevails. Preference is, furthermore, revealed in the actions taken by a mind, so a mind that acts has preferences. So all the above is applicable to an artificial intelligence if the artificial intelligence acts.

What makes you think a human-designed AI would be vulnerable to this kind of corruption?

I am assuming it acts, and therefore makes choices, and therefore has preferences, and therefore can have preferences which conflict with the preferences of other minds (including human minds).

Comment by Constant2 on The Meaning of Right · 2008-07-29T20:08:00.000Z · LW · GW

We've been told that a General AI will have power beyond any despot known to history.

If that will be then we are doomed. Power corrupts. In theory an AI, not being human, might resist the corruption, but I wouldn't bet on that. I do not think it is a mere peculiarity of humanity that we are vulnerable to corruption.

We humans are kept in check by each other. We might therefore hope, and attempt to engineer, a proliferation of self-improving AIs, to form a society and to keep each other in check. With luck, cooperative AIs might be more successful at improving themselves - just as honest folk are for the most part more successful than criminals - and thus tend for the most part to out-pace the would-be despots.

As far as how a society of AIs would relate to humans, there are various possibilities. One dystopia imagines that humans will be treated like lower animals, but this is not necessarily what will happen. Animals are not merely dumb, but unable to take part in mutual respect of rights. We humans will always be able to take part in such mutual respect, and so might well remain forever recipients of AI respect for our rights, however far the AIs evolve past us. We may of course be excluded from aspects of AI society which we humans are not able to handle, just as we exclude animals from rights. We may never earn hyper-rights, whatever those may be. But we might retain our rights.

Comment by Constant2 on When (Not) To Use Probabilities · 2008-07-24T15:26:36.000Z · LW · GW

I say go ahead and pick a number out of the air,

A somewhat arbitrary starting number is also useful as a seed for a process of iterative approximation to a true value.
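
A minimal sketch of that use of a seed (my own example): Newton's method for a square root converges to the true value from an almost arbitrary starting guess.

```python
# The starting guess is picked out of the air; the iteration itself
# does the work of pulling it toward the true value.
def newton_sqrt(target, guess=100.0, steps=12):
    for _ in range(steps):
        guess = (guess + target / guess) / 2  # average guess with target/guess
    return guess

print(newton_sqrt(2.0))  # ~1.4142135..., despite the wild seed of 100
```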

Comment by Constant2 on Can Counterfactuals Be True? · 2008-07-24T14:16:03.000Z · LW · GW

how you can talk about probabilities without talking about several possible worlds

But if probability is in the mind, and the mind in question is in this world, why are other worlds needed? Moreover (from Wikipedia):

In Bayesian theory, the assessment of probability can be approached in several ways. One is based on betting: the degree of belief in a proposition is reflected in the odds that the assessor is willing to bet on the success of a trial of its truth.

Disposition to bet surely does not require a commitment to possible worlds.
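
To make the betting reading concrete (my own numbers, and no possible worlds required): a bettor's acceptable stakes translate directly into a degree of belief.

```python
# If you will risk `stake_for` to win `stake_against` should the
# proposition prove true, the implied degree of belief is the
# break-even probability of that bet.
def belief_from_stakes(stake_for, stake_against):
    return stake_for / (stake_for + stake_against)

print(belief_from_stakes(3, 1))  # risking 3 to win 1 implies belief 0.75
```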

Comment by Constant2 on Probability is Subjectively Objective · 2008-07-16T09:02:27.000Z · LW · GW

Elephants are not properties of physics any more than probabilities are. The concept of an elephant is subjective - as are all concepts.

If you are indeed agreeing with the parallel I have set up between probability and elephants and if this is not just your own personal view, then perhaps the subjectivist theory of probability should more properly be called the subjectivist theory of pretty much everything that populates our familiar world. Anyway, I think I can agree that probability is as subjective and as psychological and as non-physical and as existing in the mind and not in the world as an elephant or, say, an exploding nuclear bomb - another item that populates our familiar world.

Comment by Constant2 on Probability is Subjectively Objective · 2008-07-16T07:02:32.000Z · LW · GW

With complete information (and a big computer) an observer would know which way the coin would land - and would find probabilities irrelevant.

But this is true of most everyday observations. We observe events on a level far removed from the subatomic level. With complete information and infinite computing power an observer would find all or virtually all ordinary human-level observations irrelevant. But irrelevancy to such an observer is not the same thing as non-reality. For example, the existence of elephants would be irrelevant to an observer who had complete information on the subatomic level and sufficient computing power to deal with it. But it does not follow that elephants do not exist. Do you think it follows that elephants do not exist?

The probabilities arise from ignorance and lack of computing power - properties of observers, not properties of the observed.

The concept of an elephant could with equal reason be said to arise from ignorance and lack of computing power. I can certainly understand that a thought such as, "the elephant likes peanuts, therefore it will accept this peanut" is a much easier thought to entertain than a thought that infallibly tracks every subatomic particle in its body and in the environment around it. So, certainly, the concept of an elephant is a wonderful shortcut. But I'm not so sure about getting from this to the conclusion that elephants (like probability) are subjective. Do you think that elephants are subjective?

Comment by Constant2 on Probability is Subjectively Objective · 2008-07-15T22:05:05.000Z · LW · GW

If such ideas seem unproblematic to you

It is the example that seems on the face of it unproblematic. I am open to either (a) a demonstration that it is compatible with subjectivism[*], or (b) a demonstration that it is problematic. I am open to either one. Or to something else. In any case, I don't adhere to frequentism.

[*] (I made no firm claim that it is not compatible with subjectivism - you are the one who rejected the compatibility - my own purpose was only to raise the question since it seems on the face of it hard to square with subjectivism, not to answer the question definitively.)

Comment by Constant2 on Probability is Subjectively Objective · 2008-07-15T20:07:26.000Z · LW · GW

Jaynes' perspective on the historical behaviour of biased coins would make no mention of probability - unless he was talking about the history of the expectations of some observer with partial information about the situation. Do you see anything wrong with that?

I see nothing wrong with that. Similarly, if someone mentions only the atoms in my body, and never mentions me, there is nothing wrong with that. However, I am also there.

What I have pointed out is that seemingly unproblematic statements can indeed be made of the sort that I described. That Jaynes himself makes no such statements says nothing one way or another about this. There are different possible responses, including:

1) It might be shown that certain classes of factual statements about history, including the one I gave, are in fact in some sense relative, may incorporate a tacit perspective and therefore may be in that sense subjective. An example of such a statement might be a statement that an object is "at rest" rather than "in motion". This statement tacitly presupposes a frame of reference, and so is in that sense not fully objective.

2) It might be shown that there was something wrong about the sort of statement that I gave as an example.

Comment by Constant2 on Probability is Subjectively Objective · 2008-07-15T18:55:33.000Z · LW · GW

Not under the view we are discussing.

That was my point.

Comment by Constant2 on Probability is Subjectively Objective · 2008-07-14T18:21:16.000Z · LW · GW

Probability isn't only used as an expression of a person's own subjective uncertainty when predicting the future. It is also used when making factual statements about the past. If a coin was flipped yesterday and came up heads 60% of the time, then it may have been a fair coin which happened to come up heads 60% of the time, or it may have been a trick, biased coin whose bias caused it to come up heads 60% of the time. To say that a coin is biased is to make a statement about probability. As Wikipedia explains:

In probability theory and statistics, a sequence of independent Bernoulli trials with probability 1/2 of success on each trial is metaphorically called a fair coin. One for which the probability is not 1/2 is called a biased or unfair coin.

So a statement about probability can enter into a factual claim about the causes of past events.
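
A small Bayesian sketch of how such a factual claim works, with made-up numbers: given 60 heads in 100 flips, compare the fair-coin hypothesis against a hypothetical coin biased 0.6 toward heads.

```python
from math import comb

def binomial_likelihood(p, heads=60, flips=100):
    # probability of seeing exactly `heads` heads if P(heads) = p
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

prior_fair = 0.5  # assume a 50/50 prior between the two hypotheses
lik_fair, lik_biased = binomial_likelihood(0.5), binomial_likelihood(0.6)
post_fair = (lik_fair * prior_fair) / (
    lik_fair * prior_fair + lik_biased * (1 - prior_fair))
print(f"P(fair coin | 60 heads in 100 flips) = {post_fair:.3f}")  # ~0.12
```

The hypotheses being compared are claims about the coin's past, and each is itself stated in terms of probability, which is the point above.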

Comment by Constant2 on Is Morality Given? · 2008-07-07T21:25:00.000Z · LW · GW

There aren't necessarily any common elements, besides utterly trivial ones.

Maybe, maybe not. You won't know without looking. You have to start somewhere.

If you look at examples of misspelled words in various languages and examine their individual properties, you won't find what unites them in a category.

But then, what about correctly spelled words? There will be many observable systematic relationships between those. I happen to think you have the analogy backwards. In the good/evil dichotomy, it is the evil acts, not the not-evil acts, which are narrowly defined and systematically related (I think). If you try to find what is in common between the not-evil acts, those are the acts which have nothing in particular in common. Meanwhile, in the well-spelled/misspelled dichotomy, it is the correctly-spelled words that are narrowly defined and systematically related. In short, I think morality is fundamentally a narrow set of prohibitions rather than a narrow set of requirements. In contrast, the rules of spelling form a narrow set of requirements.

But whether you are right or I am right is something that we won't know without looking.

You have to understand their relationship to the spelling rules in the various languages - rules which themselves are likely to be incompatible and mutually incoherent - to understand what properties make them examples of 'misspelled words'.

Nobody told Galileo and Newton what the rules generating the world's behavior were, but they were able to go a long way toward figuring them out. And isn't that what science is? If you claim that the science can't start without knowing the rules first, then aren't you asserting that science is hopeless?

Comment by Constant2 on Is Morality Given? · 2008-07-07T17:48:00.000Z · LW · GW

Are you willing to have a neverending discussion, with everyone talking past each other, and no working definition for the central concept we're supposed to be examining?

I'm not in charge of the discussion, so it's not a question of what I'm willing to do. I've told you how to get the starting definition you're looking for. As I said: you can start with an ostensive definition by listing examples of evil acts. Then you can find common elements. For example, it might become apparent, after surveying them, that evil acts have in common that they all have victims against whose will the evil acts were committed and who are harmed by the evil acts. It might also become apparent that the evil acts involved one or another form of transgression or trespass against certain boundaries. You might like to study what the boundaries are.

Comment by Constant2 on Is Morality Given? · 2008-07-07T17:00:00.000Z · LW · GW

Basic scientific methodology - you can't study what you can't produce a provisional definition for. Once you have that, you can learn more about what's defined, but you don't get anywhere without that starting point.

The first concepts that more or less denoted, say, water may have included things which today we would reject as not water (e.g., possibly clear alcohol), may have failed to distinguish water from things dissolved in it, and may have excluded forms of water (such as steam and ice). The very first definitions of water were probably ostensive definitions (this here is water, that is water) rather than descriptive or explanatory definitions. The definitions were subject to revision as knowledge improved.

Are you willing to accept an ostensive and potentially erroneous definition of morality that may very well be subject to revision as knowledge improves? One is easy enough to supply by listing a bunch of acts currently believed to be evil, then listing a bunch of believed-to-be morally neutral acts, and pointing out that the first group is evil and the second group isn't. Would that be satisfactory?

Is it an arbitrary grouping, or do we use the label to refer to certain properties that things in that grouping possess?

I think the better question is, do recognized examples of evil have something in common - never mind what we intend by the label. Maybe by the label "water" we initially intended "Chronos's tears" or some such useless thing. The intention isn't necessarily of any particular interest. You are interested in scientific inquiry into morality, yes? - seeing as you talk about "scientific methodology." Science studies the properties of things in themselves independently of whatever nonsense ideas we might have about them; if you want to study our intents then become a philosopher, not a scientist.

Anyway, this question - do examples of evil have something in common - is something for the scientists to answer, no? It doesn't need to be answered before scientific inquiry begins.

Comment by Constant2 on Is Morality Given? · 2008-07-07T04:58:00.000Z · LW · GW

Constant, if moral truths were mathematical truths, then ethics would be a branch of mathematics. There would be axiomatic formalizations of morality that do not fall apart when we try to explore their logical consequences. There would be mathematicians proving theorems about morality. We don't see any of this.

If Tegmark is correct, then everything is mathematics. Do you dispute Tegmark's claim that "there is only mathematics; that is all that exists"? Do you think your argument is any good against Tegmark's hypothesis? Will you tell Tegmark, "the department of physics and the department of biology are separate departments from the department of mathematics, and therefore you are wrong"? I don't think it is quite so easy to dismiss Tegmark's hypothesis merely on the basis that all the sciences are not treated as branches of mathematics. Tegmark's point is that something that we don't realize is mathematics nevertheless is mathematics. All your observation shows is that we don't treat it as mathematics. Which doesn't even touch Tegmark's hypothesis.

Isn't it simpler to suppose that morality was a hypothesis people used to explain their moral perceptions (such as "murder seems wrong") before we knew the real explanations, but now we find it hard to give up the word due to a kind of memetic inertia?

Moral truths pass some basic criteria of reality. They are, importantly, not a matter of opinion. If, as some claim, morality is intuitive game theory (which I think is very much on track), then morality is not a matter of opinion, because whether something is or is not a good strategy is not a matter of opinion. Optimal strategies are what they are regardless of what we think, and therefore pass an important criterion of reality.

Now, there seem to be some who think that discovering that morality is intuitive game theory debunks its reality. But to my mind that is a bit like thinking that discovering what fire is debunks the idea that fire is real. It does not: discovering what something is does not debunk it; if anything it reaffirms its reality. If fire is a kind of exothermic chemical reaction then it is most definitely not just in my imagination! And if morality is intuitive game theory then it is most definitely not just in my imagination.

And game theory happens to be... guess what... Starts with an "m".

Comment by Constant2 on Is Morality Given? · 2008-07-07T02:52:58.000Z · LW · GW

Dynamically Linked writes: But, it seems pretty obvious, at least to me, that game theory, evolutionary psychology, and memetics are not contingent on anything except mathematics and the environment that we happened to evolve in.

According to Tegmark "there is only mathematics; that is all that exists". Suppose he is right. Then moral truths, if there are any, are (along with all other truths) mathematical truths. Unless you presuppose that moral truths cannot be mathematical truths then you have not ruled out moral truths when you say that so-and-so is not contingent on anything except mathematics and such-and-such. For my part I fail to see why moral truths could not be mathematical truths.

Before I go on, do you actually believe this [Bayesian net diagram] to be the case?

I'm sorry to say that I can't read Bayesian net diagrams. Hopefully I answered your question anyway.

Comment by Constant2 on Is Morality Given? · 2008-07-07T01:30:41.000Z · LW · GW

Z. M. Davis writes: ... objective illness is just as problematic as objective morality

I would argue that to answer Robin's challenge is not necessarily to assert that there is such a thing as objective illness.

Accounts have been given of the pressure producing the ability to see beauty (google sexual selection or see e.g. this). This does not require that there is some eternal beauty written in the fabric of the universe - it may be, for example, that each species has evolved its own standard of beauty, and that selection is operating on both sides, i.e., selecting against individuals who are insufficiently beautiful and also selecting against admirers who differ too far from the norm.

However, this evolutionary concept of "illness" cannot be the ordinary meaning of the word, because no one actually cares about fitness.

My argument is: people can distinguish illness because it enhances their fitness to do so. Compare this to the following argument: people can distinguish the opposite sex because it enhances their fitness to do so. Now, okay, suppose that people don't care about fitness, as you say. Nevertheless, unbeknownst to them, telling women apart from men enhances their fitness. Similarly for illness.

Take homosexuality. It's often considered a mental disorder, but if someone is gay and happy being so, I would challenge (as evil, even) any attempt to define them as "ill" in anything more than the irrelevant evolutionary sense.

Homosexuality reduces fitness (so you seem to agree), but this does not make it an illness. Not everything that reduces fitness is an illness. Rather, illness tends to reduce fitness. Let me put it this way. Blindness tends to reduce fitness. But not everything that reduces fitness is blindness. Similarly, illness tends to reduce fitness. But that doesn't mean that everything that reduces fitness is illness.

... that which the patient desires in herself is health, and that which the patient does not desire in herself is sickness.

We can similarly say, that which a person desires in a mate is beauty. However, I think the most that can be said for this is that it is one concept of beauty. It is not the only concept. The idea that there is a shared standard of beauty is, despite much thought and argument to the contrary, still with us, and not illegitimate.

Comment by Constant2 on Is Morality Given? · 2008-07-06T19:48:14.000Z · LW · GW

Richard, we can understand how there would be evolutionary pressure to produce an ability to see light, even if imperfect. But what possible pressure could produce an ability to see morality?

Let's detail the explanation for light to see if we can find a parallel explanation for morality. Brief explanation for light: light bounces off things in the environment in a way which can in principle be used to draw correct inferences about distant objects in the environment. Eventually, some animals evolve a mechanism for doing just this.

Let's attempt the same for morality. Brief explanation for morality: unlike light, evil is not a simple thing that comes in its own fundamental particles. It is more similar to illness. An alien looking at a human cell might not, from first principles, be able to tell whether the cell was healthy or sick - e.g. whether it has not, or has, fallen victim to an attack rewriting its genetic code. The alien may need to look at the wider context in order to draw a distinction between a healthy cell and an ill cell, and by extension, between a healthy human and an ill human. Nevertheless, illness is real and we are able to tell the difference between illness and health. We have at least two reasons for doing this: an illness might pass to us (if it is infectious), and if we select an ill partner for producing offspring we may produce no offspring.

Evil is more akin to illness than to light, and is even more akin to mental illness. Just to continue the case of mating, if we select a partner who is unusually capable of evil (as compared to the human average) then we may find ourselves dead, or harmed, or at odds with our neighbors who are victimized by our partner. If we select a business partner who is honest then we have an advantage over someone who selects a business partner who is dishonest. In order to tell apart an evil person from a good person we need to be able to distinguish an evil act from a good act.

This is only part of it, but there's a 400-word limit.

Comment by Constant2 on Is Morality Given? · 2008-07-06T15:49:14.000Z · LW · GW

But we already know why murder seems wrong to us. It's completely explained by a combination of game theory, evolutionary psychology, and memetics. These explanations screen off our apparent moral perceptions from any other influence. In other words, conditioned on these explanations being true, our moral perceptions are independent of (i.e. uncorrelated with) any possible morality-as-given, even if it were to exist.

Let's try the argument with mathematics: we know why we think 5 is a prime number. It's completely explained by our evolution, experiences, and so on. Conditioned on these explanations being true, our mathematical perceptions are independent of mathematical-truth-as-given, even if it were to exist.

The problem is that mathematical-truth-as-given may shape the world and therefore shape our experiences. That is, we may have had the tremendous difficulty we did in factorizing the number 5 precisely because the number 5 is in fact a prime number. So one place where one could critique your argument is in the bit that goes: "conditioned on X being the case, our beliefs are independent of Y." The critique is that X may in fact be a consequence of Y, in which case X is itself not independent of Y.

Comment by Constant2 on Is Morality Preference? · 2008-07-05T15:48:16.000Z · LW · GW

What we know about the causal origins of our moral intuitions doesn't obviously give us reason to believe they are correlated with moral truth.

But what we know about morality, we know purely thanks to the causal origin. If you see no obvious connection to moral truth, then either it is purely a coincidence that we happen to believe correctly, or else it is not and you're failing to see something. If it is purely a coincidence, then we may as well give up now.

Comment by Constant2 on The Bedrock of Fairness · 2008-07-03T16:20:14.000Z · LW · GW

Yet most people in a situation of near simultaneity find it easier (or perhaps just safer?) to assume they had arrived simultaneously and come to agreement on dividing the pie 'fairly', rather than argue over who got there first.

You are claiming it is a common practice. But common practice is common practice - not necessarily "fairness". We often do things precisely because they are commonly done. One common practice which is not equal: when two cars arrive at the same intersection at right angles, the car on the right has the right of way. This is the common practice, and we do it because it is common practice, and it is common practice because we do it.

Even if it is not common practice, dividing it into thirds may well be apt to occur to most people. This makes it a likely Schelling point. Schelling points aren't about fairness either. They are about trying to predict what the other guy will predict that you predict, all without communicating with each other. You can use a Schelling point to try to find each other in a large city without a prior agreement on where to meet. Each of you tries to figure out what location the other will choose, keeping in mind that the other guy is trying to pick the location which you're most likely to predict he's going to pick (and you can probably keep recursing).

If all we're trying to do is come to an agreement there is no need to get deeply philosophical about fairness per se.

Comment by Constant2 on The Bedrock of Fairness · 2008-07-03T15:26:20.000Z · LW · GW

If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?

If, indeed, it requires that we imagine a flying pie-baking monster in order to come up with a situation in which the concept of 'fairness' is actually relevant (e.g. not immediately trumped by an external factor), then it suggests that the concept of 'fairness' is in the real world virtually irrelevant. I notice also that the three have arrived separately and exactly simultaneously, another rarity, but also important to make 'fairness' an issue.

Comment by Constant2 on The Bedrock of Fairness · 2008-07-03T14:33:51.000Z · LW · GW

And then they discover, in the center of the clearing, a delicious blueberry pie.

If the pie is edible then it was recently made and placed there. Whoever made it is probably close at hand. That person has a much better claim on the pie than these three and is therefore most likely rightly considered the owner. Let the owner of the pie decide. If the owner does not show up, leave the pie alone. Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.

Comment by Constant2 on What Would You Do Without Morality? · 2008-06-30T17:07:00.000Z · LW · GW

Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!

To use Eliezer's terminology, you seem to be saying that "morality" is a 2-place word:

Morality: Species, Act -> [0, ∞)

which can be "curried", i.e. can "eat" the first input to become a 1-place word:

Homosapiens::Morality == Morality_93745
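
A minimal Python rendering of that currying move (the scoring table is made up, and "Morality_93745" is just an arbitrary label for the curried result):

```python
# 2-place word: Morality(species, act) -> a score in [0, inf).
# Currying: feed in the species, get back a 1-place word.
def morality(species):
    scores = {("homo sapiens", "sharing food"): 1.0,
              ("homo sapiens", "theft"): 0.0}  # hypothetical values
    def morality_for(act):
        return scores.get((species, act), 0.5)  # default: neutral
    return morality_for

human_morality = morality("homo sapiens")  # the curried 1-place word
print(human_morality("sharing food"))      # 1.0
```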

Comment by Constant2 on The Moral Void · 2008-06-30T16:19:02.000Z · LW · GW

I think we must conclude that morality is a means, not an end in itself.

Morality is commonly thought of neither as a means nor as an end, but as a constraint. This view is potentially liberating, by contrast with the conception of morality as a means to an end, which implies that any two possible actions can be compared to see which is the better means to the end and therefore which is the more moral. To choose the less moral of the two is, on that conception, the very definition of immoral. Thus on that conception, our lives are in principle mapped out for us in the minutest detail, because at each point it is immoral to fail to take the unique most moral path.

An alternative conception is that morality is a set of constraints, and within those constraints you are free to do whatever you like without your choice being immoral. This is potentially liberating, because if the constraints are minimal (and on most conceptions they are) then our lives are not mapped out for us.

Comment by Constant2 on Possibility and Could-ness · 2008-06-20T12:53:00.000Z · LW · GW

Hopefully - "Choice" doesn't seem to enter into it, in my opinion, because the person may be functionally bounded to one, determined pathway, perhaps analogous to the way that I'm bounded from flying to the moon.

He may indeed have a determined path, but as Eliezer has attempted to argue, this is not incompatible with saying that he has a choice.

I think it only adds to the main economic theories to remain reasonably skeptical about the concept of choice

And I think that it rips them apart, because they are woven together from the concept of choice. Get rid of the concept of choice and it's like grabbing the thread the fabric is made of and confiscating it. But the fabric is made from the thread.

If you get rid of choice, what are you left with? You need to get rid of the concept of alternatives as well, because it is the flip side of choice (a person presented with a set of alternatives is presented with a choice between those alternatives, as recognized in the statement, "you have a choice"). Get rid of choice and you need to get rid of the concept of preference, because what a person prefers between A and B is nothing other than what he would choose if given the choice between A and B. Get rid of preference, and you get rid of indifference, so you get rid of indifference curves. Supply and demand are built on indifference curves, so you get rid of supply and demand. Get rid of supply and demand and you get rid of price theory.

Comment by Constant2 on Possibility and Could-ness · 2008-06-20T04:01:00.000Z · LW · GW

Hopefully, you are not addressing an important distinction. You haven't said what is to be done with it. The passage that I quoted includes these words:

while another bundle of goods is affordable

The bundles of goods that are affordable are precisely the bundle of goods among which we choose.

Hopefully writes: constant: buys, eats, etc. Here it's not any more necessary to assert or imply deliberation (which is what I think you mean by saying "choice")

No, it is not what I mean. A person chooses among actions A, B, and C, if he has the capacity to perform any of A, B, or C, and in fact performs (say) C. It does not matter whether he deliberates or not. The distinction between capacity and incapacity takes many forms; in the definition which I quoted the capacity/incapacity distinction takes the form of an affordability/unaffordability distinction.

Comment by Constant2 on Possibility and Could-ness · 2008-06-15T09:10:41.000Z · LW · GW

Joseph - "Choice" is, I should think, more like "fire" and "heat" than like "phlogiston" and "caloric". We have abandoned the last two as outdated scientific theories, but have not abandoned the first two even though they are much older concepts, presumably because they do not represent scientific theories but rather name observable mundane phenomena.

Comment by Constant2 on Why Quantum? · 2008-06-05T01:04:08.000Z · LW · GW

I think the more fundamental reason most physicists working in the foundations of quantum mechanics don't believe in many-worlds is that those who do believe in many worlds consider the foundations problem to be solved, and see no need to work on it anymore.

Bravo. This potential for systematic bias on certain questions can be generalized and ought to have a name. It suggests that we should reduce the weight that we place on expert opinion on certain questions in any field, to the extent that the choice to work in the field will depend on how a person answers those questions.

So when we decide whether to rely on expert opinion, we ought to keep in mind that certain biases will tend to afflict precisely the experts, making non-experts in some cases more reliable guides.

Comment by Constant2 on Timeless Identity · 2008-06-03T16:32:32.000Z · LW · GW

note that Parfit is describing thought experiments, not necessarily endorsing them.

I spy with my little eye something beginning with D.

Comment by Constant2 on Timeless Physics · 2008-05-27T20:49:24.000Z · LW · GW

If you took one world and extrapolated backward, you'd get many pasts. If you take the many worlds and extrapolate backward, all but one of the resulting pasts will cancel out!

I agree. However, at the same time, we don't actually remember the many extrapolated pasts of the one world we inhabit. Of course, "remembering" multiple extrapolated pasts might be indistinguishable from failing to remember any particular past (e.g., if both X and not-X lie in our extrapolated past, then our "remembering" both X and not-X might be nothing other than failing to remember whether X or not-X).

Comment by Constant2 on Timeless Physics · 2008-05-27T20:41:35.000Z · LW · GW

So, if one looks at the current configuration space for a point of 'now', and works the equations backwards, does one get only one possible past, or an large number of possible pasts? If its the former, how can one claim that the equations are time symmetric? If its the latter, why don't we remember all of those quantum possibilities?

Both. Many possible pasts, because the many worlds are never entirely causally isolated, so we are to some minuscule degree always affected by parallel worlds (though not enough to notice). But one possible past, because only one of these has any great influence on us (or so it seems for the most part - exceptions such as mangled worlds aside). If you want to know how it is possible to have symmetric equations but at the same time the asymmetry of division of worlds in one direction in time, the standard explanation is thermodynamic. It's fundamentally the same reason that if you drop a glass it breaks, but if you drop the shards of glass they don't spontaneously mend themselves.

Comment by Constant2 on Relative Configuration Space · 2008-05-27T05:02:20.000Z · LW · GW

I think making certain types of physics impossible to imagine is not such a great idea. What if it turns out that we need those types of physics to describe our universe?

Well, it's not literally impossible to imagine; it's just incoherent in that model. If a seeming redundancy in an older model turns out not to be redundant, we can always backtrack.

Comment by Constant2 on Decoherence is Simple · 2008-05-06T15:35:25.000Z · LW · GW

Without something like mangled worlds, one can be tempted by an objective collapse view, as that at least gives a coherent account of the Born rule.

Does it really account for it in the sense of explain it? I don't think so. I think it merely says that the collapsing occurs in accordance with the Born rule. But we can also simply say that many-worlds is true and the history of our fragment of the multiverse is consistent with the Born rule. Admittedly, this doesn't explain why we happen to live in such a fragment but merely asserts that we do, but similarly, the collapse view does not (as far as I know) explain why the collapse occurs in the frequencies it does but merely asserts that it does.

Comment by Constant2 on On Being Decoherent · 2008-04-27T08:00:06.000Z · LW · GW

You also get the same Big World effect from the inflationary scenario in the Big Bang, which buds off multiple universes. And both spatial infinity and inflation are implied by the Standard Model of physics.

How exactly do you get spatial infinity from a big bang in finite time? The stories I hear about the big bang are that the universe was initially very, very small. If it was small then it was finite. How does an object (such as the universe) grow from finite size to infinite size in finite time?

Comment by Constant2 on The So-Called Heisenberg Uncertainty Principle · 2008-04-23T22:22:40.000Z · LW · GW

Am I correct in assuming that this is independent of (observations, "wave function collapses", or whatever it is when we say that we find a particle at a certain point)?

Wavefunction collapses are unpredictable. The claim in The Quantum Arena, if your summary is right, is that subsequent amplitude distributions are predictable if you know the entire current amplitude distribution. The amplitude distribution is the wavefunction. Since wavefunction collapses are unpredictable but the wavefunction's progression is claimed to be predictable, wavefunction collapses are logically barred from existing if the claim is true. From this follows the no-collapse "interpretation" of quantum mechanics, a.k.a. the many-worlds interpretation. Eliezer's claim, then, is expressing the many-worlds interpretation of QM. The seeming collapse of the wavefunction is only apparent. The wavefunction has not, in fact, collapsed. In particular, when you find a particle at a certain point, then the objective reality is not that the particle is at that point and not at any other point. The you which sees the particle at that one point is only seeing a small part of the bigger objective picture.

If I observe the particle on the opposite side of the moon at some point (i.e. where the amplitude is non-zero, but still tiny), does the particle still have the same probability as before of "jumping" back onto the line from x to y?

Yes and no. Objectively, it still has the same probability of being on the line from x to y. But the you who observes the particle on the opposite side of the moon will from that point forward only see a small part of the bigger objective picture, and what that you will see will (almost certainly) not be the particle jumping back onto the line from x to y. So the subjective probability relative to that you is not the same as the objective probability.

Now, let me correct my language. Neither of these probabilities is objective. What I called "objective" was in fact the subjective probability relative to the you who witnessed the particle starting out at x but did not (yet) witness the particle on the other side of the moon.

Comment by Constant2 on Configurations and Amplitude · 2008-04-10T18:36:43.000Z · LW · GW

aren't you writing this on the wrong blog?

As far as I know Robin doesn't actually have a separate economics blog, and he seems to drop any economics topic that interests him into this one, so neither Eliezer nor Robin always sticks closely to the "bias" theme. Does it really matter?

Comment by Constant2 on Quantum Explanations · 2008-04-09T12:24:00.000Z · LW · GW

Why isn't this an example of the mind projection fallacy?

Surely it is not fallacious to subscribe to the Many-worlds interpretation (which is surely what Eliezer is talking about). If this is the sort of use to which the "mind projection fallacy" is put, then it turns out merely to be a cheap way to put down competing interpretations of the math.