Comments

Comment by J_Thomas2 on The Cartoon Guide to Löb's Theorem · 2008-08-22T23:35:00.000Z · LW · GW

I thought of a simpler way to say it.

If Hillary Clinton was a man, she wouldn't be Bill Clinton's wife. She'd be his husband.

Similarly, if PA proved that 6 was prime, it wouldn't be PA. It would be Bill Clinton's husband. And so ZF would not imply that 6 is actually prime.

Comment by J_Thomas2 on The Cartoon Guide to Löb's Theorem · 2008-08-22T23:26:00.000Z · LW · GW

Larry, you have not proven that 6 would be a prime number if PA proved 6 was a prime number, because PA does not prove that 6 is a prime number.

The theorem is only true for the phi that it's true for.

The claim that phi must be true -- because if it's true then it's true, and if it's false then "if PA |- phi then phi" still counts as officially true whenever PA does not prove phi -- is bogus.

It's simply and obviously bogus, and I don't understand why there was any difficulty about seeing it.

Comment by J_Thomas2 on No License To Be Human · 2008-08-22T22:53:53.000Z · LW · GW

Caledonian, it's possible to care deeply about choices that were made in a seemingly-arbitrary way. For example, a college graduate who takes a job in one of eight cities where he got job offers might within the year care deeply about that city's baseball team. But if he had taken a different job it would be a completely different baseball team.

You might care about the result of arbitrary choices. I don't say you necessarily will.

It sounds like you're saying it's wrong to care about morals unless they're somehow provably correct? I'm not sure I get your objection. I want to point out that usually when we have a war, most of the people in each country choose sides based on which country they are in. Less than 50% of Americans chose to oppose the current Iraq fiasco before it happened. Imagine that Russia had invaded Iraq with the same pretexts we used, all of which would have worked as well for Russia as they did for us. Russians had more reason than us to fear Iraqi nukes, they didn't want Iraq supporting Chechen terrorists, they thought Saddam was a bad man, etc. Imagine the hell we would have raised about it.... But I contend that well over a hundred million Americans supported the war for no better reason than that they were born in America, and so they supported invasions by the US military.

Whether or not there's some higher or plausible morality that says we should not choose our morals at random, still the fact is that most of us do choose our morals at random.

Comment by J_Thomas2 on Invisible Frameworks · 2008-08-22T16:08:43.000Z · LW · GW

I haven't read Roko's blog, but from the reflection in Eliezer's opposition I find I somewhat agree.

To the extent that morality is about what you do, the more you can do the higher the stakes.

If you can drive a car, your driving amplifies your ability to do good. And it amplifies your ability to do bad. If you have a morality that leaves you doing more good than bad, and driving increases the good and the bad you do proportionately, then your driving is a good thing.

True human beings have an insatiable curiosity, and they naturally want to find out about things, and one of the things they like is to find out how to do things. Driving a car is a value in itself until you've done it enough that it gets boring.

But if you have a sense of good and bad apart from curiosity, then it will probably seem like a good thing for good smart people to get the power to do lots of things, while stupid or evil people should get only powers that are reasonably safe for them to have.

Comment by J_Thomas2 on No License To Be Human · 2008-08-22T15:28:14.000Z · LW · GW

"You should care about the moral code you have arbitrarily chosen."

No, I shouldn't. Which seems to be the focal point of this endless 'debate'.

Well, you might choose to care about a moral code you have arbitrarily chosen. And it could be argued that if you don't care about it then you haven't "really" chosen it.

I agree with you that there needn't be any platonic absolute morality that says you ought to choose a moral code arbitrarily and care about it, or that if you do happen to choose a moral code arbitrarily that you should then care about it.

Comment by J_Thomas2 on No License To Be Human · 2008-08-22T15:22:44.000Z · LW · GW

We are born with some theorems of right (in analogy to PA).

Kenny, I'd be fascinated to learn more about that. I didn't notice it in my children, but then I wouldn't necessarily notice.

When I was a small child people claimed that babies are born with only a fear of falling and a startle reflex for loud noises. I was pretty sure that was wrong, but it wasn't clear to me what we're born with. It takes time to learn to see. I remember when I invented the inverse square law for vision, and understood why things get smaller when they go farther away. It takes time to notice that parents have their own desires that need to be taken into account.

What is it that we're born with? Do you have a quick link maybe?

Comment by J_Thomas2 on The Cartoon Guide to Löb's Theorem · 2008-08-22T15:08:00.000Z · LW · GW

Larry, one of them is counterfactual.

If you draw implications from a false assumption then the result is useful only to show that the assumption is false.

So if PA -> 1=2 then PA -> 1<>2. How is that useful?

If PA -> 6 is prime then PA also -> 6 is not prime.

Once you assume that PA implies something that PA actually implies is false, you get a logical contradiction. Either PA is inconsistent or PA does not imply the false thing.

How can it be useful to reason about what we could prove from false premises? What good is it to pretend that PA is inconsistent?

Comment by J_Thomas2 on No License To Be Human · 2008-08-22T08:53:42.000Z · LW · GW

Honestly I do not understand how you can continue calling Eliezer a relativist when he has persistently claimed that what is right doesn't depend on who's asking and doesn't depend on what anyone thinks is right.

Before I say anything else I want you to know that I am not a Communist.

Marx was right about everything he wrote about, but he didn't know everything, I wouldn't say that Marx had all the answers. When the time is ripe the proletariat will inevitably rise up and create a government that will organize the people, it will put everybody to work according to his abilities and give out the results according to the needs, and that will be the best thing that ever happened to anybody. But don't call me a Communist, because I'm not one.

Oh well. Maybe Eliezer is saying something new and it's hard to understand. So we keep mistaking what he's saying for something old that we do understand.

To me he looks like a platonist. Our individual concepts of "right" are imperfect representations of the real true platonic "right" which exists independently of any or all of us.

I am more of a nominalist. I see our concepts as things that get continually re-created. We are born without any concept of "right" and we develop such concepts as we grow up, with the foundations in our families. The degree to which we develop similar concepts of "right" is a triumph for our societies. There's nothing inevitable about it, but there's a value to moral uniformity that goes beyond the particular beliefs.

So for example about "murder". Americans mostly believe that killing is sometimes proper and necessary. Killing in self defense. Policemen must sometimes kill dangerous criminals. It's vitally necessary to kill the enemy in wartime. Etc. We call it "murder" only when it is not justified, so of course we agree that murder is wrong.

We would be better off if we all agreed about when killing is "right". Is it right to kill adulterous spouses? The people they have sex with? Is it right to kill IRS agents? Blasphemers? Four years ago a man I met in a public park threatened to kill me to keep me from voting for Kerry. Was he right? Whatever the rules are about killing, if we all agreed and we knew where we stood, we'd be better off than when we disagree and don't know who to expect will try to kill us.

And that is why in the new society children will be taken from their parents and raised in common dormitories. Because individual families are too diverse, and they don't all raise their children to understand that "from each according to his abilities, and to each according to his needs" is the most basic and important part of morality.

But don't call me a Communist, I already explained that I wasn't a Communist in my first sentence above.

Comment by J_Thomas2 on The Cartoon Guide to Löb's Theorem · 2008-08-22T07:55:01.000Z · LW · GW

But Larry, PA does not actually say that 6 is prime, and 6 is not prime.

You could say that if PA proved that every theorem is false then every theorem would be false.

Or what would it mean if PA proved that Löb's theorem was false?

It's customary to say that any conclusion from a false premise is true. If 6 is prime then God's in his heaven, everything's right with the world and we are all muppets. Also God's in hell, everything's wrong with the world, and we are all mutant ninja turtles. It doesn't really matter what conclusions you draw from a false premise because the premise is false.

Your argument about what conclusion we could draw if PA said that 6 is prime is entirely based on a false premise. PA does not say that 6 is prime.
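
For reference, the convention being appealed to here is just the material conditional of classical logic:

    $P \rightarrow Q \;\equiv\; \lnot P \lor Q$

A conditional with a false antecedent counts as true no matter what the consequent says, so "if PA proves 6 is prime, then X" comes out true for any X, precisely because PA proves no such thing.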

Comment by J_Thomas2 on The Cartoon Guide to Löb's Theorem · 2008-08-21T23:04:25.000Z · LW · GW

Let me try to say that more clearly.

Suppose that A is false.

How the hell are you going to show that if PA proves A true then A will be true, when A is actually false?

If you can't prove what would happen if PA proved A when A is actually false, then the only way you can prove "if PA proves A is true then A has to be true" is if A is true in the first place.

If this reasoning is correct then there isn't much mystery involved here.

One more time. If PA proves you are a werewolf, then you're really-and-truly a werewolf. PA never proves anything that isn't actually true. OK, say you aren't a werewolf. And I say I have a proof that if PA proves you're a werewolf then you're actually a werewolf. But in reality you aren't a werewolf and PA does not prove you are. How do I prove that PA would do that, when PA does not do that?

Once more through the mill. If PA proves that 6 is a prime number, then 6 is really a prime number. Can you prove that if PA proved 6 was a prime number then 6 would be a prime number? How would you do it?

If you definitely can't prove that, then what does it mean if I show you a proof that if PA proves 7 is a prime number then 7 must actually be prime? If you can't make that proof unless 7 is prime, and you have the proof, then 7 is actually prime.

The problem is with trying to apply material implication when it does not work.
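
For comparison, the standard statement of Löb's theorem is phrased in terms of the arithmetized provability predicate rather than a meta-level material conditional:

    If $PA \vdash \bigl(\mathrm{Prov}_{PA}(\ulcorner \varphi \urcorner) \rightarrow \varphi\bigr)$, then $PA \vdash \varphi$.

That is, the hypothesis is that PA itself proves the sentence "if phi is provable then phi", which is not the same thing as the material conditional "if PA proves phi then phi" merely happening to be true at the meta-level.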

Comment by J_Thomas2 on The Cartoon Guide to Löb's Theorem · 2008-08-21T18:02:50.000Z · LW · GW

I went to the carnival and I met a fortune-teller. Everything she says comes true. Not only that, she told me that everything she says always comes true.

I said, "George Washington is my mother" and she said it wasn't true.

I said, "Well, if George Washington was my mother would you tell me so?" and she refused to say she would. She said she won't talk about what would be true if George Washingto was my mother, because George Washington is not my mother.

She says that everything she says comes true. She looked outside her little tent, and saw the sky was clear. She said, "If I tell you the sky is clear, then the sky is clear."

"But what if it was raining right now? If you told me it was raining, would it be raining?"

"I won't say what I'd tell you if it was raining right here right now, because it isn't raining."

"But if the sky is clear right now you'll tell me the sky is clear."

"Yes. Because it's true."

"So you'll only tell me that (if you say the sky is clear then the sky really is clear), when the sky really is clear?"

"Yes. I only tell you about true things. I don't tell you what I'd do if the world was some other way."

"Oh."

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-21T07:45:00.000Z · LW · GW

The same boy who rationalized a way into believing there was a chocolate cake in the asteroid belt should know better than to rationalize himself into believing it is right to prefer joy over sorrow.

Obviously, he does know. So the next question is, why does he present material that he knows is wrong?

Professional mathematicians and scientists try not to do that because it makes them look bad. If you present a proof that's wrong then other mathematicians might embarrass you at parties. But maybe Eliezer is immune to that kind of embarrassment. Socrates presented lots of obvious nonsense and people don't think badly of him for it.

The usual reasons why not probably don't apply to him. I don't have any certainty why he does it, though.

Comment by J_Thomas2 on When Anthropomorphism Became Stupid · 2008-08-17T03:49:43.000Z · LW · GW

When you try to predict what will happen it works pretty well to assume that it's all deterministic and get what results you can. When you want to negotiate with somebody it works best to suppose they have free will and they might do whatever-the-hell they want.

When you can predict what inanimate objects will do with fair precision, that's a sign they don't have free will. And if you don't know how to negotiate with them, you haven't got a lot of incentive to assume they have free will -- particularly when they're actually predictable.

The more predictable people get, the less reason there is to suppose they have spirits etc. motivating them. Unless it's information about those spirits that you manipulate to predict what they'll do.

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-17T01:20:00.000Z · LW · GW

People keep using the term "moral relativism". I did a Google search of the site and got a variety of topics with the term dating from 2007 and 2008. Here's what it means to me.

Relative moral relativism means you affirm that, to the best of your knowledge, nobody has demonstrated any sort of absolute morality. That people differ in moralities, and that if there's anything objective that says one is right and another is wrong, you haven't seen it. That very likely these different moralities are good for different purposes and different circumstances, and that if a higher morality shows up it's likely to affirm that the different moralities you've heard of each have their place.

This is analogous to being an agnostic about gods. You haven't seen evidence there's any such thing as an objectively absolute morality, so you do not assert that there is such a thing.

Absolute moral relativism accepts all this and takes two steps further. First, the claim is that there is no objective way to judge one morality better than another. Second, the claim is that without any objective absolute morality you should not have any.

This is analogous to being an atheist. You assert that there is no such thing and that people who think there is suffer from fallacious superstitions.

I can be a relative moral relativist and still say "This is my morality. I chose it and it's mine. I don't need it to be objectively true for me to choose it. You can choose something else and maybe it will turn out we can get along or maybe not. We'll see."

Why should you need an absolute morality that's good all times and all places before you can practice any morality at all? Here I am, here's how I live. It works for me. If you want to politely tell me I'm all wrong then I'll listen politely as long as I feel like it.

Comment by J_Thomas2 on When Anthropomorphism Became Stupid · 2008-08-17T00:39:29.000Z · LW · GW

We have quick muscles, so we do computation to decide how to organise those muscles.

Trees do not have quick muscles, so they don't need that kind of computation.

Trees need to decide which directions to grow, and which directions to send their roots. Pee on the ground near a tree and it will grow rootlets in your direction, to collect the minerals you give it.

Trees need to decide which poisons to produce and where to pump them. When they get chewed on by bugs that tend to stay on the same leaf the trees tend to send their poisons to that leaf. When it's bugs that tend to stay nearby the tree sends the poisons nearby. Trees can somewhat sense the chemicals that distressed trees near them make, and respond early to the particular sorts of threats those chemicals indicate.

Is all that built into the trees' genes? Do they actually learn much? I dunno. I haven't noticed anything like a brain in a tree. But I wouldn't know what to look for. Our brains use a lot of energy, we have to eat a lot to maintain them. They work fast. Trees don't need that speed.

I don't know how smart trees are, or how fast they learn. The experiments have not been done.

I don't know how moral animals are that we share no common language with. Those experiments haven't been done either. We can't even design the experiments until we get an operational definition of morality.

What experiment would you perform to decide whether an animal was moral? What experiment would show whether an intelligent alien was moral? What experiment could show whether a human imprisoned for a vicious crime was moral?

If you can describe the experiment that shows the difference, then you have defined the term in a way that other people can reproduce.

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-16T20:11:00.000Z · LW · GW

If you've ever taken a mathematics course in school, you yourself may have been introduced to a situation where it was believed that there were right and wrong ways to factor a number into primes. Unless you were an exceptionally good student, you may have disagreed with your teacher over the details of which way was right, and been punished for doing it wrong.

My experience with math classes was much different from yours. When we had a disagreement, the teacher said, "How would we tell who's right? Do you have a proof? Do you have a counter-example?". And if somebody had a proof we'd listen to it. And if I jumped up and said "Wait, this proof is wrong!" then the teacher would say, "First you have to explain what he said up to the point you disagree, and see if he agrees that's what he means. Then you can tell us why it's wrong."

I never got punished for being wrong. If I didn't do homework correctly then I didn't get credit for it, but there was no punishment involved.

But Eliezer described people who disagreed about how many stones to put in a pile and who had something that looked very much like wars about it. That isn't like the math I experienced. But it's very much like the morality I've experienced.

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-16T17:36:00.000Z · LW · GW

Nominull, don't the primalists have a morality about heaps of stones?

They believe there are right ways and wrong ways to do it. They sometimes disagree about the details of which ways are right and they punish each other for doing it wrong.

How is that different from morality?

Comment by J_Thomas2 on Hot Air Doesn't Disagree · 2008-08-16T17:29:15.000Z · LW · GW

I think there is an important distinction between "kill or die" and "kill or be killed." The wolf's life may be at stake, but the rabbit clearly isn't attacking the wolf. If I need a heart transplant, I would still not be justified in killing someone to obtain the organ.

Mario, you are making a subtler distinction than I was. There is no end to the number of subtle distinctions that can be made.

In warfare we can distinguish between infantrymen who are shooting directly at each other, versus infantry and artillery or airstrikes that dump explosives on them at little risk to themselves.

We can distinguish between soldiers who are fighting for their homes versus soldiers who are fighting beyond their own borders. Clearly it's immoral to invade other countries, and not immoral to defend your own.

I'm sure we could come up with hundreds of ways to split up the situations that show they are not all the same. But how much difference do these differences really make? "Kill or die" is pretty basic. If somebody's going to die anyway, and your actions can decide who it will be, do you have any right to choose?

Comment by J_Thomas2 on Hot Air Doesn't Disagree · 2008-08-16T16:19:52.000Z · LW · GW

This series of Potemkin essays makes me increasingly suspicious that someone's trying to pull a fast one on the Empress.

Agreed. I've suspected for some time that -- after laying out descriptions of how bias works -- Eliezer is now presenting us with a series of arguments that are all bias, all the time, and noticing how we buy into it.

It's not only the most charitable explanation, it's also the most consistent explanation.

Comment by J_Thomas2 on Hot Air Doesn't Disagree · 2008-08-16T14:27:20.000Z · LW · GW

If you were to stipulate that the rabbit is the only source of nourishment available to the fox, this still in no way justifies murder. The fox would have a moral obligation to starve to death.

How different is it when soldiers are at war? They must kill or be killed. If the fact that enemy soldiers will kill them if they don't kill the enemy first isn't enough justification, what is?

Should the soldiers on each side sit down and argue out the moral justification for the war first, and the side that is unjustified should surrender?

But somehow it seems like they hardly ever do that....

Comment by J_Thomas2 on Hot Air Doesn't Disagree · 2008-08-16T14:17:59.000Z · LW · GW

Konrad Lorenz claimed that dogs and wolves have morality. When a puppy does something wrong then a parent pushes on the back of its neck with their mouth and pins it to the ground, and lets it up when it whines appropriately.

Lorenz gave an example of an animal that mated at the wrong time. The pack leader found the male still helplessly coupled with the female, and pinned his head to the ground just like a puppy.

It doesn't have to take language. It starts out with moral beliefs that some individuals break. I can't think of any moral taboos that haven't been broken, except for the extermination of the human species which hasn't happened yet. So, moral taboos that get broken and the perps get punished for it. That's morality.

It happens among dogs and cats and horses and probably lots of animals. It isn't that all these behaviors are in the genes, selected genetically by natural selection over the last million generations or so. They get taught, which is much faster to develop but which also has a higher cost.

Comment by J_Thomas2 on Hot Air Doesn't Disagree · 2008-08-16T01:46:46.000Z · LW · GW

Chuang-Tzu had a story: Two philosophers were walking home from the bar after a long evening drinking. They stopped to piss off a bridge. One of them said, "Look at the fish playing in the moonlight! How happy they are!"

The other said, "You're not a fish so you can't know whether the fish are happy."

The first said, "You're not me so you can't know whether I know whether the fish are happy."

It seems implausible to me that rabbits or foxes think about morality at all. But I don't know that with any certainty, I'm not sure how they think.

Eliezer says with certainty that they do not think about morality at all. It seems implausible to me that Eliezer would know that any more than I do, but I don't know with any certainty that he doesn't know.

Comment by J_Thomas2 on Is Fairness Arbitrary? · 2008-08-15T15:43:03.000Z · LW · GW

Caledonian, thank you. I didn't notice that there might be people who disagree with that, since it seemed to me so clearly true and unarguable.

I guess in the extreme case somebody could believe that fairness has nothing to do with agreement. He might find a bunch of people who have a deal that each of them believes is fair, and he might argue that each of them is wrong, that their deal is actually unfair to every one of them. That each of them is deforming his own soul by agreeing to this horrible deal.

My thought about that is that there might be some deal that none of them has thought of, that would indeed be better for each of them. Maybe if they heard about the other deal they'd all prefer it. I'd want to listen to his proposals and see if I could understand them, or get new ideas from them.

But when somebody argues that a deal is unfair to somebody else, unfair to somebody who himself thinks it is not unfair to himself, it disrespects that person. It is a way to say that he doesn't know what he's doing, that he isn't competent to make his own deals, that he's a stupid or ignorant person who does not know what's good for him, that he needs you to take care of him and make his decisions for him. In general it is rude. And yet sometimes it could be true that people are stupid and agree to deals that are unfair to them because they don't know any better. There are probably 40 million American Republicans I'd suspect of that....

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-15T13:08:27.000Z · LW · GW

Lakshmi, Eliezer does have a point, though.

While there are many competing moral justifications for different ways to divide the pie, and while a moral relativist can say that no one of them is objectively correct, still many human beings will choose one. Not even a moral relativist is obligated to refrain from choosing moral standards. Indeed, someone who is intensely aware that he has chosen his standards may feel much more intensely that they are his than someone who believes they are a moral absolute that all honest and intelligent people are obligated to accept.

So, once you have made your moral choice, it is not fair to simply put it aside because somebody else disagrees. If he convinces you that he's right, then it's OK. But if you believe you know what's right and you agree to do wrong, you are doing wrong.

If all but two members of the group -- you and Aaron -- think it's right to do something that Aaron thinks is unfair to him, then it's wrong for you to violate your ethics and go along with the group. If everybody but you thinks it's right then it's still wrong for you to agree, when you believe it's wrong.

Unless, of course, you belong to a moral philosophy which says it's right to do that.

When Dennis says he deserves the whole pie and you disagree, and it violates your ethical code to say it's right when you think it's wrong, then you should not agree for Dennis to get the whole pie. It would be wrong.

I believe that what you ought to do in the case when there's no agreement should still be somewhat undecided. If you have the power to impose your own choice on someone else or everybody else then that might be the most convenient thing to do. But it takes a special way of thinking to say it's fair to do that. Is it in general a fair thing to impose your standards on other people when you think you are right? I guess a whole lot of people think so. But I'm convinced they're wrong. It isn't fair. And yet it can be damn convenient....

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-15T05:36:57.000Z · LW · GW

"Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?"

Because they have a specific argument which leads them to believe that?

Sure, but have you ever seen such an argument that wasn't obviously fallacious? I have not seen one yet. It's been utterly obvious every time.

Thomas, you are running into the same problem Eliezer is: you can't have a convincing argument about what is fair, versus what is not fair, if you don't explicitly define "fair" in the first place. It's more than a little surprising that this isn't very obvious.

I gave a simple obvious definition. You might disagree with it, but how is it unclear?

Comment by J_Thomas2 on Is Fairness Arbitrary? · 2008-08-15T05:16:07.000Z · LW · GW

Hendrick, it could be argued that each person deserves to own 1/N of the pie because they are there. So if Doreen isn't hungry, she still owns 1/N of the pie which she can sell to anyone who is hungry.

Similarly it could be argued that the whole forest should be divided up and each person should own 1/N of it, and if the pie is found in the part of the forest that I own then I own that whole pie. But I have no rights to pies found in the rest of the forest.

Now suppose that all but one of the group is busy looking up into the trees at beautiful birds, which gives them great enjoyment. But Dennis instead has been working hard looking at the ground, searching for pies, and he finds one. Should he own the pie? Should he have the right to give or sell pieces to whoever he wants? Or should he have no special rights?

What if Dennis, knowing that the group will confiscate his pie if he shows it to them, eats it before they notice he has it? Is it then fair to pump his stomach so it can be divided equally?

Say it's 5 people walking through the woods, but they left 5 others back at base camp. Do the other 5 have any right to any of the pie?

If so, what if there are 5 starving children in India? Do they have any rights?

I say, Eliezer is wrong to say there is anything objectively fair about this.

If you and the others present get together and give Dennis 1/Nth of the pie - or even if you happen to have the upper hand, and you unilaterally give Dennis and yourself and all others each 1/Nth - then you are not being unfair on any level; there is no meta-level of fairness where Dennis gets the whole pie.

I agree that giving Dennis the whole pie when others disagree would not be fair. But when you disregard Dennis's opinion and dictate a solution, that isn't fair either. Just because Dennis is unable to explain his position so that you see it's right, and he does not suggest a compromise you can accept, does not make your alternative solution imposed on him fair.

There is no absolute standard of fairness here. It all depends. The concept that we should start with equal shares sounds right if you live in an egalitarian nation, otherwise not. Like, if it's a medieval English nobleman and four retainers walking through the woods, it would be idiotic to assert the pie must be split into 5 equal shares. The retainers would whip you for saying it, and they'd insist it was no more than you deserved and a fair response.

I say, fairness involves people who are making a deal, who are trying to be fair to each other. It is not about people who are not present, who cannot speak their minds. You aren't making a deal with starving children in India. You can be kind to them or unkind but until you can make a deal with them you can't be fair or unfair. It is not about the people back in base camp unless you made a deal with them that you will uphold or break.

If the people who are making the deal all agree it is fair, then it is fair. That's what it means for it to be fair. If some of them do not agree that it's fair then it isn't fair. It wouldn't be fair to give Dennis the whole pie, when somebody doesn't want to. It wouldn't be fair to give Dennis nothing, or 1/N of what he believes he deserves, when he doesn't agree. If you can't reach an agreement then you don't have a fair solution. Because that's what a fair solution isn't.

You can't say that just anything is fair. "Fair" isn't an empty concept that can apply to anything whatsoever. "Fair" is a concept that can apply to anything whatsoever that all participants of the deal freely agree to. If they don't agree, then it isn't fair.

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-15T04:17:01.000Z · LW · GW

But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good? What is even the appeal of this, morally or otherwise? At all?

I don't think you ought to try to optimise fitness. Your opinion about fitness might be quite wrong, even if you accept the goal of optimising fitness. Say you make sacrifices trying to optimise fitness and then it turns out you failed. Like, you try to optimise for intelligence just before a plague hits that kills 3/4 of the public. You should have optimised for plague resistance. What a loser.

And what would you do to optimise genetic fitness anyway? Carefully choose who to have children with?

Perhaps you would want to change the environment so that it will be good for humans, or for your kind of human being. That makes a kind of sense to me, but again it's hard to do. Not only do you have the problem of actually changing the world. You also have the problem of ecological succession. Very often, species that take over an ecosystem change it in ways that leave something else better able to grow than that species' own children. Some places, grasses provide a good environment for pine seedlings that then shade out the grass. But the pines in turn create an environment where hardwood saplings can grow better than pine saplings. Etc. If you like human beings or your own kind of human beings then it makes some sense to create an environment where they will thrive. But do you know how to do that?

If you knew all about how to design ecosystems to get the results you want, that might provide some of the tools you'd need to design human societies. I don't think those tools exist yet.

On a different level, I feel like it's important to avoid minimising memetic fitness. If you have ideas that you believe are true or good or beautiful, and those ideas seem to kill off the people who hold them faster than they can spread the ideas, that's a sign that something is wrong. It should not be that the good, true, or beautiful ideas die out. Either there's something wrong with the ideas, or else there should be some way to modify the environment so they spread easier, or at least some way to modify the environment so the bearers of the ideas don't die off so fast. I can't say what it is that's wrong, but there's something wrong when the things that look good tend to disappear.

If they're good then there ought to be a way for them to persist until they can mutate or recombine into something better. They don't need to take over the world but they shouldn't just disappear.

I don't like it when the things I like go extinct.

So I don't want to maximise the fitness of things I like, but I sure do want that fitness to be adequate. When it isn't adequate then something is wrong and I want to look carefully at what's wrong. Maybe it's the ideas. Maybe something else.

Similarly, if you run a business you don't need to maximise profits. But if you run at a loss on average then you have a problem that needs to be fixed.

Comment by J_Thomas2 on The Bedrock of Morality: Arbitrary? · 2008-08-15T01:38:04.000Z · LW · GW

Eliezer, you claim that there is no necessity we should accept Dennis's claim he should get the whole pie as fair. I agree.

There is also no necessity he should accept our alternative claim as fair.

There is no abstract notion that is inherently fair. What there is, is that when people do reach agreement that something is fair, then they have a little bit more of a society. And when they can't agree about what's fair they have a little less of a society. There is nothing that says ahead of time that they must have that society. There is nothing that says ahead of time what it is that they must agree is fair. (Except that some kinds of fairness may aid the survival of the participants, or the society.)

Concepts of fairness aren't inherent in the universe, they're an emergent property that comes from people finding ways to coexist and to find mutual aid. If they agree that it's fair for them to hunt down and kill and eat each other because each victim has just as much right and almost as much opportunity to turn the tables, this does not lead to a society that's real useful to its participants and it does not lead them to be particularly useful to each other. It's a morality that in many circumstances will not be fully competitive. But this is a matter of natural selection among ideas, there isn't anything less fair about this concept than other concepts of fairness. It's only less competitive, which is an entirely different thing.

It's an achievement to reach agreement about proper behavior. The default is no agreement. We make an effort to reach agreement because that's the kind of people we are. The kind of people who've survived best so far. When Dennis feels he deserves something different from what we think, we often feel we should try to understand his point of view and see if we can come to a common understanding.

And we have to accept that sometimes we cannot come to any common understanding, that's just how it works. We have to accept that sometimes somebody will feel that it isn't fair, that he's been mistreated, and we have to live with whatever consequences come from that. Society isn't an all-or-none thing. We walk together, we stumble, we fall down, we get back up and try some more.

Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?

You might as well imagine there's a single perfect language and if we all spoke it we'd understand each other completely and everything we said would be true.

Comment by J_Thomas2 on Is Fairness Arbitrary? · 2008-08-14T23:08:12.000Z · LW · GW

One very funny consequence of defining "fair" as "that which everyone agrees to be "fair"" is that if you indeed could convince everyone of the correctness of that definition, nobody could ever know what IS "fair", since they would look at their definition of "fair", which is "that which everyone agrees to be "fair"", then they would look at what everyone does agree to be fair, and conclude that "that which everyone agrees to be "fair" is "that which everyone agrees to be "fair""", and so on!

An, I have no idea what you are saying here.

If a deal is fair when all participants freely agree to the deal, then there you are.

Are you saying that everybody has to agree to this definition of fairness before anybody can use it? I don't see why. People use the word "fair" when they are talking about deals. We don't all have to agree on the meaning of a word before any of us can use the word in conversation. If that was necessary, what would we say?

If some people freely agree to a deal but they still say it isn't fair -- perhaps it isn't fair to God, or to the pixies, or to somebody in Mali who isn't a party to the deal anyway -- then they can say that. Whether or not we all agree that the deal is fair, still we have a deal we all agree to.

What point is there to build an infinite regress of definitions? What is it good for?

Comment by J_Thomas2 on Is Fairness Arbitrary? · 2008-08-14T22:52:41.000Z · LW · GW

If fairness is about something other than human agreement, what is it?

Suppose you have a rule that you say is always the fair one. And suppose that you apply it to a situation involving N people, and all N of them object, none of them think it's fair. Are you going to claim that the fair thing for them to do is something that none of them agrees to? What's fair about that?

When everybody involved in a deal agrees it's fair, who are you -- an outside kibitzer -- to tell them they're wrong?

Suppose a group all agrees, they think a deal is fair. And then you come in and persuade some of them that it isn't fair after all, that they should get more, and the agreement breaks down. Maybe they fight each other over it. Maybe some of them get hurt. And after some time contending, it's clear that none of them are better off than they were when they had their old agreement. Were you being fair to that group by destroying their agreement?

Comment by J_Thomas2 on Is Fairness Arbitrary? · 2008-08-14T07:05:40.000Z · LW · GW

It's fair when the participants all sincerely agree that it's fair.

If you think you're being unfair to somebody and he disagrees, who's right?

There isn't any guarantee that a fair solution is possible. If people can't agree, then we can't be fair. I say, fairness is a goal that we can sometimes achieve. There's no guarantee that we could always achieve all of our goals if only we did the right things. There's no guarantee that fairness is possible. Just, it's a good goal to try for sometimes, and sometimes we can actually be fair or mostly fair.

People often agree that equal shares is fair. Not always. It seems like a sort of default, and we might choose to start with the default and then argue why we should deviate from it. Like, the one who found the pie in the forest might deserve a finder's fee. The one who negotiated an agreement when it seemed unlikely might deserve a reward for that. If there's a danger that a bear might come take the pie, then one who guards the others while they eat might deserve a reward. If one person is carrying extra weight for things he shares with the others, he might deserve extra calories etc. There can be lots of reasons to deviate from equal shares once you accept equal shares as the default.

Approaching a fair solution is an art. It's an adventure that might not have any good ending possible, but when you don't know it can't be done then it's better to try than just accept failure from the beginning. Starting out with the assumption that there is a fair approach that everybody ought to accept, and that if it doesn't work you'll figure out who to blame, is both backward and counterproductive.

Comment by J_Thomas2 on Planning Fallacy · 2008-08-13T22:38:28.000Z · LW · GW

Glyn, I did something similar, but with mine, after the granular tasks are estimated a random delay is added to each according to a Pareto distribution. The more subtasks, the more certain it is that a few of them will be very much behind schedule.

I chose a Pareto distribution because it had the minimal number of parameters to estimate and it had a fat tail. Also I had a maximum entropy justification. If you use an exponential distribution, you're assuming a constant chance of completion at any time that the task is incomplete. But other things equal, the more you get behind schedule the less likely it is that the chance of completion will stay constant. It should go down. If you estimated 3 hours to completion to start with, and it's already been 6 hours, is it more likely that the correct estimate now is 3 hours, or something larger? And when it's been 9 hours and still incomplete, should you predict 3 hours then? The more you're already behind deadline, the more reasonable it is to suppose that you'll get even farther behind.
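
Something like the following toy simulation captures the scheme described above. It is only a minimal sketch: the subtask estimates, the Pareto shape parameter, and the scaling of each delay are hypothetical values picked for illustration, not the actual parameters of the tool being described.

    import random

    def simulate_totals(estimates, shape=1.5, scale=0.5, trials=10000):
        """Monte Carlo totals: each task takes its estimate plus a fat-tailed Pareto delay."""
        totals = []
        for _ in range(trials):
            total = 0.0
            for est in estimates:
                # random.paretovariate(shape) >= 1, so subtracting 1 gives a delay >= 0
                delay = scale * est * (random.paretovariate(shape) - 1.0)
                total += est + delay
            totals.append(total)
        return sorted(totals)

    estimates = [3.0, 2.0, 5.0, 1.0]  # hours; hypothetical granular-task estimates
    totals = simulate_totals(estimates)
    print("naive sum of estimates:", sum(estimates))
    print("median simulated total:", totals[len(totals) // 2])
    print("90th-percentile total: ", totals[int(0.9 * len(totals))])

With a shape parameter this close to 1 the tail is heavy enough that the upper percentiles of the simulated total come out well above the naive sum of estimates, which is the behaviour the comment is pointing at.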

Comment by J_Thomas2 on Hiroshima Day · 2008-08-11T17:13:00.000Z · LW · GW

Steve, I think you posted your comment in the wrong thread, not this one.

Comment by J_Thomas2 on Sorting Pebbles Into Correct Heaps · 2008-08-10T03:25:50.000Z · LW · GW

This seems to imply that the relativists are right. Of course there's no right way to sort pebbles, but if there really is an absolute morality that AIs are smart enough to find, then they'll find it and rule us with it.

Of course, there could be an absolute morality that AIs aren't smart enough to find either. Then we'd take pot luck. That might not be so good. Many humans believe that there is an absolute morality that governs their treatment of other human beings, but that no morality is required when dealing with lower animals, who lack souls and full intelligence and language etc. I would not find it implausible if AIs decided their morality demanded careful consideration of other AIs but had nothing to do with their treatment of humans, who after all are slow and stupid and might lack things AIs would have that we can't even imagine.

And yet, attempts to limit AIs would surely give bad results. If you tell somebody "I want you to be smart, but you can only think about these topics and in these particular ways, and your smart thinking must only get these results and not those results" what's the chance he'll wind up stupid? When you tell people there are thoughts they must not think, how can they be sure not to think them except by trying to avoid thinking at all? When you think a new thought you can't be sure where it will lead.

It's a problem.

Comment by J_Thomas2 on Morality as Fixed Computation · 2008-08-09T00:33:00.000Z · LW · GW

Given that the morality we want to impose on a FAI is kind of incoherent, maybe we should get an AI to make sense of it first?

Comment by J_Thomas2 on Hiroshima Day · 2008-08-08T23:59:00.000Z · LW · GW

Funky, you might be right.

Consider Tacitus:
"To ravage, to slaughter, to usurp under false titles, they call empire; and where they make a desert, they call it peace."

How better to make a desert than with nukes?

As a general rule, real WMDs do not help nations achieve the goals they think of as victory. Imagine for example that we had created plentiful nukes two years earlier, and we had then bombed 20 German cities while the Germans surrendered to us. We would then have had to deal with Russia, and our German ally would have had 20 fewer cities to assist us than they would otherwise.

WMDs don't give us what we want. They only help us avoid disastrous defeat by threatening a different sort of disaster.

Usually, threatening to use nukes is an admission of defeat. You don't do it when you're winning. Ex: USA in Korea. USA in Vietnam. Israel in 1973. Exception: USA in the invasion of Iraq. We said if Iraq used chemical or bioweapons we'd use nukes. The word is out now that they didn't have them and couldn't have used them, but it's a rare thing -- as if we thought their mustard gas might give them a decisive victory or something.

You might be right that we haven't used any other WMDs because we used nukes once. There's no compelling evidence in either direction. I want to also point out that we have not had many uses for them. WMDs are mostly good for destroying cities full of civilians, with side effects that might last for many generations in the cases of biological, genetic, ecological etc weapons. All have known long-lasting side effects except for the nerve gases which are not particularly effective. How often have we needed to destroy enemy cities? We bombed Hanoi and Haiphong as part of our peace negotiations, but when have we done the like since? We pride ourselves on pinpoint strikes against particular targets.

It's only losers who use WMDs, and only when the surrender they face is worse than the side effects of the WMDs. Why would it be surprising that it hasn't happened again since our ill-advised single use?

Comment by J_Thomas2 on Hiroshima Day · 2008-08-08T22:40:00.000Z · LW · GW

Caledonian, agreed. Whatever we say are the inevitable results of that slaughter, whether it's that we prevented a later nuclear war or we poisoned the chance for peace, they're all bogus.

We don't know what would have happened instead if only things were different. We can only guess by making metaphors from other situations.

Here's a metaphor--

Pre-nuke: You have a neighbor who annoys you. He plays his stereo too loud. He throws garbage over the fence into your yard. He doesn't mow his grass, so you get bugs and another neighbor has trouble selling his house.

You annoy him too.

Every now and then the two of you have a big argument that perhaps goes to the level of a fistfight. If one of you is enough better, he can intimidate the other. Maybe you can make him leave his stereo low and mow his grass, for fear you'll beat him up again. Maybe he can make you stop cooking with garlic and pick up his trash and stop mowing your lawn. Maybe other neighbors will get involved and who wins depends on how the allies fall out. Maybe the allies aren't stable enough to get any resolution.

One-nuke: You buy a gun. You're not sure he'll be properly intimidated by the gun, so after you knock him down you shoot off one of his toes to prove you mean business. Also this helps to intimidate one of your other neighbors who's really good with his fists. Your neighbors argue some about whether you should have done that, but they aren't ready to do anything about it.

Two-nukes: You have a gun. Your ugly neighbor has a gun. You practice your quick-draw and your targeting because next time there's a fight one of you is going to die. The first one to get his gun out and aimed properly wins. So it makes sense to draw quick at the first sign the other guy might draw. It makes sense to draw and shoot at the first sign the argument is getting intense. It makes sense to draw and shoot at the first provocation. It makes sense to draw and shoot the next time you see him.

Many-nukes: You have a gun. Your ugly neighbor has a gun. You have a bunch of catapults that will throw tanks of gasoline onto your neighbor's house and yard. Also thermite. Anybody there will be killed. Your neighbor has the same thing aimed at you. It's a standoff, neither of you can kill the other without at least losing his house. Your neighbors don't like what the situation is doing to their property values, but they are neither willing nor able to kill you over it. You work hard to persuade your neighbor that you're crazy enough to die killing him, so you can intimidate him even though you can't really win. Meanwhile you work at more catapults designed to knock his oil drums out of the air before they reach your house. It doesn't seem like it would work, but if you can persuade him that you believe they'd work then you can intimidate him.

And you've arranged your catapults so that you can in theory hit any house in the neighborhood. It's all very expensive but you know it's necessary. You try to keep any of your other neighbors from building catapults themselves and mostly they don't want to. But if they do try, and they're a bit unfriendly, you threaten to fire-bomb them to make sure they don't, or maybe you shoot them or just beat them up. Whatever it takes, since you sure don't want another important enemy.

Many of your neighbors think you are crazy. But as you point out, you can't expect to have peace. There's always going to be an enemy, and you have to do whatever it takes to win.

If this metaphor fits, it makes the whole thing look insane. You have to accept that any of your neighbors can shoot you while you're opening your front door, or burn your house down around you during the night. It just isn't practical to prevent that, and yet it hardly ever happens. But maybe this isn't a good metaphor. Maybe nations aren't like people, and neighboring nations aren't like people's neighbors. Maybe the best chance for national survival is to threaten everybody else with nukes.

Comment by J_Thomas2 on Morality as Fixed Computation · 2008-08-08T09:36:09.000Z · LW · GW

Funny how the meaning changes if it's desire for gold atoms compared to desire for iron atoms.

I'm real unclear about the concept here, though. Is an FAI going to go inside people's heads and change what we want? Like, it figures out how to do effective advertising?

Or is it just deciding what goals it should follow, to get us what we want? Like, if what the citizens of two countries each with population about 60 million most want is to win a war with the other country, should the FAI pick a side and give them what they want, or should it choose some other way to make the world a better place?

If a billion people each want to be Olympic gold medalists, what can a nearly-omnipotent FAI do for them? Create a billion different Olympic events, each tailored to a different person? Maybe it might choose to improve water supplies instead? I really don't see a problem with doing things that are good for people instead of what they want most, if what they want is collectively self-defeating.

Imagine that we suddenly got a nearly-god-like FAI. It studies physics and finds a way to do unlimited transmutation, any element into any other element. It can create almost unlimited energy by converting iron atoms entirely to energy, and it calculates that the amount of iron the earth receives per day as micrometeorites will more than power anything we want to do. It studies biology and sees how to do various wonders. And it studies humans, and then the first thing it actually does is to start philosophy classes.

"Study with me and you will discover deep meaning in the world and in your own life. You will become a productive citizen, you will tap into your inner creativity, you will lose all desire to hurt other people and yet you will be competent to handle the challenges life hands you." And it works, people who study the philosophy find these claims are true.

What's wrong with that?

Comment by J_Thomas2 on Hiroshima Day · 2008-08-08T09:09:00.000Z · LW · GW

it's not the weapons that kill people, but the people who use them.

There's a level where that's kind of true.

But consider the chicken. In the usual way of things, when two cocks meet they do some threat displays and likely one of them runs away. If not they fight each other a little and then likely one of them runs away.

If you strap razor blades to their feet and put them into a pen where they can't run away then you have something you can sell tickets to. Except it's illegal in this country. You could say "Razor blades don't kill fighting cocks, other cocks do" but it wouldn't be very true, now would it?

The people who strap on the razor blades convert a dominance ritual into a sort of ritual murder. The chickens just follow their instincts, never thinking out the consequences.

It would be possible to attach the trigger for a nuclear weapon to a cock so that when it killed its opponent the nuke would go off. We wouldn't actually do that, but we could. And then you could say "Nukes don't kill people. Roosters kill people." It would be just as true as the current version.

Comment by J_Thomas2 on Hiroshima Day · 2008-08-08T02:26:00.000Z · LW · GW

Mark, no one has used biological weapons even though we have developed them. (There may be some unpublicised exceptions, maybe South Africa used some against Africans etc.) No one has ever used genetic weapons. The idea that every weapon gets used except for MAD is wrong.

You say that we cannot have disarmament. As long as the USA prevents disarmament, you are right. But after the next nuclear war, we will have nuclear disarmament provided the world economy still exists. You say it can't happen, with no evidence. I say, wait and see. When it comes, you won't be able to stop it.

Which does not say the world will have universal peace. Making the world safe for conventional warfare may not be exactly pleasant. But it will happen, pleasant or not.

I have the sense that you imagine two sides here, the idealists who imagine wonderful worlds that can never be versus the realists like you who face the gritty truth that life is tough but if we're tough enough we can prevail. But see, your sort of realism came out of the Cold War, and it hasn't adapted to the modern world yet. In the new world aircraft carriers are big fat targets, and within your lifetime we will have to face the weather without the comfort of our nuclear umbrella. It won't be a utopia, it's just the next step.

Comment by J_Thomas2 on Hiroshima Day · 2008-08-07T20:35:36.000Z · LW · GW

Mark, you have the right to your untestable opinions. No one can ever show whether we would have used nukes other times if we hadn't that time, or that somebody else would have used nukes if they had them, or that if you were in Truman's place you'd do the same thing he did.

There's no way for anybody to know about any of these things, so you have the perfect right to believe whatever you want just as you do about how many Santa Clauses there are in Heaven and whether the Yankees would have won the series in 1947 if they had Joe DiMaggio, and whether the Germans could have won WWII if they'd pushed forward to take Moscow and they got their winter uniforms and if they tried hard to make friends with the Ukrainians etc.

This idea that the best way to prevent a nuclear war is to persuade the world that we're crazy enough to kill everybody so they'd better do what we say -- there's something kind of screwy about it.

If we actually want a world where nobody sets off nuclear bombs, we do much better to create a world where nobody builds nuclear bombs. There's something kind of, well, obvious about that reasoning.

We've had less than 62 years when we have avoided nuclear war despite MAD. We have had 5,000 or 15,000 or 1,000,000 years where we avoided nuclear war by not having nukes, depending on how you count. Which looks like a more reliable way to avoid nuclear war?

Comment by J_Thomas2 on Hiroshima Day · 2008-08-07T19:00:26.000Z · LW · GW

Frelkins, let's consider MAD in action.

In 1973 Israel lost a war. Egypt wasn't ready to take half of the Sinai, much less the whole thing, but still it was clear that Israel had lost and would have to negotiate. Instead, Israel threatened to nuke Egypt.

The USA detected nuclear material crossing the Dardanelles, and "we" initiated DEFCON 1 and announced it as DEFCON 3. "We" told the Russians that unless they backed down and let Israel threaten Egypt with nukes, when Egypt had no countermeasure available, we would kill everybody in the world and let the Russians kill us all too.

Luckily, the Russians weren't as crazy about Egypt as the USA was crazy about Israel, so they backed down. We then shipped to Israel everything the Israelis needed to win -- our best planes, our best tanks, our best ECM, our best analysis of our best satellite photos, things we'd been keeping secret from the Russians, etc. Israel then won the war and we negotiated with Egypt for them, giving Israel tens of billions of dollars in exchange for their agreeing to accept the peace we negotiated for them.

Given this background, how could it be acceptable to have MAD between Israel and Iran? Far more pleasant for Israel to be a nuclear power with only non-nuclear nations as enemies. When you can do anything you want, and nuke 'em if they can't take a joke.... And the alternative is to have to negotiate? Negotiate with somebody who could destroy you?

If Iran gets nukes there will be a big US uproar about it, kind of like "Who lost China?". The Republicans will say that they were 100% ready to take care of the problem but the Democrats wouldn't let them. And every time Israel and Iran have harsh words the uproar will start up again. The story will be that the only reason Israel is in danger of total destruction is that the Democrats made it happen.

We could make an argument similar to the one about Hiroshima. We could say: if only there is one nuclear exchange between a couple of small nations -- Israel and Lebanon would be ideal -- and the whole world gets to observe the result, then we will get disarmament. The world won't stand for anybody holding nukes. If one nation tries to do it, their own public won't stand for it. We could clear up the continuing MAD madness, but it takes one graphic example. Without that, too many people will keep believing that MAD works.

See the problem with MAD? It isn't just that wars can start from accidents or mistakes. More fundamentally, we have strategists who try to judge carefully just how far they can push our enemies before MAD will break down. The better we are at persuading the world that we are willing to kill everybody for trivial reasons, the more we can get away with. And the more we get away with, the more we try to get away with. The only way to find out how far we can push before MAD breaks down is to push a little bit too far.

However it gets described, this is not a stable strategy.

But it will take a small nuclear war before we can get out of it. Therefore a small nuclear war is necessary, and beneficial, and if we do something to promote one we will be doing the world a great and necessary service.

I tend to feel there is something wrong with this reasoning. And yet, it looks so true....

Comment by J_Thomas2 on Hiroshima Day · 2008-08-07T17:39:34.000Z · LW · GW

It's absurd to argue that "we" did the right thing because the results happened to turn out well. You can make that argument about anything. For example, if Hitler had not started WWII when he did, there would inevitably have been a world war after both sides had nuclear weapons and it would have been far, far worse. Hitler might have done it for the wrong reasons but we owe him our lives for doing it.

All it takes is to look at what happened, and make up a worse alternative, and then say that what happened was better than the alternative. You can do that about anything. Unless you can repeat the experiment and get odds, it's bogus.

There's the argument that without a horrible example somebody would inevitably have used nukes because they couldn't wrap their minds around it. This is bogus. We have the example of biological warfare, which we have avoided so far without a horrible example. Ditto genetic warfare. However, a variant of it can be used to excuse Truman. Truman had been kept out of the loop; he didn't know about the nukes until very late. Then he had to make a choice quickly. If national leaders couldn't understand it without the graphic example, that would apply to Truman even more. Forgive him; he didn't know what he was doing.

There is the argument that the Japanese would not have surrendered without being nuked. This is the first bogus argument all over again, a completely bogus claim about what would have happened otherwise. But this one is special. We had no method set up to accept a surrender from Japan. If all along, since Pearl Harbor, we had been negotiating with them about what it would take to end the war, we might at some point have negotiated a peace. But we didn't want to. We wanted an unconditional surrender and nothing else. So there was nothing to negotiate about and no method arranged to do it. It took the Japanese days after our nuke to reach an agreement about how to surrender, and we bombed them again while they did it.

So rephrase that. "We nuked Japan because we would not accept a conditional surrender." We could have ended our war with Japan at any time after Pearl Harbor if we had been willing to negotiate terms. Right after Pearl Harbor the Japanese might have insisted on some sort of favorable terms. After they took Manila they would probably not have wanted to give it back. By Midway they might have given back a lot, but of course we wanted them to give back all of China, etc. Maybe we could not have agreed on a peace. But we were totally unwilling to find out. It isn't fair to say that we nuked Japan because they refused to surrender. It is fair to say that we nuked Japan because we were unwilling to negotiate a surrender. We'll never know whether we could have arranged a surrender without nuking Japan, because we made absolutely no effort to do so. We were unwilling.

There are arguments that we should not negotiate peace and fight at the same time. The basic reason is that peace negotiations weaken our resolve. Apply that here: we were unwilling to negotiate a surrender with Japan because that would weaken our resolve to nuke them.

V. D. Hanson says that to keep a war from starting again you really need to smash the enemy and show them that they're completely beaten. Japan was already beaten that way, without nukes. The world would be a better place if Hanson got into an argument where he had to admit that his arguments were worthless, and perhaps suffer some mild consequence like a YouTube bare-bottom flogging to prove it. The reason he keeps blathering on is that no matter how many times his bogus claims are definitively refuted, he doesn't have to notice.

Bogus justifications aside, people who weren't adults during WWII probably can't understand what it was like. I've read old Life magazines from during the war, and it seems like an alien culture. We haven't had anything like it more recently, except in the weeks after 9/11. For a few weeks there, normally-sane people were saying we would have to kill every Muslim who didn't renounce Islam, or turn the Middle East into a glass parking lot, etc. I'm not talking about the insane people who were still saying that in 2002. For a while there the nation as a whole went crazy. And in WWII it went on for years. We lost close to half a million dead, and their families had the grief to deal with. Rationing. A whole lot of men away, in the military. Censorship, and particularly censorship of anything that hindered the war effort. I particularly remember the full-page photo of the steelworkers' union organizer. He tried to start a strike when we were at war! He got a bayonet wound on his butt, and they photographed it and put it in Life magazine along with his recantation. He said he was wrong to hurt the war effort and he apologized.

Say what you like about what you would have done. But you weren't there. People aren't very good at predicting what they would have done if things were different.

So, it's been 63 years and people are really intent on what "we" should have done. Why don't we ever talk about what Genghis should have done, or what Tamerlane should have done, or maybe what Andrew Jackson should have done? We never give a lot of thought to what McClellan should have done -- if he'd done the right things, maybe the Civil War would have been cut short with far less loss of life. What should we have done in the Philippines, or in the Mexican War?

What should Hitler have done? Clearly his approach was suboptimal, but if he had been sane -- if you had been in his place, say -- what might have worked to get good results: mercy, or at least much-reduced injustice, for Germany?

Why do we only think about what Truman should have done -- one single old dead guy -- and not the other old dead guys?

Comment by J_Thomas2 on Contaminated by Optimism · 2008-08-07T15:19:00.000Z · LW · GW

Suppose we break the problem down into multiple parts.

1. Understand how the problem works, what currently happens.
2. Find a better way that things can work, that would not generate the same problems.
3. Find a way to get from here to there.
4. Do it.

Then part 1 might easily be aided by a guy on a blog. Maybe part 2. Possibly part 3.

A blog is better than a newsgroup because the threads don't scroll off; they're all sitting on the site's computer if anybody cares. Also, as old posts are replaced by new posts, people stop responding to the old ones. So there isn't as much topic drift; we don't get threads that last for two years with the same title and discuss two dozen different topics.

I tend to think a wiki would be even better. Or a wiki that lets people revise their own posts at will but not other people's posts. Then rather than have long arguments where you finally come to some agreement, the arguments could turn into revisions of the original posts, and people could see at a glance where the remaining disagreement is. If you want to see how they got to where they are, you can look at the history of revisions.
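To make that concrete, here is a minimal sketch of the data model I have in mind. Everything in it (the Post class, revise, history) is an invented name for illustration, not any existing wiki's API.

```python
# Minimal sketch of "everyone can revise their own posts, but not other
# people's."  Post, revise, and history are invented names for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    author: str
    body: str
    history: List[str] = field(default_factory=list)  # earlier versions, oldest first

    def revise(self, editor: str, new_body: str) -> None:
        # only the original author may rewrite the post
        if editor != self.author:
            raise PermissionError("only the author may revise this post")
        self.history.append(self.body)
        self.body = new_body

# usage: an argument becomes a chain of revisions instead of a long thread
p = Post(author="alice", body="My first stab at the idea.")
p.revise("alice", "Same idea, with the objections from the thread folded in.")
print(p.body)      # where the author stands now
print(p.history)   # how she got there
```

The point is just that the current bodies show where the remaining disagreement is, and the path to agreement lives in the history rather than in a two-year thread.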

None of this will change the world unless somebody pays attention. But it's much easier to pay attention when there's a coherent train of thought to pay attention to. And people who have a vague idea they want to flesh out, can do worse than present it on a blog and get people who want to help develop it or poke holes in it. Either or both.

Comment by J_Thomas2 on Contaminated by Optimism · 2008-08-06T19:06:32.000Z · LW · GW

'It seems to me like the simplest way to solve friendliness is: "Ok AI, I'm friendly so do what I tell you to do and confirm with me before taking any action." It is much simpler to program a goal system that responds to direct commands than to somehow try to infuse 'friendliness' into the AI.'

As was pointed out, this might not have the consequences one wants. However, even if that wasn't true, I'd still be leery of this option - this'd effectively be giving one human unlimited power.

Would you expect all the AIs to work together under one person's direction? Wouldn't they group up different ways, and work with different people?

In that case, the problem of how to get AIs to be nice and avoid doing things that hurt people boils down to the old problem of how to get people to be nice and avoid doing things that hurt people. The people might have AIs to tell them about some of the consequences of their behavior, which would be an improvement. "But you never told us that humans don't want small quantities of plutonium in their food. This changes everything."

But if it just turns into multiple people having large amounts of power, then we need those people not to declare war on each other, and not crush the defenseless, and so on. Just like now except they'd have AIs working for them.

Would it help if we designed Friendly People? We'd need to design society so that Friendly People outcompeted Unfriendly People....

Comment by J_Thomas2 on Contaminated by Optimism · 2008-08-06T01:11:59.000Z · LW · GW

"Sorry, you're not allowed to suggest ideas using that method" is not something you hear, under Traditional Rationality.

But it is a fact of life, ....

It is a fact of life that ....

I disagree. You list a whole collection of mistakes people make after they have a bad hypothesis that they're attached to. I say that using your prior experience when you come up with hypotheses should not count as the mistake. The mistakes are, first, getting too attached to one hypothesis, followed by the list of "facts of life" mistakes you then described.

People will get hypotheses from whatever source. Telling them they shouldn't have done it is like telling a jury to completely ignore evidence that they should not have been exposed to. Very hard commands to obey.

What's needed is a new skill. In mathematics, I found it useful when I had trouble proving a theorem to try to disprove it instead. The places I had trouble with the theorem gave great hints toward building a counterexample. Then if the counterexample kept running into problems, the kind of problems it ran into gave great hints toward how to solve the theorem. Then if I still couldn't prove it, the problems pointed toward a better counterexample, and so on.

So for things like evolutionary questions, once you have an idea about a way that evolution might work to get a particular result, the needed skill might be to look for any way that other genes could be selected while subverting that process. If you can be honest enough that you see it's easy for it to fail and hard for it to succeed, then the proposed mechanism gets a lot easier to reject.

This logic applied to SDI, for example. The argument wasn't whether we could build the advanced technology required to shoot down an ICBM. The argument was whether we could improve SDI as fast as our potential enemies could improve their SDI-blocking methods. And we clearly could not.

The question isn't "Can it work?". The question is "Can it outcompete all comers?". Group selection advocates got caught up in the question whether there are circumstances that allow group selection. Yes, there are. It can happen. But then there's the question how often those circumstances show up, and currently the answer looks like "rarely".
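To put a number on "rarely," here is a toy two-level simulation. Every name and parameter in it (simulate, b, c, propagule, and so on) is invented for illustration; it sketches the standard migrant-pool versus propagule-pool contrast rather than modeling any real population.

```python
# Toy two-level selection model for an altruism allele.  All parameter
# values are made up; this is a sketch, not a claim about real populations.
import random

def simulate(propagule, b=0.4, c=0.1, n_groups=200, group_size=10,
             generations=300, seed=2):
    """Within a group with altruist fraction x, altruists have fitness
    1 - c + b*x and selfish types 1 + b*x, so selfish types always win
    within a group.  Groups contribute offspring in proportion to their
    mean fitness.  propagule=True: each new group is founded by offspring
    of a single parent group, preserving between-group variance.
    propagule=False: offspring are pooled and every group refills from the
    same global mixture, destroying the variance group selection needs."""
    random.seed(seed)
    groups = [random.randint(0, group_size) for _ in range(n_groups)]  # altruists per group

    for _ in range(generations):
        xs = [a / group_size for a in groups]            # altruist fraction per group
        mean_w = [1 + (b - c) * x for x in xs]           # group mean fitness
        # within-group selection: altruist share among each group's offspring
        x_off = [x * (1 - c + b * x) / w for x, w in zip(xs, mean_w)]

        if propagule:
            # found each new group from one parent group, chosen by group fitness
            parents = random.choices(range(n_groups), weights=mean_w, k=n_groups)
            groups = [sum(random.random() < x_off[p] for _ in range(group_size))
                      for p in parents]
        else:
            # migrant pool: fitness-weighted global mixture, then random refilling
            total_w = sum(mean_w)
            x_global = sum(w * x for w, x in zip(mean_w, x_off)) / total_w
            groups = [sum(random.random() < x_global for _ in range(group_size))
                      for _ in range(n_groups)]

    return sum(groups) / (n_groups * group_size)

if __name__ == "__main__":
    print("migrant pool (global mixing):  altruist fraction =", simulate(False))
    print("propagule pool (kin founding): altruist fraction =", simulate(True))
```

In this toy, the globally-mixed case collapses unless the benefit divided by the group size beats the cost, which is hard to arrange for groups of any real size; only the kin-founded case keeps enough between-group variance for group selection to bite. That narrow corner is one way of cashing out "rarely."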

Comment by J_Thomas2 on Anthropomorphic Optimism · 2008-08-05T22:16:33.000Z · LW · GW

There's always a nonzero chance that any action will cause an infinite bad. Also an infinite good.

Then how can you put error bounds on your estimate of your utility function?

If you say "I want to do the bestest for the mostest, so that's what I'll try to do" then that's a fine goal. When you say "The reason I killed 500 million people was that according to my calculations it will do more good than harm, but I have absolutely no way to tell how correct my calculations are" then maybe something is wrong?

Comment by J_Thomas2 on Anthropomorphic Optimism · 2008-08-05T16:12:13.000Z · LW · GW

"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity.

The assumption is that when you lay waste to a million human species the bad that is done is finite.

Is there solid evidence for that? If there's even the slightest chance that it will result in infinite bad, then the problem is much more complicated.

For this reasoning to make sense, there must not be even a 0.00000000000001% probability that the evil you do is infinite.
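To spell out the arithmetic (a toy expected-value sketch; p_g and p_b are hypothetical probabilities of the infinite good and the infinite bad, and U_f stands for all the finite outcomes):

E[U] = p_g \cdot (+\infty) + p_b \cdot (-\infty) + (1 - p_g - p_b) \cdot U_f

If p_b = 0, this is +\infty for any p_g > 0, which is how EDR licenses the calamity. If p_b > 0, even by 0.00000000000001%, the sum contains both +\infty and -\infty terms and is undefined, so the comparison that was supposed to justify the act never gets off the ground.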

Comment by J_Thomas2 on Anthropomorphic Optimism · 2008-08-05T16:03:41.000Z · LW · GW

My point was that in an adversarial situation, you should assume your opponent will always make perfect choices. Then their mistakes are to your advantage. If you're ready for optimal thrashings, random thrashings will be easier.

It isn't that simple. When their perfect choices mean you lose, you might as well hope they make mistakes. Don't plan for the worst that can happen; plan for the worst that can happen that you can still overcome.

One possible mistake they can make is to just be slow. If you can hit them hard before they can react, you might hurt them enough to get a significant advantage. Then if you keep hitting them hard and fast, they might reach the point where they're only trying to defend against the worst you can do. While they are trying to prepare for the worst attack you can make, you hit them hard with the second-worst attack, which they aren't prepared for. Then when they try to defend against whatever they think you'll do next, you do something else bad. It's really ideal when you can get your enemy into that stance.

Of course it's even better when you can persuade them not to be your enemy in the first place. If you could do that reliably then you would do very well. But it's hard to do that every time. The enemy gets a vote.

Comment by J_Thomas2 on Anthropomorphic Optimism · 2008-08-05T06:26:54.000Z · LW · GW

Humans faced with resource constraints did find the other approach.

Traditionally, rather than restrict our own breeding, our response has been to enslave our neighbors. Force them to work for us, to provide resources for our own children. But don't let them have children. Maybe castrate the males; if necessary, kill the females' children. (It was customary to expose deformed or surplus children. If a slave does get pregnant, whose child is surplus?)

China tried the "everybody limit their children" approach. Urban couples were allowed one child; farm couples could have two. Why the difference? China officially did not have an "other" to enslave. They had to try to make it fair. But why would the strong be fair to the weak? Slaves are bred when there's so much room to expand that the masters' children can't fill the space, and only when it's easier, cheaper, or safer to breed them than to capture more.

Traditionally slavery was the humane alternative to genocide.

Why didn't Eliezer think that way? My guess is that he is a good man and so he supposed that human populations would think in terms of what's good and fair for everyone, the way he does.

He applied anthropomorphic optimism to human beings.