Comments

Comment by TGGP2 on The Meaning of Right · 2008-07-30T19:03:00.000Z · LW · GW

Humans having this kind of tendency is a predictable result of what their design was optimized to do, and as such them having it doesn't imply much for minds from a completely different part of mind design space.
Eliezer seems to be saying his FAI will emulate his own mind as it would be if it were much more knowledgeable and had heard all the arguments.

Comment by TGGP2 on The Meaning of Right · 2008-07-30T04:27:00.000Z · LW · GW

Then they don't know the true difference between the two possible lives, do they?
"True difference" gets me thinking of "no true Scotsman". Has there ever been anybody who truly knew the difference between two possible lives? Even if someone could be reincarnated and retain memories the order would likely alter their perceptions.

I'm very interested in how Eliezer gets from his meta-ethics to utilitarianism
He's not a strict utilitarian in the "happiness alone" sense. He has an aversion to wireheading, which maximizes the classic version of utility.

I know you frown upon mentioning evolutionary psychology, but is it really a huge stretch to surmise that the more even-keeled, loving and peaceful tribes of our ancestors would out-survive the wilder warmongers who killed each other out?
Yes, it is. The peaceful ones would be vulnerable to being wiped out by the more warlike ones. Or, more accurately (group selection isn't as big a factor, given that intergroup variance is smaller than intragroup variance), the members of the peaceful tribe more prone to violence would achieve dominance, as hawks among doves do. Among the Yanomamö we find high reproductive success among men who have killed: the higher the body count, the more children. War and murder appear to be human universals.

Eliezer's obvious awareness of rationalization is encouraging
Awareness of biases can increase errors, so it's not encouraging enough given the stakes.

Finally, I would think there would be more than one AI programmer, reducing the risk of deliberate evil
I'm not really worried about that. No one is a villain in their own story, and people we would consider deviants would likely be filtered out of the Institute and would probably be attracted to other career paths anyway. The problem exists, but I'm more concerned with well-meaning designers creating something that goes off in directions we can't anticipate.

Caledonian, Eliezer never said anything about not bothering to look for arguments. His idea is to find out how he would respond if he were confronted with all arguments. He seems to assume that he (or the simulation of him) will correctly evaluate arguments. His point about no universal arguments is that he has to start with himself rather than some ghostly ideal behind a veil of ignorance or something like that.

Comment by TGGP2 on The Meaning of Right · 2008-07-29T17:26:00.000Z · LW · GW

I'm near Unknown's position. I don't trust any human being with too much power. No matter how nice they seem at first, history indicates to me that they inevitably abuse it. We've been told that a General AI will have power beyond any despot known to history. Am I supposed to have that much reliance on the essential goodness within Eliezer's heart? And in case anyone brings this up, I certainly don't trust the tyranny of the majority either. I don't recognize any moral obligation to stop it because I don't recognize any obligations at all. Also, I might not live to see him or his followers immanentize the Eschaton.

Female circumcision is commonly carried out by women who've undergone the procedure themselves. So I don't think the Pygmy father will be convinced.

Comment by TGGP2 on The Gift We Give To Tomorrow · 2008-07-19T01:17:00.000Z · LW · GW

Hal Finney, I am reminded of Steven Pinker's discussion of love between two individuals whose interests exactly coincide. He says that the two would come to form one organism, like multiple organs or cells within a single individual, and so would not have to experience "love".

Comment by TGGP2 on The Gift We Give To Tomorrow · 2008-07-18T07:40:00.000Z · LW · GW

I was once impressed by the ability of natural selection to create incredibly complicated functioning living things that can even repair and make copies of themselves. I realized this was the result of its having so much time and material to work with, and of its relentlessly following an algorithm for attaining fitness that a human being, with our biases, would be apt to deviate from if pursuing it consciously; but I still felt impressed. I have never felt that way about beauty or emotion.

Comment by TGGP2 on What Would You Do Without Morality? · 2008-06-30T01:46:00.000Z · LW · GW

Since that's already what I believe, it wouldn't be a change at all. I must admit though that I didn't tip even when I believed in God, but I was different in a number of ways.

I think the world would change on the margin and that Voltaire was right when he warned of the servants stealing the silverware. The servants might also change their behavior in more desirable ways, but I don't know whether I'd prefer it on net and as it doesn't seem like a likely possibility in the foreseeable future I am content to be ignorant.

Comment by TGGP2 on [deleted post] 2008-06-29T03:44:00.000Z

I would have expected that violent instability would increase the costs of resource extraction. That is explicitly part of the goal of MEND in Nigeria, which John Robb often discusses.

Comment by TGGP2 on Causality and Moral Responsibility · 2008-06-16T21:53:00.000Z · LW · GW

Sorry I'm late, but this is really a great opportunity to plug "For the law, neuroscience changes nothing and everything".

Comment by TGGP2 on Thou Art Physics · 2008-06-07T22:23:00.000Z · LW · GW

Since when does science contain morality?

Comment by TGGP2 on Circular Altruism · 2008-01-25T15:44:00.000Z · LW · GW

TGGP, are you familiar with the teachings of Jesus?
Yes, I was raised Christian and I've read the Gospels. I don't think they provide an objective standard of morality, just the Jewish Pharisaic tradition filtered through a Hellenistic lens.

Matters of preference are entirely subjective, but for any evolved agent they are far from arbitrary, and subject to increasing agreement to the extent that they reflect increasingly fundamental values in common.
That is relevant to what ethics people may favor, but not to any truth or objective standard. Agreement among people is the result of subjective judgment.

Comment by TGGP2 on Circular Altruism · 2008-01-25T07:56:00.000Z · LW · GW

Caledonian, what is an objective standard by which an ethical system can be evaluated?

Comment by TGGP2 on When None Dare Urge Restraint · 2007-12-10T05:16:00.000Z · LW · GW

Julian Morrison, William Saletan has suggested lowering the age of consent but states that people wouldn't think rationally about it. I discussed that here, and noted here a study showing that sex and pot don't screw kids up like people thought.

Comment by TGGP2 on When None Dare Urge Restraint · 2007-12-09T22:33:00.000Z · LW · GW

The Americans DID adopt mass marching tactics during the Revolutionary War. We even won battles that way!

Here is Wikipedia on the mistaken idea that the American Revolution was won by guerrilla tactics.

Comment by TGGP2 on Uncritical Supercriticality · 2007-12-08T04:34:00.000Z · LW · GW

What did the Resistance accomplish? I already stated that it seemed to me that it was the opposing armies that got rid of the Nazis. If you disagree on that or have something else you think they did, state it.

Comment by TGGP2 on Uncritical Supercriticality · 2007-12-06T23:45:00.000Z · LW · GW

About violence and society. What do we define by violence? Do we also define intrusion in our personal sphere, psychological re-programming, etc. as violent activities?
You should read Randall Collins.

What about the Resistance in countries that were occupied by Nazi Germany?
Did they actually accomplish anything? I think it was the violence of the opposing armies that actually ended Nazi occupation.

Comment by TGGP2 on Beware of Stephen J. Gould · 2007-11-11T03:20:00.000Z · LW · GW

Should we measure the importance of the ideas discussed by citations?

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-26T01:43:00.000Z · LW · GW

I do not consider your rephrasing to be accurate. I wasn't giving the measurers a choice; they are all supposed to follow the same procedure in order to obtain the same (probabilistic) results. It is the Mugger, or outside agent, that is making choices and thereby preventing the experiment from being controlled and repeatable.

What do you see as the major deficiencies in our model of reality? That the behavior of quantum particles is probabilistic rather than deterministic?

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-25T19:28:00.000Z · LW · GW

In quantum experiments the random outcomes are the same for all experimenters, so experiments can be repeated and the same probabilities will be observed. When you have someone else sending messages, you can't rely on them to behave the same for all experimenters. If there were a larger group of Muggers that different scientists could communicate with, then experiments might reveal statistical information about the Mugger class of entity (treating them as experimental subjects), but it's a stretch.

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-25T02:48:00.000Z · LW · GW

I don't think you've established that "You might as well consider it good"; I might as well not consider it good or bad. You haven't given a reason to consider it more good than bad, just hope. I might hope my lottery ticket is a winner, but I have no reason to expect it to be.

If you want to persuade physicists to start looking for messages from beyond the space-time continuum, you'd better be able to offer them a method. I am completely at a loss for how one might go about it. I certainly don't know how you are going to do it at the blackboard. Anything you write on the blackboard comes from you, not something outside space-time. Anticryptography would sound like the study of decrypting encryptions, which is already covered by cryptography. As far as I know, SETI is just dicking around and has no algorithms of the type you speak of, but my information just comes from Michael Crichton and others critical of them. I don't see how you can have this other observer and at the same time have the scientist with control over the lab.

You haven't come up with much of a moral system either, you just say to do what the Mugger says, when we are not in contact with any such Mugger and have no reason to suppose what the Mugger wants us to do is good.

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-24T21:37:00.000Z · LW · GW

Physicists have been proceeding like physicists for some time now, and none of them has done anything like receiving the Old Testament from outside of our space-time. Why would you even expect a laboratory experiment to have such a result? It also seems you are postulating an extra agent (the Mugger), which limits the amount of control experimenters have and in turn makes the experiment unrepeatable.

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-24T01:40:00.000Z · LW · GW

You have not established that one ought to "do his best to increase his intelligence, his knowledge of reality and to help other ethical intelligent agents do the same". Where is the jump from is to ought? I know Robin Hanson gave a talk saying something along those lines, but he was greeted with a considerable amount of disagreement from people whose ethical beliefs aren't especially different from his.

That entails consistently resisting tyranny and exploitation.
If a tyrant's goal were to increase their knowledge of reality and spread it, and they chose to go about that with violence and exploitation, resistance could very well hinder those goals.

But intelligence can be defined as the ability to predict and control reality or to put it another way to achieve goals.
That would make Azathoth incredibly intelligent, and Azathoth isn't called the "blind idiot" for nothing.

So, if your only goal is to increase intelligence
You haven't established that ought to be our goal.

You cannot increase intelligence indefinitely without eventually confronting the question of what other goals the intelligence you have helped to create will be applied to.
The intelligence might have no goals other than those I choose to give it, and the intelligence I am endlessly increasing might be my own.

That is a tricky question that our civilization does not have much success answering, and I am trying to do better.
Why is a "civilization" the unit of analysis rather than a single agent?

I assign most of the intrinsic good to obeying the Mugger
I do not and you have not established that I should.

the more intrinsic good gets heaped on obeying the Mugger.
You have not established that obeying the Mugger will actually lead to preferable results.

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-23T03:49:00.000Z · LW · GW

Richard, my objections in my e-mail to you still stand. I suppose to a Peter Singer utilitarian it might be correct that we assign equal weight of importance to everyone in and beyond our reality, but not everyone accepts that and you have not established that we ought to. If I am a simulation, I take my simulation as my reality and care as little about the space-time simulating me (outside of how they affect me) as another simulation someone in our reality might create. Outside of the issue of importance, you still have not established how we obtain oughts. You simply ask that we accept the authority of someone even as you acknowledge that this person may be a liar and/or malevolent. You have hit the "worship" button without regard to whether it is Nyarlathotep/Loki/Lucifer you are worshiping (in that respect you are not all that different from the adherents of the more primitive religions). Your post was also quite long. I suggest you get a blog of your own to host it on. All the cool people are doing it.

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-22T01:39:00.000Z · LW · GW

I think it was the Stoics who said one's ethical duty was to act in accordance with the Universe. Marcus Aurelius did a lousy job of making sure his son was competent to run the empire though.

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-21T06:34:00.000Z · LW · GW

So, then Richard, do you assert that morality does have observable effects on the universe? Do you think that a physicist can do an experiment that will grant him/her knowledge of morality? You have been rather vague by saying that just as we discovered many positive facts with science, so we can discover normative ones, even if we have not been able to do so before. You haven't really given any indication as to how anyone could possibly do that, except by analogizing again to fields that have only discovered positive rather than normative facts. It would seem to me the most plausible explanation for this difference is that there are none of the latter.

Comment by TGGP2 on Congratulations to Paris Hilton · 2007-10-20T20:13:00.000Z · LW · GW

Richard, if morality is a sort of epiphenomenon with no observable effects on the universe, how could anyone know anything about it?

Comment by TGGP2 on A Priori · 2007-10-12T18:36:00.000Z · LW · GW

Richard, I don't actually believe philosophers are idiots, because I've seen their standardized test scores. I do think they could use their intellects more productively, though. If I were to ignore IQ/general intelligence and simply try to judge whether one philosopher does better philosophizing than another, would I be able to do it without becoming a philosopher myself and judging their arguments? I can determine that rocket physicists are good at what they do because they successfully send rockets into the air; I know brain surgeons are because the brains they operate on end up with the behavior they promise. I can't think of anything I would hire a philosopher for, other than teaching a philosophy course. So is the merit of philosophy an entirely circular thing, or is there a heuristic the non-philosopher layman can use that will let him know he should pay more attention to philosophers than to palm-readers?

Comment by TGGP2 on A Priori · 2007-10-12T04:07:00.000Z · LW · GW

Hopefully Anonymous had a post about zombies here, in which I made fun of him.

Anticipating experience may be a useful constraint for science, but that is not all there is to know.
If I were going to dispute this, I would have to specify what it means to "know" and get into one of those goofy epistemology discussions I derided here. Philosophy is the required method to argue against philosophy, oh bother. Good thing reality doesn't revolve around dispute.

Comment by TGGP2 on Human Evil and Muddled Thinking · 2007-09-17T21:12:00.000Z · LW · GW

getting out of touch with your basic humanity
I am a Homo sapiens, and therefore my characteristics are human. Perhaps I should wonder why you have an inhuman bias against torture, but of course that is human as well.
Homo sum: humani nil a me alienum puto ("I am human: nothing human is alien to me").

Comment by TGGP2 on Human Evil and Muddled Thinking · 2007-09-17T00:03:00.000Z · LW · GW

mtraven, do you really believe in the existence of the soul, or are you just using it because it is in common usage? At my blog I was thinking of writing a post whose title began "Thank god", then remembered I had already declared I was an agnotheist, and then considered "Thank goodness", but remembered I didn't believe in objective good either.

Comment by TGGP2 on A Fable of Science and Politics · 2006-12-23T19:54:01.000Z · LW · GW

The Blues and Greens were Catholics and Monophysites (I forget which was which). They once united and almost overthrew the emperor Justinian (his wife persuaded him not to flee), but Narses set them against each other and crushed their revolt.