Comments

Comment by Phil_Goetz2 on Pretending to be Wise · 2009-02-20T05:10:16.000Z · LW · GW

Good post. Nick's point is also good.

When parents say they don't care who started it, it may also be a strategy to minimize future fighting. Justice is not always optimal, even in repeated interactions.

Comment by Phil_Goetz2 on Building Weirdtopia · 2009-01-13T06:47:40.000Z · LW · GW

Jorge Luis Borges, "The Babylon Lottery," 1941: government by lottery. Living under a lottery system leads to greater expectation of random events, greater belief that life is and should be ruled by randomness, and further extension of the lottery's scope - a feedback loop that escalates until every aspect of everyone's life is controlled by the lottery.

Comment by Phil_Goetz2 on Can't Unbirth a Child · 2008-12-29T02:52:31.000Z · LW · GW

Anon: "The notion of "morally significant" seems to coincide with sentience."

Yes; the word "sentience" seems to be just a placeholder meaning "qualifications we'll figure out later for being thought of as a person."

Tim: Good point that people have a very strong bias to associate rights with intelligence, whereas empathy is a better criterion. The problem is that dogs have lots of empathy. Let's say intelligence and empathy are both necessary but not sufficient.

James: "Shouldn't this outcome be something the CEV would avoid anyway? If it's making an AI that wants what we would want, then it should not at the same time be making something we would not want to exist."

CEV is not a magic "do what I mean" incantation. Even supposing the idea were worked out before the first AI is built, you probably wouldn't have a mechanism to implement it.

anon: "It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves."

Something is missing from that sentence. Whatever you meant, let's not rule out creating new species. We should, eventually.

Eliezer: Creating new sentient species is frightening. But is creating new non-sentient species less frightening? Any new species you create may out-compete the old and become the dominant life form. It would be the ultimate loss to create a non-sentient species that replaced sentient life.

Comment by Phil_Goetz2 on Nonperson Predicates · 2008-12-28T03:13:54.000Z · LW · GW

"I propose this conjecture: In any sufficiently complex physical system there exists a subsystem that can be interpreted as the mental process of an sentient being experiencing unbearable sufferings."

It turns out - I've done the math - that if you are using a logic-based AI, then the probability of having alternate possible interpretations diminishes as the complexity increases.

If you allow /subsystems/ to mean a subset of the logical propositions, then there could be such interpretations. But I think it isn't legit to worry about interpretations of subsets.

BTW, Eliezer, regarding this recent statement of yours: "Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction": I challenge you to find one post where you have tried to correct me in a misunderstanding of you, or even to identify the misunderstanding, rather than just complaining about it in a non-specific way.

Comment by Phil_Goetz2 on Harmful Options · 2008-12-28T03:00:37.000Z · LW · GW

Eliezer: "I'll go ahead and repeat that as Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction, that I will not be responding to Goetz's comment."

Really? I challenge you to point to ONE post in which you have tried to correct a misunderstanding by me of your opinion, rather than just complaining about my "misunderstandings" without even saying what the misunderstanding was.

Comment by Phil_Goetz2 on Harmful Options · 2008-12-26T21:16:29.000Z · LW · GW

Eliezer, I have probably made any number of inaccurate depictions of your opinions, but you can't back away from these ones. You DO generally think that your opinion on topics you have thought deeply about is more valuable than the opinion of almost everyone, and you HAVE thought deeply about fun theory. And you ARE planning to build an AI that will be in control of the world. You might protest that "take over the world" has different connotations. But there's no question that you plan for your AI to be in charge.

Comment by Phil_Goetz2 on Harmful Options · 2008-12-26T17:59:11.000Z · LW · GW

It is deeply creepy and disturbing to hear this talk from someone who already thinks he knows better than just about everybody about what is good for us, and who plans to build an AI that will take over the world.

Comment by Phil_Goetz2 on Devil's Offers · 2008-12-26T17:47:24.000Z · LW · GW

Michael, I thought that you advocated comfort with lying because smart people marginalize themselves by compulsive truth-telling. For instance, they find it hard to raise venture capital. Or (to take an example that happened at my company), when asked "Couldn't this project of yours be used to make a horrible terrorist bioweapon?", they say, "Yes." (And they interpret questions literally instead of practically; e.g., the question actually intended, and that people actually hear, is more like, "Would this project significantly increase the ease of making a bioweapon?", which might have a different answer.)

Am I compulsively telling the truth again? Doggone it.

Is it just me, or did Wright's writing style sound very much like Eliezer's?

Comment by Phil_Goetz2 on Existential Angst Factory · 2008-07-19T21:57:41.000Z · LW · GW

pdf23ds: The claim that atheism inevitably leads to nihilism, and that belief in god inevitably relieves it, is made regularly by religious types in the West as the core of their argument for religion.

Comment by Phil_Goetz2 on Existential Angst Factory · 2008-07-19T18:05:21.000Z · LW · GW

Today, in the West, people think that atheism leads to an existential crisis of meaning. But in ancient Greece, people believed in creator gods, and yet had to find their own sense of purpose exactly as an atheist does.

We assume that the religious person has a purpose given by God. But Zeus would have said that the purpose of humans was to produce beautiful young women for him to have sex with. Ares would have said their purpose was to kill each other. Bacchus would have said it was to party. And so on. The gods ignored humans, had trivial purposes for them, or even hostile intent towards them.

Every believing Greek had to find their own meaning in life; often based on a sense of community. This meaning, or lack thereof, bore no relation to whether they believed in the gods or not.

Anna wrote:

Maybe it will make it easier, but they didn't really work at it. By having this alleged surgery, will it make them more or less prone to believe in the quick fix or the long-term discipline of working at it?

The reason for practicing discipline is to be able to solve problems. It would not be rational to avoid a quick solution to your life's biggest problem, in order to gain experience that might possibly be useful in solving smaller problems later on.

Comment by Phil_Goetz2 on Lawrence Watt-Evans's Fiction · 2008-07-17T03:00:53.000Z · LW · GW

On the flip side, I'd like to see less-rational characters in fantasy books. I can't believe in pseudo-medieval worlds where the main characters have no ethnic, racial, gender, or class prejudices; have no superstitions; and never make decisions for religious reasons.

(In some fantasy, notably Tolkien, ethnic and racial stereotypes are allowed - but in those fantasy worlds, they're true almost 100% of the time; and the author assumes that the reader, like the author, won't even think of them as prejudices.)

Comment by Phil_Goetz2 on Cached Thoughts · 2008-06-30T01:14:52.000Z · LW · GW

In 1998, I wrote a rec.arts.int-fiction post called "Believable stupidity" (http://groups.google.com/group/rec.arts.int-fiction/browse_thread/thread/60a077934f89a291/3fffb9048965857d?lnk=gst&q=believable+stupidity#3fffb9048965857d) arguing that Eliza, a computer program that matches patterns and fills in a template to produce a response, always wins the Loebner competition because template matching is more like what people do than reasoning is.
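For readers who haven't seen one: a template-matcher of this kind is only a few lines. A minimal Eliza-style sketch (mine, not from the 1998 post; the patterns and templates are invented for illustration):

```python
import re

# Each rule pairs a pattern with a response template; the first pattern
# that matches the input fills its captured text into the template.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am worried about the Loebner competition"))
# -> Why do you say you are worried about the Loebner competition?
```

No reasoning anywhere - just pattern lookup and slot-filling - which is the point above.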

Comment by Phil_Goetz2 on Where Philosophy Meets Science · 2008-04-15T01:34:51.000Z · LW · GW

Someone (Wigner, probably) once commented on the surprising efficacy of mathematics, which was developed by people who did not believe that it would ever serve any purpose, and yet ended up being at the core of many pragmatic solutions.

A companion observation is the surprising inefficacy of philosophy, which is intended to solve our greatest problems, and never does. My impression, like Eliezer's, is that philosophy just generates a bunch of hypotheses, with no way of choosing between them, until the right hypothesis is eventually isolated by scientists. Philosophy is usually an attempt to do science without all the hard work. One might call philosophy the "science of untestable hypotheses".

But, on the other hand, there must be cases where philosophical inclinations have influenced people to pursue lines of research that solved some problem sooner than it would have been solved without the initial philosophical inclination.

One example is the initial conception that the Universe could be described mathematically. Kepler and Newton worked so hard at finding mathematical equations to govern the movements of celestial bodies because they believed that God must have designed a Universe according to some order. If they'd been atheists, they might never have done so.

This example doesn't redeem philosophy, because I believe their philosophies were helpful only by chance. I'd like to see how many examples there are of philosophical notions that sped up research that proved them correct. Can anyone think of some?

Comment by Phil_Goetz2 on Belief in the Implied Invisible · 2008-04-11T04:00:42.000Z · LW · GW

Eliezer wrote:

"To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back. Do you deem it worth the purely altruistic effort to set up this colony, for the sake of all the people who will live there and be happy? Or do you think the spaceship blips out of existence before it gets there? This could be a very real question at some point."

I don't see any difference between deciding to send the spaceship even though the colonists will be outside my light cone when they get there, and deciding to send the spaceship even though I will be dead when they get there.

I don't think it's possible to get outside Earth's light cone by travelling at less than the speed of light, is it? I'm not well-educated about such things, but I thought that leaving a light cone was possible only during the very early stages (e.g., the first several seconds) after the big bang. Of course, that was said back when people believed the universe's expansion was slowing down. But unless the universe's expansion allows things to move out of Earth's light cone - and I suspect that allowing that possibility would allow violation of causality, because it seems to require a perceived velocity with respect to Earth faster than the speed of light - then the entire exercise may be moot; the notion of invisibles may be as incoherent as the atomically-identical zombies.

Comment by Phil_Goetz2 on GAZP vs. GLUT · 2008-04-07T13:15:51.000Z · LW · GW

PK is right. I don't think a GLUT can be intelligent, since it can't remember what it's done. If you let it write notes in the sand and then use those notes as part of the future stimulus, then it's a Turing machine.

The notion that a GLUT could be intelligent is predicated on the good-old-fashioned AI idea that intelligence is a function that computes a response from a stimulus. This idea, most of us in this century now believe, is wrong.
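To make the distinction concrete, here is a toy sketch (mine, not from this thread): a GLUT is a pure table from the current stimulus to a response, while the "notes in the sand" variant feeds the system's own past output back in as part of the next stimulus, which is what gives it state. The table entries are invented for illustration.

```python
# A GLUT: the response depends only on the current stimulus; no state.
GLUT: dict[str, str] = {
    "hello": "hi",
    "what did you just say?": "I cannot remember",  # bare question is unanswerable
    "hello|what did you just say?": "I said 'hi'",  # answerable once history is in the key
}

def glut_respond(stimulus: str) -> str:
    """A pure GLUT lookup."""
    return GLUT.get(stimulus, "...")

def respond_with_notes(stimulus: str, notes: str) -> tuple[str, str]:
    """'Notes in the sand': past interaction becomes part of the stimulus,
    so the table-plus-notes system carries state forward - a (bounded)
    Turing machine rather than a mere lookup table."""
    key = f"{notes}|{stimulus}" if notes else stimulus
    return GLUT.get(key, "..."), key  # the key becomes the next notes

reply, notes = respond_with_notes("hello", "")
print(reply)                                                   # hi
print(respond_with_notes("what did you just say?", notes)[0])  # I said 'hi'
```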

Comment by Phil_Goetz2 on GAZP vs. GLUT · 2008-04-07T03:06:32.000Z · LW · GW

Eliezer, I suspect you are not being 100% honest here. You wrote:

"I don't have any problems with a GLUT being conscious."

I have problems with a GLUT being conscious. (Actually, the GLUT fails dramatically to satisfy the graph-theoretic requirements for consciousness that I alluded to, but did not describe, earlier today - but I wouldn't believe that a GLUT could be conscious even if that weren't the case.)

Comment by Phil_Goetz2 on The Generalized Anti-Zombie Principle · 2008-04-06T19:25:34.000Z · LW · GW

"Going on about zombies and consciousness as if you were addressing philosophical issues, when you have redefined consciousness to mean a particular easily-comprehended computational or graph-theoretic property, falls squarely into the category of ideas that I consider Silly."

Although, ironically, I'm in the process of doing exactly that. I will try to come up with a rationalization for why it is Not Silly when I do it.

Comment by Phil_Goetz2 on The Generalized Anti-Zombie Principle · 2008-04-06T19:22:25.000Z · LW · GW

Caledonian writes:

Um, no. What it IS is a radically different meaning of the word than what the p-zombie nonsense uses. Chalmers' view requires stripping 'consciousness' of any consequence, while Eliezer's involves leaving the standard usage intact.

"'Consciousness' in that sense refers to self-awareness or self-modeling, the attempt of a complex computational system to represent some aspects of itself, in itself. It has causal implications for the behavior of the system, can potentially be detected by an outside observer who has access to the mechanisms underlying that system, and is fully part of reality."

What Eliezer wrote is consistent with that definition of consciousness. But that is not "the standard usage"; it's a useless usage. Self-representation is trivial and of no philosophical interest. The interesting philosophical question is why I have what the 99% of the world who don't use your "standard usage" mean by "consciousness". Why do I have self-awareness? And by self-awareness, I don't mean anything I can currently describe computationally, or whose consequences I know how to detect.

This is the key unsolved mystery of the universe, the only one that we have really no insight into yet. You can't call it "nonsense" when it clearly exists and clearly has no explanation or model. Unless you are a zombie, in which case what I interpret as your stance is reasonable.

There is a time to be a behaviorist, and it may be reasonable to say that we shouldn't waste our time pursuing arguments about internal states that we can't detect behaviorally, but it is Silly to claim to have dispelled the mystery merely by defining it away.

There have been too many attempts by scientists to make claims about consciousness that sound astonishing, but turn out to be merely redefinitions of "consciousness" to something trivial. Like this, for instance. Or Crick's "The Astonishing Hypothesis", or other works by neuroscientists on "consciousness" when they are actually talking about focus of attention. I have developed an intellectual allergy to such things. Going on about zombies and consciousness as if you were addressing philosophical issues, when you have redefined consciousness to mean a particular easily-comprehended computational or graph-theoretic property, falls squarely into the category of ideas that I consider Silly.

Comment by Phil_Goetz2 on The Generalized Anti-Zombie Principle · 2008-04-06T03:03:35.000Z · LW · GW

Eliezer wrote:

"Consciousness, whatever it may be - a substance, a process, a name for a confusion - is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud. The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences."

Eliezer, I'm shocked to see you write such nonsense. This only shows that you don't understand the zombie hypothesis at all. Or else you suppose that intelligence requires consciousness - which is the spiritualist, Searlian stuff you usually oppose.

The zombie hypothesis begins by asserting that I have no way of knowing whether you are conscious, no matter what you write. You of all people I expect to accept this, since you believe that you are Turing-computable. You haven't made an argument against the zombie hypothesis; you've merely asserted that it is false and called that assertion an argument.

The only thing I can imagine is that you have flipped the spiritualist argument around to its mirror image. Instead of saying that "I am conscious; Turing machines may not be conscious; therefore I am not just a Turing machine", you may be saying, "I am conscious; I am a Turing machine; therefore, all Turing machines that emit this sequence of symbols are conscious."

Comment by Phil_Goetz2 on Hand vs. Fingers · 2008-03-30T04:16:49.000Z · LW · GW

If you want to fight the good fight, edit the section "Limits of Reductionism" in the Wikipedia article on Reductionism. It cites many examples of things that are merely complex, as evidence that reductionism is false.

Comment by Phil_Goetz2 on Hand vs. Fingers · 2008-03-30T04:08:45.000Z · LW · GW

I'm confused as to what your purpose is with this series on reductionism. Is there a particular anti-reductionist position you're combating?

Earlier, you wrote,

"Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory."

I don't think your typical anti-reductionist is concerned about the existence of different levels of models. I've never heard one ask "How can you model the plane without the wings?"

Anti-reductionists are opposed to models in general. An anti-reductionist believes that a collection of things has properties that are not the results of the combined properties of the things in the collection, let alone of a model. For example, they would say that a human has free will, even though the constituents of the human are deterministic; or that a human has a soul, but if you constructed a human from parts, it wouldn't; or that representations in a human brain have meaning, while representations in a computer cannot; or that a human brain has consciousness, etc.

So I don't think what you're writing addresses what it is that reductionists believe that anti-reductionists don't.

Anti-reductionism equals spiritualism, and it is not the opposite of materialism, but of science. Science is not materialistic, since we believe in fields. Also, if you discovered that spirits were composed of magnetic fields, and conducted experiments on them, you would still be a scientist. The spiritualist, by contrast, uses the term "spirit" to name something that can't be explained. Anti-reductionism = spiritualism = the belief that there exist complex phenomena ("spirits") with no explanations.

Comment by Phil_Goetz2 on Initiation Ceremony · 2008-03-30T02:43:05.000Z · LW · GW

Tim, one-tenth would be the correct answer if Brennan were in the Heresy of Virtue, AND there were 16 people in the room. There would be 9 women in the HoV in the room, and 1 man who wasn't Brennan; hence, one in ten.

Thanks to Mike Vassar for pointing out that, if Brennan is in the HoV, you need to count how many men are in the room.

Since there are an odd number of people in the room, the guide must be posing a hypothetical question. If Brennan is in the HoV, the correct answer would be for him to say that he needs to know how many people are in the room in the hypothetical situation.
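For concreteness, a minimal counting sketch of the hypothetical above, using only the numbers stated in this comment (the guide's actual question is in the original post and is not reproduced here):

```python
# Under the hypothetical: 16 people in the room and Brennan in the HoV,
# so the HoV members present besides Brennan are 9 women and 1 man.
hov_women = 9
hov_men_besides_brennan = 1

others_in_hov = hov_women + hov_men_besides_brennan  # 10
print(hov_men_besides_brennan / others_in_hov)       # 0.1, i.e. "one-tenth"
```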

Comment by Phil_Goetz2 on Initiation Ceremony · 2008-03-29T14:31:39.000Z · LW · GW

Hint for the extra credit: What is the probability that the guide is Brennan? (Zero.)

Comment by Phil_Goetz2 on Initiation Ceremony · 2008-03-29T05:29:40.000Z · LW · GW

In my experience, the problem with running on curiosity is that, to be effective at something, one has to forgo investigating lots of unrelated things one is curious about.

Comment by Phil_Goetz2 on Initiation Ceremony · 2008-03-29T04:16:29.000Z · LW · GW

For extra credit, explain how "one-tenth" could also have been the correct answer.

Comment by Phil_Goetz2 on Fake Reductionism · 2008-03-18T16:47:02.000Z · LW · GW

This reminds me of a lesson that I learned, I'm embarrassed to admit, from Tom Brown Jr. (who later threw me out of his school for trying to verify his autobiographical claims).

If you're walking through the woods with a child, and they're interested in all the different plants that they see, they'll ask you what each one is. And, often, they lose interest in each plant after you tell them its name. They still don't know anything about the plant, but they think they do, and it's no longer mysterious and exciting to them.

This is the fault of the child, not the fault of the person who gave the plant its name.