Comments

Comment by Kevin_Dick on Formative Youth · 2009-02-25T02:46:42.000Z · LW · GW

Sweet! I thought I was the only smart kid who tried to emulate the Thundercats. Personally, I identified most with Panthro. I am not ashamed to admit this. Discipline, teamwork, and fighting evil. Oh, and the gadgets. Yes, the gadgets.

Comment by Kevin_Dick on Surprised by Brains · 2008-11-23T08:24:44.000Z · LW · GW

How is this not a surface analogy?

Comment by Kevin_Dick on The Weak Inside View · 2008-11-18T21:20:04.000Z · LW · GW

Eliezer, I'm actually a little surprised at that last comment. As a Bayesian, I recognize that reality doesn't care whether I feel comfortable saying I "know" an answer. Reality requires me to act on the basis of my current knowledge. If you think AI will go self-improving next year, you should act very differently than if you believe it will go self-improving in 2100. The difference isn't as stark between 2025 and 2075, but it's still there.

What makes your unwillingness to commit even stranger is your advocacy that there's significant existential risk associated with self-improving AI. It's literally a life-or-death situation by your own valuation. So how are you going to act: as if it will happen sooner, or later?

Comment by Kevin_Dick on Friedman's "Prediction vs. Explanation" · 2008-09-29T17:33:48.000Z · LW · GW

Upon first reading, I honestly thought this post was either a joke or a semantic trick (e.g., assuming the scientists were themselves perfect Bayesians, which would require some "There are blue-eyed people" reasoning).

Because theories that can make accurate forecasts are a small fraction of theories that can make accurate hindcasts, the Bayesian weight has to be on the first guy.

In my mind, I see this visually as the first guy projecting a surface that contains the first 10 observations into the future and having it intersect the actual future. The second guy just wrapped a surface around his present (which contains the first guy's future). Who says he would have projected it in the right direction?
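
Here's a toy version of that weight calculation. The specific numbers (the fraction of candidate theories that truly generalize, and a non-generalizing theory's chance of guessing a new point) are illustrative assumptions of mine, not anything from Friedman's setup:

```python
# Toy posterior-odds sketch of the prediction-vs-explanation argument.
# f_good and the 50/50 guess rate below are made-up illustrative numbers.

f_good = 0.01        # assumed prior fraction of theories that generalize
p_lucky = 0.5 ** 10  # chance a merely-hindcasting theory nails 10 new points

# Scientist 1 committed after 10 points, then predicted 10 more correctly.
# Posterior odds that his theory generalizes:
odds_1 = (f_good * 1.0) / ((1 - f_good) * p_lucky)
print(f"Scientist 1: {odds_1:.1f} : 1 that his theory generalizes")

# Scientist 2 fit all 20 points after the fact. Generalizing and
# merely-hindcasting theories both manage that with probability ~1,
# so his fit is almost no evidence; his odds stay near the prior:
odds_2 = f_good / (1 - f_good)
print(f"Scientist 2: {odds_2:.3f} : 1")
```

With these made-up numbers, ten successful predictions swing the odds from roughly 99:1 against to about 10:1 in favor, while the second guy's after-the-fact fit barely moves them off the prior.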

But then I'm not as smart as Eliezer and could have missed something.

Comment by Kevin_Dick on The Truly Iterated Prisoner's Dilemma · 2008-09-04T18:20:39.000Z · LW · GW

I think you may be attacking a straw man here. When I was taught the PD almost 20 years ago in an undergraduate class, our professor made exactly the same point. If there are enough iterations (even if you know exactly when the game will end), it can be worth the risk to attempt to establish cooperation via Tit-for-Tat. IIRC, the argument depends on an infinite recursion of priors: your prior that the other guy will attempt to establish cooperation, his prior on your prior, and so on. You compare this against the expected loss from a defection in the first round. For a large number of rounds, even a small (infinitely recursed) chance that the other guy will cooperate pays off. Of course, you then have to estimate when the other guy will start defecting as the end approaches, but I seem to recall that once cooperation was established, that point was stable given the ratio of the cooperation and defection payoffs.
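
A minimal back-of-the-envelope sketch of that comparison, not from the original class notes: it assumes the standard payoffs R=3 (mutual cooperation), P=1 (mutual defection), S=0 (sucker), collapses the infinite recursion into a single probability p that the other guy reciprocates, and assumes both sides defect in the final round once cooperation is established. The function names and the endgame model are my illustrative simplifications:

```python
R, P, S = 3, 1, 0  # mutual cooperation, mutual defection, sucker payoffs
# (The temptation payoff T=5 drives the endgame defection but isn't
#  needed for this rough comparison.)

def ev_attempt_cooperation(n_rounds, p_reciprocate, endgame=1):
    """Expected payoff of opening with Tit-for-Tat.

    If the other guy reciprocates (probability p_reciprocate), both
    cooperate until the last `endgame` rounds, where both defect.
    If he doesn't, we get suckered once and then punish forever.
    """
    if_reciprocates = R * (n_rounds - endgame) + P * endgame
    if_not = S + P * (n_rounds - 1)
    return p_reciprocate * if_reciprocates + (1 - p_reciprocate) * if_not

def ev_always_defect(n_rounds):
    """Expected payoff of mutual defection throughout."""
    return P * n_rounds

for n in (10, 100, 1000):
    print(n, ev_attempt_cooperation(n, p_reciprocate=0.05), ev_always_defect(n))
```

Under these assumptions, even a 5% chance the other guy reciprocates makes attempting cooperation a slight loser over 10 rounds (9.95 vs. 10) but a clear winner over 100 rounds (108.95 vs. 100) or 1,000 (1098.95 vs. 1000), which matches the "large number of rounds" condition above.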

Comment by Kevin_Dick on When Anthropomorphism Became Stupid · 2008-08-16T23:56:43.000Z · LW · GW

Doesn't this boil down to being able to "put yourself in another's shoes"? Are mirror neurons what's needed to carry out moral reasoning?

This kind of solves the pie-division problem. If you can put yourself in the other guy's shoes and still sincerely believe you should get the whole pie, perhaps there is some information about your internal state that you can communicate to the others to convince them?

Is the essence of morality that you should believe in the same division no matter which position you occupy?

Comment by Kevin_Dick on Science Isn't Strict Enough · 2008-05-16T17:05:17.000Z · LW · GW

Eliezer, I've been a Believer for 20 years now, so I'm with you. But it seems like you're losing people a little bit on Bayes vs. Science. You've probably already thought of this, but it might make sense to take smaller pedagogical steps here to cover the inferential distance.

One candidate step I thought of was to first describe where Bayes can supplement Science. You've already identified choosing which hypotheses to test. But it might help to list them all out. Off the top of my head, there's also obviously what to do in the face of conflicting experimental evidence, what to do when the experimental evidence is partially but not exactly on point, what to do when faced with weird (i.e., highly unexpected) experimental evidence, and how to allocate funds to different experiments (e.g., was funding the LHC rational?). I'm certain that you have even more in mind.

Then you can perhaps spiral out from these areas of supplementation to convince people of your larger point. Just a thought.

Comment by Kevin_Dick on Decoherence is Simple · 2008-05-06T17:06:34.000Z · LW · GW

I just had a thought, probably not a good one, about Many Worlds. It seems like there's a parallel here to the discovery of Natural Selection and understanding of Evolution.

Darwin had the key insight about how selection pressure could lead to changes in organisms over time. But it's taken us over 100 years to get a good handle on speciation and figure out the detailed mechanisms of selecting for genetic fitness. One could argue that we still have a long way to go.

Similarly, it seems like we've had this insight that QM leads to Many Worlds due to decoherence. But it could take quite a while for us to get a good handle on what happens to worlds and figure out the detailed mechanisms of how they progress.

But it was pretty clear that Darwin was right long before we had worked out the details. So I guess it doesn't bother me that we haven't worked out the details of what happens to the Many Worlds.