Comments

Comment by dbc on Friendly AI and the limits of computational epistemology · 2012-08-08T15:51:18.953Z

I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.

Comment by dbc on Friendly AI and the limits of computational epistemology · 2012-08-08T15:17:07.973Z

I may have missed the part where you explained why qualia can't fit into a state-machine model of the universe.

I think Mitchell's point is that we don't know whether state machines have qualia, and the cost of assuming an answer and being wrong could be large.

Comment by dbc on Bayesians vs. Barbarians · 2012-08-07T10:30:08.877Z

The topic of this article is how rational agents should solve a particular tragedy of the commons. Certainly, a common moral code is one solution to this problem: an army will have no deserters if each soldier morally refuses to desert. I don't want to put words in your mouth, but you seem to think that common morality is the best, or perhaps the only, solution.

I think Eliezer is more interested in situations where this solution is impractical. Perhaps the rationalists are a society of people with vastly differing moral codes; even then, they should still be capable of agreeing to coordinate, even if that means giving up things they individually value.
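To make the structure of that commons problem concrete, here is a toy payoff model (my own illustration; the `soldier_payoff` function, the numbers, and the army size are all invented for the sketch, not anything from the post or the thread):

```python
def soldier_payoff(deserts: bool, others_deserting: int, army_size: int = 100) -> float:
    """Toy payoff: deserting avoids personal risk, but every deserter
    lowers the chance of victory, which all soldiers share."""
    deserters = others_deserting + int(deserts)
    victory_prob = max(0.0, 1.0 - deserters / (army_size / 2))
    personal_risk = 0.0 if deserts else 0.3  # cost borne only by those who fight
    return victory_prob - personal_risk

# With few deserters, deserting looks strictly better for the individual...
print(soldier_payoff(True, 0), soldier_payoff(False, 0))    # 0.98 vs 0.7
# ...but if everyone reasons that way, all payoffs collapse below the
# all-fight outcome of 0.7:
print(soldier_payoff(True, 99), soldier_payoff(False, 99))  # 0.0 vs -0.3
```

Deserting dominates individually, yet universal desertion leaves everyone worse off than universal fighting; that gap is exactly what a shared moral code, or an explicit agreement to coordinate, has to close.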

Comment by dbc on Bayesians vs. Barbarians · 2012-08-07T09:53:15.850Z

One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same).

How do you feel about desertion?

Comment by dbc on Mind Projection Fallacy · 2012-07-31T01:07:56.280Z

This comment was sitting at -2 when I saw it, which makes me think that maybe I don't understand Eliezer's point. I thought the OP was arguing that when we call something "attractive" or "moral" or maybe even "sapient", we project facts about our minds onto the real world. "Attractive" really means "attractive to humans", and if we forget this, we can end up inadvertently drawing wrong conclusions about the world. If that's wrong, then what was the post actually about?

Comment by dbc on An Intuitive Explanation of Solomonoff Induction · 2012-07-09T06:40:06.883Z

Remember that Kolmogorov complexity depends on your choice of universal Turing machine, so we should expect only estimates. Mitchell estimates ~50,000 bits for the New Minimal Standard Model. I'm not an expert on physics, but the mathematics needed to explain what a Lagrangian is would seem to take much more than that. I think you would need Peano arithmetic and a fair amount of set theory just to construct the real numbers so that you could do calculus (of course, people did calculus for over a hundred years before the real numbers were rigorously constructed, but I have a hard time imagining a rigorous calculus without them...). I admit that 50,000 bits is a lot of data, but I'm sceptical that it could rigorously encode all that mathematics.
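To make that machine-dependence precise, here is the standard invariance theorem (my addition, stated in textbook form rather than taken from the comment): for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, independent of the string $x$, with

$$K_U(x) \le K_V(x) + c_{U,V},$$

so switching machines shifts the complexity by at most an additive constant. That is why only order-of-magnitude figures like 50,000 bits are meaningful here.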

F=ma has the same problem, of course. Does the right-hand side really make sense without calculus? Acceleration is, after all, a second derivative.

ETA: If you want a fleshed-out example, I think a much better problem to start with would be predicting the digits of pi, or the prime numbers.
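To sketch why the primes make a cleaner example (my own illustration, not part of the original comment): the whole prime sequence is generated by a very short program, so the Kolmogorov complexity of its first n terms is bounded by the length of that program plus the bits needed to specify n. A minimal Python version:

```python
# The prime sequence is highly compressible: this short program enumerates
# every prime, so K(first n primes) grows roughly like the cost of encoding n.
from itertools import count, islice

def primes():
    """Yield primes by trial division -- slow, but short, and shortness
    is all that matters for a Kolmogorov-complexity bound."""
    for n in count(2):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

print(list(islice(primes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Unlike the Lagrangian case above, the only mathematics this needs is integer arithmetic, which the Turing machine gives us essentially for free.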