Comments

Comment by Mark_Lu on An Intuitive Explanation of Solomonoff Induction · 2012-07-11T12:20:02.525Z · LW · GW

Okay, I have a "stupid" question. Why is the longer binary sequence that represents the hypothesis less likely to be the 'true' data generator? I read the part below, but I don't get the example; can someone explain it in a different way?

We have a list, but we're trying to come up with a probability, not just a list of possible explanations. So how do we decide what the probability is of each of these hypotheses? Imagine that the true algorithm is produced in a most unbiased way: by flipping a coin. For each bit of the hypothesis, we flip a coin. Heads will be 0, and tails will be 1. In the example above, 01001101, the coin landed heads, tails, heads, heads, tails, and so on. Because each flip of the coin has a 50% probability, each bit contributes ½ to the final probability.

Therefore an algorithm that is one bit longer is half as likely to be the true algorithm. Notice that this intuitively fits Occam's razor; a hypothesis that is 8 bits long is much more likely than a hypothesis that is 34 bits long. Why bother with extra bits? We’d need evidence to show that they were necessary.
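To make the arithmetic in the quote concrete, here is a minimal sketch (my own illustration, not from the article) of the coin-flip prior it describes: a hypothesis that is L bits long gets prior probability (1/2)^L, so each extra bit halves its probability.

```python
# Each bit of a hypothesis is fixed by one fair coin flip,
# so a program that is L bits long has prior probability (1/2)**L.
def coin_flip_prior(length_in_bits):
    return 0.5 ** length_in_bits

p8 = coin_flip_prior(8)     # 1/256
p34 = coin_flip_prior(34)   # 1/17,179,869,184
print(p8 / p34)             # 2**26, about 67 million: the 8-bit hypothesis is far more likely a priori
```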

Comment by Mark_Lu on Hedonic vs Preference Utilitarianism in the Context of Wireheading · 2012-06-30T09:05:48.458Z · LW · GW

just because I want X doesn't mean I don't also want Y where Y is incompatible with X

In real life you are still forced to choose between X and Y, and through wireheading you can still cycle between X and Y at different times.

Comment by Mark_Lu on A (small) critique of total utilitarianism · 2012-06-28T21:10:44.405Z · LW · GW

This might be one reason why Eliezer talks about morality as a fixed computation.

P.S. Also, doesn't the being itself have a preference for not-suffering?

Comment by Mark_Lu on A (small) critique of total utilitarianism · 2012-06-28T20:30:19.732Z · LW · GW

A problem here seems to be that creating a being in intense suffering would be ethically neutral

Well, don't existing people have a preference that there not be such creatures? You can have preferences that are about other people, right?

Comment by Mark_Lu on A (small) critique of total utilitarianism · 2012-06-28T12:58:27.712Z · LW · GW

preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour

Shouldn't we then just create people with simpler, easier-to-satisfy preferences, so that there's more preference-satisfying in the world?

Comment by Mark_Lu on A (small) critique of total utilitarianism · 2012-06-28T09:30:52.736Z · LW · GW

To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.

Right, but if/when we get to (partial) brain emulations (in large quantities), we might be able to do the same thing for 'morality' that we do today to recognize cats using a computer.

Comment by Mark_Lu on A (small) critique of total utilitarianism · 2012-06-28T08:55:15.385Z · LW · GW

similar to trying to recognize cats in pictures by reading R,G,B number value array and doing some arithmetic

But a computer can recognize cats by reading pixel values in pictures? Maybe not as efficiently and accurately as people, but that's because brains have more efficient architectures/algorithms than today's generic computers.
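If it helps make that concrete, here is a toy sketch (with entirely made-up weights, nothing like a real cat detector) of what "reading R,G,B values and doing some arithmetic" can look like:

```python
import math

# Toy "cat score": plain arithmetic on raw R,G,B values.
# The weights below are invented for illustration; a real classifier
# would learn millions of them from labelled photos.
def cat_score(pixels, weights, bias):
    # pixels: flat list of R,G,B values in [0, 255]
    total = sum(w * (p / 255.0) for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-(total + bias)))  # squash to a 0..1 score

image = [200, 180, 160] * 4          # a tiny 2x2 "image": 12 raw numbers
weights = [0.3, -0.1, 0.05] * 4      # hypothetical learned weights
print(cat_score(image, weights, bias=-1.0))  # roughly 0.45 with these made-up numbers
```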

Comment by Mark_Lu on A (small) critique of total utilitarianism · 2012-06-27T16:29:11.759Z · LW · GW

I think the stupidity of utilitarianism is the belief that the morality is about the state, rather than about dynamic process and state transition.

"State" doesn't have to mean "frozen state" or something similar, it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in it's description. I think this is how it's normally used.

Comment by Mark_Lu on A (small) critique of total utilitarianism · 2012-06-27T09:00:53.703Z · LW · GW

Because people are running on similar neural architectures? So all people would likely experience similar (though not necessarily identical) pleasure from e.g. some types of food. The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations, such comparisons might become arbitrarily precise.