## Comments

**3p1cd3m0n** on Checklist of Rationality Habits · 2015-01-31T02:51:51.947Z · LW · GW

I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.

**3p1cd3m0n** on Second-Order Logic: The Controversy · 2015-01-27T01:16:31.155Z · LW · GW

"Doesn't physics say this universe is going to run out of negentropy before you can do an infinite amount of computation?" Actually, there is a proposal that could create a computer that runs forever.

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-16T03:50:52.729Z · LW · GW

I see. Does the method of normalization you gave work even when there is an infinite number of hypotheses?

**3p1cd3m0n** on Something to Protect · 2015-01-15T01:53:00.024Z · LW · GW

Decreasing existential risk isn't incredibly important to you? Could you explain why?

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-15T01:35:16.672Z · LW · GW

Right; I forgot that it used a prefix-free encoding. Apologies if the answer to this is painfully obvious, but does having a prefix-free encoding entail that there is a finite number of possible hypotheses?

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-14T01:50:10.426Z · LW · GW

I still don't see how that would make all the hypotheses sum to 1. Wouldn't that only make the probabilities of all the hypotheses of length n sum to 1, and thus make the sum over all hypotheses exceed 1? For example, consider all the hypotheses of length 1. Assuming Omega = 1 for simplicity, there are 2 such hypotheses, each with a probability of 2^-1/1 = 0.5, making them sum to 1. There are 4 hypotheses of length 2, each with a probability of 2^-2/1 = 0.25, making them sum to 1. Thus, the probabilities of all hypotheses of length <= 2 sum to 2 > 1.

Is Omega doing something I don't understand? Would all hypotheses be required to be some set length?
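The arithmetic in the comment above can be checked directly. A minimal sketch, assuming the naive setup the comment describes (every binary string of length up to n counts as a hypothesis with unnormalized weight 2^-length), contrasted with a prefix-free code, where Kraft's inequality bounds the total weight by 1:

```python
def sum_all_strings(n):
    # 2^k strings of length k, each with weight 2^-k: every length adds 1
    return sum(2**k * 2**-k for k in range(1, n + 1))

def sum_prefix_free(codes):
    # Kraft's inequality: for a prefix-free code, sum of 2^-len(c) <= 1
    return sum(2 ** -len(c) for c in codes)

print(sum_all_strings(2))                  # 2.0 -- already exceeds 1, as the comment notes
print(sum_prefix_free(["0", "10", "11"]))  # 1.0 -- a complete prefix-free code
```

This is one reason Solomonoff's construction restricts to prefix-free programs rather than all strings.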

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-11T20:53:37.381Z · LW · GW

That would assign a probability of zero to hypotheses that take more than n bits to specify, would it not? That sounds far from optimal.

**3p1cd3m0n** on A Case Study of Motivated Continuation · 2015-01-10T23:27:09.453Z · LW · GW

Did you not previously state that one should learn as much about the problem as one can before coming to a conclusion, lest one fall prey to confirmation bias? Should one learn about the problem fully before making a decision only when one doesn't suspect oneself of being biased?

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-10T22:15:07.154Z · LW · GW

"Of course" implies that the answer is obvious. Why is it obvious?

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-10T20:25:52.533Z · LW · GW

Unfortunately, Chaitin's Omega is incomputable, but even if it weren't, I don't see how it would work as a normalizing constant. Chaitin's Omega is a real number, there is an infinite number of hypotheses, and (IIRC) there is no real number r such that r multiplied by infinity equals one, so I don't see how Chaitin's Omega could possibly work as a normalizing constant.
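For illustration only (the real Chaitin's Omega is incomputable): a toy stand-in that treats a small hypothetical prefix-free set as "the halting programs" shows how a finite real constant can normalize the weights, with no "real number times infinity" step involved, because the infinite sum of weights converges:

```python
# Hypothetical prefix-free codes standing in for halting programs
programs = ["0", "10", "110", "1110"]
# Omega-style constant: total weight 1/2 + 1/4 + 1/8 + 1/16 = 0.9375
omega = sum(2 ** -len(p) for p in programs)
# Normalize each program's weight by omega
probs = [2 ** -len(p) / omega for p in programs]
print(omega)                  # 0.9375
print(round(sum(probs), 12))  # 1.0 -- the normalized weights sum to one
```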

**3p1cd3m0n** on Zombies! Zombies? · 2015-01-10T16:15:56.946Z · LW · GW

If I understand correctly, Yudkowsky finds philosophical zombies implausible because they would require consciousness to have no causal influence on reality. This, Yudkowsky seems to believe, entails that if there are philosophical zombies, it is purely coincidental that accurate discussions of consciousness are conducted by beings that are conscious, which is very improbable; thus philosophical zombies are very implausible. This reasoning seems flawed, as discussing and thinking about consciousness could *cause* consciousness to exist, while that consciousness has no effect on anything else. For philosophical zombies to exist, thinking about consciousness would have to bring about consciousness only in certain substrates.

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-04T20:22:55.373Z · LW · GW

Is there any justification that Solomonoff Induction is accurate, other than intuition?

**3p1cd3m0n** on Open Problems Related to Solomonoff Induction · 2015-01-04T20:12:28.395Z · LW · GW

If I understand Solomonoff Induction correctly, for all n and p the sum of the probabilities of all the hypotheses of length n equals the sum of the probabilities of all the hypotheses of length p. If this is the case, what normalization constant could you possibly use to make all the probabilities sum to one? It seems there is none.
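The observation can be made concrete in a few lines. The flat 2^-n weighting is the setup the comment assumes; the steeper 2^-2n weighting is an illustrative alternative showing that a convergent total, and hence a finite normalizing constant, is possible when weights fall off faster than the number of hypotheses grows:

```python
def mass_up_to(n, weight):
    # total unnormalized mass of the 2^k hypotheses at each length k <= n
    return sum(2**k * weight(k) for k in range(1, n + 1))

flat = mass_up_to(30, lambda k: 2.0 ** -k)         # each length adds 1
steep = mass_up_to(30, lambda k: 2.0 ** -(2 * k))  # geometric, approaches 1
print(flat)   # 30.0 -- grows without bound, so no constant normalizes it
print(steep)  # just under 1.0 -- converges, so a constant exists
```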

**3p1cd3m0n** on Joy in Discovery · 2015-01-04T02:53:31.351Z · LW · GW

"Mystery, and the joy of finding out, is either a personal thing, or it doesn't exist at all—and I prefer to say it's personal." I don't see why this is the case. Can't one have joy only from finding out what no one in the Solar System knows? That way, one can still have joy, but it's still not personal.

**3p1cd3m0n** on Reductionism · 2015-01-03T22:10:45.598Z · LW · GW

Should one really be so certain about there being no higher-level entities? You said that simulating higher-level entities takes fewer computational resources, so perhaps our universe is a simulation whose creators, in an effort to save computational resources, made the universe do computations on higher-level entities when no one was looking at the "base" entities. Far-fetched, maybe, but not completely implausible.

Perhaps if we start observing too many lower-level entities, the world will run out of memory. What would *that* look like?

**3p1cd3m0n** on Righting a Wrong Question · 2015-01-03T21:18:15.715Z · LW · GW

What makes you think that the argument you just said was generated by you for a reason, instead of for no reason at all?

**3p1cd3m0n** on The Lens That Sees Its Flaws · 2015-01-01T01:07:33.023Z · LW · GW

What evidence is there for mice being unable to think about thinking? Due to communication issues, mice can't tell us whether they can think about thinking or not.

**3p1cd3m0n** on Making Beliefs Pay Rent (in Anticipated Experiences) · 2014-12-25T17:55:09.051Z · LW · GW

What evidence is there for floating beliefs being uniquely human? As far as I know, neuroscience hasn't advanced far enough to be able to tell if other species have floating beliefs or not.

Edit: Then again, the question of whether floating beliefs are uniquely human is practically a floating belief itself.

**3p1cd3m0n** on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2014-12-24T19:06:49.360Z · LW · GW

I question whether keeping probabilities summing to one is a valid justification for acting as if the mugger's being honest has a probability of roughly 1/3^^^3. Since we know that, due to our imperfect reasoning, the probability is greater than 1/3^^^3, we know that the expected value of giving the mugger $5 is unimaginably large. Of course, acknowledging this fact causes our probabilities to sum to above one, but this seems like a small price to pay.

Edit: Could someone explain why I've lost points for this?

**3p1cd3m0n** on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2014-12-07T18:21:05.603Z · LW · GW

Then this solution just assumes the probability of there being infinitely many people is 0. If this solution is based on premises that are probably false, then how is it a solution at all? I understand that infinity makes even bigger problems, so we should instead just call your solution a pseudo-solution-that's-probably-false-but-is-still-the-best-one-we-have, and dedicate more effort to finding a real solution.

**3p1cd3m0n** on How can I reduce existential risk from AI? · 2014-12-04T01:45:22.146Z · LW · GW

For one, Yudkowsky in *Artificial Intelligence as a Positive and Negative Factor in Global Risk* says that artificial general intelligence could potentially use its super-intelligence to decrease existential risk in ways we haven't thought of. Additionally, I suspect (though I am rather uninformed on the topic) that Earth-originating life will be much less vulnerable once it spreads away from Earth, as I think many catastrophes would be local to a single planet. I suspect catastrophes from nanotechnology are one such example.

**3p1cd3m0n** on How can I reduce existential risk from AI? · 2014-12-01T00:14:01.574Z · LW · GW

How important is trying to personally live longer for decreasing existential risk? IMO, it seems that most of the risk of existential catastrophe lies sooner rather than later, so I doubt living much longer is extremely important. For example, Wikipedia says that a study at the Singularity Summit found that the median predicted date for the singularity is 2040, and one person gave an 80% confidence interval of 5-100 years. Nanotechnology seems to be predicted to come sooner rather than later as well. What does everyone else think?

**3p1cd3m0n** on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2014-11-30T23:19:14.421Z · LW · GW

Is there any justification for the leverage penalty? I understand that it would apply if there were a finite number of agents, but if there's an infinite number of agents, couldn't all agents have an effect on an arbitrarily large number of other agents? Shouldn't the prior probability instead be P(event A | n agents will be affected) = (1 / n) + P(there being infinite entities)? If this is the case, then it seems the leverage penalty won't stop one from being mugged.
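A toy calculation (all numbers hypothetical) of the prior this comment proposes, showing why the added infinite-entities term would defeat the penalty: with a pure 1/n leverage penalty, the n in the payoff cancels the 1/n in the prior, but any fixed nonzero additive term breaks that cancellation, so expected payoff grows with n:

```python
def expected_payoff(n, p_infinite):
    # prior from the comment: P = 1/n + P(infinitely many entities)
    prior = 1.0 / n + p_infinite
    return prior * n  # payoff assumed to scale with the n agents affected

print(expected_payoff(10**6, 0.0))   # ~1.0: the 1/n penalty exactly cancels
print(expected_payoff(10**6, 1e-9))  # ~1.001: the extra term grows with n
```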

**3p1cd3m0n** on How can I reduce existential risk from AI? · 2014-11-21T02:30:03.914Z · LW · GW

Thanks. That really helps. Do you know of any decent arguments suggesting that working on trying to develop safe tool AI (or some other non-AGI AI) would increase existential risk?

**3p1cd3m0n** on How can I reduce existential risk from AI? · 2014-11-20T03:31:22.241Z · LW · GW

Are there any decent arguments saying that working on trying to develop safe AGI would increase existential risk? I've found none, but I'd like to know because I'm considering developing AGI as a career.

Edit: What about AI that's not AGI?

**3p1cd3m0n** on Efficient Charity · 2014-08-28T17:00:18.659Z · LW · GW

I see what you mean. I don't really know enough about Pascal's mugging to determine whether decreasing existential risk by 1 millionth of a percent is worth it, but it's a moot point, as it seems reasonable that existential risk could be reduced by far more than 1 millionth of one percent.

**3p1cd3m0n** on Efficient Charity · 2014-08-27T17:54:17.650Z · LW · GW

I don't think decreasing existential risk falls into it, because the probability of an existential catastrophe isn't extremely small. One survey taken at Oxford predicted that there was a ~19% chance of human extinction prior to 2100. Determining the probability of existential catastrophe is very challenging and the aforementioned statistic should be viewed skeptically, but a probability anywhere near 19% would still (as far as I can tell) prevent it from falling prey to Pascal's mugging.

**3p1cd3m0n** on Efficient Charity · 2014-08-26T21:47:22.992Z · LW · GW

For many utility functions, I think donating to an organisation working on decreasing existential risk would be incredibly efficient, as:

> Even if we use the most conservative of [estimates of the utility of decreasing existential risk], which entirely ignores the possibility of space colonisation and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. (Bostrom, Existential Risk Prevention as Global Priority)
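The quoted arithmetic, spelled out (the 10^16 figure comes from the quote; the rest is unit conversion):

```python
expected_loss = 10**16          # lives: Bostrom's conservative lower bound
# one millionth of one percentage point = 10^-6 / 100 = 10^-8
value = expected_loss // 10**8  # expected lives saved by that reduction
print(value)           # 100000000, i.e. 10^8 lives
print(value // 10**6)  # 100 -- a hundred times the value of a million lives
```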