Comments

Comment by nick_hay2 on The Quantum Arena · 2008-04-20T01:57:45.000Z · score: 0 (0 votes) · LW · GW

Just in case it's not clear from the above: an arbitrary complex-valued function on the real line has uncountably many degrees of freedom, since you can specify its value at each point independently.

A continuous function, however, has only countably many degrees of freedom: it is uniquely determined by its values on the rational numbers (or any dense set).
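A one-line sketch of why a dense set suffices: if f is continuous and x is any real number, pick rationals q_n converging to x; then

f(x) = f(lim_{n -> ∞} q_n) = lim_{n -> ∞} f(q_n)

so f(x) is recovered from the countably many values f(q_n).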

Comment by nick_hay2 on Thou Art Godshatter · 2007-11-14T01:59:02.000Z · score: 3 (3 votes) · LW · GW

Eliezer: poetic and informative. I like it.

Comment by nick_hay2 on Cached Thoughts · 2007-10-13T15:36:38.000Z · score: 3 (3 votes) · LW · GW

Felix: Yes, for example see http://en.wikipedia.org/wiki/NC_%28complexity%29

Comment by nick_hay2 on A Priori · 2007-10-10T21:44:06.000Z · score: 2 (2 votes) · LW · GW

Eliezer: "You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence."

Richard: "It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence"."

Eliezer: "If you view it as an argument, yes. The engines yield the same outputs."

Richard: "What does the latter have to do with rationality?"

Pure thought is something your brain does. If you consider having successfully derived a conclusion by pure thought to be evidence that the conclusion is correct, then you must consider the output of your brain (that is, your internal representation of the conclusion) valid evidence for that conclusion. Otherwise you have no reason to trust that your conclusion is correct, because the conclusion is exactly the output of your brain after reasoning.

If you consider your own brain as evidence, and someone else's brain works the same way, computing the same answers as yours, then observing their brain is the same as observing your brain, which is the same as observing your own thoughts. You could know abstractly that "Bob, upon contemplating X for 10 minutes, would consider it a priori true iff I would", perhaps from knowledge of how both of your brains compute whether something is a priori true. If you then found out that "Bob thinks X is a priori true", you could conclude that X is a priori true without having to think it through yourself: you know your output would be the same ("X is a priori true") without having to compute it.

Comment by nick_hay2 on Conservation of Expected Evidence · 2007-08-13T23:08:03.000Z · score: 2 (2 votes) · LW · GW

One reason is Cox's theorem, which shows that any quantitative measure of plausibility must obey the axioms of probability theory. Given that, this result, conservation of expected evidence, is a theorem.
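Concretely: by the law of total probability,

P(H) = P(H|E)P(E) + P(H|~E)P(~E)

which says the prior equals the expectation of the posterior; subtracting P(H) = P(H)(P(E) + P(~E)) from both sides gives the balance form 0 = (P(H|E) - P(H))P(E) + (P(H|~E) - P(H))P(~E).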

What is the "confidence level"? Why is 50% special here?

Comment by nick_hay2 on Conservation of Expected Evidence · 2007-08-13T21:55:16.000Z · score: 18 (17 votes) · LW · GW

Perhaps this formulation is nice:

0 = (P(H|E) - P(H))P(E) + (P(H|~E) - P(H))P(~E)

The expected change in probability is zero (for if you expected a change, you would have already changed).

Since P(E) and P(~E) are both positive, to maintain the balance, if P(H|E) - P(H) < 0 then P(H|~E) - P(H) > 0. If P(E) is large then P(~E) is small, so P(H|~E) - P(H) must be large in magnitude to counteract P(H|E) - P(H) and keep the balance.
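A quick numerical check of the balance identity, as a sketch in Python (the joint probabilities are arbitrary made-up numbers):

    # Conservation of expected evidence: numerical check.
    # Joint distribution over H and E (arbitrary values summing to 1).
    p_H_E = 0.2         # P(H, E)
    p_H_notE = 0.3      # P(H, ~E)
    p_notH_E = 0.4      # P(~H, E)
    p_notH_notE = 0.1   # P(~H, ~E)

    p_E = p_H_E + p_notH_E               # P(E) = 0.6
    p_notE = 1 - p_E                     # P(~E) = 0.4
    p_H = p_H_E + p_H_notE               # P(H) = 0.5
    p_H_given_E = p_H_E / p_E            # P(H|E) = 1/3
    p_H_given_notE = p_H_notE / p_notE   # P(H|~E) = 0.75

    balance = (p_H_given_E - p_H) * p_E + (p_H_given_notE - p_H) * p_notE
    print(balance)  # 0.0, up to floating-point error

Changing any of the joint values (keeping them summing to 1) leaves the balance at zero, since the identity is just the law of total probability rearranged.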

Comment by nick_hay2 on Chronophone Motivations · 2007-03-25T03:47:47.000Z · score: 1 (1 votes) · LW · GW

It seems the point of the exercise is to think of non-obvious cognitive strategies, ways of thinking, for improving things. The chronophone translation is both a tool for finding these strategies by induction and a rationality test to see whether the strategies are sufficiently unbiased and meta.

But what would I say? The strategy of searching for and correcting biases in thought, failures of rationality, would improve things. But I think I generated that suggestion by thinking of "good ideas to transmit", which isn't meta enough. Perhaps if I discussed various biases I was concerned about, and gave a stream-of-thought analysis of how to correct a particular bias (say, anthropomorphism), that would be invoking the strategy rather than referencing it, thus passing the filter. Hmmm.