Comments

Comment by GreedyAlgorithm on Three Worlds Decide (5/8) · 2009-02-03T21:30:00.000Z · LW · GW

The Informations told or implied to the Humans that they don't lie or withhold information. That is not the same as the Humans knowing that the Informations don't lie.

Comment by GreedyAlgorithm on Worse Than Random · 2008-11-12T20:28:11.000Z · LW · GW

Brian, you want an answer to the real-world situation? Easy. First assume you have a source of inputs that is not antagonistic, as discussed. Then measure which deterministic pivot-choice algorithms would work best on large samples of the inputs, and use the best. Median-of-three is a great pivot-choosing algorithm in practice, we've found. If your source of inputs is narrower than "whatever people anywhere using my ubergeneral sort utility will input" then you may be able to do better. For example, I regularly build DFAs from language data. Part of this process is a sort. I could implement this plan and possibly find that, I don't know, median of first, last, and about-twenty-percent-of-the-way-in is in general better. I anticipate the effort would not be worth the cost, so I don't, but there you are.

You don't have to put constraints on the input, you can just measure them (or guess well!). They're probably already there in real-world situations.
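For concreteness, here's a rough sketch of that measure-and-pick approach, assuming a Python setting; the pivot strategies, the quicksort, and the stand-in sample corpus are all illustrative placeholders rather than anything from the original comment.

```python
import random
import time

def first_element(a, lo, hi):
    # Naive deterministic pivot: always the first element of the range.
    return lo

def median_of_three(a, lo, hi):
    # Index of the median of the first, middle, and last elements.
    mid = (lo + hi) // 2
    return sorted([lo, mid, hi], key=lambda i: a[i])[1]

def quicksort(a, choose_pivot, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    p = choose_pivot(a, lo, hi)
    a[lo], a[p] = a[p], a[lo]
    pivot, i = a[lo], lo
    for j in range(lo + 1, hi + 1):      # Lomuto partition
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[i] = a[i], a[lo]
    quicksort(a, choose_pivot, lo, i - 1)
    quicksort(a, choose_pivot, i + 1, hi)

def benchmark(strategy, samples):
    start = time.perf_counter()
    for s in samples:
        quicksort(list(s), strategy)
    return time.perf_counter() - start

# Stand-in for "large samples of the inputs": short, already-sorted lists,
# the kind of structured input these comments are concerned with.
samples = [sorted(random.sample(range(10_000), 400)) for _ in range(50)]
for strategy in (first_element, median_of_three):
    print(strategy.__name__, benchmark(strategy, samples))
```

Swapping in other candidate strategies (or a corpus drawn from your actual workload) is the whole idea: let the measurement, not intuition, pick the winner.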

Comment by GreedyAlgorithm on Worse Than Random · 2008-11-11T23:26:35.000Z · LW · GW

Brian, the reason we do that is to keep the quicksort algorithm from stupidly choosing the worst-case pivot every time. The naive deterministic choices of pivot (like "pick the first element") do poorly on many permutations of the input which are far more probable than 1/n! because of the types of inputs people give to sorting algorithms, namely already or nearly-already sorted input. Picking the middle element does better because inputs sorted inside to outside are rarer, but those are still far more likely than 1/n! apiece. Picking a random element is a very easy way to say "hey, any simple algorithm I think up will do things that correlate with algorithms other people think up, and so will hit worst-case running times more often than I'd like, so I'll avoid correlation with other people".

There are variants of quicksort that avoid the quadratic worst case entirely by choosing the true median of the list each time. They incur an extra cost that makes the average case worse, though, and they're usually not the best choice: we're almost always not trying to avoid the worst case of a single run; we're actually trying to make the average case faster.
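A minimal sketch of the random-pivot idea, using the same `choose_pivot(a, lo, hi)` interface as the benchmark sketch above; the names here are my own.

```python
import random

def random_pivot(a, lo, hi):
    # Decorrelates the pivot choice from whatever structure (e.g.
    # already-sorted runs) the incoming data happens to have, so no fixed
    # family of inputs can reliably trigger the quadratic worst case.
    return random.randint(lo, hi)

# A true-median pivot (e.g. via the median-of-medians selection algorithm)
# removes the quadratic worst case entirely, but its constant factors
# usually make the average case slower than random or median-of-three.
```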

Comment by GreedyAlgorithm on Principles of Disagreement · 2008-06-02T21:04:07.000Z · LW · GW

If they're both about equally likely to reason as well, I'd say Eliezer's portion should be p * $20, where ln(p/(1-p))=(1.0*ln(0.2/0.8)+1.0*ln(0.85/0.15))/(1.0+1.0)=0.174 ==> p=0.543. That's $10.87, and he owes NB merely fifty-six cents.

Amusingly, if it's mere coincidence that the actual split was 3:4 and in fact they split according to this scheme, then the implication is that we are trusting Eliezer's estimate 86.4% as much as NB's.
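A quick sketch of that arithmetic, in case it helps; the weighted log-odds pooling below is just my reading of the formula in the comment, and 0.864 is the weight mentioned in the second paragraph.

```python
from math import exp, log

def pool_log_odds(estimates):
    # estimates: list of (probability, weight) pairs; returns the probability
    # whose log-odds is the weighted average of the individual log-odds.
    total = sum(w * log(p / (1 - p)) for p, w in estimates)
    weight = sum(w for _, w in estimates)
    return 1 / (1 + exp(-total / weight))

p = pool_log_odds([(0.2, 1.0), (0.85, 1.0)])
print(p, p * 20)   # ~0.543 and ~$10.87, matching the comment

# Down-weighting the 0.2 estimate to 0.864 reproduces the actual 3:4 split:
print(pool_log_odds([(0.2, 0.864), (0.85, 1.0)]) * 20)   # ~$11.43, i.e. 4/7 of $20
```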

Comment by GreedyAlgorithm on Einstein's Speed · 2008-05-21T17:46:37.000Z · LW · GW

"But sometimes experiments are costly, and sometimes we prefer to get there first... so you might consider trying to train yourself in reasoning on scanty evidence, preferably in cases where you will later find out if you were right or wrong. Trying to beat low-capitalization prediction markets might make for good training in this? - though that is only speculation."

Zendo, an inductive reasoning game, is the best tool I know of to practice reasoning on scanty evidence in cases where you'll find out if you were right or wrong. My view of the game: one player at a time takes the role of "reality", represented by a single rule classifying all allowed things into two categories. The other players, based on a steadily growing body of examples of correctly classified things and the fact that the other player made up the rule, attempt to be the first to determine the rule. This is fundamentally different from deduction games, which traditionally have small hypothesis spaces (Clue - 324, Mystery of the Abbey - 24, Mastermind - I've seen 6561), with each hypothesis being initially equiprobable.

I've seen variants that can be played online with letters or numbers instead of pyramids, but frankly they're not nearly as fun.

Comment by GreedyAlgorithm on Configurations and Amplitude · 2008-04-22T16:46:39.000Z · LW · GW

Here's what I was missing: the magnitudes of the amplitudes need to decrease when changing from one possible state to more than one. In drawing-on-2D terms, a small amount of dark pencil must change to a large amount of lighter pencil, not a large amount of equally dark pencil. So here's what actually occurs (I think):

A photon is coming toward E: (-1, 0)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from E to A: (-1/sqrt(2), 0)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from A to B: (0, -1/2)
A photon is coming from A to C: (-1/2, 0)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from B to D: (1/2, 0)
A photon is coming from C to D: (0, -1/2)

A photon is coming from E to 1: (0, -1/sqrt(2))
A photon is coming from D to X: (0, 1/(2 sqrt(2))) + (0, -1/(2 sqrt(2))) = (0, 0)
A photon is coming from D to 2: (1/(2 sqrt(2)), 0) + (1/(2 sqrt(2)), 0) = (1/sqrt(2), 0)

Detector 1 hits 1/2 of the time and detector 2 hits 1/2 of the time.
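Here's a small sketch that reproduces the bookkeeping above with Python's complex numbers, assuming the convention (as I read the comment) that a half-silvered mirror multiplies the amplitude by 1/sqrt(2) for the straight-through component and by i/sqrt(2) for the deflected component, while a full mirror multiplies by i.

```python
from math import sqrt

STRAIGHT = 1 / sqrt(2)      # half-silvered mirror, transmitted component
DEFLECT = 1j / sqrt(2)      # half-silvered mirror, deflected component
FULL = 1j                   # full mirror

start = -1 + 0j             # "A photon is coming toward E: (-1, 0)"

# Half-silvered mirror E: one component toward detector 1, one toward A.
to_det1 = start * DEFLECT
to_A = start * STRAIGHT

# Half-silvered mirror A, then full mirrors B and C.
to_D_via_B = to_A * DEFLECT * FULL
to_D_via_C = to_A * STRAIGHT * FULL

# Half-silvered mirror D: the two paths recombine toward X and detector 2.
to_X = to_D_via_B * DEFLECT + to_D_via_C * STRAIGHT
to_det2 = to_D_via_B * STRAIGHT + to_D_via_C * DEFLECT

print(abs(to_det1) ** 2)    # 0.5: detector 1 fires half the time
print(abs(to_X) ** 2)       # 0.0: X never fires
print(abs(to_det2) ** 2)    # 0.5: detector 2 fires half the time
```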

Comment by GreedyAlgorithm on Configurations and Amplitude · 2008-04-19T01:32:32.000Z · LW · GW

Okay, what happens in this situation: Take figure 2. The arrow coming in from the left? Replace it with figure 1, with its mirror relabeled E and detector 2 removed (replaced with figure 2). And lengthen the distance to detector 1 so that it's equal to the total distance to detector 2 in figure 2. And I guess call the detector 1 in figure 2 "X" for "we know you won't be getting any amplitude". Now what? Here's what I get...

A photon is coming toward E: (-1, 0)

A photon is coming from E to 1: (0, -1)
A photon is coming from E to A: (-1, 0)

A photon is coming from E to 1: (0, -1)
A photon is coming from A to B: (0, -1)
A photon is coming from A to C: (-1, 0)

A photon is coming from E to 1: (0, -1)
A photon is coming from B to D: (1, 0)
A photon is coming from C to D: (0, -1)

A photon is coming from E to 1: (0, -1)
A photon is coming from D to X: (0, 1) + (0, -1) = (0, 0)
A photon is coming from D to 2: (1, 0) + (1, 0) = (2, 0)

From this I conclude that detector 1 will register a hit 1/5 of the time and detector 2 will register a hit 4/5 of the time. Is that correct?

Comment by GreedyAlgorithm on The Generalized Anti-Zombie Principle · 2008-04-06T17:27:30.000Z · LW · GW

The only way I can see p-zombieness affecting our world is if

a) we decide we are ethically bound to make epiphenomenal consciousnesses happier, better, whatever;
b) our amazing grasp of physics and how the universe exists leads our priors to indicate that even though it's impossible to ever detect them, epiphenomenal consciousnesses are likely to exist; and
c) it turns out doing this rather than that gives the epiphenomenal consciousnesses enough utility that it is ethical to help them out.

Comment by GreedyAlgorithm on Typicality and Asymmetrical Similarity · 2008-02-07T19:04:02.000Z · LW · GW

Lee,

I'd assume we can do other experiments to find this out... maybe they've been done? Instead of {98,100}, try all pairs of two numbers from 90-110 or something?

Comment by GreedyAlgorithm on Something to Protect · 2008-01-31T08:08:17.000Z · LW · GW

Anon, Wendy:

Certainly finding out all of the facts that you can is good. But rationality has to work no matter how many facts you have. If the only thing you know is that you have two options:

  1. Save 400 lives, with certainty.
  2. Save 500 lives, 90% probability; save no lives, 10% probability.

then you should take option 2. Yes, more information might change your choice. Obviously. And not interesting. The point is that given this information, rationality picks choice 2.

Comment by GreedyAlgorithm on Circular Altruism · 2008-01-22T20:24:55.000Z · LW · GW

Can someone please post a link to a paper on mathematics, philosophy, anything, that explains why there's this huge disconnect between "one-off choices" and "choices over repeated trials"? Lee?

Here's the way across the philosophical "chasm": write down the utility of the possible outcomes of your action. Use probability to find the expected utility. Do it for all your actions. Notice that if you have incoherent preferences, after a while, you expect your utility to be lower than if you do not have incoherent preferences.

You might have a point if there existed a preference effector with incoherent preferences that could only ever effect one preference. I haven't thought a lot about that one. But since your incoherent preferences will show up in lots of decisions, I don't care if this specific decision will be "repeated" (note: none are ever really repeated exactly) or not. The point is that you'll just keep losing those pennies every time you make a decision.

  1. Save 400 lives, with certainty.
  2. Save 500 lives, with 90% probability; save no lives, 10% probability.

What are the outcomes? U(400 alive, 100 dead, I chose choice 1) = A, U(500 alive, 0 dead, I chose choice 2) = B, and U(0 alive, 500 dead, I chose choice 2) = C.

Remember that probability is a measure of what we don't know: the plausibility that a given situation is (or will be) the case. If 1.0*A > 0.9*B + 0.1*C, then I prefer choice 1. Otherwise 2. Can you tell me what's left out here, or thrown in that shouldn't be? Which part of this do you have a disagreement with?
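A minimal sketch of that comparison; the numeric utilities are made-up placeholders just to show the rule, not anything from the comment.

```python
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

# Placeholder utilities for the three outcomes named above.
A = 400.0   # U(400 alive, 100 dead, I chose choice 1)
B = 500.0   # U(500 alive, 0 dead, I chose choice 2)
C = 0.0     # U(0 alive, 500 dead, I chose choice 2)

choice_1 = expected_utility([(1.0, A)])            # 1.0*A
choice_2 = expected_utility([(0.9, B), (0.1, C)])  # 0.9*B + 0.1*C

print("choice 1" if choice_1 > choice_2 else "choice 2")  # choice 2 here: 400 < 450
```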

Comment by GreedyAlgorithm on Zut Allais! · 2008-01-20T17:03:22.000Z · LW · GW

Long run? What? Which exactly equivalent random events are you going to experience more than once? And if the events are only really close to equivalent, how do you justify saying that 30 one-time shots at completely different ways of gaining 1 utility unit is a fundamentally different thing than a nearly-exactly-repeated game where you have 30 chances to gain 1 utility unit each time?

Comment by GreedyAlgorithm on The Allais Paradox · 2008-01-19T11:02:34.000Z · LW · GW

I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.

Comment by GreedyAlgorithm on To Lead, You Must Stand Up · 2007-12-30T01:13:40.000Z · LW · GW

Tom: What actually happens under your scenario is that the naive human rationalists frantically try to undo their work when they realize that the optimization processes keep reprogramming themselves to adopt the mistaken beliefs that are easiest to correct. :D

Comment by GreedyAlgorithm on Reversed Stupidity Is Not Intelligence · 2007-12-13T19:30:47.000Z · LW · GW

Caledonian: please define meta-evidence, then, since I think Eliezer has adequately defined evidence. Clear up our confusion!

Comment by GreedyAlgorithm on Fake Morality · 2007-11-09T01:42:38.000Z · LW · GW

Selfreferencing: unfortunately there is an enormous gulf between "most theists" and "theistic philosophers". If you don't believe this then you need to get out more; try the U.S. South, for instance. It might be irritating that most theists are not as enlightened as you are, but it is a fact, not a caricature.

I'm pretty sure, for example, that almost everyone I grew up with believes what a divine command theorist believes. And now that I look back at the OP and your comment, I notice that in the former Eliezer continually says "religious fundamentalists" and in the latter you continually say "theistic philosophers", so maybe you already recognize this.

Comment by GreedyAlgorithm on The Wonder of Evolution · 2007-11-03T08:06:19.000Z · LW · GW

To stay unbiased about all of the commenters here, do not visit this link and search the page for names. (sorry, but - wait no, not sorry)

So it seems to me that the smaller you can make a quine in some system, with the property that small changes to it make it produce nearly itself as output, the more likely that system is to produce replicating, evolution-capable things. Or something; I'm making this up as I go along. Is this concept sensical? Is there a computationally feasible way to test anything about it? Has it been discussed over and over?
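For readers who haven't met the term: a quine is a program that prints its own source. A minimal Python example, my own illustration rather than anything from the comment:

```python
# The two lines below print themselves exactly, making them a quine.
s = 's = %r\nprint(s %% s)'
print(s % s)
```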

Maybe we can do far better than evolution, but if we could design a good parallelizable "evolution-friendly" environment and see whether organisms develop, that'd still be phenomenal.

Comment by GreedyAlgorithm on Hold Off On Proposing Solutions · 2007-10-17T22:45:03.000Z · LW · GW

"My Ap distribution is rather flat."

Hm, MADIRF? :)

Comment by GreedyAlgorithm on A Priori · 2007-10-08T22:09:47.000Z · LW · GW

Something feels off about this to me. Now I have to figure out if it's because fiction feels stranger than reality or because I am not confronting a weak point in my existing beliefs. How do we tell the difference between the two before figuring out which is happening? Obviously afterward it will be clear, but post-hoc isn't actually helpful. It may be enough that I get to the point where I consider the question.

On further reflection I think it may be that I identify a priori truths with propositions that any conceivable entity would assign a high plausibility value given enough thought. I think I'm saying "in the limit, experience-invariant" rather than "non-experiential". I believe that some things, like 2+2=4, are experience-invariant: in every universe I can imagine, an entity who knows enough about it should conclude that 2+2=4. Perhaps my imagination is deficient, though. :)

Comment by GreedyAlgorithm on Avoiding Your Belief's Real Weak Points · 2007-10-05T07:38:00.000Z · LW · GW

Ha, this just happened to me. Luckily it wasn't too painful because I knew the weakness existed, I avoided it, and then reading E. T. Jaynes' "Probability Theory: The Logic of Science" gave me a different and much better belief to patch up my old one. Also, thanks for that recommendation. A lot.

For a while I had been what I called a Bayesian because I thought the frequentist position was incoherent and the Bayesian position elegant. But I couldn't resolve to my satisfaction the problem of scale parameters. I read that there was a prior that was invariant with respect to them but something kept bothering me.

It turns out that my intuition of probability was still "there is a magic number I call probability inherent in objects and what they might do". So when I saw the question "What is the probability that a glass has water:wine in a ratio of 1.5:1 or less, given that it has water:wine in a ratio between 1:1 and 2:1?" I was still thinking something along the lines of "Well, consider all possible glasses of watered wine, and maybe weight them in some way, and I'll get a probability..."

Jaynes has convinced me that the right way to think about probability is plausibility of situations given states of knowledge. There's nothing wrong with insisting that a prior be set up for any given problem; it's incoherent to set up a problem without looking at the priors. They aren't just useful, they're necessary, and anyone who says it's cheating to push the difficulty of an inductive reasoning problem onto the difficulty of determining real-world priors can be dismissed.
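To make the point concrete, here's a small sketch of the water:wine question under two different "uniform" priors; this is my illustration of the standard paradox, not something from the comment.

```python
from fractions import Fraction

# Prior 1: the water:wine ratio r is uniform on [1, 2].
# P(r <= 1.5 | 1 <= r <= 2) = (1.5 - 1) / (2 - 1)
p_uniform_in_ratio = (Fraction(3, 2) - 1) / (2 - 1)

# Prior 2: the wine:water ratio s = 1/r is uniform on [1/2, 1].
# r <= 1.5 is the same event as s >= 2/3, so P = (1 - 2/3) / (1 - 1/2)
p_uniform_in_inverse = (1 - Fraction(2, 3)) / (1 - Fraction(1, 2))

print(p_uniform_in_ratio)    # 1/2
print(p_uniform_in_inverse)  # 2/3
# Same evidence, two "natural" priors, two answers: the question is
# underdetermined until a prior (a state of knowledge) is specified.
```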

If only I'd asked around about this problem before, maybe I would have discovered meta-Jaynes earlier! Speaking of that, why haven't I seen his stuff or things building on it before? I feel like saying that 99% of people miss its importance says more about my importance assignment than their seeming apathy.

Comment by GreedyAlgorithm on Update Yourself Incrementally · 2007-08-14T16:59:08.000Z · LW · GW

Matthew C:

I don't understand why the Million Dollar Challenge hasn't been won. I've spent some time in the JREF forums, and as far as I can see the challenge is genuine and should be easily winnable by anyone with powers you accept. The remote viewing, for instance, that I see on your blog: that's trivial to turn into a good protocol. Why doesn't someone just go ahead and prove these things exist? It'd be good for everyone involved.

I see you say: "But for the far larger community of psi deniers who have not read the literature of evidence for psi, and get all your information from the Shermers and Randis of the world, I have a simple message: you are uninformed." So obviously you think that either Randi has bad information or is deliberately sharing bad information. That's fine. If the Challenge is set up correctly, it shouldn't matter what Randi does or does not believe/know/whatever. I can only conclude there is at least one serious flaw in the Challenge. Could you tell me what it is?