Comments

Comment by DonGeddis on Omicron: My Current Model · 2021-12-30T17:31:19.435Z · LW · GW

I had the same reaction as Elizabeth.  The data I've seen suggests that the key variable is "time since last dose".  Vaccines protect against severe disease and death very well, possibly for years.  But protection against infection specifically appears to peak about a month after your last dose, and drop to (around) zero about six months after your last dose.

Are you sure you're not confusing a time sequence here with quantity or quality?  Your sentence suggests that there is something "different" about getting a booster (but it's the same physical entity as the first two doses!).  And even now, you say "three is better than a fresh two".  Do you have a reference for that, in particular one that distinguishes recency from quantity?

To be concrete, I would strongly suspect that, six months after these latest boosters, you AGAIN have very little protection against infection.

This chart was from before omicron (Aug 2021), but I'm not aware of any major changes in the data: https://www.medrxiv.org/content/medrxiv/early/2021/08/27/2021.08.25.21262584/F2.large.jpg

(From: https://www.medrxiv.org/content/10.1101/2021.08.25.21262584v1.full )

Comment by DonGeddis on Living in an Inadequate World · 2017-11-12T18:51:24.795Z · LW · GW

"But raising nominal prices is economically useless" No, that's not true. Raising nominal prices helps with sticky wages & debts. Failing to raise nominal prices causes recession, unemployment, and bankruptcies.

"the healthy inflation" That phrase doesn't refer to anything. "Healthy" isn't a modifier that applies to "inflation". There is only one single thing: the change in the overall price level. There aren't "healthy" and "non-healthy" versions of that one thing.

"One could add 000 behind each prices number" No, you're likely thinking of a different hypothetical, something like an overnight currency devaluation. In those cases, wage and debt contracts are simultaneously converted from the "old peso" into the "new peso". That's a very, very different macroeconomic event. "Printing money", on the other hand, changes the prices of goods ... but sticky wage and debt contracts are unaffected (and thus devalued). It is exactly the fact that only some but not all prices are changed by central bank money printing, that causes "raising nominal prices" to have an effect on the real economy. (Exactly as you suggest, if all prices changed simultaneously, the real economy would be unaffected. It's important to understand that central bank money printing is very different.)

Comment by DonGeddis on Living in an Inadequate World · 2017-11-12T18:45:04.726Z · LW · GW

You mostly seem to be noticing that there is a difference between the nominal economy (the numbers on the prices) and the real economy (what resources you can buy with currency). That's certainly true - but actually beside the point. Because the point is that inflation (actually, NGDP) below trend causes "business-cycle" recessions. The reason is that many contracts (wages, debts) are written in nominal terms, and so when the value of the Unit of Account (function of money) changes in an unexpected way, these fixed nominal contracts don't adjust quickly enough. The result is disequilibrium, rising unemployment and bankruptcies, etc. The fix is to keep nominal prices rising on trend.

Having exchange rates fall, or gold or asset prices rise, is an independent thing. It only matters if the currency begins to be abandoned (as you suggest with dollar prices in Russia), and especially if wage and debt contracts begin to be written in a different Unit of Account. It is stability in the value of the Unit of Account which affects macroeconomic stability.

Summary: "printing money could increase prices" is the whole point, and it doesn't matter if "prices fall in a harder currency" or there is "deflation in gold and bitcoin prices". As long as the local currency is the Unit of Account, then changes in the value of the local currency (aka local currency aggregate demand) are what matter.

Comment by DonGeddis on Undiscriminating Skepticism · 2010-04-14T19:10:41.370Z · LW · GW

There is zero net evidence that IQ correlates with skin tone.

That's not true at all. There is overwhelming evidence that performance on IQ tests is hugely correlated with "race", which basically implies skin tone. Blacks, as a group, score 10-15 points below whites (almost a standard deviation), and (some) Asians and Jews are about half a deviation above whites.

The controversy is not whether there is correlation. The controversy is over the causal explanation. How much of this observed difference is due to genetics, how much due to environment, and how much due to the structure of standard IQ tests?

Mainstream science either holds that there is no genetic component or that the question is unresolved.

Just to clarify: the question is whether there is a genetic component to the observed difference in black/white (and other racial) group IQ scores.

There is clearly a genetic component to individual IQ scores.

This varies based on wealth. Among poor/impoverished peoples, variance in IQ scores is something like 60-90% due to environmental factors (like nutrition). Among wealthy peoples, 60-70% seems to be genetic.

The usual analogy is the height of growing corn. In nutrient-poor dirt, corn height is mostly a function of how much fertilizer/water/sun the plants get. But in well-tended farms, corn stalk height is almost completely a function of inherited genetics.

Comment by DonGeddis on Open Thread: April 2010 · 2010-04-05T20:14:43.785Z · LW · GW

A "Jedi"? Obi-Wan Kenobi?

I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.

Comment by DonGeddis on Let There Be Light · 2010-03-19T23:27:32.899Z · LW · GW

Bostrom and Sandberg (in your linked paper) suggest three reasons why we might want to change the design that evolution gave us:

  • Changed tradeoffs. We no longer live in the ancestral environment.
  • Value discordance. Evolution's goal may not match our own.
  • Evolutionary restrictions. We might have tools that were not available to evolution.

On #2, I'll note that evolution designed humans as temporary vessels, for the goal of propagating genes. Not, for example, for the goal of making you happy. You may prefer to hijack evolution's design, in service of your own goals, rather than in service of your genes' reproduction.

Lots of evolution's adaptations (including many of the biases we discuss) are good for the propagation of the genes, at the cost of being bad for the individual human who suffers the bias. A self-aware human may choose to reverse that tradeoff.

Comment by DonGeddis on Living Luminously · 2010-03-17T20:26:07.487Z · LW · GW

introspection can't be scientific by definition

What you observe via introspection is not accessible to third parties, yes.

But you use those observations to build models of yourself. Those models can be made explicit and communicated to others. And they make predictions about your future behavior, so they can be tested.

Comment by DonGeddis on Undiscriminating Skepticism · 2010-03-17T19:01:57.337Z · LW · GW

Most people's moral gut reactions say that humans are very important, and everything else much less so. This argument is easier to make "objective" if humans are the only things with everlasting souls.

Once you get rid of souls, making the argument that humans have some special moral place in the world becomes much more difficult. It's probably an argument that is beyond the reach of the average person. After all, in the space of "things that one can construct out of atoms", humans and goldfish are very, very close.

Comment by DonGeddis on Undiscriminating Skepticism · 2010-03-17T17:56:29.426Z · LW · GW

Is an abortion an "ok if regrettable practice?" You've just assumed the answer is always yes, under any circumstances.

Sorry, you have a point that my test won't apply to every rationalist.

The contrast I meant was: if you look at the world population, and ask how many people believe in atheism, materialism, and that abortion is not morally wrong, you'll find a significant minority. (Perhaps you yourself are not in that group.)

But if you then try to add "believes that infanticide is not morally wrong", your subpopulation will drop to basically zero.

But, rationally, the gap between the first three beliefs, and the last one, is relatively small. Purely on the basis of rationality, you ought to expect a smaller dropoff than we in fact see. Hence, most people in the first group are avoiding the repugnant conclusion for non-rational reasons. (Or believing in the first three, for non-rational reasons.)

If you personally don't agree with the first three premises, then perhaps this test isn't accurate for you.

Comment by DonGeddis on Undiscriminating Skepticism · 2010-03-17T17:49:19.921Z · LW · GW

Your parenthetical comment is the funniest thing I've read all day! The contrast with the seriousness of the subject matter is exquisite. (You're of course right about the marginal cases thing too.)

Comment by DonGeddis on Undiscriminating Skepticism · 2010-03-16T22:06:06.886Z · LW · GW

Proposed litmus test: infanticide.

General cultural norms label this practice as horrific, and most people's gut reactions concur. But a good chunk of rationality is separating emotions from logic. Once you've used atheism to eliminate a soul, and humans are "just" meat machines, and abortion is an ok if perhaps regrettable practice ... well, scientifically, there just isn't all that much difference between a fetus a couple months before birth, and an infant a couple of months after.

This doesn't argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens). Don't unnecessarily cause them to suffer, but on the other hand you can choose to euthanize your own, if you wish, with no criminal consequences.

Get one of your friends who claims to be a rationalist. See if they can argue passionately in favor of infanticide.

Comment by DonGeddis on Undiscriminating Skepticism · 2010-03-15T23:55:17.937Z · LW · GW

Do you have the same opinion about gender-linked "genetically-based behavioral variation"?

Not to open a can of worms here, but the pickup-artist (PUA) community is all about how the innate behavior of (generally heterosexual) men and women differ, in dating scenarios. And, in particular, how those real behaviors differ from the behavior that is taught and reinforced by society and culture.

You can have an opinion that all behavior is changeable, and that it is shaped by society and culture. But that would lead you to one model of how men and women act during dating. (In particular, to a mostly gender-neutral model.) The PUA community has a different model of human dating behavior ... and I would say that theirs is a good deal more accurate at predicting actual observed behavior in the field.

Comment by DonGeddis on The Importance of Goodhart's Law · 2010-03-15T17:34:57.802Z · LW · GW

You are correct that not all activity recorded in GDP is welfare-enhancing. (Note that GDP also underreports some positive welfare activities.)

But that's not the important point. The important point is: does the difference between the GDP measure, and some more accurate measure, have any implications for economic policy? The answer seems to be no: attempts have been made to define more precise measures of national welfare, and the result seems to be that they track GDP very closely, and that there is basically no implication for policy decisions.

(Perhaps try an analogy? To oversimplify, in dermatology there are millions of different infections you might get on your skin, but roughly speaking only a small handful of possible treatments. A good doctor doesn't attempt expensive diagnostics to figure out exactly what you have; the point is only to distinguish what general class of infection you have, so that the correct treatment can be applied from the small set of possibilities. Similarly, GDP is easy to measure, and results in pretty much the correct policy suggestions. Anecdotes of how it is not perfect are only important insofar as they would imply different policy choices, which they usually don't.)

Comment by DonGeddis on The Importance of Goodhart's Law · 2010-03-14T17:40:01.823Z · LW · GW

It's true that GDP is not identical to national welfare. And you can come up with anecdotes where some welfare measure isn't fully captured by GDP (both positive and negative).

But GDP is useful, because it is very hard to game. The examples in your "fetishism" link are very weak. Unlike the nails example, where we can all agree that the factory made the wrong choice for society, it is far from clear that the GDP examples resulted in the wrong policy, even if GDP is only an approximation for welfare.

GDP is not a good example of Goodhart's Law. It's nothing at all like the (broken) correlation between inflation and unemployment, which varies widely depending on policy choices.

Comment by DonGeddis on Open Thread: February 2010, part 2 · 2010-02-17T04:18:00.830Z · LW · GW

"SA"?

Comment by DonGeddis on A survey of anti-cryonics writing · 2010-02-08T05:20:30.518Z · LW · GW

"Pseudoscience" isn't the only possible criticism of cryonics. One could believe that it may be scientifically possible in theory, still without thinking that it's a good idea to sign up for cryonics in the present day. (Basically, by coming up with something like a Drake equation for the chance of it working out positively for a current-day human, and then estimating the probability of the terms to be very low.)

You're right that most of the popular criticism of cryonics is mere non-technical mocking. Still, there's a place for reasoned objections as well.

Comment by DonGeddis on Open Thread: February 2010 · 2010-02-01T23:52:33.205Z · LW · GW

With a straightforward interpretation of your question, I'd answer "95%".

But since you made special mention of being "sneaky", I'll assume you've attempted to trick me into misunderstanding the question, and so I'll lower my probability estimate to 75%, with the missing twenty points accounting for the chance that your phrasing has tricked me.

Comment by DonGeddis on Complexity of Value ≠ Complexity of Outcome · 2010-01-31T20:07:11.344Z · LW · GW

I, for one, am interested in hearing arguments against anti-realism.

If you don't have personal interest in writing up a sketch, that's fine. Might you have some links to other people who have already done so?

Comment by DonGeddis on The things we know that we know ain't so · 2010-01-13T00:21:13.046Z · LW · GW

It's true that climate is too complex to predict well. Still, I haven't heard many global warming worriers warn about the threat of a new ice age. It's all about the world actually becoming warmer.

Given that, the real problem seems to be the speed. If it took 1000 years for temperatures to rise 5 degrees, that might not be so bad. If it's 50 years, the necessary adjustments (to both humans and non-humans) might only happen with massive die-off.

But leaving aside the speed, it's not insane to notice that there is vastly more biodiversity in the tropics than in the Arctic. If you were designing a planet for humans to live on, a little warmer is a whole lot better than a little colder.

This doesn't mean that global warming is "good". But you shouldn't dismiss the positive changes out of hand, when evaluating the future pros and cons.

Comment by DonGeddis on The things we know that we know ain't so · 2010-01-13T00:12:45.265Z · LW · GW

There are (bad) interpretations of QM in which "observer" really does mean a conscious observer. This objection is very close to saying that MWI (many-worlds) is "right", and the others are "wrong".

That may be the case, but it is far from universally acknowledged among practicing physicists. So, it's a bit unfair to suggest this "error", given that many prominent (but wrong) physicists would not agree that it is an error.

Comment by DonGeddis on Reference class of the unclassreferenceable · 2010-01-12T00:18:18.626Z · LW · GW

But the "inside view" bias is not amenable to being repaired, just by being aware of the bias. In other words, yes, the suggestion is that the direct arguments are optimistically biased. But no, that doesn't mean that anybody expects to be able to identify specific flaws in the direct arguments.

As to what those flaws are ... generally, they arise from failing to even imagine some event which is in fact possible. So your question to identify the flaws is basically the same as, "what possible relevant events have you not yet thought of?"

Tough question to answer...

Comment by DonGeddis on Reference class of the unclassreferenceable · 2010-01-12T00:13:56.199Z · LW · GW

Already done: JustFuckingGoogleIt

Comment by DonGeddis on On the Power of Intelligence and Rationality · 2009-12-26T05:10:04.800Z · LW · GW

Roughly on the same topic, a few years ago I read Intelligence in War by John Keegan. I was expecting a glorification of that attribute which I believed to be so important; to read story after story of how proper intelligence made the critical difference during military battles.

Much to my surprise, Keegan spends the whole book basically shooting down that theory. Instead, he has example after example where one side clearly had a dominant intelligence advantage (admittedly, here we're talking about "information", not strictly "rationality"), but it always wound up being a mere minor factor in the outcome of the battle.

Definitely worth checking out, if you're at all interested in the power (or lack thereof) of being smarter, rather than all the other factors that determine the outcome of military battles.

Comment by DonGeddis on Open Thread: October 2009 · 2009-10-02T20:07:19.189Z · LW · GW

Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us -- and how that compares to the current cost of signing up -- one thing that needs to be considered is whether your head will actually make it to that future time.

Ted Williams seems to be having a tough time of it.

Comment by DonGeddis on Beware of WEIRD psychological samples · 2009-09-17T01:52:14.666Z · LW · GW

It's hard to discuss the subject without the debate becoming emotional, but let me just say that Roissy's goals are to be an entertaining writer, to succeed at picking up women, and to debunk false commonsense notions of dating, through real-life experience.

He's not trying to submit a peer-reviewed paper on evo psych to a rationality audience. To judge him on that basis is to kind of miss the point.

(Ethics is a whole separate question. But then, Stalin was an atheist too, wasn't he?)

Comment by DonGeddis on The Absent-Minded Driver · 2009-09-16T23:57:33.916Z · LW · GW

Rather than using a PRNG (which, as you say, requires memory), you could use a source of actual randomness (e.g. quantum decay). Then you don't really have extra memory with the randomized algorithm, do you?

Comment by DonGeddis on Open Thread: September 2009 · 2009-09-02T00:01:34.130Z · LW · GW

Forget about whether your sandbox is a realistic enough test. There are even questions about how much safety you're getting from a sandbox. So, we follow your advice, and put the AI in a box in order to test it. And then it escapes anyway, during the test.

That doesn't seem like a reliable plan.

Comment by DonGeddis on How inevitable was modern human civilization - data · 2009-08-21T16:59:35.525Z · LW · GW

Re: abiogenesis. You say:

we know of no mechanism under which creation of life seems even remotely plausible.

For a plausible mechanism, see this video. (It starts with anti-creationism stuff; skip to 2:45 to watch the science.)

Comment by DonGeddis on Misleading the witness · 2009-08-10T22:29:16.332Z · LW · GW

Exactly! This is gambling, isn't it? A small expected loss, with a tiny chance of some huge gain.

If your utility for money really is so disproportionate to the actual dollar value, then you probably ought to take a trip to Las Vegas and lay down a few long-odds bets. You'll almost certainly lose your betting money (but you wouldn't "notice it in [your] monthly finances"), while there's some (small) chance that you get lucky and "change [your] month considerably".

It's not hypothetical! You can do this in the real world! Go to Vegas right now.

(If the plane flight is bothering you, I'm sure we could locate some similar online betting opportunities.)
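As a concrete (hypothetical) illustration of "small expected loss, tiny chance of a huge gain", here is a quick expected-value calculation for a single-number bet in American roulette (win probability 1/38, payout 35:1); the stake is just an example number.

```python
# Expected value of a long-odds bet: a single-number American roulette wager.
# Win probability 1/38 with a 35:1 payout; otherwise the stake is lost.

stake = 10.0              # dollars per spin (illustrative)
p_win = 1 / 38
payout_multiple = 35

ev_per_bet = p_win * payout_multiple * stake - (1 - p_win) * stake
print(f"Expected value per {stake:.0f}-dollar bet: {ev_per_bet:+.2f} dollars")
# Roughly -0.53 dollars per 10-dollar bet: a small expected loss, with a tiny
# chance of a 35x gain -- exactly the structure described above.
```

The expected loss is about 5.3% of the stake per bet, which you would barely notice at small stakes, while the rare win is large enough to "change your month considerably".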

Comment by DonGeddis on Would Your Real Preferences Please Stand Up? · 2009-08-09T18:50:56.781Z · LW · GW

I think there's also a short-term/long-term thing going on with your examples. The drunk really wants to drink in the moment; they just don't enjoy living with the consequences later. Similarly, in the moment, you really do want to continue reading Reddit; it's only hours or days later that you wish you had also managed to complete that other project which was your responsibility.

I bet there's something going on here about maximizing integrated lifetime happiness vs. in-the-moment decision-making, possibly with steep discounting of the future selves who will suffer the negative effects.

Comment by DonGeddis on Open Thread: August 2009 · 2009-08-08T00:44:31.337Z · LW · GW

I'm curious if Eliezer (or anyone else) has anything more to say about where the Born Probabilities come from. In that post, Eliezer wrote:

But what does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? [...] I don't know. It's an open problem. Try not to go funny in the head about it.

Fair enough. But around the same time, Eliezer suggested Drescher's book Good and Real, which I've been belatedly making my way through.

And then, on pages 150-151, I see that Drescher actually attempts to explain (derive?) the Born probabilities. He also says that we can "reach the same conclusion [...] by appeal to decision theory," and references Deutsch 1999 ("Quantum Theory of Probability and Decisions") and Wallace 2003 ("Quantum Probability and Decision Theory, Revisited").

My problem: I still don't get it. I loved Eliezer's commonsense explanation of QM and MWI. I'm looking for something at the same level, just as intuitive, for the Born probabilities.

Anyone willing and able to take on that challenge?

Comment by DonGeddis on The Hero With A Thousand Chances · 2009-07-31T20:48:27.844Z · LW · GW

"Dust" has been used in SF for nanotech before. And especially runaway nanotech, that is trying to disassemble everything, like a doomsday war weapon that got out of control. I recalled the paperclip maximizer too. Oh, and the Polity/Cormac SF books by Neal Asher, with Jain nodes (made by super AIs) that seem to have roughly the same objective.

Comment by DonGeddis on Absolute denial for atheists · 2009-07-17T03:59:32.297Z · LW · GW

Is there anything that you consider proven beyond any possibility of doubt by both empirical evidence and pure logic, and yet saying it triggers automatic stream of rationalizations in other people?

  • Hitler had a number of top-level skills, and we could learn (some) positive lessons from his example(s).

  • Eugenics would improve the human race (genepool).

  • Human "racial" groups may have differing average attributes (like IQ), and these may contribute to the explanation of historical outcomes of those groups.

(Perhaps these aren't exactly topics that Less Wrong readers (in particular) would run away from. I was attempting to answer the question by riffing off Paul Graham's idea of taboos. What is it "not appropriate" to talk about in ordinary society? Politeness might trigger the rationalization response...)

Comment by DonGeddis on Epistemic Viciousness · 2009-05-08T04:06:51.137Z · LW · GW

rlpowell, you are incorrect. You are spouting an untested theory that is repeated as fact by those with a vested interest in avoiding the harsh light of truth.

In actual fact, there is no problem with breaking someone's arm in an MMA fight (see Mir vs. Sylvia in the UFC, for example). It's also close to impossible to break someone's neck (deliberately), despite what you may see in movies.

The "we're too dangerous to fight" is an easy meme to propagate. But let me just ask you this: let's just say, hypothetically, that your theory ("maximum damage" masters are "useless in MMA fights") was false. How would you ever know? Assuming that someone did not yet have a belief about that proposition, what kind of evidence are you actually aware of, about whether the statement is true or false?

Comment by DonGeddis on The Most Important Thing You Learned · 2009-05-01T21:32:43.512Z · LW · GW

I happened to have a young child about to enter elementary school when I read that, and it crystallized my concern about rote memorization. I forced many fellow parents to read the essay as well.

  • I realize you mostly care about #1, but just for more data: for #2 I'd probably put the Quantum Physics sequence, although that is a large number of posts, and the effect is hard to summarize in a few pages.

  • For #3 I liked (within evolution) that we are adaptation-executers, not fitness-maximizers.

Comment by DonGeddis on Changing Emotions · 2009-01-05T05:33:46.000Z · LW · GW

I agree with Doug S. What most people think about, when they want to "try being female for a while", is to keep their same mind (or perhaps they believe in a soul) while just trying out different clothing. Basically, be in The Matrix, but just get instantiated as the Woman in the Red Dress for a week. Or maybe more like the movie Strange Days, with a technology that's like TV (but better!), kind of like virtual reality. Like watching a movie, but using all your senses, and really getting immersed in it.

I don't think most men imagine actually thinking like a woman's brain thinks. As you say, that wouldn't really be them any longer.

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-16T22:38:00.000Z · LW · GW

@ John: can you really not see the difference between "this is guaranteed to succeed" and "this has only a tiny likelihood of failure"? Those aren't the same statements.

"If you play the game this way" -- but why would anyone want to play a game the way you're describing? Why is that an interesting game to play, an interesting way to compare algorithms? It's not about worst case in the real world, it's not about average case in the real world. It's about performance on a scenario that never occurs. Why judge algorithms on that basis?

As for predicting the random bits ... Look, you can do whatever you want inside your algorithm. Your queries on the input bits are like sensors into an environment. Why can't I place the bits after you ask for them? And then just move the 1 bits away from wherever you happened to decide to ask?

The point is, that you decided on a scenario that has zero relevance to the real world, and then did some math about that scenario, and thought that you learned something about the algorithms which is useful when applying them in the real world.

But you didn't. Your math is irrelevant to how these things will perform in the real world. Because your scenario has nothing to do with any actual scenario that we see in deployment.

(Just as an example: you still haven't acknowledged the difference between real random sources -- like a quantum counter -- vs. PRNGs -- which are actually deterministic! Yet if I presented you with a "randomized algorithm" for the n-bit problem, which actually used a PRNG, I suspect you'd say "great! good job! good complexity". Even though the actual algorithm is deterministic, and goes against everything you've been ostensibly arguing during this whole thread. You need to understand that the real key is: expected (anti-)correlations between the deterministic choices of the algorithm and the input data. PRNGs are sufficient to drop the expected (anti-)correlations low enough for us to be happy.)
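To illustrate that last parenthetical, here is a tiny sketch (my own, with illustrative names): a "randomized" query order driven by a seeded PRNG is a perfectly deterministic function of its seed, even though it serves to avoid any fixed correlation with the input layout.

```python
import random

def query_order(n, seed):
    """Deterministic function of (n, seed): the 'random' order in which bits get probed."""
    rng = random.Random(seed)   # a PRNG -- fully deterministic once the seed is fixed
    order = list(range(n))
    rng.shuffle(order)
    return order

# Same seed, same order, every time: nothing here is actually non-deterministic.
assert query_order(16, seed=42) == query_order(16, seed=42)
print(query_order(16, seed=42))
```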

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-16T22:25:00.000Z · LW · GW

@ Will: Yes, you're right. You can make a randomized algorithm that has the same worst-case performance as the deterministic one. (It may have slightly impaired average case performance compared to the algorithm you folks had been discussing previously, but that's a tradeoff one can make.) My only point is that concluding that the randomized one is necessarily better is far too strong a conclusion (given the evidence that has been presented in this thread so far).

But sure, you are correct that adding a random search is a cheap way to have good confidence that your algorithm isn't accidentally negatively correlated with the inputs. So if you're going to reuse the algorithm in a lot of contexts, with lots of different input distributions, then randomization can help you achieve average performance more often than (some kinds of) determinism, which might occasionally have the bad luck to settle into worst-case performance (instead of average) for some of those distributions.

But that's not the same as saying that it has better worst-case complexity. (It's actually saying that the randomized one has better average case complexity, for the distributions you're concerned about.)

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-16T19:53:00.000Z · LW · GW

To look at it one more time ... Scott originally said: "Suppose you're given an n-bit string, and you're promised that exactly n/4 of the bits are 1, and they're either all in the left half of the string or all in the right half."

So we have a whole set of deterministic algorithms for solving the problem over here, and a whole set of randomized algorithms for solving the same problem. Take the best deterministic algorithm, and the best randomized algorithm.

Some people want to claim that the best randomized algorithm is "provably better". Really? Better in what way?

Is it better in the worst case? No, in the very worst case, any algorithm (randomized or not) is going to need to look at n/4+1 bits to get the correct answer. Even worse! The randomized algorithms people were suggesting, with low average complexity, will -- in the very worst case -- need to look at more than n/4+1 bits, just because there are 3n/4 zero bits, and the algorithm might get very, very unlucky.

OK, so randomized algorithms are clearly not better in the worst case. What about the average case? To begin with, nobody here has done any average case analysis. But I challenge any of you to prove that every deterministic algorithm on this problem is necessarily worse, on average, than some (one?) randomized algorithm. I don't believe that is the case.

So what do we have left? You had to invent a bizarre scenario, supposing that the environment is an adversarial superintelligence who can perfectly read all of your mind except bits designated 'random', as Eliezer says, in order to find a situation where the randomized algorithm is provably better. OK, that proof works, but why is that scenario at all interesting to any real-world application? The real world is never actually in that situation, so it's highly misleading to use it as a basis for concluding that randomized algorithms are "provably better".

No, what you need to do is argue that the pattern of queries used by a (or any?) deterministic algorithm is more likely to be anti-correlated with where the 1 bits are in the environment's inputs, than the pattern used by the randomized algorithm. In other words, it seems you have some priors on the environment, that the inputs are not uniformly distributed, nor chosen with any reasonable distribution, but are in fact negatively correlated with the deterministic algorithm's choices. And the conclusion, then, would be that the average case performance of the deterministic algorithm is actually worse than the average computed assuming a uniform distribution of inputs.

Now, this does happen sometimes. If you implement a bubble sort, it's not unreasonable to guess that the algorithm might be given a list sorted in reverse order, much more often than picking from a random distribution would suggest.

And similarly, if the n-bits algorithm starts looking at bit #1, then #2, etc. ... well, it isn't at all unreasonable to suppose that around half the inputs will have all the 1 bits in the right half of the string, so the naive algorithm will be forced to exhibit worst-case performance (n/4+1 bits examined) far more often than perhaps necessary.

But this is an argument about average case performance of a particular deterministic algorithm. Especially given some insight into what inputs the environment is likely to provide.

This has not been an argument about worst case performance of all deterministic algorithms vs. (all? any? one?) randomized algorithm.

Which is what other commenters have been incorrectly asserting from the beginning, that you can "add power" to an algorithm by "adding randomness".

Maybe you can, maybe you can't. (I happen to think it highly unlikely.) But they sure haven't shown it with these examples.
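To put rough numbers on the average-case claims above, here is a small simulation sketch (my own construction, not anything from the thread). It generates instances of the n-bit problem with the 1 bits placed uniformly at random inside the chosen half -- an assumption, since the problem statement allows any placement -- and counts queries until the answer is settled, for a naive left-to-right scan, a deterministic order that alternates between halves, and random sampling without replacement.

```python
import random

def make_instance(n, rng):
    """n-bit string with exactly n/4 ones, all in one randomly chosen half.
    Here the ones land uniformly at random within that half; an adversary
    could place them less kindly."""
    bits = [0] * n
    start = rng.choice([0, n // 2])                      # left half or right half
    for i in rng.sample(range(start, start + n // 2), n // 4):
        bits[i] = 1
    return bits

def queries_to_settle(bits, order):
    """Probe bits in the given order; the first 1 found settles which half holds the ones."""
    for count, i in enumerate(order, start=1):
        if bits[i] == 1:
            return count
    return len(order)

n, trials = 64, 20_000
rng = random.Random(0)

orders = {
    "naive scan (bit 0, 1, 2, ...)": lambda: list(range(n)),
    "deterministic, alternating halves": lambda: [i // 2 + (i % 2) * (n // 2) for i in range(n)],
    "random sampling (without replacement)": lambda: rng.sample(range(n), n),
}

for name, make_order in orders.items():
    total = sum(queries_to_settle(make_instance(n, rng), make_order()) for _ in range(trials))
    print(f"{name:40s} average queries: {total / trials:.1f}")
```

On these uniform instances the alternating deterministic order does about as well as random sampling, while the naive scan does much worse -- and an adversary who knows any fixed order could push that order's average up, which is exactly the anti-correlation point made above.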

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-16T17:29:00.000Z · LW · GW

@ John, @ Scott: You're still doing something odd here. As has been mentioned earlier in the comments, you've imagined a mind-reading superintelligence ... except that it doesn't get to see the internal random string.

Look, this should be pretty simple. The phrase "worst case" has a pretty clear layman's meaning, and there's no reason we need to depart from it.

You're going to get your string of N bits. You need to write an algorithm to find the 1s. If your algorithm ever gives the wrong answer, we're going to shoot you in the head with a gun and you die. I can write a deterministic algorithm that will do this in at most n/4+1 steps. So we'll run it on a computer that will execute at most n/4+1 queries of the input string, and otherwise just halt (with some fixed answer). We can run this trillions of times, and I'm never getting shot in the head.

Now, you have a proposal. You need one additional thing: a source of random bits, as an additional input to your new algorithm. Fine, granted. Now we're going to point the gun at your head, and run your algorithm trillions of times (against random inputs). I was only able to write a deterministic algorithm; you have the ability to write a randomized algorithm. Apparently you think this gives you more power.

Now then, the important question: are you willing to run your new algorithm on a special computer that halts after fewer than n/4+1 queries of the input string? Do you have confidence that, in the worst case, your algorithm will never need more than, say, n/4 queries?

No? Then stop making false comparisons between the deterministic and the randomized versions.

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-16T17:12:00.000Z · LW · GW

@ Will: You happen to have named a particular deterministic algorithm; that doesn't say much about every deterministic algorithm. Moreover, none of you folks seem to notice that pseudo-random algorithms are actually deterministic too...

Finally, you say: "Only when the input is random will [the deterministic algorithm] on average take O(1) queries. The random one will take O(1) on average on every type of input."

I can't tell you how frustrating it is to see you just continue to toss in the "on average" phrase, as though it doesn't matter, when in fact it's the critical part.

To be blunt: you have no proof that all deterministic algorithms require n/4+1 steps on average. You only have a proof that deterministic algorithms require n/4+1 steps in the worst case, full stop. Not "in the worst case, on average".

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-14T21:02:39.000Z · LW · GW

Silas is right; Scott keeps changing the definition in the middle, which was exactly my original complaint.

For example, Scott says: "In the randomized case, just keep picking random bits and querying them. After O(1) queries, with high probability you'll have queried either a 1 in the left half or a 1 in the right half, at which point you're done."

And yet this is no different from a deterministic algorithm. It can also query O(1) bits and, "with high probability", know the answer with certainty.

I'm really astonished that Scott can't see the sleight-of-hand in his own statement. Here's how he expresses the challenge: "It's clear that any deterministic algorithm needs to examine at least n/4 + 1 of the bits to solve this problem. On the other hand, a randomized sampling algorithm can solve the problem with certainty after looking at only O(1) bits on average."

Notice that tricky final phrase, "on average"? That's vastly weaker than what he is forcing the deterministic algorithm to do. The "proof" that a deterministic algorithm requires n/4+1 queries relies on a goal much more difficult than getting a quick answer "on average".

If you're willing to consider a goal of answering quickly only "on average", then a deterministic algorithm can also do it just as fast (on average) as your randomized algorithm.

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-14T04:45:17.000Z · LW · GW

@ Scott Aaronson. Re: your n-bits problem. You're moving the goalposts. Your deterministic algorithm determines with 100% accuracy which situation is true. Your randomized algorithm only determines with "high probability" which situation is true. These are not the same outputs.

You need to establish a goal with a fixed level of probability for the answer, and then compare a randomized algorithm to a deterministic algorithm that only answers to that same level of confidence.

That's the same mistake that everyone always makes, when they say that "randomness provably does help." It's a cheaper way to solve a different goal. Hence, not comparable.

Comment by DonGeddis on Selling Nonapples · 2008-11-14T00:51:09.000Z · LW · GW

@ Venu: Modern AI efforts are so far from human-level competence that Friendly vs. Unfriendly doesn't really matter yet. Eliezer is concerned about having a Friendly foundation for the coming Singularity, which starts with human-level AIs. A fairly stupid program (compared to humans) that merely drives a car just doesn't have the power to be a risk in the sense that Eliezer worries about.

Comment by DonGeddis on The Weighted Majority Algorithm · 2008-11-13T05:27:45.000Z · LW · GW

I agree with Psy-Kosh too. The key is, as Eliezer originally wrote, "never". That word appears in Theorem 1 (about the deterministic algorithm), but it does not appear in Theorem 2 (the bound on the randomized algorithm).

Basically, this is the same insight Eliezer suggests, that the environment is being allowed to be a superintelligent entity with complete knowledge in the proof for the deterministic bound, but the environment is not allowed the same powers in the proof for the randomized one.

In other words, Eliezer's conclusion is correct, but I don't think the puzzle is as difficult as he suggests. I think Psy-Kosh (and Daniel) are right, that the mistake is in believing that the two theorems are actually about the same topic. They aren't.

Comment by DonGeddis on Friedman's "Prediction vs. Explanation" · 2008-09-29T22:52:00.000Z · LW · GW

Oh, and Thomas says: "There is no way to choose one, except to make another experiment and see which theory - if any (still might be both well or both broken) - will prevail."

Which leads me to think he is constrained by the Scientific Method, and hasn't yet learned the Way of Bayes.

Comment by DonGeddis on Friedman's "Prediction vs. Explanation" · 2008-09-29T22:44:00.000Z · LW · GW

Peter de Blanc is right: Theories screen off the theorists. It doesn't matter what data they had, or what process they used to come up with the theory. At the end of the data, you've got twenty data points, and two theories, and you can use your priors in the domain (along with things like Occam's Razor) to compute the posterior probabilities of the two theories.

But that's not the puzzle. The puzzle doesn't give us the two theories. Hence, strictly speaking, there is no correct answer.

That said, we can start guessing likelihoods for what answer we would come up with, if we knew the two theories. And here what is important is that all we know is that both theories are "consistent" with the data they had seen so far. Well, there are an infinite number of consistent theories for any data set, so that's a pretty weak constraint.

Hence people are jumping into the guess that scientist #2 will "overfit" the data, given the extra 10 observations.

But that's not a conclusion you ought to make before seeing the details of the two theories. Either he did overfit the data, or he didn't, but we can't determine that until we see the theories.

So what it comes down to is that the first scientist has less opportunity to overfit the data, since he only saw the first 10 points. And, its successful prediction of the next 10 points is reasonable evidence that theory #1 is on the right track, whereas we have precious little evidence (from the puzzle) about theory #2.

But this doesn't say that theory #1 is better than theory #2. It only says that, if we ever had the chance to actually correctly evaluate both theories (using Bayesian priors on both theories and all the data), then we currently expect theory #1 will win that battle more often than theory #2.

But that's a weak, kind of indirect, conclusion.
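As a toy illustration of the "opportunity to overfit" point (my own construction, not the puzzle's actual theories, with a made-up linear data-generating process): fit a low-degree and a high-degree polynomial to the first ten points, then check how each predicts the later ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative data-generating process: y = 2x + small noise.
x = np.linspace(0, 2, 20)
y = 2 * x + rng.normal(0, 0.3, size=20)

x_seen, y_seen = x[:10], y[:10]      # points used to build the theory
x_next, y_next = x[10:], y[10:]      # the later observations it must predict

for degree in (1, 9):
    coeffs = np.polyfit(x_seen, y_seen, degree)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_next) - y_next) ** 2))
    print(f"degree {degree}: consistent with the seen points; prediction RMSE = {rmse:.3g}")

# The degree-9 polynomial matches the seen points exactly, yet its extra freedom
# to fit noise typically makes its predictions of the later points far worse.
```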

Comment by DonGeddis on Hot Air Doesn't Disagree · 2008-08-16T15:32:04.000Z · LW · GW

You know, Eliezer, I've seen you come up with lots of interesting analogies (like pebblesorters) to explain your concept of morality. Another one occurred to me that you might find useful: music. It seems to have the same "conflict" between reductionist "acoustic vibrations" and Beethoven as morality does. Not to mention the question of what aliens or AIs might consider to be music. Or, for that matter, the fact that there are somewhat different kinds of music in different human cultures, yet all sharing some elements but not necessarily others.

And, in the end, there is no simple rule you can define, which distinguishes "music" vibrations from "non-music" vibrations.

Well, probably you don't need more suggestions. But I thought the "music ~= morality" connection was at least interesting enough to consider.

Comment by DonGeddis on Self-deception: Hypocrisy or Akrasia? · 2008-07-22T17:07:38.000Z · LW · GW
I don't practice what I preach because I'm not the kind of person I'm preaching to.

(by Bob Dobbs, in Newsweek ... long ago)

Comment by DonGeddis on Is Morality Given? · 2008-07-06T17:23:55.000Z · LW · GW

Eliezer seems to suggest that the only possible choices are morality-as-preference or morality-as-given, e.g. with reasoning like this:

[...] the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks. But I still think the morality-as-given viewpoint has the advantage [...]

But really, evolutionary psychology, plus some kind of social contract for group mutual gain, seems to account for the vast bulk of what people consider to be "moral" actions, as well as the conflict between private individual desires and actions that are "right". (People who break moral taboos are viewed not much differently from traitors in wartime, who betray their team/side/cause.)

I don't understand this series. Eliezer is writing multiple posts about the problems with the metatheories of morality as either preferences or given. Sure, both those metatheories are wrong. Is that really so interesting? Why not start to tackle what morality actually is, rather than merely what it is not?