Comments

Comment by toto on The Irrationality Game · 2010-10-06T11:54:36.834Z · LW · GW

One piece of evidence for the second is to notice how nations with small populations tend to cluster near the top of lists of countries by per-capita-GDP.

1) So do nations with very high taxes, e.g. the Nordic countries (or most of Western Europe, for that matter).

One of the outliers (Ireland) has probably been knocked down a few places recently by a worldwide crisis that may itself stem from excessive deregulation.

2) In very small countries, a single insanely rich individual can make a large difference to average wealth, even if the rest of the population is very poor; I think Brunei illustrates the point. So I'm not sure the supposedly high rank of small countries is indicative of anything (median GDP would be more useful - see the sketch after this list).

3) There are many small-population countries at the bottom of the chart too.
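
To make point 2 concrete, here's a toy sketch (the numbers are invented for illustration) of how one extremely rich resident drags the mean far above the median:

```python
import statistics

# Hypothetical toy incomes: 99 poor residents and one billionaire.
incomes = [2_000] * 99 + [1_000_000_000]

print(statistics.mean(incomes))    # ~10,001,980 -> "high per-capita GDP"
print(statistics.median(incomes))  # 2,000       -> what a typical resident earns
```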

Upvoted.

Comment by toto on Rationality quotes: August 2010 · 2010-08-06T08:06:27.898Z · LW · GW

This seems to be the premise of Isaac Asimov's "Nightfall".

Comment by toto on Newcomb's Problem and Regret of Rationality · 2010-07-22T09:16:49.783Z · LW · GW

OK. I assume the usual (Omega and Upsilon are both reliable and sincere, I can reliably distinguish one from the other, etc.)

Then I can't see how the game doesn't reduce to standard Newcomb, modulo a simple probability calculation, mostly based on "when I encounter one of them, what's my probability of meeting the other during my lifetime?" (plus various "actuarial" calculations).

If I have no information about the probability of encountering either, then my decision may be incorrect - but there's nothing paradoxical or surprising about this, it's just a normal, "boring" example of an incomplete information problem.

you need to have the correct prior/predisposition over all possible predictors of your actions, before you actually meet any of them.

I can't see why that is - again, assuming that the full problem is explained to you on encountering either Upsilon or Omega, both are truthful, etc. Why can I not perform the appropriate calculations and make an expectation-maximising decision even after Upsilon-Omega has left? Surely Omega-Upsilon can predict that I'm going to do just that and act accordingly, right?
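
A minimal sketch of that expectation-maximising calculation. The payoff numbers, and the assumption that Upsilon rewards the disposition opposite to the one Omega rewards, are inventions for concreteness:

```python
def expected_payoff(policy: str, p_omega: float) -> float:
    """Expected value of committing to `policy` ('one-box' or 'two-box'),
    given the probability that the predictor you face is Omega rather
    than Upsilon."""
    vs_omega = 1_000_000 if policy == "one-box" else 1_000
    vs_upsilon = 1_000_000 if policy == "two-box" else 1_000
    return p_omega * vs_omega + (1 - p_omega) * vs_upsilon

for p in (0.1, 0.5, 0.9):
    best = max(["one-box", "two-box"], key=lambda a: expected_payoff(a, p))
    print(f"P(predictor is Omega) = {p}: commit to {best}")
```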

Comment by toto on Open Thread June 2010, Part 2 · 2010-06-10T15:18:31.027Z · LW · GW

I have problems with the "Giant look-up table" post.

"The problem isn't the levers," replies the functionalist, "the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling... Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it's possible to program a conscious being in Haskell."

If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human's behaviour is dependent not just on the present state of the environment, but also on previous states. I don't see how you can successfully emulate a human without that. So the GLUT's entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.

Note that "creation of beliefs" (including about beliefs) is just a special case of memory. It's all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn't have this ability, it can't emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.

So I don't see how the non-consciousness of the GLUT is established by this argument.
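
To make the history-indexed structure concrete, here is a minimal sketch (the entries are invented; a real GLUT would of course be astronomically large):

```python
from typing import Dict, Tuple

# Each key is the entire sequence of inputs received so far;
# "memory" is nothing but the history-indexed structure of the table.
GLUT: Dict[Tuple[str, ...], str] = {
    ("hello",): "hi there",
    ("hello", "remember me?"): "yes - you said hello a moment ago",
}

history: Tuple[str, ...] = ()

def step(observation: str) -> str:
    """Extend the input history and look up the prescribed action."""
    global history
    history = history + (observation,)
    return GLUT.get(history, "<no entry>")

print(step("hello"))         # -> "hi there"
print(step("remember me?"))  # -> "yes - you said hello a moment ago"
```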

But in this case, the origin of the GLUT matters; and that's why it's important to understand the motivating question, "Where did the improbability come from?"

The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...) In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

But the difficulty is precisely in explaining why, in this respect, the GLUT would be any different from just about any possible human-created AI - keeping in mind the above, of course.

Comment by toto on Rationality quotes: June 2010 · 2010-06-03T09:29:12.995Z · LW · GW

When it comes to proving such obvious things, one will invariably fail to convince.

Montesquieu, "The Spirit of the Laws", book XXV, chapter XIII. (Link to the book, Original French)

Comment by toto on Rationality quotes: May 2010 · 2010-05-01T17:03:04.006Z · LW · GW

I don't know, to me he's just stating that the brain is the seat of sensation and reasoning.

Aristotle thought it was the heart. Both had arguments for their respective positions. Aristotle studied animals a lot and over-interpreted the evidence he had accumulated: to the naked eye the brain appears bloodless and unconnected to the organs; it is also insensitive, and can sustain some non-fatal damage; the heart, by contrast, reacts to emotions, is obviously connected to the entire body (through the circulatory system), and any damage to it leads to immediate death.

Also, in embryos the brain is typically formed much later than the heart. This is important if, like Aristotle, you spent too much time thinking about "the soul" (that mysterious folk concept which was at the same time the source of life and of sensation) and thus believed that the source of "life" was also necessarily the source of sensation, since both were functions of "the soul".

Hippocrates studied people more than animals, did not theorize too much about "the soul", and got it right. But it would be a bit harsh to cast that as a triumph of rationality against superstition.

Comment by toto on The I-Less Eye · 2010-03-29T09:05:13.874Z · LW · GW

Yes, yes he did, time and again (substituting "copy" for "zombie", as MP points out below). That's the Star Trek paradox.

Imagine that there is a glitch in the system, so that the "original" Kirk fails to dematerialise when the "new" one appears, so we find ourselves with two copies of Kirk. Now Scotty says "Sowwy Captain" and zaps the "old" Kirk into a cloud of atoms. How in the world does that not constitute murder?

That was not the paradox. The "paradox" is this: the only difference between "innocuous" teleportation, and the murder scenario described above, is a small time-shift of a few seconds. If Kirk1 disappears a few seconds before Kirk2 appears, we have no problem with that. We even show it repeatedly in programmes aimed at children. But when Kirk1 disappears a few seconds after Kirk2 appears, all of a sudden we see the act for what it is, namely murder.

How is it that a mere shift of a few seconds causes such a great difference in our perception? How is it that we can immediately see the murder in the second case, but that the first case seems so innocent to us? This stark contrast between our intuitive perceptions of the two cases, despite their apparent underlying similarity, constitutes the paradox.

And yes, it seems likely that the above also holds when a single person is made absolutely unconscious (flat EEG) and then awakened. Intuitively, we feel that the same person, the same identity, has persisted throughout this interruption; but when we think of the Star Trek paradox, and if we assume (as good materialists) that consciousness is the outcome of physical brain activity, we realise that this situation is not very different from that of Kirk1 and Kirk2. More generally, it illustrates the problems associated with assuming that you "are" the same person that you were just one minute ago (for some concepts of "are").

I was thinking of writing a post about this, but apparently all of the above seems to be ridiculously obvious to most LWers, so I guess there's not much of a point. I still find it pretty fascinating. What can I say, I'm easily impressed.

Comment by toto on Undiscriminating Skepticism · 2010-03-15T14:20:15.172Z · LW · GW

People who as their first reaction start pulling excuses why this must be wrong out of their asses get big negative points on this rationality test.

Well, if people are absolutely, definitely rejecting the possibility that this might ever be true, without looking at the data, then they are indeed probably professing a tribal belief.

However, if they are merely describing reasons why they find this result "unlikely", then I'm not sure there's anything wrong with that. They're simply expressing that their prior for "Communist economies did no worse than capitalist economies" is, all other things being equal, lower than .5.

There are several non-obviously-wrong reasons why one could reasonably assign a low prior to this belief. The most obvious is that when the Wall came down, economic migration went from East to West, not the other way round (East and West Germany being the most dramatic example).

Of course, this should not preclude a look at the hard data. Reality is full of surprises, and casual musings often miss important points. So again, saying "this just can't be so" and refusing to look at the data (which I presume is what you had in mind) is indeed probably tribal. Saying "hmmm, I'd be surprised if it were so" seems quite reasonable to me. Maybe I'm just tribalised beyond hope.

Comment by toto on Open Thread: March 2010 · 2010-03-01T14:24:10.829Z · LW · GW

Behind a paywall

But freely available from the website of one of the authors.

Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
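
For reference, a minimal Monty Hall simulation (a toy sketch, not the paper's actual training setup) showing why "learning to switch" pays off:

```python
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that was neither picked nor hides the prize.
    opened = random.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.33
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.67
```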

Comment by toto on What is Bayesianism? · 2010-02-27T20:15:24.119Z · LW · GW

Frequentists (or just about anybody involved in experimental work) report p-values, which are their main quantitative measure of evidence.

Comment by toto on Case study: abuse of frequentist statistics · 2010-02-22T11:26:23.511Z · LW · GW

(which would require us to know P(H), P(E|H), and P(E|~H))

Is that not precisely the problem? Often, the H you are interested in is so vague ("there is some kind of effect in a certain direction") that it is very difficult to estimate P(E|H) - or even to define it.

OTOH, P(E|~H) is often very easy to compute from first principles, or to obtain through experiments (since conditions where "the effect" is not present are usually the most common).

Example: I have a coin. I want to know if it is "true" or "biased". I flip it 100 times and get 78 tails. Now how do I estimate the probability of obtaining this many tails, given that the coin is "biased"? How do I even express that analytically? By contrast, it is very easy to compute the probability of this sequence (or any other) with a "non-biased" coin.
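
For instance, the tail probability under the fair-coin hypothesis is a short computation (a sketch using only the standard library):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float = 0.5) -> float:
    """Probability of exactly k tails in n independent flips."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# One-sided p-value: probability of 78 or more tails from a fair coin.
p_value = sum(binom_pmf(k, 100) for k in range(78, 101))
print(f"P(>= 78 tails | fair coin) = {p_value:.2e}")  # tiny, roughly ~1e-8
```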

So there you have it. The whole concept of "null hypotheses" is not a logical axiom, it simply derives from real-world observation: in the real world, for most of the H we are interested in, estimating P(E|~H) is easy, and estimating P(E|H) is either hard or impossible.

what about P(E|H)?? (Not to mention P(H).)

P(H) is silently set to .5. If you know P(E|~H), this makes P(E|H) unnecessary to compute the real quantity of interest, P(H|E) / P(~H|E). I think.
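
For reference, a sketch of the odds form of Bayes' rule, which makes the role of each term explicit (with P(H) = .5 the prior odds equal 1, so the posterior odds reduce to the likelihood ratio):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \quad \text{when } P(H) = .5$$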

Comment by toto on Med Patient Social Networks Are Better Scientific Institutions · 2010-02-20T20:23:15.680Z · LW · GW

I dispute none of this, but so far as I can tell or guess, the main thing powering the superior statistical strength of PatientsLikeMe is the fact that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get, and PatientsLikeMe has some standard statistical techniques that they use every time.

1) I'd like to see independent evidence of their "superior statistical strength".

2) On the face of it, the main difference between these guys and a proper clinical trial is an assumption that you can trust self-reports. Placebo effect be damned.

In particular, I'd really, really like to see the results for some homeopathic "remedy" (a real one, not one of those that silently include real active compounds).

Comment by toto on False Majorities · 2010-02-04T10:42:40.866Z · LW · GW

But believers in the Bible really do reject the Koran, and believers in the Koran reject (the extant versions of) the Bible (which they claim are corrupted, as can be "proved" by noticing that they disagree with the Koran). In the vegetarianism examples, by contrast, there is no mutual rejection, just people who emphasise a particular point while also accepting the others. Many of the people who go veggie to prevent animal suffering would also agree that meat production causes environmental damage. It's just that their own emotional hierarchy places animal suffering above environmental damage; there is no real disagreement about the state of the world (same map of the territory, different preferred locations).

Comment by toto on The AI in a box boxes you · 2010-02-02T14:07:03.077Z · LW · GW

Hmm, the AI could have said that if you are the original, then by the time you make the decision it will have already either tortured or not tortured your copies based on its simulation of you, so hitting the reset button won't prevent that.

Nothing can prevent something that has already happened. On the other hand, pressing the reset button will prevent the AI from ever doing it again. Consider that if it has done something that cruel once, it might do it many more times in the future.

Comment by toto on The things we know that we know ain't so · 2010-01-12T11:31:18.919Z · LW · GW

1- I can't remember anybody stating that "global warming has a serious chance of destroying the world". The world is a pretty big ball of iron. I doubt even a 10 K warming would have much of an impact on it, and I don't think anybody said it would - not even Al Gore.

2- I can remember many people saying that "man-made global warming has a serious chance of causing large disruption and suffering to extant human societies", or something to that effect.

3- If I try to apply "reference class forecasting" to this subject, my suggested reference class is "quantitative predictions consistently supported by a large majority of scientists, disputed by a handful of specialists and a sizeable number of non-specialists/non-scientists".

4- More generally, reference class forecasting doesn't seem to help much in stomping out bias, since biases affect the choice and delineation of which reference classes we use anyway.

Comment by toto on Contrarianism and reference class forecasting · 2009-11-29T17:07:28.949Z · LW · GW

IIRC Jensen's original argument was based on very high estimates for IQ heritability (>.8). When within-group heritability is so high, a simple statistical argument makes it very likely that large between-group differences contain at least a genetic component. The only alternative would be that some unknown environmental factor would depress all blacks equally (a varying effect would reduce within-group heritability), which is not very plausible.

Now that estimates of IQ heritability have been revised down to .5, the argument loses much of its power.

Comment by toto on Contrarianism and reference class forecasting · 2009-11-26T14:55:24.497Z · LW · GW

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time.

I think you are unduly conflating mainstream media with mainstream science. Most people do. Unless they're the actual scientists having their claims distorted, misrepresented, and sensationalised by the media.

This says it all.

When has there been a consensus in the established scientific literature about either certitude of catastrophic overpopulation, or imminent turnaround in oil production?

We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics.

Hm. Apparently you also have non-conventional definitions of "overwhelming" and "completely wrong".

Comment by toto on Newcomb's Problem and Regret of Rationality · 2009-11-18T14:37:33.926Z · LW · GW

I guess I'm missing something obvious. The problem seems very simple, even for an AI.

The way the problem is usually defined (Omega really is omniscient, he's not playing tricks on you, etc.), there are only two solutions:

  • You take the two boxes, and Omega had already predicted that, meaning that there is nothing in box B - you win $1000.

  • You take box B only, and Omega had already predicted that, meaning that there is $1M in box B - you win $1M.

That's it. Period. Nothing else. Nada. Rien. Nichts. Sod all. These are the only two possible options (again, assuming the hypotheses are true). The decision to take box B only is a simple outcome comparison. It is a perfectly rational decision (if you accept the premises of the game).

Now the way Eliezer states it is different from the usual formulation. In Eliezer's version, you cannot be sure of Omega's absolute accuracy. All you know is his previous record. That does complicate things, if only because you might be the victim of a scam (like the well-known trick to convince someone that you can consistently predict the winning horse in a 2-horse race - simply start with 2^N people, always give a different prediction to each half of them, discard those to whom you gave the wrong one, etc.).
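
That trick is easy to simulate (a sketch; the number of rounds is arbitrary):

```python
def prediction_scam(n_rounds: int = 5) -> int:
    """Start with 2^N marks; each round, predict horse A to half of them and
    horse B to the other half, then discard whoever got the losing tip.
    Whatever the race outcomes, exactly half survive each round."""
    marks = 2 ** n_rounds
    for _ in range(n_rounds):
        marks //= 2
    return marks

print(prediction_scam())  # 1 mark left, who has seen 5 perfect predictions
```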

At any rate, the other two outcomes that were impossible in the previous version (involving mis-prediction by Omega) are now possible, with a certain probability that you need to somehow ascertain. That may be difficult, but I don't see any logical paradox.
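
A minimal sketch of that calculation (the payoffs follow the standard formulation above; p is your estimated probability that the predictor is right about you):

```python
def ev_one_box(p: float) -> float:
    # Box B contains $1M iff one-boxing was (correctly) predicted.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # You always get box A's $1000; box B is full only on a mis-prediction.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.6, 0.9, 0.99):
    choice = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"predictor accuracy {p}: {choice}")

# One-boxing wins whenever p * 1e6 > 1000 + (1 - p) * 1e6, i.e. p > 0.5005.
```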

For example, if this happened in the real world, you might reason that the probability that you are being scammed is overwhelming compared to the probability that a truly omniscient predictor exists. This is a reasonable inference from the fact that we hear about scams every day, but nobody has ever reported such an omniscient predictor. So you would take both boxes and enjoy your expected $1000+epsilon (Omega may have been sincere but deluded, lucky in the previous 100 trials, and wrong in this one).

In the end, the guy who would win the most (in expected value!) would not be the "least rational", but simply the one who made the best estimates of the probabilities of each outcome, based on his own knowledge of the universe (if you have a direct phone line to the Angel Gabriel, you will clearly do better).

What is the part that would be conceptually (as opposed to technically/practically) difficult for an algorithm?