Comment by Tom_McCabe on Beyond the Reach of God · 2008-10-05T04:57:00.000Z · LW · GW

"The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share."

There's always Amazon's Mechanical Turk. It's an inefficient use of people's time, but it's better than just telling people to go away. If people are reluctant to donate money, you can ask for donations of books- books are actually a fairly liquid asset.

Comment by Tom_McCabe on The Dilemma: Science or Bayes? · 2008-05-14T01:45:00.000Z · LW · GW

"Among all these comments, I see no appreciation of the fact that the version of many worlds we have just been given CANNOT MAKE PREDICTIONS, whereas "collapse theories" DO."

So far as I know, MWI and collapse both make the exact same predictions, although Eliezer has demonstrated that MWI is much cleaner in theoretical terms. If there's any feasible experiment which can distinguish between the two, I'm sure quantum physicists would already have tried it.

Comment by Tom_McCabe on Newcomb's Problem and Regret of Rationality · 2008-02-01T00:42:47.000Z · LW · GW

To quote E.T. Jaynes:

"This example shows also that the major premise, “If A then B” expresses B only as a logical consequence of A; and not necessarily a causal physical consequence, which could be effective only at a later time. The rain at 10 AM is not the physical cause of the clouds at 9:45 AM. Nevertheless, the proper logical connection is not in the uncertain causal direction (clouds =⇒ rain), but rather (rain =⇒ clouds) which is certain, although noncausal. We emphasize at the outset that we are concerned here with logical connections, because some discussions and applications of inference have fallen into serious error through failure to see the distinction between logical implication and physical causation. The distinction is analyzed in some depth by H. A. Simon and N. Rescher (1966), who note that all attempts to interpret implication as expressing physical causation founder on the lack of contraposition expressed by the second syllogism (1–2). That is, if we tried to interpret the major premise as “A is the physical cause of B,” then we would hardly be able to accept that “not-B is the physical cause of not-A.” In Chapter 3 we shall see that attempts to interpret plausible inferences in terms of physical causation fare no better."

Comment by Tom_McCabe on When None Dare Urge Restraint · 2007-12-09T23:37:00.000Z · LW · GW

"The same people who would never blindly accept a Bush Admin figure will blindly accept an anti-Bush figure."

Notice how you assume, without bothering to Google it, that the million-casualties figure was "anti-Bush". If it came from Clinton for President, or MoveOn, or the Democratic Party, you would have a case. In reality, the survey was conducted by Opinion Research Business, an independent polling agency which is not even US-based (their HQ is in London). The same group has published pro-Bush results in the past.

"The terrorists, with arms, attacked the unarmed. With intent to war, attacked those with no such intent. With planning, attacked those without notice."

Uh, we do this all the time, and nobody here has called us cowardly. Air Force bombers, from thirty thousand feet, routinely drop bombs without prior warning on people who cannot possibly retaliate. Even assuming no civilians are killed (which is almost never the case), insurgents with AK-47s cannot realistically hurt B-52s.

Comment by Tom_McCabe on Natural Selection's Speed Limit and Complexity Bound · 2007-11-18T00:35:00.000Z · LW · GW

"The big puzzle here is the inverse square of the mutation rate. The example of improvement in a starting population with a randomized genome of maximum variance, which can't be used to send a strongly informative message, doesn't explain the maintenance of nearly all information in a genome."

(hacks program for asexual reproduction)

I've found that, assuming asexual reproduction, the genome's useful information really does scale nice and linearly with the mutation rate. The amount of maintainable information decreases significantly (by a factor of three or so, in the original test data).

Comment by Tom_McCabe on Natural Selection's Speed Limit and Complexity Bound · 2007-11-06T21:41:00.000Z · LW · GW

"Wei Dai, being able to send 10 bits each with a 60% probability of being correct, is not the same as being able to transmit 6 bits of mathematical information. It would be if you knew which 6 bits would be correct, but you don't."

"Given sexuality and chromosome assortment but no recombination, a species with 100 chromosomes can evolve much faster than an asexual bacterial population!"

No, it can't. Suppose that you want to maintain the genome against a mutation pressure of one hundred bits per generation (one base flip/chromosome, to make it simple). Each member of the population, on average, will still have fifty good chromosomes. But you have to select on the individual level, and you can't let only those organisms with no bad chromosomes reproduce: the chances of such an organism existing are astronomical. You would have to stop, say, anyone with more than forty bad chromosomes from reproducing, so maybe the next generation you'd have an average of 37 or so. But then you add fifty more the next generation... the species will quickly die out, because you can't remove them as fast as they're being introduced.
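The argument above can be checked with a toy simulation. This is my own minimal sketch, not a model from the thread: the 100 chromosomes, the roughly-fifty-going-bad-per-generation pressure, and the forty-bad-chromosome cutoff are the comment's illustrative numbers, while the population size and everything else is invented.

```python
import random

# Toy version of the chromosome-selection argument. Each organism has
# 100 chromosomes; every generation each still-good chromosome goes bad
# with probability 0.5, and only organisms with at most 40 bad
# chromosomes reproduce.
def extinction_generation(pop_size=1000, n_chrom=100, p_bad=0.5,
                          max_bad=40, max_gens=50):
    pop = [[True] * n_chrom for _ in range(pop_size)]
    for gen in range(max_gens):
        for org in pop:                      # mutation pressure
            for i in range(n_chrom):
                if org[i] and random.random() < p_bad:
                    org[i] = False
        survivors = [o for o in pop if o.count(False) <= max_bad]
        if not survivors:
            return gen                       # nobody passed selection
        pop = [random.choice(survivors)[:] for _ in range(pop_size)]
    return None

random.seed(0)
print(extinction_generation())  # extinct within the first few generations
```

Bad chromosomes pile up faster than truncation selection can remove them, so the population crosses the cutoff almost immediately, as the comment argues.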

"Each individual chromosome can be selected, mostly independently of the others."

All the chromosomes are packaged together in a single individual. Reproduction occurs at the individual level; you can't reproduce some chromosomes and not others.

Comment by Tom_McCabe on Natural Selection's Speed Limit and Complexity Bound · 2007-11-05T23:25:00.000Z · LW · GW

"This increases the potential number of semi-meaningful bases (bases such that some mutations have no effect but other mutations have detrimental effect) but cancels out the ability to store any increased information in such bases."

If 27% of all mutations have absolutely no effect, the "one mutation = one death" rule is broken, and so more information can be stored because the effective mutation rate is lower (this also means, of course, that the rate of beneficial mutations is lower). So it may be a 40 MB bound instead of a 25 MB bound, but it doesn't change the basic conclusion.
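The comment's "40 MB" is a loose figure; a back-of-the-envelope version of the same correction (my own arithmetic, assuming the bound scales linearly with the inverse of the effective mutation rate) gives a somewhat smaller but same-order number:

```python
# If 27% of mutations are strictly neutral, the effective deleterious
# mutation rate drops to 73% of nominal, and a bound proportional to
# 1/rate rises by the same factor.
neutral_fraction = 0.27
base_bound_mb = 25.0  # the original post's bound
print(round(base_bound_mb / (1 - neutral_fraction), 1))  # 34.2 (MB)
```

Either way, the basic conclusion of the post is unchanged.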

"If the environment shifts, the homogeneous population may be wiped out but part of the diverse population may survive."

If you start postulating group selection arguments, you won't be able to understand evolution clearly. And the professional evolutionary biologists will think of you as a crackpot. And your dog will get sick and die.

"But all species are considered to be descendants of the same LUCA (Last Universal Common Ancestor), and there is no mathematical reason to consider each species separately."

If the species have stopped interbreeding, deleterious mutations can accumulate in each species independently. Evolution is a mathematical process which does not care what happened ten million years ago.

"What you forget to take into account is that a growing population changes the conditions of the population, and changes selection pressure."

Yes, that's precisely the point. If you have a long period of weak selection pressure, the population will increase and selection pressure will increase. If you have a long period of strong selection pressure, the population will decrease (unless the species is driven to extinction). Hence, you can reliably predict an average selection pressure, because the two must balance each other out.

"The next step should be to try and find some mathematics that applies to non-equilibrium states. Maybe then you can draw some conclusions about the real world."

This has probably already been done.

"The thing is that evolution is not just a thing of species, evolution takes places at all those levels"

I repeat: if you use group selection arguments, your dog will get sick and die.

Comment by Tom_McCabe on Torture vs. Dust Specks · 2007-10-31T02:43:00.000Z · LW · GW

"The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and "Can't Say No" spending."

Could you post a scenario where most people would choose the option which unambiguously causes greater harm, without getting into these kinds of debates about what "harm" means? Eg., where option A ends with shooting one person, and option B ends with shooting ten people, but option B sounds better initially? We have a hard enough time getting rid of irrationality, even in cases where we know what is rational.

Comment by Tom_McCabe on Torture vs. Dust Specks · 2007-10-31T01:07:00.000Z · LW · GW

"An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases."

From Peter's proof, it seems like you should be able to prove that an arbitrarily large (but finite) utility function will be dominated by events with arbitrarily large (but finite) improbabilities.

"Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity."

And so we come to the billion-dollar question: Will scope insensitivity of this type be eliminated under CEV? So far as I can tell, a utility function is arbitrary; there is no truth which destroys it, and so the FAI will be unable to change around our renormalized utility functions by correcting for factual inaccuracy.

"Which exact person in the chain should first refuse?"

The point at which the negative utility of people catching on fire exceeds the positive utility of skydiving. If the temperature is 20 C, nobody will notice an increase of 0.00000001 C. If the temperature is 70 C, the aggregate negative utility could start to outweigh the positive utility. This is not a new idea.

"We face the real-world analogue of this problem every day, when we decide whether to tax everyone in the First World one penny in order to save one starving African child by mounting a large military rescue operation that swoops in, takes the one child, and leaves."

By some estimates, 10% of the world's adults, around 400 million people, own 85% of the world's wealth. Taxing them each one penny would give a total of $4 million, more than enough to mount this kind of a rescue operation. While incredibly wasteful, this would actually be preferable to some of the stuff we spend our money on; my local school district just voted to spend $9 million (current US dollars) to build a swimming pool. I don't even want to know how much we spend on $200 pants; probably more than $9 million in my town alone.
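The penny arithmetic checks out, in integer cents to avoid floating-point noise:

```python
# Checking the $4 million figure: one penny from each of the world's
# wealthiest ~400 million adults.
wealthy_adults = 400_000_000
total_dollars = wealthy_adults * 1 // 100  # one cent each, in dollars
print(total_dollars)  # 4000000
```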

Comment by Tom_McCabe on Torture vs. Dust Specks · 2007-10-30T20:21:00.000Z · LW · GW

"For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?"

Yes. Note that, for the obvious next question, I cannot think of an amount of money large enough such that I would rather keep it than use it to save a person from torture. Assuming that this is post-Singularity money which I cannot spend on other life-saving or torture-stopping efforts.

"You probably wouldn't blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you have saved from torture."

This is cheating, to put it bluntly- my utility function does not assign the same value to blinding someone and putting six billion dust specks in everyone's eye, even though six billion specks are enough to blind people if you force them into their eyes all at once.

"I'd still take the former. (10(10100))/(3^^^3) is still so close to zero that there's no way I can tell the difference without getting a larger universe for storing my memory first."

The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.

Comment by Tom_McCabe on Torture vs. Dust Specks · 2007-10-30T20:07:00.000Z · LW · GW

"Wow. People sure are coming up with interesting ways of avoiding the question."

My response was a real request for information- if this is a pure utility test, I would select the dust specks. If this were done to a complex, functioning society, adding dust specks into everyone's eyes would disrupt a great deal of important stuff- someone would almost certainly get killed in an accident due to the distraction, even on a planet with only 10^15 people and not 3^^^^3.

Comment by Tom_McCabe on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2007-10-23T02:48:00.000Z · LW · GW

"Congratulations, you made my brain asplode."

Your utility function should not be assigning things arbitrarily large additive utilities, or else you get precisely this problem (if pigs qualify as minds, use rocks), and your function will sum to infinity. If you "kill" by destroying the exact same information content over and over, it doesn't seem to be as bad, or even bad at all. If I made a million identical copies of you, froze them into complete stasis, and then shot 999,999 with a cryonics-proof Super-Plasma-Vaporizer, would this be immoral? It would certainly be less immoral than killing a million ordinary individuals, at least as far as I see it.

Comment by Tom_McCabe on Congratulations to Paris Hilton · 2007-10-22T01:55:00.000Z · LW · GW

"So how would you recognize a natural ethical process if you saw one?"

Suppose that you observe process A- maybe you look at it, or poke around a bit inside it, but you don't make a precise model. If you extrapolate A forward in time, you will get a probability distribution over possible states (including the states of all the other stuff that A touches). If A consistently winds up in very small regions of this distribution, compared to what your model predicts, and there's no way to fix your model without making it extremely complex, you can say A is an "ethical process". Two galaxies, or two rocks, or two rivers, can easily collide; but if you look at humans, or zebras, or even fish, you will notice that they run into each other much less often than you would expect if you made a simple Newtonian model.
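One way to make this concrete is a toy simulation; everything in it (the goal at zero, the step rule, the sample sizes) is invented for illustration, not taken from the comment:

```python
import random

# A "steered" process refuses steps that move it away from zero; a
# neutral process is a plain random walk from the same starting
# distribution. The steered process ends up in a far smaller region of
# outcome space than a simple random-walk model would predict.
def run(steered, steps=100):
    x = random.uniform(-50, 50)
    for _ in range(steps):
        step = random.choice([-1, 1])
        if steered and abs(x + step) > abs(x):
            step = -step  # veto moves away from the goal
        x += step
    return x

def spread(xs):
    return max(xs) - min(xs)

random.seed(0)
neutral = [run(False) for _ in range(200)]
steered = [run(True) for _ in range(200)]
print(spread(steered) < spread(neutral))  # True
```

An observer who only knew the step sizes would be very surprised by where the steered process ends up, which is the proposed signature of a goal-directed (or "ethical") process.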

Comment by Tom_McCabe on A Rational Argument · 2007-10-02T22:39:01.000Z · LW · GW

"(BTW, one of the reasons I don't vote is that I am confident that I cannot, under any circumstances, EVER, have sufficient and reliable information about the candidates to allow me to make a good decision. So, I believe all voting decisions people actually make are irrational.)"


Comment by Tom_McCabe on 9/26 is Petrov Day · 2007-09-26T19:33:41.000Z · LW · GW

"Can anyone arrange to get money to this man or his family? I'm tempted to donate, to honor his deed."

Comment by Tom_McCabe on The Lens That Sees Its Flaws · 2007-09-26T04:28:24.000Z · LW · GW

"I see man as made in the image of God."

This does make some sense. If man is made in the image of God, and we know God is a mass murderer, then we can predict that some men will also be mass murderers. And lo, we have plenty of examples- Hitler, Stalin, Mao, Pol Pot, etc.

"Sure God is not going to change natural law just because we are putting him to the test."

If God does exist, as soon as we finish saving the world and whatnot, he should be immediately arrested and put on trial for crimes against humanity, due to his failure to intervene in the Holocaust, the smallpox epidemics, WWI, etc.

"Twelve poor followers of Christ were able to convert the Roman empire."

Aye. And Karl Marx must have had divine powers too- how else could a single person, with no political authority, cause a succession of revolutions in some of the largest countries on Earth?

"I could go into the lives of the saints for other examples but I wont."

How do you know that large parts of their lives weren't simply made up?

"You call the getting to the probability of nuclear war a simple question?"

Read the literature on heuristics and biases- researchers deliberately use simple questions with factual answers, so that the data unambiguously show the flaws in human reasoning.

Comment by Tom_McCabe on The Lens That Sees Its Flaws · 2007-09-26T02:57:42.000Z · LW · GW

"I would argue that they are at the core of what it is to live a fully human life."

A fully human life, in the natural sense of the term, has an average span of sixteen years. That's the environment we were designed to live in- nasty, brutal, and full of misery. By the standards of a typical human tribe, the Holocaust would have been notable for killing such a remarkably small percentage of the population. Why on Earth would we want to follow that example?

"It looks like this website has rejected the theistic understanding of faith and hope."

Yes, for a very good reason- it does not work. If you stand in front of a truck, and you have faith that the truck will not run you over, and you hope that the truck will not run you over, your bones and vital organs will be sliced and diced and chopped and fried. The key factor in survival is not lack of hope, or lack of faith, but lack of doing stupid things such as standing in front of trucks.

"I don’t know how you can love something without it making you biased towards it."

This is not what we mean by "biased". By "bias", we mean bugs in the human brain which lead us to give wrong answers to simple questions of fact, such as "What is the probability of X?".

Comment by Tom_McCabe on Einstein's Arrogance · 2007-09-25T22:35:53.000Z · LW · GW

""Fixed by evidence" != "simple"."

This is certainly true in the general case, but all physics theories which I've studied in detail really are simple, in the bits of entropy sense.

Comment by Tom_McCabe on Einstein's Arrogance · 2007-09-25T22:28:13.000Z · LW · GW

"Please recall that my original contention was that Einstein must have had enough observational evidence to fix the information inherent in General Relativity as a solution. If you describe ways that the information in General Relativity can be fixed by evidence, you are not contradicting this."

True; why do you have to contradict the main point of a post to comment on it? My point was that the space of possibilities was not vast; it was quite small, given the common-sense rules of gravity and math which were known at the time. Developing GR took years, not because Einstein had to sort through ten million different versions of the theory, but because developing a single version of the theory is difficult.

"You are also falling prey to hindsight by not making an equal effort to consider how you could have justified alternatives as unique obvious solutions using subsets of other knowledge known at the time, rather than the particular aspects that now obviously seem so prominent."

This is mathematically impossible unless you assume false knowledge. If equations (A, B, C, D, E) are known at the time of Newton, and Newton's theory of gravity is unique if you assume A, C and D, then any alternative theory of gravity must contradict A, C, or D. Suppose that you can construct an alternative theory of gravity, which is unique assuming equations B and E. If you assume that both B and E are true, then the alternative theory of gravity must be true, hence Newton's theory must be false, hence either A, C, or D must be false. We know now that A, C, and D are all true, therefore, either B or E must be false.
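Since the argument is purely propositional, it can be machine-checked by enumerating truth assignments. The letters A through E, N (Newton's theory), and ALT (the alternative) are placeholders here, not specific physical equations:

```python
from itertools import product

# Premises: A, C, D force Newton's theory; B, E force the alternative;
# the two theories contradict; and A, C, D are in fact true. Conclusion:
# B and E cannot both be true.
def entails():
    for A, B, C, D, E, N, ALT in product([False, True], repeat=7):
        premises = (
            (not (A and C and D) or N) and   # A, C, D force Newton
            (not (B and E) or ALT) and       # B, E force the alternative
            not (N and ALT) and              # the theories contradict
            A and C and D                    # A, C, D are true
        )
        if premises and B and E:
            return False  # countermodel found
    return True

print(entails())  # True: the entailment holds in every model
```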

Comment by Tom_McCabe on Einstein's Arrogance · 2007-09-25T21:34:43.000Z · LW · GW

"If only you had been around to solve the problem instead of Maxwell and Einstein, how much work could have been saved!"

Obvious != simple != easy to learn. You of all people should understand this. You seemed to understand it seven years ago, back during the days of your wild and reckless youth. To quote SitS:

"Let's take a concrete example, the story Flowers for Algernon (later the movie Charly), by Daniel Keyes. (I'm afraid I'll have to tell you how the story comes out, but it's a Character story, not an Idea story, so that shouldn't spoil it.) Flowers for Algernon is about a neurosurgical procedure for intelligence enhancement. This procedure was first tested on a mouse, Algernon, and later on a retarded human, Charlie Gordon. The enhanced Charlie has the standard science-fictional set of superhuman characteristics; he thinks fast, learns a lifetime of knowledge in a few weeks, and discusses arcane mathematics (not shown). Then the mouse, Algernon, gets sick and dies. Charlie analyzes the enhancement procedure (not shown) and concludes that the process is basically flawed. Later, Charlie dies.

That's a science-fictional enhanced human. A real enhanced human would not have been taken by surprise. A real enhanced human would realize that any simple intelligence enhancement will be a net evolutionary disadvantage - if enhancing intelligence were a matter of a simple surgical procedure, it would have long ago occurred as a natural mutation. This goes double for a procedure that works on rats! (As far as I know, this never occurred to Keyes. I selected Flowers, out of all the famous stories of intelligence enhancement, because, for reasons of dramatic unity, this story shows what happens to be the correct outcome.)

Note that I didn't dazzle you with an abstruse technobabble explanation for Charlie's death; my explanation is two sentences long and can be understood by someone who isn't an expert in the field. It's the simplicity of smartness that's so impossible to convey in fiction, and so shocking when we encounter it in person. All that science fiction can do to show intelligence is jargon and gadgetry. A truly ultrasmart Charlie Gordon wouldn't have been taken by surprise; he would have deduced his probable fate using the above, very simple, line of reasoning. He would have accepted that probability, rearranged his priorities, and acted accordingly until his time ran out - or, more probably, figured out an equally simple and obvious-in-retrospect way to avoid his fate. If Charlie Gordon had really been ultrasmart, there would have been no story. "

We know that Newton's theory of gravity was hard to invent; it must not have been obvious, because nobody had solved it until Newton, and he was lauded as a hero for his great theory. And yet, it is so simple that we teach it to high school students, and some of them actually understand it. Newton's equation is also a unique solution; the constant of proportionality is fixed by experiment, the 1/r^2 dependence is fixed by the need to include Kepler's laws (which were well known at the time), and extra terms are excluded, because F must vanish when M2 vanishes, or else you violate the laws of motion which Newton had just discovered.
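For circular orbits, the route from the inverse-square law to Kepler's third law can be sketched numerically; the unit system (G*M = 1) and sample radii below are arbitrary choices for the check:

```python
import math

# Circular-orbit check that an inverse-square force reproduces Kepler's
# third law: T^2 proportional to r^3, so T^2/r^3 is the same for every
# orbit.
GM = 1.0

def period(r):
    # from F = m*v^2/r = G*M*m/r^2, the circular-orbit speed is sqrt(GM/r)
    v = math.sqrt(GM / r)
    return 2 * math.pi * r / v

ratios = [period(r) ** 2 / r ** 3 for r in (1.0, 2.5, 7.0)]
print(all(abs(x - ratios[0]) < 1e-9 for x in ratios))  # True
```

Any other exponent on r would break the constancy of T^2/r^3, which is the sense in which Kepler's laws pin down the form of the force.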

Comment by Tom_McCabe on Einstein's Arrogance · 2007-09-25T20:30:56.000Z · LW · GW

"Sure, if we don't mind that G and T take a full page to write out in terms of the derivatives of the metric tensor."

The Riemann tensor is a more natural measure of curvature than the metric tensor, and even in that language it's still pretty simple:

8*pi*T_ab = R_ab - (1/2)*g_ab*R

where the Ricci tensor R_ab is the contraction R^c_acb of the Riemann tensor, and the Ricci scalar R is g^ab * R_ab.

You can make any theory seem complicated by writing it out in some nonstandard format. Take Maxwell's equations of electromagnetism in tensor form:

dF = 0 and d*F = 4*pi*(*J), where * denotes the Hodge dual.

Now differential form:

(divergence) E = rho
(divergence) B = 0
(curl) E = -dB/dt
(curl) B = J + dE/dt

Now integral form:

(flux of E over closed surface A) = q
(flux of B over closed surface A) = 0
(line integral of E over closed loop l) = -d(flux of B over surface enclosed by l)/dt
(line integral of B over closed loop l) = (current I passing through surface enclosed by l) + d(flux of E over surface enclosed by l)/dt

Now in action-at-a-distance form:

E = (sum over q) -q/(4*pi) * ((r' unit vector from q)/r'^2 + r' * d/dt((r' unit vector from q)/r'^2) + d^2/dt^2 (r' unit vector from q))
B = (sum over q) E x -(r' unit vector from q)

Comment by Tom_McCabe on Einstein's Arrogance · 2007-09-25T16:21:25.000Z · LW · GW

"McCabe, you're right, it's completely obvious, it makes you wonder why Einstein took ten years to figure it out."

I never said it was obvious; I said that the equations were a unique solution imposed by various constraints. Proving that the equations are a unique solution is quite difficult; I can't do it, even with a ready-made textbook in front of me. There are many examples of simple, unique-solution equations being very hard to derive- Newton's law of gravity and Maxwell's laws of electromagnetism come to mind.

"But selecting the tensor framework, that is of course where all the bits had to go. It is not an obvious choice at all."

I agree that it is not at all obvious, but the search space doesn't seem to be all that large- how many mathematical toys are there which could form a viable framework for gravity? The difficulty seems to be in understanding the math well enough to determine whether it can represent real-world phenomena. Differential geometry is not a simple Bayesian hypothesis like "the cat is blue"; to figure out whether a piece of evidence Q supports a geometric theory of gravity, you have to understand what a geometric theory of gravity would look like (in Bayesian terms, which outcomes it would predict), which is quite difficult.

"Tom, is that an elaborate joke?"

No. What makes you think that?

Comment by Tom_McCabe on Einstein's Arrogance · 2007-09-25T03:46:49.000Z · LW · GW

"And remember that General Relativity was correct, from all the vast space of possibilities."

The Einstein field equation itself is actually extremely simple:

G = 8piT

where G is the Einstein tensor and T is the stress-energy tensor. Few serious competitors to GR have emerged for a very good reason; what sane modifications could you make to this equation? G and T have to be directly proportional, because everyone knows that the curvature of spacetime (and hence the effect of gravity) is directly proportional to the quantity of matter/energy. The constant of proportionality is fixed by direct measurement of g. G must vanish when T vanishes, as there must be no gravity in the absence of matter. T itself cannot be modified, because it's the only sane way to measure mass, energy, and momentum in the Lorentzian manifold framework. G cannot be modified, because it must be constructible from the metric tensor (a property of spacetime), it must be directly proportional to the amount of curvature, and it must be invariant with respect to the choice of coordinate system (the full derivation is left as an exercise to the reader in my textbook).

Comment by Tom_McCabe on We Don't Really Want Your Participation · 2007-09-11T19:24:29.000Z · LW · GW

"Tom, you appear to have given an argument for never funding anything that has research as a major component."

The utility of funding a specific project goes to zero as the amount of money that project requires per unit of output goes to infinity. Funding one project has an opportunity cost, in the utility equation, of not funding other projects. So at some point, it will make sense (doing the opposite would have a negative expected utility) to contribute to some other project than SIAI. I don't have a clear idea of where that point is, but we've gotten a lot closer in the past two years.

Comment by Tom_McCabe on We Don't Really Want Your Participation · 2007-09-11T01:44:36.000Z · LW · GW

"Why would the amount of research being done stay the same, if the amount of money coming in goes up by a factor of 10?"

Judging by the number of publications, and the lack of other strong Bayesian evidence. Money does not correlate well with thinking capacity; if you dump $20 million into a startup, its intelligent output will (on average) drop off rapidly.

"I guess they might spend it on advocacy, or buying hardware, or something, but surely what it would take for your comment about utility to be correct would be for them to do nothing with it. Why would they do that?"

I have no idea what SIAI's current budget is, or how they spend their money. I'm analyzing it using black-box efficiency, how much goes in versus how much comes out.

Comment by Tom_McCabe on We Don't Really Want Your Participation · 2007-09-10T22:22:47.000Z · LW · GW

How would an artist participate, other than just mailing in a check? Doesn't SIAI have something like $500K worth of checks, from this summer's fundraising alone? If the amount of research being done by SIAI stays the same, and the amount of money coming in goes up by a factor of 10, then the utility of every dollar goes down by a factor of 10; eventually it makes more sense to donate to other groups.

Comment by Tom_McCabe on The Crackpot Offer · 2007-09-09T02:00:10.000Z · LW · GW

"I challenge the "rules" set out by whomever thinks he's the know-all on what can be done with a compass and straight edge."

I would be interested to see what you can get out of a compass and straightedge if you change the allowable operations. You could wind up with something much more complex than the things the ancient Greeks studied (think of how much more complex a Riemannian manifold is than a Euclidean n-space, once you remove a few of Euclid's axioms).

Comment by Tom_McCabe on The Crackpot Offer · 2007-09-08T15:13:52.000Z · LW · GW

"So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory."

I know how that feels. When I was 14 or so, I took a course on cryptography, and the textbook proclaimed that modular inverses were the basis of public-key algorithms like RSA. I felt that modular inverses were crackable, and I plodded along on the problem for a few weeks, until I finally discovered a polynomial-time algorithm for doing modular inverses. It turned out that I had reinvented Euclid's algorithm, and the textbook authors were idiots.
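For the curious, here is a minimal version of the algorithm in question, the extended Euclidean algorithm, which computes modular inverses in time polynomial in the bit length of its inputs (this helper is my own sketch, not code from the comment):

```python
# Extended Euclidean algorithm for the modular inverse of a mod m.
def mod_inverse(a, m):
    # invariant: old_r == old_s * a (mod m) throughout the loop
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return old_s % m

print(mod_inverse(7, 40))  # 23, since 7 * 23 = 161 = 4*40 + 1
```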

Comment by Tom_McCabe on Why is the Future So Absurd? · 2007-09-07T20:04:09.000Z · LW · GW

"On the topic of reversing changes to appreciate their absurdity: movies that were made in say the 40s or 50s, seem much more alien to me than modern movies allegedly set hundreds of years in the future, or in different universes."

Most people do not know enough history (or rather, the specific parts of history) to even realize how absurd the past was. If you read a high-school level history textbook, which is the most information 99% of the public will actually remember (if that), history seems a great deal like the modern day: people had politics and governments and wars and good guys and bad guys and issues and so on, just like they do in the movies. The absurd parts get subtracted out, or added on as "irrelevant" trivia.

Comment by Tom_McCabe on Stranger Than History · 2007-09-01T23:18:50.000Z · LW · GW

"You can actually give a semi-plausible justification of special relativity based on what was known in 1901."

You can give a semi-plausible justification for anything. It was obvious at the time that our knowledge was incomplete, but the specific way in which our knowledge was incomplete was still a mystery. It is very easy to invent a plausible-sounding quack theory of physics; that is why we have the Crackpot Index.

Comment by Tom_McCabe on Say Not "Complexity" · 2007-08-30T02:27:46.000Z · LW · GW

"That was when I thought to myself, "Maybe this one is teachable.""

How many people have asked you about becoming an AGI designer? It sounds like you have a good deal of experience with rejecting applicants, even after weeding out the obvious crackpots.

Comment by Tom_McCabe on The Futility of Emergence · 2007-08-28T02:34:54.000Z · LW · GW

"As a result, the theory wasn't scrapped;"

By "the theory" you mean general relativity, which is one of the most well-confirmed theories in all of physics. You can't just come up with a slightly modified version of GR to accommodate weird observations; the Einstein field equation is a unique solution because of all the demands placed on any reasonable theory of gravity. If you assume:

  • Spacetime is flat in the absence of matter;
  • Spacetime curvature is linear with respect to the density of matter;
  • The standard principles of mathematics (e.g., two matrices with different dimensions cannot be equal);
  • The laws of physics are invariant under coordinate transformations (no preferred coordinate system); and
  • Spacetime does not have an a priori curvature not affected by matter;

you are forced to use general relativity.
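The conclusion those constraints force can be written compactly. With no a priori curvature (i.e., zero cosmological constant, matching the last assumption above), the Einstein field equation reads:

```latex
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} \;=\; \frac{8 \pi G}{c^{4}} \, T_{\mu\nu}
```

Here $R_{\mu\nu}$ and $R$ are the Ricci tensor and scalar curvature, and $T_{\mu\nu}$ is the stress-energy tensor of matter; the left side is the essentially unique divergence-free, second-order tensor built from the metric, which is the formal version of the "forced to use general relativity" claim.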

Comment by Tom_McCabe on The Futility of Emergence · 2007-08-27T21:13:10.000Z · LW · GW

"Tom McCabe, that is not proof."

There is no such thing as proof. See

Comment by Tom_McCabe on The Futility of Emergence · 2007-08-27T05:07:57.000Z · LW · GW

"Black holes, dark matter and dark energy seem to pretty much fit this description. They are, after all, inventions tacked on to calculations, in order to make theory and calculation fit observations."

See for dark matter.

Comment by Tom_McCabe on Science as Attire · 2007-08-23T18:05:59.000Z · LW · GW

"I wasn't aware that speech, empathy, and social skills were functions of the kidneys rather than the brain. "

We know empathy and social skills don't require general intelligence; plenty of mammals show empathy and social skills. If the definition of "intelligence" is "whatever occurs in the brain", then a 4004 CPU shows "intelligence" every time it adds two hex digits.

Comment by Tom_McCabe on Is Molecular Nanotechnology "Scientific"? · 2007-08-21T05:09:44.000Z · LW · GW

"Is Tom McCabe asserting that I have NOT done what is claimed in some part of my CV, or published something which is, in fact, a matter of record?"

If you want something specific, a quick check of your website shows a picture of you (on your personal homepage!) holding two Hugo awards to your head. A quick Google reveals that you have never won the Hugo; this is dishonest at best.

Comment by Tom_McCabe on Is Molecular Nanotechnology "Scientific"? · 2007-08-21T04:04:28.000Z · LW · GW

You mean the JV Post that has supposedly done all this, yet only has sixteen Google Scholar-indexed papers?

Comment by Tom_McCabe on Hindsight Devalues Science · 2007-08-17T22:51:11.000Z · LW · GW

Ouch. I had vague feelings that something was amiss, but I believed you when you said they were all correct. I knew that sociology had a lot of nonsense in it, but to proclaim the exact opposite of what actually happened and sound plausible is crazy (and dangerous!).

Comment by Tom_McCabe on Update Yourself Incrementally · 2007-08-15T03:13:03.000Z · LW · GW

"Got protocol? Yes or no?"

If there were any actual evidence, somebody would have claimed Randi's million-dollar prize years ago. I wasn't able to find a copy of "The Irreducible Mind" online; it doesn't have a Wikipedia article and apparently isn't that popular. A quick Google of the authors reveals that only one (Bruce Greyson) has a Wikipedia article. The lead author, Edward F. Kelly, is employed as a professor of "Perceptual Studies" at the University of Virginia Health System and has a Ph.D. from Harvard in "Psycholinguistics/Cognitive Science". The authors seem to work mainly within the field of psychology, asserting that it has "no explanation" for the human mind.

As for the other two links, the first one sounds like nonsense; the "research" was not peer-reviewed, replicated, or verified, and was "released exclusively to the Daily Mail", a well-known London tabloid. The article he linked is from The Evening Standard, another British tabloid, and asserts that "Virtually all the great scientific formulae which explain how the world works allow information to flow backwards and forwards through time - they can work either way, regardless.", along with a great deal of other obvious nonsense. The second one lists a number of anecdotes, none of which have sources, identifying references, or even names.

Comment by Tom_McCabe on Conservation of Expected Evidence · 2007-08-13T23:42:35.000Z · LW · GW

"Of course you are assuming a strong form of Bayesianism here. Why do we have to accept that strong form?"

Because it's mathematically proven. You might as well ask "Why do we have to accept the strong form of arithmetic?"
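The proof in question is a one-line consequence of Bayes' theorem: the prior must equal the posterior averaged over all possible observations. A small numerical check, using arbitrary illustrative probabilities:

```python
# Conservation of expected evidence: E[P(H|E)] over outcomes of E
# must equal the prior P(H). All numbers below are made up for
# illustration.

p_h = 0.3            # prior P(H)
p_e_given_h = 0.8    # likelihood P(E | H)
p_e_given_noth = 0.1 # likelihood P(E | ~H)

# Law of total probability: P(E)
p_e = p_e_given_h * p_h + p_e_given_noth * (1 - p_h)

# Bayes' theorem for each possible observation
post_if_e = p_e_given_h * p_h / p_e            # P(H | E)
post_if_note = (1 - p_e_given_h) * p_h / (1 - p_e)  # P(H | ~E)

# The posteriors, weighted by how likely each observation is,
# average back to exactly the prior.
expected_posterior = post_if_e * p_e + post_if_note * (1 - p_e)
assert abs(expected_posterior - p_h) < 1e-12
```

The assertion holds for any choice of prior and likelihoods, which is what "mathematically proven" means here: you cannot expect, on average, to end up more confident than you started.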

"So, if some evidence slightly moves the expectation in a particular direction, but does not push it across the 50% line from wherever it started, what is the big whoop?"

Because (in this case especially!) small probabilities can have large consequences. If we invent a marvelous new cure for acne with a 1% chance of killing the patient, the risk is well below 50%, and no specific person using the "medication" would expect to die; but no sane doctor would ever sanction such a "medication".

"Why is 50% special here?"

People seem to have a little arrow in their heads saying whether they "believe in" or "don't believe in" a proposition. If there are two possibilities, 50% is the point at which the little arrow goes from "not believe" to "believe".

Comment by Tom_McCabe on Absence of Evidence Is Evidence of Absence · 2007-08-13T00:01:42.000Z · LW · GW

Frank: It is impossible for A and ~A to both be evidence for B. If a lack of sabotage is evidence for a fifth column, then an actual sabotage event must be evidence against a fifth column. Obviously, had there been an actual instance of sabotage, nobody would have thought that way; they would have used the sabotage as more "evidence" for keeping the Japanese locked up. It's the Salem witch trials in a more modern form: if the woman/Japanese has committed crimes, this is obviously evidence for "guilty"; if they are innocent of any wrongdoing, this too is a proof, for criminals like to appear especially virtuous to gain sympathy.
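The impossibility claimed above follows from the law of total probability: P(B) is a weighted average of P(B|A) and P(B|~A), so the two conditionals cannot both sit on the same side of the prior. A quick check with arbitrary illustrative numbers:

```python
# If observing A raises the probability of B, observing ~A must
# lower it, because the prior is sandwiched between the two
# conditionals. Numbers are made up for illustration.

p_a = 0.4
p_b_given_a = 0.9     # observing A would raise our confidence in B
p_b_given_nota = 0.2  # so observing ~A must lower it

# Law of total probability: the prior P(B)
p_b = p_b_given_a * p_a + p_b_given_nota * (1 - p_a)

# P(B) lies strictly between the two conditional probabilities,
# so they cannot both exceed it (or both fall below it).
assert min(p_b_given_a, p_b_given_nota) < p_b < max(p_b_given_a, p_b_given_nota)
```

Put in the terms of the comment: if "no sabotage" is supposed to raise P(fifth column), then "sabotage" is forced, by arithmetic alone, to lower it.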

Comment by Tom_McCabe on The Apocalypse Bet · 2007-08-10T06:52:05.000Z · LW · GW

"Markets for more than a few years into the future normally say that the best forecast is that conditions will stay the same and/or that existing trends will continue."

Judging from the crude oil data, futures markets tend to lag current prices; i.e., if the price was $20 a few days/weeks/months/years ago, the futures price will be $20 today. Markets have short-term memories: they treat "normal" as whatever conditions have been like for the past few years, so if there's a deviation from "normal" (in either direction), people predict that the deviation will correct itself over time.

Comment by Tom_McCabe on The Apocalypse Bet · 2007-08-10T03:51:50.000Z · LW · GW

I'm not convinced that prediction markets supply data that's any more accurate than the predictions of individuals. Consider the market in crude oil futures; crude oil is much simpler and therefore much easier to predict than a Friendly intelligence explosion, and yet the data shows that futures are horrifically inaccurate at predicting future prices. In fact, for most of the past six years, you could have done better at predicting the price of crude oil by using the current price instead of the futures-market price. Does anyone have a link to a paper studying how accurate prediction markets are, compared to individual guessing?
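A hedged sketch of the comparison being made, on synthetic data (not the actual crude oil series): score a hypothetical "futures" forecast that echoes the spot price from several periods back against the naive forecast that tomorrow's price equals today's. On a trending series, the naive forecast wins, which is the pattern the comment describes.

```python
# Synthetic, steadily trending price series -- purely illustrative,
# not real market data.
prices = [20.0 + 0.5 * t for t in range(40)]

LAG = 5  # assumed lag: the "futures" forecast echoes the price from 5 steps back

# Total absolute error of each forecasting rule over the series.
naive_err = sum(abs(prices[t] - prices[t - 1])
                for t in range(LAG, len(prices)))    # forecast = current price
lagged_err = sum(abs(prices[t] - prices[t - LAG])
                 for t in range(LAG, len(prices)))   # forecast = lagged price

# On a trend, the stale (lagged) forecast accumulates more error.
assert naive_err < lagged_err
```

This toy only shows that *if* futures prices behave like lagged spot prices, the current price is the better predictor during a sustained trend; whether real futures markets actually behave this way is the empirical question the comment raises.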

Comment by Tom_McCabe on The Apocalypse Bet · 2007-08-10T03:24:08.000Z · LW · GW

Eliezer: Both Douglas Hofstadter and Ray Kurzweil have two children.

Comment by Tom_McCabe on Bayesian Judo · 2007-08-01T01:48:30.000Z · LW · GW

"Funny, I've always thought that debates are one of the most entertaining forms of social interaction available."

We may not have rationality dojos, but in-person debating is as good an irrationality dojo as you're going to get. In debating, you're rewarded for 'winning', regardless of whether what you said was true; this encourages people to develop rhetorical techniques and arguments which are fully general across all possible situations, as this makes them easier to use. And while it may be hard to give public demonstrations of rationality, demonstrations of irrationality are easy: simply talk about impressive-sounding nonsense in a confident, commanding voice, and people will be impressed (look at how well Hitler did).