My Wild and Reckless Youth

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-30T01:52:35.000Z · LW · GW · Legacy · 53 comments

Contents

53 comments

It is said that parents do all the things they tell their children not to do, which is how they know not to do them.

Long ago, in the unthinkably distant past, I was a devoted Traditional Rationalist, conceiving myself skilled according to that kind, yet I knew not the Way of Bayes. When the young Eliezer was confronted with a mysterious-seeming question, the precepts of Traditional Rationality did not stop him from devising a Mysterious Answer. It is, by far, the most embarrassing mistake I made in my life, and I still wince to think of it.

What was my mysterious answer to a mysterious question? This I will not describe, for it would be a long tale and complicated. I was young, and a mere Traditional Rationalist who knew not the teachings of Tversky and Kahneman. I knew about Occam’s Razor, but not the conjunction fallacy. I thought I could get away with thinking complicated thoughts myself, in the literary style of the complicated thoughts I read in science books, not realizing that correct complexity is only possible when every step is pinned down overwhelmingly. Today, one of the chief pieces of advice I give to aspiring young rationalists is “Do not attempt long chains of reasoning or complicated plans.”

Nothing more than this need be said: even after I invented my “answer,” the phenomenon was still a mystery unto me, and possessed the same quality of wondrous impenetrability that it had at the start.

Make no mistake, that younger Eliezer was not stupid. All the errors of which the young Eliezer was guilty are still being made today by respected scientists in respected journals. It would have taken a subtler skill to protect him than ever he was taught as a Traditional Rationalist.

Indeed, the young Eliezer diligently and painstakingly followed the injunctions of Traditional Rationality in the course of going astray.

As a Traditional Rationalist, the young Eliezer was careful to ensure that his Mysterious Answer made a bold prediction of future experience. Namely, I expected future neurologists to discover that neurons were exploiting quantum gravity, a la Sir Roger Penrose. This required neurons to maintain a certain degree of quantum coherence, which was something you could look for, and find or not find. Either you observe that or you don’t, right?

But my hypothesis made no retrospective predictions. According to Traditional Science, retrospective predictions don’t count—so why bother making them? To a Bayesian, on the other hand, if a hypothesis does not today have a favorable likelihood ratio over “I don’t know,” it raises the question of why you today believe anything more complicated than “I don’t know.” But I knew not the Way of Bayes, so I was not thinking about likelihood ratios or focusing probability density. I had Made a Falsifiable Prediction; was this not the Law?
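
A minimal sketch of the likelihood-ratio point, with invented numbers: a hypothesis earns credence only if it assigns the observed data more probability than "I don't know" does.

```python
# Sketch (illustrative numbers only): posterior odds = prior odds * likelihood ratio.

def posterior_odds(prior_odds, p_data_given_h, p_data_given_ignorance):
    """Bayes' rule in odds form."""
    return prior_odds * (p_data_given_h / p_data_given_ignorance)

# A hypothesis that predicts the data no better than "I don't know"
# (likelihood ratio = 1) leaves your odds exactly where they started:
print(posterior_odds(0.1, 0.5, 0.5))  # 0.1 -- no update at all

# One that concentrates probability density on what actually happened
# gets paid for the risk it took:
print(posterior_odds(0.1, 0.9, 0.3))  # 0.3 -- a 3:1 boost in odds
```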

As a Traditional Rationalist, the young Eliezer was careful not to believe in magic, mysticism, carbon chauvinism, or anything of that sort. I proudly professed of my Mysterious Answer, “It is just physics like all the rest of physics!” As if you could save magic from being a cognitive isomorph of magic, by calling it quantum gravity. But I knew not the Way of Bayes, and did not see the level on which my idea was isomorphic to magic. I gave my allegiance to physics, but this did not save me; what does probability theory know of allegiances? I avoided everything that Traditional Rationality told me was forbidden, but what was left was still magic.

Beyond a doubt, my allegiance to Traditional Rationality helped me get out of the hole I dug myself into. If I hadn’t been a Traditional Rationalist, I would have been completely screwed. But Traditional Rationality still wasn’t enough to get it right. It just led me into different mistakes than the ones it had explicitly forbidden.

When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves “rationalists” do not rule the world. You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.

Traditional Rationality is taught as an art, rather than a science; you read the biographies of famous physicists describing the lessons life taught them, and you try to do what they tell you to do. But you haven’t lived their lives, and half of what they’re trying to describe is an instinct that has been trained into them.

The way Traditional Rationality is designed, it would have been acceptable for me to spend thirty years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera. This is enough to let the Ratchet of Science click forward, but it’s a little harsh on the people who waste thirty years of their lives. Traditional Rationality is a walk, not a dance. It’s designed to get you to the truth eventually, and gives you all too much time to smell the flowers along the way.

Traditional Rationalists can agree to disagree. Traditional Rationality doesn’t have the ideal that thinking is an exact art in which there is only one correct probability estimate given the evidence. In Traditional Rationality, you’re allowed to guess, and then test your guess. But experience has taught me that if you don’t know, and you guess, you’ll end up being wrong.

The Way of Bayes is also an imprecise art, at least the way I’m holding forth upon it. These essays are still fumbling attempts to put into words lessons that would be better taught by experience. But at least there’s underlying math, plus experimental evidence from cognitive psychology on how humans actually think. Maybe that will be enough to cross the stratospherically high threshold required for a discipline that lets you actually get it right, instead of just constraining you into interesting new mistakes.

53 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Robin_Hanson2 · 2007-08-30T04:33:22.000Z · LW(p) · GW(p)

This is a good exercise for all of us - tell a story of when we made a serious inference mistake.

comment by Neel_Krishnaswami · 2007-08-30T10:51:57.000Z · LW(p) · GW(p)

One of my mistakes was believing in Bayesian decision theory, and in constructive logic at the same time. This is because traditional probability theory is inherently classical, because of the axiom that P(A + not-A) = 1. This is an embarrassingly simple inconsistency, of course, but it led me to some interesting ideas.

Upon reflection, it turns out that the important idea is not Bayesianism proper, which is merely one of an entire menagerie of possible rationalities, but rather de Finetti's operationalization of subjective belief in terms of avoiding Dutch book bets. It turns out there are a lot of ways of doing that, because the only physically realizable bets are on finitely refutable propositions.

So you can have perfectly rational agents who never come to agreement, no matter how much evidence they see, because no finite amount of evidence can settle questions like whether the law of the excluded middle holds for propositions over the natural numbers.
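
A toy sketch of the Dutch-book operationalization (prices invented): an agent whose betting prices for A and not-A sum to more than 1 can be pumped for a sure loss, which is all that the classical coherence constraint amounts to here.

```python
# Toy Dutch-book sketch (invented prices): sell the agent a $1 bet on A
# and a $1 bet on not-A at its own stated prices. Exactly one of the two
# bets pays out, so the bookie collects the prices, pays $1, keeps the rest.

def sure_profit(price_a, price_not_a):
    return price_a + price_not_a - 1.0

print(sure_profit(0.6, 0.6))  # 0.2 guaranteed, however A turns out
print(sure_profit(0.7, 0.3))  # 0.0 -- coherent prices admit no sure profit
```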

Replies from: Peterdjones
comment by Peterdjones · 2012-10-08T20:09:35.416Z · LW(p) · GW(p)

One of my mistakes was believing in Bayesian decision theory, and in constructive logic at the same time. This is because traditional probability theory is inherently classical, because of the axiom that P(A + not-A) = 1.

Could you be so kind as to expand on that?

Replies from: Chrysophylax, warbo
comment by Chrysophylax · 2013-01-31T13:33:59.370Z · LW(p) · GW(p)

0 And 1 Are Not Probabilities - there is no finite amount of evidence that allows us to assign a probability of 0 or 1 to any event. Many important proofs in classical probability theory rely on marginalising to 1 - that is, saying that the total probability of mutually exclusive and collectively exhaustive events is exactly 1. This works just fine until you consider the possibility that you are incapable of imagining one or more possible outcomes. Bayesian decision theory and constructive logic are both valid in their respective fields, but constructive logic is not applicable to real life, because we can't say with certainty that we are aware of all possible outcomes.
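
In symbols, the marginalisation step referred to here is the standard one:

```latex
\sum_i P(H_i) = 1 \quad (H_i \text{ mutually exclusive, collectively exhaustive}),
\qquad
P(E) = \sum_i P(E \mid H_i)\, P(H_i).
```

If some outcome was never imagined, it is silently missing from both sums, and every quantity derived from them inherits the error.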

Constructive logic preserves truth values - it consists of taking a set of axioms, which are true by definition, and performing a series of truth-preserving operations to produce other true statements. A given logical system is a set of operations defined as truth-preserving - a syntax into which semantic statements (axioms) can be inserted. Axiomatic systems are never reliable in real life, because in real life there are no axioms (we cannot define anything to have probability 1) and no rules of syntax (we cannot be certain that our reasoning is valid). We cannot ever say what we know or how we know it; we can only ever say what we think we know and how we think we know it.

Replies from: Kindly, Kindly
comment by Kindly · 2013-01-31T13:48:31.301Z · LW(p) · GW(p)

Are there any particular arguments in constructive logic that you formerly believed, and now no longer believe?

Or is this just a thing where you are forever doomed to say "minus epsilon" every time you say "1" but it doesn't actually change what arguments you accept?

comment by Kindly · 2013-01-31T13:51:47.034Z · LW(p) · GW(p)

there is no finite amount of evidence that allows us to assign a probability of 0 or 1 to any event.

To be more precise, there is no such finite evidence unless there already exist events to which you assign probability 0 or 1. If such events do exist, then you may later receive evidence that allows them to propagate.
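
A minimal sketch of why 0 and 1 are stuck (standard Bayes in odds form, invented numbers):

```python
# Sketch: priors of exactly 0 or 1 are fixed points of Bayes' rule;
# no finite likelihood ratio can move them.

def update(prior, likelihood_ratio):
    """Posterior from a prior and the likelihood ratio P(E|H)/P(E|not-H)."""
    if prior == 1.0:
        return 1.0  # infinite prior odds: no finite evidence can touch them
    odds = prior / (1.0 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

print(update(0.5, 1000.0))  # ~0.999 -- an ordinary prior responds to evidence
print(update(0.0, 1000.0))  # 0.0   -- probability 0 stays at 0
print(update(1.0, 0.001))   # 1.0   -- probability 1 stays at 1
```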

Replies from: Chrysophylax
comment by Chrysophylax · 2013-02-01T18:11:15.675Z · LW(p) · GW(p)

Even if we have infinite evidence (positive or negative) for some set of events, we cannot achieve infinite evidence for any other event. The point of a logical system is that everything in it can be proven syntactically, that is, without assigning meaning to any of the terms. For example, "Only Bs have the property X" and "A has the property X" imply "A is a B" for any A, B and X - the proof makes no use of semantics. It is sound if it is valid and its axioms are true, but it is also only valid if we have defined certain operations as truth preserving. There are uncountably many logical systems under which the truth of the axioms will not ensure the truth of the conclusion - the reasoning won't be valid.

Non-probabilistic reasoning does not ever work in reality. We do not know the syntax with certainty, so we cannot be sure of any conclusion, no matter how certain we are about the semantic truth of the premises. The situation is like trying to speak a language you don't know using only a dictionary and a phrasebook - no matter how certain you are that certain sentences are correct, you cannot be certain that any new sentence is grammatically correct because you have no way to work out the grammar with absolute certainty. No matter how many statements we take as axioms, we cannot add any more axioms unless we know the rules of syntax, and there is no way at all to prove that our rules of syntax - the rules of our logical system - are the real ones. (We can't even prove that there are real ones - we're pretty darned certain about it, but there is no way to prove that we live in a causal universe.)

Replies from: Kindly
comment by Kindly · 2013-02-01T19:37:51.179Z · LW(p) · GW(p)

Well, yes. If we believe that A=>B with probability 1, it's not enough to assign probability 1 to A to conclude B with probability 1; you must also assign probability 1 to modus ponens.

And even then you can probably Carroll your way out of it.
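
For reference, the quantitative version of the modus ponens step is a standard inequality of probability logic (not from the thread):

```latex
P(B) \;\ge\; P\big(A \wedge (A \to B)\big) \;\ge\; P(A) + P(A \to B) - 1,
```

so P(B) = 1 is forced only when both P(A) and P(A → B) sit at exactly 1.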

comment by warbo · 2013-10-02T11:10:57.834Z · LW(p) · GW(p)

One of my mistakes was believing in Bayesian decision theory, and in constructive logic at the same time. This is because traditional probability theory is inherently classical, because of the axiom that P(A + not-A) = 1.

Could you be so kind as to expand on that?

Classical logics make the assumption that all statements are either exactly true or exactly false, with no other possibility allowed. Hence classical logic will take shortcuts like admitting not(not(X)) as a proof of X, under the assumptions of consistency (we've proved not(not(X)) so there is no proof of not(X)), completeness (if there is no proof of not(X) then there must be a proof of X) and proof-irrelevance (all proofs of X are interchangeable, so the existence of such a proof is acceptable as proof of X).

The flaw is, of course, the assumption of a complete and consistent system, which Gödel showed to be impossible for systems capable of modelling the natural numbers.

Constructivist logics don't assume the law of the excluded middle. This restricts classical 'truth' to 'provably true', classical 'false' to 'provably false' and allows a third possibility: 'unproven'. An unproven statement might be provably true or provably false or it might be undecidable.

From a probability perspective, constructivism says that we shouldn't assume that P(not(X)) = 1 - P(X), since doing so is assuming that we're using a complete and consistent system of reasoning, which is impossible.

Note that constructivist systems are compatible with classical ones. We can add the law of the excluded middle to a constructive logic and get a classical one; all of the theorems will still hold and we won't introduce any inconsistencies.

Another way of thinking about it is that the law of the excluded middle assumes that a halting oracle exists which allows us to take shortcuts in our proofs. The results will be consistent, since the oracle gives correct answers, but we can't tell which results used the oracle as a shortcut (and hence don't need it) and which would be impossible without the oracle's existence (and hence don't exist, since halting oracles don't exist).

The only way to work out which ones are shortcuts is to take 'the long way' and produce a separate proof which doesn't use an oracle; these are exactly the constructive proofs!
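
The constructive/classical boundary described here can be checked mechanically. A small Lean 4 sketch of the standard facts (not from the thread):

```lean
-- Constructively provable: the double negation of excluded middle.
theorem not_not_em (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))

-- Discharging the double negation to recover P ∨ ¬P itself requires the
-- classical axiom (here via Classical.byContradiction; Lean also ships
-- this result directly as Classical.em):
theorem em (P : Prop) : P ∨ ¬P :=
  Classical.byContradiction (not_not_em P)
```

Adding the classical axiom proves no new falsehoods, matching the compatibility point above.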

comment by Hopefully_Anonymous · 2007-08-30T12:55:26.000Z · LW(p) · GW(p)

Good post. I find your writing style a little overwrought for your audience (us overcomingbias readers) but the practical details and advice are gold.

comment by Flynn · 2007-08-30T12:59:04.000Z · LW(p) · GW(p)

Eliezer,

I'm wondering about the build up to becoming a Bayesian. Do you think it's necessary for a person to understand Traditional Rationality as a mode of thinking before they can appreciate Bayes?

Intuitively, I would suspect that an understanding and even appreciation of ol' fashioned either/or thinking is a necessary foundation for probabilities.

Sorry if this is out of left field. My wife just left for work -- she's a pre-school teacher -- and I was thinking of how the lesson might be applied to her students (who are admittedly far too young for this sort of thing just yet.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-30T13:22:08.000Z · LW(p) · GW(p)

Do you think it's necessary for a person to understand Traditional Rationality as a mode of thinking before they can appreciate Bayes?

Good question! I think it should be possible to start with Bayes, but I've never seen it done. Lessons on Traditional Rationality appeal to built-in human intuitions, like "Reality is either a certain way or it's not", so you'd appeal to the same intuitions but use them to introduce probability principles like "Your probabilities shouldn't sum to more than 1.0."

Replies from: aspera
comment by aspera · 2012-10-10T00:45:52.518Z · LW(p) · GW(p)

Is this what CFAR is trying to do?

I would be interested to hear what other members of the community think about this. I accidentally found Bayes after being trained as a physicist, which is not entirely unlike traditional rationality. But I want to teach my brother, who doesn't have any science or rationality background. Has anyone had success with starting at Bayes and going from there?

comment by anonymous4 · 2007-08-30T16:57:56.000Z · LW(p) · GW(p)

Eliezer,

Great post, as always. I think you're a great writer.

comment by MrHen · 2010-01-22T18:54:38.207Z · LW(p) · GW(p)

I think the following should be added to the about page in some form:

Traditional Rationalists can agree to disagree. Traditional Rationality doesn't have the ideal that thinking is an exact art in which there is only one correct probability estimate given the evidence. In Traditional Rationality, you're allowed to guess, and then test your guess. But experience has taught me that if you don't know, and you guess, you'll end up being wrong.

Until I read this exact paragraph I was always a little confused as to how any of this was terribly new or eye-opening. Putting everything that I have read in the last week into a perspective that includes this paragraph makes everything significantly more potent. If this nugget was in the previous posts I either missed it or forgot it. Either way, its impact did not match its importance.

So, yeah.

Replies from: Peterdjones
comment by Peterdjones · 2012-10-08T20:24:10.938Z · LW(p) · GW(p)

One correct probability estimate of what? You are tacitly assuming that someone has mapped the ideaspace and presented you with a tidy menu of options. But no-one could have converged on relativity before Einstein because he hadn't thought of it yet. Guessing bad, hypothesising good.

So...no.

comment by David_Gerard · 2010-12-06T17:49:43.506Z · LW(p) · GW(p)

But my hypothesis made no retrospective predictions. According to Traditional Science, retrospective predictions don't count - so why bother making them?

Not checking what your hypothesis would have meant doesn't look like science as she is did to me. What is the example you were thinking of here? I am having difficulty reconstructing a picture in my head of what you are calling "Traditional Rationality" without using straw.

comment by Joshua · 2011-01-30T03:15:28.133Z · LW(p) · GW(p)

While reading through this I ran into a problem. It seems intuitive to me that to be perfectly rational you would have to have instances in which given the same information two rationalists disagreed. I think this because I presume that a lack of randomness leads to a local maxima. Am I missing something?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-01-30T03:28:11.495Z · LW(p) · GW(p)

Unpack "local maxima". Maxima of what?

Replies from: Joshua
comment by Joshua · 2011-02-12T20:34:43.643Z · LW(p) · GW(p)

I'm thinking of being unable to reach a better solution to a problem because what you know conflicts with arriving at the solution.

Say your data leads you to an inaccurate initial conclusion. Everybody agrees on this conclusion. Wouldn't that conclusion be data for more inaccurate conclusions?

So I thought that there would need to be some bias that was put on your reasoning so that occasionally you didn't go with the inaccurate claim. That way if some of the data is wrong you still have rationalists who arrive at a more accurate map.

Tried to unpack it. Noticed that I seem to expect this "exact art" of rationality to be a system that can stand on its own when it doesn't. What I mean by that is that I seem to have assumed that you could build some sort of AI on top of this system which would always arrive at an accurate perception of reality. But if that was the case, wouldn't Eliezer already have done it?

I feel like I'm making mistakes and being foolish right now, so I'm going to stop writing and eagerly await your corrections.

Replies from: somejan, Ratheka
comment by somejan · 2011-02-22T12:25:53.827Z · LW(p) · GW(p)

There's nothing in being a rationalist that prevents you from considering multiple hypotheses. One thing I've not seen elaborated on a lot on this site (but maybe I've just missed it) is that you don't need to commit to one theory or the other, the only time you're forced to commit yourself is if you need to make a choice in your actions. And then you only need to commit for that choice, not for the rest of your life. So a bunch of perfect rationalists who have observed exactly the same events/facts (which of course doesn't happen in real life) would ascribe exactly the same probabilities to a bunch of theories. If new evidence came in they would all switch to the new hypothesis because they were all already contemplating it but considering it less likely than the old hypothesis.

The only thing preventing you from considering all possible hypotheses is lack of brain power. This limited resource should probably be divided among the possible theories in the same ratio that you're certain about them, so if you think theory A has a probability of 50% of being right, theory B a probability of 49% and theory C a probability of 1%, you should spend 99% of your efforts on theory A and B. But if the probabilities are 35%, 33% and 32% you should spend almost a third of your resources on theory C. (Assuming the goal is just to find truth, if the theories have other utilities that should be weighted in as well.)
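
A sketch of the allocation rule proposed here (the proportional split is this comment's own proposal, not an established algorithm; the hypotheses and hours are invented):

```python
# Sketch: split a fixed research budget across hypotheses in proportion
# to how probable you currently think each one is.

def allocate_effort(posteriors, budget_hours):
    total = sum(posteriors.values())
    return {h: budget_hours * p / total for h, p in posteriors.items()}

print(allocate_effort({"A": 0.50, "B": 0.49, "C": 0.01}, 100))
# roughly {'A': 50, 'B': 49, 'C': 1}  -- C is almost ignored
print(allocate_effort({"A": 0.35, "B": 0.33, "C": 0.32}, 100))
# roughly {'A': 35, 'B': 33, 'C': 32} -- C now earns nearly a third
```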

Replies from: wedrifid
comment by wedrifid · 2012-01-21T04:24:13.433Z · LW(p) · GW(p)

The only thing preventing you from considering all possible hypotheses is lack of brain power. This limited resource should probably be divided among the possible theories in the same ratio that you're certain about them

Likelihood is one consideration when determining how much to investigate a possible hypothesis, but it isn't the only consideration. Quite often the ratio of attention should be different to the ratio of credibility.

comment by Ratheka · 2012-01-21T03:32:16.971Z · LW(p) · GW(p)

I think even a perfect implementation of Bayes would not in and of itself be an AI. By itself, the math doesn't have anything to work on, or any direction to do so. Agency is hard to build, I think.

As always, of course, I could be wrong.

Replies from: ata
comment by ata · 2012-01-21T06:23:22.733Z · LW(p) · GW(p)

Would a "perfect implementation of Bayes", in the sense you meant here, be a Solomonoff inductor (or similar, perhaps modified to work better with anthropic problems), or something perfect at following Bayesian probability theory but with no prior specified (or a less universal one)? If the former, you are in fact most of the way to an agent, at least some types of agents, e.g. AIXI.

Replies from: Ratheka
comment by Ratheka · 2012-01-21T08:07:19.522Z · LW(p) · GW(p)

Well, I'm not personally capable of building AIs, and I'm not as deeply versed as I'm sure many people here are, but I see an implementation of Bayes' theorem as a tool for finding truth, in the mind of a human or an AI or whatever sort of person you care to conceive of / display, whereas the mind behind it is an agent with a quality we might call directedness, or intentionality, or simply an interest to go out and poke the universe with a stick where it doesn't make sense. Bayes is in itself already math, easy to put into code, but we don't understand internally directed behavior well enough to model it, yet.

comment by royf · 2012-06-14T06:36:49.117Z · LW(p) · GW(p)

Traditional Rationalists can agree to disagree. Traditional Rationality doesn't have the ideal that thinking is an exact art in which there is only one correct probability estimate given the evidence.

This is also true of Bayesians. The probability estimate given the evidence is a property of the map, not the territory (hence "estimate"). One correct posterior implies one correct prior. What is this "Ultimate Prior"? There isn't one.

Possibly, you meant that there's one correct posterior given the evidence and the prior. That's correct, but it doesn't prevent Bayesians from disagreeing, because they do have different priors.

Alternatively, one can point out that the "given evidence" operator is, in expectation, always non-expansive, and contractive when the priors disagree. This means that the beliefs of Perfect Bayesians with shared observations converge (with probability 1) into a single posterior. But this convergence is too slow for humans. Agreeing to disagree is sometimes our only option.

Incidentally, it's Traditional Rationalists who believed they should never agree to disagree: the set of hypotheses which aren't "ruled out" by confirmed and repeatable experiments, they argued, is a property of the territory.
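
A simulation sketch of the convergence claim (all parameters invented): two Bayesians with strongly disagreeing Beta priors watch the same coin, and shared evidence drags their posterior means together, slowly.

```python
# Sketch: shared observations contract the disagreement between two
# Bayesians with different priors -- but only gradually.
import random

random.seed(0)
true_p = 0.7
flips = [random.random() < true_p for _ in range(1000)]

priors = {"skeptic": (1, 9), "believer": (9, 1)}  # Beta(a, b) priors

for n in [0, 10, 100, 1000]:
    heads = sum(flips[:n])
    for name, (a, b) in priors.items():
        mean = (a + heads) / (a + b + n)  # Beta posterior mean
        print(f"{n:4d} flips  {name:8s}  P(heads) = {mean:.3f}")
```

At 0 flips they answer 0.1 and 0.9; after 1000 shared flips both sit near the true frequency, with a residual gap from the priors.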

Replies from: beoShaffer
comment by beoShaffer · 2012-06-14T06:45:10.283Z · LW(p) · GW(p)

http://wiki.lesswrong.com/wiki/Aumann%27s_agreement_theorem

Replies from: royf
comment by royf · 2012-06-14T06:55:35.915Z · LW(p) · GW(p)

I'm aware of this result. It specifically requires the two Bayesians to have the same prior. My point is exactly that this doesn't have to be the case, and in reality is sometimes not the case.

EDIT: The original paper by Aumann references a paper by Harsanyi which supposedly addresses my point. Aumann himself is careful in interpreting his result as supporting my point (since evidently there are people who disagree despite trusting each other). I'll report here my understanding of the Harsanyi paper once I get past the paywall.

Replies from: royf
comment by royf · 2012-06-15T18:44:10.046Z · LW(p) · GW(p)

The Harsanyi paper is very enlightening, but he's not really arguing that people have shared priors. Rather, he's making the following points (section 14):

  • It is worthwhile for an agent to analyze the game as if all agents have the same prior, because it simplifies the analysis. In particular, the game (from that agent's point of view) then becomes equivalent to a Bayesian complete-information game with private observations.

  • The same-prior assumption is less restrictive than it may seem, because agents can still have private observations.

  • A wide family of hypothetical scenarios can be analyzed as if all agents have the same prior. Other scenarios can be easily approximated by a member of this family (though the quality of the approximation is not studied).

All of this is mathematically very pleasing, but it doesn't change my point. That's mainly because in the context of the Harsanyi paper "prior" means before any observation, and in the context of this post "prior" means before the shared observation (but possibly after private observations).

comment by Epiphany · 2012-10-08T00:48:55.243Z · LW(p) · GW(p)

Problem: "retrospective predictions" is undefined here. Search does not locate this term anywhere on the LessWrong website, the LessWrong wiki or on Wikipedia, but it seems to be the crux of this piece that we have to make retrospective predictions. Also, it's not clear what you mean by it because it sounds oxymoronic - you can't predict something that already happened. My best guess about what you mean by "retrospective predictions" is: Say someone has a theory that humans are hairless because they evolved from aquatic monkeys. That person should "predict" that there's past evidence of aquatic monkeys existing at the right place/time/circumstance/whatever and then go do some research to find out.

Replies from: gwern
comment by gwern · 2012-10-08T02:00:48.520Z · LW(p) · GW(p)

Retrospective prediction is an expansion of http://en.wikipedia.org/wiki/retrodiction

Replies from: Epiphany
comment by Epiphany · 2012-10-08T03:07:30.908Z · LW(p) · GW(p)

Oh, thank you, Gwern! Ok, so retrodiction is more like this: There are facts that we currently know and phenomena that have already happened so you should consider whether your theory would have predicted them. It's not "did something related precede this" but "If we had known this theory before realizing certain facts or making certain observations, would the theory have predicted or explained these?"

Hmm for examples... if there were an all-knowing, all-powerful, all-loving God, what would I predict? If life on earth evolved, what would I predict?

What would God do? Make something awesome or lounge around feeling enlightened. I'm personifying here, and I know it... I have no idea what a God would do but I suspect that it would not be "Make a bunch of creatures knowing that a bunch of them will experience horrible suffering. Demand that they have faith but confuse them with a bunch of different religions to choose from. Create each of them knowing exactly how they'll reason and what they'll experience and what that combination will result in and demand certain beliefs that won't make sense to some of them."

Whereas with evolution, I'd predict that various life forms would evolve, some would succeed, some would not, life would be more like a chaotic experiment than a harmonious symphony, the smartest life forms would be dreadfully confused for quite some time before having it together...

And this sounds like earth.

Replies from: wedrifid, CCC, gwern
comment by wedrifid · 2012-10-08T03:27:19.648Z · LW(p) · GW(p)

Whereas with evolution, I'd predict that various life forms would evolve, some would succeed, some would not, life would be more like a chaotic experiment than a harmonious symphony, the smartest life forms would be dreadfully confused for quite some time before having it together...

I would expect most life to just end up as planets full of green goo (i.e. like grey goo but natural). But I'd expect that in a tiny minority of cases things like Fisherian Runaway, complex signalling and just plain luck happen to throw some individual toward the 'general intelligence' path (and a bunch of other deal-breakers to not happen on the way). I'd expect any intelligent agents to observe that they are on a planet, in a galaxy in an Everett Branch where life had evolved much like you said.

Replies from: Epiphany, Kawoomba, Manfred
comment by Epiphany · 2012-10-08T06:07:46.200Z · LW(p) · GW(p)

Hmm. I notice that I was not as specific as you are. I didn't say anything about what "most" life forms would be like or whether there would be lots of smart life forms. I haven't really done a thorough retrodiction on evolution, to tell the truth. But I am really liking this new imagination trick of "try to predict the past if the theory was true" (which is subtly different from my other tricks like "is there anything in the past that supports / refutes this?") and its pleasant atheism-promoting effect on the remnants of my dead agnosticism phase. I'm glad I asked this question and that Gwern helped.

Thinking it out, I do not agree with your green goo hypothesis. I think that as long as there were mutations in the green goo's pattern (and stability in this pattern would be the exception, not the rule, due to the complexity of making a self-replicating, self-incarnating pattern, and due to environmental differences more complex and diverse than the green goo's pattern would be able to expect) and as long as there was always room for improvement (for something this complex that evolved randomly, perfection in the pattern would be the exception, not the rule), it would have to change and mutate and new variations would inevitably emerge.

What would it take to have that kind of stability in life forms? Other than a perfectly stable planet? The life game is very, very complex.

I think, perhaps, a drastic reduction in the number of physical laws (when you have all kinds of neat toys to play with from electricity to friction, room for improvement is immense), as well as in the number of substances available (otherwise the goo will only expand and encounter new things which promote adaptations), MIGHT result in a simple life form becoming "perfect" for its environment and then stabilizing its genes as a way of optimizing perfection.

I think diversity and increasing improvement is more likely to result from evolution than perfect, stable green goo.

Replies from: wedrifid
comment by wedrifid · 2012-10-08T14:28:34.384Z · LW(p) · GW(p)

Hmm. I notice that I was not as specific as you are. I didn't say anything about what "most" life forms would be like or whether there would be lots of smart life forms.

We may also have meant different things by "if life on earth evolved". I read it as "conditional on self replicating things we could call 'life' emerged on earth, how would I expect things to proceed" where it could also have meant "conditional on intelligent life like we know it having been evolved, how would I expect that process to have gone".

What I was intending to convey was not so much that one stable form of goo would remain permanently but rather that there is a significant component of the great filter in the stages between life emerging and general-intelligence evolving as well as the component before life emerges at all. I expect most planets where life evolves at all to not evolve general intelligence or even other lifeforms as interesting as what we consider lesser animals. I expect it to get stuck in local minima rather frequently.

comment by Kawoomba · 2012-10-08T08:00:09.947Z · LW(p) · GW(p)

I disagree. The incentivising force for continued adaptation is changes in your environment (including your fellow other species). Static goo - or uniformly adapting goo - cannot be optimal for all of a planet at once, leaving room to be outcompeted by diversifying dark-green goo, which may eventually evolve into goo-man (I mean, hu-man):

A planet filled with homogeneous green goo would still be subject to offering advantages based on adaptation on two major axes:

1) Planets universally offer different conditions for habitats, pole temperature versus equatorial temperature, seismic activities on active planets, surface versus underground habitats. The green goo would eventually split off into various types, each best suited to the environment. There is no such thing as an "optimal green goo for every environment"; optimal refers to a specific set of conditions. Some tasks are hard for single-celled organisms to fulfill, which is probably why the uniform green goo that life developed as on earth diversified while spreading, and why bacteria, while ubiquitous, still aren't considered the dominant life form.

2) As a hypothetical, even a planet transformed into a uniform green goo blob in space would be an environment in itself, allowing for niches for different forms of life (as long as there's still some entropy to waste i.e. a mechanism for mutation). For a crude comparison, think of lava as goo on a different time scale.

Lastly, if you allow certain variations in your green goo, you could well argue that earth as it is now is an amalgam of various sorts of green goo - us. Especially from the vantage point of our basic goo unit - the gene. See the goo now?

(To me, the curious thing isn't the eventual appearance of memetic-temetic based adaptability (intelligence), but of subjective experience to go with it. Good fiction novel on that: Peter Watts’ Blindsight.)

comment by Manfred · 2012-10-08T10:41:16.304Z · LW(p) · GW(p)

I would expect most life to just end up as planets full of green goo (ie. like grey goo but natural).

One might compare this to ecosystems of reproducing known-number iterated prisoner's dilemma robots - the analogous idea is that these ecosystems will usually end up as "tit for tat goo."

Tit for tat is reliable. Like algae in the sea of early earth, tit for tat can serve as a "background" for our ecosystem - cooperation is harvesting energy from the sun, defection is being a predator, but if everyone tries to be a predator everyone dies. So algae reproduces. But also like a sea full of algae, there are predatory / parasitic strategies that work really well once the plants are common, like defecting at the end, or eating plants. If a tit for tat robot has the first mutant baby that defects at the end, that baby will only play against tit for tat robots, so it will defect successfully and have more babies than usual, eventually leading to a whole new strain. The zooplankton of the ecosystem. But then if that becomes common, it may be worth it to produce a parasite to the parasite - defecting twice from the end. The bigger the possible rewards, the more layers of strategies will be viable. Tit for tat goo is unstable - plants quickly grow herbivores, and herbivores can sometimes grow predators.

And that's just iterated prisoner's dilemma. Add in more dimensions, multiple equilibria... things could get pretty complicated.
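
A sketch of the "first mutant baby" story in code (payoffs are the usual illustrative T=5, R=3, P=1, S=0; the strategies and round count are invented for the example): in a known-length game, a mutant that defects on the final round strictly beats pure tit for tat in a tit-for-tat-saturated population.

```python
# Sketch: known-number iterated prisoner's dilemma. An end-defecting
# mutant exploits a population of pure tit-for-tat players.

ROUNDS = 10
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else "C"

def end_defector(my_hist, their_hist):
    if len(my_hist) == ROUNDS - 1:  # the known final round
        return "D"
    return tit_for_tat(my_hist, their_hist)

def play(p1, p2):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(ROUNDS):
        m1, m2 = p1(h1, h2), p2(h2, h1)
        a, b = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += a; s2 += b
    return s1, s2

print(play(tit_for_tat, tit_for_tat))   # (30, 30): mutual cooperation
print(play(end_defector, tit_for_tat))  # (32, 27): the mutant profits
```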

comment by CCC · 2012-10-08T07:41:56.963Z · LW(p) · GW(p)

Hmm for examples... if there were an all-knowing, all-powerful, all-loving God, what would I predict? If life on earth evolved, what would I predict?

What would God do? Make something awesome or lounge around feeling enlightened. I'm personifying here, and I know it... I have no idea what a God would do but I suspect that it would not be "Make a bunch of creatures knowing that a bunch of them will experience horrible suffering. Demand that they have faith but confuse them with a bunch of different religions to choose from. Create each of them knowing exactly how they'll reason and what they'll experience and what that combination will result in and demand certain beliefs that won't make sense to some of them."

I find myself more inclined to ask the opposite question. That is, assuming that God exists (unpack 'God': A being both omniscient and omnipotent; unpack "omnipotent": having the equivalent of root access to the universe), why is the universe as it is?

If God exists, then the universe is clearly there for a reason. A certain amount of observation has suggested to me that a part of this reason appears to be related to the existence of free will. (Reason: Most of the present evil in the world appears to be caused by the free will of other humans. Thus, I conclude that the presence of free will is more important to God than the total eradication of evil; totally eradicating all evil would eliminate free will).

I haven't really got too much beyond that, yet.

Replies from: Epiphany, drethelin, ArisKatsaris
comment by Epiphany · 2012-10-08T08:40:04.566Z · LW(p) · GW(p)

The idea that evil is evidence that God gives us free will is contradicted by the existence of evil. I identified some potential unreasoned assumptions in this view:

  • Unreasoned Assumption #1: Evil people want to be evil.
  • Unreasoned Assumption #2: Evil people have the ability to change that they're evil.
  • Unreasoned Assumption #3: Evil people know they're being evil.

In my experience most people who do bad things do not know that they're being evil, don't want to be evil, or can't change the fact that they're doing evil things. However, if they were made evil and don't want to be, they are evil against their will - this is not in support of free will. If they're not able to change that they're evil, they don't have an alternative to evil, so they're not choosing evil of their own free will. If they don't know they're doing evil then they weren't even given the proper opportunity to choose whether or not to be evil, which is not a situation most people want, so they can't be said to be evil of their own free will.

I can't tell myself "Being evil is so much fun that God just wants us to be free to do it." That does not seem to be the case.

And even if that was the case, why the heck did God make it fun to be evil? Why would you ever call it free will to enjoy evil and wish you didn't and be unable to change it?

How many people who find evil things fun would, of their own free will, prefer it if they did not find those things fun?

Most of them, in my experience.

For the free will idea to be supported, it would require that everyone has all of the following:

  • Ability to change evil behavior.
  • Ability to see own evil.
  • Ability to stop enjoying evil.

Replies from: CCC, Peterdjones
comment by CCC · 2012-10-08T10:26:52.054Z · LW(p) · GW(p)

Evil people want to be evil.

No, I don't think that's necessary. Sometimes, indeed often, evil is caused by people who simply don't care whether a given course of action is evil or not. Take, for example, the example of the owner of a factory. His factory produces chemical X during its production processes; nobody wants X, nobody likes X. If he dumps it in a lake and hopes that no-one notices, that's definitely evil (especially if people downstream will be drinking the water), but that's not out of a desire to be a moustache-twirling evil villain - that's out of a desire to save on the cost of disposing of it properly.

Evil people have the ability to change that they're evil (If they don't, the evil is not due to free will).

True. A lot of evil can be changed.

Evil people know they're being evil.

Again, not necessary. It merely needs to be reasonably possible for evil people to find out whether or not they are being evil. Sometimes, this requires a fair amount of study. Take the example of a large corporation that's looking for a factory to produce some goods for them. Factory A in Europe says it can produce it for a hundred Euros per item; factory B in China says it can make the same item for fifty Euros per unit. The corporation picks B, and doesn't go and have a look at the appalling conditions that the factory workers are enduring in a very aggressive attempt to cut costs. (Factory B's managers will probably claim that it is a wonderful place to work unless someone actually goes there and looks).

I'm not saying that being evil is at all fun. I'm saying that it's something that some people do; usually, I suspect, because there's something else they care about more. Most of the time, they're either not aware that they are being evil (usually because they never bothered to just sit down and think through the consequences of their actions) or the potential benefit to them is high enough that they don't care about the negative consequences (any action that a company takes to protect a monopoly on a given product or service from fair competition probably falls under here).

comment by Peterdjones · 2012-10-08T20:04:00.544Z · LW(p) · GW(p)

In my experience most people who do bad things do not know that they're being evil, don't want to be evil, or can't change the fact that they're doing evil things.

That isn't a straightforward piece of evidence. Many would describe evil as the deliberate committal of harm. By that definition, there's simply no such thing as an unwilling or unknowing evil.

comment by drethelin · 2012-10-08T09:18:34.504Z · LW(p) · GW(p)

Most of the evil in the world is caused by god. People starve, because god made us to hunger. This can make you rob, beat, and kill to survive. People are lonely, horny, prideful, angry and vengeful. This makes them fight each other for status, war and plot for glory, and rape for sex. God gave us glands, faulty brains, and hormones. In what sense is our will free? Does someone choose to be born during a drought, or to be infected with malaria by a mosquito?

If I have to assume an involved creator, I have to assume we're either entertainment or an experiment.

Replies from: CCC
comment by CCC · 2012-10-08T10:34:13.384Z · LW(p) · GW(p)

Yes, God gave us glands and hormones. And then God allowed us to override them. He gave us faulty brains, but allowed us to see the faults and train ourselves to avoid them. People starve - but the supermarkets are full of food. Starvation is an economic problem, not a biological one, and the economy is created by, used and ruled by humanity. There's disease, yes, but there are also doctors. (Incidentally, I have heard the question asked - wouldn't the world be a better place if some of the hard corners were rounded off, if, in effect, it were a padded room instead of a hard, steel floor, so that it would hurt less when we fell. The trouble with that is that, for all I know, we are in the padded room, and what we call steel is simply a slight stiffness in the padding... if we've never seen real steel, how would we know the difference?)

Of course, we might well be entertainment - that's a possibility. I guess we might be an experiment as well, though the trouble with that is that an omniscient being would know the result of the experiment before running it (which puts us back into being entertainment again).

Replies from: drethelin
comment by drethelin · 2012-10-08T13:08:06.305Z · LW(p) · GW(p)

Even if you are correct that starvation and disease are both solvable now, so what? Are the thousands of years of human history before now irrelevant? More people have died than are alive today.

The world doesn't just hurt when we "fall", it hurts many people all the time, for no reason. People are born without limbs, people are struck by lightning, people are born depressed and suicidal. Our minds are built to suffer, evolved to use the pain of existence to encourage us to reproduce. If god could subject us to worse, how does that let you call what we have now good?

Replies from: CCC
comment by CCC · 2012-10-09T12:25:46.721Z · LW(p) · GW(p)

You do raise a very good point here. Even if people in the past had all acted in the most perfect possible way, millions would have died of old age in any case during that time. If God (unpack: omnipotent, omniscient being) exists, therefore, then this must have been a design feature of the universe; or at least, one that He is unwilling to stop.

It's at this point that the question of whether an afterlife exists enters the debate. What death means, for the person who dies, changes pretty dramatically between the universe where an afterlife exists and the universe where an afterlife doesn't exist; and an omniscient being has access to this datum, and can plan according to it.

It is always possible, of course, that an omniscient, omnipotent being might not be good. I doubt the extreme of evil (life is too pleasant for me to believe that that is true), but there is certainly the possibility of indifference to consider.

comment by ArisKatsaris · 2012-10-08T11:26:47.787Z · LW(p) · GW(p)

Most of the present evil in the world appears to be caused by the free will of other humans.

By what kind of calculation have you derived this? In 2009, there were 2,437,163 deaths in USA. The number of murders and suicides for that year put together were only 52,308. (http://www.cdc.gov/nchs/fastats/deaths.htm http://www.disastercenter.com/crime/uscrime.htm )

So, if we just went by this simplistic calculation based on human deaths, only about 2.1% of the evil in the USA would appear to be caused by the free will of other humans -- thus leaving 97.9% of this particular evil as "God"'s province.
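
The arithmetic behind the 2.1% figure:

```latex
\frac{52{,}308}{2{,}437{,}163} \;\approx\; 0.0215 \;\approx\; 2.1\%.
```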

But perhaps you can tell us your own set of calculations and how they reached the conclusion that most of the evil is caused by people's free will?

Replies from: CCC
comment by CCC · 2012-10-08T12:29:45.838Z · LW(p) · GW(p)

By what kind of calculation have you derived this?

I admit, I haven't sat down and calculated it; it was merely an impression that I had received. I'm not sure whether the number of deaths is necessarily an accurate measure of evil - torture, for example, is evil but results in no deaths, and the possibility of an afterlife may mean that death is not, in and of itself, always evil - but I'll accept that there is at least some correlation with the figure you have chosen.

So. Let me take a look at the page that you have provided (http://www.cdc.gov/nchs/fastats/deaths.htm). I see that the two leading causes of death are heart disease and cancer, adding up to close to half of the deaths for 2009. Heart disease is caused, in large part, by such things as poor diet and insufficient exercise (http://www.cdc.gov/heartdisease/facts.htm). By this measure, therefore, any popular restaurant that does not serve healthy food (and only healthy food, or at least if there is less healthy food then it is clearly marked as such and not more expensive) is encouraging poor diet, increasing mortality due to heart disease, and is therefore evil. In fact, something like 34% of US adults have obesity as a heart disease risk factor (the restaurants are not the only holders of blame here) - and that's not the highest risk factor (inactivity is, at 53%).

As to cancer, the major villain there is tobacco, estimated to be responsible for 30% of cancer deaths and increasing. (http://www.ncbi.nlm.nih.gov/pubmed/7017215). Yet people still sell cigarettes (and other people still buy them - I do not understand why anyone would want to actually spend money on this, yet amazingly they do).

That's a clearly free-willed agency involved in over 30% of heart disease and cancer deaths, which make up over one-third of total deaths. So that's over 10% of total deaths. Before even looking at murders, suicides, and chronic lower respiratory diseases (I expect to find tobacco, and therefore the tobacco industry, as a major culprit there as well). I suspect that a closer analysis may increase that figure.
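
Making the back-of-envelope step explicit (mirroring the comment's own rounding):

```latex
0.30 \times \tfrac{1}{3} \;=\; 0.10,
```

so, on these figures, free-willed agency is implicated in a little over ten percent of all deaths before the other categories are even examined.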

On the other hand, there are deaths that can have no human agency involved. A common example here is deaths due to natural disasters. Here's a page (http://voices.yahoo.com/worst-natural-disasters-2009-5105563.html) that claims to list the ten worst natural disasters of 2009, worldwide. Total deaths: 10469, including 10000 for the H1N1 flu pandemic. The tenth disaster on the list had only three fatalities, so unless there were a whole lot of disasters in 2009, there can't have been all that many fatalities due to natural disasters.

...to get a really good idea of what's going on here, I'd need to sit down for a long time with a pretty complete set of statistics. I don't have a full analysis to back up my claim here, yet.

comment by gwern · 2012-10-08T16:23:17.127Z · LW(p) · GW(p)

Yes, that's pretty much what retrodiction is. It's not as good as prediction since you can come up with theories over-fitted to exactly the past (a big problem with financial retrodiction: people routinely find some complex strategy or apparent arbitrage when running over the last 30 years of market data, which disappears the moment they try to use it), but if predictions are unavailable, at least retrodiction keeps you concretely grounded.
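
A sketch of that failure mode (synthetic data, invented parameters): search many arbitrary trading rules over "past" returns that contain no signal at all, keep the best, and watch its apparent edge vanish out of sample.

```python
# Sketch of backtest overfitting: the best of 500 random rules looks
# profitable on pure-noise history, then reverts to nothing on new data.
import random

random.seed(1)
past   = [random.gauss(0, 1) for _ in range(1000)]  # noise-only "history"
future = [random.gauss(0, 1) for _ in range(1000)]  # noise-only "future"

def strategy_return(returns, rule):
    # rule: position (+1 long / -1 short) keyed by the signs of the
    # previous three days' returns -- complex enough to fit noise.
    total, hist = 0.0, [0.0, 0.0, 0.0]
    for r in returns:
        key = tuple(x > 0 for x in hist)
        total += rule[key] * r
        hist = hist[1:] + [r]
    return total

keys = [(a, b, c) for a in (False, True) for b in (False, True) for c in (False, True)]
candidates = [{k: random.choice((-1, 1)) for k in keys} for _ in range(500)]
best = max(candidates, key=lambda rule: strategy_return(past, rule))

print(strategy_return(past, best))    # large and positive: fitted to noise
print(strategy_return(future, best))  # near zero: the "edge" was retrodiction
```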

I'm not sure I would use God as an example. Theists like Plantinga have done a good job showing that they can come up with a version of God + concepts like 'free will' which is logically consistent with any observation, so neither retrodiction nor prediction matters for their God.

Replies from: Epiphany
comment by Epiphany · 2012-10-08T17:21:17.135Z · LW(p) · GW(p)

I love it. Retrodiction is awesome.

I think I broke the free will God argument. The idea that evil is evidence that God gives us free will is contradicted by the existence of evil. What do you think?

Replies from: gwern
comment by gwern · 2012-10-08T17:50:25.487Z · LW(p) · GW(p)

In general, if someone thinks they've said something that is both new and valuable about theodicy: they haven't.

Looking at your link, I have no idea what you're trying to say.

Replies from: Epiphany
comment by Epiphany · 2012-10-08T19:56:07.889Z · LW(p) · GW(p)

Well, I reworded my point as "The idea that evil is evidence that God gives us free will is contradicted by the existence of evil" but if you don't think it's going to be interesting, don't bother.

comment by Peterdjones · 2012-10-08T20:19:22.350Z · LW(p) · GW(p)

According to Traditional Science, retrospective predictions don't count—so why bother making them?

Who told you that? Einstein's retrodiction of the perihelion shift of Mercury is an oft-quoted example from a century back.

comment by eigen · 2019-07-21T18:47:54.171Z · LW(p) · GW(p)

Oh wow, I had sort of a feeling that accepting how wrong we can be was not the ultimate goal; of course, it cannot be. I'm interested in where this is going further.

comment by [deleted] · 2019-06-23T01:13:15.463Z · LW(p) · GW(p)