Beyond the Reach of God

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-04T15:42:57.000Z · LW · GW · Legacy · 280 comments

Today's post is a tad gloomier than usual, as I measure such things.  It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me.  Those readers sympathetic to arguments like, "It's important to keep our biases because they help us stay happy," should consider not reading.  (Unless they have something to protect, including their own life.)

So!  Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong.  Not as the result of any explicit propositional verbal belief.  More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.

Some would account this a virtue (zettai daijobu da yo), and others would say that it's a thing necessary for mental health.

But we don't live in that world.  We live in the world beyond the reach of God.

It's been a long, long time since I believed in God.  Growing up in an Orthodox Jewish family, I can recall the last time I remember asking God for something, though I don't remember how old I was.  I was putting in some request on behalf of the next-door-neighboring boy, I forget what exactly—something along the lines of, "I hope things turn out all right for him," or maybe "I hope he becomes Jewish."

I remember what it was like to have some higher authority to appeal to, to take care of things I couldn't handle myself.  I didn't think of it as "warm", because I had no alternative to compare it to.  I just took it for granted.

Still I recall, though only from distant childhood, what it's like to live in the conceptually impossible possible world where God exists.  Really exists, in the way that children and rationalists take all their beliefs at face value.

In the world where God exists, does God intervene to optimize everything?  Regardless of what rabbis assert about the fundamental nature of reality, the take-it-seriously operational answer to this question is obviously "No".  You can't ask God to bring you a lemonade from the refrigerator instead of getting one yourself.  When I believed in God after the serious fashion of a child, so very long ago, I didn't believe that.

Postulating that particular divine inaction doesn't provoke a full-blown theological crisis.  If you said to me, "I have constructed a benevolent superintelligent nanotech-user", and I said "Give me a banana," and no banana appeared, this would not yet disprove your statement.  Human parents don't always do everything their children ask.  There are some decent fun-theoretic arguments—I even believe them myself—against the idea that the best kind of help you can offer someone, is to always immediately give them everything they want.  I don't think that eudaimonia is formulating goals and having them instantly fulfilled; I don't want to become a simple wanting-thing that never has to plan or act or think.

So it's not necessarily an attempt to avoid falsification, to say that God does not grant all prayers.  Even a Friendly AI might not respond to every request.

But clearly, there exists some threshold of horror awful enough that God will intervene.  I remember that being true, when I believed after the fashion of a child.

The God who does not intervene at all, no matter how bad things get—that's an obvious attempt to avoid falsification, to protect a belief-in-belief.  Sufficiently young children don't have the deep-down knowledge that God doesn't really exist.  They really expect to see a dragon in their garage.  They have no reason to imagine a loving God who never acts.  Where exactly is the boundary of sufficient awfulness?  Even a child can imagine arguing over the precise threshold.  But of course God will draw the line somewhere.  Few indeed are the loving parents who, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation.  I don't think that even Buddhism allows that.  So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen.  There is no soul anywhere that need fear true annihilation; God will prevent it.

What if you build your own simulated universe?  The classic example of a simulated universe is Conway's Game of Life.  I do urge you to investigate Life if you've never played it—it's important for comprehending the notion of "physical law".  Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, albeit it might be rather fragile and awkward.  Other cellular automata would make it simpler.
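
For readers who have never played with Life, here is a minimal sketch of a single update step (in Python; the set-of-live-cells representation, the function names, and the glider example are illustrative choices of mine, not anything specified in the post).  It makes the notion of "physical law" concrete: the next state is a pure function of the current state and the rules, and of nothing else.  The what-if question a few paragraphs below amounts to asking what repeated application of `step` returns for a given starting set.

```python
from itertools import product

def neighbors(cell):
    """Return the eight cells adjacent to `cell` on an unbounded grid."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """One generation of Conway's Life.

    A cell is alive in the next generation iff it has exactly three live
    neighbors, or it is currently alive and has exactly two live neighbors.
    """
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

# A glider: every future state is fully determined by this set and `step`.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    world = step(world)
print(sorted(world))  # the same glider shape, shifted one cell diagonally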

Could you, by creating a simulated universe, escape the reach of God?  Could you simulate a Game of Life containing sentient entities, and torture the beings therein?  But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer's transistors.  If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene.  God being omnipresent, there is no refuge anywhere for true horror:  Life is fair.

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?

Not even God can modify the answer to this question, unless you believe that God can implement logical impossibilities.  Even as a very young child, I don't remember believing that.  (And why would you need to believe it, if God can modify anything that actually exists?)

What does Life look like, in this imaginary world where every step follows only from its immediate predecessor?  Where things only ever happen, or don't happen, because of the cellular automaton rules?  Where the initial conditions and rules don't describe any God that checks over each state?  What does it look like, the world beyond the reach of God?

That world wouldn't be fair.  If the initial state contained the seeds of something that could self-replicate, natural selection might or might not take place, and complex life might or might not evolve, and that life might or might not become sentient, with no God to guide the evolution.  That world might evolve the equivalent of conscious cows, or conscious dolphins, that lacked hands to improve their condition; maybe they would be eaten by conscious wolves who never thought that they were doing wrong, or cared.

If in a vast plethora of worlds, something like humans evolved, then they would suffer from diseases—not to teach them any lessons, but only because viruses happened to evolve as well, under the cellular automaton rules.

If the people of that world are happy, or unhappy, the causes of their happiness or unhappiness may have nothing to do with good or bad choices they made.  Nothing to do with free will or lessons learned.  In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average.  Who prevents it?  God would prevent it from ever actually happening, of course; He would at the very least visit some shade of gloom in the Khan's heart.  But in the mathematical answer to the question What if? there is no God in the axioms.  So if the cellular automaton rules say that the Khan is happy, that, simply, is the whole and only answer to the what-if question.  There is nothing, absolutely nothing, to prevent it.

And if the Khan tortures people horribly to death over the course of days, for his own amusement perhaps?  They will call out for help, perhaps imagining a God.  And if you really wrote that cellular automaton, God would intervene in your program, of course.  But in the what-if question, what the cellular automaton would do under the mathematical rules, there isn't any God in the system.  Since the physical laws contain no specification of a utility function—in particular, no prohibition against torture—the victims will be saved only if the right cells happen to be 0 or 1.  And it's not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that.  So the victims die, screaming, and no one helps them; that is the answer to the what-if question.

Could the victims be completely innocent?  Why not, in the what-if world?  If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple.  A cell with exactly three living neighbors is alive in the next generation, whether or not it was alive before; a cell with exactly two living neighbors keeps its current state; every other cell dies or stays dead.  There isn't anything in there about only innocent people not being horribly tortured for indefinite periods.

Is this world starting to sound familiar?

Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited:  Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

For so many lives and so much loss to turn on a single event, seems disproportionate.  The Divine Plan ought to make more sense than that.  You can believe in a Divine Plan without believing in God—Karl Marx surely did.  You shouldn't have millions of lives depending on a casual choice, an hour's timing, the speed of a microscopic flagellum.  It ought not to be allowed.  It's too disproportionate.  Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.

But in the world beyond the reach of God, there isn't any clause in the physical axioms which says "things have to make sense" or "big effects need big causes" or "history runs on reasons too important to be so fragile".  There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.

The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe.  Many who are atheists, still think as if certain things are not allowed.  They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect.  But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors.  There is no particular empirical justification that I happen to have heard of, for doubting this.  The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.

But why not?  What prohibits it?

In the God-universe, God prohibits it.  To recognize this is to recognize that we don't live in that universe.  We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else.  Whatever physics says will happen, will happen.  Absolutely anything, good or bad, will happen.  And there is nothing in the laws of physics to lift this rule even for the really extreme cases, where you might expect Nature to be a little more reasonable.

Reading William Shirer's The Rise and Fall of the Third Reich, listening to him describe the disbelief that he and others felt upon discovering the full scope of Nazi atrocities, I thought of what a strange thing it was, to read all that, and know, already, that there wasn't a single protection against it.  To just read through the whole book and accept it; horrified, but not at all disbelieving, because I'd already understood what kind of world I lived in.

Once upon a time, I believed that the extinction of humanity was not allowed.  And others who call themselves rationalists may yet have things they trust.  They might be called "positive-sum games", or "democracy", or "technology", but they are sacred.  The mark of this sacredness is that these trustworthy things can't lead to anything really bad, or can't be permanently defaced, at least not without a compensatory silver lining.  In that sense they can be trusted, even if a few bad things happen here and there.

The unfolding history of Earth can't ever turn from its positive-sum trend to a negative-sum trend; that is not allowed.  Democracies—modern liberal democracies, anyway—won't ever legalize torture.  Technology has done so much good up until now, that there can't possibly be a Black Swan technology that breaks the trend and does more harm than all the good up until this point.

There are all sorts of clever arguments why such things can't possibly happen.  But the source of these arguments is a much deeper belief that such things are not allowed.  Yet who prohibits?  Who prevents it from happening?  If you can't visualize at least one lawful universe where physics says that such dreadful things happen—and so they do happen, there being nowhere to appeal the verdict—then you aren't yet ready to argue probabilities.

Could it really be that sentient beings have died absolutely for thousands or millions of years, with no soul and no afterlife—and not as part of any grand plan of Nature—not to teach any great lesson about the meaningfulness or meaninglessness of life—not even to teach any profound lesson about what is impossible—so that a trick as simple and stupid-sounding as vitrifying people in liquid nitrogen can save them from total annihilation—and a 10-second rejection of the silly idea can destroy someone's soul?  Can it be that a computer programmer who signs a few papers and buys a life-insurance policy continues into the far future, while Einstein rots in a grave?  We can be sure of one thing:  God wouldn't allow it.  Anything that ridiculous and disproportionate would be ruled out.  It would make a mockery of the Divine Plan—a mockery of the strong reasons why things must be the way they are.

You can have secular rationalizations for things being not allowed.  So it helps to imagine that there is a God, benevolent as you understand goodness—a God who enforces throughout Reality a minimum of fairness and justice—whose plans make sense and depend proportionally on people's choices—who will never permit absolute horror—who does not always intervene, but who at least prohibits universes wrenched completely off their track... to imagine all this, but also imagine that you, yourself, live in a what-if world of pure mathematics—a world beyond the reach of God, an utterly unprotected world where anything at all can happen.

If there's any reader still reading this, who thinks that being happy counts for more than anything in life, then maybe they shouldn't spend much time pondering the unprotectedness of their existence.  Maybe think of it just long enough to sign up themselves and their family for cryonics, and/or write a check to an existential-risk-mitigation agency now and then.  And wear a seatbelt and get health insurance and all those other dreary necessary things that can destroy your life if you miss that one step... but aside from that, if you want to be happy, meditating on the fragility of life isn't going to help.

But this post was written for those who have something to protect.

What can a twelfth-century peasant do to save themselves from annihilation?  Nothing.  Nature's little challenges aren't always fair.  When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die.  That's how it is for people, and it isn't any different for planets.  Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against:  Absolute, utter, exceptionless neutrality.

Knowing this won't always save you.  It wouldn't save a twelfth-century peasant, even if they knew.  If you think that a rationalist who fully understands the mess they're in, must surely be able to find a way out—then you trust rationality, enough said.

Some commenter is bound to castigate me for putting too dark a tone on all this, and in response they will list out all the reasons why it's lovely to live in a neutral universe.  Life is allowed to be a little dark, after all; but not darker than a certain point, unless there's a silver lining.

Still, because I don't want to create needless despair, I will say a few hopeful words at this point:

If humanity's future unfolds in the right way, we might be able to make our future light cone fair(er).  We can't modify fundamental physics, but on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe.  There's a lot of stuff out there that we can't touch—but it may help to consider everything that isn't in our future light cone, as being part of the "generalized past".  As if it had all already happened.  There's at least the prospect of defeating neutrality, in the only future we can touch—the only world that it accomplishes something to care about.

Someday, maybe, immature minds will reliably be sheltered.  Even if children go through the equivalent of not getting a lollipop, or even burning a finger, they won't ever be run over by cars.

And the adults wouldn't be in so much danger.  A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure.  The raw universe wouldn't seem so harsh, would be only another problem to be solved.

The problem is that building an adult is itself an adult challenge.  That's what I finally realized, years ago.

If there is a fair(er) universe, we have to get there starting from this world—the neutral world, the world of hard concrete with no padding, the world where challenges are not calibrated to your skills.

Not every child needs to stare Nature in the eyes.  Buckling a seatbelt, or writing a check, is not that complicated or deadly.  I don't say that every rationalist should meditate on neutrality.  I don't say that every rationalist should think all these unpleasant thoughts.  But anyone who plans on confronting an uncalibrated challenge of instant death, must not avoid them.

What does a child need to do—what rules should they follow, how should they behave—to solve an adult problem?

280 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Caledonian2 · 2008-10-04T15:50:44.000Z · LW(p) · GW(p)

I don't think that even Buddhism allows that.
Depends on the version of Buddhism and who you ask... but yes, even the utter destruction of the mind.

Of course, 'utter destruction' is not a well-defined term. Depending on who you ask, nothing in Buddhism is ever actually destroyed. Or in the Dust hypothesis, or the Library of Babel... the existence of the mind never ends, because we've never beaten our wives in the first place.

comment by pdf23ds · 2008-10-04T16:23:07.000Z · LW(p) · GW(p)

I live with this awareness.

comment by Anti-reductionist · 2008-10-04T16:53:31.000Z · LW(p) · GW(p)

Summary: "Bad things happen, which proves God doesn't exist." Same old argument that atheists have thrown around for hundreds, probably thousands, of years. The standard rebuttal is that evil is Man's own fault, for abusing free will. You don't have to agree, but at least quit pretending that you've proven anything.

Replies from: wizzwizz4
comment by wizzwizz4 · 2019-07-14T11:15:35.400Z · LW(p) · GW(p)

Eliezer is an atheist. But this article doesn't say "there is no God"; it says "act as though God won't save you".

comment by Chad2 · 2008-10-04T16:59:47.000Z · LW(p) · GW(p)

"Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe"

Bit of a leap in logic here, no?

Replies from: None, Baughn, hwc, textonyx
comment by [deleted] · 2009-08-26T21:26:25.122Z · LW(p) · GW(p)

Read Gödel, Escher, Bach. And google "Turing Machine".

comment by Baughn · 2011-11-05T13:06:42.360Z · LW(p) · GW(p)

Worst case, our laws of physics seem to be Turing-computable.

comment by hwc · 2011-12-07T12:39:37.445Z · LW(p) · GW(p)

The leap is that the Church–Turing thesis applies to human (“sentient”) cognition. Many theists deny this.

Replies from: Kyro
comment by Kyro · 2013-07-02T20:30:36.331Z · LW(p) · GW(p)

Many theists deny this...

To elaborate, if God exists then consciousness depends on having an immaterial soul. If consciousness depends on an immaterial soul, then simulated entities can never truly be conscious. If the simulated entities aren't really conscious they are incapable of suffering, and there's no reason for God to intervene in the simulation.

The thought experiment is not a very effective argument against theism, as it assumes non-existence of souls, but it serves the purpose of illustrating how unthinkably horrible things can actually happen.

Replies from: hwc, SaidAchmiz
comment by hwc · 2013-07-06T17:17:21.085Z · LW(p) · GW(p)

if God exists then consciousness depends on having an immaterial soul.

I translate that into logical notation:

(God exists) -> For all X (X is conscious -> X has an immaterial soul)

I don't concede this conditional. I can imagine a universe with a personal creator, where consciousness is a material property of certain types of complex systems, but souls don't exist.

Replies from: hwc
comment by hwc · 2013-07-06T17:36:33.838Z · LW(p) · GW(p)

Eliezer (I think) feels the same way about the necessity of souls as about the Judeo-Christian god. Interesting hypothesis, but too complex to have anything but a small prior. Then no supporting evidence shows up, despite millennia of looking, reducing the likelihood further.

Replies from: wedrifid
comment by wedrifid · 2013-07-07T05:55:07.929Z · LW(p) · GW(p)

Eliezer (I think) feels the same way about the necessity of souls as about the Judeo-Christian god. Interesting hypothesis, but too complex to have anything but a small prior. Then no supporting evidence shows up, despite millennia of looking, reducing the likelihood further.

Has Eliezer suggested that he believes that the Judeo-Christian god is an "Interesting hypothesis"? My model of him wouldn't say that.

Replies from: hwc
comment by hwc · 2013-07-21T15:34:57.002Z · LW(p) · GW(p)

I think I meant “interesting” in a sarcastic tone.

Another way of putting it: “You (theists) claim a high level of belief in this hypothesis. Because so many people (including close family members) take this position, I have thought about this hypothesis and find it too complex to have anything but a small prior. Then I asked myself what observations are more likely if the hypothesis is true and which would be less likely. Then I looked around and found no evidence in favor of your hypothesis.”

comment by Said Achmiz (SaidAchmiz) · 2013-07-06T17:58:49.192Z · LW(p) · GW(p)

A number of your conditionals are false.

To elaborate, if God exists then consciousness depends on having an immaterial soul.

This is totally out of nowhere. What does God's existence have to do with what consciousness does or does not depend on? They seem to be entirely logically independent. (This one has already been handled by hwc.)

If consciousness depends on an immaterial soul, then simulated entities can never truly be conscious.

False again, because there's no a priori reason why simulated entities can't have an immaterial soul. (For instance, if God exists and is omnipotent, then by definition he could cause it to be the case that (some or all) simulated entities have immaterial souls.)

If the simulated entities aren't really conscious they are incapable of suffering

And false a third time, because it assumes that suffering depends on consciousness. A number of e.g. animal rights proponents deny this.

comment by textonyx · 2013-11-09T19:15:02.769Z · LW(p) · GW(p)

Seems to me your comment would have received more votes if you had amplified it a bit, considering the majority viewpoint of readers attracted to this blog. What Eli's assumption depends upon: The biblical words are 'God created man in his own image', which hinges on assuming God created the universe. Now, if God can create us in his own image, why can't we create a sentient AI in our own image? Did God pass on to us whatever "power" he used to endow us with sentience, so that we are also empowered to pass on sentience? Can we arrive at a correct answer just by looking at the evidence?

From the theistic approach, we live in a universe. One theory (Linde) is that we live in a multiverse with many local universes, each with its own laws of physics; perhaps they are Turing-computable? There is controversy about whether a baby universe is shaped by (inherits) laws from the parent universe, or whether the physical laws of the baby universe evolve on their own, essentially at random with respect to the parent universe. It is known that experiment cannot provide an answer to which way this unfolds. Looking at the physical laws of this universe (observations) doesn't provide insight as to which inheritable traits, if any, are passed from parent to baby universe.

In other words, even if the laws of this universe are Turing-computable (Zuse, Fredkin/Wolfram, and Deutsch in an expanded Church–Turing Thesis sense), that doesn't provide the foundation for a firm conclusion, because not all possibilities are excluded with this amount of information. Computability is an algorithmic, and thus cause-and-effect, structure. This doesn't answer the question of whether the origin of the universe is likewise computable. Most current theories introduce faster-than-light source moments, while computability and the law of cause and effect carry a speed-of-light limitation. A similar difficulty arises in the effort to reconcile Relativity and Quantum Theory to make them universal, called the Problem of Gravity, which is really a problem about defining and integrating time.

Cause and Effect unfolds over time. The question is called the Problem of First Cause. What was the cause which first spawned this universe in which we have evolved? Causes have effects. What was the cause whose effect was the beginning of the universe? There is a paradox if you define the first cause as identical to the first effect, because the progression through time is eliminated, which is the hallmark of a relativistic theory of the universe. Religions get around this by positing a God, one without beginning and without end, i.e. eternal. If God is identified as the same as the physical universe, then he comes to an end, assuming the universe has a finite span. The problem is that we are in the forest and can't see the extent of the forest, so we cannot come to a good answer based on observed evidence. We are not in the same category. So that is the leap of logic: 'Life having been proven to be Turing-complete' leads to an over-generalization, because one has to assume that this description is exactly analogous to an evidence-based conclusion that human sentience is a causal product of a physical universe evolving according to fixed universal physical laws which are identical to their origin.

I favor this view, but that doesn't mean it isn't circular, which is the heart of Chad2's criticism. It's yet another grasp for a firm foundation to answer the core question(s) of Who Am I and What Is My Purpose. The tacit assumption contained in Eli's quoted statement is inspired by the same need that invented Karma and Reincarnation to try to show that life is actually fair and that we can make sense of it. One doesn't need to be a theist to deny "The leap is that the Church–Turing thesis applies to human (“sentient”) cognition. Many theists deny this."; one just needs to be a critical thinker not ready to adopt a readily plausible answer, while using human reasoning as a great tool to explore consciousness conundrums.

comment by Alex_Martelli · 2008-10-04T17:06:02.000Z · LW(p) · GW(p)

"In sober historical fact", clear minds could already see in 1919 that the absurdity of the Treaty of Versailles (with its total ignorance of economic realities, and entirely fueled by hate and revenge) was preparing the next war -- each person (in both nominally winning and nominally defeated countries) being put in such unendurable situations that "he listens to whatever instruction of hope, illusion or revenge is carried to him on the air".

This was J.M. Keynes writing in 1919, when A. Hitler was working as a police spy for the Reichswehr, infiltrating a tiny party then named DAP (and only later renamed the NSDAP); Keynes' dire warnings had nothing specifically to do with this "irrelevant" individual, whom he had no doubt never even heard of -- there were plenty of other matches ready to set fire to a tinderbox world, after all; for example, at that time, Benito Mussolini was a much more prominent figure, a well-known and controversial journalist, and had just founded the "Fasci Italiani di Combattimento".

So your claim, that believing the European errors in 1919 made another great war extremely likely, "is an unreasonable belief", is absurd. You weaken your interesting general argument by trying to support it with such tripe; "inevitable" is always an overbid, but to opine that the situation in 1919 made another great war all too likely within a generation, quite independently of what individuals would be leading the various countries involved, is perfectly reasonable.

Keynes's strong and lucid prose could not make a difference in 1919 (even though his book was a best-seller and may have influenced British and American policies, France was too dead-set in its hate and thirst for revenge) -- but over a quarter of a century later, his ideas prevailed: after a brief attempt to de-industrialize Germany and push it back to a pastoral state (which he had already argued against in '19), ironically shortly after Keynes' death, the Marshall Plan was passed (in rough outline, what Keynes was advocating in '19...) -- and we didn't get yet another great European war after that.

Without Hitler, but with Versailles and without any decent reconstruction plan after the Great War, another such great war WAS extremely likely -- it could have differed in uncountable details and even in strategic outline, from the events as they actually unfolded, just like the way a forest fire in dry and thick woods can unfold in many ways that differ in detail... but what exact match or spark lights the fire is in a sense a detail -- the dry and flame-prone nature of the woods makes a conflagration far too likely to avoid it by removing one specific match, or spark: there will be other sparks or matches to play a similar role.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-04T17:14:02.000Z · LW(p) · GW(p)

The claim isn't that Germany would have been perfectly fine, and would never have started a war or done anything else extreme. And the claim is not that Hitler trashed a country that was ticking along happily.

The claim is that the history of the twentieth century would have gone substantially differently. World War II might not have happened. The tremendous role that Hitler's idiosyncrasies played in directing events, doesn't seem to leave much rational room for determinism here.

Replies from: kilobug, pnrjulius
comment by kilobug · 2011-11-05T14:01:12.703Z · LW(p) · GW(p)

Well, the rise of fascism and anti-Semitism in Europe at that time was widespread. It was not just one man. From the Dreyfus affair in France, to Mussolini and Franco, to the heated rivalries between the fascist leagues and the Popular Front in France... the whole of Europe after WW1 and the unfair Versailles treaty, then the disaster of the 1929 crisis, was a fertile land for all fascist movements.

World War II feels much more like a "natural consequence" of previous events (WW1, Versailles treaty, 1929 crisis) and general historical laws (that "populist" politicians thrive when the economic situation is bad), than of a single man. It would have been different with different leaders in the various major countries involved, sure. If Leon Blum had helped Republican Spain against Franco instead of letting it stand alone, things could have changed a lot. And many other events could have gone differently - of course, without Hitler, it would have been different.

But different enough that WWII wouldn't occur? Very unlikely to me - not impossible, but very unlikely with only a single turning point.

Replies from: Smokeskin, MugaSofer
comment by Smokeskin · 2013-01-07T15:01:24.788Z · LW(p) · GW(p)

Be aware of the hindsight bias: http://en.wikipedia.org/wiki/Hindsight_bias

comment by MugaSofer · 2013-04-15T10:46:27.844Z · LW(p) · GW(p)

Depends on how strictly you define "WWII", for one thing. For example, I've seen it argued that Hitler crippled the Nazi defense strategy to the extent they might well have won without him. Is it still WWII if it's the War for Freedom under the First Glorious Father? Probably. Still ...

comment by pnrjulius · 2012-07-05T02:15:49.213Z · LW(p) · GW(p)

It's a subtle matter, but... you clearly don't really mean determinism here, because you've said a hundred times before how the universe is ultimately deterministic even at the quantum level.

Maybe predictability is the word we want. Or maybe it's something else, like fairness or "moral non-neutrality"; it doesn't seem fair that Hitler could have that large an impact by himself, even though there's nothing remotely non-deterministic about that assertion.

Replies from: Eliezer_Yudkowsky, Sniffnoy
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-07-05T02:19:38.526Z · LW(p) · GW(p)

Macroscopic determinism, i.e., the belief that an outcome was not sensitive to small thermal (never mind quantum) fluctuations. If I'm hungry and somebody offers me a tasty hamburger, it's macroscopically determined that I'll say yes in almost all Everett branches; if Zimbabwe starts printing more money, it's macroscopically determined that their inflation rates will rise further.

Replies from: shminux
comment by shminux · 2012-07-05T23:47:45.134Z · LW(p) · GW(p)

Macroscopic determinism

The relevant mathematical term is well-posedness, specifically

The solution's behavior hardly changes, when there's a slight change in the initial condition

Specifically, the short-term changes are small or at least bounded, though the long term behavior may change drastically.
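
To put a formula behind that last sentence (a standard bound from ODE theory, offered only as an illustration, not something from this thread): if the dynamics are $\dot{x} = f(x)$ with $f$ Lipschitz with constant $L$, then Gronwall's inequality bounds the divergence of two solutions by

$$\|x(t) - y(t)\| \le \|x(0) - y(0)\|\, e^{Lt},$$

so over any fixed time horizon a small change in the initial condition produces only a small, bounded change in the solution, while the exponential factor leaves room for trajectories to diverge drastically in the long run, as in chaotic systems.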

comment by Sniffnoy · 2012-07-05T02:32:46.102Z · LW(p) · GW(p)

Perhaps something along the lines of "stability"? The idea being that small perturbations of input should lead to only small perturbations down the line. ("Stability" isn't really the proper word for that, but I'm not sure what is.)

comment by Ian_C. · 2008-10-04T17:21:40.000Z · LW(p) · GW(p)

Reminds me of this: "Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

But my question would be: Is the universe of cause and effect really so much less safe than the universe of God? At least in this universe, someone who has an evil whim is limited by the laws of cause and effect, e.g. Hitler had to build tanks first, which gave the Allies time to prepare. In that other universe, the Supreme Being decides he's bored with us and, zap, we're gone, with no rules he has to follow to achieve that outcome.

So why is relying on the goodness of God safer than relying on the inexorability of cause and effect?

Replies from: LawChan
comment by LawrenceC (LawChan) · 2014-10-13T17:06:29.015Z · LW(p) · GW(p)

Short Answer: It's not.

Longer Explanation: The way I understand it, the universe of God feels safer because we think of God as like us. In that world, there's a higher being out there. Since we model that being as having similar motivations, desires, etc., we believe that God will also follow some sort of morality and subscribe to basic ideas of fairness. So He'll be compelled to intervene in the case things get too bad.

The existence of God also makes you feel less responsible for your fate. For example, if he chooses to smite you, there's nothing you can do. But in a universe of Math, if you don't take action, no higher being is going to step in to hurt/harm you.

Replies from: CCC
comment by CCC · 2014-10-14T07:39:13.358Z · LW(p) · GW(p)

Since we model that being as having similar motivations, desires, etc., we believe that God will also follow some sort of morality and subscribe to basic ideas of fairness.

Also, consider that we exist. If there is such a supreme being, then logically, that supreme being does not object to our existence (since we have not yet been smited). Therefore, to said supreme being, our presence is either desirable or irrelevant. If desirable, our presence can be expected to continue; and the human ego will not allow many people to seriously consider themselves irrelevant, so that option is often simply not considered.

comment by pdf23ds · 2008-10-04T17:29:10.000Z · LW(p) · GW(p)

Given how widespread white nationalism is in America (i.e. it's a common phenomenon) and how intimately tied to fascism it is, I think that there's a substantial chance that the leader that would have taken Hitler's place would have shared his predilection for ethnic cleansing, even if not world domination.

comment by Caledonian2 · 2008-10-04T17:30:54.000Z · LW(p) · GW(p)

It looks more and more like all of this 'Friendly AI' garbage is just a reaction to Eliezer's departure from Judaism.

Which is not to say that this hasn't been obvious for a long time. But this is the closest Eliezer's ever come to simply acknowledging it.

He's already come to terms with the fact that reality doesn't conform to his ideas of what it should be. Now he just has to come to terms with the fact that his ideas do not determine what reality should be.

There isn't going to be a magical sky daddy no matter what we do. There's no such thing as magic. There's no such thing as 'Friendly'. It's not possible to make reality a safe and cozy haven, and trying to make it so will only cause harm.

Replies from: Dr_Manhattan, JoshuaZ, Torgamous
comment by Dr_Manhattan · 2011-01-24T02:10:02.105Z · LW(p) · GW(p)

It's not possible to make reality a safe and cozy haven, and trying to make it so will only cause harm.

I found an abandoned kitten once, brought it home, fed it to health and my mom found it a nice home. Have I caused it harm?

comment by JoshuaZ · 2011-01-24T02:53:57.340Z · LW(p) · GW(p)

There isn't going to be a magical sky daddy no matter what we do. There's no such thing as magic. There's no such thing as 'Friendly'. It's not possible to make reality a safe and cozy haven, and trying to make it so will only cause harm.

All major human goals involve trying to make reality more of a safe and cozy haven. This is true not just for things that seem far off, like trying to make Friendly AI, or cryonics, but also for simple everyday things like discovering new antibiotics or trying to find cures for diseases.

comment by Torgamous · 2011-11-16T16:59:30.956Z · LW(p) · GW(p)

The concept of "should" is not one the universe recognizes; it exists only in the human mind. So yes, his ideas do determine what should be.

Besides, "life sucks, let's fix it" and "God doesn't exist, let's build one" are far more productive viewpoints than "life sucks, deal with it" and "God doesn't exist, how terrible", even if they never amount to as much as they hope to. The idea that they "will only cause harm" is incredibly nebulous, and sounds more like an excuse to accept the status quo than a valid argument.

Replies from: thomblake
comment by thomblake · 2011-11-16T17:04:05.578Z · LW(p) · GW(p)

The idea that they "will only cause harm" is incredibly nebulous, and sounds more like an excuse to accept the status quo than a valid argument.

That they will only cause harm is a particular proposition, which may well be true (though taken strictly its probability is about 0).

Replies from: Torgamous
comment by Torgamous · 2011-11-16T17:31:22.295Z · LW(p) · GW(p)

How so? "No good will come of this" is an incredibly old argument that's been applied to all kinds of things, and as far as I know rarely has a specific basis. What aspect of his argument am I missing?

Replies from: thomblake
comment by thomblake · 2011-11-16T17:36:14.781Z · LW(p) · GW(p)

I fail to see how the age of the argument is relevant. And it was not an argument, it was a proposition.

Caledonian was asserting that "trying to make reality a safe and cozy haven will only cause harm". This is a fairly well-specified prediction (to the extent that one can observe whether or not X is "trying to" Y in general) and may be true or false. It is not an excuse, nor an argument, nor particularly nebulous.

Though as I mentioned, in general (if taken strictly) assertions that a real-world action will have precisely one sort of effect are false.

Replies from: Torgamous
comment by Torgamous · 2011-11-16T18:50:36.324Z · LW(p) · GW(p)

The age of the proposition and the ease with which it can be applied to a variety of situations is an indication that, when such a proposition is made, it should be examined and justified in more detail before being declared a valid argument. Causing harm, given the subject matter, could mean a variety of things from wasted funds to the death of the firstborn children of every family in Egypt. Lacking anything else in the post to help determine what kind and degree of harm was meant or even where the idea that failed attempts will be harmful came from, the original assertion comes across, to me, as a vague claim meant to inspire a negative reaction. It may be true or false, but the boundaries of "true" are not very clearly defined.

I understand that it is probably wrong, and I understand that you know that too. I'm discussing this because I want to know if I'm doing something wrong when determining the validity of an argument. We also seem to be using different definitions of "argument"; I merely see it as a better-sounding synonym of proposition. No negative connotations were meant to be invoked.

Replies from: thomblake
comment by thomblake · 2011-11-16T18:59:00.638Z · LW(p) · GW(p)

An argument is a series of statements ("propositions") that are intended to support a particular conclusion. For example, "Socrates is a man. All men are mortal. Therefore, Socrates is mortal." Just as one sentence is not a paragraph, one proposition is not an argument.

There is no question of whether "trying to make reality a safe and cozy haven will only cause harm" is a valid argument because it's not an argument at all. This is an argument:

  • If we try to make reality a safe and cozy haven, then we will only cause harm.
  • We are trying to make reality a safe and cozy haven
  • Therefore, we will only cause harm.

Note that this is a valid argument; the truth of the conclusion follows necessarily from the truth of its premises. If you have any problems with it, it is with its soundness, the extent to which the propositions presented are true. It sounds like you think the first proposition is false, but you are claiming Caledonian made an invalid argument instead. If that is the case, you're making a category mistake.

Replies from: Torgamous
comment by Torgamous · 2011-11-16T19:45:33.071Z · LW(p) · GW(p)

And now we're disputing definitions. I was using argument to mean what you've defined as propositions; it was a mistake in labeling, but the category is the same. Regardless, the falseness of his proposition is not an issue. The issue I have is that his initial proposition, though it may possibly be true, has a wide range of possible truenesses, no indication which trueness the poster was aiming for, and may very possibly have been made without a particular value of potential truth in mind. If that's soundness, then yeah, I took issue with the soundness of his proposition.

Replies from: thomblake
comment by thomblake · 2011-11-16T20:11:24.635Z · LW(p) · GW(p)

The issue I have is that his initial proposition, though it may possibly be true, has a wide range of possible truenesses, no indication which trueness the poster was aiming for, and may very possibly have been made without a particular value of potential truth in mind.

I don't see how that's the case. It seems very specific to me. In the statement "X will only cause Y" are you confused about the meaning of X, Y, "will only cause", or something else I'm missing? (X="trying to make ... reality a safe and cozy haven", Y="harm")

Replies from: Torgamous
comment by Torgamous · 2011-11-16T20:32:07.723Z · LW(p) · GW(p)

I take issue with Y. "Harm", though it does have a definition, is a very, very broad term, encompassing every negative eventuality imaginable. Saying "X will cause stuff" only doubles the number of applicable outcomes. That does not meet my definition of "specific".

Replies from: thomblake
comment by thomblake · 2011-11-16T20:35:00.700Z · LW(p) · GW(p)

Aha. Again, a definitional problem. I would indeed regard the claim "dropping this rock will cause something to happen" as specific, and trivially true; it is not vague - there is no question of its truth value or meaning.

I think this is resolved.

Replies from: Torgamous
comment by Torgamous · 2011-11-21T16:10:38.444Z · LW(p) · GW(p)

I'm sorry, I want this conversation to be over too, and I don't mean to be rude, but this has been bugging me all week: where did you get that definition from, and where do you live? Literally everyone I have interacted with or read stuff from before you, including published authors, used the same definitions of "specific" and "vague" that I do, and in ways obvious enough that your confusion confuses me.

Replies from: thomblake
comment by thomblake · 2011-11-21T19:10:19.029Z · LW(p) · GW(p)

I live in (and am from near) New Haven, CT, USA, and I have a background primarily in Philosophy.

A vague proposition is one which has an uncertain meaning - 'meaning' is of course tied up in relevance and context. So observing that a patient coughs is a 'vague' symptom in the sense that the relevant 'meaning' is an answer to the question "What disease does the patient have?" and the answer is unclear.

In the above, Caledonian is stating that "trying to make reality a safe and cozy haven will only cause harm". I do not see this as in any way vague, since it has a clear referent. If anyone were to try to make reality a safe and cozy haven, and caused anything other than harm in doing so, then the proposition would turn out to have been false. You can unambiguously sort worlds where the proposition is true from worlds where the proposition is false.

I'm not sure from previous comments on this thread what definition of 'vague' you were employing or how it differs from this.

Replies from: TheOtherDave, Torgamous
comment by TheOtherDave · 2011-11-21T19:39:48.304Z · LW(p) · GW(p)

You can unambiguously sort worlds where the proposition is true from worlds where the proposition is false.

Doesn't this depend on having an unambiguous test for whether reality in a given world is a safe and cozy haven? If one were skeptical of the possibility of such a test, one might consider the quoted statement vague.

Replies from: thomblake
comment by thomblake · 2011-11-21T20:51:14.358Z · LW(p) · GW(p)

Doesn't this depend on having an unambiguous test for whether reality in a given world is a safe and cozy haven?

No, you need to have unambiguous tests for "consequences other than harm" and "trying to make reality a safe and cozy haven".

The proposition leaves open the possibility that one might accidentally make reality a safe and cozy haven without trying and thus cause non-harm.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-21T21:23:06.703Z · LW(p) · GW(p)

You are entirely correct! Teach me to be sloppy.

comment by Torgamous · 2011-11-21T20:06:04.575Z · LW(p) · GW(p)

Ah, I see. That makes sense now; your previous example had led me to believe that the difference was much greater than it is. I had been using "vague" to mean that it didn't sharply limit the number of anticipated experiences; there are lots of things that are harmful that cover a range of experiences, and so saying that something will "cause harm" is vague. For the disease question, "vague" would be saying "he has a virus"; while that term is very clearly defined, it doesn't tell you if the person has a month to live or just has this year's flu, so the worlds in which the statement is true can vary greatly and you can't plan a whole lot based on it. Ironically, my definition seems a lot vaguer than yours now that they've both been defined.

And now I can happily say the matter's resolved.

comment by Jef_Allbright · 2008-10-04T17:43:44.000Z · LW(p) · GW(p)

"I don't think that even Buddhism allows that."

Remove whatever cultural or personal contextual trappings you find draped over a particular expression of Buddhism, and you'll find it very clear that Buddhism does "allow" that, or more precisely, un-asks that question.

As you chip away at unfounded beliefs, including the belief in an essential self (however defined), or the belief that there can be a "problem to be solved" independent of a context for its specification, you may arrive at the realization of a view of the world flipped inside-out, with everything working just as before, less a few paradoxes.

The wisdom of "adult" problem-solving is not so much about knowing the "right" answers and methods, but about increasingly effective knowledge of what doesn't work. And from the point of view of any necessarily subjective agent in an increasingly uncertain world, that's all there ever was or is.

Certainty is always a special case.

comment by Alex_Martelli · 2008-10-04T17:47:21.000Z · LW(p) · GW(p)

By the way, I should clarify that my total disagreement with your thesis on WW2 being single-handedly caused by A. Hitler does in no way imply disagreement with your more general thesis. In general I do believe the "until comes steam-engine-time" theory -- that many macro-scale circumstances must be present to create a favorable environment for some revolutionary change; to a lesser degree, I also do think that mostly, when the macro-environment is ripe, one of the many sparks and matches (that are going off all the time, but normally fizz out because the environment is NOT ripe) will tend to start the blaze. But there's nothing "inevitable" here: these are probabilistic, Bayesian beliefs, not "blind faith" on my part. One can look at all available detail and information about each historical situation and come to opine that this or that one follows or deviates from the theory. I just happen to think that WW2 is a particularly blatant example where the theory was followed (as Keynes could already dimly see it coming in '19, and he was NOT the only writer of the time to think that way...!); another equally blatant example is Roman history in the late Republic and early Empire -- yes, many exceptional individuals shaped the details of the events as they unfolded, but the nearly-relentless march of the colossus away from a mostly-oligarchic Republic and "inevitably" towards a progressively stronger Principate looms much larger than any of these individuals, even fabled ones like Caesar and Octavian.

But for example I'm inclined to think of more important roles for individuals in other historically famous cases -- such as Alexander, or Napoleon. The general circumstances at the time of their accessions to power were no doubt a necessary condition for their military successes, but it's far from clear to me that they were anywhere close to sufficient: e.g., without a Bonaparte, it does seem quite possible to me that the French Revolution might have played itself out, for example, into a mostly-oligarchic Republic (with occasional democratic and demagogic streaks, just like Rome's), without foreign expansionism (or, not much), without anywhere like the 20 years of continuous wars that in fact took place, and eventually settling into a "stable" state (or, as stable as anything ever is in European history;-). And I do quite fancy well-written, well-researched "alternate history" fiction, such as Turtledove's, so I'd love to read a novel about what happens in 1812 to the fledgling USA if the British are free to entirely concentrate on that war, not distracted by Napoleon's last hurrahs in their backyard, because Napoleon was never around...;-) [To revisit "what if Hitler had never been born", btw, if you also like alternate history fiction, Stephen Fry's "Making History" can be recommended;-)]

After Napoleon, France was brought back to the closest status to pre-Revolutionary that the Powers could achieve -- and ("inevitably" one might say;-) 15 years later the Ancien Regime crumbled again; however, that time it gave birth somewhat peacefully to a bourgeois-dominated constitutional monarchy (with no aggressive foreign adventures, except towards hopefully-lucrative colonies). Just like the fact that following Keynes' 1919 advice in 1947 did produce lasting peace offers some support to Keynes' original contention, so the fact that no other "strong-man" emerged to grab the reins in 1830 offers some support to the theory that there was nothing "inevitable" about a military strong man taking power in 1799 -- that, had a military and political genius not been around and greedy for power in '99, France might well have evolved along different and more peaceful lines as it later did in '30. Of course, one can argue endlessly about counterfactuals... but one should have better support before trying to paint a disagreement with oneself as "absurd"!-)

BTW, in terms of human death and suffering (although definitely not in terms of "sheer evil" in modern ethical conception), the 16 years of Napoleon's power were (in proportion to the population at the time) quite comparable to, or higher than, Hitler's 12; so, switching from Hitler to Napoleon as your example would not necessarily weaken it in this sense.

comment by ShardPhoenix · 2008-10-04T17:54:41.000Z · LW(p) · GW(p)

I thought I already knew all this, but this post has made me realize that I've still, deep down, been thinking as you describe - that the universe can't be that unfair, and that the future isn't really at risk. I guess the world seems like a bit scarier of a place now, but I'm sure I'll go back to being distracted by day-to-day life in short order ;).

As for cryonics, I'm a little interested, but right now I have too many doubts about it and not enough spare money to go out and sign up immediately.

comment by Zubon · 2008-10-04T18:00:56.000Z · LW(p) · GW(p)

With all the sci fi brought up here, I think we are familiar with Hitler's Time Travel Exemption Act.

Ian C., that is half the philosophy of Epicurus in a nutshell: there are no gods, there is no afterlife, so the worst case scenario is not subject to the whims of petulant deities.

If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0. If there is any probability of your annihilation, no matter how small, you will not survive for an infinite amount of time. That is what happens in an infinite amount of time: everything possible. If all your backup plans can fail at once, even at P=1/(3^^^3), that number will come up eventually with infinite trials.

Replies from: faul_sname
comment by faul_sname · 2012-09-11T07:33:30.836Z · LW(p) · GW(p)

Not necessarily. If the risk decreases faster than an inverse function (i.e. if the risk is less than 1/n for each event, where n is the number of events), there can be a probability of surviving forever that is strictly between 0 and 1.
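
To make the condition precise (a standard fact about infinite products, added here as a gloss rather than as part of the comment): if the n-th challenge kills you with independent probability $p_n$, each $p_n < 1$, then the probability of surviving all of them is $\prod_{n=1}^{\infty} (1 - p_n)$, which is strictly positive exactly when $\sum_n p_n$ converges. Risks shrinking like $1/n$ still sum to infinity, so the survival probability is 0; risks shrinking like $1/n^2$ give, for example, $\prod_{n \ge 2} (1 - 1/n^2) = 1/2$.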

Replies from: wizzwizz4
comment by wizzwizz4 · 2019-07-14T11:35:30.759Z · LW(p) · GW(p)

Unless you make one more Horcrux each day than you did the day before, the risk is never going to fall off that fast. And there's still the finite, fixed, non-zero chance of the magic widget being destroyed and all of your backups failing simultaneously, or of the false vacuum collapsing. That is, unless you seriously think you can keep inventing completely novel, non-duplicate ways to prevent your death at a constantly accelerating rate, many of them proof against hypothetical universe-ending apocalypses.

Unless we find a way to escape the known universe, or discover something similarly munchkinneritorial, we're all going to die.

comment by PK · 2008-10-04T18:06:54.000Z · LW(p) · GW(p)

What's the point of despair? There seems to be a given assumption in the original post that:

1) there is no protection, the universe is allowed to be horrible --> 2) let's despair

But number 2 doesn't change number 1 one bit. This is not a clever argument to disprove number 1; I'm just saying despair is pointless if it changes nothing. It's like how babies cry automatically when something isn't the way they like: evolution programmed them to, because crying reliably attracted the attention of adults. Despairing about the universe will not attract the attention of adults to make it better. We are the only adults; that's it. I would rather reason along the lines of:

1) there is no protection, the universe is allowed to be horrible --> 2) what can I do to make it better?

Agreed with everything else except the part where this is really sad news that's supposed to make us unhappy.

Replies from: Voltairina, Houshalter
comment by Voltairina · 2012-03-28T06:05:08.825Z · LW(p) · GW(p)

Agreed. Despair is an unsophisticated response that's not adaptive to the environment in which we're using it - we know how to despair now, it isn't rewarding, and we should learn to do something more interesting that might get us results sooner than "never".

comment by Houshalter · 2014-05-16T18:28:59.425Z · LW(p) · GW(p)

What's the point of having feelings or emotions at all? Are they not all "pointless"?

Replies from: keen
comment by keen · 2014-05-16T18:36:41.075Z · LW(p) · GW(p)

I suggest that you research the difference between instrumental values and terminal values.

Replies from: Houshalter
comment by Houshalter · 2014-05-17T02:48:23.684Z · LW(p) · GW(p)

I understand the difference. Perhaps I wasn't clear. You can't just call feelings "pointless" because they don't change anything.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-05-17T15:50:55.248Z · LW(p) · GW(p)

You could argue that some feelings do change things and have an effect on actions; sometimes in a negative direction (e.g. anger leading to vengeance and war), sometimes in a positive direction (e.g. gratitude resulting in kindness and help). Anger in this example can be considered "pointless" not because it has no effect upon the world, but because its effect is negative and not endorsed intellectually. I think that's the sense in which despair is pointless in the original example. It does have an effect on the world; it results in people NOT taking actions to make things better.

You could argue with the use of the word "pointless", I suppose.

comment by Fostiak2 · 2008-10-04T18:25:58.000Z · LW(p) · GW(p)

I don't understand the faith in cryonics.

In a Universe beyond the reach of God, who is to say that the first civilization technologically advanced enough to revive you will not be a "death gives meaning to life" theocracy with a policy of reviving those who chose to attempt to escape death in order to subject them and their defrosted family members to 1000 years of unimaginable torture followed by execution?

Sure, there are many reasons to believe such a development is improbable. But you are still rolling those dice in a Universe beyond God's reach, are you not?

Replies from: None
comment by [deleted] · 2009-08-26T21:36:18.064Z · LW(p) · GW(p)

Putting aside the fact that theocracy doesn't really lend itself to technological advancement, the expected utility of living longer outweighs the expected disutility of being tortured for 1000 years.

comment by pdf23ds · 2008-10-04T18:34:17.000Z · LW(p) · GW(p)

Of course you are. It's still a probability game. But Eliezer's contention is that the probabilities for cryonics look good. It's worth rolling the dice.

comment by RobinHanson · 2008-10-04T19:03:15.000Z · LW(p) · GW(p)

Yes, very, very bad things can happen for little reason. But of course we still want positive arguments to convince us to assign large probabilities to scenarios about which you want us to worry.

comment by Kip3 · 2008-10-04T19:08:53.000Z · LW(p) · GW(p)

Where is this noirish Eliezer when he's writing about the existence of free will and non-relativist moral truths?

comment by Hopefully_Anonymous · 2008-10-04T19:13:10.000Z · LW(p) · GW(p)

Don't get bored with the small shit. Cancers, heart disease, stroke, safety engineering, suicidal depression, neurodegenerations, improved cryonic tech. In the next few decades I'm probably going to see most of you die from that shit (and that's if I'm lucky enough to persist as an observer), when you could've done a lot more to prevent it, if you didn't get bored so easily of dealing with the basics.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-04T19:14:56.000Z · LW(p) · GW(p)

Kip, the colors of rationality are crystal, mirror, and glass.

Robin, fair enough; but conversely no amount of argument will convince someone in zettai daijobu da yo mode.

For the benefit of those who haven't been following along with Overcoming Bias, I should note that I actually intend to fix the universe (or at least throw some padding atop my local region of it, as disclaimed above) - I'm not just complaining here.

Replies from: None
comment by [deleted] · 2015-03-03T12:20:56.709Z · LW(p) · GW(p)

Hi Eliezer,

Sorry, very late to this discussion. I just want to tell you this is exactly how people become conservatives, not in the US-politics sense but in the Edmund Burke sense, and maybe there is something to learn from there. From about the Age of Enlightenment on, the Western world has been in an optimistic, socially experimental mood, easily casting away old institutions like feudalism, aristocracy, monarchy or limited government, and all this optimism comes from the belief that history has a course, a given, pre-defined direction; Eric Voegelin pointed out that this is the secularization of a theistic belief, "immanentizing the eschaton". Burke and others have also pointed out that this optimism comes from a belief that "human nature is good", and from an Enlightenment faith that acting rationally is kind of easy once you learn what your mistakes were. Ugh.

Lacking this optimism, many social changes of the last 300 years look kind of brash. In hindsight it makes more sense that long-standing institutions like aristocratic nobility were better matches to human cognitive biases.

OTOH NRx also gets it wrong, because society had to change to cope with changing technology. The change in military technology alone - gunpowder democratizing war - had to change things around; you probably cannot really keep institutions like nobility when knightly armor is useless against muskets, etc.

This puts us in the uncomfortable position that in the last 300 years both progressives and reactionaries were wrong. The progressives were way too optimistic; the reactionaries did not accept that technological change requires social change.

So this is the question I am trying to answer today. Suppose we are in 1700 somewhere in Europe. Unlike them, we do not believe history has a course making it hard for us to screw up social change, do not believe in Providence, do not believe in God wanting to liberate people or even giving them inalienable rights, do not believe human nature is inherently good, and do not think acting rationally is easy. However, we sure as hell don't think we can keep having an essentially post-feudal social structure with all these cheap muskets around, artisans' workshops slowly turning into manufactories, peasants becoming less illiterate, and all that. What kind of social and political future do we create?

For example, one thing I have noticed is that earlier, feudal structures of power were more personal: there was a higher chance that the people who exercise power and the people whom it affects are in each other's "monkeysphere", with a close enough link that compassion and empathy can kick in. Back in the Enlightenment era, optimistic people believed impersonal structures are good for limiting human caprice and arbitrariness. They believed human nature is good enough that people we don't know aren't just statistics to us, that we really care for them. They did not think we need the power of personal contact to be compassionate and not dehumanize each other. I think today we know that our ethical instincts are not that reliable. What does that give us?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-03-03T14:28:34.405Z · LW(p) · GW(p)

That doesn't seem relevant to EY's comment, and he doesn't hang out here much anymore; if you want to contact him, try Facebook.

comment by Carl_Shulman · 2008-10-04T19:16:10.000Z · LW(p) · GW(p)

"If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0. If there is any probability of your annihilation, no matter how small, you will not survive for an infinite amount of time. That is what happens in an infinite amount of time: everything possible. If all your backup plans can fail at once, even at P=1/(3^^^3), that number will come of eventually with infinite trials." Zubon, this seems to assume that the probabilities in different periods are independent. It could be that there is some process that allows the chance of obliteration to decline exponentially (it's easy enough to imagine a cellular automata world where this could be) with time, such that the total chance of destruction converges to a finite number. Of course, our universe's apparent physics seem to preclude such an outcome (the lightspeed limit, thermodynamics, etc), but I wouldn't assign a probability of zero to our existing as simulations in a universe with laws permitting such stability, or to (improbable) physics discoveries permitting such feats in our apparent universe.

comment by TGGP4 · 2008-10-04T19:39:08.000Z · LW(p) · GW(p)

Without Hitler it's likely Ludendorff would have been in charge and things would have been even worse. So perhaps we should be grateful for Hitler!

I gather there are some Orthodox Jews involved in Holocaust denial who went to Iran for that, but this post gets me to thinking that there should be more of them if they really believe in a benevolent and omnipotent God that won't allow sufficiently horrible things to happen.

How widespread is white nationalism in America? I would think it's one of the least popular things around, although perhaps I'm taking the Onion too seriously.

comment by Consequentialist · 2008-10-04T19:52:48.000Z · LW(p) · GW(p)

"The standard rebuttal is that evil is Man's own fault,"

There is no evil. There is neutrality. The universe isn't man's fault; it isn't anyone's fault.

I'm not at all saddened by these facts. My emotional state is unaltered. It's because I take them neutrally.

I've experienced enough severe pain to know that A) Torture works. Really. It does. If you don't believe it, try it. It'll be a short lesson. B) Pain is not such a big deal. It's just an avoid-this-at-all-costs signal. Sure, I'm in agony, sure, I'd hate to remain in a situation where that signal doesn't go away, but it still is just a signal.

Perhaps as you look at some spot in the sky, they've already - neutrality allowing - tamed neutrality there; made it Friendly.

We've got a project to finish.

comment by michael_vassar3 · 2008-10-04T19:59:12.000Z · LW(p) · GW(p)

More parents might let their toddler get hit by a car if they could fix the toddler afterwards.

There are an awful lot of types of Buddhism. Some allow mind annihilation, and even claim that it should be our goal. Some strains of Epicureanism hold that mind annihilation is a) neutral, and b) better than what all the religions believed in. Some ancient religions seemed to believe in the same awful universal fate as quantum immortality believers do, e.g. eternal degeneration, progressively advanced Alzheimer's forever, more or less. Adam Smith suggests that this is what most people secretly believe in.

It would take quite a black swan tech to undo all the good from tech up to this point. UFAI probably wouldn't pass the test, since without tech humans would go extinct with a smaller total population of lives lived anyway. Hell worlds seem unlikely. 1984 or Brave New World (roughly) are a bit more likely, but are they worse than extinction? I don't generally feel that way, though I'm not sure.

Replies from: AnthonyC
comment by AnthonyC · 2015-12-13T18:34:39.711Z · LW(p) · GW(p)

Eight years late reply, but oh well.

I think one of the problems with UFAI isn't just human extinction, or even future human suffering. It's that some kinds of UFAI (the paperclip-maximizer comes to mind) could take over our entire future light cone, preventing any future intelligent life (Earth-originating or otherwise) from evolving and finding a better path.

comment by Rob_Keys · 2008-10-04T20:20:12.000Z · LW(p) · GW(p)

Good post, but how to deal with this information so that it is not so burdensome? Conway himself, upon creating The Game of Life, didn't believe that the cellular automaton could 'live' indefinitely, but was proven wrong shortly after his game's creation by the discovery of the glider gun. We cannot assume that the cards were dealt perfectly and that the universe or our existence is infinite, but we can hope that the pattern we have put down will continue to stand the test of time. Belief that we are impervious to extinction, or that the universe will not ultimately implode leaving everything within it no choice but to be squished into a single particle, can only do us harm as we try to create new things and discover ways to transcend this existence. Hope that we will make it, and that there is ultimately a way off of what may be a sinking ship, is what keeps us going.

comment by pdf23ds · 2008-10-04T20:28:25.000Z · LW(p) · GW(p)

I don't understand why the end of the universe bugs people so much. I'll just be happy to make it to next decade, thanks very much. When my IQ rises a few thousand points, I'll consider things on a longer timescale.

comment by Roga · 2008-10-04T20:40:23.000Z · LW(p) · GW(p)

Would Camus agree with you Eliezer?

comment by Consequentialist · 2008-10-04T20:40:50.000Z · LW(p) · GW(p)

What I don't understand is that we live on a planet where we don't have all people with significant loose change

A) signing up for cryonics B) super-saturating the coffers of life-extensionists, extinction-risk-reducers, and AGI developers.

Instead we currently live on a planet where their combined (probably) trillions of currency units are doing nothing but bloating as 1s and 0s on hard drives.

Can someone explain why?

Replies from: jasonmcdowell, kybernetikos, AnthonyC
comment by jasonmcdowell · 2010-10-20T09:13:01.752Z · LW(p) · GW(p)

Alas, most people on the planet either:

  1. haven't heard of cryonics / useful life extension,
  2. don't take it seriously,
  3. have serious misunderstandings about it, or
  4. reject it for social reasons.

I'm timidly optimistic about the next two generations.

comment by kybernetikos · 2011-04-24T12:18:45.348Z · LW(p) · GW(p)

It's pretty straightforward: most people don't believe that cryonics or life-extension techniques have a reasonable chance of success within their lifetimes. As for extinction-risk-reduction, most people doubt that there are serious extinction risks that can feasibly be mitigated.

Given those (perhaps misguided) beliefs, what should they spend their money on other than improving their quality of life as best they know how?

When the first person is brought back from cryonic sleep and the disease that put them there is cured, you can expect an enormous surge of interest. When someone lives to 150 thanks to practicing some sort of life-extension technique, there will be massive interest. As for extinction-risk-reduction, it would take a lot to get people interested, because extinction is something that hasn't happened for what seems like a really long time, and we tend to assume dramatic changes are extremely unlikely.

comment by AnthonyC · 2015-12-13T18:43:19.815Z · LW(p) · GW(p)

"trillions of currency units are doing nothing but bloating as 1s and 0s on hard drives"

This seems very unlikely. Most people with significant savings have it invested in stocks, bonds, or other investments - that is, they've given it to other people to do something with it that they think will turn a profit. Of the money that is sitting in bank accounts, most is lent out, again to people planning to actually do something with it (like building businesses, building houses, or buying things on credit).

comment by Arosophos (Lincoln_Cannon) · 2008-10-04T20:47:45.000Z · LW(p) · GW(p)

"What can a twelfth-century peasant do to save themselves from annihilation? Nothing."

She did something. She passed on a religious meme whose descendants have inspired me, in turn, to pass on the idea that we should engineer a world that can somehow reach backward to save her from annihilation. That may not prove possible, but some possibilities depend on us for their realization.

A Jewish prophet once wrote something like this: "Behold, I will send you Elijah the prophet before the coming of the great and dreadful day of the Lord: And he shall turn the heart of the fathers to the children, and the heart of the children to their fathers, lest I come and smite the earth with a curse." The Elijah meme has often turned my heart toward my ancestors, and I wonder whether we can eventually do something for them.

Unless we are already an improbable civilization, our probable future will be the civilization we would like to become only to the extent that such civilizations are already probable. The problem of evil is for the absolutely omnipotent God -- not for the progressing God.

comment by gwern · 2008-10-04T21:12:07.000Z · LW(p) · GW(p)

Chad: if you seriously think that Turing-completeness does not imply the possibility of sentience, then you're definitely in the wrong place indeed.

Replies from: mwengler
comment by mwengler · 2011-12-26T17:20:57.795Z · LW(p) · GW(p)

Is there a FAQ or reference somewhere on why or how Turing completeness implies sentience? I know there are some very bright rational people who don't believe Turing completeness is enough for sentience (Searle, Penrose); wouldn't you want them active here? (By the way, don't make the mistake of thinking "I don't believe Turing completeness is sufficient for sentience" is equivalent to "I believe Turing completeness is not sufficient for sentience." I don't know either way, but it sure seems that "knowing" is more like religious belief than rational deduction.)

Replies from: DSimon, gwern
comment by DSimon · 2011-12-26T17:31:47.708Z · LW(p) · GW(p)

The basic idea is that a perfect simulation of a physical human mind would be sentient due to the anti-zombie principle. Since all you need for such a simulation is a Turing machine, it follows that any Turing machine could exhibit sentience given the right program.

Replies from: wizzwizz4
comment by wizzwizz4 · 2019-07-14T11:45:22.844Z · LW(p) · GW(p)

Iff the universe is Turing complete. Have we proven that yet?

comment by gwern · 2011-12-29T02:15:31.163Z · LW(p) · GW(p)

I don't think Turing-completeness is sufficient for sentience either, just necessary; this is why I said 'possibility'.

Replies from: MaxNanasy
comment by MaxNanasy · 2016-12-10T05:21:55.336Z · LW(p) · GW(p)

Why do you think Turing-completeness is necessary for sentience?

comment by Doug_S. · 2008-10-04T21:12:56.000Z · LW(p) · GW(p)

And I do quite fancy well-written, well-researched "alternate history" fiction, such as Turtledove's, so I'd love to read a novel about what happens in 1812 to the fledgling USA if the British are free to entirely concentrate on that war, not distracted by Napoleon's last hurrahs in their backyard, because Napoleon was never around...

Nitpick:

The "War of 1812" was basically an offshoot of the larger Napoleonic Wars; Britain and France were both interfering with the shipping of "neutral" nations, such as the United States, in order to hurt their enemy. After France dropped its restrictions (on paper, at least), the United States became a lot less neutral, and James Madison and Congress eventually declared war on Britain. (Several years earlier, Jefferson, in response to the predations of the two warring European powers, got Congress to pass an Embargo Act that was to end foreign trade for the duration of the war, so as to keep the U.S. from getting involved. It didn't work out so well.)

In other words, without Napoleon, there probably wouldn't have been a War of 1812 at all.

What I don't understand is that we live on a planet where we don't have all people with significant loose change

A) signing up for cryonics B) super-saturating the coffers of life-extensionists, extinction-risk-reducers, and AGI developers.

Instead we currently live on a planet where their combined (probably) trillions of currency units are doing nothing but bloating as 1s and 0s on hard drives.

Can someone explain why?

Many people believe in an afterlife... why sign up for cryonics when you're going to go to Heaven when you die?

comment by Tim_Tyler · 2008-10-04T21:19:45.000Z · LW(p) · GW(p)
I should note that I actually intend to fix the universe [...]

I was not aware that the universe was broken. If so, can we get a replacement instead? ;-)

comment by Kaj_Sotala · 2008-10-04T21:26:39.000Z · LW(p) · GW(p)

It is a strange thing. I often feel the impulse to not believe that something would really be possible - usually when talking about existential risks - and I have to make a conscious effort to suppress that feeling, to remind myself that anything the laws of physics allow is possible. (And even then, I often don't succeed - or don't have the courage to entirely allow myself to succeed.)

comment by Doug_S. · 2008-10-04T21:27:46.000Z · LW(p) · GW(p)

A) Torture works. Really. It does. If you don't believe it, try it. It'll be a short lesson.

That depends on what you're trying to use it for. Torture is very good at getting people to do whatever they believe will stop the torture. For example, it's a good way to get people to confess to whatever you want them to confess to. Torture is a rather poor way to get people to tell you the truth when they have motive to lie and verification is difficult; they might as well just keep saying things at random until they say something that ends the torture.

Replies from: mwengler
comment by mwengler · 2011-12-26T17:23:53.740Z · LW(p) · GW(p)

Someone who knows the truth you are trying to get from them is very likely relatively early on to try the actual truth to get out of the torture.

If you are smart enough to recognize when you are torturing someone who doesn't know the truth, have the resources to check the multiple hypotheses presented by the tortured, and place enough value on getting the truth, then torture on the whole is still really effective.

I think it is just a hopeful belief to say "torture does not work."

comment by Aron · 2008-10-04T21:29:05.000Z · LW(p) · GW(p)

Consequentialist: Is it a fair universe where the wealthy live forever and the poor die in the relative blink of an eye? It seems hard for our current society to look past that when setting public policy. This doesn't necessarily explain why there isn't more private money put to the purpose, but I think many of the intelligent and wealthy at the present time would see eternal-life quests as a millennia-old cliche of laughable selfishness, not in tune with leaving a respectable legacy.

comment by IL · 2008-10-04T21:48:45.000Z · LW(p) · GW(p)

...Can someone explain why?

Many people believe in an afterlife... why sign up for cryonics when you're going to go to Heaven when you die?

That's probably not the explanation, since there are many millions of atheists who have heard about cryonics and/or extinction risks. I figure the actual explanation is a combination of conformity, the bystander effect, the tendency to focus on short-term problems, and the Silliness Factor.

comment by Doug_S. · 2008-10-04T22:32:11.000Z · LW(p) · GW(p)

I can only speak for myself on this, but I wouldn't sign up for cryonics even if it were free, because I don't want to be revived in the future after I'm dead. (Given the choice, I would rather not have existed at all. However, although mine was not a life worth creating, my continued existence will do far less harm than my abrupt death.)

Replies from: None
comment by [deleted] · 2010-12-03T04:50:06.172Z · LW(p) · GW(p)

This is roughly equivalent to stating you don't want to be revived after you fall asleep tonight. If revival from cryosuspension is possible, there is no difference. You want to wake up tomorrow (if you didn't really, there are many easy ways for you to remedy that), therefore you want to wake up from cryonic suspension. You would rather fall asleep tonight than die just before it, therefore you would/should, rationally speaking, take free cryonics.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2012-12-24T10:07:03.840Z · LW(p) · GW(p)

Not equivalent. Now, I'm not saying that I, personally wouldn't want to live (for reasons that are no different from any other animal's reasons), but it's really not equivalent. I have people who depend on me now, I have a good chance of making the world a better place because of my existence. I have people who will immediately suffer if I die.

In the future, what would be the value of this primitive mind? Taken completely out of my current context like that, would I really be the same person? Sure, I might enjoy myself, but what more would I contribute? I'd just be redundant, or worse, completely obsolete...it would be kind of like making 2 copies of yourself and sending them around, and imagining that this means you've lived twice as much. Nope...each version of you lives one, slightly more redundant life.

Again, I'm not saying that I wouldn't want to live in the future. I'm saying that when you go to bed and wake up each night, you lose ~1% of yourself. When you move to a new city or get a new job, you lose 2% of yourself as all your old habits change and are rewritten by new ones. When someone you love and spend all your time with dies, you lose 5% of yourself, and then it gets rewritten with new relationships or a new lifestyle. If you lose your job and everyone you knew in life, you lose 10% of yourself as your lifestyle completely and radically alters.

If you actually die, you lose 100% of yourself, of course, and then there is no "you" to speak of.

And if you wake up one day 1000 years from now with your entire society fundamentally altered, you lose at least 20% of yourself.

It'll grow back of course, possibly better than before. Some might even welcome the changes...I think I probably would. But I can totally understand how this might go under someone's threshold for "continuity".

comment by Hopefully_Anonymous · 2008-10-04T22:59:57.000Z · LW(p) · GW(p)

There's a corollary mystery category which most of you fall into: why are so few smart people fighting, even anonymously, against policy grounded in repugnancy bias that'll likely reduce their persistence odds? Where's the fight against a global ban on reproductive human cloning? Where's the fight to increase legal organ markets? Where's the defense of China's (and other illiberal nations') rights to use prisoners (including political prisoners) for medical experimentation? Until you square away your own repugnancy-bias-based inaction, criticisms of that of the rest of the population on topics like cryonics read to me as being as incoherent as debates about angels dancing on the heads of pins. My blog shouldn't be so anomalous in seeking to overcome repugnancy bias to maximize persistence odds. Where are the other anonymous advocates? Our reality is the Titanic - who wants to go down with the ship for the sake of a genetic aesthetic? Your repugnancy-bias memes are likely to persist only in the form of future generations if you choose to value them over your personal persistence odds.

comment by steven · 2008-10-04T23:09:48.000Z · LW(p) · GW(p)

To show that hellish scenarios are worth ignoring, you have to show not only that they're improbable, but also that they're improbable enough to overcome the factor (utility of oblivionish scenario - utility of hellish scenario)/(utility of heavenish scenario - utility of oblivionish scenario), which as far as I can tell could be anywhere between tiny and huge.
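
(Putting that factor into symbols, which are mine rather than steven's: ignoring the hellish scenario is justified only when

$$ p_{\text{hell}} \left( U_{\text{obliv}} - U_{\text{hell}} \right) \;\ll\; p_{\text{heaven}} \left( U_{\text{heaven}} - U_{\text{obliv}} \right), $$

i.e. when the odds ratio p_hell / p_heaven is small compared to the reciprocal of the factor above; and since U_obliv - U_hell could be enormous, "improbable" by itself doesn't settle it.)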

As for global totalitarian dictatorships, I doubt they'd last for more than millions of years without something happening to them.

comment by Anonymousinterlocutor · 2008-10-04T23:12:58.000Z · LW(p) · GW(p)

HA,

"why are so few smart people fighting, even anonymously, against policy grounded in repugnancy bias that'll likely reduce their persistence odds?" Here's a MSM citation of Gene Expression today: http://www.theglobeandmail.com/servlet/story/LAC.20081004.WORDS04//TPStory/Science

Steve Sailer is also widely read among conservative (and some other) elites, and there's a whole network of anonymous bloggers associated with him.

"Where's the fight against a global ban on reproductive human cloning?" Such bans have been fought, primarily through alliance with those interested in preserving therapeutic cloning.

"Where's the fight to increase legal organ markets?" Smart people can go to Iran, where legal markets already exist.

"Where's the defense of China's (and other illiberal nations)rights to use prisoners (including political prisoners) for medical experimentation?" There are some defenses of such practices, but it's not obviously a high-return area to invest your energies in, given the alternatives. A more plausible route would be suggesting handy experiments to Chinese partners.

comment by Will_Pearson · 2008-10-04T23:28:20.000Z · LW(p) · GW(p)
I can only speak for myself on this, but I wouldn't sign up for cryonics even if it were free, because I don't want to be revived in the future after I'm dead.

I would probably sign up for cryonics if it were free, with a "do not revive" sticker and detailed data about me, so that future brain studiers would have another data point when trying to figure out how it all works.

I don't wish that I hadn't been born, but I figure I have a part to play, a purpose that no one else seems to be doing. Once that has been done, then unless I see something that needs doing, is important, and is sufficiently left-field that no one else is doing it, I'll just potter along doing random things until I die.

comment by pdf23ds · 2008-10-04T23:47:33.000Z · LW(p) · GW(p)

"I figure I have a part to play a purpose that no one else seems to be doing"

How do you figure that? Aren't you a materialist? Or do you just mean that you might find a niche to fill that would be satisfying and perhaps meaningful to someone? I'm having trouble finding a non-teleological interpretation of your comment.

comment by Vilhelm_S · 2008-10-05T00:01:55.000Z · LW(p) · GW(p)

"If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. Cells with three living neighbors stay alive; cells with two neighbors stay the same, all other cells die. There isn't anything in there about only innocent people not being horribly tortured for indefinite periods."

While I of course I agree with the general sentiment of the post, I don't think this argument works. There is a relevant quote by John McCarthy:

"In the 1950s I thought that the smallest possible (symbol-state product) universal Turing machine would tell something about the nature of computation. Unfortunately, it didn't. Instead as simpler universal machines were discovered, the proofs that they were universal became more elaborate, and do did the encodings of information." (http://cs.nyu.edu/pipermail/fom/2007-October/012141.html)

One might add that the existence of minimalistic universal machines tells us very little about the nature of metaphysics and morality, either. The problem is that the encodings of information get very elaborate: a sentient being implemented in Life would presumably take terabytes of initial state, and that state would be encoding some complicated rules for processing information, making inferences, etc. It is those rules that you need to look at to determine whether the universe is perfectly unfair or not.

Who knows, perhaps there is a deep fundamental fact that it is not possible to implement sentient beings in a universe where the evaluation rules don't enforce fairness. Or, slightly more plausible, it could be impossible to implement sentient tyrants who don't feel a "shade of gloom" when considering what they've done.

Neither scenario sounds very plausible, of course. But in order to tell whether such fairness constraints exist or not, the 3 rules of Life itself are completely irrelevant. This can be easily seen, since the same higher-level rules could be implemented on top of any other universal machine equally easily. So invoking them does not give us any more information.
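
(To make the contrast concrete: below is a minimal sketch of the whole "physics", the standard B3/S23 update rule, in Python; the function and the glider demo are mine, not from the comment. Everything else, including any hypothetical fairness constraint, would have to be encoded in the enormous initial pattern rather than in these few lines.)

```python
from collections import Counter

def step(live_cells):
    """One generation of Conway's Game of Life; live_cells is a set of (x, y) pairs."""
    # Count live neighbors for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live_cells)}

# A glider: after four steps the same shape reappears, shifted one cell diagonally.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    state = step(state)
print(sorted(state))  # the original five cells, each translated by (1, 1)
```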

comment by Vladimir_Nesov · 2008-10-05T00:11:44.000Z · LW(p) · GW(p)

Doug, Will: There is no fundamental difference between being revived after dying, waking up after going to sleep, or receiving neurotransmitter in a synapse after it was released. There is nothing special about 10^9 seconds as opposed to 10^4 seconds or 10^-4 seconds. Unless, of course, these times figure into your morality, but these are considerations far outside the scope of the ancestral environments humans evolved in. This is a case where an unnatural category meets unnatural circumstances, so figuring out a correct answer is going to be difficult, and relying on intuitively reinforced judgment would be reckless.

comment by pdf23ds · 2008-10-05T00:12:28.000Z · LW(p) · GW(p)

"So invoking them do not give us any more information."

I do think we get a little: if such constraints exist, they are a property of the patterns themselves, and not a property of the low-level substrate on which they are implemented. If such a thing were true in this world, it would be a property of people and societies, not a metaphysical property. That rules out a lot of religion and magical thinking, and could be a useful heuristic.

comment by random_guy · 2008-10-05T00:42:39.000Z · LW(p) · GW(p)

What probability do you guys assign to the god hypothesis being true?

Replies from: AnthonyC
comment by AnthonyC · 2015-12-13T18:57:59.317Z · LW(p) · GW(p)

The main issue with your question is the word "the."

There are vastly many possible ways to define the word "god," any one of which could exist or not. But most of those are also individually vastly complicated and exceedingly unlikely to exist unless there is some causal process that brought them into being, which in the eyes of many actual current human believers of a particular version would disqualify them from godhood.

comment by Caledonian2 · 2008-10-05T01:43:48.000Z · LW(p) · GW(p)

You can't 'fix the universe'. You can at most change the properties of small parts of reality -- and that can only be accomplished by accepting and acting in accordance to the nature of reality.

If you don't like the nature of reality, you'd better try to change what you like.

What probability do you guys assign to the god hypothesis being true?
Incoherent 'hypotheses' cannot be assigned a probability; they are, so to speak, "not even wrong".

Replies from: AndyCossyleon
comment by AndyCossyleon · 2010-08-11T16:58:09.275Z · LW(p) · GW(p)

P(Christian God exists) = vanishingly small. Does that answer your question, random_guy?

comment by Ian_C. · 2008-10-05T03:03:09.000Z · LW(p) · GW(p)

I don't want to sign up for cryonics because I'm afraid I will be revived brain-damaged. But maybe others are worried they will have the social status of a freak in that future society.

Replies from: JohnH
comment by JohnH · 2011-04-22T04:21:52.338Z · LW(p) · GW(p)

Not that I am willing to sign up for cryonics but I don't see this as a problem.

Presumably some monkeys will be placed on ice at some point in the testing of defrosting, and you will not be defrosted until they are sure that the defrosting process does not cause brain damage. Also, presumably there should be some way of determining whether brain damage has occurred before defrosting happens, and hopefully no one with brain damage is defrosted until a way to fix that damage has been discovered.

I suppose that if the brain damage could be fixed you might lose some important information, which does leave the question of whether you are still you. However, if you believe that you are still yourself with the addition of new information, such as is received each day just by living, then you should likewise believe that you will still be yourself if information is lost. Also, one of the assumptions of cryonics is that the human lifespan will have been greatly expanded, so if you have major amnesia from the freezing you can look at it as trading your current life up to the point of freezing for one that is many multiples in length.

This is assuming that cryonics works as intended, of which I am not convinced.

comment by Daniel_Burfoot · 2008-10-05T03:14:44.000Z · LW(p) · GW(p)

Great post and discussion. Go Team Rational!

Eliezer, I think there's a slight inconsistency in your message. On the one hand, there are the posts like this, which can basically be summed up as: "Get off your asses, slackers, and go fix the world." This is a message worth repeating many times and in many different ways.

On the other hand are the "Chosen One" posts. These posts talk about the big gaps in human capabilities - the idea being that some people just have an indefinable "sparkliness" that gives them the power to do incredible things. I read these posts with uneasiness: while agreeing with the general drift, I think I would interpret the basic observations (e.g. CEOs really are smarter than most other people) in a different way.

The inconsistency is that on the one hand you're telling people to get up and go do something, because the future is uncertain and could be very, very good or very, very bad; but on the other hand you're essentially saying that if a person is not a Chosen One, there's not much he can really contribute.

So, what I'd like to see is a discussion of what the rank-and-file members of Team Rational should be doing to help (and I hope that involves more than donating lots of money to SIAI).

Also, we need a cool mascot, maybe zebras.

comment by Hidden_One · 2008-10-05T03:33:41.000Z · LW(p) · GW(p)

"So, what I'd like to see is a discussion of what the rank-and-file members of Team Rational should be doing to help (and I hope that involves more than donating lots of money to SIAI)." How 'rank-and-file' are we talking here? With what skillset, interests, and level of motivation?

comment by Hidden_One · 2008-10-05T04:12:00.000Z · LW(p) · GW(p)

Writing papers like Nick Bostrom's can be valuable:

http://www.nickbostrom.com/

comment by JulianMorrison · 2008-10-05T04:13:00.000Z · LW(p) · GW(p)

I have an analogy: "justice is like cake, it's permitted to exist but someone has to make it".

Can you be happier sheltering in ignorance? I'm not convinced. I think that's a strategy that only works while you're lucky.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-05T04:16:00.000Z · LW(p) · GW(p)

It is extraordinarily difficult to figure out how to use volunteers. Almost any nonprofit trying to accomplish a skilled-labor task has many more people who want to volunteer their time than they can use. The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share.

I'm surprised by the commenters who cannot conceive of a future life that is more fun than the one they have now - who can't imagine a future they would want to stick around for. Maybe I should bump the priority of the Fun Theory sequence.

comment by Tom_McCabe · 2008-10-05T04:57:00.000Z · LW(p) · GW(p)

"The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share."

There's always Amazon's Mechanical Turk (https://www.mturk.com/mturk/welcome). It's an inefficient use of people's time, but it's better than just telling people to go away. If people are reluctant to donate money, you can ask for donations of books - books are actually a fairly liquid asset (http://www.cash4books.net/).

comment by JulianMorrison · 2008-10-05T05:01:00.000Z · LW(p) · GW(p)

Eliezer: Does the law allow just setting them to productive but entirely tangential work, and pocketing the profit for SIAI?

comment by Daniel_Burfoot · 2008-10-05T05:08:00.000Z · LW(p) · GW(p)

@Hidden: just a "typical" OB reader, for example. I imagine there are lots of readers who read posts like this and say to themselves "Yeah! There's no God! If we want to be saved, we have to save ourselves! But... how...?" Then they wake up the next day and go to their boring corporate programming jobs.

@pdf23ds: This feels like tunnel vision. Surely the problem SIAI is working on isn't the ONLY problem worth solving.

@Eliezer: I recognize that it's hard to use volunteers. But members of Team Rational are not herd thinkers. They probably don't need to be led, per se - just kind of nudged in the right direction. For example, if you said, "I really think project X is important to the future of humanity, but it's outside the scope of SIAI and I don't have time to dedicate to it", probably some people would self-motivate to go and pursue project X.

comment by mtraven · 2008-10-05T05:15:00.000Z · LW(p) · GW(p)

The obvious example of a horror so great that God cannot tolerate it, is death - true death, mind-annihilation. I don't think that even Buddhism allows that.
This is sort of a surprising thing to hear from someone with a Jewish religious background. Jews spend very little attention and energy on the afterlife. (And your picture of Buddhism is simplistic at best, but other people have already dealt with that). I've heard the interesting theory that this stems from a reaction against their Egyptian captors, who were of course obsessed with death and the afterlife.

Religion aside, I truly have trouble understanding why people here think death is so terrible, and why it's so bloody important to deep-freeze your brain in the hopes it might be revved up again some time in the future. For one thing, nothing lasts forever, so death is inevitable no matter how much you postpone it. For another, since we are all hard-core materialists here, let me remind you that the flow of time is an illusion, spacetime is eternal, and the fact that your own personal self occupies a chunk of spacetime that is not infinite in any direction is just a fact of reality. It makes about as much sense to be upset that your mind doesn't exist after you die as it does to be upset that it didn't exist before you were born. Lastly, what makes you so damn important that you need to live forever? Get over yourself. After you die, there will be others taking over your work, assuming it was worth doing. Leave some biological and intellectual offspring and shuffle off this mortal coil and give a new generation a chance. That's how progress gets made -- "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it". (Max Planck, quoted by Thomas Kuhn)

comment by pdf23ds · 2008-10-05T05:38:00.000Z · LW(p) · GW(p)

Hopefully posthumans will be a little bit less stubborn in opposition to new scientific ideas.

comment by Consequentialist · 2008-10-05T05:50:00.000Z · LW(p) · GW(p)

"but on the other hand you're essentially saying that if a person is not a Chosen One, there's not much he can really contribute."

Do you think there aren't at least a few Neos whom Eliezer, and transhumanism in general, have reached and influenced? I'm sure there are many, though I put the upper limit on the number of people capable of doing anything worthwhile below 1M (whether they're doing anything is another matter). Perhaps the figure is much lower. But the "luminaries", boy, they are rare.

Millions of people are capable of hoovering up money well in excess of their personal needs. Projects aiming for post-humanity only need to target those people to secure unlimited funding.

comment by Consequentialist · 2008-10-05T06:06:00.000Z · LW(p) · GW(p)

"what makes you so damn important that you need to live forever? Get over yourself. After you die, there will be others taking over your work, assuming it was worth doing. Leave some biological and intellectual offspring and shuffle off this mortal coil and give a new generation a chance"

I vehemently disagree. What makes me so damn important, huh? What makes you so damn unimportant that you're not even giving it a try? The answer to both of these: you, yourself; you make yourself damn important, or you don't. Importance and significance are self-made. No one can give them to you. You must earn them.

There are damn important people. Unfortunately most of them were. Think of the joy if you could revive the best minds who've ever walked the earth. If you aren't one of them, try to become one.

comment by Z._M._Davis · 2008-10-05T06:09:00.000Z · LW(p) · GW(p)

Mtraven: "I truly have trouble understanding why people here think death is so terrible [...] [S]ince we are all hard-core materialists here, let me remind you that the flow of time is an illusion, spacetime is eternal [...]"

I actually think this one goes the other way. You choose to live right now, rather than killing yourself. Why not consistently affirm that choice across your entire stretch of spacetime?

"[W]hat makes you so damn important that you need to live forever?"

Important to whom?

comment by Consequentialist · 2008-10-05T06:21:00.000Z · LW(p) · GW(p)

Even if you're only capable of becoming an average, main sequence star, and not a quasistellar object outshining billions of others, what you must do is to become that star and not remain unlit. Oftentimes those who appear to shine brightly do so only because there's relative darkness around.

What if Eliezers weren't so damn rare; what if there were 100,000 such "luminaries"? Which Eliezer's blog would you read?

comment by Consequentialist · 2008-10-05T06:28:00.000Z · LW(p) · GW(p)

"Important to whom?"
Important to the development of the universe. It's an open-ended project where we, its sentient part, decide what the rewards are, we decide what's important. I've come to the conclusion that optimizing, understanding, and controlling that which is (existence) asymptotically perfectly, is the most obvious goal. Until we have that figured out, we need to stick around.

comment by Hopefully_Anonymous3 · 2008-10-05T06:56:00.000Z · LW(p) · GW(p)

"What if Eliezers weren't so damn rare"

The weird obsequiousness towards Eliezer makes yet another appearance on OB.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-05T07:01:00.000Z · LW(p) · GW(p)

What the hell is supposed to be worth anything if life isn't?

comment by mtraven · 2008-10-05T07:05:00.000Z · LW(p) · GW(p)

Oh, and while I'm stirring up the pot, let me just say that this statement made me laugh: "But members of Team Rational are not herd thinkers." Dude. Self-undermining much?

comment by Z._M._Davis · 2008-10-05T07:26:00.000Z · LW(p) · GW(p)

Consequentialist: "I've come to the conclusion that optimizing, understanding, and controlling that which is (existence) asymptotically perfectly, is the most obvious goal."

You haven't been talking to Roko or Richard Hollerith lately, have you?

comment by Consequentialist · 2008-10-05T07:31:00.000Z · LW(p) · GW(p)

"The weird obsequiousness towards Eliezer makes yet another appearance on OB."

Quite the contrary. I'd prefer it be so that Eliezer is a dime a dozen. It's the relative darkness around that keeps him in the spotlight. I suspect there's nothing special - in the Von Neumann sense - about this chap, just that I haven't found anyone like him so far. Care to point out some others like him?

comment by mtraven · 2008-10-05T07:44:00.000Z · LW(p) · GW(p)

Eliezer, if that last comment was in response to mine it is a disappointingly obtuse misinterpretation which doesn't engage with any of the points I made. "Life" is worth something; that doesn't mean that striving for the infinite extension of individual lives should be a priority.

comment by Will_Pearson · 2008-10-05T08:53:00.000Z · LW(p) · GW(p)
I'm surprised by the commenters who cannot conceive of a future life that is more fun than the one they have now - who can't imagine a future they would want to stick around for. Maybe I should bump the priority of the Fun Theory sequence.

I get a different type of fun from helping people perform a somewhat meaningful* task than I do when I am just hanging out, puzzle solving, doing adventure sports or going on holiday. I have a little nagging voice asking, "What was the point of that?", which needs to be placated every so often, or else the other types of fun lose their shine.

If I'm revived into a culture that has sufficient technology to revive me, then it is likely that it will not need any help that I could provide. My choices would be to no longer be myself, by radically altering my brain to be of some use to people, or to immerse myself in make-work fantasy tasks. If I pick the first, it makes little difference to my current self whether it is a radically altered me or someone else being useful. The second choice is also not appealing; it would require lying to myself sufficiently well to fool my pointfulness meter.

*Tasks that have real-world consequences

Replies from: JohnH
comment by JohnH · 2011-04-22T04:38:23.062Z · LW(p) · GW(p)

You gain experience and new neuron connections all the time; do these things not make you cease to be yourself? If you are not yourself after gaining experience, then the "you" that finishes this sentence is not the "you" that started it; may that "you" rest in peace. Further, I wear glasses, which augment my abilities greatly; do the glasses make me a different "me" than I would be if glasses had not been invented? If not, how is that different from adding new neurons to the brain?

Further, is learning new things not a meaningful experience to you? If you are required to learn lots of new things, shouldn't that make the experience more enticing, especially if one knew one would have the time both to learn whatever one wished and to apply what one had learned?

Replies from: Exemplis
comment by Exemplis · 2014-07-17T03:38:29.729Z · LW(p) · GW(p)

Let's do some necromancy here.

I'm relatively new to all this OB and LW stuff, just getting through the major sequences (please excuse my bad English, it's not the language I frequently use). Could you point me in the direction of a thread where "the glorious possibilities of human immortality" are discussed? What I see from the comments here is the notion of becoming some sort of ultimately efficient, black-hole-like eternal information destroyers. Correct me where I'm wrong, but that seems to be the logical conclusion of minimizing "self" entropy while maximizing information input.

Replies from: hairyfigment
comment by hairyfigment · 2014-07-17T08:00:30.595Z · LW(p) · GW(p)

The Fun Theory Sequence might help, especially the last post on the list. Can't say more without understanding the purpose of your question better. (Also, I didn't write the grandparent comment.)

comment by Tim_Tyler · 2008-10-05T09:23:00.000Z · LW(p) · GW(p)

Who knows, perhaps there is a deep fundamental fact that it is not possible to implement sentient beings in a universe where the evaluation rules don't enforce fairness. Or, slightly more plausible, it could be impossible to implement sentient tyrants who don't feel a "shade of gloom" when considering what they've done. Neither scenario sounds very plausible, of course.

The rule of thumb is: if you can imagine it, you can simulate it (because your brain is a simulator). The simulation may not be easy, but at least it's possible.

comment by Vladimir_Nesov · 2008-10-05T09:57:00.000Z · LW(p) · GW(p)

You name specific excuses for why life in the future will be bad for you. It sounds like you see the future as a big abandoned factory, where you are a shadow, and the strange mechanisms do their spooky dance. Think instead of what changes could make the future right specifically for you, with a tremendous amount of effort applied to this goal. You are just a human, so the attention your comfort could get starts far above the order of the whole of humanity thinking about every tiny gesture to make you a little bit more comfortable for millions of years, and thinking about it from your own point of view. There is huge potential for improvement in the joy of life: just as human intelligence is nowhere near the ten orders of magnitude from the top, the human condition is nowhere near the most optimal. You can't trust your estimate of how limited the future will be, of how impossible it will be for the future to find a creative solution to your problem. Give it a try.

comment by Jayson_Virissimo2 · 2008-10-05T10:00:00.000Z · LW(p) · GW(p)

"The claim isn't that Germany would have been perfectly fine, and would never have started a war or done anything else extreme. And the claim is not that Hitler trashed a country that was ticking along happily.

The claim is that the history of the twentieth century would have gone substantially differently. World War II might not have happened. The tremendous role that Hitler's idiosyncrasies played in directing events, doesn't seem to leave much rational room for determinism here."

I disagree. Hitler did not depart very far from the general beliefs of the time. The brand of socialism and nationalism that became what Hitler preached had been growing in prominence for decades, in academia and in the middle-class consciousness. The alliance between the socialists and the conservatives against the liberals probably would have happened whether Hitler was at the top or not.

comment by Vladimir_Nesov · 2008-10-05T10:22:00.000Z · LW(p) · GW(p)

[sorry for ambiguity: thinking for millions of years, not making comfortable for millions of years]

comment by Will_Pearson · 2008-10-05T11:59:00.000Z · LW(p) · GW(p)

It sounds like you see the future as a big abandoned factory, where you are a shadow, and the strange mechanisms do their spooky dance.
I see the future as full of adults, to whom I am a useless child. Or, if Eliezer gets his way, one adult to whom I am an embryo. I can't even help with the equivalent of washing up.

Think instead of what changes could make the future right specifically for you.
I'd like a future where people were on a level with me, so I could be of some meaningful use.

However, a future without massive disparities of power and knowledge between myself and its inhabitants would not be able to revive me from cryo sleep.

comment by Consequentialist · 2008-10-05T12:18:00.000Z · LW(p) · GW(p)

So you don't think you could catch up? If you had been frozen somewhere between 10,000 and 100 years ago and revived now, don't you think you could start learning what the heck it is people are doing and understanding nowadays? Besides, a lot of your pre-freeze life experience would be fully applicable to the present. Everyone starts learning from the point of birth. You'd have headway compared to those who just start out from nothing.
There are things we can meaningfully contribute to even in a Sysop universe filled with Minds. We, after all, are minds too, which have the inherent quality of creativity - creating new, ever more elegant and intricate patterns, at whatever our level is - and of self-improvement; optimization.

Bored? Got nothing meaningful to do? Try science.

comment by Consequentialist · 2008-10-05T12:25:00.000Z · LW(p) · GW(p)

This is a big do-it-yourself project. Don't complain about there not being enough opportunities to do meaningful things. If you don't find anything meaningful to do, that's your failure, not the failure of the universe. Searching for meaningful problems to solve is part of the project.

Correction: headway - I meant to say headstart.


comment by Matthew_C. · 2008-10-05T12:56:00.000Z · LW(p) · GW(p)

I find Eliezer's (and many of the others here) total and complete obsession with the "God" concept endlessly fascinating. I bet you think about "God" more often than the large majority of the nominally religious. This "God" fellow has seriously pwn3d your wetware. . .

comment by Vladimir_Nesov · 2008-10-05T12:58:00.000Z · LW(p) · GW(p)

Giant cheesecake fallacy. Even if the future could do everything you want to do, that doesn't mean it would do so, especially if doing so would be bad for you. If the future decides to let you work on a problem, even though it could solve the problem without you, you can't appeal to the uselessness of your actions: since the future refuses to act, only you can make a difference. You can grow, vastly expanding the number of things you are capable of doing; that source never dwindles. If someone or something else has solved a problem, that doesn't necessarily spoil the fun for everyone else for all eternity. Or perhaps it was worth spoiling the fun to lift poverty and disease, even at the cost of robbing you of the chance to work on those things yourself. Take joy in your personal discoveries. Inequality can only be bad because you are not all you could be, not because there are things greater than you. Seek personal growth, not universal misery. Besides, doing something "inherently worthwhile" is only one facet of life, so even if there were no solution to that problem, there are other wonders worth living for.

comment by Doug_S. · 2008-10-05T13:10:00.000Z · LW(p) · GW(p)

A "head start" in the wrong direction isn't much help.

Imagine a priest in the temple of Zeus, back in Ancient Greece. Really ancient. The time of Homer, not Archimedes. He makes how best to serve the gods the guiding principle of his life. Now, imagine that he is resurrected in the world of today. What do you think would happen to him? He doesn't speak any modern language. He doesn't know how to use a toilet. He'd freak out at the sight of a television. Nobody worships the gods any more. Our world would seem not only strange, but blasphemous and immoral, an abomination that ought to be destroyed. His "head start" has given him far more anti-knowledge than actual knowledge.

Of course, he'd know things about the ancient world that we don't know today, but, even twenty years after he arrived, would you hire him, or a twenty-year-old born and raised in the modern world? From almost every perspective I could think of, it would be better to invest resources in raising a newborn than to recreate and rehabilitate a random individual from our barbaric past.

----

Me, to The Future: Will you allow me to turn myself into the equivalent of a Larry Niven-style wirehead?
The Future: No.
Me: Will you allow me to end my existence, right now?
The Future: No.
Me: Then screw you, Future.

comment by Vladimir_Nesov · 2008-10-05T13:25:00.000Z · LW(p) · GW(p)

Doug: From almost every perspective I could think of, it would be better to invest resources in raising a newborn than to recreate and rehabilitate a random individual from our barbaric past.

No, for him it won't be better. The altruistic aspect of humane morality will help, even if it's more energy-efficient to incinerate you. For that matter, why raise a newborn child instead of making a paperclip?

comment by Matthew_C. · 2008-10-05T13:30:00.000Z · LW(p) · GW(p)

In the interest of helping folks here to "overcome bias", I should add just how creepy it is to outside observers to see the unswervingly devoted members of "Team Rational" post four or five comments to each Eliezer post that consist of little more than homilies to his pronouncements, scattered with hyperlinks to his previous scriptural utterances. Some of the more level-headed here, like HA, have commented on this already. Frankly it reeks of cultism and dogma; the aromas of Ayn Rand, Scientology and Est are beginning to waft from this blog. I think some of you want to live forever so you can grovel and worship Eli for all eternity. . .

Replies from: JohnH
comment by JohnH · 2011-04-22T05:06:01.096Z · LW(p) · GW(p)

"I think some of you want to live forever so you can grovel and worship Eli for all eternity"

The sentence is funnier when one knows that Eli means God in some languages (not Eliezer, but Eli).

I would change it to be that reason or rationalism (or Bayesianism) is the object of worship and Eliezer is the Prophet. It certainly makes pointing out errors (in reasoning) in some of the religion posts a less enticing proposition.

However, it does seem that not everyone is like that. Also, if actual reason is trusted, rather than dogmatic assertions that such and such is reasonable, then error will eventually give way to truth. I certainly believe Eliezer to be mistaken about some things and missing influential observations about others, but, for the most part, what he advocates - such as not shutting down your thinking just because you disagree with something - is correct.

If one's faith in whatever one believes is so fragile that it cannot be questioned, then that is precisely when one's faith needs to be questioned. One should discover what one's beliefs actually are and what is essential to those beliefs, and then see if the questions are bothersome. If so, one should face the questions head on and figure out why, and what they do to one's beliefs.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-05T13:36:00.000Z · LW(p) · GW(p)
I'd like a future where people were on a level with me, so I could be of some meaningful use.

However, a future without massive disparities of power and knowledge between myself and its inhabitants would not be able to revive me from cryosleep.

I already guessed that might be the wish of many people. That's one reason why I would like to acquire the knowledge to deliberately create a single not-person, a Very Powerful Optimization Process. What does it take to not be a person? That is one of those moral questions that runs into empirical confusions. But if I could create a VPOP that did not have subjective experience (or the confusion we name subjective experience), and did not have any pleasure or pain, or valuation of itself, then I think it might be possible to have around a superintelligence that did not, just by its presence, supersede us as an adult; but was nonetheless capable of guarding the maturation of humans into adults, and, a rather lesser problem, capable of reviving cryonics patients.

If there is anything in there that seems like it should be impossible to understand, then remember that mysteries exist in the map, not in the territory.

The only thing more difficult than creating a Friendly AI, involving even deeper moral issues, is creating a child after the right and proper fashion of creating a child. If I do not wish to be a father, I think that it is acceptable for me to avoid it; and certainly the skill to deliberately not create a child would be less than the skill to deliberately create a healthy child.

So yes I do confess: I wish to create something that has no value of itself, so that it can safeguard the future and our choices, without, in its own presence, automatically setting the form of humankind's adulthood according to the design decisions of a handful of programmers, and superseding our own worth. An emergency measure should do as little as possible, and everything necessary; if I can avoid creating a child I should do so, if only because it is not necessary.

The various silly people who think I want to keep the flesh around forever, or constrain all adults to the formal outline of an FAI, are only, of course, making things up; their imagination is not wide enough to understand the concept of some possible AIs being people, and some possible AIs being something else. A mind is a mind, isn't it? A little black box just like yours, but bigger. But there are other possibilities than that, though I can only see them unclearly, at this point.

comment by jamie2 · 2008-10-05T15:04:00.000Z · LW(p) · GW(p)

A.R.: The standard rebuttal is that evil is Man's own fault, for abusing free will.

That only excuses moral evil, not natural evil.

comment by jameson · 2008-10-05T15:53:00.000Z · LW(p) · GW(p)

I was not aware that the universe was broken. If so, can we get a replacement instead? ;-)

Britain is broken, but Cameron's on that case.

Replies from: Delta
comment by Delta · 2012-09-11T16:04:03.452Z · LW(p) · GW(p)

Cameron just made a homeopathy advocate Health Secretary. Maybe the problem was Britain not being broken enough...

comment by Will_Pearson · 2008-10-05T16:52:00.000Z · LW(p) · GW(p)
An emergency measure should do as little as possible, and everything necessary;

The VPOP will abolish, "Good bye," no?

It will obsolete, or profoundly alter the nature of, emergency surgeons, cancer researchers, fundraisers for cancer research, security services, emergency relief workers, existential risk researchers, etc.

Every person on the planet who is trying to act somewhat like an adult will find they are no longer needed to do what is necessary. It doesn't matter that they are obsoleted by a process rather than a person; they are still obsolete. This seems like a step back for the maturation of humanity as a whole. It does not encourage taking responsibility for our actions. Life, and life decisions, would no longer be so important or so meaningful.

You see it as the only way for humanity to survive; if I bought that, I might support your vision, even though I would not necessarily want to live through it.

On a side note, do any of your friends jokingly call you Hanuman? The allusion is unfair, I'll grant you - he is far more insane and cruel than you - but the vision and motivation are eerily similar, at least in the surface details.

comment by Consequentialist · 2008-10-05T17:16:00.000Z · LW(p) · GW(p)

"Frankly it reeks of cultism and dogma,"

Oh, I wouldn't worry about that too much; that's a cunning project underway to enbias Eliezer with delusions-of-grandeur bias, smarter-than-thou bias and whatnot.

Anything to harden our master. :D

comment by Chad2 · 2008-10-05T18:12:00.000Z · LW(p) · GW(p)

"Chad: if you seriously think that Turing-completeness does not imply the possibility of sentience, then you're definitely in the wrong place indeed."

gwern: The implication is certainly there, and it's one I am sympathetic to, but I'd say it's far from proven. The leap in logic there is one that will keep the members of the choir nodding along, but it is not going to win over any converts. A weak argument is a weak argument whether or not you agree with the conclusion it reaches -- it's better for the cause if the arguments are held to higher standards.

comment by Barry_Kelly · 2008-10-05T18:24:00.000Z · LW(p) · GW(p)

Zubon,

"If you want a sufficient response to optimism, consider: is the probability that you will persist forever 1? If not, it is 0."

You're only correct if the probability is constant with respect to time. Consider, however, that some uncertain events retain a non-zero probability of never happening even as infinite time passes. For example, random walks in three dimensions (or more) are not guaranteed ever to return to their origin, even over infinite time.
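
To illustrate (a minimal sketch of mine, assuming the simplest cubic-lattice walk; by Polya's recurrence theorem the true probability of ever returning is only about 0.34 in three dimensions, so the chance of never returning stays positive forever):

```python
import random

def returns_to_origin(steps=10_000):
    """Run one 3D lattice walk; report whether it revisits the origin within `steps`."""
    x = y = z = 0
    for _ in range(steps):
        axis = random.randrange(3)
        delta = random.choice((-1, 1))
        if axis == 0:
            x += delta
        elif axis == 1:
            y += delta
        else:
            z += delta
        if x == y == z == 0:
            return True
    return False

random.seed(0)
trials = 500
hits = sum(returns_to_origin() for _ in range(trials))
# Prints roughly 0.3 -- well short of 1, unlike a walk on the line or the plane.
print(f"fraction returning within 10,000 steps: {hits / trials:.2f}")
```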

comment by Caledonian2 · 2008-10-05T20:27:00.000Z · LW(p) · GW(p)

gwern: The implication is certainly there and it's one I am sympathetic with, but I'd say its far from proven.
1) Consciousness exists. 2) There are no known examples of 'infinite' mathematics in the universe. 3) It is therefore more reasonable to say that consciousness can be constructed with non-infinite mathematics than to postulate that it can't.

Disagree? Give us an example of a phenomenon that cannot be represented by a Turing Machine, and we'll talk.

Replies from: JohnH
comment by JohnH · 2011-04-22T05:25:18.256Z · LW(p) · GW(p)

I may hold a different belief, but this is certainly a working hypothesis and one that should be explored to the fullest extent possible. That is, I am not inclined to believe that we are Turing machines, but I could be wrong on this, as I do not know it to be the case. Even if we are not Turing machines, exploring the hypothesis that we are is still worth pursuing, as it will get us closer to understanding what it is we are.

Turing machines rely on a tape of infinite length, at least in conception. I imagine the theory has been looked at with tapes of finite length?

comment by Recovering_irrationalist · 2008-10-05T22:13:00.000Z · LW(p) · GW(p)
Eliezer: imagine that you, yourself, live in a what-if world of pure mathematics

Isn't this true? It seems the simplest solution to "why is there something rather than nothing". Is there any real evidence against our apparently timeless, branching physics being part of a purely mathematical structure? I wouldn't be shocked if the bottom was all Bayes-structure :)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-06T03:26:00.000Z · LW(p) · GW(p)

RI, it shouldn't literally be Bayes-structure because Bayes-structure is about inference is about mind. I have certainly considered the possibility that what-if is all there is; but it's got some problems. Just because what-if is something that humans find deductively compelling does not explain how or why it exists Platonically - to suppose that it is necessary just because you can't find yourself not believing it, hardly unravels the mystery. And much worse, it doesn't explain why we find ourselves in a low-entropy universe rather than a high-entropy one (why our memories are so highly ordered).

Consequentialist, I just had a horrifying vision of a rationalist cult where the cultists are constantly playing pranks on the Master out of a sincere sense of duty.

I worry a bit less about being a cult leader since I noticed that Paul Graham also has a small coterie following him around accusing him of being a cult leader, and he's not even trying to save the world. In any case, I've delivered such warnings as I have to offer on both the nature of cultish thinking, and the traps that await those who are terrified of "being in a cult"; that's around as much as I can do.

comment by Court · 2008-10-06T03:58:00.000Z · LW(p) · GW(p)

Eliezer,
I'm a little disappointed, frankly. I would have thought you'd be over both God and the Problem of Evil by now. Possibly it goes to show just how difficult it is for people raised as (or by) theists to kill God in themselves.

But possibly you'll get there as you go along. I'd tell you what that was like but I don't know myself yet.

comment by Chad2 · 2008-10-06T05:09:00.000Z · LW(p) · GW(p)

Caledonian:

In an argument that is basically attempting to disprove the existence of God, it seems a little disingenuous to me to include premises that effectively rule out God's existence. If you aren't willing to at least allow the possibility of dualism for the sake of argument, then why bother talking about God at all?

Also, I am not sure what your notion of "infinite" mathematics is about. Can you elaborate or point me to some relevant resources?

comment by Doug_S. · 2008-10-06T06:16:00.000Z · LW(p) · GW(p)

No, for him it won't be better.

Well, there's also the perspective of the newborn and the person it grows up into; if we consider that perspective, it probably would prefer that it exists. I don't want The Future to contain "me"; I want it to contain someone better than "me". (Or at least happier, considering that I would prefer to not have existed at all.) And I really doubt that my frozen brain will be of much help to The Future in achieving that goal.

comment by Ben_Jones · 2008-10-06T09:07:00.000Z · LW(p) · GW(p)

the probabilities for cryonics look good.

They don't have to look good, they just have to beat the probabilities of your mind surviving the alternatives. Current alternatives: cremation, interment, scattering over your favourite football pitch. Currently I'm wavering between cryonics and Old Trafford.

Eliezer, I'm ridiculously excited about the next fifty years, and only slightly less excited about the fun theory sequence. Hope it chimes with my own.

comment by Vladimir_Golovin · 2008-10-06T10:38:00.000Z · LW(p) · GW(p)

Excellent post, agree with every single line of it. It's not depressing for me -- I went through that depression earlier, after finally understanding evolution.

One nitpick -- I find the question at the end of the text redundant.

We already know that all this world around us is just an enormous pattern arising out of physically determined interactions between particles, with no 'essence of goodness' or other fundamental forces of this kind.

So the answer to your question seems obvious to me -- if we don't like the patterns we see around us (including ourselves, no exceptions), all we need to do is use physics to arrange particles into a certain pattern (for example, a superintelligence) that will, in the future, produce patterns desirable for us. That's all.

Eliezer, what's your answer to the question?

comment by haig2 · 2008-10-06T11:29:00.000Z · LW(p) · GW(p)

On the existential question of our pointless existence in a pointless universe, my perspective tends to oscillate between two extremes:

1.) In the more pessimistic (and currently the only rationally defensible) case, I view my mind and existence as just a pattern of information processing on top of messy organic wetware, and that is all 'I' will ever be. Uploading is not immortality; it's just duplicating that specific mind pattern at that specific time instance. An epsilon unit of time after the 'upload' event, that mind pattern is no longer 'me' and will quickly diverge as it acquires new experiences. An alternative would be a destructive copy, where the original copy of me (i.e. the me that is typing this right now) is destroyed after or at the instant of upload. Or I might gradually replace each synapse of my brain one by one with a simulator wirelessly transmitting the dynamics to the upload computer until all of 'me' is in there and the shell of my former self is just discarded. Either way, 'I' is destroyed eventually -- maybe uploading is a fancier way of preserving one's thoughts for posterity, just as creating culture and forming relationships are pre-singularity -- but it does not change the fact that the original meatspace brain is eventually going to be destroyed, no matter what.

The second case, what I might refer to as an optimistic appeal to ignorance, is to believe that, though the universe appears pointless according to our current evidence, there may be some data point in the future that reveals something more that we are ignorant of at the moment. Though our current map reveals a neutral territory, the map might be incomplete. One speculative position taken directly from physics is the idea that I am a Boltzmann Brain. If such an idea can be taken seriously (and it is), then surely there are other theoretically defensible positions in which my consciousness persists in some timeless form one way or another. (Even Bostrom's simulation argument gives another avenue of possibility.)

I guess my two positions can be simplified into:
1.) What we see is all there is and that's pretty fucked up, even in the best case scenario of a positive singularity.

2.) We haven't seen the whole picture yet, so just sit back, relax, and as long as you have your towel handy, don't panic.

comment by Alex9 · 2008-10-06T12:42:00.000Z · LW(p) · GW(p)

RE: the reek of cultishness and dogma, I agree.

Regardless of whether you want to argue that being in a cult might be OK, or not anything to worry about, the fact is this sort of thing doesn't look good to other people. You're going to win many converts -- at least the kind you want -- by continuing to put on quasi-religious, messianic airs, and welcoming the sort of fawning praise that seems to come up a lot in the comments here. There's obviously some sharp thinking going on in these parts, but you guys need to pay a bit more attention to your PR.

comment by Alex9 · 2008-10-06T12:44:00.000Z · LW(p) · GW(p)

Not going to win, that should read.

comment by pdf23ds · 2008-10-06T13:21:00.000Z · LW(p) · GW(p)

by continuing to put on quasi-religious, messianic airs

Huh. Let a guy have a bit of poetic license now and then, eh? I really don't see what you mean.

comment by Alice · 2008-10-06T14:15:00.000Z · LW(p) · GW(p)

The request that we should 'fix the world' suggests that a.) we know that it is broken and b.) we know how to fix it; I am not so sure that this is the case. When one says 'X is wrong/unfair/undesirable etc.', one is more often than not actually making a statement about one's state of mind rather than the state of reality, i.e., one is saying 'I think or feel that X is wrong/unfair/undesirable'. Personally, I don't like to see images of suffering and death, but I'm not sure that my distaste for suffering and death is enough to confidently assert that they are wrong or that they should be avoided. For example, without the facility for pain that leads to suffering we probably wouldn't make it past infancy, and without death the world would be even more overpopulated than it is currently. No matter how rigorous and free from preconditioned religious thinking our reasoning is, 'what we would like to see' is still a matter of personal taste to some extent. Feeling and reasoning interact in such an intricate and inseparable way that, while one may like to think one has reached conclusions about right/wrong/good/bad/just/fair etc. in a wholly dispassionate and rational way, it is likely that personal feelings have slipped in there unnoticed and unquestioned and added a troublesome bias.

comment by Consequentialist · 2008-10-06T14:48:00.000Z · LW(p) · GW(p)

Alice, can't tell crap from great? Don't worry, 90% of people share your inability. Why? Because 90% of everything is crap. (Sturgeon's law)

Let's fix the things that are obviously crap first. After that, we'll address the iffy things.

Towards less crappy greatness.

comment by Alex9 · 2008-10-06T15:23:00.000Z · LW(p) · GW(p)

You've said the bit about Paul Graham twice now in this thread; do you actually consider that good reasoning, or are you merely being flip? Paul Graham's followers may or may not be cultish to some degree, but that doesn't bear on the question of whether your own promotional strategies are sound ones. Let me put it this way: you will need solid, technically-minded, traditionally-trained scientists and engineers in your camp if you ever hope to do the things you want to do. The mainstream science community, as a matter of custom, doesn't look favorably upon uncredentialed lone wolves making grandiose pronouncements about "saving the world." This smacks scarily of religion and quackery. Like it or not, credibility is hugely important; be very careful about frittering it away.

comment by Alice · 2008-10-06T15:47:00.000Z · LW(p) · GW(p)

I take your point...if your point is 'we gotta start somewhere'. Nonetheless, the use of 'obviously' is problematic and misleading. To whom is it obvious? To you? Or perhaps to you and your friends, or to you and other people on the internet who tend to think in the same way as you and with whom you generally agree? Don't get me wrong, I have a very clear idea of what I think is crap (and I strongly suspect it'd be similar to yours) and I'm just as keen to impose my vision of the 'uncrap' on the world as the next person. However, I can't help but be troubled by the thought that the mass murder of Jews, gypsies, the mentally retarded and homosexuals was precipitated by the fact that Hitler et al. thought it was 'obvious' that they were crap and needed fixing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-06T15:54:00.000Z · LW(p) · GW(p)

Whoops! I thought that comment had been swallowed by the ether, so I said it again. Turns out it's on the previous page. Dup has been deleted.

comment by Consequentialist · 2008-10-06T15:57:00.000Z · LW(p) · GW(p)

On the unfairness of existence:

Those who (want to) understand and are able, joyously create things that have always existed as potentials.

Those who don't (want to) understand and can't do anything real, make stuff up that never was possible and never will be.

The former last forever in eternal glory, spanning geological timescales and civilizations, for the patterns they create are compatible with the structure of the universe and sustained by it, while oblivion is reserved for the latter.

Science. The real stuff. Be all you can be.

comment by Consequentialist · 2008-10-06T16:00:00.000Z · LW(p) · GW(p)

Science. Keepin' it real.

comment by Caledonian2 · 2008-10-06T16:17:00.000Z · LW(p) · GW(p)
In an argument that is basically attempting to disprove the existence of God, it seems a little disingenuous to me to include premises that effectively rule out God's existence.

How exactly can you construct a disproof of X without using premises that rule out X? That's what disproving is.

Non-infinite mathematics: also known as finite mathematics, also known as discrete mathematics. Non-continuum. Not requiring the existence of the real numbers.

To the best of our knowledge, reality only seems to require the integers, although constructing models that way is a massive pain. If you can give us an example of a physical phenomenon that cannot be generated by a Turing Machine's output -- just one example -- then I will grant that we have no grounds for presuming human cognition can be so constructed. Also, you'll win several Nobel Prizes and go down in history as one of the greatest scientist-thinkers ever.

I'm not holding my breath.

Replies from: JohnH
comment by JohnH · 2011-04-22T05:37:30.519Z · LW(p) · GW(p)

You can accept X as a premise and arrive at a contradiction of X with the other accepted premises. Arriving at something that merely seems absurd may also be grounds for doubting X, but it doesn't disprove X. It might also be possible to prove that both X and ~X are consistent with the other premises, which, if the desire is to disprove X, should be enough to safely ignore the possibility that X is correct without further information.

I think that for the Turing Machine part of this, P vs. NP would need to be resolved first, so he would also win a million dollars (or, if P = NP, then depending on his preferences he might not want to publish, and could instead use his code to solve every open question out there and get himself pretty much as much money as he wished).

Replies from: khafra
comment by khafra · 2011-04-22T18:51:33.197Z · LW(p) · GW(p)

Caledonian was a combination contrarian and curmudgeon back in the OvercomingBias days, and hasn't been around in years; so you probably won't get a direct reply.

However, if I understand this comment correctly as a follow-up to this one, you may want to look into the Church-Turing Thesis. The theory "physics is computable" is still somewhat controversial, but it has a great deal of support. If physics is computable, and humans are made out of physics, then by the Church-Turing Thesis, humans are Turing Machines.

Replies from: JohnH
comment by JohnH · 2011-04-22T19:59:22.310Z · LW(p) · GW(p)

I am actually familiar with the Church-Turing Thesis, as well as both Godel's incompleteness proof and the Halting problem. The theory that humans are Turing machines is one that needs to be investigated.

Replies from: AnthonyC
comment by AnthonyC · 2015-12-13T20:38:11.475Z · LW(p) · GW(p)

"The theory that humans are Turing machines is one that needs to be investigated."

Yes, but that question isn't necessarily where we need to start. It is a subset of a possibly much simpler problem: are the laws of physics Turing computable? If so, then humans cannot do anything Turing machines cannot do. If not, then the human-specific question remains open.

We don't know, and there are many relevant-but-not-too-convincing arguments either way.

The laws of physics as generally taught are both continuous (symmetries, calculus in QM) and quantum (discrete allowed particle states, the Planck length, linear algebra in QM).

No one has ever observed nature calculating with an uncomputable general real (or complex) variable (how could we, with human minds and finitely precise instrumentation?), while any computable algebraic or transcendental number seems to be fair game. But building a model of physics that rules out general real variables is apparently much more difficult.

Even if there are general real variables in physics, they may only arise as a result of previous general real variables, in which case whatever-the-universe-runs-on may be able to handle them symbolically instead of explicitly. Does anyone here who decides to read my many-years-late wall of text have any idea what the implications of this one would be? Possibly it would allow construction of architectures that are not Turing-computable but also not fully general, limited by whatever non-Turing-computable stuff happens to have always existed?

If space and time are quantized (digital), that makes for even more trouble with special and general relativity - are the Planck length/time/etc. somehow reference-frame dependent?

Also, see http://lesswrong.com/lw/h9c/can_somebody_explain_this_to_me_the_computability/

comment by Consequentialist · 2008-10-06T16:17:00.000Z · LW(p) · GW(p)

Science. Live a life with a purpose.
Science. Live a life worth living.

comment by Will_Pearson · 2008-10-06T17:15:00.000Z · LW(p) · GW(p)
is creating a child after the right and proper fashion of creating a child. If I do not wish to be a father, I think that it is acceptable for me to avoid it; and certainly the skill to deliberately not create a child would be less than the skill to deliberately create a healthy child.

I would agree; I am not trying to create a child either. I'm trying to create brain stuff, and figure out how to hook it up to a human so that it becomes aligned with that human's brain. Admittedly it is giving more power to children, but I think the only feasible way to get adults is for humans to gain power and the self-knowledge of what we are, and grow up in a painful fashion. We have been remarkably responsible with nukes, and I am willing to bet on humanity becoming more responsible as it realises how powerful it really is. You and SIAI are good data points for this supposition.

After my degree, going back 9 years or so, I spent some time thinking about seed AIs of a variety of fashions, though not calling them such. I could see no way for a pure seed AI to be stable unless it starts off perfect. To exist is to be imperfect, from everything I have learnt of the world. You always see the edited highlights of people's ruminations, so when you see us disagree you think we have not spent time on it.

That is also probably part of why I don't want to be in the future: I see it as needing us to be adults, and my poor simian brain is not suited to perpetual responsibility, nor willing to change to become so.

comment by Chad2 · 2008-10-06T18:52:00.000Z · LW(p) · GW(p)

>How exactly can you construct a disproof of X without using
>premises that rule out X? That's what disproving is.

Sure, a mathematical proof proceeds from its premises, and therefore any results achieved are entailed in those premises. I am not sure we are really in the realm of pure mathematics here, but I probably should have been more precise in my statement. In a non-mathematical discussion, a slightly longer chain of reasoning is generally preferred -- starting with the premise that dualism is false is a little uncomfortably close, for my taste, to starting with the premise that God doesn't exist.

>If you can give us an example of a physical phenomenon that
>cannot be generated by a Turing Machine's output -- just
>one example -- then I will grant that we have no grounds
>for presuming human cognition can be so constructed.

For the record (not that I have any particular standing worthy of note): I am not a dualist, and I believe 100% that human cognition is a physical phenomenon that could be captured by a sufficiently complex Turing Machine. Can I prove this to be the case? No, I can't, and I don't really care to try -- it's likely "above my level". The only reason I piped up at all is because I think strawman arguments are unconvincing and do a disservice to everyone.

>Also, you'll win several Nobel Prizes and go down in history
>as one of the greatest scientist-thinkers ever.
>I'm not holding my breath.

Don't worry -- I have no such aspiration, so you can comfortably continue with your respiration.

comment by Random_Passerby · 2008-10-06T22:25:00.000Z · LW(p) · GW(p)

"Bayesian cult encourages religious people to kill God in themselves" - how's that for a newspaper headline?

P.S. I'd delete this comment after a certain amount of time; you might not want it to get cached by Google or something.

comment by Phil_Goetz · 2008-10-07T00:12:00.000Z · LW(p) · GW(p)

The various silly people who think I want to keep the flesh around forever, or constrain all adults to the formal outline of an FAI, are only, of course, making things up; their imagination is not wide enough to understand the concept of some possible AIs being people, and some possible AIs being something else.
Presuming that I am one of these "silly people": Quite the opposite, and it is hard for me to imagine how you could fail to understand that from reading my comments. It is because I can imagine these things, and see that they have important implications for your ideas, and see that you have failed to address them, that I infer that you are not thinking about them.

And this post reveals more failings along those lines; imagining that death is something too awful for a God to allow is incompatible with viewing intelligent life in the universe as an extended system of computations, and again suggests you are overly-attached to linking agency and identity to discrete physical bodies. The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely; this, also, is evidence of not thinking deeply about identity in deep time. The way you speak about the danger facing you - not the danger facing life, which I agree with you about; but the personal danger of death - suggests that you want to personally live on beyond the Singularity; whereas more coherent interpretations of your ideas that I've heard from Mike Vassar imply annihilation or equivalent transformation of all of us by the day after it. It seems most likely to me either that you're intentionally concealing that the good outcomes of your program still involve the "deaths" of all humans, or that you just haven't thought about it very hard.

What I've read of your ideas for the future suffers greatly from your not having worked out (at least on paper) notions of identity and agency. You say you want to save people, but you haven't said what that means. I think that you're trying to apply verbs to a scenario that we don't have the nouns for yet.

It is extraordinarily difficult to figure out how to use volunteers. Almost any nonprofit trying to accomplish a skilled-labor task has many more people who want to volunteer their time than they can use. The Foresight Institute has the same problem: People want to donate time instead of money, but it's really, really hard to use volunteers. If you know a solution to this, by all means share.
The SIAI is Eliezer's thing. Eliezer is constitutionally disinclined to value the work of other people. If the volunteers really want to help, they should take what I read as Eliezer's own advice in this post, and start their own organization.

comment by Recovering_irrationalist · 2008-10-07T00:25:00.000Z · LW(p) · GW(p)
it doesn't explain why we find ourselves in a low-entropy universe rather than a high-entropy one

I didn't think it would solve all our questions; I just wondered if it was both the simplest solution and lacking good evidence to the contrary. Would there be a higher chance of being a Boltzmann brain in a universe identical to ours that happened to be part of a what-if-world? If not, how is all this low entropy around me evidence against it?

Just because what-if is something that humans find deductively compelling does not explain how or why it exists Platonically

How would our "Block Universe" look different from the inside if it was a what-if-Block-Universe? It all adds up to...

Not trying to argue, just curious.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-10-07T02:07:00.000Z · LW(p) · GW(p)
How would our "Block Universe" look different from the inside if it was a what-if-Block-Universe? It all adds up to...

I'm not saying this is wrong, but in its present form, isn't it really a mysterious answer to a mysterious question? If you believed it, would the mystery seem any less mysterious?

comment by Phil_Goetz · 2008-10-07T02:14:00.000Z · LW(p) · GW(p)

suggests that you want to personally live on beyond the Singularity; whereas more coherent interpretations of your ideas that I've heard from Mike Vassar imply annihilation or equivalent transformation of all of us by the day after it
Oops. I really should clarify that Mike didn't mention annihilation. That's my interpretation/extrapolation.

comment by steven · 2008-10-07T04:37:00.000Z · LW(p) · GW(p)

Eliezer, doesn't "math mysteriously exists and we live in it" have one less mystery than "math mysteriously exists and the universe mysteriously exists and we live in it"? (If you don't think math exists it seems like you run into indispensability arguments.)

IIRC the argument for a low-entropy universe is anthropic, something like "most non-simple universes with observers in them look like undetectably different variants of a simple universe rather than universes with dragons in them".

comment by steven · 2008-10-07T05:53:00.000Z · LW(p) · GW(p)

Alastair Malcolm:

in any comparison of all possible combinations of bit/axiom strings up to any equal finite (long) length (many representing not only a world but also (using 'spare' string segments inside the total length) extraneous features such as other worlds, nothing in particular, or perhaps 'invisible' intra-world entities), it is reasonable to suppose that the simplest worlds (ie those with the shortest representing string segments) will occur most often across all strings, since they will have more 'spare' irrelevant bit/axiom combinations up to that equal comparison length, than those of more complex worlds (and so similarly for all long finite comparison lengths).

Thus out of all worlds inhabitable by SAS's, we are most likely to be in one of the simplest (other things being equal) - any physics-violating events like flying rabbits or dragons would require more bits/axioms to (minimally) specify their worlds, and so we should not expect to find ourselves in such a world, at any time in its history.

(SAS means self-aware substructure)
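
The counting step behind this can be spelled out (my notation, not Malcolm's). Among all bit strings of some fixed long length N, a world whose minimal specification takes k bits is represented by every string that begins with that specification, so its share of the strings is

$$\frac{2^{\,N-k}}{2^{N}} = 2^{-k},$$

which falls off exponentially in k and does not depend on N. Physics-violating extras like flying rabbits or dragons cost additional bits of specification, hence an exponentially smaller share - which is the sense in which, other things being equal, we should expect to find ourselves in one of the simplest observer-containing worlds.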

comment by Tim_Tyler · 2008-10-07T06:36:00.000Z · LW(p) · GW(p)

Re: The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely [...]

Er... ;-) Many futurists seem to have it in for death. Bostrom, Kurzweil, Drexler, spring to mind. To me, the main problem seems to be uncopyable minds. If we could change our bodies like a suit of clothes, the associated problems would mostly go away. We will have copyable minds once they are digital.

comment by Recovering_irrationalist · 2008-10-07T12:09:00.000Z · LW(p) · GW(p)
I'm not saying this is wrong, but in its present form, isn't it really a mysterious answer to a mysterious question? If you believed it, would the mystery seem any less mysterious?

Hmm. You're right.

Darn.

comment by Phil_Goetz · 2008-10-07T17:05:00.000Z · LW(p) · GW(p)

Re: The way you present this, as well as the discussion in the comments, suggests you think "death" is a thing that can be avoided by living indefinitely [...]

Er... ;-) Many futurists seem to have it in for death. Bostrom, Kurzweil, Drexler, spring to mind. To me, the main problem seems to be uncopyable minds. If we could change our bodies like a suit of clothes, the associated problems would mostly go away. We will have copyable minds once they are digital.


"Death" as we know it is a concept that makes sense only because we have clearly-defined locuses of subjectivity.

If we imagine a world where

- you can share (or sell) your memories with other people, and borrow (or rent) their memories

- most of "your" memories are of things that happened to other people

- most of the time, when someone is remembering something from your past, it isn't you

- you have sold some of the things that "you" experienced to other people, so that legally they are now THEIR experiences and you may be required to pay a fee to access them, or to erase them from your mind

- you make, destroy, augment, or trim copies of yourself on a daily basis; or loan out subcomponents of yourself to other people while borrowing some of their components, according to the problem at hand, possibly by some democratic (or economic) arbitration among "your" copies

- and you have sold shares in yourself to other processes, giving them the right to have a say in these arbitrations about what to do with yourself

- "you" subcontract some of your processes - say, your computation of emotional responses - out to a company in India that specializes in such things

- which is advantageous from a lag perspective, because most of the bandwidth-intensive computation for your consciousness usually ends up being distributed to a server farm in Singapore anyway

- and some of these processes that you contract out are actually more computationally intensive than the parts of "you" that you own/control (you've pooled your resources with many other people to jointly purchase a really good emotional response system)

- and large parts of "you" are being rented from someone else; and you have a "job" which means that your employer, for a time, owns your thoughts - not indirectly, like today, but is actually given write permission into your brain and control of execution flow while you're on the clock

- but you don't have just one employer; you rent out parts of you from second to second, as determined by your eBay agent

- and some parts of you consider themselves conscious, and are renting out THEIR parts, possibly without notifying you

- or perhaps some process higher than you in the hierarchy is also conscious, and you mainly work for it, so that it considers you just a part of itself, and can make alterations to your mind without your approval (it's part of the standard employment agreement)

- and there are actually circular dependencies in the graph of who works for whom, so that you may be performing a computation that is, unknown to you, in the service of the company in India calculating your emotional responses

- and these circles are not simple circles; they branch and reconverge, so that the computation you are doing for the company in India will be used to help compute the emotions of trillions of "people" around the world


In such a world, how would anybody know if "you" had died?

comment by Tim_Tyler · 2008-10-07T17:48:00.000Z · LW(p) · GW(p)

That sounds like the "One Big Organism" concept. Nick Bostrom has also written about that - e.g. see his What is a Singleton?

The fictional Borg work similarly, I believe. Death would become rather like cutting your toenails.

comment by Nate_Barna · 2008-10-07T17:57:00.000Z · LW(p) · GW(p)

Phil: [. . .] In such a world, how would anybody know if "you" had died?
Perhaps anyone else knowing whether you're alive or dead wouldn't matter. You die when you lose sufficient component magnitudes and claim strengths on your components. If you formulate the sufficient conditions, you know what counts as death for your decisions, and thus for you. If you formulate that sufficiency also as an instance in a greater network, you and others know what counts as death for you. In either case, unless you're dying to be suicidally abstract, you're somebody and you know what it means for you to die.

comment by Phil_Goetz · 2008-10-07T19:27:00.000Z · LW(p) · GW(p)

Tim -

What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. It is a machine analogy for life and intelligence. A machine is a collection of parts, all working together under one common control to one common end. Living systems, by contrast, and particularly large evolving systems such as ecosystems or economies, work best, in our experience, if they do not have centralized control, but have a variety of competing agents, and some randomness.

There are a variety of proposals floating about for ways to get the benefits of competition without actually having competition. The problem with competition is that it opens the doors to many moral problems. Eliezer may believe that correct Bayesian reasoners won’t have these problems, because they will agree about everything. This ignores the fact that it is not computationally efficient, physically possible, or even semantically possible (the statement is incoherent without a definition of “agent”) for all agents to have all available information. It also ignores the fact that randomness, and using a multitude of random starts (in competition with each other), are very useful in exploring search spaces.

I don't think we can eliminate competition; and I don't think we should, because most of our positive emotions were selected for by evolution only because we were in competition. Removing competition would unground our emotional preferences (eg, loving our mates and children, enjoying accomplishment), perhaps making their continued presence in our minds evolutionarily unstable, or simply superfluous (and thus necessarily to be disposed of, because the moral imperative I have most confidence that a Singleton would follow is to use energy efficiently).

The concept of a singleton is misleading, because it makes people focus on the subjectivity (or consciousness; I use these terms as synonyms) of the top level in the hierarchy. Thus, just using the word Singleton causes people to gloss over the most important moral questions to ask about a large hierarchical system. For starters, where are the locuses of consciousness in the system? Saying “just at the top” is probably wrong.

Imagining a future that isn’t ethically repugnant requires some preliminary answers to questions about consciousness, or whatever concept we use to determine what agents need to be included in our moral calculations. One line of thought is to impose information-theoretical requirements on consciousness, such as that a conscious entity has exactly one possible symbol grounding connecting its thoughts to the outside world. You can derive lower bounds for consciousness from this supposition. Another would be to posit that the degree of consciousness is proportional to the degree of freedom, and state this with an entropy measurement relating a processes’ inputs to its possible outputs.

Having constraints such as these would allow us to begin to identify the agents in a large, interconnected system; and to evaluate our proposals.

I'd be interested in whether Eliezer thinks CEV requires a singleton. It seems to me that it does. I am more in favor of an ecosystem or balance-of-power approach that uses competition, than a totalitarian machine that excludes it.

comment by nv · 2008-10-07T20:12:00.000Z · LW(p) · GW(p)

>To exist is to be imperfect
A thing that philosophical types like to do, and that I dislike, is making claims about what it is to exist in general, claims that presumably would apply to all minds or 'subjects', when in fact those claims concern at most only the particular Homo Sapiens condition and are based only on the experiences of one particular Homo Sapiens.

>However, I can't help but be troubled by the thought that the
>mass murder of Jews, gypsies, the mentally retarded and
>homosexuals was precipitated by the fact that Hitler et al.
>thought it was 'obvious' that they were crap and needed fixing.
To point out the obvious, Alice at least judges Hitler's actions as crap, judges the imposition on the world of values that have not been fully considered as crap, and would like to impose those values on the world. This was a good post on the subject: http://www.overcomingbias.com/2008/08/invisible-frame.html

comment by Will_Pearson · 2008-10-07T20:54:00.000Z · LW(p) · GW(p)
A thing that philosophical types like to do, and that I dislike, is making claims about what it is to exist in general, claims that presumably would apply to all minds or 'subjects', when in fact those claims concern at most only the particular Homo Sapiens condition and are based only on the experiences of one particular Homo Sapiens.

My claim is mainly based on physics of one sort or another. For one, the second law of thermodynamics: all systems will eventually degrade to whatever is most stable. Neutrons, IIRC. And unless a set of neutrons in thermodynamic equilibrium happens to be your idea of perfection, or your idea of perfection is impermanent, my statement stands.

Another one is acting: a quantum system decoheres, or splits the universe into two possible worlds. The agent doesn't know which of the possible worlds it is in (unless it happens to have a particle in superposition with the decohering system), so it has to split the difference and act as if it could be in either. As such, it is imperfect.

comment by Tim_Tyler · 2008-10-07T21:04:00.000Z · LW(p) · GW(p)

What I described involves some similar ideas, but I find the notion of a singleton unlikely, or at least suboptimal. It is a machine analogy for life and intelligence. A machine is a collection of parts, all working together under one common control to one common end. Living systems, by contrast, and particularly large evolving systems such as ecosystems or economies, work best, in our experience, if they do not have centralized control, but have a variety of competing agents, and some randomness.

The idea of one big organism is not really that it will be, in some sense "optimal". It's more that it might happen - e.g. as the result of an imbalance of the powers at the top.

We have very limited experience with political systems. The most we can say is that, so far, we haven't managed to get communism to be as competitive as capitalism. However, that may not matter. If all living systems fuse, they won't have any competition, so how well they operate together would not be a big issue.

In theory, competition looks very bad. Fighting with each other can't possibly be efficient. Almost always, battles should be done under simulation - so the winner can be determined early - without the damage and waste of a real fight. There's a huge drive towards cooperation - as explained by Robert Wright.

comment by Phil_Goetz · 2008-10-07T21:38:00.000Z · LW(p) · GW(p)

In theory, competition looks very bad. Fighting with each other can't possibly be efficient. Almost always, battles should be done under simulation - so the winner can be determined early - without the damage and waste of a real fight. There's a huge drive towards cooperation - as explained by Robert Wright.
We're talking about competition between optimization processes. What would it mean to be a simulation of a computation? I don't think there is any such distinction. Subjectivity belongs to these processes; and they are the things which must compete. If the winner could be determined by a simpler computation, you would be running that computation instead; and the hypothetical consciousness that we were talking about would be that computation instead.

comment by Tim_Tyler · 2008-10-07T22:05:00.000Z · LW(p) · GW(p)

If the winner could be determined by a simpler computation, you would be running that computation instead [...]

Well, that's the point. Usually it can be, and often we're not. There's a big drive towards virtualising combat behaviour in nature. Deer snort at each other, sea lions bellow - and so on: signalling who is going to win without actually fighting. Humans do the same thing with national sports - and with companies - where a virtual creature dies, and the people mostly walk away. But we are still near the beginning of the curve. There are still many fights, and a lot of damage done. Huge improvements in this area could be made.

comment by Phil_Goetz · 2008-10-07T22:18:00.000Z · LW(p) · GW(p)

Tim - I'm asking the question whether competition, and its concomitant unpleasantness (losing, conflict, and the undermining of CEV's viability), can be eliminated from the world. Under a wide variety of assumptions, we can characterize all activities, or at least all mental activities, as computational. We also hope that these computations will be done in a way such that consciousness is still present.

My argument is that optimization is done best by an architecture that uses competition. The computations engaged in this competition are the major possible loci for consciousness. You can't escape this by saying that you will simulate the competition, because this simulation is itself a computation. Either it is also part of a possible locus of consciousness, or you have eliminated most of the possible loci of consciousness, and produced an active but largely "dead" (unconscious) universe.

comment by Hopefully_Anonymous3 · 2008-10-07T22:22:00.000Z · LW(p) · GW(p)

Alex, I admit I hope the fawning praisers, who are mostly anonymous, are Eliezer's sockpuppets, rather than a dozen or more people on the internet who read Eliezer's posts and feel some desire to fawn. But it's mostly an aesthetic preference - I can't say it makes a real difference in accomplishing shared goals, beyond being a mild waste of time and energy.

comment by S._Puppet · 2008-10-07T23:16:00.000Z · LW(p) · GW(p)

So. Hard. To. Say. Anything. Bad. About. Eliezer.

:)

Aren't you a bit biased here? If one expresses positive views about Eliezer, that's fawning, obsequiousness, or some other rather exaggerated word, but negative views and critique are just business as usual. As usual.

It would be better if talking about people ceased and ideas and actions got 100% attention.

Remove the talk about people from politics and what's left? Policies? I don't know what the people/policies ratio in political discussion in the media is, but often it feels like most of the time is spent on talking about the politicians, not about policies. I guess it's supposed to be that way.

comment by Tim_Tyler · 2008-10-08T08:02:00.000Z · LW(p) · GW(p)

My argument is that optimization is done best by an architecture that uses competition.

Optimization is done best by an architecture that performs trials, inspects the results, makes modifications and iterates. No sentient agents typically need to be harmed during such a process - nor do you need multiple intelligent agents to perform it.

Remember the old joke: "Why is there only one Monopolies Commission?"

The evidence for the advantages of competition is best interpreted as a lack of our ability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that so obviously involves. Companies developing competing products to fill a niche, in ignorance of each other's efforts, is often the stupid waste of time that it seems. In the future, our management skills will improve.
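
The trial-inspect-modify-iterate loop described above is easy to write down; here is a minimal single-agent sketch of mine (toy objective chosen purely for illustration, no competing agents anywhere):

```python
import random

def iterate_and_improve(objective, start, step=0.1, rounds=10_000):
    """Single-agent optimization: perform a trial, inspect it, keep what helps."""
    best_x, best_val = start, objective(start)
    for _ in range(rounds):
        trial = best_x + random.gauss(0.0, step)   # perform a trial modification
        val = objective(trial)                     # inspect the result
        if val > best_val:                         # iterate on improvements only
            best_x, best_val = trial, val
    return best_x, best_val

# Toy objective: a single smooth peak at x = 3.
peak = lambda x: -(x - 3.0) ** 2

random.seed(0)
print(iterate_and_improve(peak, start=0.0))  # converges to roughly (3.0, 0.0)
```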

comment by Phil_Goetz · 2008-10-08T17:21:00.000Z · LW(p) · GW(p)
Optimization is done best by an architecture that performs trials, inspects the results, makes modifications and iterates. No sentient agents typically need to be harmed during such a process - nor do you need multiple intelligent agents to perform it.

Some of your problems will be so complicated that each trial will be undertaken by an organization as complex as a corporation or an entire nation.

If these nations are non-intelligent, and non-conscious, or even unemotional, and incorporate no such intelligences in themselves, then you have a dead world devoid of consciousness.

If they do incorporate agents, then for them not to be "harmed", they need not to feel bad if their trial fails. What would it mean to build agents that weren't disappointed if they failed to find a good optimum? It would mean stripping out emotions, and probably consciousness, as an intermediary between goals and actions. See "dead world" above.

Besides being a great horror that is the one thing we must avoid above all else, building a superintelligence devoid of emotions ignores the purpose of emotions.

First, emotions are heuristics. When the search space is too spiky for you to know what to do, you reach into your gut and pull out the good/bad result of a blended multilevel model of similar situations.

Second, emotions let an organism be autonomous. The fact that they have drives that make them take care of their own interests makes it easier to build a complicated network of these agents that doesn't need totalitarian top-down Stalinist control. See economic theory.

Third, emotions introduce necessary biases into otherwise overly-rational agents. Suppose you're doing a Monte Carlo simulation with 1000 random starts. One of these starts is doing really well. Rationally, the other random starts should all copy it, because they want to do well. But you don't want that to happen. So it's better if they're emotionally attached to their particular starting parameters.
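(A minimal sketch of the 1000-random-starts point, under assumed toy names and a made-up objective rather than anyone's actual model: with copy_the_leader=True, every restart collapses onto whichever start happened to look best early, losing exactly the diversity that the "emotional attachment" to one's own parameters preserves.)

```python
import math
import random

def hill_climb(x, objective, steps=200, noise=0.05):
    """Local search from one starting point."""
    fx = objective(x)
    for _ in range(steps):
        trial = x + random.gauss(0, noise)
        if objective(trial) > fx:
            x, fx = trial, objective(trial)
    return x, fx

def multi_start(objective, n_starts=1000, copy_the_leader=False):
    """Multi-start search over a multimodal landscape. If every start copies
    the current leader, diversity collapses and only one basin gets explored."""
    starts = [random.uniform(-10, 10) for _ in range(n_starts)]
    if copy_the_leader:
        leader = max(starts, key=objective)
        starts = [leader] * n_starts  # every start "rationally" copies the best one
    return max((hill_climb(s, objective) for s in starts), key=lambda r: r[1])

# Toy multimodal objective (placeholder): many local peaks plus a slow trend,
# so the global peak is easy to miss if all starts herd onto an early leader.
bumpy = lambda x: math.sin(3 * x) - 0.05 * (x - 4) ** 2
diverse = multi_start(bumpy)                        # starts stay attached to their own parameters
herded = multi_start(bumpy, copy_the_leader=True)   # starts all copy the early leader
```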

It would be interesting if the free market didn't actually reach an optimal equilibrium with purely rational agents, because such agents would copy the more successful agents so faithfully that risks would not be taken. There is some evidence of this in the monotony of the movies and videogames that large companies produce.

The evidence for the advantages of competition is best interpreted as evidence of our inability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that it so obviously involves. Companies that develop competing products to fill a niche in ignorance of each other's efforts often are exactly the stupid waste of time they seem to be. In the future, our management skills will improve.

This is the argument for communism. Why should we resurrect it? What conditions will change so that this now-unworkable approach will work in the future? I don't think there are any such conditions that don't require stripping your superintelligence of most of the possible niches where smaller consciousnesses could reside inside it.

comment by yters · 2008-10-09T05:18:00.000Z · LW(p) · GW(p)

Ha, I get it now, FAI is about creating god.

Anyway, no matter what you do, mind annihilation is certain in our universe - see the second law of thermodynamics.

comment by Mayson_Lancaster · 2008-10-09T06:50:00.000Z · LW(p) · GW(p)

@Doug S. Read "The Gentle Seduction" by Marc Stiegler. And, if you haven't already, consider anti-depressants: I know a number of people whom they have saved from suicide.

comment by Alexei_Turchin · 2008-10-09T11:30:00.000Z · LW(p) · GW(p)

The main question here is -

Can a finite state machine have qualia?

Because suffering is a quale. If it is not, it doesn't matter: it is easy to write a program that prints "I am suffering" when you press a button.

So God cannot influence the output of a finite automaton. But He can give it qualia, or switch its qualia off - and nobody would ever notice.

So the existence of a finite-automaton Universe doesn't prove the absence of God, because God could change qualia without changing the output of the automaton.

comment by Nick_Tarleton · 2008-10-09T11:34:00.000Z · LW(p) · GW(p)
So God cannot influence the output of a finite automaton. But He can give it qualia, or switch its qualia off - and nobody would ever notice.

What makes you think qualia aren't necessarily bound to algorithms?

comment by Passing_Through · 2008-10-09T20:10:00.000Z · LW(p) · GW(p)

Why the hangup about Turing-completeness?

In a finite universe there are no true Turing machines, as there are no infinite tapes; thus if you are going to assign philosophical heft to Turing-completeness you are being a bit sloppy, and should instead say "show me something that provably cannot be computed by a finite state machine of any size".

comment by Kiba · 2008-10-10T12:14:00.000Z · LW(p) · GW(p)

Yes, some Buddhist sects allow for complete annihilation of self. Most of the Zen sects, actually. No gods, no afterlife, just here-and-now, whichever now you happen to be considering. Reincarnation is simply the reconfiguration of you, moment by moment, springing up from the Void (or the quantum foam, if you prefer), each moment separate and distinct from the previous or the subsequent. Dogen (and Bankei and Huineng, for that matter) understood the idea of Timeless Physics very well.

comment by Caledonian2 · 2008-10-10T13:38:00.000Z · LW(p) · GW(p)

What makes you think qualia aren't necessarily bound to algorithms?
What makes you think that 'qualia' are a meaningful concept?

I have never come across anyone who could present a coherent and intelligible definition for the word that didn't automatically render the referent non-existent.

Before we try to answer the question, we need to establish that the question is a valid one. 'How many angels can dance on the head of a pin?' is not one of the great mysteries, because the question is only meaningful in a context of specific, unjustifiable beliefs. Eliminate those beliefs and there's no more question.

comment by Marcus_Geduld · 2008-10-10T19:39:00.000Z · LW(p) · GW(p)

Note: I'm an atheist who, like you, agrees that there's no divine plan and that, good or bad, shit happens.

That said, I think there's a hole in your argument. You're convincing when you claim that unfair things happen on Earth; you're not convincing when you claim there's no afterlife where Earthly-unfairness is addressed.

Isn't that the whole idea (and solace) of the afterlife? (Occam's Razor stops me from believing in an afterlife, but you don't delve into that in your essay.) A theist could easily agree with most of your essay but say, "Don't worry, all those Holocaust victims are happy in the next world."

The same holds for your Game of Life scenario. Let's say I build an unfair Game of Life. I construct rules that will lead to my automata suffering, run the simulation, automata DO suffer, and God doesn't appear and change my code.

Okay, but how do I know that God hasn't extracted the souls of the automata and whisked them away to heaven? Souls are the loophole. Since you can't measure them (since they're not part of the natural world but are somehow connected to the natural world), God can cure unfairness (without messing with terrestrial dice) by playing right by souls.

My guess is that, like me, you simply don't believe in souls because such a belief is an arbitrary, just-because-it-feels-good belief. My mind -- trained and somehow naturally conditioned to cling to Occam's Razor -- just won't go there.

comment by Pete6 · 2008-10-15T04:56:00.000Z · LW(p) · GW(p)

So you don't think you could catch up? If you had been frozen somewhere between -10000 and -100 years and revived now, don't you think you could start learning what the heck it is people are doing and understand nowadays? Besides a lot of the pre-freeze life-experience would be fully applicable to present. Everyone starts learning from the point of birth. You'd have headway compared to those who just start out from nothing.
There are things we can meaningfully contribute to even in a Sysop universe, filled with Minds. We, after all, are minds, too, which have the inherent quality of creativity - creating new, evermore elegant and intricate patterns - at whatever our level is, and self-improvement; optimization.

No, you couldn't. Someone from 8,000BC would not stand a chance if they were revived now. The compassionate thing to do, really, would be to thaw them out and bury them.

Yes, they would be worse off than children. Don't underestimate the importance of development when it comes to the brain.

Minds don't have the "inherent quality of creativity". Autistics are the obvious counterexample.

if I could create a VPOP that did not have subjective experience (or the confusion we name subjective experience)

The confusion we name subjective experience? TBH Eli, that sounds like neomystical crap. See below.

I have never come across anyone who could present a coherent and intelligible definition for the word that didn't automatically render the referent non-existent.

Qualia are neural representations of certain data. They can induce other neurological states, creating what we know as the first-person. So what? I don't see why so called reductionists quibble over this so much. They exist, just get over it and study it if you really want to.

comment by Nick_Tarleton · 2008-10-15T05:03:00.000Z · LW(p) · GW(p)
Minds don't have the "inherent quality of creativity". Autistics are the obvious counterexample.

Sorry, but no, no, no.

comment by Tim_Tyler · 2008-10-16T14:20:00.000Z · LW(p) · GW(p)

The evidence for the advantages of competition is best interpreted as evidence of our inability to manage large complex structures effectively. We are so bad at it that even a stupid evolutionary algorithm can do better - despite all the duplication and wasted effort that it so obviously involves. Companies that develop competing products to fill a niche in ignorance of each other's efforts often are exactly the stupid waste of time they seem to be. In the future, our management skills will improve.

This is the argument for communism. Why should we resurrect it? What conditions will change so that this now-unworkable approach will work in the future? [...]

Nature excels at building large-scale cooperative structures. Multicellular organisms are one good example, and the social insects are another.

If the evidence for the superiority of competition over cooperation consists of "well, nobody's managed to get cooperation in the dominant species working so far", then it seems to me, that's a pretty feeble kind of evidence - offering only extremely tenuous support to the idea that nobody will ever get it to work.

The situation is that we have many promising examples from nature, many more promising examples from history, a large trend towards globalisation - and a theory about why cooperation is naturally favoured.

comment by Tim_Tyler · 2008-10-16T14:31:00.000Z · LW(p) · GW(p)

Some of your problems will be so complicated that each trial will be undertaken by an organization as complex as a corporation or an entire nation.

Maybe - but test failures are typically not a sign that you need to bin the offending instance. Think of how programmers work. See some unit test failures? Hit undo a few times, until they go away again. Rarely do you need to recycle on the level of worms.

comment by Abe · 2008-10-16T15:37:00.000Z · LW(p) · GW(p)

I find it strange how atheists always feel able to speak for God. Can you speak for your human enemies? Can you even speak for your wife, if you have one? Why would you presume to think you can say what God would or wouldn't allow?

comment by Nate_Barna · 2008-10-16T17:18:00.000Z · LW(p) · GW(p)

Abe: I find it strange how atheists always feel able to speak for God.
Sometimes, they're not trying to speak for God, as they're not first assuming that an ideally intelligent God exists. Rather, they're imagining and speaking about the theist assumption that an ideally intelligent God exists, and then they carefully draw inferences which tend to end up incoherent on that grounding. However, philosophy of religion reasonably attempts coherence, and not all atheists are completely indifferent toward it.

comment by Caledonian2 · 2008-10-16T17:41:00.000Z · LW(p) · GW(p)

Qualia are neural representations of certain data.
That is NOT what that word is generally used to refer to.
So what? I don't see why so called reductionists quibble over this so much.
Probably because you're using the word incorrectly, so you don't understand what they're objecting to.

comment by Pete6 · 2008-10-16T19:21:00.000Z · LW(p) · GW(p)

That is NOT what that word is generally used to refer to.

Why, because it's a meaningful definition - and people are generally referring to something utterly meaningless? If you want me to define what people, in general, are talking about then of course I can't give a meaningful definition.

But I contend that this is meaningful, and it is what people are referring to - even if they don't know how to properly talk about it.

Imagine person A says that negative numbers are not even conceptually possible, or that arithmetic or whatever can't be performed with them. Person B contends otherwise. Person A asks how one could possibly add negative numbers, and B responds with a lecture about algebraic structures. A objects, "But people aren't generally referring to algebraic structures when they talk about maths, etc. I wasn't talking about that, I was talking about (4-7) and (-2*14) and how these things make no sense."

Well I contend that even if people don't know what they're talking about when they say "qualia this" or "qualia that" - and, in general, they're using gibberish definitions and speech - they're actually trying to talk about something close to the definition I've given.

you're using the word incorrectly

Again, if the only "correct" way to use the word is in the same manner as it is generally thought of, then of course you will never find a sensible definition because none of the sensible definitions are in common use - so you've ruled them out a priori, and are touting a tautology. But I'm not going to define what other people think they mean by a word - I'm going to define the ontology of the situation. If that's at odds with what people think they're talking about, then so what? People talk about God and think they're referring to this guy in the sky who is actually real - doesn't mean it's what's really going on, or that that's a really accurate definition of God (which would lead to the ontological argument being sound).

comment by Matthew_C. · 2008-10-16T20:05:00.000Z · LW(p) · GW(p)

What makes you think that 'qualia' are a meaningful concept?

The problem, of course, is that qualia (or more generally, experiencing-ness) is not a concept at all (well there is a concept of experiencing-ness, but that is just the concept, not the actuality). A metaphor for experiencing-ness is the "theater of awareness" in which all concepts, sensations, and emotions appear and are witnessed. But experiencing-ness is prior to any and all concepts.

comment by Abe · 2008-10-17T02:26:00.000Z · LW(p) · GW(p)

Nate Barna: Sometimes, they're not trying to speak for God, as they're not first assuming that an ideally intelligent God exists. Rather, they're imagining and speaking about the theist assumption that an ideally intelligent God exists, and then they carefully draw inferences which tend to end up incoherent on that grounding. However, philosophy of religion reasonably attempts coherence, and not all atheists are completely indifferent toward it.

It may be true that sometimes atheists carefully draw inferences from the idea of an ideally intelligent God. I have yet to see it. Eliezer doesn't seem to be at all careful when he says, "The obvious example of a horror so great that God cannot tolerate it, is death - true death, mind-annihilation. I don't think that even Buddhism allows that. So long as there is a God in the classic sense - full-blown, ontologically fundamental, the God - we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it." There is no careful inference there. There is just bald-faced assertion.

Why would a being that can create minds at will flinch at their annihilation? The absolute sanctity of minds, even before God, is the sentiment of modern western man, not a careful deduction based on an inconceivably superior intelligence.

The truth is, we don't even know what an SIAI would do, let alone a truly transcendent being, like God. If one is going to try to falsify a concept of God, it should at least be a concept more authoritative than the ad hoc imaginings of an atheist.

comment by Nate_Barna · 2008-10-17T02:39:00.000Z · LW(p) · GW(p)

Abe: Why would a being that can create minds at will flinch at their annihilation? The absolute sanctity of minds, even before God, is the sentiment of modern western man, not a careful deduction based on an inconceivably superior intelligence.
An atheist can imagine God having the thought: As your God, I don't care that you deny Me. Your denial of Me is inconsequential and unimpressive in the greater picture necessarily inaccessible to you. If this is an ad hoc imagining, then your assumption, in your question, that a being who can create minds at will doesn't flinch at their annihilation must also be ad hoc.

comment by HalFinney · 2008-10-17T03:26:00.000Z · LW(p) · GW(p)

Following on to the sub-thread here, initiated by Recovering Irrationalist, about whether mathematical existence may be all there is, and that we live in it.

What does that say about the title of the post, Beyond the Reach of God?

Wouldn't it imply that there are those who are indeed beyond God's reach, since even God Himself cannot change the nature of mathematics? That is, God does not really have any control over the multiverse; it exists in an invariant form independent of God's existence.

However we can also argue that there are worlds within the multiverse where something like God exists and does have control, as well as worlds where there is no such entity. (This requires understanding "control" as being compatible with the deterministic nature of the mathematical multiverse.) The question then arises as to which kind of world within the multiverse we inhabit. Are we beyond the reach of God?

A case can be made that this kind of question is poorly posed, because entities with brain structures identical to our own inhabit many places in the multiverse, hence we have to view our experiences as being a sort of superposition of all those instances. So we should ask, what fraction of our copies exist in universes controlled by a God-like entity, versus what fraction exist in universes without any such controller? At this point the traditional arguments come into play about how likely the universe we see about us is to be compatible with the ability and motivations of a God-like entity, whether such an entity would allow injustice and evil to exist, etc.

comment by Doug_S. · 2008-10-17T04:25:00.000Z · LW(p) · GW(p)

@Doug S. Read "The Gentle Seduction" by Marc Stiegler. And, if you haven't already, consider anti-depressants: I know a number of people whom they have saved from suicide.

::Googles "The Gentle Seduction"::

Yeah, that's a very beautiful story. And yes, I take antidepressants. They just change my feelings, not my beliefs. Their subjective effect on me can best be described as "Yes, my life still sucks, but I'm cheerful anyway!" If I honestly prefer retroactive non-existence even when happy, doesn't that suggest that my assessment stems from something other than a lack of happiness chemicals in my brain?

Fear not, I have no intention of committing suicide in the near future. Although I prefer the world in which I never existed, the world in which I exist and then die tomorrow is worse than the one in which I exist and continue to exist for several more years. (Specifically, my untimely death would cause great misery to several people that care about me, so I will refrain from dying before they do.)

comment by Freddie · 2008-10-19T16:44:00.000Z · LW(p) · GW(p)

Nothing really matters,
Anyone can see,
Nothing really matters - nothing really matters to me,

Any way the wind blows....

**
Queen - Bohemian Rhapsody
We miss you Freddie....

comment by S.o.G. · 2008-10-19T20:06:00.000Z · LW(p) · GW(p)

I don't understand why you believe this thought exercise leads to despair or unhappiness. I went through this thought experiment many years ago, and the only significant impact it had on me was that I evaluate risk very differently than most people around me. I'm no less happy (or more depressed at least) or motivated, and I experience about as much despair as a non-secular optimist: occasional brief glimpses of it which quickly evaporate as I consider my options and choices.

And, to be honest, the process of looking at existentialism and going through some of the chains of thought mentioned in this article definitely improved my young adulthood. It made it more interesting and exciting, and made me more interesting to other people.

In any case, despair is not such a bad thing. Look at Woody Allen. Is he a joyless unproductive fear-ridden hermit? Uh, no. But he's certainly thought through everything in this article, and accepted it, and turned it all into a source of amusement, curiosity, and intellectual stimulation.

comment by Lee6 · 2008-10-26T06:27:00.000Z · LW(p) · GW(p)

Lots of ideas here. They only seem to work if God is primarily concerned about fairness on earth. What if God is not so concerned about our circumstance as He is in our response to circumstance? After all, He has an eternal perspective, while our perspective is limited by what we see of life. If this were true, then earth, and our existence, are like a big machine designed specifically to sort out the few good from the bad. Since you were raised in an Orthodox Jewish family, I'm sure you encountered countless examples in the Bible where bad stuff happened to good people. This is no great revelation of truth - it's plainly obvious, so the authors of the Bible obviously had no problem reconciling this dilemma, and countless Jews and Christians have no difficulty reconciling these facts. They're probably not all idiots or wishful thinkers, so perhaps they understand a perspective that you have not considered. Maybe God doesn't settle all accounts on your timetable, but rather His. Maybe your values (and mine) do not perfectly align with a perfect God - so who then should change? I completely understand why people spend their lives praying to God, searching for understanding, proselytizing to slammed doors. I cannot fathom why an atheist (an authentic atheist) would waste a moment of their precious short life writing endlessly on something they believe to be pure fiction. As for me, I'll keep praying.

Replies from: JohnH
comment by JohnH · 2011-04-22T06:00:58.007Z · LW(p) · GW(p)

In particular, as an Orthodox Jew he should be very familiar with Deuteronomy, Isaiah, and Jeremiah, where the scattering of the Jews and their centuries of oppression and persecution are predicted, as well as their eventual gathering after they have reached the point of thinking they would be forgotten forever and utterly destroyed - such that the survivors of the horrors are predicted to say, at the number of them, "these, where did they come from? for we were left alone", to paraphrase Isaiah 49:21.

comment by homunq · 2009-08-04T17:22:58.947Z · LW(p) · GW(p)

I tend to resolve these issues with measure-problem hand-waving. Basically, since any possible universe exists (between quantum branching, inflationary multiverse, and simulated/purely mathematical existence), any collection of particles (such as me sitting here) exists with a practically uncountable set of futures and pasts, many of which make no sense (Boltzmann brains). The measure problem is, why is that "many" not actually "most"? The simplest answer is the anthropic one: because that kind of existence simply "doesn't count". So, there is some set of qualities of the universe as we know it that make it "count", let's call that set "consciousness". And, personally, I think that this set includes not only the existence of optimizing agents (ourselves), but also the fact that these agents are fundamentally limited in something similar to the ways that we are. In other words, the very existence of some FAI which can keep all of your bad decisions (for any given definition of "bad") from having consequences, means that "consciousness" as we know it has ended. Whatever exists on the other side of that barrier is simply incommensurable with my values here on this side. It's "game over". I can have perfect faith that my "me" will never see it completed - by definition, since then I'd no longer be a conscious "me" under my definition.

That means I am much more motivated to look for (weakly) "incremental" solutions to the problems I see with the world than for truly revolutionary ones like FAI or cryonics. (I regard the last "revolutionary" change to be the evolution of humanity itself - so "incremental" in this sense encompasses all human revolutions to date. The end of death would not be incremental in this sense.)

Sure, I can see where this is more of a justification for acting like a normal person than a rational exploration of fully coherent value space. Yet I can also argue that being meta-rational means justifying, not re-questioning, certain axioms of behavior.

Shorter me: "solving the whole world" leaves me cold, despite fun theory and all. So does ending death, or avoiding it personally. So me not signing up for cryo is perfectly rational.

While I acknowledge that this might not be the most complete and coherent possible set of values, I see no evidence that it's specifically incoherent, and it is complete enough for me. The Singularity Institute set of values may be more complete and just as non-incoherent, but I suspect that mine are operationally superior, or at least less likely to be falsified, since they attain similar coherence with less of a divergence from evolved human behaviour.

comment by SforSingularity · 2009-09-07T22:01:06.662Z · LW(p) · GW(p)

or maybe "I hope he becomes Jewish."

That is a rotten thing to wish upon any adult male. Think of the pain!

comment by Bugle · 2010-01-22T00:23:34.562Z · LW(p) · GW(p)

Last night I was reading through your "coming of age" articles and stopped right before this one, which neatly summarizes why I was physically terrified. I've never before experienced sheer existential terror, just from considering reality.

comment by thomblake · 2010-03-16T13:56:08.752Z · LW(p) · GW(p)

Absolute, utter, exceptionless neutrality.

I hate these filthy Neutrals, Kif. With enemies you know where they stand but with Neutrals, who knows? It sickens me.

comment by [deleted] · 2011-07-03T05:53:11.000Z · LW(p) · GW(p)

Are there any useful summaries of strategies to rearrange priorities and manage time to deal with the implications of this post? I get the existential terror part. We're minds in constant peril, basically floating on a tattered raft in the middle of the worst hurricane ever imagined. I'm sure only few of the contributors here think that saying, "this sucks but oh well" is a good idea. So what do we do?

Since I've started reading LW, I have started to devote way more of my life to reading. I read for hours each day now, mostly science literature, philosophy, economics, lots of links to external things from LW. But it hardly feels like enough and every choice I make about what to read feels like a precious one indeed. I am a grad student, and I think often about the rapidly changing landscape of PhD jobs. Should I be content going to an industrial job and paying for cryonics and hoping to nudge people in my social circles to adopt more rational hygiene in their beliefs (while working on doing so myself as well)? I know no one can really answer that kind of question for me, but other people can simulate that predicament and offer advice. Is there any?

If an intelligence explosion does happen in the next few decades, why am I even spending precious minutes worrying about what skill set to train into myself for such-and-such an industry or such-and-such a career? Those types of tasks might even be the very first tasks to be subsumed by advanced technology (much the way that technology displaces legal research assistants faster than janitors). The world isn't fair. I could study advanced math and engineering and hit my career at just the moment in history when those stop being people-tasks. I could be like the Reeks and Wrecks from Vonnegut's Player Piano. This is serious beeswax here. I want to make a Bayesian decision about how to spend my time and what skill set to train into myself. It would seem like this site is among the best places to pose the question and ask for direction to sweet updatable evidence. But where is some?

Replies from: Delta
comment by Delta · 2012-09-12T09:34:58.715Z · LW(p) · GW(p)

I realise it is over a year later, but can I ask how it went, or whether anyone has advice for someone in a similar position? I felt similar existential terror when reading The Selfish Gene and realising that on one level I'm just a machine that will someday break down, leaving nothing behind. How do you respond to something like that? I get that you need to strike a balance between being sufficiently aware of your fragility and mortality to drive yourself to do things that matter (ideally supporting measures that will reduce said human fragility) and not obsessing over it so much that you become depressed, but it can seem a pretty tricky balance to strike, especially if you are temperamentally inclined towards obsessiveness, negativity and akrasia.

Replies from: None
comment by [deleted] · 2012-11-22T05:23:38.160Z · LW(p) · GW(p)

There's lots to say, but I'll reserve it for a full discussion post soon, and I'll come back here and post a link.

Replies from: Delta
comment by Delta · 2012-11-22T11:03:11.509Z · LW(p) · GW(p)

Sounds good, I'll look forward to it.

comment by Mass_Driver · 2011-09-09T11:22:16.183Z · LW(p) · GW(p)

Not every child needs to stare Nature in the eyes. Buckling a seatbelt, or writing a check, is not that complicated or deadly. I don't say that every rationalist should meditate on neutrality. I don't say that every rationalist should think all these unpleasant thoughts. But anyone who plans on confronting an uncalibrated challenge of instant death, must not avoid them.

Granted. Now, where are the useful, calibrated challenges? I am like a school-age child in my rationality; I can read and understand this passage about neutrality and think about it for a moment, but I cannot yet hold it in my mind as I go out into the world to do work. But I want to do work, and I want it to be useful. Is there anything I can do short of disconnecting from society and meditating on rationality until neutrality seems intuitive?

comment by deeb · 2011-11-19T15:04:01.266Z · LW(p) · GW(p)

Unfortunately, this post, dated 4 October 2008, blatantly ignores the good sense of the 'Occam's Razor' one, dated 26 September 2007. http://lesswrong.com/lw/jp/occams_razor/ It is very naive to argue along the lines of "cellular automata are Turing complete, hence we can build a cellular automaton simulating anything we want to". This is just using the term "Turing complete" in the same way as the poor barbarians of the 'Occam's Razor' post use the term "Thor", viz., as a sort of totem you wave around in order to spare you the work of actually thinking things through. Well, of course you can imagine a cellular automaton simulating anything you like, as long as it isn't self-contradictory. But there lies the problem, it is very difficult to know whether some concept is self-contradictory just using natural language before you have actually gone and built it. Who is telling you that all the moral and spiritual aspects of the conditio humana aren't going to pop up in your simulation as epiphenomena, by necessity, just as they did in this universe? That's right, you can't know until you have done the simulation. The smug "Is this world starting to sound familiar?" really cuts two ways in this case.

Replies from: deeb
comment by deeb · 2011-11-25T09:20:18.231Z · LW(p) · GW(p)

Nice. Here I present what I genuinely think is a flaw in this article, and instead of getting replies, I am just voted down "below threshold". I believe I have pointed out exactly what I disagree with and why. I would have been happy to hear people disagreeing or asking me to look at this from some other perspective. But apparently there is a penalty for violating the unwritten community rule that "Eliezer's posts are unfailingly brilliant and flawless". I have learned a lot from this website. There are sometimes very deep ideas, and intelligent debate. But I think the community is not for me, so I will let this account die and go back to lurking.

Replies from: antigonus, Randolf, ArisKatsaris, lessdazed
comment by antigonus · 2011-11-25T09:46:35.659Z · LW(p) · GW(p)

I didn't vote down your post (or even see it until just now), but it came across as a bit disdainful while being written rather confusingly. The former is going to poorly dispose people toward your message, and the latter is going to poorly dispose people toward taking the trouble to respond to it. If you try rephrasing in a clearer way, you might see more discussion.

Replies from: Randolf
comment by Randolf · 2011-11-25T10:39:06.573Z · LW(p) · GW(p)

Then maybe, instead of just downvoting, these persons should have asked him to clarify and rephrase his post. This would have actually led to an interesting discussion, while downvoting gained nobody anything. Maybe it should be possible to downvote a post only if you also reply to that post.

Replies from: kilobug
comment by kilobug · 2011-11-25T11:14:54.885Z · LW(p) · GW(p)

That would kill the main idea of downvoting which is to improve the signal/noise ratio by ensuring comments made by "trolls" just aren't noticed anymore unless people really want to see them.

Downvoting does lead to abuses, and I do consider that downvoting deeb's comment was not really needed, but forcing people to also comment would kill the purpose, and not really prevent the abuses.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-25T13:59:04.425Z · LW(p) · GW(p)

That would kill the main idea of downvoting which is to improve the signal/noise ratio by ensuring comments made by "trolls" just aren't noticed anymore unless people really want to see them.

Khan Academy employs a reputation system much like lesswrong's. What happens is that completely useless comments, e.g. "YES WE KHAN!!!!!! :)", are voted up and drown out all useful comments.

YouTube also employs a reputation system where people can upvote and downvote comments. And what happens?

Replies from: kilobug, MarkusRamikin
comment by kilobug · 2011-11-25T15:28:19.845Z · LW(p) · GW(p)

Well, Less Wrong is a well-kept garden not a mass public community like YouTube.

The karma system is not perfect, but IMHO it does more good than harm (and I say that even if some of my comments were downvoted). "Chinese People Suck" would be quickly downvoted below threshold here. At least, I give a high (>90%) confidence to it.

Replies from: lessdazed, MarkusRamikin
comment by lessdazed · 2011-11-25T18:09:32.617Z · LW(p) · GW(p)

Chinese People Suck

Replies from: Desrtopa
comment by Desrtopa · 2011-11-25T18:11:38.633Z · LW(p) · GW(p)

Downvoted for taking the too-obvious route.

Replies from: Friendly-HI
comment by Friendly-HI · 2011-11-25T18:13:38.359Z · LW(p) · GW(p)

Downvoted for pointing out the obvious.

comment by MarkusRamikin · 2011-11-25T19:16:58.105Z · LW(p) · GW(p)

I'd like to attest that I find the karma system (by which I understand not just the software but the way the community uses it) a huge blessing and part of LW's appeal to me. It is a strong incentive to pause and ask myself if I even have something to say before I open my mouth around here (which is why I haven't written a main blog post yet) rather than just fling crap at the wall like one does in the rest of the Internet.

The "downvotes vs replies" problem is, I think, for the most part a non-issue. Anyone who's been here a bit will know that if (generic) you ask for clarification of your downvotes, people will generally provide as long as you're not acting whiney or sore about it. And there will be nothing stopping you from constructively engaging them on the points raised (though beware to actually apply reading comprehension to what is said then, because people don't like it when you fail to update).

Replies from: XiXiDu
comment by XiXiDu · 2011-11-25T20:21:18.712Z · LW(p) · GW(p)

I'd like to attest that I find the karma system (by which I understand not just the software but the way the community uses it) a huge blessing and part of LW's appeal to me. It is a strong incentive to pause and ask myself if I even have something to say before I open my mouth around here...

Yes, I also see that a reputation system does have positive effects given certain circumstances. But would you want to have such a system employed on a global basis, where millions could downvote you for saying that there is no God? Obviously such a system would be really bad for the kind of people who read lesswrong and for the world as a whole.

That means that the use of the system on lesswrong is based on the assumption that it will only be used by people who are much like you and will therefore work well for you. But given that lesswrong is an open system, will it always stay that way? At what point is it going to fail on you, how will you notice, how do you set the threshold?

And given that the system works so well as to keep everyone who doesn't think like you off lesswrong, how are you going to notice negative effects of groupthink? Do we trust our abilities to seek truth enough to notice when the system starts to discourage people who are actually less wrong than lesswrong?

Replies from: MarkusRamikin
comment by MarkusRamikin · 2011-11-25T20:47:37.492Z · LW(p) · GW(p)

That means that the use of the system on lesswrong is based on the assumption that it will only be used by people who are much like you and will therefore work well for you. But given that lesswrong is an open system, will it always stay that way?

Well, nothing lasts forever, supposedly. If in future Less Wrong's quality gets diluted away, it won't matter to me if it keeps using the vote system or something else because I won't care to be on Less Wrong any more.

However, part of the function of the vote system is selection. To put it brutally, it drives away incompatible people (and signals positively to compatible ones). So I think LW will stay worthwhile for quite a while.

And yes, in a way this is one of your negatives from your other post which I actually think is a positive. If someone gets consistently downvoted, doesn't get why, AND can't ask and find out and update on that, then with some probability we can say we don't want them here. I'm sure we lose some good people this way too, but the system's better than nothing; at least what gets through the filter is much better than things would be without it.

comment by MarkusRamikin · 2011-11-25T19:52:59.578Z · LW(p) · GW(p)

What happens is that completely useless comments, e.g. "YES WE KHAN!!!!!! :)", are voted up and drown out all useful comments

As far as I can tell, there are no useful comments in the comments section. In the complete absence of anything of substance (a situation LW is not in danger of being in), simple community applause lights floating up is understandable. The situation in the Q&A section, where there is substance, appears better.

Also you picked a page which looks like it's largely populated by schoolchildren. YouTube is populated by, well, everyone. LW's audience is strongly selected. I don't know if I even need to say this, but it seems reasonable to expect the downvote system to be used more usefully on LW than on YouTube.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-25T20:03:43.054Z · LW(p) · GW(p)

My point is that the negative aspects of such a system are rarely compared to the positive aspects.

People say that the reputation system employed by lesswrong holds the trolls at bay and reduces noise. Yet when I show that reputation systems frequently fail at doing so, the same people argue that lesswrong is different and that's why it works. Does it? Or does it just look like it works because lesswrong is different?

Do the positive effects really outweigh the negative?

comment by Randolf · 2011-11-25T10:29:06.026Z · LW(p) · GW(p)

Personally I think that this kind of voting is indeed useless and belongs to places such as YouTube or other sites where you can't expect a meaningful discussion in the first place. Here, if a person disagrees with you, I believe she or he should post a counterargument instead of yelling "you are wrong!", that is, giving a negative vote.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-25T12:51:42.604Z · LW(p) · GW(p)

The problem with downvotes is that those who are downvoted are rarely people who know that they are wrong; otherwise they would have deliberately submitted something that they knew would be downvoted, in which case the downvotes would be expected and have little or no effect on the future behavior of the person.

In some cases downvotes might cause a person to reflect on what they have written. But that will only happen if the person believes that downvotes are evidence that their submissions are actually faulty rather than signaling that the person who downvoted did so for various other reasons than being right.

Even if all requirements for a successful downvote are met, the person might very well not be able to figure out how exactly they are wrong due to the change of a number associated with their submission. The information is simply not sufficient. Which will cause the person to either continue to express their opinion or avoid further discussion and continue to hold wrong beliefs.

With respect to the reputation system employed on lesswrong it is often argued that little information is better than no information. Yet humans can easily be overwhelmed by too much information, especially if the information is easily misjudged and provides only a little feedback. Such information might only add to the overall noise.

And even if the above mentioned problems wouldn't exist, reputation systems might easily reinforce any groupthink, if only by causing those who disagree to be discouraged and those who agree to be rewarded.

If everyone were perfectly rational, a reputation system would be a valuable tool. But lesswrong is open to everyone. Even if most of the voting behavior is currently free of bias and motivated cognition, it might not stay that way for very long.

Take for example the voting pattern when it comes to plain-English, easily digestible submissions versus highly technical posts including math. A lot of the latter category receives far fewer upvotes. The writing of technical posts is actively discouraged by this inevitable effect of a reputation system.

Worst of all, any reputation system protects itself by making those who most benefit from it defend its value.

Replies from: kilobug, lessdazed
comment by kilobug · 2011-11-25T15:41:29.091Z · LW(p) · GW(p)

Well, there are two different aspects in Less Wrong system : the global karma of a person, and the score of a comment.

I agree that the "global karma of a person" is of mitigated use. It does sometimes give me a little kick to be more careful in writing on LW (and I'm probably not the only one), but only slightly, and it does have significant drawbacks.

But the score of one comment has a different purpose: it lets a third party (not the one who posted the comment nor the one who cast the upvote/downvote) easily select the comments worth reading and those which are not. In that regard it works relatively well - not perfectly, but better than nothing. For that purpose it doesn't really matter whether the OP understands why he is downvoted, and explaining why you downvote does more harm than good - it decreases the signal/noise ratio (unless the explanation itself is very interesting, like pointing to a fallacy that is not commonly recognized).

comment by lessdazed · 2011-11-25T18:12:19.109Z · LW(p) · GW(p)

The writing of technical posts is actively discouraged

Less encouraged.

comment by ArisKatsaris · 2011-11-25T12:29:41.625Z · LW(p) · GW(p)

I didn't see you complaining about the upvotes you got in other comments. You just barge in here to accuse us of groupthink when you get downvoted (never complaining about unjust upvotes), because you can't even imagine any legitimate reason that could have gotten you downvotes for a badly written and incoherent post. It seems to be a very common practice in the last couple weeks -- CriticalSteel did it, sam did it, now you do it.

As for your specific comment, it was utterly muddled and confused -- it didn't even understand what the article it was responding to was about. For example, what was there in the original article that made you think "Who is telling you that all the moral and spiritual aspects of the conditio humana aren't going to pop up in your simulation?" is actually disagreeing with something in the article?

And on top of that you add strange inanities, like the claim that "moral and spiritual aspects" of the human condition (which for some reason you wrote in Latin, perhaps to impress us with fancy terms -- which alone would have deserved a downvote) are epiphenomenal in our universe. The very fact that we can discuss them means they affect our material world (e.g. by typing posts in this forum about them), which means they are NOT epiphenomenal.

You didn't get downvotes from me before, but you most definitely deserve them, so I'll correct this omission on both the parent and the grandparent post.

Replies from: XiXiDu
comment by XiXiDu · 2011-11-25T14:22:12.758Z · LW(p) · GW(p)

Your comment saddens me. It displays the typical lesswrong mindset that lesswrong is the last resort of sanity and everyone else is just stupid and not even worthy of more than a downvote.

I don't see much evidence that a lot of people here even try to understand the other side or try to politely correct them.

If you really dislike everyone else so much why don't you people turn this into a private mailing list where only those that are worthwhile can participate? Or make a survey a mandatory part of the registration procedure where everyone who fails some basic measure is told to go away.

Either that or you stop bitching and ignore stupid comments. Or you actually try to refine people's rationality by communicating the insights that the others miss.

The very fact that we can discuss them means they affect our material world (e.g. by typing posts in this forum about them), which means they are NOT epiphenomenal.

Have you tried Wikipedia? "In the more general use of the word a causal relationship between the phenomena is implied: the epiphenomenon is a consequence of the primary phenomenon;"

What he tried to say is that "moral and spiritual aspects" of the human condition might be an implied consequence of the initial state of a certain cellular automaton.

Replies from: ArisKatsaris, lessdazed
comment by ArisKatsaris · 2011-11-25T15:03:55.095Z · LW(p) · GW(p)

It displays the typical lesswrong mindset that lesswrong is the last resort of sanity and everyone else is just stupid and not even worthy of more than a downvote.

Really? I think my flaw has generally been the opposite: I try to talk to people far beyond the extent that it is meaningful. Just recently that was exemplified.

If you really dislike everyone else so much why don't you people

Who is "us people"? People that downvoted deeb without a comment? But I'm not one of them -- I downvoted him only after explaining in detail why he's being downvoted. The typical LW member? You've been longer in LessWrong than I have been, I believe, and have a much higher karma score. I'm much closer to being an outsider than you are.

Have you tried Wikipedia?

You are looking at the medical section -- when one talks about spiritual or moral aspects of the human condition, the philosophical meaning of the word is normally understood. "In philosophy of mind, epiphenomenalism is the view that mental phenomena are epiphenomena in that they can be caused by physical phenomena, but cannot cause physical phenomena," as the article you linked to says.

Perhaps you know what he tried to say, but I don't. Even if he meant what you believe him to have meant (which is still a wrong usage of the word), I still don't see how this works as a meaningful objection to the article.

Replies from: XiXiDu, MixedNuts
comment by XiXiDu · 2011-11-25T15:40:14.341Z · LW(p) · GW(p)

Well, your reply to me is much better. You exposed some flaws in my reasoning with actual arguments while being more polite even given the fact that my comment wasn't. That's what I like to see more of.

...which is still a wrong usage of the word...

I used that word before in the way I indicated. Not that you are wrong... but I usually look up words at Wikipedia or Merriam-Webster and, when the definition seems to fit, use them for my purposes. Sure, that's laziness on my side. But it might be useful to sometimes apply a bit of guesswork to what someone else could have meant on an international forum.

I'm much closer to being an outsider than you are.

I guess. I just can't identify with most people here so it's hard to see me as a part of this community.

comment by MixedNuts · 2011-11-25T16:03:05.467Z · LW(p) · GW(p)

Yes, but you don't try to learn true things from the points they make, or even to gently coax them out of innocent mistakes. You try to hand them their asses in front of an audience. And the audience already knows that mysterianism is silly. Insulting someone doesn't teach them not to write incoherent posts, and doesn't teach outsiders that incoherent posts are bad. It does teach us that you are badass, but we've sort of gotten the point by now.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-11-25T16:48:26.295Z · LW(p) · GW(p)

Good points.

Except the part about being "badass", which is probably the least like how I feel, and probably the least likely thing I 'teach' anyone. "Weak-willed enough that I got counterproductively angry, and depressed" is closer to how I feel after I get more involved in a thread than I should have.

I should probably wait a couple hours before I reply to a post that annoys me -- by then I'll probably be able to better evaluate if a reply is actually worthwhile, and in what manner I should respond.

Replies from: wedrifid
comment by wedrifid · 2011-11-25T19:47:01.131Z · LW(p) · GW(p)

I should probably wait a couple hours before I reply to a post that annoys me -- by then I'll probably be able to better evaluate if a reply is actually worthwhile, and in what manner I should respond.

I find that more often the most useful approach turns out to be downvote then ignore. It is far too easy to get baited into conversations that are a lost cause from the moment they begin.

comment by lessdazed · 2011-11-25T18:06:05.359Z · LW(p) · GW(p)

It displays the typical lesswrong mindset that lesswrong is the last resort of sanity

It's not a guardian of truth. It grew from insanity, so others may have as well - from Dennett to Drescher, there's no telling about the ideas of others until they open their mouths.

[extreme solution A.] Either that or [opposite equally extreme solution B.]

I choose a third alternative that's less extreme.

The very fact that we can discuss them means they affect our material world (e.g. by typing posts in this forum about them),

In the more general use of the word a causal relationship between the phenomena is implied: the epiphenomenon is a consequence of the primary phenomenon

The directionality isn't symmetrical, it only goes one way. Wikipedia:

An epiphenomenon can be an effect of primary phenomena, but cannot affect a primary phenomenon. In philosophy of mind, epiphenomenalism is the view that mental phenomena are epiphenomena in that they can be caused by physical phenomena, but cannot cause physical phenomena.

comment by lessdazed · 2011-11-25T17:56:26.709Z · LW(p) · GW(p)

But apparently there is a penalty for violating the unwritten community rule that "Eliezer's posts are unfailingly brilliant and flawless".

The outside view is that someone complaining about being downvoted for specific reasons is usually wrong about such reasons. Perhaps someone could compile a list of cases.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-25T19:19:12.165Z · LW(p) · GW(p)

That isn't surprising. If I can correctly analyze my reasons for being downvoted, I can probably also figure out that complaining about it doesn't get me anything I want.

comment by Will_Newsome · 2012-06-01T11:00:25.881Z · LW(p) · GW(p)

I wrote this when I was like sixteen, before I'd ever heard of LessWrong:

Dreaming of things yet to come, bleeding ink into the sand,
we passed the days on hither shore, for time measureless to man;
and soon our bloody mark was made on the sand on which we ran.

Footprints upon the sunless shore were rarer than the stains,
the moon cast light upon the trees, the moon gave us its rain,
and to each other in the night we shouted our refrain:

“We immortal run here, waiting; we don’t die despite the bleeding;
we will continue their lives’ taking if they won’t take our heeding;
and we would trade their lives for light, for these stains are worth reading.”

So surprising, mountain rising, o’er the sunless rainbow land,
we deathless died of shock and bled as God lifted His hand;
and these last words that we had bled are stained there in the sand:

Still stained there in the sand, beyond the reach of man.

comment by fractallambda · 2012-06-14T09:50:12.235Z · LW(p) · GW(p)

But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors.

Bullshit.

Besides the contemporary accounts of the inevitability of war, there's also computer modelling of state relations using pre-war evidence (in fact, the modelling even generates the split for the Cold War, before the start of WWII).

For the record, Heinrich Heine also said in 1821: "Das war ein Vorspiel nur, dort wo man Bücher verbrennt, verbrennt man auch am Ende Menschen." (That was but a prelude; where they burn books, they will ultimately burn people also.)

Besides that, your entire post could mostly be summarised as The universe doesn't care.

[sarcasm]

No, really? I never knew.

[/sarcasm]

As for saying that 'God wouldn't allow it', it's a ridiculous argument that anthropomorphises an imaginary creation which was invoked in the first place to reassure people living in a world that never made sense. Of course God wouldn't allow it! He wouldn't allow it by definition. The 'Problem of Evil' is the very reason God exists in human minds in the first place! To use it to attack religion is as unsatisfactory as a rebuttal can get.

comment by Epiphany · 2012-12-25T06:18:55.980Z · LW(p) · GW(p)

Thank you, Eliezer. I will cherish this article. People build their entire world views to run from this and here you are depicting the profound brutality - not obscured with fluff, but stripped naked by your words.

a world beyond the reach of God, an utterly unprotected world where anything at all can happen. ... Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against: Absolute, utter, exceptionless neutrality. ... challenges are not calibrated to your skills

I feel a great relief reading these simple and profound insights: somebody else gets it. Somebody who others pay attention to is warning the rest and I now have a new way to communicate this to others.

I don't say that every rationalist should meditate on neutrality.

I have experienced the neutrality. There's no turning back. I know that there is nothing to trust.

I didn't expect to find any solace that would lighten the harrowing effect of this realization. Your communication gave me that solace. Thank you.

comment by Entraya · 2014-02-17T09:25:16.461Z · LW(p) · GW(p)

I have never really had a problem with the complete neutrality of life. It doesn't really change what happens, since it's inevitable. I think there is a certain art to learning and not letting what you learn consume you with despair. If there is no inherent meaning of life and it is all just what happens, so what? It won't really change anything about your life or how you live it unless you allow it to. And if you die and the part of reality that is your consciousness will entirely cease to exist, so what? You won't be alive to give a damn about it.

Acknowledge the truth, give it a polite nod, and continue on with your life. It will be there regardless, as will your immediate life.

comment by more_wrong · 2014-06-02T13:56:23.663Z · LW(p) · GW(p)

Yet who prohibits? Who prevents it from happening?

Eliezer seems absurdly optimistic to me. He is relying on some unseen entity to reach in and keep the laws of physics stable in our universe. We already see lots of evidence that they are not truly stable; for example, we believe in both the electroweak transition and earlier transitions, of various natures depending on your school of physics. We /just saw/ in 1998 that unknown laws of physics can creep up and spring out by surprise, suddenly 'taking over' 74 percent of the Universe's postulated energy supply.

Who is keeping the clockwork universe running? Or, why are the hardware and software (operating system, etc.) for the automata fishbowl so damned stable? Is it part of the Library of Babel? Well, that's a good explanation, but the Bible's universe is in there also, and arguably a priori more probable than a computer system that runs many-worlds quantum mechanics sims on a panuniversal scale and never halts and/or catches fire. It is very hard to keep a perfectly stable web server going when demand keeps expanding, and that seems much simpler.

Look into the anthropic principle literature and search on 'Lee Smolin' or start somewhere like http://evodevouniverse.com/wiki/Cosmological_natural_selection_(fecund_universes) for some reasoned speculation on how we might have wound up in such a big, rich, diverse universe from simpler origins.

I don't think it is rational to accept the stability of natural law without some reasonable model: either an account of the origins of said law, or some timeless physics model that has stable physics with apparently consistent evolution as an emergent property, or its being a gift from entities with powers far beyond ours, or some such.

If the worst that can happen is annihilation, you're living in a fool's paradise. Many people have /begged/ for true death, and in the animal world things happen all the time that are truly horrifying to most human eyes. I will avoid going into sickening detail here, but let me refer you to H.P. Lovecraft and Charles Stross for some examples of how the universe might have been slightly less friendly.

Replies from: None
comment by [deleted] · 2014-06-02T15:43:06.239Z · LW(p) · GW(p)

Oh come off it. There has to be some way that the universe actually works, at bottom. That way of working must be logical/causal; if it's not, then everything happens and also nothing happens, including all logically contradictory things, more or less all the time. Since we don't observe anything remotely like that degree of sheer chaos, there must be laws. We don't always know the "absolute" laws; in fact we can only sometimes detect our ignorance (by having an experimental result we can't consistently explain). But we can build up models of universal lawfulness that keep working, up to information leakage from an Outside universe (which would itself have to have laws of its own).

There's no point worrying about Yog-Sothoth when we've already got Azathoth and Clippy on our plates.

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-18T23:28:06.850Z · LW(p) · GW(p)

On WWII - the "more or less the same way" is actually rather flexible. The usual advocates of "don't put it all on Hitler" say (AFAIK) something along the lines of "the historical balance was such that SOME major war involving Germany was bound to occur and, given the rise of tech, to be about as deadly and as propaganda-fueled; Hitler did not invent or destroy so much tech as to change that particular statement", but not "there would arise a party against Jews which would win over the Communists, make an alliance with other 'communists', then betray them". Without Hitler we might have seen a Communist Germany led by Thälmann, out to destroy Capitalism instead of Jews, and then fighting against the USSR before actually managing that, for reasons similar to why Trotsky was exiled and then killed: because Stalin would fear Thälmann the Führer as an opponent of his. Or something entirely different - I've already given too much detail to my alternative history. The only thing advocated is "a major, deadly, propaganda-fueled war within several decades after the Last War, with Germany as a prominent player".

comment by Yitz (yitz) · 2020-05-28T20:23:36.849Z · LW(p) · GW(p)

It’s interesting that you say that a Good God wouldn’t destroy a soul, as one of the biggest issues I’m currently finding myself having with Orthodox Judaism is that, according to the Talmud at least, there have been a number of historical cases of souls being completely destroyed, which seems rather incompatible with the rest of Orthodox Jewish morality... I don’t know about the Christian or Muslim God, but both traditions do seem to hold that some people burn in hell forever, which is arguably worse than simply not existing. I really don’t get why this isn’t discussed more often in conventional theism...

comment by EniScien · 2022-05-27T13:42:42.276Z · LW(p) · GW(p)

I do believe that World War II was largely inevitable. But I also believe that the bombing of Hiroshima could easily have been avoided; it seems everyone would agree that in this case it really was decided by a roll of the dice. I sometimes have moments when I think of the zero-sum game as something that would always have been found by science, but then I remind myself that we could easily live in a world where there is no such thing; we just got lucky that our indifferent universe at least allows some movement towards the light. I have personally experienced causeless injustice, and I have no faith in faith, so it obviously feels to me that we live in a causal world that is not under the influence of any plan.