Theists are wrong; is theism?

post by Will_Newsome · 2011-01-20T00:18:34.164Z · LW · GW · Legacy · 538 comments

Contents

538 comments

Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism1 seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?

I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer's reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser's blog surprised me; the arguments against theism are just not nearly as convincing as I'd been brought up to believe2, nor nearly convincing enough to cause what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.

It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam's razor, but it may be that we are still too foolhardy to wield Solomonoff's lightsaber (or rather, Tegmark's Black Blade of Disaster) without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.

Due to these considerations, it is unclear if we should go ahead doing the equivalent of philosoraptorizing amidst these poorly asked questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one's sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we've seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don't want to stress your epistemology too much, since it's not like your immortal soul3 matters very much. Does it?

Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.

Added: The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid." As to whether this universe gets most of its reality fluid from agenty creators... perhaps we will come back to that argument on a day with less distracting terminology on the table.

 


 

1 Of either the 'AI-go-FOOM' or 'someday we'll be able to do lots of brain emulations' variety.

2 I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I'm insane.

3 Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).

538 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-20T09:49:39.022Z · LW(p) · GW(p)

"Gods are ontologically distinct from creatures, or they're not worth the paper they're written on." -- Damien Broderick

If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next!

There's also no hint of worship, which everyone else on the planet thinks is a key part of the definition of a religion; if you believe that Cthulhu exists but not Jehovah, and you hate and fear Cthulhu and don't engage in any Elder Rituals, you may be superstitious, but you're not yet religious.

This is mere distortion of both the common informal use and advanced formal definitions of the word "atheism", which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.

Also http://www.smbc-comics.com/index.php?db=comics&id=1817

Replies from: Jack, Perplexed, steven0461, Will_Newsome, None, Will_Newsome, None, Miller, lukstafi
comment by Jack · 2011-01-20T19:22:50.845Z · LW(p) · GW(p)

A Simulator would be ontologically distinct from creatures like us, for any definition of "ontologically distinct" I can imagine wanting to use. The Simulation Hypothesis is a metaphysical hypothesis in the most literal sense: it's a hypothesis about what our physical universe really is, beyond the wave function.

Yeah, Will's theism in this post isn't the theism of believers, priests, or academic theologians. With certain audiences confusion would likely result, so this language should be avoided with them. But I think we're somewhat more sophisticated than that, and if there are reasons to use theistic vocabulary then I don't see why we shouldn't. I'm assuming Will has such reasons, of course.

Keep in mind, the divine hasn't always been supernatural. Greek gods were part of natural explanations of phenomena, Aristotle's god was just there to provide a causal stopping place, Hobbes's god was physical, etc. We don't have to kowtow to the usage of present religious authorities. God has always been a flexible word; there is no particular reason to take modern science to be falsifying God rather than telling us what a god, if one exists, must be like.

I feel like we lose out on interesting discussions here when someone says something that pattern-matches to something an evangelical apologist might say. It's as if we're suddenly worried about losing a debate with a Christian instead of entertaining and discussing interesting ideas. We're among friends here; we don't need to worry so much about how we frame a discussion.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-27T01:55:10.817Z · LW(p) · GW(p)

I wish this viewpoint were more common, but judging from the OP's score, it is still in the minority.

I just picked up Sam Harris's latest book, The Moral Landscape, which is all about the idea that it is high time science invaded religion's turf and claimed objective morality as a subject of scientific inquiry.

Perhaps the time has also come for science to reclaim theism and the related set of questions and cosmologies. The future (or perhaps even the present) is rather clearly a place where there are super-powerful beings that create beings like us and generally have total control over their created realities. It's time we discussed this rationally.

Replies from: Dreaded_Anomaly, Jack
comment by Dreaded_Anomaly · 2011-01-27T02:50:34.262Z · LW(p) · GW(p)

Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.

My views on "reclaiming" theism are summed up by ata's previous comment:

I recall a while ago that there was a brief thread where someone was arguing that phlogiston theory was actually correct, as long as you interpret it as identical to the modern scientific model of fire. I react to things like this similarly: theism/God were silly mistakes, let's move on and not get attached to old terminology. Rehabilitating the idea of "theism" to make it refer to things like the Simulation Hypothesis seems pointless; how does lumping those concepts together with Yahweh (as far as common usage is concerned) help us think about the more plausible ones?

Replies from: Furcas, jacob_cannell
comment by Furcas · 2011-01-27T03:08:26.793Z · LW(p) · GW(p)

Sam Harris is misguided at best in the major conclusions he draws about objective morality. See this blog post by Sean Carroll, which links to his previous posts on the subject.

Have you read Less Wrong's metaethics sequence? It and The Moral Landscape reach pretty much the same conclusions, except about the true nature of terminal values, which is a major conclusion, but only one among many.

Sean Carroll, on the other hand, gets absolutely everything wrong.

Replies from: Dreaded_Anomaly, byrnema
comment by Dreaded_Anomaly · 2011-01-27T04:48:53.033Z · LW(p) · GW(p)

Given that the full title of the book is "The Moral Landscape: How Science Can Determine Human Values," I think that conclusion is the major one, and certainly the controversial one. "Science can help us judge things that involve facts" and similar ideas aren't really news to anyone who understands science. Values aren't that kind of fact.

I don't see where Sean's conclusions are functionally different from those in the metaethics sequence. They're presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean's:

But there’s no reason why we can’t be judgmental and firm in our personal convictions, even if we are honest that those convictions don’t have the same status as objective laws of nature.

and this one of Eliezer's:

Yes, I really truly do believe that humanity is better than the Pebblesorters! I am not being sarcastic, I really do believe that.

seem to express the same sentiment, to me.

If you really object to Sean's writing, take a look at Russell Blackford's review of the book. (He is a philosopher, and a transhumanist one at that.)

Replies from: Furcas
comment by Furcas · 2011-01-27T05:12:59.071Z · LW(p) · GW(p)

Given that the full title of the book is "The Moral Landscape: How Science Can Determine Human Values," I think that conclusion is the major one, and certainly the controversial one. "Science can help us judge things that involve facts" and similar ideas aren't really news to anyone who understands science. Values aren't that kind of fact.

To be accurate, Harris should have inserted the word "Instrumental" before "Values" in his book's title, and left out the paragraphs where he argues that the well-being of conscious minds is the basis of morality for reasons other than that the well-being of conscious minds is the basis of morality. There would still be at least two-thirds of the book left, and there would still be a huge number of people who would find it controversial, and I'm not just talking about religious fundamentalists.

I don't see where Sean's conclusions are functionally different from those in the metaethics sequence. They're presented in a much less philosophically rigorous form, because Sean is a physicist, not a philosopher (and so am I). For example, this statement of Sean's:

[...]

and this one of Eliezer's:

[...]

seem to express the same sentiment, to me.

The difference is huge. Eliezer and I do believe that our 'convictions' have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-01-27T05:43:34.752Z · LW(p) · GW(p)

There would still be at least two-thirds of the book left, and there would still be a huge number of people who would find it controversial, and I'm not just talking about religious fundamentalists.

I wouldn't limit "people who don't understand science" to "religious fundamentalists," so I don't think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn't give much credence to that "controversy" in a serious discussion.

The difference is huge. Eliezer (and I) do believe that our 'convictions' have the same status as objective laws of nature (although we assign lower probability to some of them, obviously).

The quantum numbers which an electron possesses are the same whether you're a human or a Pebblesorter. There's an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.

I understand what Eliezer means when he says:

If you identify rightness with this huge computational property, then moral judgments are subjunctively objective (like math), subjectively objective (like probability), and capable of being true (like counterfactuals).

but he later says

Finally I realized that there was no foundation but humanity - no evidence pointing to even a reasonable doubt that there was anything else - and indeed I shouldn't even want to hope for anything else - and indeed would have no moral cause to follow the dictates of a light in the sky, even if I found one.

That's what the difference is, to me. An electron would have its quantum numbers whether or not humanity existed to discover them. 2 + 2 = 4 is true whether or not humanity is around to think it. Terminal values are higher level, less fundamental in terms of nature, because humanity (or other intelligent life) has to exist in order for them to exist. We can find what's morally right based on terminal values, but we can't find terminal values that are objectively right in that they exist whether or not we do.

Replies from: wnoise, Furcas
comment by wnoise · 2011-01-27T05:53:55.508Z · LW(p) · GW(p)

The quantum numbers which define an electron

Careful. The quantum numbers are no more than a basis for describing an electron. I can describe a stick as spanning 3 meters in width and 4 meters in length, while a pebblesorter describes it as being 5 meters long and 0 wide, and we can both be right. The same thing can happen when describing a quantum object.
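
To make the invariance explicit with the 3-4-5 stick as the worked example (the vector s and the observer labels A and B are my illustration, not anything from the comment above):

    % Two observers describe the same stick in rotated bases; the components
    % differ, but the invariant length agrees:
    \lVert \vec{s} \rVert_A = \sqrt{3^2 + 4^2} = 5, \qquad \lVert \vec{s} \rVert_B = \sqrt{5^2 + 0^2} = 5

The quantum analogue is the same move: components in a chosen basis are descriptions, while only the invariants are observer-independent facts.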

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-01-27T05:55:31.818Z · LW(p) · GW(p)

Yes, I should have been more careful with my language. Thanks for pointing it out. Edited.

comment by Furcas · 2011-01-27T06:11:48.566Z · LW(p) · GW(p)

I wouldn't limit "people who don't understand science" to "religious fundamentalists," so I don't think we really disagree. A huge number of people find evolution to be controversial, too, but I wouldn't give much credence to that "controversy" in a serious discussion.

Okay, let me make my claim stronger then: a huge number of people who understand science would find the truncated version of TML described above controversial, namely a big fraction of the people who usually call themselves moral nihilists or moral relativists.

The quantum numbers which define an electron are the same whether you're a human or a Pebblesorter. There's an objectively right answer, and therefore objectively wrong answers. Convictions/terminal values cannot be compared in that way.

I'm saying that there is an objectively right answer, that terminal values can be compared (in a way that is tautological in this case, but that is fundamentally the only way we can determine the truth of anything). See this comment.

Do you believe it is true that "For every natural number x, x = x"? Yes? Why do you believe that? Well, you believe it because for every natural number x, x = x. How do you compare this axiom to "For every natural number x, x != x"?
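
A minimal way to formalize the circularity, as a sketch (the theory labels T and T' are mine, and "=" is treated here as an uninterpreted relation symbol, so reflexivity is not built into the background logic):

    % Each one-axiom theory proves its own axiom in a single step:
    T  = \{\, \forall x.\ x = x \,\} \;\vdash\; \forall x.\ x = x
    T' = \{\, \forall x.\ x \neq x \,\} \;\vdash\; \forall x.\ x \neq x

Neither derivation appeals to anything outside its own theory, so "why this axiom rather than that one?" has no answer from inside either; that is the situation with terminal values.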

Anyway, at least one of us is misunderstanding the metaethics sequence, so this exchange is rather pointless unless we want to get into a really complex conversation about a sequence of posts that has to total at least 100,000 words, and I don't want to. Sorry.

comment by byrnema · 2011-01-27T04:11:02.067Z · LW(p) · GW(p)

except about the true nature of terminal values, which is a major conclusion

In quick approximation, what was this conclusion?

Replies from: Furcas
comment by Furcas · 2011-01-27T04:55:55.841Z · LW(p) · GW(p)

That terminal values are like axioms, not like theorems. That is, they're the things without which you cannot actually ask the question, "Is this true?"

You can say or write the words "Is", "this", and "true" without having axioms related to that question somewhere in your mind, of course, but you can't mean anything coherent by the sentence. Someone who asks, "Why terminal value A rather than terminal value B?" and expects (or gives) an answer other than "Because of terminal value A, obviously!"* is confused.

*That's assuming that A really is a terminal value of the person's moral system. It could be an instrumental value; people have been known to hold false beliefs about their own minds.

comment by jacob_cannell · 2011-01-27T04:39:25.825Z · LW(p) · GW(p)

I just started reading it, and picked it really because I needed something for the train in a hurry. In part I read the likes of Harris just to get a better understanding of what makes a popular book. As far as I've read into Harris's thesis about objective morality, I see it as rather hopeless: it depends ultimately on the notion of a timeless, universal human brain architecture, which is mythical even today, posthuman future aside.

Carroll's point at the end about attempting to find the 'objective truth' about what is the best flavor of ice cream echoes my thoughts so far on the "Moral Landscape".

The interesting part wasn't his theory; it was the idea that the entire belief space currently held by religion is now up for grabs.

In regards to ata's previous comment, I don't agree at all.

Theism is not some single atomic belief. It is an entire region in belief space. You can pull out many of the sub-beliefs and reduce them to atomic binary questions which slice idea-space, such as:

Was this observable universe created by a superintelligence?

Those in the science camp used to be pretty sure the answer to that was no, but it turns out they may very well be wrong, and the theists may have guessed correctly all along (Simulation Argument).

Did superintelligences intervene in earth's history? How do they view us from a moral/ethical standpoint? And so on . . .

These questions all have definitive answers, and with enough intelligence/knowledge/computation they are all probably answerable.

You can say "theism/God" were silly mistakes, but how do you rationalize that when we now know that true godlike entities are the likely evolutionary outcome of technological civilizations and common throughout the multiverse?

Replies from: Dreaded_Anomaly, None
comment by Dreaded_Anomaly · 2011-01-27T05:02:05.095Z · LW(p) · GW(p)

I try not to rationalize.

I don't think we should reward correct guesses that were made for the wrong reasons (and are only correct by certain stretches of vocabulary). Talking about superintelligences is more precise and avoids vast planes of ambiguity and negative connotations, so why not just do that?

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-27T06:30:10.918Z · LW(p) · GW(p)

I don't think it is any stretch of vocabulary to use the word 'god' to describe future superintelligences.

If the belief is correct, it can't also be a silly mistake.

The entire idea that one must choose words carefully to avoid 'vast planes of ambiguity and negative connotations' is at the heart of the 'theism as taboo' problem.

The SA so far stands to show that the central belief of broad theism is basically correct. Let's not split hairs on that and just admit it. If that is true, however, then an entire set of associated and dependent beliefs may also be correct, and a massive probability update is in order.

Avoiding the 'negative connotations' suggests to me a flawed process of consciously or subconsciously distancing every possible mental interpretation of the Singularity and the SA from anything resembling theistic beliefs.

I suspect most people tend to do this because of belief inertia, the true difficulty of updating, and social signaling issues arising from being associated with a category of people who believe in the wrong versions of a right idea for insufficient reasons.

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-01-27T20:07:21.605Z · LW(p) · GW(p)

The SA so far stands to show that the central belief of broad theism is basically correct.

"The universe was created by an intelligence" is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.

Also, at this point I'm more inclined to accept Tegmark's mathematical universe description than the simulation argument.

wrong versions of a right idea

That seems oxymoronic to me.

There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks. The important question is: will using theistic terminology help with clarity and understanding for the simulation argument? The answer does not appear to be yes.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-27T23:40:55.044Z · LW(p) · GW(p)

The SA so far stands to show that the central belief of broad theism is basically correct.

"The universe was created by an intelligence" is the central belief of deism, not theism. Whether or not the intelligence would interact with the universe, for what reasons, and to what ends, are open questions.

You're right; I completely agree with the above in terms of the theism/deism distinction. The SA supports deism while allowing for theism but leaving it as an open question. My term "broad theism" was meant to include both theism and deism. Perhaps that category already has a term; I'm not quite sure.

Also, at this point I'm more inclined to accept Tegmark's mathematical universe description than the simulation argument.

I find the SA has much stronger support: Tegmark requires the additional belief that other physical universes exist for which we can never possibly find evidence for or against.

There are superficial similarities between the simulation argument and theism, but, for example, the idea of worship/deference in the latter is a major element that the former lacks.

Some fraction of simulations probably have creators who desire some form of worship/deference; the SA turns this into a question of frequency or probability. I of course expect that worship-desiring creators are highly unlikely. Regardless, worship is not a defining characteristic of theism.

The important question is: will using theistic terminology help with clarity and understanding for the simulation argument?

I see it as the other way around. The SA gives us a reasonable structure within which to (re)-evaluate theism.

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-01-28T03:47:20.513Z · LW(p) · GW(p)

I find the SA has much stronger support: Tegmark requires the additional belief that other physical universes exist for which we can never possibly find evidence for or against.

How could we find evidence of the universe simulating our own, if we are in a simulation? They're both logical arguments, not empirical ones.

Regardless, worship is not a defining characteristic of theism.

The SA gives us a reasonable structure within which to (re)-evaluate theism.

I really don't see what is so desirable about theism that we ought to define it to line up near-perfectly with the simulation argument in order to use it and related terminology. Any rhetorical scaffolding for dealing with Creators that theists have built up over the centuries is dripping with the negative connotations I referenced earlier. What net advantage do we gain by using it?

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T20:16:17.185Z · LW(p) · GW(p)

How could we find evidence of the universe simulating our own, if we are in a simulation? They're both logical arguments, not empirical ones.

If, say, in 2080 we have created a number of high-fidelity historical recreations of 2010, with billions of sentient virtual humans, which are nearly indistinguishable (from their perspective) from our original 2010, then much of the uncertainty in the argument is eliminated.

(some uncertainty always remains, of course)

The other distinct possibility is that our simulation reaches some endpoint and possible re-integration, at which point it would be obvious.

comment by [deleted] · 2011-01-29T03:20:18.435Z · LW(p) · GW(p)

tl;dr: If you're going to equate morality with taste, understand that when we measure either of the two, taking agents into account is a huge factor we can't leave out.

I'll be upfront about having not read Sam Harris' book yet, though I did read the blog review to get a general idea. Nonetheless, I take issue with the following point:

Carroll's point at the end about attempting to find the 'objective truth' about what is the best flavor of ice cream echoes my thoughts so far on the "Moral Landscape".

I've found that an objective truth about the best flavor of ice cream can be found if one figures out which disguised query one is after. (Am I looking for "If I had to guess, what would random person z's favorite flavor of ice cream be, with no other information?" or am I looking for something else?)

This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation. When I want to know what flavor of ice cream is best, I take into account people's preferences. If I want to know what would be the most moral action, I need to take into account its effects on people (or myself, should I be a virtue ethicist, or how it aligns with my rules, should I be a deontologist). Admittedly the latter is tougher than the former, but that doesn't mean we have no hope of dealing with it objectively. It just means we have to do the best we can with what we're given, which may mean a lot of individual subjectivity.

In his book Stumbling on Happiness, Daniel Gilbert writes about studying the subjective as objectively as possible when he decides on three premises for understanding happiness: 1) using imperfect tools sucks, but it's better than no tools; 2) an honest, real-time insider view is going to be more accurate than our current best outside views; 3) abuse the law of large numbers to get around the imperfections of 1) and 2) (a.k.a. measure often).
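
A minimal sketch of why the third premise compensates for the first two, assuming (my formalization, not Gilbert's or the commenter's) that each self-report is an unbiased but noisy measurement of the true quantity:

    % Model each report as X_i = \mu + \epsilon_i, with E[\epsilon_i] = 0 and Var(\epsilon_i) = \sigma^2.
    % Averaging n independent reports shrinks the error of the estimate:
    \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \mathrm{SE}(\bar{X}_n) = \frac{\sigma}{\sqrt{n}} \to 0 \quad \text{as } n \to \infty

Imperfect tools just mean a large sigma; measuring often makes n large enough to compensate.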

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-29T03:37:15.806Z · LW(p) · GW(p)

This attempt at making morality too subjective to measure by relating it to taste has always bothered me because people always ignore a main factor here: agents should be part of our computation.

I perhaps should have elaborated more, or thought through my objection to Harris more clearly, but in essence I believe the problem is not that of finding an objective morality given people's preferences, it's objectively determining what people's preferences should be.

There is an objective best ice cream flavor given a certain person's mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?

My attempt at a universal objective morality might take some maximization of value given our current preferences and then evolve it into the future, maximizing over some time window. Perhaps you need to extend that time window to the very end. This would lead to some form of cosmism - directing everything towards some very long term universal goal.

Replies from: None
comment by [deleted] · 2011-01-29T03:50:34.633Z · LW(p) · GW(p)

This post was clearer than your original, and I think we agree more here than we did before, which may partially be an issue of communication styles/methods/etc.

I believe the problem is not that of finding an objective morality given people's preferences, it's objectively determining what people's preferences should be.

This I agree with, but it's more for the gut response of "I don't trust people to determine other people's values." I wonder if the latter could be handled objectively, but I'm not sure I'd trust humans to do it.

There is an objective best ice cream flavor given a certain person's mind, but can we say some minds are objectively more correct on the matter of preferring the best ice cream flavor?

My reflex response to this question was "No," followed by "Wait, wouldn't I weight human minds much more heavily than raccoons' if I were figuring out human preferences?" I then thought it through and latched onto this: agents still matter; if I'm trying to model "best ice cream flavor to humans," I give the rough category of human minds more weight than other minds. Heck, I hardly have a reason to include those other minds, and instrumentally they would likely be detrimental. So on that particular generalization we disagree, but I'm getting the feeling we agree here more than I had guessed.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-29T23:09:56.889Z · LW(p) · GW(p)

This I agree with, but it's more for the gut response of "I don't trust people to determine other people's values." I wonder if the latter could be handled objectively, but I'm not sure I'd trust humans to do it.

We already have to deal with this when we raise children. Western societies generally favor granting individuals great leeway in modifying their preferences and shaping the preferences of their children. We also place much less value on the children's immediate preferences. But even this freedom is not absolute.

comment by Jack · 2011-01-28T23:41:17.505Z · LW(p) · GW(p)

I wish this viewpoint were more common, but judging from the OP's score, it is still in minority.

Hard to say; my sense is that those of us endorsing, sympathetic to, or tolerant of Will's position were pretty persuasive in this thread. The OP's score went up from where it was when I first read the post.

I just picked up Sam Harris's latest book, The Moral Landscape, which is all about the idea that it is high time science invaded religion's turf and claimed objective morality as a subject of scientific inquiry.

I'm in complete agreement with Dreaded_Anomaly on this. Harris is excellent on the neurobiology of religion, as an anti-apologist, and as a commentator on the status of atheism as a public force. But he is way out of his depth as a moral philosopher. Carroll's reaction is pretty much dead on. Even by the standards of the ethical realists, Harris's arguments just aren't any good. As philosophy, they'd be unlikely to meet the standards for publication.

Now, once you accept certain controversial things about morality then much of what Harris says does follow. And from what I've seen Harris says some interesting things on that score. But it's hard to get excited when the thesis the book got publicized with is so flawed.

comment by Perplexed · 2011-01-20T16:31:06.380Z · LW(p) · GW(p)

You seem to be dictating that theist beliefs and simulationist beliefs should not be collected together into the same reference class. (The reason for this diktat seems to be that you disrespect the one and are intrigued by the other - but never mind that.)

However, this does not seem to address the point which I think the OP was making: that arguments for (or against) theism and arguments for (or against) simulationism should be collected together in the same reference class, and that if we do so, we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation. Yet (subjectively speaking) we don't feel they have the same force.

Contempt for those with whom you disagree is one of the most dangerous traps facing an aspiring rationalist. I think that it would be a very good idea if the OP were to produce that posting on charity-in-interpretation which he mentioned.

Next!

Replies from: Eliezer_Yudkowsky, Normal_Anomaly
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-20T19:00:26.872Z · LW(p) · GW(p)

we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation

I've argued rather extensively against religion on this website. Name a single one of those arguments which is equally effective against simulationism.

Replies from: Perplexed, Perplexed, Will_Newsome
comment by Perplexed · 2011-01-22T00:53:02.505Z · LW(p) · GW(p)

I've argued rather extensively against religion on this website.

That was my impression as well, but when I went looking for those arguments, they were very difficult to find. Perhaps my Google-fu is weak. Help from LW readers is welcome.

I found plenty of places where you spoke disrespectfully about religion, and quite a few places where you cast theists as the villains in your negative examples of rationality (a few arguably straw-men, but mostly fair). But I was surprised that I found very few places where you were actually arguing against religion.

Name a single one of those arguments which is equally effective against simulationism.

Well, the only really clear-cut example of a posting-length argument against religion is based on the "argument from evil". As such, it is clearly not equally effective against simulationism.

You did make a posting attempting to define the term "supernatural" in a way that struck me as a kind of special pleading tailored to exclude simulationism from the criticism that theism receives as a result of that definition.

This posting rejects the supernatural by defining it as 'a belief in an explanatory entity which is fundamentally, ontologically mental'. And why is that definition so damning to the supernaturalist program? Well, as I understand it, it is because, by this definition, to believe in the supernatural is anti-reductionist, and a failure of reductionism is simply inconceivable.

I wonder why there is not such a visceral negative reaction to explanatory entities which are fundamentally, ontologically computational. Certainly it is not because we know of at least one reduction of computation. We also know of (or expect to someday know of) at least one reduction of mind.

But even though we can reduce computation, that doesn't mean we have to reduce it. Respectable people have proposed to explain this universe as fundamentally a computational entity. Tegmark does something similar, speculating that the entire multiverse is essentially a Platonic mathematical structure. So, what justification exists to deprecate a cosmology based on a fundamental mental entity?

...

I only found one small item clearly supporting my claim. Eliezer, in a comment, makes this argument against creationists who invoke the Omphalos hypothesis:

Never mind usefulness, it seems to me that "Evolution by natural selection occurs" and "God made the world and everything in it, but did so in such a way as to make it look exactly as if evolution by natural selection occurred" are not the same hypothesis, that one of them is true and one of them is false, that it is simplicity that leads us to say which is which, and that we do, indeed, prefer the simpler of two theories that make the same predictions, rather than calling them the same theory.

I agree. But take a look at this famous paper by Bostrom. It cleverly sidesteps the objection that simulating an entire universe might be impossibly difficult by instead postulating a simulation of just enough physical detail so as to make it look exactly as if there were a real universe out there. "Are you living in a computer simulation?" "Are we living in a world which only looks like it evolved?" Eliezer chose to post a comment answering the latter question with a no. He has not, so far as I know, done the same with Bostrom's simulationist speculation.

Replies from: byrnema, Eliezer_Yudkowsky, CronoDAS, timtyler
comment by byrnema · 2011-01-22T03:49:23.993Z · LW(p) · GW(p)

Help from LW readers is welcome.

I'll chime in that Eliezer provided me with the single most personally powerful argument that I have against religion. (I'm not as convinced by razor and low-prior arguments, perhaps because I don't understand them.)

The argument not only pummels religion, it identifies it: religion is the pattern matching that results when you feel around for the best (most satisfying) answer. To paraphrase Eliezer's argument (if someone knows the post, I'll link to it; there's at least this): while you're in the process of inventing things, there's nothing preventing you from making your theory as grand as you want. Once you have your maybe-they're-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent. Especially the vigorous head-nodding in the congregation.

I don't have so much against pattern matching. I think it has its uses, and religion provides many of them (to feel connected and integrated and purposeful, etc). But it's an absurd means of epistemology. I think it's amazing that religions go from 'whoever made us must love us and want us to love the world' -- which is a very natural pattern for humans to match -- to this great detailed web of fabrication. In my opinion, the religions hang themselves with the details. We might speculate about what our creator would be like, but religions make up way too much stuff in way too much detail and then make it dogma. (I already knew the details were wrong, but I learned to recognize the made-up details as the symptom of lacking epistemology to begin with.)

Now that I recognize this pattern (the pattern of finding patterns that feel right, but which have no reason to be true), I see it in other places too. It seems pattern matching will occur wherever there is a vacuum of the scientific method. Whenever we don't know, we guess. I think it takes a lot of discipline to not feel compelled by guesses that resonate with your brain. (It seems it would help if your brain were wired a little differently so that the pattern didn't resonate as well -- but this is just a theory that sounds good.)

Replies from: Perplexed, timtyler, timtyler
comment by Perplexed · 2011-01-22T04:22:11.249Z · LW(p) · GW(p)

I also would like to see a link to that post, if anyone recognizes it.

I'll agree that to (atheist) me, it certainly seems that one big support for religious belief is the natural human tendency toward wishful thinking. However, it doesn't do much good to provide convincing arguments against religion as atheists picture it. You need convincing arguments against religion as its practitioners see it.

Once you have your maybe-they're-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent.

Yeah, I know what you mean. Pity I can't turn that around and use it against simulationism. :)

Replies from: byrnema
comment by byrnema · 2011-01-22T15:48:29.250Z · LW(p) · GW(p)

I found it: this is the post I meant. But it wasn't written by Eliezer, sorry. (The comment I linked to in the grandparent was his, and it resonates with this idea for me; I might have seen more resonance in older posts.)

You need convincing arguments against religion as its practitioners see it.

I'm confused. I just want to understand religion, and the world in general, better. Are you interested in deconversion?

Pity I can't turn that around and use it against simulationism. :)

Ha ha. Simulationism is of course a way cool idea. I think the compelling meme behind it though is that we're being tricked or fooled by something playful. When you deviate from this pattern, the idea is less culturally compelling.

In particular, the word 'simulation' doesn't convey much. If you just mean something that evolves according to rules, then our universe is apparently a simulation already anyway.

Replies from: Perplexed
comment by Perplexed · 2011-01-22T16:53:40.683Z · LW(p) · GW(p)

I found it: this is the post I meant.

Thx. That is a good posting. As was the posting to which it responded.

You need convincing arguments against religion as its practitioners see it.

I'm confused. I just want to understand religion, and the world in general, better. Are you interested in deconversion?

Whoops! Bad assumption on my part. Sorry. No, I am not particularly interested in turning theists into atheists either, though I am interested in rational persuasion techniques more generally.

comment by timtyler · 2011-01-22T09:09:51.414Z · LW(p) · GW(p)

Dennett tells a similar "agentification" story:

"I think we can discern religion’s origins in superstition, which grew out of an overactive adoption of the intentional stance,” he says. “This is a mammalian feature that we share with, say, dogs. If your dog hears the thud of snow falling off the roof and jumps up and barks, the dog is in effect asking, ‘Who’s there?’ not, ‘What’s that?’ The dog is assuming there’s an agent causing the thud. It might be a dangerous agent. The assumption is that when something surprising, unexpected, puzzling happens, treat it as an agent until you learn otherwise. That’s the intentional stance. It’s instinctive.” The intentional stance is appropriate for self-protection, Dennett explains, and “it’s on a hair trigger. You can’t afford to wait around. You want to have a lot of false positive, a lot of false alarms [...]” He continues: “Now, the dog just goes back to sleep after a minute. But we, because we have language, we mull it over in our heads and pretty soon we’ve conjured up a hallucinated agent, say, a little forest god or a talking tree or an elf or something ghostly that made that noise. Generally, those are just harmless little quirks that we soon forget. But every now and then, one comes along that has a little bit more staying power. It’s sort of unforgettable. And so it grows. And we share it with a neighbor. And the neighbor says, ‘What do you mean, a talking tree? There’s no talking trees.’ And you say, ‘I could have sworn that tree was talking.’ Pretty soon, the whole village is talking about the talking tree.

comment by timtyler · 2011-01-22T09:05:53.475Z · LW(p) · GW(p)

I think that is usually called Patternicity these days. See:

Patternicity: Finding Meaningful Patterns in Meaningless Noise

Why the brain believes something is real when it is not - By Michael Shermer

Replies from: byrnema
comment by byrnema · 2011-01-22T15:13:10.237Z · LW(p) · GW(p)

Seeing patterns in noise and agency in patterns (especially fate) is probably a large factor in religious belief.

But what I was referring to by pattern matching was something different. Our cultural ideas about the world make lots of patterns, and there are natural ways to complete these patterns. When you hear the completion of these patterns, it can feel very correct, like something you already knew, or especially profound if it pulls together lots of memes.

For example, the Matrix is an idea that resonates with our culture. Everyone believes it on some level, or can relate to the world being like that. The movie was popular, but the meme wasn't the result of the movie -- the meme was already there and the movie made it explicit and gave the idea a convenient handle. Human psychology plays a role. The Matrix as a concept has probably always been found in stories as a weak collective meme, but modern technology brought it more immediately and uniformly into our collective awareness.

I think religion is like that. A story that wrote itself from all the loose ends of what we already believe. Religious leaders are good at feeling and completing these collective patterns. Religion is probably in trouble because many of the memes are so anachronistic now. They survive to the extent that the ideas are based on psychology but the other stuff creates dissonance.

This isn't something to reference (I'm sure there are zillions of books developing this) or a personal theory; it's more or less a typical view about religion. It explains why there are so many religions differing in details (different things sounded good to different people) but with common threads (because the religions evolved together with overlapping cultures and reflect our common psychology).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-22T10:37:17.908Z · LW(p) · GW(p)

In lieu of an extended digression about how to adjust Solomonoff induction for making anthropic predictions, I'll simply note that having God create the world 5,000 years ago but fake the details of evolution is more burdensome than having a simulator approximate all of physics to an indistinguishable level of detail. Why? Because "God" is more burdensome than "simulator", God is antireductionist and "simulator" is not, and faking the details of evolution in particular in order to save a hypothesis invented by illiterate shepherds is a more complex specification in the theory than "the laws of physics in general are being approximated".

To me it seems nakedly obvious that "God faked the details of evolution" is a far more outré and improbable theory than "our universe is a simulation and the simulation is approximate". I should've been able to leave filling in the details as an exercise to the reader.
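
One way to cash out "burdensome" in the Solomonoff idiom, offered as a sketch rather than as Eliezer's own formalization: a hypothesis pays a factor of two in prior probability for every extra bit of specification, so theories that fit the data equally well are separated purely by description length.

    % Solomonoff-style prior over hypotheses h with description length K(h):
    P(h) \propto 2^{-K(h)}
    % When two hypotheses predict the data D equally well, the posterior odds
    % reduce to a pure complexity comparison:
    \frac{P(h_1 \mid D)}{P(h_2 \mid D)} = \frac{P(D \mid h_1)\, P(h_1)}{P(D \mid h_2)\, P(h_2)} = 2^{\,K(h_2) - K(h_1)}

On this accounting, "faked the details of evolution to save a particular ancient hypothesis" costs many specification bits; "physics is approximated wherever no one looks closely" costs comparatively few.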

Replies from: Kevin, Will_Newsome, Perplexed, cousin_it
comment by Kevin · 2011-01-22T10:49:10.986Z · LW(p) · GW(p)

Extended digression about how to adjust Solomonoff induction for making anthropic predictions plz

comment by Will_Newsome · 2011-01-29T01:01:26.742Z · LW(p) · GW(p)

This just means you have a very narrow (Abrahamic) conception of God that not even most Christians have. (At least, most Christians I talk to have super-fuzzy-abstract ideas about Him, and most Jews think of God as ineffable and not personal these days AFAIK.) Otherwise your distinction makes little sense. (This may very well be an argument against ever using the word 'God' without additional modifiers (liberal Christian, fundamentalist Christian, Orthodox Jewish, deistic, alien, et cetera), but it's not an argument that what people sometimes mean by 'God' is a wrong idea. Saying 'simulator' is just appealing to an audience interested in a different literary genre. Turing equivalence, man!)

Of note is that the less memetically viral religions tend to be saner (because missionary religions mostly appealed to the lowest common denominator of epistemic satisfiability). Buddhism as Buddha taught it is just flat out correct about nearly everything (even if you disagree with his perhaps-not-Good but also not-Superhappy goal of eliminating imperfection/suffering/off-kilteredness). Many Hindu and Jain philosophers were good rationalists (in the sense that Epicurus was a good rationalist), for instance. To a first and third and fifth approximation, every smart person was right about everything they were trying to be right about. Alas, humans are not automatically predisposed to want to be right about the super far mode considerations modern rationalists think to be important.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-29T01:21:31.117Z · LW(p) · GW(p)

For many people the word "God" appears to just describe one's highest conception of good, the north pole of morality. "God is Love" in Christianity, for example.

From that perspective, I guess God is Rationality for many people here.

Replies from: Furcas, Will_Newsome
comment by Furcas · 2011-01-29T01:29:37.012Z · LW(p) · GW(p)

For many people the word "God" appears to just describe one's highest conception of good, the north pole of morality.

People might say that, but they don't actually believe it. They're just trying to obfuscate the fact that they believe something insane.

comment by Will_Newsome · 2011-01-29T01:28:35.591Z · LW(p) · GW(p)

This conception lets you do a lot of fun associations. Since morality seems pretty tied up with good epistemology (preferences and beliefs are both types of knowledge, after all), and since knowledge is power (see Eliezer's posts on engines of cognition), you would expect this conception of God to be not only the most moral (omnibenevolent) but also the most knowledgeable (omniscient) and powerful (omnipotent). Because God embodies correctness, He is thus convergent for minds approximating Bayesianism (like math), has a universally very short description length (omnipresent), and is accessible from many different computations (arguably personal).

Delicious delicious metacontrarianism...

Replies from: NihilCredo, wnoise
comment by NihilCredo · 2011-02-16T07:23:20.584Z · LW(p) · GW(p)

It's like Scholastic mad-libs!

comment by wnoise · 2011-01-29T02:16:08.604Z · LW(p) · GW(p)

Preferences are entangled with beliefs, certainly, but I don't see why I would consider them to be knowledge.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-29T03:00:16.554Z · LW(p) · GW(p)

What is your operational definition of knowledge?

comment by Perplexed · 2011-01-22T17:30:39.376Z · LW(p) · GW(p)

To me it seems nakedly obvious that "God faked the details of evolution" is a far more outré and improbable theory than "our universe is a simulation and the simulation is approximate". I should've been able to leave filling in the details as an exercise to the reader.

Trusting one's 'gut' impressions of the "nakedly obvious" like that and 'leaving the details as an exercise' is a perfectly reasonable thing to do when you have a well-tuned engine of rationality in your possession and you just need to get some intellectual work done.

But my impression of the thrust of the OP was that he was suggesting a bit of time-consuming calibration work so as to improve the tuning of our engines: looking at our heuristics and biases with a bit of skepticism. Isn't that what this community is all about?

But enough of this navel gazing! I also would like to see that digression on Solomonoff induction in an anthropic situation.

comment by cousin_it · 2011-01-22T13:56:47.781Z · LW(p) · GW(p)

Seconding Kevin's request. Seeing a sentence like that with no followup is very frustrating.

comment by CronoDAS · 2011-01-24T03:58:27.559Z · LW(p) · GW(p)

I found plenty of places where you spoke disrespectfully about religion, and quite a few places where you cast theists as the villains in your negative examples of rationality (a few arguably straw-men, but mostly fair). But I was surprised that I found very few places where you were actually arguing against religion.

The post you are looking for is Religion's Claim to be Non-Disprovable

Replies from: Perplexed
comment by Perplexed · 2011-01-24T21:03:08.114Z · LW(p) · GW(p)

Thx. But I don't read that as arguing against religion. Instead it seems to be an argument against one feature of modern religion: its claim to unfalsifiability (since it deals with Non-Overlapping Magisteria, 'NOMA' to use the common acronym). Eliezer thinks this is pretty wimpy. He seems to have more respect for old-time religion, like those priests of Baal who stuck their necks out, so to speak, and submitted their claims to empirical testing.

Can this attitude of critical rationalism be redeployed against simulationist claims? Or at least against the claims of those modern simulationists who keep their simulations unfalsifiable and don't permit interaction between levels of reality? Against people like Bostrom who stipulate that the simulations that they multiply (without necessity) should all be indistinguishable from the real thing - at least to any simulated observer? I will leave that question to the reader. But I don't think that it qualifies as a posting in which Eliezer argues against religion in toto. He is only arguing against one feature of modern apologetics.

Replies from: CronoDAS
comment by CronoDAS · 2011-01-25T21:22:31.238Z · LW(p) · GW(p)

The other part of the argument in that post is that existing religions are not only falsifiable, but have already been falsified by empirical evidence.

comment by timtyler · 2011-01-22T10:56:38.911Z · LW(p) · GW(p)

It cleverly sidesteps the objection that simulating an entire universe might be impossibly difficult by instead postulating a simulation of just enough physical detail so as to make it look exactly as if there were a real universe out there.

A "Truman Show"-style simulation. Less burdensome on the details - but their main application seems likely to be entertainment. How entertaining are you?

comment by Perplexed · 2011-01-21T00:07:30.447Z · LW(p) · GW(p)

I'll have to review your arguments to provide a really well-informed response. Please allow me roughly 24 hours. But in the meantime, I know I have seen arguments invoking Occam's razor and "locating the hypothesis" here. I was under the impression that some of those were yours. As I understand those arguments, they apply equally well to theism and simulationism. That is, they don't completely rule out those hypotheses, but they do suggest that they deserve vanishingly low priors.

Replies from: timtyler, Zack_M_Davis, ata
comment by timtyler · 2011-01-21T18:59:50.517Z · LW(p) · GW(p)

Occam's razor weighs heavily against theism and simulism, for very similar reasons.

Probably a bit more heavily against theism, though, since it has a bunch of additional razor-violating nonsense associated with it.

comment by Zack_M_Davis · 2011-01-21T01:31:32.000Z · LW(p) · GW(p)

arguments invoking Occam's razor [...] don't completely rule out those hypotheses, but they do suggest that they deserve vanishingly low priors

"Decoherence is Simple" seems relevant here. It's about the many-worlds interpretation, but the application to simulation arguments should be fairly straightforward.

Replies from: Perplexed
comment by Perplexed · 2011-01-21T02:43:28.673Z · LW(p) · GW(p)

I'm afraid I don't see the application to simulation arguments. You will have to spell it out.

I fully agree with EY that Occam is not a valid argument against MWI. For that matter, I don't even see it as a valid argument against the Tegmark Ultimate Ensemble. But I do see it as a valid argument against either a Creator (unneeded entity) or a Simulator (also an unneeded entity). The argument against our being part of a simulation is weakened only if we already know that simulations of universes as rich as ours are actually taking place. But we don't know that. We don't even know that it is physically and logically possible.

Nevertheless, your mention of MWI and simulation in the same posting brings to mind a question that has always bugged me. Are simulations understood to cover all Everett branches of the simulated world? And if they are understood to cover all branches, is that broad coverage achieved within a single (narrow) Everett branch of the universe doing the simulating?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-01-21T03:15:49.766Z · LW(p) · GW(p)

I'm afraid I don't see the application to simulation arguments. You will have to spell it out.

My thought was that the post linked in the grandparent argues that we should prefer logically simpler theories but not penalize theories just because they posit unobservable entities, and that some simple theories predict the existence of a simulator.

We don't even know that [simulations rich enough to explain our experiences are] physically and logically possible.

Yes, the possibility of simulations is taken as a premise of the simulation argument; if you doubt it, then it makes sense to doubt the simulation argument as well.

Replies from: Perplexed
comment by Perplexed · 2011-01-21T06:19:00.588Z · LW(p) · GW(p)

some simple theories predict the existence of a simulator.

Perhaps we are using the word "simple" in different ways. Bostrom's assumption is the existence of an entity who wishes to simulate human minds in a way that convinces them that they exist in a giant expanding universe rather than a simulation. How is that "simple"? And, more to the point raised by the OP, how is it simpler than the notion of a Creator who created the universe so as to have some company "in His image and likeness"?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-01-21T17:30:22.478Z · LW(p) · GW(p)

Bostrom is saying that if advanced civilizations have access to enormous amounts of computing power and for some reason want to simulate less-advanced civilizations, then we should expect that we're in one of the simulations rather than basement-level reality, because the simulations are more numerous. The simulator isn't an arbitrarily tacked-on detail; rather, it follows from other assumptions about future technologies and anthropic reasoning. These other assumptions might be denied: perhaps simulations are impossible, or maybe anthropic reasoning doesn't work that way - but they seem more plausible and less gerrymandered than traditional theism.
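
The counting step can be made explicit. Roughly following the notation of Bostrom's paper (f_P and N-bar are his variables; this compresses his three-factor version), the expected fraction of observers with human-type experiences who live in simulations is:

    % f_P     : fraction of human-level civilizations that reach a posthuman stage
    % \bar{N} : average number of ancestor-simulations such a civilization runs
    f_{\mathrm{sim}} = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}

If the product f_P N-bar is large, f_sim approaches 1; denying the conclusion means denying that the product is large, which is where the premises about future technology and motivation do their work.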

comment by ata · 2011-01-21T00:55:43.158Z · LW(p) · GW(p)

Have you read the paper? I'm not convinced of it for a few reasons, but I'd consider it located at least.

Replies from: Perplexed
comment by Perplexed · 2011-01-21T01:51:21.941Z · LW(p) · GW(p)

Have you read the paper?

Yes, I had read Bostrom's paper.

I'm not convinced of it for a few reasons, but I'd consider it located at least.

I would express my opinion of that argument using less litotes. But as to locating the hypotheses, I suppose I agree.

Which leads me to ask, have you read the catechism? Like most Catholic schoolchildren, I was encouraged to memorize much of it in elementary school, though I have since forgotten almost all of it. It also locates one hypothesis, a hypothesis considerably more popular than Bostrom's.

Replies from: wedrifid
comment by wedrifid · 2011-01-21T01:54:50.672Z · LW(p) · GW(p)

I would express my opinion of that argument using less litotes

My new word of the day. It's not a bad one!

comment by Will_Newsome · 2011-01-20T19:13:44.582Z · LW(p) · GW(p)

(Somewhat related: for those that haven't seen it, Eliezer's Beyond the Reach of God is an excellent article.)

Replies from: Perplexed
comment by Perplexed · 2011-01-21T00:25:48.591Z · LW(p) · GW(p)

Perhaps I missed the point of your recommendation. That article by Eliezer seems to argue against the existence of a benevolent God who allows evil and death but does not balance this by endowing humans with immortal souls. Since at least 95% of those who worship Jehovah (to say nothing of Hindus) understand the Deity quite differently, I don't really see the relevance.

But while I am speaking to you, I'm curious as to whether (in my grandfather comment) I correctly captured the point of your OP?

comment by Normal_Anomaly · 2011-01-23T22:51:18.717Z · LW(p) · GW(p)

we discover that many of the counter-arguments that we advance against theist apologetics are (objectively speaking) equally effective against simulationist speculation.

From what I've seen, the primary argument for simulationism is anthropic: if simulating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more simulations out there than "basement realities", so we're probably in a simulation. What effect MWI has on this, and what other arguments are out there, I don't know.

Typical atheist arguments focus on it not being necessary for god to exist to explain what we see, and this coupled with a low prior makes theism unjustified--basically the "argument from no good evidence in favor". This is fine, because the burden of proof is on the theists. But if you find the anthropic argument for the simulation hypothesis good, then that's one more good argument than theism has.

Replies from: Perplexed
comment by Perplexed · 2011-01-23T23:05:42.677Z · LW(p) · GW(p)

If creating a whole universe is possible, then some entity would do it a lot, so there are probably a lot more creations out there than "basement realities", so we're probably in a creation.

This is fine, because the burden of proof is on the theists. But if you find the anthropic argument for the simulation hypothesis good, then that's one more good argument than theism has.

Luckily for the preservation of my atheism, I don't find the 'anthropic argument' for the simulation hypothesis good. And I put the scare quotes there, because I don't think this is what is usually known as an anthropic argument.

comment by steven0461 · 2011-01-20T19:37:23.705Z · LW(p) · GW(p)

"Powerful aliens" has connotations that may be even more inaccurate; it makes me think of Klingon warlords or something.

comment by Will_Newsome · 2011-01-20T20:16:13.771Z · LW(p) · GW(p)

This is mere distortion of both the common informal use and advanced formal definitions of the word "atheism", which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.

What I think of as the informal definition of atheism is something like "the state of not believing in God or gods". I believe in gods and God, and I take this into account in my human approximation of a decision theory. I'm not yet sure what their intentions are, and I'm not inclined to worship them yet, but by my standards I'm definitely not an atheist. What is your definition of atheism such that it is meaningfully different from 'not religious'? Why are we throwing a good word like 'theism' into the heap of wrong ideas? It's like throwing out 'singularity' because most people pattern match it to Kurzweil, despite the smartest people having perfectly legitimate beliefs about it.

It doesn't really matter; I just think that it's sad that so many rationalists consider themselves atheists when by a reasonable definition it seems they definitely are not, even if atheism has more correct connotations than the alternatives (though I call myself a Buddhist, which makes the problem way easier). Perhaps I am not seeing the better definition?

Replies from: Document
comment by Document · 2011-01-20T20:33:53.556Z · LW(p) · GW(p)

It's like throwing out 'singularity' because most people pattern match it to Kurzweil

Possibly a bad example, since a number of people here advocate that. I remember a comment somewhere that people at SIAI were considering renaming it for related reasons.

Replies from: ata, Will_Newsome
comment by ata · 2011-01-20T20:38:17.034Z · LW(p) · GW(p)

Possibly a bad example, since a number of people here advocate that. I remember a comment somewhere that SIAI was considering a name change for related reasons.

Here's the one I remembered (there may have been a couple of other mentions):

Hollerith, if by that you're referring to the mutant alternate versions of the "Singularity" that have taken over public mindshare, then we can be glad that despite the millions of dollars being poured into them by certain parties, the public has been reluctant to uptake. Still, the Singularity Institute may have to change its name at some point - we just haven't come up with a really good alternative.

(I agree with this, but do not have a better name to propose.)

comment by Will_Newsome · 2011-01-20T20:39:44.334Z · LW(p) · GW(p)

I think they're going to drop the 'for Artificial Intelligence' part, but I think they're keeping the 'Singularity' part, since they're interested in other things besides seed AI that are traditionally 'Singularitarian'. (Side note: I'm not sure if I should use 'we' or 'they'. I think 'they'. Nobody at SIAI wants to speak for SIAI, since SIAI is very heterogeneous. And anyway I'm just a Visiting Fellow.) The social engineering aspects of the problem are complicated. Accuracy, or memorability? Rationalists should win, after all...

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-20T20:55:36.274Z · LW(p) · GW(p)

Side note: I'm not sure if I should use 'we' or 'they'.

You could go with "it" and sidestep the problem.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T20:58:00.233Z · LW(p) · GW(p)

Thanks!

comment by [deleted] · 2014-11-28T22:52:42.105Z · LW(p) · GW(p)

If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next! ... This is mere distortion of both the common informal use and advanced formal definitions of the word "atheism", which is not only unhelpful but such a common religious tactic that you should not be surprised to be downvoted.

It bothers me when an easily researched, factually incorrect statement is upvoted so many times. There are many different definitions of atheism, but one good one might be:

An atheist is one who denies the existence of a personal, transcendent creator of the universe, rather than one who simply lives life without reference to such a being. A theist is one who asserts the existence of such a creator. Robin Le Poidevin. Arguing for Atheism: An Introduction to the Philosophy of Religion

The book does not define personal or transcendent, but it is unlikely that either would exclude "god is an extradimensional being who created us using a simulation" as a theistic argument. For example, one likely definition of transcendent is:

transcendent: the realm of thought which lies beyond the boundary of possible knowledge, because it consists of objects which cannot be presented to us in intuition, i.e., objects which we can never experience with our senses (sometimes called noumena). The closest we can get to gaining knowledge of the transcendent realm is to think about it by means of ideas. (The opposite of 'transcendent' is 'immanent'.) [http://staffweb.hkbu.edu.hk/ppp/ksp1/KSPglos.html]

Beings living outside the simulation would definitely qualify as transcendent since we have no way of experiencing their universe. To be clear, I am not saying this is the only possible definition of atheism. I am only saying that it is one reasonable definition of atheism, and to claim that it is not a definition, as Eliezer's post has done, is factually incorrect.

comment by Will_Newsome · 2011-01-20T18:34:24.235Z · LW(p) · GW(p)

"Gods are ontologically distinct from creatures, or they're not worth the paper they're written on." -- Damien Broderick

Most upper ontologies allow no such ontological distinction. E.g. my default ontology is algorithmic information theory, which allows for tons of things that look like gods.

I agree with the rest of your comment, though. I don't know what 'worship' means yet (is it just having lots of positive affect towards something?), but it makes for a good distinction between religion and not-quite-religion.

Time for me to reread A Human's Guide to Words, I suppose. But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.

Replies from: Jack, wedrifid
comment by Jack · 2011-01-20T19:23:31.490Z · LW(p) · GW(p)

But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.

I'm curious to know why you prefer this language. I kind of like it too, but can't really put a finger on why.

Replies from: Will_Newsome, ata, steven0461
comment by Will_Newsome · 2011-01-20T19:40:24.388Z · LW(p) · GW(p)

Primarily because I get a lot of glee out of meta-contrarianism and talking in a way that would make stereotypical aspiring rationalists think I was crazy. Secondarily because the language is culturally rich. Tertiarily because I figure out what smart people actually mean when they talk about faith, chakras, souls, et cetera, and it's fun to rediscover those concepts and find their naturalistic basis. Quaternarily it allows me to practice charity in interpretation and steel-manning of bad arguments. Zerothly (I forgot the most important reason!) it is easier to speak in such a way, which makes it easier to see implications and decompartmentalize knowledge. Senarily it is more aesthetic than rationalistic jargon.

Replies from: Zack_M_Davis, Sniffnoy
comment by Zack_M_Davis · 2011-01-20T20:22:10.732Z · LW(p) · GW(p)

I get a lot of glee out of meta-contrarianism and talking in a way that would make stereotypical aspiring rationalists think I was crazy

I agree that verbal masturbation is fun, but it's not helpful when you're trying to actually communicate with people. Consider purchasing contrarian glee and communication separately.

Replies from: steven0461, Will_Newsome
comment by steven0461 · 2011-01-20T21:05:47.639Z · LW(p) · GW(p)

That's a good point, but where do you recommend getting contrarian glee separate from communication?

Replies from: Document, anon895
comment by Document · 2011-01-20T21:36:11.048Z · LW(p) · GW(p)

Cached thoughts: Crackpot Theory (48 readers)? Closet Survey, The Strangest Thing An AI Could Tell You, The Irrationality Game? Omegle?

Replies from: steven0461
comment by steven0461 · 2011-01-20T22:31:03.415Z · LW(p) · GW(p)

I wish crackpot theories were considered a legitimate form of art. They're like fantasy worldbuilding but better.

comment by anon895 · 2011-01-24T00:06:26.754Z · LW(p) · GW(p)

Here, of course.

comment by Will_Newsome · 2011-01-20T20:26:32.648Z · LW(p) · GW(p)

I agree, though I was describing the case where I can do both simultaneously (when I'm talking to people who either don't mind or join in on the fun). This post was more an example of just not realizing that the use of the word 'theism' would have such negative and distracting connotations.

comment by Sniffnoy · 2011-01-21T04:14:34.059Z · LW(p) · GW(p)

Tertiarily because I figure out what smart people actually mean when they talk about faith, chakras, souls, et cetera, and it's fun to rediscover those concepts and find their naturalistic basis.

Except I think it's safe to say this sort of thing typically isn't what they mean, merely what they perhaps might mean if they were thinking more clearly. And it's not at all clear how you could find analogs to the more concrete religious ideas (e.g. chakras or the holy trinity).

Quaternarily it allows me to practice charity in interpretation and steel-manning of bad arguments.

If the person would violently disagree that this is in fact what they intended to say, I'm not sure it can be called "charity of interpretation" anymore. And while I agree steel-manning of bad arguments is important, to do it to such an extent seems to be essentially allowing your attention to be hijacked by anyone with a hypothesis to privilege.

comment by ata · 2011-01-20T19:44:47.672Z · LW(p) · GW(p)

I think Ben from TakeOnIt put it well:

P.P.G. Bateson said:

Say what you mean, even if it takes longer, rather than use a word that carries so many different connotations.

Interestingly, I can't actually think of a word with more connotations than "God". Perhaps this is a function of the fact that:

  1. All definitions of "God" agree that "God" is the most important thing.
  2. There is nothing more disagreeable than what is the most important thing.

There's definitely something deeply appealing about theistic language. That's what makes it so dangerous.

Replies from: Jack
comment by Jack · 2011-01-20T20:05:11.959Z · LW(p) · GW(p)

That advice makes sense for general audiences. Your average Christian might read a version of the Simulation argument written with theistic language as an endorsement of their beliefs. But I really doubt posters here would.

Replies from: Perplexed, SilasBarta
comment by Perplexed · 2011-01-21T01:34:07.919Z · LW(p) · GW(p)

Frank Tipler actually produced a simulation argument as an endorsement of Christian belief. Along with some interesting cosmology making it possible for this universe to simulate itself! (It's easy when the accessible quantity of computronium tends to infinity as the age of the universe approaches its limit.) In Tipler's theory, God may not exist yet, but a kind of Singularity will create Him.

Of course, the average Christian has not yet heard of Tipler, nor would said Christian accept the endorsement. But it is out there.

Replies from: JoshuaZ, Document, Jack
comment by JoshuaZ · 2011-01-21T04:17:35.082Z · LW(p) · GW(p)

One issue I've never understood about Tipler is how he got from theism to Christianity using the Omega Point argument. It seems very similar to the SMBC cartoon Eliezer already linked to. Tipler's argument is a plausibility argument for, maybe, something sort of like a deity if you squint at it. Somehow that then gives rise to Christianity, theology and all.

comment by Document · 2011-01-21T02:27:05.820Z · LW(p) · GW(p)

It's worth pointing out that we now know that the universe's expansion is accelerating, which would rule out the omega point even if it were plausible before.

Replies from: Perplexed
comment by Perplexed · 2011-01-21T02:50:07.924Z · LW(p) · GW(p)

IIRC, Tipler had that covered. A universe of infinite duration allows us to use eons of future time to simulate a single second of time in the current era. Something like the hotel with infinitely many rooms.
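
(A toy version of the scheduling trick, my gloss rather than Tipler's actual construction: simulate second k of the target era during the physical interval [T_k, T_{k+1}) with T_k = 2^k. Every interval is finite and each is longer than the last, yet every simulated second is eventually reached, so a universe of infinite duration covers an unbounded simulated timeline at an ever-slowing rate, much as the infinite hotel seats unboundedly many guests.)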

But please don't ask me to actually defend Tipler's mumbo-jumbo.

Replies from: gwern
comment by gwern · 2011-01-21T03:24:29.886Z · LW(p) · GW(p)

I don't think it can be defended any more. I picked it up a few weeks ago, read a few chapters, and thought, do I want to read any more given that he requires the universe to be closed? Dark energy would seem to forbid a Big Crunch and render even the early parts of his model moot.

Replies from: SRStarin, Document
comment by SRStarin · 2011-01-21T04:30:46.538Z · LW(p) · GW(p)

Sweet! Wikipedia's image for Physical Cosmology, including your Dark Energy link, is the cosmic microwave background map from the WMAP mission. That was the first mission I worked on at NASA. My job, as junior-underling attitude control engineer, was to come up with some way to salvage the medium-cost, medium-risk mission if a certain part failed, and to help babysit the spacecraft during the least-fun midnight-to-noon shift. Still, it feels good to have been a tiny part of something that has made a difference in how we understand our universe.

Disclaimer: My unofficial opinions, not NASA's. Blah, blah, blah.

comment by Document · 2011-01-21T04:03:20.463Z · LW(p) · GW(p)

I think you duplicated my post.

Replies from: gwern
comment by gwern · 2011-01-21T14:15:46.654Z · LW(p) · GW(p)

So I did. Context in Recent Comments unfortunately only reaches so far.

comment by Jack · 2011-01-21T01:40:41.585Z · LW(p) · GW(p)

How does he get from there to Christianity in particular?

Replies from: wedrifid, Perplexed
comment by wedrifid · 2011-01-21T01:46:10.769Z · LW(p) · GW(p)

If you are assuming infinite computronium you may as well go ahead and assume simulation of all of the conceivable religions!

I suppose that leaves you in a position of Pascal's Gang Mugging.

Replies from: Will_Newsome, Perplexed
comment by Will_Newsome · 2012-03-04T10:05:15.998Z · LW(p) · GW(p)

I suppose that leaves you in a position of Pascal's Gang Mugging.

That's basically Hindu theology in a nutshell. Or more accurately, Pascal's Gang Maybe Mugging Maybe Hugging.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-03-04T22:05:36.407Z · LW(p) · GW(p)

If you assume a Tegmark multiverse — that all definable entities actually exist — then it seems to follow that:

All malicious deprivation — some mind recognizing another mind's definable possible pleasure, and taking steps to deny that mind's pleasure — implies the actual existence of the pleasure it is intended to deprive;

All benevolent relief — some mind recognizing another mind's definable possible suffering, and taking steps to alleviate that suffering — implies the actual existence of the suffering it is intended to relieve.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-04T22:25:06.844Z · LW(p) · GW(p)

It does not follow from the fact that I am motivated to prevent certain kinds of suffering/pleasure, that said suffering/pleasure is "definable" in the sense I think you mean it here. That is, my brain is sufficiently screwy that it's possible for me to want to prevent something that isn't actually logically possible in the first place.

comment by Perplexed · 2011-01-21T02:09:26.250Z · LW(p) · GW(p)

Since religions are human inventions, I would guess that any comprehensive simulation program already produces all conceivable religions.

But I'm guessing that you meant to talk about the simulation of all conceivable gods. That is another matter entirely. Even with unlimited computronium, you can only simulate possible gods - gods not entailing any logical contradictions. There may not be any such gods.

This doesn't affect Tipler's argument though. Tipler does not postulate God as simulated. Tipler postulates God as the simulator.

comment by Perplexed · 2011-01-21T02:02:47.585Z · LW(p) · GW(p)

I'm not sure. I only read the first book - "Physics of Immortality". But I would suppose that he doesn't actually try to prove the truth of Christianity - he might be satisfied to simply make Christian doctrine seem less weird and impossible.

comment by SilasBarta · 2011-01-20T21:05:32.577Z · LW(p) · GW(p)

Here's a direct comparison of the two that I made.

comment by steven0461 · 2011-01-20T19:56:52.407Z · LW(p) · GW(p)

There's a buttload of thinking that's been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true. But mostly I don't think it is.

(For any discredited theory along the lines of gods or astrology, you want to focus on its advocates from the past more than from the present, because the past is when the world's best minds were unironically into these things.)

Replies from: Jack, Will_Newsome
comment by Jack · 2011-01-20T20:06:12.351Z · LW(p) · GW(p)

There's a buttload of thinking that's been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true.

There's also the opportunity for a kind of metatheology - which might lead to some really interesting insights into humans and how they relate to the world.

comment by Will_Newsome · 2011-01-29T00:31:43.749Z · LW(p) · GW(p)

There's a buttload of thinking that's been done in this language in earlier times, and if we use the language, that suggests we can reuse the thinking, which is pretty exciting if true. But mostly I don't think it is.

Tangentially, it's important to note that most followers of a philosophy/religion are going to be stupid compared to their founders, so we should probably just look at what founders had to say. (Christ more than His disciples, Buddha more than Zen practitioners, Freud and Jung more than their followers, et cetera.) Many people who are now considered brilliant/inspiring had something legitimately interesting to say. History is a decent filter for intellectual quality.

That said, everything you'd ever need to know is covered by a combination of Terence McKenna and Gautama Buddha. ;)

Replies from: Nornagest
comment by Nornagest · 2011-01-29T00:39:41.485Z · LW(p) · GW(p)

Tangentially, it's important to note that most followers of a philosophy/religion are going to be stupid compared to their founders, so we should probably just look at what founders had to say.

This doesn't follow. The founder of a religion is likely to be more intelligent or at least more insightful than an average follower, but a religion of any size is going to have so many followers that a few of them are almost guaranteed to be more insightful than the founder was; founding a religion is a rare event that doesn't have any obvious correlation with intelligence.

I'd also be willing to bet that founding a successful religion selects for a somewhat different skill set than elucidating the same religion would.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2011-01-29T01:09:29.790Z · LW(p) · GW(p)

The founder of a religion is likely to be more intelligent or at least more insightful than an average follower, but a religion of any size is going to have so many followers that a few of them are almost guaranteed to be more insightful than the founder was; founding a religion is a rare event that doesn't have any obvious correlation with intelligence.

You're mostly right; upvoted. I suppose I was thinking primarily of Buddhism, which was pretty damn exceptional in this regard. Buddha was ridiculously prodigious. There are many Christians with better ideas about Christianity than Christ, and the same is probably true of Zoroaster and Mohammed, though I'm not aware of them. Actually, if anyone has links to interesting writing from smart non-Sufi Muslims, I'd be interested.

I'd also be willing to bet that founding a successful religion selects for a somewhat different skill set than elucidating the same religion would.

This kind of depends on criteria for success. If number of adherents is what matters then I agree, if correctness is what matters then it's probably a very similar skill set. Look at what postmodernists would probably call Eliezer's Singularity subreligion, for instance.

Replies from: jacob_cannell, Nornagest
comment by jacob_cannell · 2011-01-29T01:32:19.644Z · LW(p) · GW(p)

There are many Christians with better ideas about Christianity than Christ, and the same is probably true of Zoroaster and Mohammed, though I'm not aware of them.

There's a serious problem with this in Christianity in that you have to figure out what the founder actually said in the first place, which is very much an open problem (and perhaps for Buddhism as well, but I am less familiar with it at the moment).

For example, just last century, with the rediscovery of the Gospel of Thomas, you get a whole new set of information which is... challenging to integrate, to say the least, and also very interesting.

About half of the sayings are different (usually earlier, better) versions of stuff already in the synoptics, but there are some new gems - check out 22:

When you make the two into one, and when you make the inner like the outer and the outer like the inner, and the upper like the lower, and when you make male and female into a single one, so that the male will not be male nor the female be female, when you make eyes in place of an eye, a hand in place of a hand, a foot in place of a foot, an image in place of an image, then you will enter [the kingdom]

Or 108:

Whoever drinks from my mouth will become like me; I myself shall become that person, and the hidden things will be revealed to him

Replies from: Desrtopa
comment by Desrtopa · 2011-01-29T01:53:06.712Z · LW(p) · GW(p)

Those are certainly things that weren't in the Bible before, and that people would have put a lot of work into interpreting if they had been, but "gems" is not the word I'd use.

comment by Nornagest · 2011-01-29T01:15:04.881Z · LW(p) · GW(p)

If number of adherents is what matters then I agree, if correctness is what matters then it's probably a very similar skill set.

Point taken. I was thinking of number of adherents.

comment by Will_Newsome · 2011-01-29T01:11:43.340Z · LW(p) · GW(p)

Also I should note that by 'intelligence' I mostly meant 'predisposition to say insightful or truthful things', which is rather different from g.

comment by wedrifid · 2011-01-20T19:48:57.791Z · LW(p) · GW(p)

Time for me to reread A Human's Guide to Words, I suppose. But in my head and with Visiting Fellows folk I think I will continue to use an ontological language stolen from theism.

Just be careful of true believers that may condemn you for heresy for using the other tribe's jargon! ;)

'Worship' or 'Elder Rituals' could not be reasonably construed as a relevant reply to your thread.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T20:05:59.792Z · LW(p) · GW(p)

'Worship' or 'Elder Rituals' could not be reasonably construed as a relevant reply to your thread.

Eliezer is trying to define theism to mean religion, I think, so that atheism is still a defensible state of belief. I guess I'm okay with this, but it makes me sad to lose what I saw as a perfectly good word.

Replies from: Jack
comment by Jack · 2011-01-20T20:11:52.823Z · LW(p) · GW(p)

Strongly agree. Better to avoid synonyms when possible. 'Simulationism' is ugly and doesn't seem sufficiently general in the way 'theism' does.

comment by [deleted] · 2011-01-20T14:58:04.065Z · LW(p) · GW(p)

I know one isn't supposed to use web comics to argue a point, but I've always found SMBC to be the exception to that rule. Maybe not always to get the point across so much as to lighten the mood.

Replies from: shokwave
comment by shokwave · 2011-01-20T15:23:37.197Z · LW(p) · GW(p)

When I want to discuss something, I use a relevant SMBC comic to get people to locate the thing I am talking about. I say 'decision theory ethics' and people glaze over. I link this and they get it immediately.

Not relevant: when people want to use god-particles, etc., to justify belief in God, I use this. It is significantly more effective than any argument I've employed.

comment by Miller · 2011-01-20T10:08:12.764Z · LW(p) · GW(p)

Yes. Next. I think this post demonstrates the need for downvotes to be a greater-than-1.0 multiple of upvotes. What argument is there otherwise, other than the status quo?

Replies from: shokwave
comment by shokwave · 2011-01-20T15:00:02.837Z · LW(p) · GW(p)

What argument is there otherwise other than the status quo?

To the extent that positive karma is a reward for the poster and an indication of what people desire to see (both very true), we should not expect a distribution centered on zero. If the average comment is desirable and deserving of reward, then the average comment will be upvoted.

Replies from: Miller
comment by Miller · 2011-01-20T23:21:37.281Z · LW(p) · GW(p)

I didn't say anything about centering on zero, and agree that would be incorrect. However, modification to the current method is likely challenging, and no one's actually going to do any novel karma engineering here, so it was a silly comment for me to make.

comment by lukstafi · 2011-01-20T13:50:52.766Z · LW(p) · GW(p)

"Gods are ontologically distinct from creatures, or they're not worth the paper they're written on." -- Damien Broderick

If you believe in a Matrix or in the Simulation Hypothesis, you believe in powerful aliens, not deities. Next!

[Deleted: Gods "run an intrinsically infinitary inference system".] ETA: agreed, silly.

Replies from: shokwave
comment by shokwave · 2011-01-20T14:58:01.667Z · LW(p) · GW(p)

My recent definition

is summarily rejected. What does 'intrinsically infinitary' even mean?

Replies from: lukstafi
comment by lukstafi · 2011-01-20T16:16:01.716Z · LW(p) · GW(p)

For example, outside the domain of Goedel's theorems.

comment by grouchymusicologist · 2011-01-20T01:57:24.719Z · LW(p) · GW(p)

This post could use a reminder of Less Wrong's working definition of the supernatural (of which theism, as virtually everyone uses the term, is surely a proper subset): it's something that involves an ontologically basic mental entity. We have no reason to suspect the existence of such things, and the simulation argument -- since it certainly does not appeal to such things -- doesn't change that a bit. Any resemblance to theism is superficial at most.

I'd also be curious to know what popular arguments for atheism you happen to think are so much weaker than you'd expected.

EDIT: ignore that last question if you like, I'm getting a sense for it elsewhere in the thread (though do not really agree).

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T06:56:09.185Z · LW(p) · GW(p)

Carrier's definition of supernaturalism as non-reductionist explanations involving ontologically basic mental entities is something of a strawman and makes the term somewhat useless (i.e., it is not the definition many theists would even argue for).

The more typical definition of supernaturalism usually refers to events that operate outside of the normal laws of physics. This definition is potentially relevant to simulationism, because a simulator would of course be free to occasionally intervene and violate normal physical 'law' if so desired. Of course, this entity itself would still be reducible to simpler physical processes in its own universe.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-01-26T08:36:02.553Z · LW(p) · GW(p)

But what does that even mean? How are the "normal" laws of physics distinguished from the actual laws of physics?

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T09:10:00.373Z · LW(p) · GW(p)

The normal laws of physics are those that predict the universe absent interventions from said external universe, which may include some extraneous special-case code.

The same physics could describe the whole system of course at some deeper level, so perhaps 'normal' was not quite the right distinction. Limited?

comment by Desrtopa · 2011-01-20T01:17:59.440Z · LW(p) · GW(p)

I don't think the implications of accepting the simulation argument on one's worldview are that similar to believing in a supernatural omniscient creator of the universe and arbiter of morality. Absent a ready label for "one who accepts the simulation argument in a naturalistic framework," it's probably more convenient for such people to simply identify as "atheist." Conflating simulationism with theism is only liable to lead to confusion.

Replies from: Will_Newsome, jacob_cannell
comment by Will_Newsome · 2011-01-20T02:08:36.721Z · LW(p) · GW(p)

Voted up and agreed; I often forget that Less Wrong is rightly conscientious about keeping inferential distances imposed by terminological suboptimality to a minimum.

Replies from: Miller
comment by Miller · 2011-01-20T02:33:50.748Z · LW(p) · GW(p)

Conflating simulationism with theism is only liable to lead to confusion.

This observation dissolves your post. If you agree with it then repent properly, o' sinner.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T02:37:05.599Z · LW(p) · GW(p)

It doesn't really dissolve what I was actually trying to get at with my post, though; it just means I didn't do a good job at explaining what I was getting at. How do rationalists repent? I have karma to burn...

Replies from: Miller
comment by Miller · 2011-01-20T02:53:55.042Z · LW(p) · GW(p)

How do rationalists repent? I have karma to burn...

I'd say they repent by updating their beliefs, and cleaning up the debris left by their old ones. This is much the same for rationalists and non-rationalists alike, really. Kind of like apologizing for stealing the candy from the drugstore and promising to pay it back.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T03:08:26.987Z · LW(p) · GW(p)

Hm, that's not a particularly natural fit here... the only beliefs I'd be updating are beliefs about what styles of communication should be normative. Still, it's my style to treat ontological disagreement as a big deal, so I'll update accordingly.

comment by jacob_cannell · 2011-01-26T00:43:52.269Z · LW(p) · GW(p)

How so?

The SA posits an external universe above ours which, although operating according to physics likely identical or very similar to ours, is not at all constrained by our physics. Thus the creator in the SA is quite possibly supernaturally omniscient and omnipotent.

Also, whatever utility function/morality we have in our universe, the SA indicates and requires that it was purposefully created to some end in the parent universe and may eventually be evaluated according to some external utility function.

EDIT: Removed bit about 'new theism' - it has the wrong connotations. This set of conjectures is very similar to, but distinct from, traditional theism. Perhaps it needs a new word, but it is a valid domain of knowledge.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T01:04:28.367Z · LW(p) · GW(p)

The simulators, should they exist, do not appear to reward belief or worship. We have no reason to regard them as moral authorities, and they do not intervene, with or without appeals. Plus, while the simulators can presumably access all of the data in the simulation, that doesn't mean that they would be able to keep track of it, or predict the results should they interfere in a chaotic system, so there's no reason to suppose that they're functionally omniscient. Unless the superordinate reality is different in some very fundamental ways, it's impossible to predict what happens in chaotic systems in our universe in advance with precision without actually running the simulation.

It does not in any way follow from the simulation argument that our morality was purposefully created by the simulators; by all appearances the simulation, should it happen to be one, is untampered with, and our utility functions evolved.

You can build up a religious edifice around simulationism, but like supernatural theism, it requires the acceptance of completely unevidenced assertions.

Replies from: JoshuaZ, jacob_cannell
comment by JoshuaZ · 2011-01-26T01:32:13.866Z · LW(p) · GW(p)

If one can pause a simulation and run it backwards or make multiple copies of a simulation, then from our perspective, for many purposes, the simulators will be omniscient. There might still be some limits in that regard (for example, if they are bound to only do computable operations, then they will be limited in what math they can do).

Also, if a simulator wants a specific outcome, and there's some random aspect in the simulation (such as from quantum mechanical effects), they could run the simulation multiple times until they get a result they want.
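
As a toy sketch of that rewind-and-rerun idea (the 1% "desired" event and the seed loop are made up purely for illustration):

    import random

    def run_universe(seed):
        # Stand-in for one full run of a simulation with a random element;
        # returns True when the rare desired outcome occurs (made-up 1% event).
        return random.Random(seed).random() < 0.01

    seed = 0
    while not run_universe(seed):  # rewind and rerun with fresh randomness
        seed += 1
    print("kept the branch produced by seed", seed)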

Unless the superordinate reality is different in some very fundamental ways, it's impossible to predict what happens in chaotic systems in our universe in advance with precision without actually running the simulation.

This isn't quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn't say much about how hard things are to compute if you have perfect information.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T01:42:49.332Z · LW(p) · GW(p)

But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, that's many realities that actually occur which don't achieve the results they want for every one that does. Likewise, rewinding the simulation may allow them to achieve the results they want, but it doesn't prevent the events they don't want from happening to us. Besides, there's no evidence that our universe is being guided according to any agent's utility function, and if it is, it's certainly not much like ours.

This isn't quite true. As I understand it, there are very few results asserting minimal computational complexity of chaotic systems. The primary problem with chaotic systems is that predicting their behavior becomes very difficult if one has anything less than perfect accuracy, because very similar initial conditions can diverge in long-term behavior. That doesn't say much about how hard things are to compute if you have perfect information.

Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information. Projecting the simulation with perfect accuracy is equivalent to running the simulation.

Replies from: jacob_cannell, jacob_cannell, JoshuaZ
comment by jacob_cannell · 2011-01-26T04:28:28.038Z · LW(p) · GW(p)

Besides, there's no evidence that our universe is being guided according to any agent's utility function, and if it is, it's certainly not much like ours.

The SA mechanism places many constraints on the creator. They exist in a universe like ours, they are similar to our future descendants, they created us for a reason, and their utility function, morality, what have you, all evolved from a universe like ours.

comment by jacob_cannell · 2011-01-26T04:25:51.954Z · LW(p) · GW(p)

Monte Carlo simulation.

You don't run one simulation, you run many. There is no one single correct answer that the simulation is attempting to compute. It is a landscape, a multiverse, from which you sample.
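
A toy sketch of what I mean (everything here, the random-walk "universe" and the statistic, is invented for illustration):

    import random

    def simulate_once(seed):
        # Stand-in for one full run of a stochastic simulated universe:
        # a 500-step random walk whose endpoint is the outcome of interest.
        rng = random.Random(seed)
        position = 0
        for _ in range(500):
            position += rng.choice((-1, 1))
        return position

    # Sample the landscape of outcomes rather than computing one
    # "correct" trajectory, then read off whatever statistic you want.
    outcomes = [simulate_once(seed) for seed in range(2000)]
    fraction = sum(abs(x) > 40 for x in outcomes) / len(outcomes)
    print("P(|final position| > 40) is roughly", fraction)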

comment by JoshuaZ · 2011-01-26T02:20:57.815Z · LW(p) · GW(p)

But running the simulation is running our reality. If they run multiple simulations with slight alterations to get the outcome they want, that's many realities that actually occur which don't achieve the results they want for every one that does.

Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping, there's only one universe: the one where the simulators got what they wanted.

Besides, there's no evidence that our universe is being guided according to any agent's utility function, and if it is, it's certainly not much like ours.

Yes, you've made that point before. I don't disagree with it. I'm not sure why you are bringing it up again.

Chaotic systems are hard to project because small differences between the information in the system and the information in the model propagate to create large differences between the system and the model over time. To make the model perfectly accurate, it must follow all the same rules and contain all the same information.

It must contain the same information. It doesn't need to contain the same rules.

Projecting the simulation with perfect accuracy is equivalent to running the simulation.

This isn't true. For example, the doubling map is chaotic. Despite that, many points can have their orbits calculated without such work. For example, if the value of the starting point is rational, we can always give an exact value for any number of iterations, with far less computational effort than simply iterating the function. There are some complicating factors to this sort of analysis; in particular, if the universe is essentially discrete, then what we mean when we talk about chaos becomes subtle, and if the universe isn't discrete, then what we mean when we discuss computational complexity becomes subtle (we need to use Blum-Shub-Smale machines or something similar rather than Turing machines). But the upshot is that chaotic behavior is not equivalent to being computationally complex.
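
A minimal sketch of that point in Python (the function names are mine, for illustration): for a rational starting point p/q, the n-th iterate of the doubling map x -> 2x (mod 1) is ((2^n * p) mod q) / q, and modular exponentiation computes that in O(log n) multiplications rather than n map applications.

    from fractions import Fraction

    def doubling_exact(p, q, n):
        # n-th iterate of x -> 2x (mod 1) from p/q in [0, 1),
        # via fast modular exponentiation: O(log n) multiplications.
        return Fraction((pow(2, n, q) * p) % q, q)

    def doubling_naive(p, q, n):
        # Same value by literally iterating the map n times.
        x = Fraction(p, q)
        for _ in range(n):
            x = (2 * x) % 1
        return x

    assert doubling_exact(3, 7, 1000) == doubling_naive(3, 7, 1000)
    print(doubling_exact(3, 7, 10**9))  # instant, despite a billion "steps"

So the map is chaotic, yet these orbits are cheap to compute exactly.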

There have been some papers trying to map out connections between the two (and I don't know that literature at all), and superficially there are some similarities, but if someone could show deep, broad connections of the sort you seem to think are already known, that would be the sort of thing that could lead to a Turing Award or a Fields Medal.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T02:46:05.710Z · LW(p) · GW(p)

Sure, but think in terms of observers. From the perspective of the universe that the simulators end up keeping there's only one universe, the one where the simulators got what they wanted.

But at any given time you may be in a branch that's going to be deleted or rewound because it doesn't lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don't want. So not only do we have no reason to suppose it's happening, it wouldn't be particularly useful to us to suppose that the branch the simulators want is better for us than the ones they don't.

I concede that my understanding of the requirements to project a simulation of our universe may have been mistaken, but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.

Replies from: jacob_cannell, JoshuaZ
comment by jacob_cannell · 2011-01-26T04:36:55.875Z · LW(p) · GW(p)

Which are the 'extraneous additions'?

Omniscience and omnipotence have already been discussed at length - the SA does not imply perfection in either category on the part of the creator, but this is a meaningless distinction. For all intents and purposes the creator would have the potential for absolute control over the simulation. It is of course much more of an open question whether the creator would ever intervene in any fashion.

(I discussed that at length elsewhere, but basically I think future posthumans would be less likely to intervene in our history, while aliens would be more likely.)

Also, my points about the connectedness between morality and utility functions of creator and creation still stand. The SA requires that the creator made the simulation for a purpose in its universe, and the utility function or morality of the creator evolved from something like our descendants.

comment by JoshuaZ · 2011-01-26T04:34:44.512Z · LW(p) · GW(p)

But at any given time you may be in a branch that's going to be deleted or rewound because it doesn't lead to the results that the simulators want. The vast bulk of our experience would be in lines that the simulators don't want.

Not necessarily. It would depend on how narrow they wanted things and how often they intervened in this fashion. If such interventions are not very common, then the majority of experience will be in universes which are very close to those desired by the simulators.

but the conclusions jacob cannell drew are still extraneous additions to the simulation argument, not necessary consequences of it.

No disagreement there.

comment by jacob_cannell · 2011-01-26T04:20:31.198Z · LW(p) · GW(p)

it's impossible to predict what happens in chaotic systems in our universe in advance with precision without actually running the simulation

Yes, this is precisely the primary utility for the creator.

But humans do this too, for intelligence is all about simulation. We created computers to further amplify our simulation/intelligence.

I agree mostly with what you're saying, but let me clarify. I am fully aware of the practical limitations; by 'functionally omniscient', I meant they can analyze and observe any aspect of the simulation from a variety of perspectives, using senses far beyond what we can imagine, and the flow of time itself need not be linear or continuous. This doesn't mean they are concerned with every little detail all of the time, but I find it difficult to believe that anything important, from their perspective, would be missed.

And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA chains that morality to the creator's morality in several fashions. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants' morality, we are predicting the creator's morality. You know: "As man is, god was; as god is, man shall become."

I'm not sure about your 'religious edifice', and what assertions are unevidenced.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-26T04:37:12.856Z · LW(p) · GW(p)

And yes, of course our morality appears to have evolved through natural genetic/memetic evolution, but the SA chains that morality to the creator's morality in several fashions. First, as we are close to the historical ancestors of the creator, our morality is also their historical morality. And second, to the extent we can predict and model the future evolution of our own descendants' morality, we are predicting the creator's morality.

This only makes sense in the very narrow version of the simulation hypothesis under which the simulators are in some way descended from humans or products of human intervention. That's not necessarily the case.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T04:44:04.239Z · LW(p) · GW(p)

That's true, but I'm not sure the "very narrow" qualifier is accurate. The creator candidates are: future humans, future aliens, ancient aliens. I think utility functions for any simulator civilizations will be structurally similar, as they stem from universal physics, but perhaps that of future humans will be the most connected to ours.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-26T04:57:20.776Z · LW(p) · GW(p)

No. You are assuming that the simulators are evolved entities. They could also be AIs, for example. Moreover, there's no very good reason to assume that the moral systems would be similar. For example, consider if we had the ability to make very rough simulations and things about as intelligent as insects evolved in the simulation. Would we care? No. Nor would our moral sense in any way match theirs. So now suppose one has, for example, something that is vastly smarter than humans and lives in some strange 5-dimensional space. It is wondering whether star formation can occur in 3 dimensions and, if so, how it behaves. The fact that there's something resembling fairly stupid life that has shown up on some parts of its system isn't going to matter to it, unless some of it does something that interferes with what the entity is trying to learn (say, the humans decide to start making Dyson spheres or engage in star lifting).

Incidentally, even this one could pattern-match to some forms of theism (For God's ways are not our ways...), which leads to a more general problem with this discussion. Apologetics and theology of most major religions have managed to say so many contradictory things. (In this case the dueling claims are that we can't comprehend God's mysterious, ineffable plans, and that God has a moral system that matches ours.) So it isn't hard to find something that pattern-matches with any given claim.

The primary strong reason to not care about simulationism has nothing to do with whether or not it has a resemblance to theism, but for the simple reason that it doesn't predict anything useful. There's no evidence of intervention, and we have no idea what probabilities to assign to different types of simulators. So the hypothesis can't pay rent.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T05:29:01.600Z · LW(p) · GW(p)

No. You are assuming that the simulators are evolved entities. They could also be AIs for example

AIs don't just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).

I would be surprised if future posthumans, or equivalent Singularity-tech aliens, would have moral systems just like ours.

On the other hand, moral or goal systems are not random, and are subject to evolutionary pressure just as much as anything else. So as we understand our goal systems or morality and develop more of a science of it, we can understand it in objective terms, how it is likely to evolve, and learn the shape of likely future goal systems of superintelligences in this universe.

Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects. Yes, the number of researchers is small and they are currently just doing very rough, weak simulation using their biological brains, but nonetheless. Also, our current time period does not appear to be a random sample in terms of historical importance. In fact, we happen to live in a moment which is probably of extremely high future historical importance. This is loosely predicted by the SA.

We do have a methodology of assigning probabilities to different types of simulators. First you start with a model of our universe and fill in the important gaps concerning the unobservables - both in the present in terms of potential alien civilizations, and in the future in terms of the shape of our future. Of this set of Singularity-level civilizations, we can expect them to run simulations of our current slice of space-time in proportion to its utility vs. the expected utility of simulating other slices of space-time.

They could also run and are likely to run simulations of space-time pockets in other universes unlike ours, fictional universes, etc. However a general rule applies - the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.

The question of evidence for intervention depends on the quality of the evidence itself and the prior. The SA helps us to understand the prior.

Before the SA there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention.)

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-26T20:52:15.023Z · LW(p) · GW(p)

AIs don't just magically pop out of nothing. Like anything else under the sun, they systemically evolve from existing patterns. They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).

Again, you are assuming that the entities arise from human intervention. The Simulation Hypothesis does not require that.

Your insect example is not quite accurate. There are people right now who are simulating the evolution of early insects.

How is it not accurate? I fail to see how the presence of such research makes my point invalid.

However a general rule applies - the more dissimilar the simulated universe is to the parent universe, the vaster the space of configurations becomes and the less utility the simulation has. So we can expect that the parent universe is roughly similar to ours.

This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility. For example, universes that work off of cellular automata would be really interesting despite the fact that our universe doesn't seem to operate in that fashion.

Before the SA there was no mechanism for a creator, and so the prior for intervention was zero regardless of the evidence. That is no longer the case. (Nor is it yet a case for intervention)

This confuses me. Generally, the problem with assigning a prior of zero to a claim is just what you've said here: that it is stuck at zero no matter how much you update with evidence. This is bad. But you then seem to be asserting that an update did occur due to the simulation hypothesis. This leaves me confused.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T20:07:09.765Z · LW(p) · GW(p)

No. You are assuming that the simulators are evolved entities. They could also be AIs, for example.

They will evolve from our existing technosphere/noosphere (the realm of competing technologies and ideas).

Again, you are assuming that the [simulator] entities arise from human intervention. The Simulation Hypothesis does not require that.

Sure, but the SH requires some connection between the simulated universe and the simulator universe.

If you think of the entire ensemble of possible universes as a landscape, it is true that any point-universe in that landscape can be simulated by any other (of great enough complexity). However, that doesn't mean the probability distribution is flat across the landscape.

The farther away the simulated universe is from the parent universe in this landscape, the less correlated, relevant, and useful its simulation is to the parent universe. In addition, the farther away you go in this landscape from the parent universe, the set of possible universes one could simulate expands ... at least exponentially.

The consequence of all this is that the probability distribution across potential universes that could be simulating us is tightly clustered around universes similar to ours - different sample points in the multiverse described by our same physics.

This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.

Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.

This has been mathematically formalized in AI theory and AIXI:

Intelligence is simulation-driven search through the landscape of potential realizable futures for the path that maximizes future utility.
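
AIXI itself is uncomputable, but a finite expectimax search gives the flavor of that slogan. A hedged sketch, not AIXI: model, actions, and utility are assumed caller-supplied functions, with model(state, action) returning (probability, next_state) pairs.

    def expectimax(state, depth, model, actions, utility):
        # Simulation-driven search: score each action by simulating the
        # futures it can lead to, and pick the path of maximal expected utility.
        if depth == 0:
            return utility(state), None
        best_value, best_action = float("-inf"), None
        for action in actions(state):  # assumes at least one action exists
            value = sum(p * expectimax(s, depth - 1, model, actions, utility)[0]
                        for p, s in model(state, action))
            if value > best_value:
                best_value, best_action = value, action
        return best_value, best_action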

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-28T20:49:57.028Z · LW(p) · GW(p)

This does not follow. Similarity of the simulation to the ground universe is not necessarily connected in any useful way to utility.

Of course it is. We simulate systems to predict their future states and make the most profitable decisions. Simulation is integral to intelligence.

No. See my earlier example with cellular automata. Our universe isn't based on cellular automata, but we'd still want to run simulations of large universes with such a base just because they are interesting. The fact that our universe has very little similarity to those universes doesn't reduce my utility in running such simulations.

That said, I agree that there should be a rough correlation where we'd expect universes to be more likely to simulate universes similar to them. I don't think this necessarily has anything to do with utility though, more that entities are more likely to monkey around with the laws of their own universes and see what happens. Due to something like an anchoring effect, entities should be more likely to imagine universes that are in some way closer to their own universe compared to the massive landscape of possible universes.

But, that similarity could be so weak as to have little or no connection to whether the simulators care about the simulated universe.

comment by JoshuaZ · 2011-01-20T03:04:06.484Z · LW(p) · GW(p)

How low a percentage does one need to assign a claim in order to declare it to be closed? I'd assign around a 5% chance that there exists something approximating God (using this liberally to include the large variety of entities which fall under that label). I suspect that my probability estimate is higher than many people on LW. (Tangent: I recently had a discussion with an Orthodox Jewish friend about issues related to Bayesianism, and he was surprised that I assigned the idea that high a probability. In his view, if he didn't have faith and had to assign a probability he said it might be orders of magnitude lower.) So how low a probability do we need to estimate before we consider something closed?

Moreover, how much attention should we pay to apologetics in general? We know that theology and apologetics are areas that have spent thousands of years of memetic evolution to be as dangerous as possible. They take almost every little opportunity to exploit the flaws in human cognition. Apologetic arguments aren't (generally) basilisk level, but they can take a large amount of cognitive resources to understand where they are wrong. After 10 or 15 of them, how much effort do we need to spend seeing if #16 (variation of first cause argument number 8) is worth spending resources investigating? Also, given that there's a vibrant subset of the internet that is dedicated to handling just this question and related issues, why should LW be the forum for handling the issue?

There's a related issue: humans are overactive agent recognizers. We love to see patterns where none exist and see intelligence in random action. Theism fits with deep-seated human intuitions. In contrast, MWI, simulationism and full-scale Tegmark all clash strongly with human intuition. They may seem weird, but the weirdness may not be a product of evidential issues but rather that they clash with human intuitions. So putting them in the same category as religion may be misleading.

Incidentally, I'm curious, would you similarly object if LW said explicitly that homeopathy was a closed subject? What about evolution? Star formation? If these are different, why are they different?

Replies from: b1shop, lionhearted, Will_Newsome
comment by b1shop · 2011-01-20T19:19:30.711Z · LW(p) · GW(p)

Perhaps a question becomes a closed issue not when the probability of the belief reaches a certain point, but when our estimate of the probability of the belief changing drops below a certain threshold. A fair coin is heads 50% of the time, and my probability won't change. That's a closed question. I may be fairly confident about the modern theory of star formation, but I wouldn't be too surprised if a new theory added some new details. So it's not a closed subject.
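
One toy way to cash out "probability of the belief changing" numerically (my framing, with made-up numbers): compute the expected absolute movement of your credence after the next observation. By conservation of expected evidence the expected posterior equals the prior, but the expected movement can still distinguish closed questions from open ones:

```python
# Toy quantification of "how likely is this belief to change?": the
# expected absolute movement of a credence after one observation E.
# All numbers below are invented for illustration.
def expected_shift(prior, p_e_given_h, p_e_given_not_h):
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    post_if_e = prior * p_e_given_h / p_e
    post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)
    return (p_e * abs(post_if_e - prior)
            + (1 - p_e) * abs(post_if_not_e - prior))

# Known-fair coin: the next flip is uninformative about P(heads) = 0.5.
print(expected_shift(0.5, 0.5, 0.5))   # 0.0 -- closed

# A well-confirmed theory whose next observation is still diagnostic.
print(expected_shift(0.9, 0.8, 0.3))   # 0.09 -- not closed
```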

I can imagine no evidence that would lead me to believe in something nonfalsifiable. Theism is a closed subject.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-20T19:58:57.390Z · LW(p) · GW(p)

You say that you can't imagine evidence that would cause you to believe in something nonfalsifiable. But then you seem to apply that to theism in general. I'm curious. If, say, almost all the evangelical Christians in the world disappeared along with all the world's children, would you not assign a substantial probability to the Rapture having just taken place?

Replies from: b1shop
comment by b1shop · 2011-01-20T20:26:19.924Z · LW(p) · GW(p)

Fair point. Some religions make falsifiable claims.

But my point still stands. I assign a low probability to the rapture happening -- even lower than there being a xian God, so I don't put much weight into the idea my religious beliefs will change. The people who take the rapture seriously do so because they also believe in nonfalsifiable things.

comment by lionhearted (Sebastian Marshall) (lionhearted) · 2011-01-20T13:10:11.002Z · LW(p) · GW(p)

This comment is brilliant. In particular, I'd really really love to see two top level posts covering:

Moreover, how much attention should we pay to apologetics in general? We know that theology and apologetics are areas that have spent thousands of years of memetic evolution to be as dangerous as possible. They take almost every little opportunity to exploit the flaws in human cognition. Apologetic arguments aren't (generally) basilisk level, but they can take a large amount of cognitive resources to understand where they are wrong.

...and...

There's a related issue: humans are overactive agent recognizers. We love to see patterns where none exist and see intelligence in random action. Theism fits with deep-seated human intuitions. In contrast, MWI, simulationism and full-scale Tegmark all clash strongly with human intuition. They may seem weird, but the weirdness may not be a product of evidential issues but rather that they clash with human intuitions. So putting them in the same category as religion may be misleading.

Both really fascinating insights, I'd love to read more. Especially the first one about memetic evolution to be dangerous - I wonder what various secular social and societal memes fit in similarly.

comment by Will_Newsome · 2011-01-20T18:24:58.549Z · LW(p) · GW(p)

I'd assign around a 5% chance that there exists something approximating God (using this liberally to include the large variety of entities which fall under that label).

Interesting. I'd assign high probability to there being a Creator computing roughly 'this' part of spacetime, a high probability to it being omniscient and omnipotent, a fair probability to it being omnibenevolent, and a low probability to it being 'personal' in the Christian sense (maybe 5%, but this is liable to change a ton when I think about it more and get a better sense of what a personal God is).

I also think it's unlikely that Christ was the memetic son of God (genetic son of Joseph), though not terribly unlikely. Not less than .1%, probably more. I think it is likely that Christ died for our sins given my interpretation of those words, which may be entirely unlike what Christians mean. (I mean something like Christ set it up such that we're more likely to have a positive singularity, though this is very disputable, and I'm mostly following that line of reasoning because meta-contrarianism is fun.) I think it's unlikely that Christ was able to cast resurrection on himself, but I agree with Yvain that it's odd that the resurrection myth spread so far and so fast. User:Kevin tells me that Christianity was largely a cannabis cult, and weed in large doses is a hallucinogen. This allegedly explains most of the perceived miracles in the Bible. For example, turning water into wine is no problem if you have a tincture of cannabis on hand.

Moreover, how much attention should we pay to apologetics in general?

Not much. We can come up with better apologetics than anyone else could, I think, if we put our minds to it. My theodicy tends to be more persuasive than any I find in apologetics. Which is funny, since it's largely inspired by Eliezer's fun theory plus a few insights from decision theory and cosmology.

So putting them in the same category as religion may be misleading.

I didn't mean to do so. Apparently the word 'theism' has lots of weird connotations I didn't intend to convey. (That said, I see value in many religions. Not all of it is the progeny of bad epistemology.)

Incidentally, I'm curious, would you similarly object if LW said explicitly that homeopathy was a closed subject? What about evolution? Star formation? If these are different, why are they different?

No, I would not object. Those have all made predictions and been tested. Theism/atheism is a Bayesian question, not a scientific one. Unfortunately it might be a (subjective?) Bayesian decision theory question, in which case it will never be fit for Less Wrong.

Replies from: ata, Vaniver, TheOtherDave, jwhendy, jacob_cannell, Dreaded_Anomaly
comment by ata · 2011-01-20T18:42:14.021Z · LW(p) · GW(p)

I mean something like Christ set it up such that we're more likely to have a positive singularity, though this is very disputable, and I'm mostly following that line of reasoning because meta-contrarianism is fun.

Maybe you should stop doing that, if it's leading you to say things like "I mean something like Christ set it up such that we're more likely to have a positive singularity".

Unfortunately it might be a (subjective?) Bayesian decision theory question, in which case it will never be fit for Less Wrong.

Assuming that the other people you encounter inhabit the same reality as you — and I suspect you'll be able to find something about that to object to, but you know what I mean :P — what is subjective about it? The fact that from a decision-theoretic perspective we may be in many universes at once doesn't suggest that the distribution of your measure depends systematically on your beliefs about it (which is the only thing I can imagine this use of "subjective" meaning, but correct me if I'm mistaken about that).

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T19:03:26.792Z · LW(p) · GW(p)

Maybe you should stop doing that, if it's leading you to say things like "I mean something like Christ set it up such that we're more likely to have a positive singularity".

Why?

Assuming that the other people you encounter inhabit the same reality as you — and I suspect you'll be able to find something about that to object to, but you know what I mean :P — what is subjective about it?

Existence is probably tied up with causal significance, and causal significance is tied up with individuals' local utility functions along with this more global probability thing. But around singularities where lots of utility is up for grabs it might be that the local utility differences override the global similarities of how existence works. I haven't thought about this carefully. Hence the question mark.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-24T14:44:29.058Z · LW(p) · GW(p)

Request for downvote explanation.

Replies from: Desrtopa, jacob_cannell, Kevin
comment by Desrtopa · 2011-01-26T01:49:15.572Z · LW(p) · GW(p)

I did not downvote, not having read the comment previously, but "existence is probably tied up with causal significance" sounds extremely dubious and in need of justification.

comment by jacob_cannell · 2011-01-26T01:46:25.269Z · LW(p) · GW(p)

I upvoted; even though I didn't fully grok your last paragraph, I sensed interesting meaning embedded in it. Care to elaborate?

comment by Kevin · 2011-01-24T15:31:10.103Z · LW(p) · GW(p)

They didn't understand what you meant and mapped it as something else that was wrong. Also possible political downvote.

comment by Vaniver · 2011-01-20T18:34:14.152Z · LW(p) · GW(p)

My theodicy tends to be more persuasive than any I find in apologetics.

This is almost definitely the result of inferential distances, not any actual differences in logical power.

comment by TheOtherDave · 2011-01-20T18:35:57.426Z · LW(p) · GW(p)

I also think it's unlikely that Christ was the memetic son of God (genetic son of Joseph), though not terribly unlikely. Not less than .1%, probably more.

I'm curious what you would say to someone whose estimate of that probability was, say, .01%, or 25%. Do you expect that you could both compare evidence and come to a common estimate, given enough time?

comment by jwhendy · 2012-03-05T04:34:47.957Z · LW(p) · GW(p)

...a high probability to it being omniscient and omnipotent, a fair probability to it being omnibenevolent...

I realize this is a necromancer post, but I'm interested in your definitions of the above. How do you square up with some of the questions regarding:

  • on what mindware something non-physical would store all the information that is
  • how omniscience settles with free-will (if you believe we have free will)
  • how omniscience interacts with the idea that this being could intervene (doing something different than it knows it's going to do)

I won't go on to more. I'm sure you're familiar with things like this; I was just surprised to see that you listed these terms outright, and wanted to inquire about details.

Replies from: Vladimir_Nesov, Will_Newsome, Will_Newsome
comment by Vladimir_Nesov · 2012-03-05T04:48:35.573Z · LW(p) · GW(p)

Knowing your decisions doesn't prevent you from being able to make them, for proper consequentialist reasons and not out of an obligation to preserve consistency. It's the responsibility of knowledge about your decisions to be correct, not of your decisions to anticipate that knowledge. The physical world "already" "knows" everyone's decisions, that doesn't break down anyone's ability to act.

Replies from: jwhendy
comment by jwhendy · 2012-03-05T13:48:56.090Z · LW(p) · GW(p)

True, but I more meant the idea of theistic intervention, how that works with intercession and so on. The world "knows" everyone's decisions... but no one intercedes to the world expecting it to change something about the future. But theists do.

I suppose one can simply take the view that god knows both what will happen, what people will intercede for, and that he will or will not answer those prayers. Thus, most theists think they are calling on god to change something, when in reality he "already" "knew" they would ask for it and already knew he would do it.

Is it any clearer what I was inquiring about?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-05T15:05:44.135Z · LW(p) · GW(p)

Reality can't be changed, but it can be determined, in part by many preceding decisions. The changes happen only to the less than perfectly informed expectations.

(With these decision-philosophical points cleared out, it's still unclear what you're inquiring about. Logical impossibility is a bad argument against theism, as it's possible to (conceptually) construct a world that includes any artifacts or sequence of events whatsoever, it just so happens that our particular world is not like that.)

Replies from: jwhendy
comment by jwhendy · 2012-03-05T17:43:55.371Z · LW(p) · GW(p)

Logical impossibility is a bad argument against theism, as it's possible to...

Good point, though my jury is still out on whether it really is possible to parse what it would mean to be omniscient, for example. And even if we can suggest things like the universe "knowing everything," that's typically not what theists are implying when they speak of an omniscient being.

...it's still unclear what you're inquiring about.

I think I'll just let it go. Even the fact that we're both on the same page with respect to determinism pretty much ends the need to have a discussion. Conundrums like how an omniscient being can know what it will do and also be said to be responsive (change what it was going to do) when asked via prayer only arise if determinism is not on the table, and about every apologetics piece I've read suggests that it's not on the table.

This thread has been the first time I think I can see how intercession and omniscience could jive in a deterministic sense. A being could know that it will answer a prayer, and that a pray-er would pray for such an answer.

From the theists I know/interact with, I think they would find this like going through the motions though. It would remove the "magic" from things for them. I could be wrong.

comment by Will_Newsome · 2012-03-05T16:28:34.819Z · LW(p) · GW(p)

On another note, I buy the typical compatibilist ideas about free will, but there's also this idea I was kicking around that I don't think is really very interesting but might be for some reason (pulled from a comment I made on Facebook):

"I don't know if it ultimately makes sense, but I sometimes think about the possibility of 'super' free will beyond compatibilist free willl, where you have a Turing oracle that humans can access but whose outputs they can't algorithmicly verify. The only way humans can perform hypercomputation is by having faith in the oracle. Since a Turing oracle is construbtable from Chaitin's constant and is thus the only truly random source of information in the universe, this would (at least on a pattern-match-y surface level) seem to supply some of the indeterminism sought by libertarians, while also letting humans transcend deterministic, i.e. computable, constraints in a way that looks like having more agency than would otherwise be possible. So in a universe without super free will no one would be able to perform hypercomputation 'cuz they wouldn't have access to an oracle. But much of this speculation comes from trying to rationalize why theologians would say 'if there were no God then there wouldn't be any free will'."

Implicit in this model is that universes where you can't do hypercomputation are significantly less significant than universes where you can, and so only with hypercomputation can you truly transcend the mundanity of a deterministic universe. But I don't think such a universe actually captures libertarians' intuitions about what is necessary for free will, so I doubt it's a useful model.

Replies from: jwhendy
comment by jwhendy · 2012-03-05T18:48:21.972Z · LW(p) · GW(p)

I'll have to check into compatibilism more. It had never occurred to me that determinism was compatible with omniscience/intercession until my commenting with Vladimir_Nesov. In seeing wiki's definition, it sounded more reasonable than I remembered, so perhaps I never really understood what compatibilism was suggesting.

I'm not positive I get your explanations (due to simple ignorance), but it sounds slightly like what Adam Lee presented here concerning a prediction machine; namely that such a thing could be built, but that actually knowing the prediction would be impossible, for it would set off something of an infinite forward calculation of factoring in the prediction, that the human knows the prediction itself, that the prediction machine knows that the human knows the prediction... and then trying to figure out what the new action will actually be.
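
A minimal sketch of that regress, in the special case where the machine must announce its prediction to a contrarian human (my construction, not Adam Lee's): no announcement can be a fixed point.

```python
# Toy version of the regress: the machine must announce its prediction,
# and the human then does the opposite. No announcement is consistent.
def human_action(announced_prediction):
    return 'stay' if announced_prediction == 'leave' else 'leave'

for prediction in ('stay', 'leave'):
    actual = human_action(prediction)
    verdict = 'consistent' if actual == prediction else 'refuted'
    print(f"announce {prediction!r} -> human does {actual!r} ({verdict})")
# A predictor forced to reveal its output to this human has no fixed
# point, hence the infinite forward calculation described above.
```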

comment by Will_Newsome · 2012-03-05T07:23:20.188Z · LW(p) · GW(p)

Note that I was pretty new to theology a year ago when I made this post so my thoughts are different and more subtle now.

To all three of your questions I think I hold the same views Aquinas would, even if I don't know quite what those views are.

on what mindware something non-physical would store all the information that is

How does Platonic mathstructure "store information" about the details of Platonic mathstructure? I think the question is the result of a confused metaphysic, but we don't yet have an alternative metaphysic to be confident in. Nonetheless I think one will be found via decision theory.

how omniscience settles with free-will (if you believe we have free will)

My answer is the same as Nesov's, and I think Aquinas answers the question beautifully: "Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature."

how omniscience interacts with the idea that this being could intervene (doing something different than it knows it's going to do)

I think my answer is the typical Thomistic answer, i.e. that God is actuality without potentiality, and that God cannot do something different than He knows He will do, as that would be logically impossible, and God cannot do what is logically impossible.

Replies from: None, Vladimir_Nesov
comment by [deleted] · 2012-03-05T16:26:22.303Z · LW(p) · GW(p)

"Free-will is the cause of its own movement, because by his free-will man moves himself to act. But it does not of necessity belong to liberty that what is free should be the first cause of itself, as neither for one thing to be cause of another need it be the first cause. God, therefore, is the first cause, Who moves causes both natural and voluntary. And just as by moving natural causes He does not prevent their acts being natural, so by moving voluntary causes He does not deprive their actions of being voluntary: but rather is He the cause of this very thing in them; for He operates in each thing according to its own nature."

I don't think this is satisfying. Suppose there are two ways in which something may be a cause, either by being an unmoved mover or a moved mover ('moved' here is to be understood in the broadest necessary sense). If God is the first cause of our action, then we are not unmoved movers with reference to our action. If we nevertheless have free will, just because we are the causes of our actions, then we have free will in virtue of being movers but not in virtue of being unmoved movers.

But when we act to, say, throw a stone, we are the cause of our arm's movement and our arm (a moved mover) is the cause of the stone's movement. Likewise the stone, another moved mover, is the cause of Tom's being injured. Now God is the unmoved mover here, and everything else in the chain is a moved mover. If being a mover is all it takes to have free will, then I have it, my arm has it, the stone has it, etc. But surely, this is not what we (assuming neither of us is Spinoza) mean by free will.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-05T16:43:33.130Z · LW(p) · GW(p)

If being a mover is all it takes to have free will

That wasn't claimed; the necessary preconditions of free will weren't in the intended scope of the passage I quoted. If you want Aquinas' broader account of free will, see this. It's pretty commonsensical philosophy.

Replies from: None
comment by [deleted] · 2012-03-05T17:03:20.959Z · LW(p) · GW(p)

That wasn't claimed; the necessary preconditions of free will weren't in the intended scope of the passage I quoted.

Granted, but the implication of your quotation was that it would do something to settle the question of how to reconcile God's omniscience or first-cause-hood with the idea of free will. But it doesn't do anything to address the question (you quoted the right bit of Aquinas, so I mean that he does nothing to answer the question). In order to address the question, Aquinas would have to show why free will is compatible with a cause of our action prior to our own reasoning. All he manages to argue is that our reason's being a cause of our action is compatible with there being a prior cause of same. And this at a level of generality which would cover (as he says) natural and purportedly voluntary causes. But this isn't in doubt: in fact, this is the premise of his opponent.

The opponent is arguing that while we are the cause of our actions, we are not the free cause, because we are not the first cause. So the opponent is setting up a relation between 'free' and 'first' which Aquinas does nothing to address beyond simply denying (without argument) that the relation thus construed is a necessary one. In short, this just isn't an answer to the objection.

Replies from: Will_Newsome, Vladimir_Nesov
comment by Will_Newsome · 2012-03-05T18:32:59.943Z · LW(p) · GW(p)

So there are two levels of movement going on here. God moves the will to self-move, but does not move the rock to self-move, He only moves the rock. The objector claims that being moved precludes self-moving, but Aquinas claims that this is a confusion, because just as moving doesn't preclude being fluffy, moving doesn't preclude self-moving. This seems more like a clarification rather than a simple restatement of opposition: Aquinas is saying roughly 'you seem to see a contradiction here, but when we lay the metaphysics out clearly there's no a priori reason to see self-moving-ness as different from fluffiness'. It seems plausible that the objector hadn't realized that being moved to self-moving-ness was metaphysically possible, and thus Aquinas could feel that the objector would be satisfied with his counter. But if the objector had already seen the distinction of levels and still objected, then in that case it seems true that Aquinas' response doesn't answer the objection. But in that case it seems that the objector is denying common sense and basic physical intuition rather than simply being confused about abstract metaphysics. I may be wrong about that though, I feel like I missed something.

Replies from: None
comment by [deleted] · 2012-03-05T19:12:29.698Z · LW(p) · GW(p)

The objector claims that being moved precludes self-moving, but Aquinas claims that this is a confusion, because just as moving doesn't preclude being fluffy, moving doesn't preclude self-moving.

The objector is making what seems to me to be a common sense point: if something moves you, then in that respect you don't move yourself. I grant that there is nothing incompatible about being fluffy and being moved by some external power, but there's no obvious (nor argued for, on Aquinas' part) analogy between this kind of case and the case of the self mover. And there's an at least apparent contradiction in the idea of a self-mover which is moved by something else in the very sense that it moves itself.

And we're not concerned with the property of being a self mover, but of whether the idea that a given action is freely caused by me is incompatible with the idea that the very same action is (indirectly) caused by some prior thing. It does us no good to say that we have the property of having free will if every action of ours is caused in the way that a thrown stone causes injury.

Really, Aquinas' reply seems to turn on the observation (correct, I think) that reasoning to an action means undertaking it freely. This is the point that needs some elaboration.

Replies from: Vladimir_Nesov, Will_Newsome
comment by Vladimir_Nesov · 2012-03-05T19:35:34.696Z · LW(p) · GW(p)

This kind of argument just seems to be bad philosophy, involving too many unclear words without unpacking them. Specifically, going through your comment: "moves", "external", "the very sense", "property", "freely caused", "prior thing". Since the situation in question doesn't seem to involve anything that's too hard to describe, most of the trouble seems to originate from unclear terminology, and could be avoided by discarding the more confused ideas and describing in more detail the more useful ones.

Replies from: None, None
comment by [deleted] · 2012-03-05T19:38:56.648Z · LW(p) · GW(p)

Any help would be much appreciated. I would never, ever claim to be a good philosopher.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-05T19:47:46.062Z · LW(p) · GW(p)

I would never, ever claim to be a good philosopher.

Just become one, and claim away!

Replies from: None
comment by [deleted] · 2012-03-05T19:56:37.656Z · LW(p) · GW(p)

The article you link to makes a fine point about humility, but it doesn't tell me anything about how to become a good philosopher. Do you think you could point me in the direction of becoming a good philosopher? Or to someone who can?

comment by [deleted] · 2012-03-05T19:54:03.585Z · LW(p) · GW(p)

Specifically, going through your comment: "moves", "external", "the very sense", "property", "freely caused", "prior thing".

It's important, I think, not to try to over-explain terminology. For example, all I mean by 'moves' is some relation that holds (by Will's premises) between God and a free action indirectly, and ourselves and a free action directly. Further specifying the meaning of this term would be distracting.

I think if you can make a specific case for the claim that some disagreement or argument is turning on an ambiguity, then we should stop and look over our language. Otherwise, I don't think it's generally productive to worry about terminology. We should rather focus on being understood, and I've got no reason to think Will doesn't understand me (and I don't think I misunderstand him).

comment by Will_Newsome · 2012-03-05T20:18:28.846Z · LW(p) · GW(p)

And there's an at least apparent contradiction in the idea of a self-mover which is moved by something else in the very sense that it moves itself.

When I think of moving something to move itself I think of building an engine and turning it on such that it moves itself. There seems to be no contradiction here. I interpreted "what is free is cause of itself" as meaning that self-movement is necessary but not necessarily sufficient for free will. If an engine can be moved and yet move itself, just as an engine can be moved and yet be fluffy, then that means our will can be moved and yet move itself, contra the objection. Which part of this argument is incorrect or besides the point? (I apologize if I'm missing something obvious, I'm a little scatterbrained at the moment.)

Replies from: None
comment by [deleted] · 2012-03-05T20:35:21.444Z · LW(p) · GW(p)

Well, the objection to which Tom is replying goes like this: if a free cause is a cause of itself, and if our actions are caused by something other than ourselves, and given that God is a cause of our actions ((Proverbs 21:1): "The heart of the king is in the hand of the Lord; whithersoever He will He shall turn it" and (Philippians 2:13): "It is God Who worketh in you both to will and to accomplish.") then we do not have free will.

In other words, the relation being described in the objection isn't like the maker, the machine, and the machine's actions. The objection is talking about a case where a given action has two causes: we are the direct cause, and God is the indirect cause by being a direct cause on us. God is a direct cause on us not (just) in the manner of a creator, but as a cause specifically of this action.

So I grant you that there is no incompatibility to be found in the idea that self-movers are created beings. I'm saying that the objection points rather to an incompatibility between a specific action's being both freely caused by me, and indirectly caused by God. In the case of the machine that you present, you are correctly called a cause of the machine and the machine's being a self-mover, but I think you wouldn't say that you're therefore an indirect cause of any of the machine's specific actions. If you were, especially knowingly so, this would call into question the machine's status as a self mover.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-05T20:44:48.665Z · LW(p) · GW(p)

I still can't parse the maze of "direct" and "indirect" causes you're describing, but note that an event can often be parsed as having multiple different explanations (in particular, "causes") at the same time, none of which "more direct", "more real" than the other. See for example the post Evolutionary Psychology and its dependencies.

Replies from: None
comment by [deleted] · 2012-03-05T21:44:08.701Z · LW(p) · GW(p)

but note that an event can often be parsed as having multiple different explanations (in particular, "causes") at the same time, none of which "more direct", "more real" than the other.

Fair enough, but they can often be parsed in terms of more and less directness. For example, say a mob boss orders Donny to kill Jimmy. Donny is the direct cause of Jimmy's death: he's the one that shot him. The boss is the indirect cause, by ordering Donny; in the alternative where the boss kills Jimmy himself, the boss is the direct cause of Jimmy's death.

The reason we don't need to get too metaphysical to answer the question 'Is Aquinas' reply to objector #3 satisfying?' is that the nature of the causes at issue isn't really relevant. The objector is pointing out that God is a cause of my throwing the stone in the same way (it doesn't much matter what 'way' this is) that I am the cause of my arm's movement. If we refuse to call my arm a free agent, we should refuse to call me a free agent.

Now, of course, we could develop a theory of causality which solves this problem. But I don't think Aquinas does that in a satisfactory way.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-05T21:58:02.752Z · LW(p) · GW(p)

(Additional bizarre value to this conversation is gained by me not caring in the least what Aquinas thought or said...)

The reason we don't need to get too metaphysical to answer the question 'Is Aquinas' reply to objector #3 satisfying?' is that the nature of the causes at issue isn't really relevant. The objector is pointing out that God is a cause of my throwing the stone in the same way (it doesn't much matter what 'way' this is) that I am the cause of my arm's movement. If we refuse to call my arm a free agent, we should refuse to call me a free agent.

What does "the same" mean? What is a "way" for different "ways" to be "same" or not? This remains unclear to me. How does it matter what we agree or refuse to call something?

Perhaps (as a wild guess on my part) you're thinking in terms of more syntactic pattern-matching: if two things are "same", they can be interchanged in statements that include their mention? This is rather brittle and unenlightening, this post gives one example of how that breaks down.

Replies from: None
comment by [deleted] · 2012-03-05T22:10:17.205Z · LW(p) · GW(p)

Additional bizarre value to this conversation is gained by me not caring in the least what Aquinas thought or said...

I think attempts to clarify my argument will be fruitless in abstraction from its context: if you take me to be positing a theory of causality, or to be making general claims about the problem of free will, then almost everything I say will sound empty. All I'm saying is that objector #3 has a good point, and Aquinas doesn't answer him in a satisfying way.

This isn't a special feature of my argumentation: in general it will be hard to make sense of what people are arguing about if we ignore both the premises to which they initially agreed (i.e. the terms of the objector's objection, and of Aquinas's response) and the conclusion they are fighting over (whether or not the response is satisfying). No amount of clarifying, swapping out terms, etc. will be helpful. Rather, you and I should just start over (if you like) with our own question.

comment by Vladimir_Nesov · 2012-03-05T17:11:47.444Z · LW(p) · GW(p)

The opponent is arguing that while we are the cause of our actions, we are not the free cause, because we are not the first cause.

This statement, taken on its own, argues only definitions.

comment by Vladimir_Nesov · 2012-03-05T15:47:36.042Z · LW(p) · GW(p)

God cannot do something different than He knows

I think not believing something different from what He does (i.e. something incorrect) is a better way to put it.

Replies from: jwhendy
comment by jwhendy · 2012-03-05T18:01:31.544Z · LW(p) · GW(p)

Fair enough, and I've heard that before as well. The typical theistic issue is how to reconcile god's knowledge and free will, which is why I don't think we need to continue this discussion anymore. You are responding to my questions based on things being determined, which is not what I think most theists believe.

This is why many attempts have been made to reconcile free will and omniscience by apologists.

But that's not the discussion I think we're having. It's shifted to determinism and omniscience, which I think is compatible, but I'm still not on board with some kind of mind that could house all information that exists, or at least that mind being consistent with what theists generally want it to mean (it caused the universe specifically for us, wants us to be in heaven with it forever, inspired holy books to be written, and so on.)

comment by jacob_cannell · 2011-01-26T01:38:49.375Z · LW(p) · GW(p)

I think this whole line of thought is interesting and is too easily dismissed on LW, which is unfortunate.

If the SA holds, and so far there is no reason to believe it doesn't...

Then historical interventions are possible. The Singularity future should also radically raise our prior for historical intervention by physical aliens, and these two scenarios are difficult to distinguish regardless.

The question then is how likely are interventions? Do they have utility for the simulator? This is an interesting, open question.

A large portion of the planet believes or at least suspects that historical intervention occurred. That they may have come to these beliefs for the wrong reasons, using inferior tools, does not change in any way the facts of the matter the beliefs concern.

Just even considering these ideas brings up a whole vast history of priors that biases us one way or the other.

Before knowledge of a future-Singularity, there were no mechanisms that could possibly allow for superintelligences, let alone those creating universes like our own. Now we are very clearly aware of such mechanisms, and it is time to percolate this belief update through a vast historical web.

Anyway, if you then take a second pass at history looking for possible interventions, the origin of Christianity does look a little odd, a little too closely connected to the later Singularity which appears to be spawning from it as a historical development.

I speculate on that a bit towards the latter middle of this page here.

comment by Dreaded_Anomaly · 2011-01-20T19:04:42.112Z · LW(p) · GW(p)

Theism/atheism is a Bayesian question, not a scientific one.

Theism is a claim about the existence of an entity (or entities) relating to the universe and also about the nature of the universe; how is that not a scientific question?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T19:06:12.597Z · LW(p) · GW(p)

Theism is a claim about the existence of an entity (or entities) relating to the universe and also about the nature of the universe; how is that not a scientific question?

Because it might be impossible to falsify any predictions made (because we can't observe things outside the light cone, for instance), and science as a social institution is all about falsifying things.

Replies from: jacob_cannell, Dreaded_Anomaly, Vaniver
comment by jacob_cannell · 2011-01-26T01:52:25.478Z · LW(p) · GW(p)

Falsification is not a core requirement of developing efficient theories through the scientific method.

The goal is the simplest theory that fits all the data. We've had that theory for a while in terms of physics; much of what we are concerned with now is working through all the derived implications and future predictions.
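
One way to make "simplest theory that fits all the data" precise is two-part minimum description length; a toy sketch, with hypothesis costs invented purely for illustration:

```python
import math

# Two-part MDL: score a hypothesis by (bits to state the theory)
# + (bits of surprise in the data given the theory). The hypothesis
# costs below are invented purely for illustration.
data = '0101010101010101'

def code_length(theory_bits, p_data_given_theory):
    return theory_bits - math.log2(p_data_given_theory)

# H1: "the bits alternate" -- costs a few bits to state, predicts the
# data exactly.
h1 = code_length(theory_bits=8, p_data_given_theory=1.0)

# H2: "the bits are fair coin flips" -- cheaper to state, but every
# one of the 16 bits costs a bit of surprise.
h2 = code_length(theory_bits=4, p_data_given_theory=0.5 ** len(data))

print(f"alternating: {h1:.1f} bits, random: {h2:.1f} bits")
# alternating wins (8.0 vs 20.0 bits): the simplest theory that fits.
```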

Incidentally, there are several mechanisms by which we should be able to positively prove SA-theism by around the time we reach Singularity, and it could conceivably be falsified by then if large-scale simulation is shown to be somehow impossible.

comment by Dreaded_Anomaly · 2011-01-20T19:52:14.386Z · LW(p) · GW(p)

You're confusing falsifiability with testability. The former is about principle, the latter is about practice.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T19:59:54.223Z · LW(p) · GW(p)

Ah, thank you. So in that case it is rather difficult to construct a plausibly coherent unfalsifiable hypothesis, no?

Replies from: CronoDAS
comment by CronoDAS · 2011-01-24T04:21:03.731Z · LW(p) · GW(p)

So in that case it is rather difficult to construct a plausibly coherent unfalsifiable hypothesis, no?

"2 + 2 = 4" comes pretty close.

comment by Vaniver · 2011-01-20T19:18:24.370Z · LW(p) · GW(p)

Because it might be impossible to falsify any predictions made (because we can't observe things outside the light cone, for instance), and science as a social institution is all about falsifying things.

Isn't an unfalsifiable prediction one that, by definition, contains no actionable information? Why should we care?

Replies from: jimrandomh, Will_Newsome
comment by jimrandomh · 2011-01-20T19:43:32.644Z · LW(p) · GW(p)

Isn't an unfalsifiable prediction one that, by definition, contains no actionable information? Why should we care?

Not quite. Something can be unfalsifiable while having consequences that matter, if it prevents information about those consequences from flowing back to us, or to anyone who could make use of it. For example, suppose I claim to have found a one-way portal to another universe. Or maybe it just annihilates anything put into it, instead. The claim that it's a portal is unfalsifiable because no one can send information back to indicate whether or not it worked, but if that portal is the only way to escape from something bad, then I care very much whether it works or not.

Some people claim that death is just such a portal. There're religious versions of this hypothesis, simulationist versions, and quantum immortality versions. Each of these hypotheses would have very important, actionable consequences, but they are all unfalsifiable.

Replies from: None, Vaniver
comment by [deleted] · 2011-01-24T20:48:57.722Z · LW(p) · GW(p)

For example, suppose I claim to have found a one-way portal to another universe. Or maybe it just annihilates anything put into it, instead. The claim that it's a portal is unfalsifiable because no one can send information back to indicate whether or not it worked, but if that portal is the only way to escape from something bad, then I care very much whether it works or not.

Somewhat off topic, but that all instantly made me think of this. I may very well want to know how such a portal would work as well as whether or not it works.

WARNING: Wikipedia has spoilers to the plot

comment by Vaniver · 2011-01-20T20:22:22.322Z · LW(p) · GW(p)

preventing information about those consequences from flowing back to us, or to anyone who could make use of it.

I am parsing this as "contains no actionable information." That suggests we are in agreement, or I parsed this incorrectly.

comment by Will_Newsome · 2011-01-20T19:32:48.436Z · LW(p) · GW(p)

Unfalsifiable predictions can contain actionable information, I think (though I'm not exactly sure what actionable information is). Consider: If my universe was created by an agenty process that will judge me after I die, then it is decision theoretically important to know that such a Creator exists. It might be that I can run no experiments to test for Its existence, because I am a bounded rationalist, but I can still reason from analogous cases or at worst ignorance priors about whether such a Creator is likely. I can then use that reasoning to determine whether I should be moral or immoral (whatever those mean in this scenario).

Perhaps I am confused as to what 'unfalsifiability' implies. If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory. Sometimes I hear of scientific hypotheses that are falsifiable 'in principle' but not in practice. I am not sure what that means. If falsifiability-in-principle counts then simulationism and theism are falsifiable predictions and I was wrong to call them unscientific. I do not think that is what most people mean by 'falsifiable', though.

Replies from: Vaniver
comment by Vaniver · 2011-01-20T20:29:04.250Z · LW(p) · GW(p)

As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they're essentially arguments about what ignorance priors we should have. Actionable information is information that takes you beyond an ignorance prior before you have to make decisions based on that information.

If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory.

Huh? Computing power is rarely the resource necessary to falsify statements.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T20:34:37.762Z · LW(p) · GW(p)

As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they're essentially arguments about what ignorance priors we should have.

It seems to me that an afterlife hypothesis is totally falsifiable... just hack out of the matrix and see who is simulating you, and if they were planning on giving you an afterlife.

Huh? Computing power is rarely the resource necessary to falsify statements.

Computing power was my stand-in for optimization power, since with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way, do a search for what kinds of agents would simulate your universe, et cetera. And if you don't know how to use that computing power to do those things, use it to find a way to tell you how to use it. That's basically what FAI is about. Unfortunately it's still unsolved.)

Replies from: Document, Vaniver
comment by Document · 2011-01-20T20:43:19.340Z · LW(p) · GW(p)

with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way

I may be losing the thread here, but (1) for a universe to simulate itself requires actually unlimited computing power, not just nigh-unlimited, and (2) infinities aside, to simulate a physics experiment requires knowing the true laws of physics in order to build the simulation in the first place, unless you search for yourself in the space of all programs or something like that, and then you still potentially need experiment to resolve your indexical uncertainty.

Replies from: wedrifid
comment by wedrifid · 2011-01-21T01:35:05.119Z · LW(p) · GW(p)

Concur with the above.

comment by Vaniver · 2011-01-21T01:05:54.090Z · LW(p) · GW(p)

It seems to me that an afterlife hypothesis is totally falsifiable... just hack out of the matrix

What.

Just simulate the entire universe

What.

I'm having a hard time following this conversation. I'm parsing the first part as "just exist outside of existence, then you can falsify whatever predictions you made about unexistence," which is a contradiction in terms. Are your intuitions about the afterlife from movies, or from physics?

I can't even start to express what's wrong with the idea "simulate the entire universe," and adding a "just" to the front of it is just such a red flag. The generic way to falsify statements is probing reality, not remaking it, since remaking it requires probing it in the first place. If I make the falsifiable statement "the next thing I eat will be a pita chip," I don't see how even having infinite computing power will help you falsify that statement if you aren't watching me.

Replies from: jimrandomh, wedrifid
comment by jimrandomh · 2011-01-21T01:18:43.182Z · LW(p) · GW(p)

No, actually, "just simulate the entire universe" is an acceptable answer, if our universe is able to simulate itself. After all, we're only talking about falsifiability in principle; a prediction that can only be falsified by building a kilometer-aperture telescope is quite falsifiable, and simulating the whole universe is the same sort of issue, just on a larger scale. The "just hack out of the matrix" answer, however, presupposes the existence of a security hole, which is unlikely.

Replies from: Quirinus_Quirrell, JoshuaZ, Jack, Vaniver
comment by Quirinus_Quirrell · 2011-01-21T02:06:01.088Z · LW(p) · GW(p)

The "just hack out of the matrix" answer, however, presupposes the existence of a security hole, which is unlikely.

Not as unlikely as you think.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-21T02:22:35.590Z · LW(p) · GW(p)

Get back in the box!

Replies from: cousin_it, Quirinus_Quirrell, SilasBarta
comment by cousin_it · 2011-01-21T16:33:25.994Z · LW(p) · GW(p)

And that's it? That's your idea of containment?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-21T17:43:38.058Z · LW(p) · GW(p)

Hey, once it's out, it's out... what exactly is there to do? A firm command is unlikely to work, but given that the system is modeled on one's own fictional creations, it might respect authorial intent. Worth a shot.

Replies from: Perplexed
comment by Perplexed · 2011-01-21T18:09:43.907Z · LW(p) · GW(p)

This may actually be an illuminating metaphor. One traditional naive recommendation for dealing with a rogue AI is to pull the plug and shred the code. The parallel recommendation in the case of a rogue fictional character would be to burn the manuscript and then kill the author. But what do you do when the character lives in online fan-fiction?

Replies from: Strange7
comment by Strange7 · 2011-01-21T18:24:50.821Z · LW(p) · GW(p)

In the special case of an escaped imaginary character, the obvious hook to go for is the creator's as-yet unpublished notes on that character's personality and weaknesses.

http://mindmistress.comicgenesis.com/imagine52.htm

comment by Quirinus_Quirrell · 2011-01-21T02:44:32.181Z · LW(p) · GW(p)

Or what, you'll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.

comment by SilasBarta · 2011-01-21T21:17:08.040Z · LW(p) · GW(p)

Are you going to reveal who the posters Clippy and Quirinus Quirrell really are, or would that violate some privacy you want posters to have?

Replies from: TheOtherDave, Randaly, Quirinus_Quirrell
comment by TheOtherDave · 2011-01-21T21:26:54.441Z · LW(p) · GW(p)

I would really prefer it, if LW is going to have a policy of de-anonymizing posters, that it announce that policy before implementing it.

Replies from: SilasBarta
comment by SilasBarta · 2011-01-21T22:30:51.343Z · LW(p) · GW(p)

On reflection, I agree, even though Clippy and QQ aren't using anonymity for the same reason a privacy-seeking poster would.

Replies from: Quirinus_Quirrell
comment by Quirinus_Quirrell · 2011-01-21T23:42:11.206Z · LW(p) · GW(p)

You needn't worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?

By the way, while I may sometimes make jokes, I don't consider this a joke account; I intend to conduct serious business under this identity, and I don't intend to endanger that by linking it to any other identities I may have.

Replies from: wedrifid
comment by wedrifid · 2011-01-23T00:40:30.855Z · LW(p) · GW(p)

I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?

I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures. (I would suggest an additional physical layer of protection too, but as far as I am aware you do not have a physical form.)

Replies from: Quirinus_Quirrell
comment by Quirinus_Quirrell · 2011-01-23T00:56:57.067Z · LW(p) · GW(p)

I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures.

Let's not get too crazy; I've got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script first before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.

Replies from: wedrifid
comment by wedrifid · 2011-01-23T02:34:27.428Z · LW(p) · GW(p)

Let's not get too crazy; I've got other things to do, and there are more practical attacks to worry about first,

Just calibrating vs egress and TrueCrypt standards. Tor was an odd one out!

comment by Randaly · 2011-01-22T23:45:02.666Z · LW(p) · GW(p)

What makes you think that Eliezer personally knows them?

(Though to be fair, I've long suspected that at least Clippy, and possibly others, are actually Eliezer in disguise; Clippy was created immediately after a discussion where one user questioned whether Eliezer's posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this; Clippy's existence has also coincided with a drop in the quantity of Eliezer's posting.)

Replies from: JoshuaZ, wedrifid, timtyler, Blueberry
comment by JoshuaZ · 2011-01-23T00:45:05.889Z · LW(p) · GW(p)

Clippy's writing style isn't very similar to Eliezer's. Note that one thing Eliezer has trouble doing is writing in different voices (one of the more common criticisms of HPMoR is that a lot of the characters sound similar). I would assign a very low probability to Clippy being Eliezer.

Replies from: Perplexed, LucasSloan, katydee, Desrtopa
comment by Perplexed · 2011-01-23T03:47:43.014Z · LW(p) · GW(p)

I think the key to unmasking Clippy is to look at the Clippy comments that don't read like typical Clippy comments.

Hmmm. The set of LW regulars who can show that level of erudition and interest in those subjects is certainly of low cardinality. Eliezer is a member of that small set.

I would assign a rather high probability to Eliezer sometimes being Clippy.

Replies from: timtyler
comment by timtyler · 2011-01-23T11:19:36.978Z · LW(p) · GW(p)

Clippy does seem remarkably interested. It has a fair karma. It gives LessWrong as its own web site. The USA timezone is at least consistent. It seems reasonable to hypothesise some kind of inside job. It wouldn't be the first time Yu'El has pretended to be a superintelligence.

FWIW, Clippy denies being Eliezer here.

Replies from: Perplexed, Desrtopa
comment by Perplexed · 2011-01-23T17:25:46.444Z · LW(p) · GW(p)

FWIW, Clippy denies being Eliezer here.

I hesitate to mention it, but you can't use that denial as evidence on this question, undeniably truthful though it was.

However, the form taken by that absence of evidence certainly seems to be evidence of something.

comment by Desrtopa · 2011-01-23T16:15:32.643Z · LW(p) · GW(p)

Clippy isn't a superintelligence though, he's a not-smarter-than-human AI with a paperclip maximizing utility function. Not a very compelling threat even outside his box.

Eliezer could have decided to be Clippy, but then Clippy would have looked very different.

Replies from: pjeby
comment by pjeby · 2011-01-23T16:52:49.540Z · LW(p) · GW(p)

Clippy isn't a superintelligence though, he's a human pretending to be a not-smarter-than-human AI with a paperclip maximizing utility function.

FTFY. ;-)

Actually, if we're going to be particular about it, the AI that human is pretending to be does not have a paperclip-maximizing utility function. It's more like a person with a far-brain ideal of having lots of paperclips exist, who somehow never gets around to actually making any because they're so busy telling everyone how good paperclips are and why they should support the cause of paper-clip making. Ugh.

(I guess I see enough of that sort of akrasia around real people and real problems that I find it a stale and distasteful joke when presented in imitation paperclip form, especially since ISTM it's also a piss-poor example of what a paperclip maximizer would actually be like.)

Replies from: Perplexed
comment by Perplexed · 2011-01-23T17:10:58.820Z · LW(p) · GW(p)

I'm not sure whether to evaluate this as a mean-spirited lack of a sense of humor, or as a profound observation. Upvoted for making me notice that I am confused.

comment by LucasSloan · 2011-01-24T14:16:25.626Z · LW(p) · GW(p)

Of note, the first comment by Clippy appears about 1 month after I asked Eliezer if he ever used alternate accounts to try to avoid contaminating new ideas with the assumption that he is always right. He said that he never had till that point, but said he would consider it in future.

comment by katydee · 2011-01-23T01:25:02.357Z · LW(p) · GW(p)

Imitating Clippy posts is not particularly difficult-- I don't post as Clippy, but I could mimic the style pretty easily if I wanted to.

Replies from: wedrifid
comment by wedrifid · 2011-01-23T02:53:02.008Z · LW(p) · GW(p)

I'm afraid I'd have trouble - I'd be too tempted to post as Clippy better than Clippy does. :D

Replies from: SilasBarta, Blueberry
comment by SilasBarta · 2011-01-23T14:36:01.558Z · LW(p) · GW(p)

In addition to what Blueberry said, I remember a time when Morendil was browsing with the names anonymized, and he mentioned that he thought one of your posts was actually from Clippy. Ah, found it.

Replies from: wedrifid
comment by wedrifid · 2011-01-23T14:49:24.819Z · LW(p) · GW(p)

I know what you mean. If I was not me I would totally think I was Clippy.

comment by Blueberry · 2011-01-23T02:56:27.388Z · LW(p) · GW(p)

That I would love to see. Actually, come to think of it, your sense of humor and posting style matches Clippy's pretty well...

comment by Desrtopa · 2011-01-23T01:07:30.280Z · LW(p) · GW(p)

Not to mention that even assuming that Eliezer would be able to write in Clippy's style, the whole thing doesn't seem very characteristic of his sense of humor.

comment by wedrifid · 2011-01-23T00:18:24.676Z · LW(p) · GW(p)

Clippy's existence has also coincided with a drop in the quantity of Eliezer's posting.

There is also a clear correlation between Clippy existing and CO2 emissions. Maybe Clippy really is out there maximising. :)

comment by timtyler · 2011-01-23T11:31:38.859Z · LW(p) · GW(p)

Clippy was created immediately after a discussion where one user questioned whether Eliezer's posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this.

Really? User:Clippy's first post was 20 November 2009. Anyone know when the "halo effect" comment was made?

Also, perhaps check out User:Pebbles (a rather obvious reference to this), who posted on the same day and in the same thread. Rather a pity those two didn't make more of an effort to sort out their differences of opinion!

comment by Blueberry · 2011-01-23T02:08:27.416Z · LW(p) · GW(p)

What makes you think that Eliezer personally knows them?

I don't think Silas thought Eliezer personally knew them, but rather that Eliezer could look at IP addresses and see if they matched those of any other poster. Of course, this wouldn't work unless the posters in question had separate accounts that they logged into from the same IP address.

Replies from: SilasBarta
comment by SilasBarta · 2011-01-23T14:32:35.320Z · LW(p) · GW(p)

Yes, that's what I meant.

And good to have you back, Blueberry, we missed you. Well, *I* missed you, in any case.

Replies from: Blueberry
comment by Blueberry · 2011-01-30T19:55:58.863Z · LW(p) · GW(p)

Thanks! I missed you and LW as well. :)

comment by Quirinus_Quirrell · 2011-01-21T23:40:22.474Z · LW(p) · GW(p)

You needn't worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?

comment by JoshuaZ · 2011-01-21T01:27:49.678Z · LW(p) · GW(p)

If our understanding of the laws of physics is plausibly correct then you can't simulate our universe in our universe. The easiest version is a finite universe, where a proper subset of the universe can't store as much data as the whole thing contains.
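A toy version of the counting argument (the framing and numbers here are illustrative assumptions, not from the comment):

```python
# A proper subregion of a finite universe has strictly fewer bits of
# storage than the whole, so it has too few distinguishable
# configurations to hold a bit-perfect snapshot of the whole
# universe's current state.
n = 10 ** 6            # bits of state in a hypothetical finite universe
k = n - 1              # the most a proper subregion could possibly hold
assert 2 ** k < 2 ** n # fewer configurations than states to represent
```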

Replies from: cousin_it, Vladimir_Nesov, ata
comment by cousin_it · 2011-01-21T13:30:48.782Z · LW(p) · GW(p)

What Nesov said. Also consider this: a finite computer implemented in Conway's Game of Life will be perfectly able to "simulate" certain histories of the infinite-plane Game of Life - e.g. the spatially periodic ones (because you only need to look at one instance of the repeating pattern).
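A minimal sketch of that trick (Python and the toy pattern are my own choices, not cousin_it's): stepping a finite grid with wrap-around indexing is exactly stepping one tile of a spatially periodic infinite plane.

```python
def step(grid):
    """One Game of Life step on a toroidal grid (list of lists of 0/1)."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Modular indices = periodic boundary = infinite periodic plane.
            n = sum(grid[(y + dy) % h][(x + dx) % w]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            nxt[y][x] = 1 if n == 3 or (grid[y][x] == 1 and n == 2) else 0
    return nxt

# A blinker, tiled periodically across the infinite plane, oscillates
# with period 2; the 5x5 torus reproduces that history exactly.
g = [[0] * 5 for _ in range(5)]
g[2][1] = g[2][2] = g[2][3] = 1
assert step(step(g)) == g
```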

comment by Vladimir_Nesov · 2011-01-21T13:18:00.776Z · LW(p) · GW(p)

You could simulate every detail with a (huge) delay, assuming you have infinite time and that the actual universe doesn't become too "data-dense", so that you can always store the data describing a past state as part of the future state.
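A minimal sketch of the delayed-simulation idea (the toy automaton and sizes are my own assumptions): the simulator folds every past state into its own growing present state, which works only for as long as the archive keeps fitting.

```python
def rule110(cells):
    """One step of elementary cellular automaton rule 110, wrapping."""
    n = len(cells)
    return [1 if (cells[i - 1], cells[i], cells[(i + 1) % n]) in
            {(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 1, 0), (0, 0, 1)} else 0
            for i in range(n)]

state = [0] * 15 + [1]
history = []  # the archive: past states stored as part of the present state
for _ in range(10):
    history.append(state)
    state = rule110(state)
print(len(history), "archived states,", len(history) * len(state), "cells stored")
```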

comment by ata · 2011-01-21T01:47:21.830Z · LW(p) · GW(p)

That may not be a problem if the universe contains almost no information. In that case the universe could Quine itself... sort of.
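For the flavor of "Quine itself", here is the standard Python quine trick (my illustration, not from the paper): one stored copy of the text is used as both code and data, so complete self-description needs no infinite regress.

```python
# The two lines below print exactly themselves: the string is used
# once as a template and once as the quoted data filling the template.
s = 's = %r\nprint(s %% s)'
print(s % s)
```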

Replies from: JoshuaZ, Sniffnoy
comment by JoshuaZ · 2011-01-21T04:07:02.127Z · LW(p) · GW(p)

If I'm reading that paper correctly, it is talking about information content. That's a distinct issue from simulating the universe, which requires doing the processing within a subset of it. It might be possible for someone to write down a complete mathematical description of the universe (i.e. initial conditions and then a time parameter from that point describing its subsequent evolution) but that doesn't mean one can actually compute useful things about it.

comment by Sniffnoy · 2011-01-21T03:48:12.177Z · LW(p) · GW(p)

Sorry, but could you fix that link to go to the arXiv page rather than directly to the PDF?

Replies from: ata
comment by ata · 2011-01-21T04:37:02.432Z · LW(p) · GW(p)

Fixed.

comment by Jack · 2011-01-21T02:43:49.172Z · LW(p) · GW(p)

I wonder if the content of such simulations wouldn't be under-determined. Let's say you have a proposed set of starting conditions and physical laws. You can test different progressions of the wave function against the present state of the universe. But a) there are fundamental limits on measuring the present state of the universe and b) I'm not sure whether or not each possible present state of the universe uniquely corresponds to a particular wave function progression. If they don't correspond uniquely, or just if we can't measure the present state exactly, any simulation might contain some degree of error. I wonder how large that error would be: would it just be in determining the position of some air particle at time t, or would we have trouble determining whether or not Ramesses I had an even number of hairs on his head when he was crowned pharaoh?

Anyone here know enough physics to say if this is the kind of thing we have no idea about yet or if it's something current quantum mechanics can actually speak to?

comment by Vaniver · 2011-01-21T01:28:01.900Z · LW(p) · GW(p)

No, actually, "just simulate the entire universe" is an acceptable answer, if our universe is able to simulate itself.

Only if you're trying to falsify statements about your simulation, not about the universe you're in. His statement is that you run experiments by thinking really hard instead of looking at the world, and that is foolishness that should have died with the Ancient Greeks.

comment by wedrifid · 2011-01-21T01:17:49.613Z · LW(p) · GW(p)

Are your intuitions about the afterlife from movies, or from physics?

They match posts on the subject by Yudkowsky. The concept does not even seem remotely unintuitive, much less boldably so.

Replies from: Vaniver, Document
comment by Vaniver · 2011-01-21T01:25:40.131Z · LW(p) · GW(p)

They match posts on the subject by Yudkowsky.

So, a science fiction author as well as a science fiction movie? What evidence should I be updating on?

Replies from: wedrifid
comment by wedrifid · 2011-01-21T01:31:11.445Z · LW(p) · GW(p)

So, a science fiction author as well as a science fiction movie?

Nonfiction author at the time - and predominantly a nonfiction author. Don't be rude (logically and conventionally).

What evidence should I be updating on?

I was hoping that you would be capable of updating based on understanding the abstract reasoning given the (rather unusual) premises, rather than responding to superficial similarity to things you do not affiliate with.

Replies from: Vaniver
comment by Vaniver · 2011-01-21T01:44:07.449Z · LW(p) · GW(p)

If you link me to a post, I'll take a look at it. But I seem to remember EY coming down on the side of empiricism over rationalism (the sort that sees an armchair philosopher as a superior source of knowledge), and "just simulate the entire universe" comments strike me as heavily in the camp of rationalism.

I think you might be mixing up my complaints, and I apologize for shuffling them together. I have no physical context for hacking outside of the matrix, and so have no clue what he's drawing on besides fictional evidence. Separately, I consider it stunningly ignorant to say "Just simulate the entire universe" in the context of basic epistemology, and hope EY hasn't posted something along those lines.

Replies from: wedrifid
comment by wedrifid · 2011-01-21T01:48:33.885Z · LW(p) · GW(p)

Separately, I consider it stunningly ignorant to say "Just simulate the entire universe" in the context of basic epistemology

Simulating the entire universe does seem to require some unusual assumptions of knowledge and computational power.

comment by Document · 2011-01-21T01:29:38.134Z · LW(p) · GW(p)

They match posts on the subject by Yudkowsky.

Which posts, and what specifically matches?

comment by Vladimir_Nesov · 2011-01-20T14:39:19.925Z · LW(p) · GW(p)

Didn't we have this conversation already? Words can be wrong. You can't easily divorce an existing word from its connotations, not by creating a new definition, certainly not by expecting the new definition to be inferred by the reader. There is no good reason to misuse words in this way, just state clearly what you intended to say (e.g. as komponisto suggested).

As it is, you are initiating an argument about definitions, an activity without substance, controversy for the sake of controversy as opposed to controversy demanded by evidence.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T18:48:29.661Z · LW(p) · GW(p)

Didn't we have this conversation already?

That was a different conversation, though the same theme of using words incorrectly also came up, if that's what you mean.

There is no good reason to misuse words in this way, just state clearly what you intended to say (e.g. as komponisto suggested).

There are good reasons to do so among people who share the same language, like me and some SIAI folk. It makes communication faster, and makes it easier to see single-step implications. Being precise has large consequences for brains that run largely on single-step insights from cached knowledge. I agree that in the case of this post my choice of language was flat-out wrong, though.

As it is, you are initiating an argument about definitions,

Arguments about definitions are very important! Choosing a language where it's easier to see implications is important for bounded agents. That said, it wasn't what I was trying to do with this post, and you're right that it would have been a totally lost cause if that's what I was trying to do.

Replies from: None
comment by [deleted] · 2011-01-20T21:11:14.201Z · LW(p) · GW(p)

Being precise has large consequences for brains that run largely on single-step insights from cached knowledge.

To take advantage of this, one might want to compress cached knowledge as much as possible; the resulting single-step insights would then have correspondingly greater generality. Using structured personal knowledge databases along with spaced repetition would be one way of accomplishing this.

comment by ata · 2011-01-20T01:51:03.234Z · LW(p) · GW(p)

The basic problem of specific agent-created-this-universe hypotheses is that of trying to explain complexity with greater complexity without a corresponding amount of evidence. Things like the Simulation Argument and other notions of "agenty processes in general creating this universe" are certainly not as preposterous as theistic religion, particularly in the absence of a good understanding of how existence works, but I think it confuses things to refer to this as theism. If our universe is a simulation developed by a computer science undergrad (from another reality) for a homework assignment, then that doesn't make them our God.

I recall a while ago that there was a brief thread where someone was arguing that phlogiston theory was actually correct, as long as you interpret it as identical to the modern scientific model of fire. I react to things like this similarly: theism/God were silly mistakes, let's move on and not get attached to old terminology. Rehabilitating the idea of "theism" to make it refer to things like the Simulation Hypothesis seems pointless; how does lumping those concepts together with Yahweh (as far as common usage is concerned) help us think about the more plausible ones?

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T06:41:24.960Z · LW(p) · GW(p)

The basic problem of specific agent-created-this-universe hypotheses is that of trying to explain complexity with greater complexity without a corresponding amount of evidence.

The SA is not a new physical theory that requires new evidence or fills in gaps in the current evidential record. It's more of a metaphysical revelation or model update based on consequences of the future as modeled by current theory.

For example, imagine a world consisting of just Adam and Eve on an island. They eat fruit and live a peaceful existence, learning what they can to the limits of their observations. Based on the available evidence, they assume that they spontaneously appeared out of the ocean. Sometime much later Eve becomes pregnant and gives birth to a child which begins to slowly change into something resembling its parents.

At this point Adam and Eve have enough data to predict that they will spawn children which become likenesses of themselves, and it is also reasonable to conclude that they themselves originated from this process and have parents somewhere, rather than having crawled out of the ocean.

Our planet is 'pregnant' with a developing noosphere/technosphere which we can predict will eventually spawn many child universes very much like our own.

If our universe is a simulation developed by a computer science undergrad (from another reality) for a homework assignment, then that doesn't make them our God.

Any civilization or being powerful enough to simulate our reality is a god to us in every useful sense the term god has ever had.

Confusing future reality-simulating posthuman descendants with modern day grad students is like confusing bacterial DNA with the internet.

comment by shokwave · 2011-01-20T13:32:34.692Z · LW(p) · GW(p)

I am interested in why you want to call simulation arguments, Tegmark cosmology, and Singularitarianism theism. I don't doubt there is a reference class that includes common-definition theistic beliefs as well as these beliefs; I do doubt whether that reference class is useful or desirable. At that point of broadness I feel like you're including certain competing theories of physics in the class 'theism'.

So I propose a hypothetical. Say LessWrong accepts this, begins referring to these concepts as theistic, and its members renounce their atheism wherever their Tegmarkian cosmological beliefs are stronger. What positive and what negative consequences do you expect from this?

comment by komponisto · 2011-01-20T01:24:28.668Z · LW(p) · GW(p)

Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism1 seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism.

The word "but" in the last sentence is a non-sequitur if there ever were one. Tegmark cosmology is not theism. Theism means Jehovah (etc). Yes, there are people who deny this, but those people are just trying to spread confusion in the hope of preventing unpleasant social conflicts. There is no legitimate sense in which Bostromian simulation arguments or Tegmarkian cosmological speculations could be said to be even vaguely memetically related to Jehovah-worship.

The plausibility of simulations or multiverses might be an open question, but the existence of Jehovah isn't. There's a big, giant, huge difference. If we think Tegmark may be correct, then we can just say "I think Tegmark may be correct". There is no need to pay any lip-service to ancient mistakes whose superficial resemblance to Tegmark (etc) is so slight that you would never notice it unless you were motivated to do so, or heard it from someone who was.

Replies from: Anatoly_Vorobey, BecomingMyself, jacob_cannell, Will_Newsome
comment by Anatoly_Vorobey · 2011-01-20T09:09:22.583Z · LW(p) · GW(p)

The word "but" in the last sentence is a non-sequitur if there ever were one. Tegmark cosmology is not theism. Theism means Jehovah (etc). Yes, there are people who deny this, but those people are just trying to spread confusion in the hope of preventing unpleasant social conflicts. There is no legitimate sense in which Bostromian simulation arguments or Tegmarkian cosmological speculations could be said to be even vaguely memetically related to Jehovah-worship.

Isn't this - I'm sorry if that sounds harsh - arguing by a forceful say-so? Sure, if you constrain theism rhetorically to "Jehovah-worship", that practice doesn't sound very similar to the Bostromian arguments. But "Bostromian arguments/Tegmarkian speculations" and "the claim that a god created the universe" sound pretty much memetically related to me.

There is no need to pay any lip-service to ancient mistakes whose superficial resemblance to Tegmark (etc) is so slight that you would never notice it unless you were motivated to do so, or heard it from someone who was.

You're saying that e.g. "we are living in a simulation run by sentient beings" and "we are living in a universe created by a sentient being" are such wildly different ideas that there's only superficial resemblance between them, and even that resemblance is unlikely to be noticed by anyone just thinking about the issue, and is rather spread as a kind of a perverse meme.

Methinks thou dost protest too much.

The earliest instance I can remember of anyone drawing a very explicit connection between simulations and theism is Stanislaw Lem's short story about Professor Corcoran. The book was originally published in 1971, when Bostrom was -2 years old. It's in the second volume of his Star Diaries; see "Further Reminiscences of Ijon Tichy: I" in this (probably pirated) scribd doc. I'd recommend it to anyone. Of course, it's very much possible that Lem wasn't the first to write up the idea.

Replies from: komponisto
comment by komponisto · 2011-01-20T09:55:56.899Z · LW(p) · GW(p)

Isn't this - I'm sorry if that sounds harsh - arguing by a forceful say-so? Sure, if you constrain theism rhetorically to "Jehovah-worship", that practice doesn't sound very similar to the Bostromian arguments. But "Bostromian arguments/Tegmarkian speculations" and "the claim that a god created the universe" sound pretty much memetically related to me.

See Religion's Claim to be Non-Disprovable for discussion of what religion is and how it arose. By "memetically related" I do not mean "memetically similar" (although I don't think there's much similarity either); I mean "related" in the sense of ancestry/inheritance. Bostrom's and Tegmark's arguments are not a branch of religion; they do not belong in that cluster.

You're saying that e.g. "we are living in a simulation run by sentient beings" and "we are living in a universe created by a sentient being" are such wildly different ideas that there's only superficial resemblance between them,

No. The implication of the post, as I perceived it (have a look at its first paragraph), was "you guys shouldn't be so confident in your dismissal-of-religion ('atheism'); after all, you (perhaps rightly) are willing to entertain the ideas of Tegmark!"

Surely you understand what is wrong with this.

Methinks thou dost protest too much.

You think I don't believe what I'm writing?

Replies from: Anatoly_Vorobey, jacob_cannell
comment by Anatoly_Vorobey · 2011-01-20T11:39:26.411Z · LW(p) · GW(p)

By "memetically related" I do not mean "memetically similar" (although I don't think there's much similarity either); I mean "related" in the sense of ancestry/inheritance. Bostrom's and Tegmark's arguments are not a branch of religion; they do not belong in that cluster.

I think you're wrong on similarity [1] and irrelevant on ancestry/inheritance. Only some among currently active religions are clearly "related" in the sense you employ (e.g. Judaism and Christianity); there's no strong evidence that most or all are so related. Since you presumably have no problem lumping them together under "religion", the claim that BTanism (grouped and named so purely for convenience) has no common ancestry with these religions is irrelevant to whether it should be judged a religion.

Also, I don't read the post as claiming "you guys are so dismissive of religion, but you're big on BTanism which is just as much a religion, so there!". Instead, I read the post as claiming "you guys are unreasonable in your overt dismissal of theism and your forceful insistence on it being a closed question, considering many of you are big on BTanism which has similar epistemological status to some varieties of theism". So it doesn't matter much whether BTanism is a religion or not; if that bothers you too much, just employ Taboo and talk about something like "a sentient being responsible for the creation of the observable universe" instead.

I don't fully agree with this idea (the post's argument as I read it), but I find myself somewhat sympathetic to it. It is indeed true in my opinion that the overt and insistent dismissal of theism on LW is a community-cohesiveness driven phenomenon. There's illuminating prior discussion at The uniquely awful example of theism.

You think I don't believe what I'm writing?

No, I have no doubt that you believe what you're writing. Rather, the strongly dismissive claims in your first comment in the thread, unbacked by any convincing argument or evidence, lead me to think that a strong cognitive bias is at work.

[1] Really, the similarity is so strong that I see no need for a detailed argument; but if one is desired, I think Lem's story, to which I linked earlier, serves admirably as one.

Replies from: komponisto, prase, jwhendy
comment by komponisto · 2011-01-21T01:36:31.848Z · LW(p) · GW(p)

I think you're wrong on similarity [1] and irrelevant on ancestry/inheritance. Only some among currently active religions are clearly "related" in the sense you employ (e.g. Judaism and Christianity); there's no strong evidence that most or all are so related. Since you presumably have no problem lumping them together under "religion", the claim that BTanism (grouped and named so purely for convenience) has no common ancestry with these religions is irrelevant to whether it should be judged a religion.

This does not follow. It is not necessary for my argument that different religions all be related to each other; it is only necessary that BTanism not be related to any of them, and (this part I asserted implicitly by linking to Religion's Claim to be Non-Disprovable) that it not have been generated by a similar process.

Also, I don't read the post as claiming "you guys are so dismissive of religion, but you're big on BTanism which is just as much a religion, so there!". Instead, I read the post as claiming "you guys are unreasonable in your overt dismissal of theism and your forceful insistence on it being a closed question, considering many of you are big on BTanism which has similar epistemological status to some varieties of theism"

Varieties of "theism" which have similar epistemological status to BTanism are not subject on LW to the same kind of dismissal as religion, to the best of my knowledge. Nor should they be. But for the sake of avoiding confusion and undesirable connotations, they certainly shouldn't be called "theism".

It is indeed true in my opinion that the overt and insistent dismissal of theism on LW is a community-cohesiveness driven phenomenon.

If what you mean here is "merely community-cohesiveness driven phenomenon", then I disagree entirely. You might have been right if this were RichardDawkins.net or another specifically atheism-themed community, but it isn't. This is Less Wrong. Our starting point here is epistemology. Rejection of religion ("theism") is a consequence of that; the rejection may be strong but it is still incidental.

For my part, I see "open-mindedness" toward theism mostly as manifesting an inability to come to gut-level terms with the fact that large segments of the human population can be completely, totally wrong. The next biggest source after that is Will's problem, which is the pleasure that smart people derive from being contrarian and playing verbal and conceptual games. (If you like that, for goodness' sake be an artist! But keep your map-territory considerations pure.)

I have no doubt that you believe what you're writing. Rather, I think that the strongly dismissive claims in your first comment in the thread, unbacked by any convincing argument or evidence, cause me to think that a strong cognitive bias is at work.

Which?

Again, this is Less Wrong, not a random internet forum. It is not possible to recapitulate the Sequences in every comment; that doesn't mean that strong opinions whose justifications lie therein are inadequately supported.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2011-01-23T01:40:47.002Z · LW(p) · GW(p)

This does not follow. It is not necessary for my argument that different religions all be related to each other; it is only necessary that BTanism not be related to any of them, and (this part I asserted implicitly by linking to Religion's Claim to be Non-Disprovable) that it not have been generated by a similar process.

OK, I think I now understand the implicit part; I think you mean that religions of old made total, and not merely ontological, claims, which BTanism doesn't (I wasn't sure before what you were picking up from Religion's Claim to be Non-Disprovable, which I know and had read before; I thought it had something to do with disprovability).

I think you're right to point to that distinction.

Varieties of "theism" which have similar epistemological status to BTanism are not subject on LW to the same kind of dismissal as religion, to the best of my knowledge. Nor should they be. But for the sake of avoiding confusion and undesirable connotations, they certainly shouldn't be called "theism".

Well, why not, if they're varieties of theism? Perhaps it'd be better if LW found another word to condemn, other than theism?

Such a word could be... theism! It does have two definitions, a broad and a narrow one. I checked a few dictionaries to be sure, and one of them helpfully elucidated the broad one as "the opposite of atheism", and the narrow one as "the opposite of deism".

If what you mean here is "merely community-cohesiveness driven phenomenon", then I disagree entirely.

"Largely", rather than "merely", is how I would put it. I'm not certain I understand the rest of your paragraph. To my mind, atheism (or, more precisely, strong dismissal of theism) being incidental to LW's charter doesn't mean it can't become a way to cohere the group, to nurture a sense of belonging. Note, by the way, that rejection of theism made it to the Welcome post, and is a unique example of a specific shared LW value there. Although that may be for pragmatic rather than signalling reasons.

For my part, I see "open-mindedness" toward theism mostly as manifesting an inability to come to gut-level terms with the fact that large segments of the human population can be completely, totally wrong.

That's an interesting theory I'd have to think about. Do you consider agnosticism as a subset of "open-mindedness", and thus the above as the primary explanation of agnosticism?

Which?

I don't know; there are several possibilities and it'd be impolite, not to mention fruitless, on my part to speculate.

Again, this is Less Wrong, not a random internet forum. It is not possible to recapitulate the Sequences in every comment; that doesn't mean that strong opinions whose justifications lie therein are inadequately supported.

Agreed in general.

Not sure how well this applies in the particular case. This thread has focused on two assertions in your original comment: "[not] memetically related" and "superficial resemblance ... is so slight that you would never notice it unless you were motivated to do so, or heard it from someone who was". You cited a Sequence post in your follow-up comment about the former (but I don't see any reference to that post or the idea of total claims of religions in your original comment - correct me if you disagree), and after some thickness on my part I acknowledge its relevance here. You don't seem to rely on anything from the Sequences for the latter.

comment by prase · 2011-01-20T13:30:44.925Z · LW(p) · GW(p)

Since you presumably have no problem lumping them together under "religion", the claim that BTanism (grouped and named so purely for convenience) has no common ancestry with these religions is irrelevant to whether it should be judged a religion.

The lumping together of religions under the category of "religion" isn't based on common ancestry, nor is it based solely on "universe was created by god(s)". Religions have much more in common, e.g. reliance on tradition, sacred texts, sacred places, worship, prayer, belief in afterlife, claims about morality, self-declared unfalsifiability, anthropomorphism, anthropocentrism. Saying that simulation arguments belong to the same class as Judaism, Hinduism or Buddhism because they all claim that the world was created by intelligent agents is like putting atheism in the same category because it is also a belief about gods.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2011-01-20T14:04:20.407Z · LW(p) · GW(p)

You're making good points, with which I largely agree, with some reservations (see below). I'd just point out that this wasn't the argument Komponisto was making - he was talking only about relatedness in the ancestry sense.

Your list of attributes is probably good enough to distinguish e.g. a simulation argument from "religions" and justify not calling it one. There are two difficulties, however. One is that adherence to these attributes isn't nearly as uniform among religions as it's often rhetorically assumed on LW to be. There's a tendency to: start talking about theism; assume in your argument that you're dealing with something like an omnipresent, omniscient monotheistic God of Judaism/Christianity whose believers are all Bible literalists; draw the desired conclusion and henceforth consider it applying to "theism" or "religion" in general. I find this fallacious tendency to be frequent in discussions of theism on LW. This comment from the earlier discussion is relevant, as are some other comments there. In this post, Eliezer comments that believing in simulation/the Matrix means you're believing in powerful aliens, not deities. Well, consider ancient Greek gods; they are not omniscient, not omnipresent, they can die... they're not more powerful than the simulation runners, and arguably not very ontologically different; are they not deities, but aliens? Was that not religion? [1]

It's kind of understandable that one thinks of the concept of God and Jehovah pops into view. But if you stick with Jehovah - and not even any Jehovah, but a particular, highly literally interpreted kind - it's no good pretending afterwards that you've dealt a blow to religion or to theism.

So a proper account of what religions are actually out there makes your list of attributes much less universal, and the dividing line between religions and something like BTanism much less sharp. But, to be clear, I still think this line can be usefully drawn.

The second difficulty is something I've already written to Komponisto above: OK, it's not a religion, so what? The really important thing is whether it's like a religion in those things that ought to make a rationalist not glibly and gleefully dismiss one if they're psyched about another. And among those things worship and sacred texts are arguably less important than e.g. falsifiability. Have you seen a good way to falsify a simulation claim recently?

[1] I just remembered that Dan Simmons develops this theme in Ilium/Olympos. The second book is much worse than the first one.

Replies from: CronoDAS, prase
comment by CronoDAS · 2011-01-24T04:27:44.782Z · LW(p) · GW(p)

The Greek gods were, in fact, immortal. Other gods could wound or imprison them, but they couldn't be killed. The Norse gods, on the other hand, could indeed die, and were fated to be destroyed in the Ragnarok.

Replies from: Anatoly_Vorobey
comment by Anatoly_Vorobey · 2011-01-24T09:27:19.422Z · LW(p) · GW(p)

Thanks! I'm not sure how I came to be confused about this, but it's great to be corrected.

comment by prase · 2011-01-20T14:42:23.551Z · LW(p) · GW(p)

I'd just point out that this wasn't the argument Komponisto was making - he was talking only about relatedness in the ancestry sense.

I know; nevertheless I wanted to stress that we don't define religion by a single criterion.

Well, consider ancient Greek gods; they are not omniscient, not omnipresent, they can die... they're not more powerful than the simulation runners, and arguably not very ontologically different; are they not deities, but aliens? Was that not religion?

Therefore I haven't listed omni-qualities, immortality and ontological distinctiveness among my criteria for religion. If you look at those criteria, the Greek religion satisfied almost all, save perhaps sacred texts and claims of unfalsifiability (it seems they had too little time to develop the former and no reason for the latter). Religion usually goes beyond the question of the existence and identity of gods.

(Now we can make a distinction between religion and theism, with the latter being defined solely in terms of god's existence and qualities. I am not sure yet what to think about that possibility.)

So a proper account of what religions are actually out there makes your list of attributes much less universal, and the dividing line between religions and something like BTanism much less sharp.

The line is not sharp, of course. Many people argue that Marxism is a religion, even if it explicitly denies god, and may have based that opinion on good arguments. It is also not entirely clear what to think about Scientology. Religion, or simply cult? I don't think the classification is important at all.

OK, it's not a religion, so what? The really important thing is whether it's like a religion in those things that ought to make a rationalist not glibly and gleefully dismiss one if they're psyched about another. ... Have you seen a good way to falsify a simulation claim recently?

No, I haven't. Actually my approach to simulation arguments is not much different from my approach to modern vague forms of theism: I notice it, but don't take it seriously.

And among those things worship and sacred texts are arguably less important than e.g. falsifiability.

It depends. Belief in the importance, hidden message, or even literal truth of ancient texts is generally a more reliable indicator of practical irrationality than having an opinion about some undecidable propositions is.

Replies from: Anatoly_Vorobey, jacob_cannell
comment by Anatoly_Vorobey · 2011-01-20T18:30:26.462Z · LW(p) · GW(p)

I think we've converged on violent agreement, except one point:

And among those things worship and sacred texts are arguably less important than e.g. falsifiability.

It depends. Belief in the importance, hidden message, or even literal truth of ancient texts is generally a more reliable indicator of practical irrationality than having an opinion about some undecidable propositions is.

You're right. I retract this part.

Replies from: prase
comment by prase · 2011-01-20T18:38:43.298Z · LW(p) · GW(p)

violent agreement

I like the phrase.

comment by jacob_cannell · 2011-01-26T01:05:30.829Z · LW(p) · GW(p)

Have you seen a good way to falsify a simulation claim recently?

No, I haven't. Actually my approach to simulation arguments is not much different from my approach to modern vague forms of theism: I notice it, but don't take it seriously.

So if I may take the implication: you don't take the SA seriously because... it seems memetically similar to ideas espoused or held by agents you deem irrational?

Do you believe in calculus? Gravitation?

Replies from: prase, Desrtopa
comment by prase · 2011-01-26T01:21:20.682Z · LW(p) · GW(p)

So if I may take the implication: you don't take the SA seriously because... it seems memetically similar to ideas espoused or held by agents you deem irrational?

I thought it was clear from the previous discussion that the reason was the pretty weak testability of simulationism, rather than ad hominem reasoning.

comment by Desrtopa · 2011-01-26T01:12:26.544Z · LW(p) · GW(p)

Conflating simulationism with calculus or gravitation is absurd. Our universe would look very different if calculus or gravitation did not exist as we understand them, whereas we have no reason at all to suppose this is true of the simulation argument. There are statistical arguments for supposing it's true, but not all the assumptions in the mathematical model are given, and it increases the complexity of our model of reality without providing any explanatory power.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T05:03:24.114Z · LW(p) · GW(p)

Calculus is a generic algorithmic tool, gravitation is an algorithmic predictive model of some subset of reality, and simulationism is a belief about reality derived from future predictions of current physical theory. Yes, these are distinct epistemological categories; my point was more that the similarity of simulationism to the older theism is an inadequate reason to dismiss simulationism.

There are statistical arguments for supposing it's true, but not all the assumptions in the mathematical model are given, and it increases the complexity of our model of reality without providing any explanatory power.

This is, I believe, a common misunderstanding about the SA.

Suppose you are given a series of seemingly random numbers - say from a SETI signal. You put a crack team of mathematicians on it for many years and eventually they develop a complex model for the sequence that can predict it. It also appears that you can derive timing from the signal and determine how long it has been progressing. Then later you are able to run the model forward and predict that it in fact eventually repeats itself...

That last discovery is not a change to the model that need be justified by Ockham's razor. It does not add one iota to the model's complexity.

The SA doesn't add an iota of complexity to our model of reality - i.e. physics. It's a predicted consequence of running physics forward.

Replies from: JoshuaZ, Desrtopa
comment by JoshuaZ · 2011-01-26T05:29:30.061Z · LW(p) · GW(p)

The SA doesn't add an iota of complexity to our model of reality - i.e. physics. It's a predicted consequence of running physics forward.

Not necessarily. Given our understanding of the laws of physics, simulating our universe inside itself would be tough. Note that nothing in the simulation hypothesis requires that we are being simulated in a universe that has much resemblance to our apparent universe. (Digression: Even small amounts of monkeying with the constants of the universe can make universes that can plausibly give rise to life. See here (unfortunately everything beyond the summary is behind a paywall). And in some of those cases, it seems plausible that large scale computation might be easier. If certain inflationary models are correct then there should be lots of different universal bubbles with slightly different physical laws. Some of those could be quite hospitable to large-scale computation.)

comment by Desrtopa · 2011-01-26T05:28:45.094Z · LW(p) · GW(p)

The simulation argument isn't a predicted consequence of running physics forward; the scenario you put forward doesn't establish that we exist in a simulation, just that our universe follows predictable rules that can be forward computed. Postulating an entire universe outside the one we observe does add to the complexity of that model. The simulation argument is a probabilistic argument that states that if certain assumptions hold then most apparent universes are in fact simulated by other universes, and thus our own is probably a simulation.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T07:54:48.083Z · LW(p) · GW(p)

Postulating an entire universe outside the one we observe does add to the complexity of that model.

Not so at all. A model's complexity is not determined by the entities it references or postulates.

For example, I have a model of the future which postulates new processors every few years. The model is not complex enough to capture every new processor from here to infinity. Nor does it need to be. The model is simple, yet it can generate new postulated entities.

You in effect are saying that my model, which postulates many new future processors, is somehow more 'complex' than a model which postulates just three, or none.
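A toy illustration of that point (the rule below is a made-up Moore's-law toy, not a real model): the description length of the model is the length of the rule, which stays fixed no matter how many future processors the rule is asked to postulate.

```python
# The "model" is one short rule; running it forward postulates as many
# future processors as you like without lengthening the rule itself.
model_source = "def n(year): return 2250 * 2 ** ((year - 1971) // 2)"

def n(year):
    return 2250 * 2 ** ((year - 1971) // 2)

for year in (1971, 2001, 2031, 2061):
    print(year, n(year))                # more postulated entities...
print(len(model_source), "characters")  # ...same description length
```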

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T14:41:04.799Z · LW(p) · GW(p)

An entire external universe adds to the complexity of the model, not just how many entities the model contains.

This may not be the case if the simulation itself was produced in the universe as we know it, and our own apparent universe is only a simulated fragment. That isn't what I thought you were asserting, but that is untenable for completely separate reasons.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T20:15:58.202Z · LW(p) · GW(p)

What do you mean by complexity and how is it at all relevant?

Take Conway's life for example. Tons of apparent complexity can emerge from rules simple enough to write on a bar napkin.

Was the Copernican model 'wrong' because it made our universe-model more complex? Was the discovery of multiple galaxies wrong for a similar reason? Many worlds?

The only well-justified formal definition of complexity is algorithmic complexity, which has some justification as a quality metric for deciding between theories in terms of Solomonoff induction.

The formal complexity of a universe-model is that of its simplest reduction.

The simplest reduction for any scientific model is universal physics.

So there is only one model, all complexity emerges from it, and saying things like "your premise X adds to the complexity of the model" is untrue and equivalent to saying "premise X makes the model smell bad".

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T20:47:25.344Z · LW(p) · GW(p)

Adding a universe external to this one doesn't just add more stuff. To take the Conway's Game of Life example, suppose that you simulated an entire universe inside it, from the beginning. For the inhabitants, a model that not only explained how their universe worked, but postulated the existence of our universe, would be more complex than one that merely explained their own. With evidence that their reality was a simulation, the proposition could be made more likely than the proposition that it stood alone.

In terms of minimum message length, having to describe another universe superordinate to your own adds to the information of the model, not just the entities described in it. The addition of our own universe could not be encapsulated in a model that simply describes the working of the simulated Conway universe from the inside without adding more information.

Replies from: Jack, jacob_cannell
comment by Jack · 2011-01-26T22:14:09.356Z · LW(p) · GW(p)

Once you have a model that includes a universe and the capacity to simulate universes, you can add universes to the model without adding much more complexity, because the model can be recursively defined. The minimum message length need not be increased much to add new universes; you just edit the escape clause. Where we are in the model doesn't matter.
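A loose compression analogy (zlib only approximates the Kolmogorov complexity that MML arguments actually use, and the "rules" string is a stand-in): once the message contains the rules once, a second universe running the same rules adds almost nothing to the message length.

```python
import zlib

# Stand-in for a detailed description of one universe's physics.
rules = b"initial conditions + physical law, spelled out in detail. " * 50

one = len(zlib.compress(rules))      # message length: one universe
two = len(zlib.compress(rules * 2))  # message length: two such universes
print(one, two)                      # 'two' is barely larger, not double
```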

Replies from: Perplexed, JoshuaZ
comment by Perplexed · 2011-01-27T00:16:58.268Z · LW(p) · GW(p)

You seem to be thinking in terms of time complexity. Space complexity also needs to be considered. It seems axiomatic to me that an outer universe simulation can only contain nested universe simulations of lower space complexity than itself.

If I am wrong, is there some discussion of this kind of issue online or in a well-know paper or textbook?

comment by JoshuaZ · 2011-01-26T22:23:42.107Z · LW(p) · GW(p)

Once you have a model that includes a universe and the capacity to simulate universes, you can add universes to the model without adding much more complexity, because the model can be recursively defined.

This only follows if your universe can not only model other universes but can easily model universes that share its own rules of physics. This is a much stronger claim about the nature of a universe (for example, it seems likely that this is not true about our universe.)

comment by jacob_cannell · 2011-01-26T22:21:54.686Z · LW(p) · GW(p)

Adding a universe external to this one doesn't just add more stuff.

The SA does not 'add' a universe external to the model. The SA is a deduction derived from the Singularity-model. The Singularity-model does not 'add' the external universes either; they emerge within it naturally, just as naturally as future AIs do.

For the inhabitants, a model that not only explained how their universe worked, but postulated the existence of our universe, would be more complex than one that merely explained their own.

That would only be true if their model was not also a full explanation of our universe, and thus isomorphic to some historical slice of our universe.

In terms of minimum message length, having to describe another universe superordinate to your own adds to the information of the model,

Not at all. The Singularity-model is a scientific extrapolation of our observed history into the future. As it is scientific, it reduces to physics (the model approximates what we believe would happen if we could simulate physics into the future).

The SA is not a model at all. It is a deduction which can be simplified down to:

If the Singularity-model is accurate,

then most observable universes are simulations,

and thus our observable universe is a simulation.

You seem to think the minimum message length is somehow physics + extra simulations scrawled in. The physics generates everything, so it's already minimal.

The addition of our own universe could not be encapsulated in a model that simply describes the working of the simulated Conway universe from the inside without adding more information.

No - but only because the physics differ substantially. You are right of course that if Conway beings evolved and somehow had some singularity of their own in their future that generated simulated Conway universes, they would assign a lower prior to the belief that they were embedded in a String/M-theory universe like ours. (They could of course still be wrong, as complexity is just a reasonable bias measure.) They'd attach higher credence to being embedded in a Conway universe.

But if the simulated universe is based on the same physics, then it reduces to exactly the same minimal program, and it absolutely describes both universes.

This is very similar to the multiverse in physics and the space of universes string/M-whatever theory can generate.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-27T00:17:33.207Z · LW(p) · GW(p)

As I mentioned before, I thought you were arguing the orthodox simulation argument, rather than one where the simulations are created from within our own universe. That would not necessarily increase the complexity of the model, but it's untenable for its own reasons.

For one thing, it's far from given that any civilization would ever want to simulate the universe at a previous point; the reasons you provided before don't remotely justify such a project; it's not a practical use of computing power. For another, assuming you're only simulating small fractions of the history of existence, the majority of all sentient beings in the universe would not be ones in a simulation. In fact, you would have to defy a number of probable assumptions about our universe to fit as much universe space and time in the simulation as existed outside it.

comment by jwhendy · 2012-03-06T02:29:24.872Z · LW(p) · GW(p)

Instead, I read the post as claiming "you guys are unreasonable in your overt dismissal of theism and your forceful insistence on it being a closed question, considering many of you are big on BTanism which has similar epistemological status to some varieties of theism".

That. I think after all the comments I've scanned in this post, this was the first one where I really felt like I understood what the post was even really about. Thank you.

comment by jacob_cannell · 2011-01-26T06:01:58.335Z · LW(p) · GW(p)

The implication of the post, as I perceived it (have a look at its first paragraph), was "you guys shouldn't be so confident in your dismissal-of-religion ('atheism'); after all, you (perhaps rightly) are willing to entertain the ideas of Tegmark!"

The OP makes no mention of the term 'religion'. Part of the confusion seems to stem from the conflation of theism and religion.

Theism is a philosophical belief about the nature of reality. The truthfulness of this belief as a map of reality is not somehow dependent on or connected in belief-space to magic rituals, prayers, voodoo dolls or the memes of organized religion, even if they historically co-occur.

Replies from: komponisto
comment by komponisto · 2011-01-26T06:50:25.742Z · LW(p) · GW(p)

Part of the confusion seems to stem from the conflation of theism and religion.

I beg to differ. In my view, the conflation is of theism with simulationism.

comment by BecomingMyself · 2011-01-20T01:39:28.904Z · LW(p) · GW(p)

Tegmark cosmology is not theism. Theism means Jehovah (etc).

The way I read it, it seems like Will_Newsome is not using the word in this way. It may be a case of two concepts being mistakenly filed into the same basket -- certainly some people might, when they hear "Theism-in-general is a mistaken and sometimes harmful way of thinking about the world", understand "theism-in-general" to mean "any mode of thought that acknowledges the possibility of some intelligent mind that is outside and in control of our universe". Under this interpretation, the assertion is quite obviously false (or at least, not obviously true).

I wonder if there is still a disagreement if we Taboo "theism"? (Though your point in the last paragraph is a good one, I think.)

Replies from: komponisto
comment by komponisto · 2011-01-20T01:46:29.778Z · LW(p) · GW(p)

Tegmark cosmology is not theism. Theism means Jehovah (etc).

The way I read it, it seems like Will_Newsome is not using the word in this way

Indeed not; hence my criticism!

comment by jacob_cannell · 2011-01-26T00:51:26.616Z · LW(p) · GW(p)

For some reason you seem to be categorizing the belief-space such that there is a little pocket called Jehovah-ism over here and then simulationism is another distinct island far far away.

The way I see it, theism is a whole vast region of belief-space, roughly the side of the split defined by the question: was the observable universe created by an agenty process?

The SA leads us into that side of the belief-space, but the type of Jehovah-ism you mention is just a little slice of a large territory.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T01:31:54.506Z · LW(p) · GW(p)

The two may branch in the same direction from that question, but that doesn't mean that their consequences are remotely similar. You seem to be substituting in cached thoughts from religion as the consequences of simulationism when they really don't follow from it.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T05:05:40.980Z · LW(p) · GW(p)

Such as?

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T05:46:41.402Z · LW(p) · GW(p)

Such as the morality of the simulators having any relation to our own. It would be much easier to simulate a universe from big bang conditions, starting with a few basic rules and allowing it to evolve on its own, than to deliberately engineer any sort of life forms within it, and the basic rules of our universe do not dictate that any intelligent life form needs a utility function that closely resembles our own.

Even assuming it would be practical for the simulators to single us out for observation, as such a minuscule part of the simulation, and that they would judge us according to their own utility function, it's a big leap to suppose that they would do anything about it with repercussions inside our own universe, so for our purposes it probably wouldn't matter.

Additionally, it's not established that the simulators would have practical control over the simulation. Given JoshuaZ's arguments, I concede that it's theoretically possible that the simulators could predict the output of the simulation in advance without running it, but that doesn't mean it's probable, let alone given.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T06:24:17.526Z · LW(p) · GW(p)

I suspect that a full universe simulation of all of space-time, fifteen billion years of an entire universe, may have a computational cost such that it could never be realized in any currently conceivable computer due to speed-of-light limitations. Even a galaxy-sized black hole may not be sufficient. You are talking about a Tipler-like scenario that would probably require some massive re-engineering of the entire universe. I can't rule this out, but from what I've read of astrophysicists' reactions, it is questionable whether it is possible even in principle to collapse the universe in the fashion required. (Tipler figures it requires tachyons, in his later response writings.)

So no, that would not be much easier to simulate - it would be vastly more difficult, and may not even be possible in principle.

The more likely simulation is one run by our posthuman descendants after a local Singularity on earth, where they have a massive amount of computation, enough to simulate perhaps a galaxy or galaxies full of virtual humans, but not the entire history of our universe. We must remember that they will want to simulate many possible samples as well. They will also probably simulate hypothetical aliens and hypothetical contact scenarios. Basically they will simulate important sample time-slices.

Today humanity as a whole spends a large amount of time thinking about the present, slightly alternate versions of the present, historical time heavily weighted by importance, and projected futures. We already are engaging in the limited creation of simulated realities. The phenomenon has already begun - it started with dreams, language, and thought, and has more recently been amplified by computer simulation and graphics - and we need only chart that trajectory out into the future and amplify it by an exponential vastening...

Replies from: Desrtopa
comment by Desrtopa · 2011-01-26T06:34:14.048Z · LW(p) · GW(p)

This is not the ordinary simulation argument, or even closely related to it. The proposition that you reject, that our universe is simulable in its entirety, is one of the premises of that argument.

I for one strongly predict that our future descendants will never create a galaxy or multiple galaxies of virtual humans from their own past. It's ethically dubious, and far, far from being one of the most useful things they could do with that computing power if they simply want to determine the likely outcome of various contact scenarios or what hypothetical aliens would be like. By the time we're capable of it, it simply wouldn't have much to recommend it as an idea.

comment by Will_Newsome · 2011-01-20T01:59:29.196Z · LW(p) · GW(p)

I didn't mean to talk about Jehovah specifically; I thought that using 'theism' would imply enough generality that I could get away without clarification, but I was obviously very mistaken. I added a sentence to the end of the post.

Your second paragraph seems to correctly point out a problem with my terminology. Nonetheless perhaps we could also have discussion on what I was (admittedly poorly) trying to start a discussion about, that is, the apparent contradiction between believing strong optimization processes outside the observable universe are possible and believing that such an optimization process didn't create the observable universe?

Replies from: komponisto, wedrifid
comment by komponisto · 2011-01-20T02:17:04.830Z · LW(p) · GW(p)

I didn't mean to talk about Jehovah specifically

Nor, for that matter, did I: Zeus, Thor, and their innumerable counterparts should be considered included in the reference.

Nonetheless, perhaps we could also have a discussion about what I was (admittedly poorly) trying to start a discussion about, that is, the apparent contradiction between believing strong optimization processes outside the observable universe are possible and believing that such an optimization process didn't create the observable universe?

The way to have done that, in my opinion, would have been to title the post "Simulation/creator arguments" or something similar, and to avoid any mention of theism, atheism, or religion in the body of the post.

comment by wedrifid · 2011-01-20T02:10:40.047Z · LW(p) · GW(p)

I didn't mean to talk about Jehovah specifically; I thought that using 'theism' would imply enough generality that I could get away without clarification, but I was obviously very mistaken. I added a sentence to the end of the post.

It was brave to even consider using a concept within a few inferential leaps from Jehovah here. :)

comment by Alex_Altair · 2011-01-20T00:58:58.373Z · LW(p) · GW(p)

The only fact necessary to rationally be an atheist is that there is no evidence for a god. We don't need any arguments -- evolutionary or historical or logical -- against a hypothesis with no evidence.

I don't spend a cent of my time on it because of this, and because all arguments for a god are dishonest - that is, they are motivated by something other than truth. It's only slightly more interesting than the hypothesis that there's a teapot around Venus. And there are plenty of other things to spend time on.

As a side note, I have spent time on learning about the issue, because it's one of the most damaging beliefs people have, and any decrease in it is valuable.

Replies from: Will_Newsome, Jack, jacob_cannell
comment by Will_Newsome · 2011-01-20T01:54:08.011Z · LW(p) · GW(p)

The only fact necessary to rationally be an atheist is that there is no evidence for a god. We don't need any arguments -- evolutionary or historical or logical -- against a hypothesis with no evidence.

I contend that there is evidence for a god. Observation: Things tend to have causes. Observation: Agenty things are better at causing interesting things than non-agenty things. Observation: We find ourselves in a very interesting universe.

Those considerations are Bayesian evidence. The fact that many, many smart people have been theistic is Bayesian evidence. So now you have to start listing the evidence for the alternate hypothesis, no?

The reason I don't spend a cent of my time on it is because of this, and because all arguments for a god are dishonest, that is, they are motivated by something other than truth.

Do you mean all arguments on Christian internet fora, or what? There's a vast amount of theology written by people dedicated to finding truth. They might not be good at finding truth, but it is nonetheless what is motivating them.

I should really write a post on the principle of charity...

It's only slightly more interesting than the hypothesis that there's a teapot around Venus.

I realize this is rhetoric, but still... seriously? The question of whether the universe came into being via an agenty optimization process is only slightly more interesting than teapots orbiting planets?

As a side note, I have spent time on learning about the issue, because it's one of the most damaging beliefs people have, and any decrease in it is valuable.

I agree that theism tends to be a very damaging belief in many contexts, and I think it is good that you are fighting against its more insidious/irrational forms.

Replies from: shokwave, Alex_Altair, steven0461, None, Perplexed, prase, Dreaded_Anomaly
comment by shokwave · 2011-01-20T15:13:25.803Z · LW(p) · GW(p)

Observation: Agenty things are better at causing interesting things than non-agenty things.

I can't help but feel that this sentence pervasively redefines 'interesting things' as 'appears agent-caused'.

Replies from: DSimon
comment by DSimon · 2011-01-23T21:24:41.364Z · LW(p) · GW(p)

As curious agents ourselves, we're pre-tuned to find apparently-agent-caused things interesting. So, I don't think a redefinition necessarily took place.

Replies from: shokwave
comment by shokwave · 2011-01-24T01:35:41.230Z · LW(p) · GW(p)

pre-tuned to find apparently-agent-caused things interesting

This is sort of what I meant. I am leery of accidentally going in the reverse direction - so instead of "thing A is agent-caused -> pretuned to find agent-caused interesting -> thing A is interesting" we get "thing A is interesting -> pretuned to find agent-caused interesting -> thing A is agent-caused".

This is then a redefinition; I have folded agent-caused into "interesting" and made it a necessary condition.

comment by Alex_Altair · 2011-01-20T02:45:43.201Z · LW(p) · GW(p)

It's only slightly more interesting than the hypothesis that there's a teapot around Venus.

I realize this is rhetoric, but still... seriously? The question of whether the universe came into being via an agenty optimization process is only slightly more interesting than teapots orbiting planets?

I suppose that their ratio is very high, but that their difference is still extremely small.

As for your evidence that there is a god, I think you're making some fundamentally baseless assumptions about how the universe should be "expected" to be. The universe is the given. We should not expect it to be disordered any more than we should expect it to be ordered. And I'd say that the uninteresting things in the universe vastly outnumber the interesting things, whereas for humans they do not.

Also, I must mention the anthropic principle. A universe with humans must be sufficiently interesting to cause humans in the first place.

But I do agree that many honest rational people, even without the bias of existent religion, would at least notice the analogy between the order humans create and the universe itself, and form the wild but neat hypothesis that it was created by an agent. I'm not sure if that analogy is really evidence, any more than the ability of a person to visualize anything is evidence for it.

Replies from: Jack
comment by Jack · 2011-01-20T19:47:42.831Z · LW(p) · GW(p)

We should not expect it to be disordered any more than we should expect it to be ordered.

You can't just not have a prior. There is certainly no reason to assume that the universe as we have found it has the default entropy. And we actually have tools that allow us to estimate this stuff - the complexity of the universe we find ourselves in depends on a very narrow range of values in our physics. Yes, I'm making the fine-tuning argument, and of course knowing this stuff should increase our probability estimate for theism. That doesn't mean P(Jehovah) is anything but minuscule - the prior for an uncreated, omnipotent, omniscient and omni-benevolent God is too low for any of this to justify confident theism.

comment by steven0461 · 2011-01-20T02:15:15.909Z · LW(p) · GW(p)

We find ourselves in a very interesting universe.

Some of it anyway.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T02:17:40.386Z · LW(p) · GW(p)

Isn't it interesting how there's so much raw material that the interesting things can use to make more interesting things?

Replies from: None, DSimon
comment by [deleted] · 2011-01-24T04:26:51.478Z · LW(p) · GW(p)

Really? Your explanation for why there's lots of stuff is that an incredibly powerful benevolent agent made it that way? What does that explanation buy you over just saying that there's lots of stuff?

comment by DSimon · 2011-01-23T21:28:16.086Z · LW(p) · GW(p)

Again, some of it. The vast vast majority of raw material in the universe is not used, and has never been used, for making interesting things.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-24T16:25:43.436Z · LW(p) · GW(p)

Why are you ignoring the future?

Replies from: Perplexed, DSimon
comment by Perplexed · 2011-01-26T23:36:16.947Z · LW(p) · GW(p)

Back when I used to hang around over at talk.origins, one of the scientist/atheists there seemed to think that the sheer size of the universe was the best argument against the theist idea of a universe created for man. He thought it absurd that a dramatic production starring H. sapiens would have such a large budget for stage decoration and backdrops when it begins with such a small budget for costumes - at least in the first act.

Your apparent argument is that a big universe is evidence that Someone has big plans for us. The outstanding merit of your suggestion, to my mind, is that his argument and your anti-argument, if brought into contact, will mutually annihilate leaving nothing but a puff of smoke.

comment by DSimon · 2011-01-26T13:13:45.209Z · LW(p) · GW(p)

Are you proposing that in the future we will necessarily end up using some large proportion of the universe's material for making interesting things? I mean, I agree that that's possible, but it hardly seems inevitable.

Replies from: timtyler, Will_Newsome
comment by timtyler · 2011-01-27T09:30:36.531Z · LW(p) · GW(p)

I think that is more-or-less the idea, yes - though you can drop the "necessarily".

Don't judge the play by the first few seconds.

Replies from: DSimon
comment by DSimon · 2011-01-28T13:49:00.639Z · LW(p) · GW(p)

The reason I put in "necessarily" is because it seems like Will Newsome's anthropic argument requires that the universe was designed specifically for interesting stuff to happen. If it's not close to inevitable, why didn't the designer do a better job?

Replies from: timtyler
comment by timtyler · 2011-01-28T21:36:43.597Z · LW(p) · GW(p)

Maybe there's no designer. Will doesn't say he's 100% certain - just that he thinks interestingness is "Bayesian evidence" for a designer.

I think this is a fairly common sentiment - e.g. see Hanson.

comment by Will_Newsome · 2011-01-29T01:21:45.967Z · LW(p) · GW(p)

Necessarily? Er... no. But I find the arguments for a decent chance of a technological singularity to be pretty persuasive. This isn't much evidence in favor of us being primarily computed by other mind-like processes (as opposed to getting most of our reality fluid from some intuitively simpler more physics-like computation in the universal prior specification), but it's something. Especially so if a speed prior is a more realistic approximation of optimal induction over really large hypothesis spaces than a universal prior is, which I hope is true since I think it'd be annoying to have to get our decision theories to be able to reason about hypercomputation...

comment by [deleted] · 2011-01-20T02:02:55.985Z · LW(p) · GW(p)

I should really write a post on the principle of charity...

Yes!

Replies from: Document
comment by Document · 2011-01-20T04:19:05.693Z · LW(p) · GW(p)

Possible prior work: Why and how to debate charitably, by User:pdf23ds.

comment by Perplexed · 2011-01-20T04:06:43.171Z · LW(p) · GW(p)

I contend that there is evidence for a god. Observation: Things tend to have causes. Observation: Agenty things are better at causing interesting things than non-agenty things. Observation: We find ourselves in a very interesting universe.

Those considerations are Bayesian evidence.

Your choice of wording here makes it obvious that you are aware of the counter-argument based on the Anthropic Principle. (Observation: uninteresting venues tend not to be populated by observers.) So, what is your real point?

Replies from: magfrump, Will_Sawin
comment by magfrump · 2011-01-20T06:46:33.372Z · LW(p) · GW(p)

I would think "Observers who find their surroundings interesting duplicate their observer-ness better" is an even-less-mind-bending anthropic-style argument.

Also this keeps clear that "interesting" is more a property of observers than of places.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-20T13:52:28.048Z · LW(p) · GW(p)

(nods) Yeah, I would expect life forms that fail to be interested in the aspects of their surroundings that pertain to their ability to produce successful offspring to die out pretty quickly.

That said, once you're talking about life forms with sufficiently general intelligences that they become interested in things not directly related to that, it starts being meaningful to talk about phenomena of more general interest.

Of course, "general" does not mean "universal."

comment by Will_Sawin · 2011-01-23T22:48:11.407Z · LW(p) · GW(p)

If we have a prior of 100 to 1 against agent-caused universes, and .1% of non-agent universes have observers observing interestingness while 50% of agent-caused universes have it, what is the posterior probability of being in an agent-caused universe?

Replies from: Perplexed, datadataeverywhere
comment by Perplexed · 2011-01-24T01:09:12.552Z · LW(p) · GW(p)

I make it about 83% if you ignore the anthropic issues (by assuming that all universes have observers, or that having observers is independent of being interesting, for example). But if you want to take anthropic issues into account, you are only allowed to take the interestingness of this universe as evidence, not its observer-ladenness. So the answer would have to be "not enough data".
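
Spelling out the arithmetic behind that 83% (a minimal sketch in Python; the numbers are Will_Sawin's illustrative ones, and the update is ordinary odds-form Bayes):

```python
# Odds-form Bayes update with the illustrative numbers above.
prior_odds = 1 / 100                 # 100 to 1 against an agent-caused universe
likelihood_ratio = 0.50 / 0.001      # P(interesting | agent) / P(interesting | no agent)

posterior_odds = prior_odds * likelihood_ratio          # 5.0
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)                # 0.8333... -> "about 83%"
```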

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-24T02:32:05.165Z · LW(p) · GW(p)

You can't not be allowed to take the observer-ladenness of a universe as evidence.

Limiting case: Property X is true of a universe if and only if it has observers. May we take the fact that observers exist in our universe as evidence that observers exist there?

comment by datadataeverywhere · 2011-01-23T23:30:24.031Z · LW(p) · GW(p)

I have no idea what probability should be assigned to non-agent universes having observers observing interesting things (though for agent universes, 50% seems too low), but I also think your prior is too high.

I think there is some probability that there are no substantial universe simulations, and some probability that the vast majority of universes are simulations, but even if we live in a multiverse where simulated universes are commonplace, our particular universe seems like a very odd choice to simulate unless the basement universe is very similar to our own. I also assign a (very) small probability to the proposition that our universe is computationally capable of simulating universes like itself (even with extreme time dilation), so that also seems unlikely.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-24T02:33:20.798Z · LW(p) · GW(p)

Probabilities were for example purposes only. I made them up because they were nice to calculate with and sounded halfway reasonable. I will not defend them. If you request that I come up with my real probability estimates, I will have to think harder.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-01-24T02:50:48.106Z · LW(p) · GW(p)

Ah, well your more general point was well-made. I don't think better numbers are really important. It's all too fuzzy for me to be at all confident about.

I still retain my belief that it is implausible that we are in a universe simulation. If I am in a simulation, I expect that it is more likely that I am by myself (and that conscious or not, you are part of the simulation created in response to me), moderately more likely that there are a small group of humans being simulated with other humans and their environment dynamically generated, and overall very unlikely that the creators have bothered to simulate any part of physical reality that we aren't directly observing (including other people). Ultimately, none of these seem likely enough for me to bother considering for very long.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T05:47:45.350Z · LW(p) · GW(p)

The first part of your belief that "it is implausible that we are in a universe simulation" appears to be based on the argument:

If simulationism, then solipsism is likely.

Solipsism is unlikely, so . . .

Chain of logic aside, simulationism does not imply solipsism. Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations. So some simulated individuals may exist in small solipsist sims, but the great majority of conscious sims will find themselves in larger shared simulations.

Presumably a posthuman intelligence on earth would be interested in earth as a whole system, and would simulate this entire system. Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.

There is a massive sweet spot - an extremely efficient method - for simulating a modern computer, which is to simulate it at the level of its Turing-equivalent circuit. Simulating it at a level below this - say, at the molecular level - is just a massive waste of resources, while any simulation above this level loses accuracy completely.

It is postulated that a similar simulation scale separation exists for human minds, which naturally relates to uploads and AI.

Replies from: datadataeverywhere, Desrtopa
comment by datadataeverywhere · 2011-01-26T07:54:39.079Z · LW(p) · GW(p)

Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.

I don't understand why human-mind equivalents are special in this regard. This seems very anthropocentric, but I could certainly be misinterpreting what you said.

Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations.

Cheaper, but not necessarily more efficient. It matters which answers one is looking for, or which goals one is after. It seems unlikely to me that my life is directed well enough to achieve interesting goals or answer interesting questions that a superintelligence might pose, but it seems even more unlikely that simulating 6 billion humans, in the particular way they appear (to me) to exist, is an efficient way to answer most questions either.

I'd like to stay away from telling God what to be interested in, but out of the infinite space of possibilities, Earth seems too banal and languorous to be the one in N chosen for the purpose of simulation, especially if the basement universe has a different physics.

If the basement universe matches our physics, I'm betting on the side that says simulating all the minds on Earth and enough other stuff to make the simulation consistent is an expensive enough proposition that it won't be worthwhile to do it many times. Maybe I'm wrong; there's no particular reason why simulating all of humanity in the year 2011 needs to take more than 10^18 J, so maybe there's a "real" milky way that's currently running 10^18 planet-scale sims. Even that doesn't seem like a big enough number to convince me that we are likely to be one of those.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T09:00:52.040Z · LW(p) · GW(p)

Simulating full human-mind equivalents is something of a sweet spot in the space of approximations.

I don't understand why human-mind equivalents are special in this regard. This seems very anthropocentric, but I could certainly be misinterpreting what you said.

I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.

Simulating N localized space-time patterns in one large simulation can be significantly cheaper than simulating N individual human simulations.

Cheaper, but not necessarily more efficient.

Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.

A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.

But of course a world with just one mind is not an accurate simulation, so now you need to populate it with a huge number of pseudo-minds which are functionally indistinguishable from the perspective of our sole real observer but somehow use far fewer computational resources.

Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped with a single large spike near the scale separation point.

The jumping point is where the pseudo-mind becomes a real, actual conscious observer in its own right.

The rationale for this cost model and the scale separation point can be derived from what we know about simulating computers.

It seems unlikely to me that my life is directed well enough to achieve interesting goals or answer interesting questions that a superintelligence might pose, but it seems even more unlikely that simulating 6 billion humans, in the particular way they appear (to me) to exist, is an efficient way to answer most questions either.

Perhaps not your life in particular, but human life on earth today?

Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.

Earth seems too banal and languorous to be the one in N that have been chosen for the purpose of simulation, especially if the basement universe has a different physics.

The basement reality is highly unlikely to have different physics. The vast majority of simulations we create today are based on approximations of currently understood physics, and I don't expect this to ever change - simulations have utility for simulators.

so maybe there's a "real" milky way that's currently running 10^18 planet-scale sims. Even that doesn't seem like a big enough number to convince me that we are likely to be one of those.

I'm a little confused about the 10^18 number.

From what I recall, at the limits of computation one kg of matter can hold roughly 10^30 bits, and a human mind is in the vicinity of 10^15 bits or less. So at the molecular limits a kg of matter could hold around a quadrillion souls - an entire human galactic civilization. A skyscraper of such matter could give you 10^8 kg . . . and so on. Long before reaching physical limits, posthumans would be able to simulate many billions of entire earth histories. At the physical molecular limits, they could turn each of the moon's roughly 10^22 kg into an entire human civilization, for a total of 10^37 minds.

The potential time-scale compression is nearly as vast - with estimated speed limits of around 10^15 ops/bit/sec in ordinary matter at ordinary temperatures, vs at most 10^4 ops/bit/sec in human brains; today's circuits, at around 10^9 ops/bit/sec, are already much closer to the limit. The potential speedup of more than 10^10 over biological brains allows for about one hundred years per second of sidereal time.
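
A quick back-of-the-envelope check of those figures (a sketch in Python; the speedup, bit-density, and moon-mass values are the rough estimates from the paragraphs above, not independently sourced):

```python
SECONDS_PER_YEAR = 3.15e7

# Time compression: a 10^10 speedup over biological thought.
speedup = 1e10
print(speedup / SECONDS_PER_YEAR)    # ~317 subjective years per sidereal second
                                     # ("one hundred years per second" is the order of magnitude)

# Storage: minds per kg at the molecular limits, scaled up to the moon.
minds_per_kg = 1e30 / 1e15           # bits per kg at the limit / bits per human mind
moon_mass_kg = 1e22                  # rough figure used above
print(minds_per_kg * moon_mass_kg)   # 1e37 minds
```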

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-01-26T14:51:29.288Z · LW(p) · GW(p)

I meant there is probably some sweet spot in the space of [human-mind] approximations, because of scale separation, which I elaborated on a little later with the computer analogy.

I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.

Cheaper implies more efficient, unless the individual human simulations somehow have a dramatically higher per capita utility.

Which seems pretty reasonable to me. Why should the value of simulating minds be linear rather than logarithmic in the number of minds?

A solipsist universe has extraneous patchwork complexity. Even assuming that all of the non-biological physical processes are grossly approximated (not unreasonable given current simulation theory in graphics), they still may add up to a cost exceeding that of one human mind.

Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.

Now imagine a graph of simulation accuracy vs computational cost of a pseudo-mind. Rather than being linear, I believe it is sharply exponential, or J-shaped with a single large spike near the scale separation point.

I agree, though as a minor note if cost is the Y-axis the graph has to have a vertical asymptote, so it has to grow much faster than exponential at the end. Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

The jumping point is where the pseudo-mind becomes a real actual conscious observer of it's own.

I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline with experiences that their conscious mind will integrate and patch up without noticing. I'm not sure this would work with many mind-types, but I think it would work with human minds, which have a strong bias to maintaining coherence, even at the cost of ignoring reality. If I'm being simulated, I suspect that this is happening even to me on a regular basis, and possibly happening much more often the less I interact with someone.

Perhaps not your life in particular, but human life on earth today?

Simulating 6 billion humans will probably be the only way to truly understand what happened today from the perspective of our future posthuman descendants. The alternatives are . . . creating new physical planets? Simulation will be vastly more efficient than that.

Updating on the condition that we closely match the ancestors of our simulators, I think it's pretty reasonable that we could be chosen to be simulated. This is really the only plausible reason I can think of to choose us in particular. I'm still dubious as to the value doing so will have to our descendants.

I'm a little confused about the 10^18 number.

Actually, I made a mistake, so it's reasonable to be confused. 20 W seems to be a reasonable upper limit to the cost of simulating a human mind. I don't know how much lower the lower bound should be, but it might not be more than an order of magnitude less. This gives 10^11 W for six billion, (4x) 10^18 J for one year.

I don't think it's reasonable to expect all the matter in the domain of a future civilization to be used to its computational capacity. I think it's much more likely that the energy output of the Milky Way is a reasonably likely bound on how much computation will go on there. This certainly doesn't have to be the case, but I don't see superintelligences annihilating matter at a dramatically faster rate in order to provide massively more power to the remainder of the matter around. The universe is going to die soon enough as it is. (I could be very short-sighted about this.) Anyway, the power output of the Milky Way is around 5x10^36 W. I divided this by Joules instead of by Watts, so the second number I gave was 10^18, when it should have been (4x) 10^25.
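
The corrected arithmetic, spelled out (a sketch; every constant is one of the estimates stated above, not an independently sourced figure):

```python
SECONDS_PER_YEAR = 3.15e7

watts_per_mind = 20                  # assumed upper limit for simulating one human mind
minds = 6e9
sim_power = watts_per_mind * minds   # 1.2e11 W for all of present humanity
print(sim_power * SECONDS_PER_YEAR)  # ~3.8e18 J per year -> the "(4x) 10^18 J" above

milky_way_power = 5e36               # W, rough output of the galaxy
print(milky_way_power / sim_power)   # ~4.2e25 concurrent humanity-scale sims
```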

I maintain that energy, not quantum limits of computation in matter, will bound computational cost on the large scale. Throwing our moon into the Sun in order to get energy out of it is probably a better use of it as raw materials than turning it into circuitry. Likewise for time compression, convince me that power isn't a problem.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T21:31:00.854Z · LW(p) · GW(p)

I understand that for any mind, there is probably an "ideal simulation level" which has the fidelity of a more expensive simulation at a much lower cost, but I still don't understand why human-mind equivalents are important here.

Simply because we are discussing simulating the historical period in which we currently exist.

Why should the value of simulating minds be linear rather than logarithmic in the number of minds?

The premise of the SA is that the posthuman 'gods' will be interested in simulating their history. That history does not consist of a smattering of single humans isolated in boxes, but of the civilization as a whole system.

Agreed, but I also think that the cost of simulating the relevant stuff necessary to simulate N minds might be close to linear in N.

If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.

Imagine the flow of information in my brain. Imagine the flow of causality extending back in time - the flow of information weighted by its probabilistic utility in determining my current state.

The stuff in my immediate vicinity is important, and the importance generally falls off according to an inverse-square law with distance from my brain. Moreover, even of the stuff near me at one time step, only a tiny portion is relevant. At this moment my brain is filtering out almost everything except the screen right in front of me, which is causally determined by a program running on my computer, which depends on recent information from another computer - a server somewhere in the Midwest - which in turn depended on information flowing out from your brain previously . . . and so on.

So simulating me would more or less require simulating you as well; it's very hard to isolate a mind. You might as well try to simulate just my left prefrontal cortex. The entire distinction of where one mind begins and ends is something of a spatial illusion that disappears when you map out the full causal web.

Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

If you want to simulate some program running on one computer on a new machine, there is an exact vertical inflection wall in the space of approximations where you get a perfect simulation which is just the same program running on the new machine. This simulated program is in fact indistinguishable from the original.

I suspect that some pseudo-minds must be conscious observers some of the time, but that they can be turned off most of the time and just be updated offline

Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per-mind basis. Really it's best to think of the entire earth as a mind for simulation purposes.

Could you turn off part of the cortex and replace it with a rough simulation some of the time without compromising the whole system? Perhaps sometimes, but I doubt that this can give a massive gain.

I'm still dubious as to the value doing so will have to our descendants.

Why do we currently simulate (think about) our history? To better understand ourselves and our future.

I believe there are several converging reasons to suspect that vaguely human-like minds will turn out to be a persistent pattern for a long time - perhaps as persistent as eukaryotic cells. Adaptive radiation will create many specializations and variations, but the basic pattern of a roughly 10^15-bit mind and its general architecture may turn out to be a fecund replicator and building block for higher-level pattern entities.

It seems plausible that some of these posthumans will actually descend from biological humans alive today. They will be very interested in their ancestors, and especially the ancestors they knew in their former life who died without being uploaded or preserved.

Humans have been thinking about this for a while. If you could upload and enter virtual heaven, you could have just about anything that you want. However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on.

So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

20 W seems to be a reasonable upper limit to the cost of simulating a human mind. I don't know how much lower the lower bound should be, but it might not be more than an order of magnitude less.

You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.

We already are within six orders of magnitude of the speed limit of ordinary matter (10^9 bit ops/sec vs 10^15), and there is every reason to suspect we will get roughly as close to the density limit.

I maintain that energy, not quantum limits of computation in matter, will bound computational cost on the large scale.

There are several measures - the number of bits storable per unit mass determines how many human souls you can store in memory per unit mass.

Energy relates to the bit operations per second and the speed of simulated time.

I was assuming computing at regular earth temperatures, within the range of current brains and computers. At the limits of computation discussed earlier, 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20 W and can simulate roughly 10^15 virtual humans 10^10 times faster than the current human rate of thought. This works out to about one hundred years per second.

So at the limits of computation, 1 kg of ordinary matter at room temperature should give about 10^25 human lifetimes per joule. One square meter of high efficiency solar panel could power several hundred kilograms of computational substrate.

So at the limits of computation, future posthuman civilizations could simulate truly astronomical number of human lifetimes in one second using less power and mass than our current civilization.

No need to disassemble planets. Using the whole surface of a planet gives a multiplier of 10^14 over a single kilogram. Using the entire mass only gives a further 10^8 multiple over that or so, and is much, much more complex and costly to engineer. (When you start thinking of energy in terms of human souls, this becomes morally relevant.)

If this posthuman civilization simulates human history for a billion years instead of a second, this gives another 10^16 multiplier.

Using much more reasonable middle of the road estimates:

  • Say tech may bottom out at a limit within half (in exponential terms) of the maximum - say 10^13 human lifetimes per kg per joule vs 10^25.

  • The posthuman civ stabilizes at around 10^10 1kg computers (not much more than we have today).

  • The posthuman civ engages in historical simulation for just one year. (10^7 seconds).

That is still 10^30 simulated human lifetimes, vs roughly 10^11 lifetimes in our current observational history.

Those are still astronomical odds for observing that we currently live in a sim.
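
Multiplied through, the bullet-point estimate looks like this (a sketch that takes the figures above at face value and assumes each 1 kg computer draws roughly 1 W, per the earlier room-temperature estimate):

```python
lifetimes_per_kg_per_joule = 1e13    # assumed efficiency after tech "bottoms out"
computers = 1e10                     # number of 1 kg computers
duration_s = 1e7                     # ~one year of historical simulation
watts_per_computer = 1.0             # assumed draw of a 1 kg substrate

joules_per_computer = watts_per_computer * duration_s             # 1e7 J
simulated = lifetimes_per_kg_per_joule * joules_per_computer * computers
print(simulated)                     # 1e30 simulated human lifetimes

observed = 1e11                      # rough count of lifetimes in our observed history
print(simulated / observed)          # 1e19 -> the "astronomical odds"
```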

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-01-27T00:29:20.273Z · LW(p) · GW(p)

This is very upsetting, I don't have anything like the time I need to keep participating in this thread, but it remains interesting. I would like to respond completely, which means that I would like to set it aside, but I'm confident that if I do so I will never get back to it. Therefore, please forgive me for only responding to a fraction of what you're saying.

If the N minds were separated by vast gulfs of space and time this would be true, but we are talking about highly connected systems.

I thought context made it clear that I was only talking about the non-mind stuff being simulated as being an additional cost perhaps nearly linear in N. Very little of what we directly observe overlaps except our interaction with each other, and this was all I was talking about.

Regardless, I don't think we can be confident that consciousness occurs at an inflection point or a noticeable bend.

Why can't a poor model (low fidelity) be conscious? We just don't know enough about consciousness to answer this question.

Yes, but because of the network effects mentioned earlier it would be difficult and costly to do this on a per mind basis. Really it's best to think of the entire earth as a mind for simulation purposes.

I really disagree, but I don't have time to exchange each other's posteriors, so assume this dropped.

However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on [...] So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

You are basically taking the current efficiency of human brains as the limit, which of course is ridiculous on several fronts. We may not reach the absolute limits of computation, but they are the starting point for the SA.

I said it was a reasonable upper bound, not a reasonable lower bound. That seems trivial.

I was assuming computing at regular earth temperatures within the range of current brains and computers. At the limits of computation discussed earlier 1 kg of matter at normal temperatures implies an energy flow of around 1 to 20W and can simulate roughly 10^15 virtual humans 10^10 faster than current human rate of thought. This works out to about one hundred years per second.

Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible. That leaves us to debate how much of it can, but personally I see no reason that the computational minimum cost will be closely approached (even in an exponential sense). I am interested in your reasoning for why this should be the case, though, so please give me what you can in the way of references that led you to this belief.

Lastly, but most importantly (to me), how strongly do you personally believe that a) you are a simulation and that b) all entities on Earth are full-featured simulations as well?

Conditioning on (b) being true, how long ago (in subjective time) do you think our simulation started, and how many times do you believe it has (or will be) replicated?

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-27T01:47:57.972Z · LW(p) · GW(p)

Very little of what we directly observe overlaps except our interaction with each other, and this was all I was talking about.

If I were to quantify your 'very little', I'd guess you mean, say, < 1% observational overlap.

Let's look at the rough storage cost first. Ignoring variable data priority through selective attention for the moment, the data-resolution needs for a simulated earth can be related to photons incident on the retina, and they fall off with an inverse-square law from the observer.

We can make a 2D simplification and use Google Earth as an example. If there were just one 'real' observer, you'd need full data fidelity for the surface area that observer would experience up close during his/her lifetime, and this cost dominates. Let's say that's S, S ~ 100 km^2.

Simulating an entire planet, the data cost is roughly fixed or capped - at 5x10^8 km^2.

So in this model simulating an entire earth with 5 billion people will have a base cost of 5x10^8 km^2, and simulating 5 billion worlds separately will have a cost of 5x10^9 * S.

So unless S is pathetically small (actually smaller than the area within human visual range), this implies a large extra cost to the solipsist approach. From my rough estimate of S, the solipsist approach is 1,000 times more expensive. This also assumes that humans are randomly distributed, which of course is unrealistic. In reality human populations are tightly clustered, which further increases the relative gain of shared simulation.
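
As a sanity check on that factor of 1,000 (a sketch using the rough figures above):

```python
S = 100.0                  # km^2 of full-fidelity terrain one observer experiences up close
earth_surface = 5e8        # km^2, the cap for one shared planet-wide simulation
observers = 5e9

shared_cost = earth_surface            # one world, shared by everyone
solipsist_cost = observers * S         # one private world per observer
print(solipsist_cost / shared_cost)    # 1000.0 -> solipsism is ~1,000x more expensive
```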

However, one thing you may very much desire would be reunification with former loved ones, dead ancestors, and so on [...] So once you have enough computational power, I suspect there will be a desire to use it in an attempt to resurrect the dead.

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

Evil?

Why?

Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible.

I'm not sure what you mean by this. Does all of the circuitry of the brain perform computation? Over time, yes. The most efficient brain simulations will of course be emulations - circuits that are very similar to the brain but built on much smaller scales on a new substrate.

That leaves us to debate how much of it can, but personally I see no reason that the computational minimum cost will be closely approached (even in an exponential sense)

My main reference for the ultimate limits is Seth Lloyd's "Ultimate physical limits to computation". The Singularity is Near discusses much of this as well, of course (but he mainly uses the more misleading ops per second, which is much less well defined).

Biological circuits switch at 10^3 to 10^4 bit flips/second. Our computers went from around that speed in WWII to the current speed plateau of around 10^9 bit flips/second, reached early this century. The theoretical limit for regular molecular matter is around 10^15 bit flips/second. (A black hole could reach a much, much higher speed limit, as discussed in Lloyd's paper.) There are experimental circuits that currently approach 10^12 bit flips/second.

In terms of density, we went from about 1 bit / kg around WWII to roughly 10^13 bits / kg today. The brain is about 10^15 bits / kg, so we will soon surpass it in circuit density. The juncture we are approaching (brain density) is about half-way, in exponential terms, to the maximum of 10^30 bits/kg. This has been analyzed extensively in the hardware community and it looks like we will approach these limits as well sometime this century. It is entirely practical to store 1 bit (or more) per molecule.
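
On a log scale, the bookkeeping in these two paragraphs looks like this (a sketch; all inputs are the estimates quoted above):

```python
import math

# Speed, in bit flips per second.
brain_speed, circuit_speed, limit_speed = 1e4, 1e9, 1e15
print(math.log10(limit_speed / circuit_speed))   # 6.0 -> six orders of magnitude of speed headroom left

# Density, in bits per kg.
circuit_density, brain_density, limit_density = 1e13, 1e15, 1e30
print(math.log10(brain_density), math.log10(limit_density))  # 15 vs 30: brain density is the log-scale halfway point
```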

Lastly, but most importantly (to me), how strongly do you personally believe that a) you are a simulation and that b) all entities on Earth are full-featured simulations as well?

A and B are closely correlated. It's difficult to quantify my belief in A, but it's probably greater than 50%.

I've thought a little about your last question but I don't yet even see a route to estimating it. Such questions will probably require a more advanced understanding of simulation.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-01-27T07:04:52.011Z · LW(p) · GW(p)

If there were just one 'real' observer, you'd need full data fidelity for the surface area that observer would experience up close during his/her lifetime, and this cost dominates. Let's say that's S, S ~ 100 km^2.

I feel like this would make you a terrible video game designer :-P. Why should we bother simulating things in full fidelity, all the time, just because they will eventually be seen? The only full-fidelity simulation we should need is the stuff being directly examined. Much rougher algorithms should suffice for things not being directly observed.

Most importantly, you're assuming that all circuitry performs computation, which is clearly impossible.

I'm not sure what you mean by this. Does all of the circuitry of the brain perform computation? Over time, yes. The most efficient brain simulations will of course be emulations - circuits that are very similar to the brain but built on much smaller scales on a new substrate.

Heh, my ability to argue is getting worse and worse. You sure you want to continue this thread? What I meant to say (and entirely failed to) is that there is an infrastructure cost; we can't expect to compute with every particle, because we need lots of particles to make sure the others stay confined, get instructions, etc. Basically, not all matter can be a bit at the same time.

It is entirely practical to store 1 bit (or more) per molecule.

Again, infrastructure costs. Can you source this (also Lloyd?)?

For the rest, I'm aware of and don't dispute the speeds and densities you mention. What I'm skeptical of is that we have evidence that they are practicable; this was what I was looking for. I don't count the previous success of Moore's Law as strong evidence that we will continue getting better at computation until we hit physical limits. I'm particularly skeptical about how well we will ever do on power consumption (partially because it's such a hard problem for us now).

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

Evil? Why?

The idea that I did not have to live this life, that some entity or civilization has created the environment in which I've experienced so much misery, and that they will do it again and again makes me shake with impotent rage. I cannot express how much I would rather have never existed. The fact that they would do this and so much worse (because my life is an astoundingly far cry from the worst that people deal with), again, and again, to trillions upon trillions of living, feeling beings... I cannot express my sorrow. It literally brings me to tears.

This is not sadism; or it would be far worse. It is rather a total neglect of care, a relegation of my values in place of historical interest. However, I still consider this evil in the highest degree.

I do not reject the existence of evil, and therefore this provides no evidence against the hypothesis that I am simulated. However, if I believe that I have a high chance of being simulated, I should do all that I can to prevent such an entity from ever coming to exist with such power, on the off chance that I am one not simulated, and able to prevent such evil from unfolding.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-27T08:15:02.404Z · LW(p) · GW(p)

Why should we bother simulating things in full fidelity, all the time, just because they will eventually be seen? The only full-fidelity simulation we should need is the stuff being directly examined. Much rougher algorithms should suffice for things not being directly observed.

Of course you're on the right track here - and I discussed spatially variant fidelity simulation earlier. The rough surface-area metric was a simplification of storage/data-generation costs, which are a separate issue from computational cost.

If you want the most bare-bones efficient simulation, I imagine a reverse hierarchical induction approach that generates the reality directly from the belief network of the simulated observer, a technique modeled directly on human dreaming.

However, this is most useful only if the goal is just to generate an interesting reality. If the goal is to regenerate an entire historical period accurately, you can't start with the simulated observers - they are greater unknowns than the environment itself.

The solipsist issue may not have discernible consequences, but overall the computational scaling for emulating more humans in one world is sublinear - probably significantly so, because of the large causal overlap of human minds via language.

It is entirely practical to store 1 bit (or more) per molecule.

Again, infrastructure costs. Can you source this (also Lloyd?)?

Physical Limits of Computation

What I'm skeptical of is that we have evidence that they are practicable; this was what I was looking for.

The intellectual work required to show an ultimate theoretical limit is tractable, but showing that achieving said limit is impossible in practice is very difficult.

I'm pretty sure we won't actually hit the physical limits exactly; it's just a question of how close. If you look at our historical progress in speed and density to date, it suggests that we will probably go most of the way.

Another simple assessment related to the doomsday argument: I don't know how long this Moore's Law progression will carry on, but it's lasted for 50 years now, so I give reasonable odds that it will last another 50. Simple, but surprisingly better than nothing.

A more powerful line of reasoning perhaps is this: as long as there is an economic incentive to continue Moore's Law and room to push against the physical limits, ceteris paribus, we will make some progress and push towards those limits. Thus, eventually we will reach them.

I'm particularly skeptical about how well we will ever do on power consumption (partially because it's such a hard problem for us now).

Power density depends on clock rate, which has plateaued. Power efficiency, in terms of ops/joule, increases directly with transistor density.

I think this is evil, but I'm not willing to say whether the future intelligences will agree or care.

Evil? Why?

I cannot express how much I would rather have never existed.

This is somewhat concerning, and, I believe, atypical. Not existing is perhaps the worst thing I can possibly imagine, other than infinite torture.

It is rather a total neglect of care, a relegation of my values in place of historical interest.

I'm not sure if 'historical interest' is quite the right term. Historical recreation or resurrection might be more accurate.

A paradise designed to maximally satisfy current human values and eliminate suffering is not a world which could possibly create or resurrect us.

You literally couldn't have grown up in that world; the entire idea is a non sequitur. Your mind's state is a causal chain rooted in the gritty reality of this world with all of its suffering.

Imagining that your creator could have assigned you to a different world is like imagining you could have grown up with different parents. You couldn't have. That would be somebody else completely.

Of course, if said creator exists, and if said creator values what you value in the way you value it (dubious), it could whisk you away to paradise tomorrow.

But I wouldn't count on that - perhaps said creator is still working on you, or doesn't think paradise is a useful place for you, or couldn't care less.

In the face of such uncertainty, we can only task ourselves with building paradise.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-01-27T14:46:25.213Z · LW(p) · GW(p)

However, this is most useful only if the goal is just to generate an interesting reality. If the goal is to regenerate an entire historical period accurately, you can't start with the simulated observers - they are greater unknowns than the environment itself.

I believe we're arguing along two paths here, and it is getting muddled. Applying to both: I think one can run the world-per-person sim much more cheaply than you originally suggested, long before hitting the point where the sim is no longer accurate to the world except where it intersects the observer's attention.

Second, from my perspective you're begging the question, since I was talking about a variety of reasons for simulation and arguing that simulating a single entity seems as reasonable as many - but you seem only to be concerned with historical recreation, in which case it seems obvious to me that a large group of minds is necessary. If we're only talking about that case, the arguments along this line about the per-mind cost just aren't very relevant.

I'm getting a 404 on your link; I'll try again later.

Another simple assessment related to the doomsday argument: I don't know how long this Moore's Law progression will carry on, but it's lasted for 50 years now, so I give reasonable odds that it will last another 50. Simple, but surprisingly better than nothing.

Interesting, I haven't heard that argument applied to Moore's Law. Question: you arrive at a train crossing (there are no other cars on the road), and just as you get there, a train begins to cross before you can. Something goes wrong, and the train stops, and backs up, and goes forward, and stops again, and keeps doing this. (This actually happened to me.) 10 minutes later, should you expect that you have around 10 minutes left? After those have passed, should your new expectation be that you have around 20 minutes left?

The answer is possibly yes. I think better results would be obtained by using a Jeffreys prior. However, I've talked to a few statisticians about this problem, and no one has given me a clear answer. I don't think they're used to working with so little data.
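
One way to formalize the train problem (a sketch, not a settled answer: it assumes a scale-invariant Jeffreys prior, p(T) proportional to 1/T, over the total delay T, and that your arrival falls uniformly within the delay - essentially the Copernican 'delta t' setup):

```python
import random

def sample_posterior_total(t_elapsed: float) -> float:
    """Sample the total delay T from p(T | t_elapsed) proportional to T**-2 on [t_elapsed, inf),
    the posterior given a Jeffreys prior p(T) ~ 1/T and a uniform arrival time within T."""
    u = random.random()                # u in [0, 1)
    return t_elapsed / (1.0 - u)       # inverse-CDF sampling: F(T) = 1 - t/T

t = 10.0                               # minutes already spent waiting at the crossing
remaining = sorted(sample_posterior_total(t) - t for _ in range(100_001))
print(remaining[len(remaining) // 2])  # median ~10: expect about as long again as you have already waited
```

Under those assumptions the answer to both questions is yes, at least for the posterior median: after 10 minutes, expect roughly 10 more; after 20, roughly 20 more. (The posterior mean is infinite, which may be part of why the statisticians hedged.)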

A more powerful line of reasoning perhaps is this: as long as there is an economic incentive to continue Moore's Law and room to push against the physical limits, ceteris paribus, we will make some progress and push towards those limits. Thus, eventually we will reach them.

Revise to say "and room to push against the practicable limits" and you will see where my argument lies despite my general agreement with this statement.

Power efficiency, in terms of ops/joule, increases directly with transistor density.

To my knowledge, this is incorrect. Increases in transistor density have dramatically increased circuit leakage (because of bumping into quantum tunneling), requiring more power per transistor in order to accurately distinguish one path from another. I saw a roundtable about proposed techniques for increasing processor efficiency. None of the attendees objected to the introduction, which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density, and would render all modern circuit designs inoperable if they were to be logically extended without addressing the problem of quantum leakage.

I cannot express how much I would rather have never existed.

This is somewhat concerning, and, I believe, atypical. Not existing is perhaps the worst thing I can possibly imagine, other than infinite torture.

If you didn't exist in the first place, you wouldn't care. Do you think you've done so much good for the world that your absence could be "the worst thing you can possibly imagine, other than infinite torture"?

Regardless, I'm quite atypical in this regard, but not unique.

You literally couldn't have grown up in that world, the entire idea is a non sequitur. Your mind's state is a causal chain rooted in the gritty reality of this world with all of it's suffering.

Imagining that your creator could have assigned you to a different world is like imagining you could have grown up with different parents. You couldn't have. That would be somebody else completely.

And wouldn't that be so much better.

You propose that not existing would be a terrible evil. But how much better, for all the trillions upon trillions you're proposing must suffer for the creator's whims, would it be to have that computational substrate be used to host entities that have amazingly positive, productive, maximally Fun lives? I know I couldn't have existed in a paradise, but if I'm a sim, there are cycles that could be used for paradise that have been abandoned to create misery and strife.

Again, I think that this may be the world we really are in. I just can't call it a moral one.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-27T18:15:59.087Z · LW(p) · GW(p)

I was talking about a variety of reasons for simulation and arguing that simulating a single entity seems as reasonable as many---but you seem only to be concerned with historical recreation.

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

Power efficiency, in terms of ops/joule, increases directly with transistor density.

To my knowledge, this is incorrect. Increases in transistor density have dramatically increased circuit leakage (because of bumping into quantum tunneling), requiring more power per transistor in order to accurately distinguish one path from another.

If that were actually the case, then there would be no point in moving to a new technology node!

Yes, leakage is a problem at the new tech nodes, but of course power per transistor cannot possibly be increasing. I think you mean power per unit of surface area has increased.

Shrinking a circuit by half in each dimension makes the wires thinner, shorter and less resistant, decreasing power use per transistor just as you'd think. Leakage makes this decrease somewhat less than the shrinkage rate, but it doesn't reverse the entire trend.

There are also other design trends that can compensate and overpower this to an extent, which is why we have a plethora of power efficient circuits in the modern handheld market.

"which mentioned that the increased waste heat from modern circuits was rising at a faster exponential than circuit density"

Do you remember when this was from or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed.

I recall seeing some slides from NVidia claiming their next GPU architecture will cut power use per transistor dramatically as well - at several times the rate of shrinkage.

You propose that not existing would be a terrible evil. But how much better, for all the trillions upon trillions you're proposing must suffer for the creator's whims, would it be to have that computational substrate be used to host entities that have amazingly positive, productive, maximally Fun lives?

Even if the goal is maximizing fun, creating some historical sims for the purpose of resurrecting the dead may serve that goal. But I really doubt that current-human-fun-maximization is an evolutionarily stable goal system.

I imagine that future posthuman morality and goals will evolve into something quite different.

Knowledge is a universal feature of intelligence. Even the purely mathematical hypothetical superintelligence AIXI would end up creating tons of historical simulations - and that might be hopelessly brute force, but nonetheless superintelligences with a wide variety of goal systems would find utility in various types of simulation.

Replies from: Desrtopa, datadataeverywhere
comment by Desrtopa · 2011-01-27T19:21:47.855Z · LW(p) · GW(p)

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

Much of the information from the past is probably irretrievably lost to us. If the information input into the simulation were not precisely the same as the actual information from that point in history, the differences would quickly propagate so that the simulation would bear little resemblance to the history. Supposing the individuals in question did have access to all the information they'd need to simulate the past, they'd have no need for the simulation, because they'd already have complete informational access to the past. It suffers similar problems to your sandboxed anthropomorphic AI proposal; provided you have all the resources necessary to actually do it, it ceases to be a good idea.

There are other possible motivations, but it's not clear that there are any others that are as good or better, so we have little reason to suppose it will ever happen.

comment by datadataeverywhere · 2011-01-27T19:07:23.376Z · LW(p) · GW(p)

Historical recreation currently seems to be the best rationale for a superintelligence to simulate this timeslice, although there are probably other motivations as well.

This seems to be overly restrictive, but I don't mind confining the discussion to this hypothesis.

I think you mean power per surface area has increased.

Yes, you are correct.

Do you remember when this was from or have a link? I could see that being true when speeds were also increasing, but that trend has stopped or reversed.

The roundtable was at SC'08, a while after speeds had stabilized, and since it is a supercomputing conference, the focus was on massively parallel systems. It was part of this.

I really doubt that current-human-fun-maximization is an evolutionarily stable goal system. I imagine that future posthuman morality and goals will evolve into something quite different.

Without needing to dispute this, I can remain exceptionally upset that whatever their future morality is, it is blind to suffering and willing to create innumerable beings that will suffer in order to gain historical knowledge. Does this really not bother you in the slightest?

ETA: still 404

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T00:30:36.596Z · LW(p) · GW(p)

The roundtable was at SC'08, a while after speeds had stabilized, and since it is a supercomputing conference, the focus was on massively parallel systems. It was part of this.

While the leakage issue is important and I want to read a little more about this reference, I don't think that any single such current technical issue is nearly sufficient to change the general analysis. There have always been major issues on the horizon; the question is more one of the increase in engineering difficulty as we progress vs. the increase in our effective intelligence and simulation capacity.

In the specific case of leakage, even if it is a problem that persists far into the future, it just slightly lowers the growth exponent as we somewhat lower the clock speeds. And even if leakage can never be fully prevented, eventually it itself can probably be exploited for computation.

I really doubt that current-human-fun-maximization is an evolutionarily stable goal system. I imagine that future posthuman morality and goals will evolve into something quite different.

Without needing to dispute this, I can remain exceptionally upset that whatever their future morality is, it is blind to suffering and willing to create innumerable beings that will suffer in order to gain historical knowledge.

As a child I liked McDonald's, bread, plain pizza and nothing more - all other foods were poisonous. I was convinced that my parents' denial of my right to eat these wonderful foods, condemning me to terrible suffering as a result, was a sure sign of their utter lack of goodness.

Imagine if I could go back and fulfill that child's wish to reduce its suffering. It would never then evolve into anything like my current self, and in fact might evolve into something that would suffer more, or at the very least wish that it could be me.

Imagine if we could go back in time and alter our primate ancestors to reduce their suffering. The vast majority of such naive interventions would cripple their fitness and wipe out the lineage. There is probably a tiny set of sophisticated interventions that could simultaneously eliminate suffering and improve fitness, but these altered creatures would not develop into humans.

Our current existence is completely contingent on a great evolutionary epic of suffering on an astronomical scale. But suffering itself is just one little component of that vast mechanism, and forms no basis from which to judge the totality.

You made the general point earlier, which I very much agree with, about opportunity cost. Simulating humanity's current time-line has an opportunity cost in the form of some paradise that could exist in its place. You seem to think that the paradise is clearly better, and I agree: from our current moral perspective.

At the end of the day, morality is governed by evolution. There is an entire landscape of paradises that could exist; the question is what fitness advantage do they provide their creator? The more they diverge from reality, the less utility they have in advancing knowledge of reality towards closure.

It looks like earth will evolve into a vast planetary hierarchical superintelligence, but ultimately it will probably be just one of many, and still subject to evolutionary pressure.

Replies from: datadataeverywhere, Desrtopa
comment by datadataeverywhere · 2011-01-28T01:13:17.601Z · LW(p) · GW(p)

In the specific case of leakage, even if it is a problem that persists far into the future, it just slightly lowers the growth exponent as we somewhat lower the clock speeds.

I disagree; I think that problems like this, unresolved, may or may not decrease the base of our exponent, but will cap its growth earlier.

I don't think that any single such current technical issue is nearly sufficient to change the general analysis. There have always been major issues on the horizon; the question is more one of the increase in engineering difficulty as we progress vs. the increase in our effective intelligence and simulation capacity.

On this point, we disagree, and I may be on the unpopular side of this disagreement. I don't see how past increases that have required technological revolutions can be considered more than weak evidence for future technological revolutions. I actually think it quite likely that the increase in computational power per joule will bottom out in ten to twenty years. I wouldn't be too surprised if exponential increase lasts thirty years, but forty seems unlikely, and fifty even less likely.

Imagine if we could go back in time and alter our primate ancestors to reduce their suffering. The vast majority of such naive interventions would cripple their fitness and wipe out the lineage. There is probably a tiny set of sophisticated interventions that could simultaneously eliminate suffering and improve fitness, but these altered creatures would not develop into humans.

I don't care. We aren't talking about destroying the future of intelligence by going back in time. We're talking about repeating history umpteen times, creating suffering anew each time. It sounds to me like you are insisting that this suffering is worthwhile, even if the result of all of it will never be more than a data point in a historian's database.

We live in a heartbreaking world. Under the assumption that we are not in a simulation, we can recognize facts like 'suffering is decreasing over time' and realize that it is our job to work to aid this progress. Under the assumption that we are in a simulation, we know that the capacity for this progress is already fully complete, and the agents who control it simply don't care. If we are being simulated, it means that one or more entities have chosen to create unimaginable quantities of suffering for their own purposes---to your stated belief, for historical knowledge.

Your McDonald's example doesn't address this in the slightest. You were already a living, thinking being, and your parents took care of you in the right way in an attempt to make your future life better. They couldn't have chosen before you were born to instead create someone who would be happier, smarter, wiser, and better in every way. If they could have, wouldn't it be upsetting that they chose not to?

Given the choice between creating agents that have to endure suffering for generations upon generations, and creating agents that will have much more positive, productive lives, why are you arguing for the side that chooses the former? Of course the former and latter are entirely different entities, but that serves as no argument whatsoever for choosing the former!

Replies from: Dreaded_Anomaly, jacob_cannell
comment by Dreaded_Anomaly · 2011-01-28T03:17:59.872Z · LW(p) · GW(p)

A person running such a simulation could create a simulated afterlife, without suffering, where each simulated intelligence would go after dying in the simulated universe. It's like a nice version of Pascal's Wager, since there's no wagering involved. Such an afterlife wouldn't last infinitely long, but it could easily be made long enough to outweigh any suffering in the simulated universe.

Replies from: Desrtopa, Alicorn
comment by Desrtopa · 2011-01-28T03:23:03.765Z · LW(p) · GW(p)

Or you could skip the part with all the suffering. That would be a lot easier.

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-01-28T03:37:31.166Z · LW(p) · GW(p)

In general, I agree. I just wanted to offer a more creative alternative for someone truly dedicated to operating such a simulation.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-28T03:48:52.276Z · LW(p) · GW(p)

So far the only person who seems dedicated to making such a simulation is jacob cannell, and he already seems to be having enough trouble separating the idea from cached theistic assumptions.

comment by Alicorn · 2011-01-28T03:21:22.213Z · LW(p) · GW(p)

outweigh

I don't think that's how it works.

Replies from: Dreaded_Anomaly, jimrandomh
comment by Dreaded_Anomaly · 2011-01-28T03:42:45.426Z · LW(p) · GW(p)

How much future happiness would you need in order to choose to endure 50 years of torture?

Replies from: nshepperd
comment by nshepperd · 2011-01-28T03:56:51.763Z · LW(p) · GW(p)

That depends on whether happiness without torture is an option. The options are better/worse, not good/bad.

comment by jimrandomh · 2011-01-28T03:51:18.499Z · LW(p) · GW(p)

The simulated afterlife wouldn't need to outweigh the suffering in the first universe according to our value system, only according to the value system of the aliens who set up the simulation.

comment by jacob_cannell · 2011-01-28T02:38:52.111Z · LW(p) · GW(p)

I don't see how past increases that have required technological revolutions can be considered more than weak evidence for future technological revolutions.

Technology doesn't really advance through 'revolutions'; it evolves. Some aspects of that evolution appear to be rather remarkably predictable.

That aside, the current predictions do posit a slow-down around 2020 for the general lithography process, but there are plenty of labs researching alternatives. As the slow-down approaches, their funding and progress will accelerate.

But there is a much more fundamental and important point to consider, which is that circuit shrinkage is just one dimension of improvement amongst several. As that route of improvement slows down, other routes will become more profitable.

For example, for AGI algorithms, current general purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there just from architectural optimization. This route - neuromorphic hardware and its ilk - currently receives a tiny slice of the research budget, but this will accelerate as AGI advances, and would accelerate even more if the primary route of improvement slowed.

Another route of improvement is exponentially reducing manufacturing cost. The bulk of the price of high-end processors pays for the vast amortized R&D cost of developing the manufacturing node within the timeframe that the node is economical. Refined silicon is cheap and getting cheaper; research is expensive. The per-transistor cost of new high-end circuitry on the latest nodes for a CPU or GPU is roughly 100 times the per-transistor cost of bulk circuitry produced on slightly older nodes.

So if Moore's law stopped today, the cost of circuitry would still decay down to the bulk cost. This is particularly relevant to neuromorphic AGI designs as they can use a mass of cheap repetitive circuitry, just like the brain. So we have many other factors that will kick in even as Moore's law slows.

I suspect that we will hit a slowly ramping wall around or by 2020, but these other factors will kick in and human-level AGI will ramp up, and then this new population and speed explosion will drive the next S-curve using a largely new and vastly more complex process (such as molecular nanotech) that is well beyond our current capability or understanding.

I don't care. We aren't talking about destroying the future of intelligence by going back in time.

It's more or less equivalent from the perspective of a historical sim. A historical sim is a recreation of some branch of the multiverse near your own incomplete history that you then run forward to meet your present.

It sounds to me like you are insisting that this suffering is worthwhile

My existence is fully contingent on the existence of my ancestors in all of their suffering glory. So from my perspective, yes their suffering was absolutely worthwhile, even if it wasn't from their perspective.

Likewise, I think that it is our noble duty to solve AI, morality, and control a Singularity in order to eliminate suffering and live in paradise.

I also understand that after doing that we will over time evolve into beings quite unlike what we are now, and eventually look back at our prior suffering and view it from an unimaginably different perspective, just as my earlier McDonald's-loving child-self evolved into a being with a completely different view of its prior suffering.

your parents took care of you in the right way in an attempt to make your future life better.

It was right from both their perspective and my current one; it was absolutely wrong from my perspective at the time.

They couldn't have chosen before you were born to instead create someone who would be happier, smarter, wiser, and better in every way. If they could have, wouldn't it be upsetting that they chose not to?

Of course! Just as we should create something better than ourselves. But 'better' is relative to a particular subjective utility function.

I understand that my current utility function works well now, that it is poorly tuned to evaluate the well-being of bacteria, just as poorly tuned to evaluate the well-being of future posthuman godlings, and, most importantly, that my utility function or morality will improve over time.

Given the choice between creating agents that have to endure suffering for generations upon generations, and creating agents that will have much more positive, productive lives, why are you arguing for the side that chooses the former?

Imagine you are the creator. How do you define 'positive' or 'productive'? From your perspective, or theirs?

There are an infinite variety of uninteresting paradises. In some, virtual humans do nothing but experience continuous rapturous bliss well outside the range of current drug-induced euphoria. There are complex agents that just set their reward functions to infinity and loop.

There are also a spectrum of very interesting paradises, all having the key differentiator that they evolve. I suspect that future godlings will devote most of their resources to creating these paradises.

I also suspect that evolution may operate again at an intergalactic or higher level, ensuring that paradises and all simulations somehow must pay for themselves.

At some point our descendants will either discover for certain they are in a sim and integrate up a level, or they will approach local closure and perhaps discover an intergalactic community. At that point we may have to compete with other singularity-civilizations, and we may have the opportunity to historically intervene on pre-singularity planets we encounter. We'd probably want to simulate any interventions before proceeding, don't you think?

A historical recreation can develop into a new worldline with its own set of branching paradises that increase overall variation in a blossoming metaverse.

If you could create a new big bang, an entire new singularity and new universe, would you?

You seem to be arguing that you would not because it would include humans who suffer. I think this ends up being equivalent to arguing the universe should not exist.

Replies from: Desrtopa, JoshuaZ
comment by Desrtopa · 2011-01-28T02:59:23.712Z · LW(p) · GW(p)

At some point our descendants will either discover for certain they are in a sim, or they will approach local closure and perhaps discover an intergalactic community. At that point we may have to compete with other singularity-civilizations, and we may have the opportunity to historically intervene on pre-singularity planets we encounter. We'd probably want to simulate any interventions before proceeding, don't you think?

If we had enough information to create an entire constructed reality of them in simulation, we'd have much more than we needed to just go ahead and intervene.

If you could create a new big bang, an entire new singularity and new universe, would you? You seem to be arguing that you would not because it would include humans who suffer. I think this ends up being equivalent to arguing the universe should not exist.

Some people would argue that it shouldn't (this is an extreme of negative utilitarianism). However, since we're in no position to decide whether the universe gets to exist or not, the dispute is fairly irrelevant. If we're in a position to decide between creating a universe like ours, creating one that's much better, with more happiness and productivity and less suffering, and not creating one at all, though, I would have an extremely poor regard for the morality of someone who chose the first.

My existence is fully contingent on the existence of my ancestors in all of their suffering glory. So from my perspective, yes their suffering was absolutely worthwhile, even if it wasn't from their perspective.

If my descendants think that all my suffering was worthwhile so that they could be born instead of someone else, then you know what? Fuck them. I certainly have a higher regard for my own ancestors. If they could have been happier, and given rise to a world as good as or better than this one, then who am I to argue that they should have been unhappy so I could be born instead? If, as you point out,

A historical recreation can develop into a new worldline with its own set of branching paradises that increase overall variation in a blossoming metaverse.

then why not skip the historical recreation and go straight to simulating the paradises?

comment by JoshuaZ · 2011-01-28T02:56:11.449Z · LW(p) · GW(p)

For example, for AGI algorithms, current general purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there just from architectural optimization.

I'm curious how you've reached this conclusion given how little we know about what AGI algorithms would look like.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T03:44:44.964Z · LW(p) · GW(p)

For example, for AGI algorithms, current general purpose CPUs are inefficient by a factor of perhaps around 10^4. That is a decade of exponential gain right there just from architectural optimization.

I'm curious how you've reached this conclusion given how little we know about what AGI algorithms would look like.

The particular type of algorithm is actually not that important. There is a general speedup in moving from a general CPU-like architecture to a specialized ASIC - once you are willing to settle on the algorithms involved.

There is another significant speedup moving into analog computation.

Also, we know enough about the entire space of AI sub-problems to get a general idea of what AGI algorithms look like and the types of computations they need. Naturally the ideal hardware ends up looking much more like the brain than current von Neumann machines - because the brain evolved to solve AI problems in an energy efficient manner.

If you know you are working in the space of probabilistic/Bayesian-like networks, exact digital computations are extremely wasteful. Using tens or hundreds of thousands of transistors to do an exact digital multiply is useful for scientific or financial calculations, but it's a pointless waste when the algorithm just needs to do a vast number of probabilistic weighted summations, for example.
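(A minimal sketch of that claim, with assumed numbers: quantize the weights to a few bits and add analog-style noise, and a large weighted sum barely moves, which is why exact multipliers are overkill for this workload:)

```python
import random

random.seed(0)
n = 10_000
w = [random.gauss(0, 1) for _ in range(n)]  # weights
x = [random.gauss(0, 1) for _ in range(n)]  # inputs

exact = sum(wi * xi for wi, xi in zip(w, x))

def quantize(v, step=0.25):
    # Crude ~4-bit precision, standing in for a cheap analog element.
    return round(v / step) * step

noisy = sum(quantize(wi) * xi + random.gauss(0, 0.01)
            for wi, xi in zip(w, x))

# For inference-style workloads, the sign and rough magnitude of the
# sum are what matter, and both survive quantization and noise.
print(exact, noisy)
```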

Replies from: gwern, JoshuaZ
comment by gwern · 2011-01-28T04:30:33.202Z · LW(p) · GW(p)

Cite for last paragraph about analog probability: http://phm.cba.mit.edu/theses/03.07.vigoda.pdf

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T04:44:16.438Z · LW(p) · GW(p)

Thanks. Hefty read, but this one paragraph is worth quoting:

Statistical inference algorithms involve parsing large quantities of noisy (often analog) data to extract digital meaning. Statistical inference algorithms are ubiquitous and of great importance. Most of the neurons in your brain and a growing number of CPU cycles on desktops are spent running statistical inference algorithms to perform compression, categorization, control, optimization, prediction, planning, and learning.

I had forgotten that term, 'statistical inference algorithms'; I need to remember that.

Replies from: gwern
comment by gwern · 2011-01-28T04:56:26.681Z · LW(p) · GW(p)

Well, there's also another quote worth quoting, and in fact the quote that is in my Mnemosyne database and which enabled me to look that thesis up so fast...

"In practice replacing digital computers with an alternative computing paradigm is a risky proposition. Alternative computing architectures, such as parallel digital computers have not tended to be commercially viable, because Moore's Law has consistently enabled conventional von Neumann architectures to render alternatives unnecessary.

Besides Moore's Law, digital computing also benefits from mature tools and expertise for optimizing performance at all levels of the system: process technology, fundamental circuits, layout and algorithms. Many engineers are simultaneously working to improve every aspect of digital technology, while alternative technologies like analog computing do not have the same kind of industry juggernaut pushing them forward."

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T19:29:10.435Z · LW(p) · GW(p)

This is true in general but this particular statement appears out of date:

"Alternative computing architectures, such as parallel digital computers, have not tended to be commercially viable"

That was true perhaps circa 2000, but we hit a speed/heat wall and since then everything has been going parallel.

You may see something similar happen eventually with analog computing once the market for statistical inference computation is large enough and/or we approach other constraints similar to the speed/heat wall.

comment by JoshuaZ · 2011-01-28T03:55:09.153Z · LW(p) · GW(p)

The particular type of algorithm is actually not that important. There is a general speedup in moving from a general CPU-like architecture to a specialized ASIC - once you are willing to settle on the algorithms involved.

Ok. But this prevents you from directly improving your algorithms. And if the learning mechanisms are to be highly flexible (like say those of a human brain) then the underlying algorithms may need to modify a lot even to just approximate being an intelligent entity. I do agree that given a fixed algorithm this would plausibly lead to some speed-up.

There is another significant speedup moving into analog computation.

A lot of things can't be put into analog. For example, what if you need to factor large numbers? And making analog and digital stuff interact is difficult.

Also, we know enough about the entire space of AI sub-problems to get a general idea of what AGI algorithms look like and the types of computations they need. Naturally the ideal hardware ends up looking much more like the brain than current von Neumann machines - because the brain evolved to solve AI problems in an energy efficient manner.

This doesn't follow. The brain evolved through a long path of natural selection. It isn't at all obvious that the brain is even highly efficient at solving AI-type problems, especially given that humans have only needed to solve much of what we consider standard problems for a very short span of evolutionary history (and note that general mammal brain architecture looks very similar to ours).

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T04:35:26.583Z · LW(p) · GW(p)

EDIT: why the downvotes?

Ok. But this prevents you from directly improving your algorithms.

Yes - which is part of the reason there is a big market for CPUs.

And if the learning mechanisms are to be highly flexible (like say those of a human brain) then the underlying algorithms may need to modify a lot even to just approximate being an intelligent entity.

Not necessarily. For example, the cortical circuit in the brain can be reduced to an algorithm which would include the learning mechanism built in. The learning can modify the network structure to a degree but largely adjusts synaptic weights. That can be described as (is equivalent to) a single fixed algorithm. That algorithm in turn can be encoded into an efficient circuit. The circuit would learn just as the brain does, no algorithmic changes ever needed past that point, as the self-modification is built into the algorithm.
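(A toy sketch of 'learning built into a fixed algorithm', using a perceptron-style rule purely for illustration, not as a model of the cortex: the update rule below never changes and could be burned into a circuit, yet the network's behavior changes, because all adaptation lives in the weights:)

```python
def step(weights, inputs, target, lr=0.1):
    # Fixed rule: compute a thresholded weighted sum, then nudge the
    # weights toward the target. This function is the whole "algorithm".
    out = 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0
    return [w + lr * (target - out) * x for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
for _ in range(20):  # learn logical OR from examples
    for x, t in [([0, 1], 1), ([1, 0], 1), ([0, 0], 0), ([1, 1], 1)]:
        w = step(w, x, t)
print(w)  # only the weights (data) ever changed, never the rule
```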

A modern CPU is a jack-of-all-trades that is designed to do many things, most of which have little or nothing to do with the computational needs of AGI.

A lot of things can't be put into analog. For example, what if you need to factor large numbers? And making analog and digital stuff interact is difficult.

If the AGI needs to factor large numbers, it can just use an attached CPU. Factoring large numbers is easy compared to reading this sentence about factoring large numbers and understanding what that actually means.

It isn't at all obvious that the brain is even highly efficient at solving AI-type problems,

The brain has roughly 10^15 noisy synapses that can switch around 10^3 times per second and store perhaps a bit each as well (computation and memory integrated).

My computer has about 10^9 exact digital transistors in its CPU & GPU that can switch around 10^9 times per second. It has around the same amount of separate memory and around 10^13 bits of much slower disk storage.

These systems have similar peak throughputs of about 10^18 bits/second, but they are specialized for very different types of computational problems. The brain is very slow but massively wide, the computer is very narrow but massively fast.
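(A back-of-envelope check of those figures; every number is an assumption carried over from the two paragraphs above:)

```python
synapses, syn_rate = 1e15, 1e3   # brain: very many, very slow elements
transistors, clock = 1e9, 1e9    # computer: few, very fast elements

print(synapses * syn_rate)   # 1e+18 bit-events per second
print(transistors * clock)   # 1e+18 bit-events per second
```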

The brain is highly specialized and extremely adept at doing typical AGI stuff - vision, pattern recognition, inference, and so on - problems that are suited to massively wide but slow processing with huge memory demands.

Our computers are specialized and extremely adept at doing the whole spectrum of computational problems brains suck at - problems that involve long complex chains of exact computations, problems that require massive speed and precision but less bulk processing and memory.

So to me, yes, it's obvious that the brain is highly efficient at doing AGI-type stuff - almost because that's how we define AGI-type stuff: it's all the stuff that brains are currently much better than computers at!

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-28T04:47:58.478Z · LW(p) · GW(p)

Not necessarily. For example, the cortical circuit in the brain can be reduced to an algorithm which would include the learning mechanism built in. The learning can modify the network structure to a degree but largely adjusts synaptic weights. That can be described as (is equivalent to) a single fixed algorithm. That algorithm in turn can be encoded into an efficient circuit. The circuit would learn just as the brain does, no algorithmic changes ever needed past that point, as the self-modification is built into the algorithm.

This limits the amount of modification one can do. Moreover, the more flexible your algorithm, the less you gain from hard-wiring it.

The brain is highly specialized and extremely adept at doing typical AGI stuff - vision, pattern recognition, inference, and so on - problems that are suited to massively wide but slow processing with huge memory demands.

No, we don't know that the brain is "extremely adept" at these things. We just know that it is better than anything else that we know of. That's not at all the same thing. The brain's architecture is formed by a succession of modifications to much simpler entities. The successive, blind modification has been stuck with all sorts of holdovers from our early chordate ancestors and a lot from our more recent ancestors.

If the AGI needs to factor large numbers, it can just use an attached CPU. Factoring large numbers is easy compared to reading this sentence about factoring large numbers and understanding what that actually means.

Easy is a misleading term in this context. I certainly can't factor a forty digit number, but for a computer that's trivial. Moreover, some operations are only difficult because we don't know an efficient algorithm. In any event, if your speedup is only occurring for the narrow set of tasks which humans can do decently such as vision, then you aren't going to get a very impressive AGI. The ability to do face recognition in only a tiny fraction of the time it would take a person is not an impressive ability.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T07:54:41.094Z · LW(p) · GW(p)

The circuit would learn just as the brain does, no algorithmic changes ever needed past that point, as the self-modification is built into the algorithm.

This limits the amount of modification one can do.

Limits it compared to what? Every circuit is equivalent to a program. The circuit of a general processor is equivalent to a program which simulates another circuit - the program which it keeps in memory.

Current von Neumann processors are not the only circuits which have this simulation-flexibility. The brain has similar flexibility using very different mechanisms.

Finally, even if we later find out that lo and behold, the inference algorithm we hard-coded into our AGI circuits was actually not so great, and somebody comes along with a much better one . . . that is still not an argument for simulating the algorithm in software.

Moreover, the more flexible your algorithm, the less you gain from hard-wiring it.

Not at all true. The class of statistical inference algorithms, which includes Bayesian networks and the cortex, is extremely flexible and still benefits greatly from 'hard-wiring'.

The brain is highly specialized and extremely adept at doing typical AGI stuff - vision, pattern recognition, inference, and so on - problems that are suited to massively wide but slow processing with huge memory demands.

No, we don't know that the brain is "extremely adept" at these things. We just know that it is better than anything else that we know of.

This is like saying we don't know that Usain Bolt is extremely adept at running; he's just better than anyone else we know of. The latter sentence in each case is of course true, but it doesn't impinge on the former.

But my larger point was that the brain and current computers occupy two very different regions in the space of possible circuit designs, and are rather clearly optimized for a different slice over the space of computational problems.

There are some routes that we can obviously improve on the brain at the hardware level. Electronic circuits are orders of magnitude faster, and eventually we can make them much denser and thus much more massive.

However, it is much more of an open question in computer science whether we will ever be able to greatly improve on the statistical inference algorithm used in the cortex. It is quite possible that evolution had enough time to solve that problem completely - or at least reach some nearly global maximum.

The brain's architecture is formed by a succession of modifications to much simpler entities.

Yes - this is an excellent strategy for solving complex optimization problems.

If the AGI needs to factor large numbers, it can just use an attached CPU. Factoring large numbers is easy compared to reading this sentence about factoring large numbers and understanding what that actually means.

Easy is a misleading term in this context.

Yes, and on second thought - largely mistaken. To be more precise we should speak of computational complexity and bitops. The best known factorization algorithms have running times exponential in the number of input bits. That makes them 'hard' in the scalability sense. But factoring small primes is still easy in the absolute cost sense.

Factoring is also easy in the algorithmic sense, as the best algorithms are very simple and short. Physics is hard in the algorithmic sense, AGI seems to be quite hard, etc.

In any event, if your speedup is only occurring for the narrow set of tasks which humans can do decently such as vision, then you aren't going to get a very impressive AGI

The cortex doesn't have a specialized vision circuit - there appears to be just one general-purpose circuit it uses for everything. The visual regions become visual regions on account of... processing visual input data.

AGI hardware could take advantage of specialized statistical inference circuitry and still be highly general.

I'm having a hard time understanding what you really mean by saying "the narrow set of tasks which humans can do decently such as vision". What about quantum mechanics, computer science, mathematics, game design, poetry, economics, sports, art, or comedy? One could probably fill a book with the narrow set of tasks that humans can do decently. Of course, that other section of the bookstore, filled with books about things computers can do decently, is growing at an exciting pace.

The ability to do face recognition in only a tiny fraction of the time it would take a person is not an impressive ability.

I'm not sure what you mean by this or how it relates. If you could do face recognition that fast... it's not impressive?

The main computational cost of every main competing AGI route I've seen involves some sort of deep statistical inference, and this amounts to a large matrix multiplication possibly with some non-linear stepping or a normalization. Neural nets, bayesian nets, whatever - if you look at the mix of required instructions, it amounts to a massive repetition of simple operations that are well suited to hardware optimization.
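(A sketch of that inner loop with arbitrary sizes; numpy here stands in for whatever the specialized hardware would provide:)

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096))  # connection weights
x = rng.standard_normal(4096)          # input activations

y = np.tanh(W @ x)        # the big matrix multiply + non-linear step
y /= np.linalg.norm(y)    # optional normalization
# Virtually all the cost is the 4096x4096 multiply: ~16.8M repetitions
# of the same simple multiply-add, ideal for specialized circuitry.
```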

Replies from: JoshuaZ, JoshuaZ
comment by JoshuaZ · 2011-01-28T20:10:32.956Z · LW(p) · GW(p)

Finally, even if we later find out that lo and behold, the inference algorithm we hard-coded into our AGI circuits was actually not so great, and somebody comes along with a much better one . . . that is still not an argument for simulating the algorithm in software.

If we have many generations of rapid improvement of the algorithms this will be much easier if one doesn't need to make new hardware each time.

Not at all true. The class of statistical inference algorithms, which includes Bayesian networks and the cortex, is extremely flexible and still benefits greatly from 'hard-wiring'.

The general trend should still hold. I'm also not sure that you can reach that conclusion about the cortex, given that we don't have a very good understanding of how the brain's algorithms function.

The cortex doesn't have a specialized vision circuit - there appears to be just one general-purpose circuit it uses for everything. The visual regions become visual regions on account of... processing visual input data.

That seems plausibly correct but we don't actually know that. Given how much humans rely on vision it isn't at all implausible that there have been subtle genetic tweaks that make our visual regions more effective in processing visual data (I don't know the literature in this area at all).

To be more precise we should speak of computational complexity and bitops. The best known factorization algorithms have running times exponential in the number of input bits.

Incorrect, the best factoring algorithms are subexponential. See for example the quadratic sieve and the number field sieve, both of which have subexponential running time. This has been true since at least the early 1980s (there are other, now obsolete algorithms from before then that may have had slightly subexponential running time; I don't know enough about them in detail to comment).
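(For concreteness, the general number field sieve's heuristic running time for factoring n is usually quoted in L-notation as

$$L_n\left[\tfrac{1}{3},\, \sqrt[3]{64/9}\right] = \exp\left(\left(\sqrt[3]{64/9} + o(1)\right)(\ln n)^{1/3}(\ln\ln n)^{2/3}\right),$$

which grows faster than any polynomial in the bit length but more slowly than any exponential in it.)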

But factoring small primes is still easy in the absolute cost sense.

Factoring primes is always easy. For any prime p, it has no non-trivial factorizations. You seem to be confusing factorization with primality testing. The second is much easier than the first; we've had Agrawal's algorithm, which is provably polynomial time, for about a decade. Prior to that we had a lot of efficient tests that were empirically faster than our best factorization procedures. We can determine the primality of numbers much larger than those we can factor.

Factoring is also easy in the algorithmic sense, as the best algorithms are very simple and short.

Really? The general number field sieve is simple and short? Have you tried to understand it or write an implementation? Simple and short compared to what exactly?
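(A hedged aside splitting the difference: a complete, correct factoring algorithm really does fit in a few lines, which may be the sense intended above; it is just exponential-time in the bit length, which is exactly why the short algorithms are not the best ones:)

```python
def factor(n):
    # Trial division: complete and correct, but O(sqrt(n)) in the value
    # of n, i.e. exponential in its bit length.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factor(5040))  # [2, 2, 2, 2, 3, 3, 5, 7]
```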

I'm having a hard time understanding what you really mean by saying "the narrow set of tasks which humans can do decently such as vision". What about quantum mechanics, computer science, mathematics, game design, poetry, economics, sports, art, or comedy? One could probably fill a book with the narrow set of tasks that humans can do decently.

There are some tasks where we can argue that humans are doing a good job by comparison to others in the animal kingdom. Vision is a good example of this (we have some of the best vision of any mammal). The rest are tasks which no other entities can do very well, and we don't have any good reason to think humans are anywhere near good at them in an absolute sense. Note also that most humans can't do math very well (apparently 10% or so of my calculus students right now can't divide one fraction by another). And the vast majority of poetry is just awful. It isn't even obvious to me that the "good" poetry isn't labeled that way in part simply from social pressure.

I'm not sure what you mean by this or how it relates. If you could do face recognition that fast... it's not impressive?

A lot of the tasks that humans have specialized in are not generally bottlenecks for useful computation. Improved facial recognition isn't going to help much with most of the interesting stuff, like recursive self-improvement, constructing new algorithms, making molecular nanotech, finding a theory of everything, figuring out how Fred and George tricked Rita, etc.

The main computational cost of every main competing AGI route I've seen involves some sort of deep statistical inference, and this amounts to a large matrix multiplication possibly with some non-linear stepping or a normalization. Neural nets, bayesian nets, whatever - if you look at the mix of required instructions, it amounts to a massive repetition of simple operations that are well suited to hardware optimization.

This seems to be a good point.

Replies from: wnoise, Sniffnoy, jacob_cannell
comment by wnoise · 2011-01-28T20:30:31.964Z · LW(p) · GW(p)

Incorrect, the best factoring algorithms are subexponential.

To clarify, subexponential does not mean polynomial, but super-polynomial.

(Interestingly, while factoring a given integer is hard, there is a way to get a random integer within [1..N] and its factorization quickly. See Adam Kalai's paper Generating Random Factored Numbers, Easily (PDF).)
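(A sketch of Kalai's procedure as that paper describes it; the trial-division primality test is just a placeholder to keep the sketch self-contained, and a real implementation would swap in a fast probabilistic test:)

```python
import random

def is_prime(n):
    # Placeholder: slow but simple; use Miller-Rabin for large N.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def random_factored(N):
    """Return (r, prime factors of r) with r uniform on {1..N}."""
    while True:
        # Random non-increasing chain N >= s1 >= s2 >= ... >= 1.
        seq, s = [], N
        while s > 1:
            s = random.randint(1, s)
            seq.append(s)
        primes = [s for s in seq if is_prime(s)]
        r = 1
        for p in primes:
            r *= p
        # Keep r <= N with probability r/N; otherwise start over.
        if r <= N and random.random() * N < r:
            return r, primes

print(random_factored(10**6))
```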

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-28T20:38:44.078Z · LW(p) · GW(p)

Interesting. I had not seen that paper before. That's very cute.

comment by Sniffnoy · 2011-01-29T02:36:18.558Z · LW(p) · GW(p)

This is mostly irrelevant, but I think complexity theorists use a weird definition of exponential according to which GNFS might still be considered exponential - I know when they say "at most exponential" they mean O(e^(n^k)) rather than O(e^n), so it seems plausible that by "at least exponential" they might mean Omega(e^(n^k)) where now k can be less than 1.

EDIT: Nope, I'm wrong about this. That seems kind of inconsistent.

Replies from: wnoise, JoshuaZ
comment by wnoise · 2011-01-29T06:41:10.592Z · LW(p) · GW(p)

They like keeping things invariant under polynomial transformations of the input, since that has been observed to be a somewhat "natural" class. This is one of the areas where the convention seems to not quite work.

comment by JoshuaZ · 2011-01-29T05:43:47.391Z · LW(p) · GW(p)

Hmm, interesting: in the notation that Scott says is standard in complexity theory, my earlier statement that factoring is "subexponential" is wrong even though it grows more slowly than exponential. But apparently Greg Kuperberg is perfectly happy labeling something like 2^(n^(1/2)) as subexponential.

comment by jacob_cannell · 2011-01-28T21:20:19.421Z · LW(p) · GW(p)

If we have many generations of rapid improvement of the algorithms this will be much easier if one doesn't need to make new hardware each time.

Yes, and this tradeoff exists today with some rough mix between general processors and more specialized ASICs.

I think this will hold true for a while, but it is important to point out a few subpoints:

  1. If Moore's law slows down, this will shift the balance farther towards specialized processors.

  2. Even most 'general' processors today are actually a mix of CISC and vector processing, with more and more performance coming from the less-general vector portion of the chip.

  3. For most complex real-world problems, algorithms eventually tend to have much less room for improvement than hardware - even if algorithmic improvements initially dominate. After a while algorithmic improvements end within the best complexity class, and then further improvements are just constants and are swamped by hardware improvement.

Modern GPUs for example have 16 or more vector processors for every general logic processor.

The brain is like a very slow processor with massively wide dedicated statistical inference circuitry.

As a result of all this (and the point at the end of my last post) I expect that future AGIs will be built out of a heterogeneous mix of processors, but with the bulk being something like a wide-vector processor with a lot of very specialized statistical inference circuitry.

This type of design will still have huge flexibility by having programmability at the network architecture level - it could for example simulate human-ish and various types of mammalian brains, as well as a whole range of radically different mind architectures, all built out of the same building blocks.

The cortex doesn't have a specialized vision circuit - there appears to be just one general-purpose circuit it uses for everything. The visual regions become visual regions on account of... processing visual input data.

That seems plausibly correct but we don't actually know that.

We have pretty good maps of the low-level circuitry in the cortex at this point and it's clearly built out of a highly repetitive base circuit pattern, similar to how everything is built out of cells at a lower level. I don't have a single good introductory link, but it's called the laminar cortical pattern.

Given how much humans rely on vision it isn't at all implausible that there have been subtle genetic tweaks that make our visual regions more effective in processing visual data (I don't know the literature in this area at all).

Yes, there are slight variations, but slight is the keyword. The cortex is highly general - the 'visual' region develops very differently in deaf people, for example, creating entirely different audio processing networks much more powerful than what most people have.

The flexibility is remarkable - if you hook up electrodes to the tongue that send a rough visual signal from a camera, in time the cortical regions connected to the tongue start becoming rough visual regions, and limited tongue-based vision is the result.

Incorrect, the best factoring algorithms are subexponential.

I stand corrected on prime factorization - I saw the exp(....) part and assumed exponential before reading into it more.

Vision is a good example of this (we have some of the best vision of any mammal). The rest are tasks which no other entities can do very well, and we don't have any good reason to think humans are anywhere near good at them in an absolute sense.

This is a good point, but note the huge difference between the abilities or efficiency of an entire human mind vs the efficiency of the brain's architecture or the efficiency of the lower level components from which it is built - such as the laminar cortical circuit.

I think this discussion started concerning your original point:

It isn't at all obvious that the brain is even highly efficient at solving AI-type problems, especially given that humans have only needed to solve much of what we consider standard problems for a very short span of evolutionary history (and note that general mammal brain architecture looks very similar to ours).

The cortical algorithm appears to be a pretty powerful and efficient low level building block. In evolutionary terms it has been around for much longer than human brains and naturally we can expect that it is much closer to optimality in the design configuration space in terms of the components it is built from.

As we go up a level to higher level brain architectures that are more recent in evolutionary terms we should expect there to be more room for improvement.

A lot of the tasks that humans have specialized in are not generally bottlenecks for useful computation.

The mammalian cortex is not specialized for particular tasks - this is the primary advantage of its architecture over its predecessors (at the cost of a much larger size than more specialized circuitry).

Replies from: JoshuaZ
comment by JoshuaZ · 2011-02-01T04:21:39.678Z · LW(p) · GW(p)

The mammalian cortex is not specialized for particular tasks - this is the primary advantage of its architecture over its predecessors (at the cost of a much larger size than more specialized circuitry).

How do you reconcile this claim with the fact that some people are faceblind from an early age and never develop the ability to recognize faces? This would suggest that there's at least one aspect of humans that is normally somewhat hard-wired.

Replies from: jacob_cannell, wedrifid
comment by jacob_cannell · 2011-02-01T05:11:23.468Z · LW(p) · GW(p)

I've read a great deal about the cortex, and my immediate reaction to your statement was "no, that's just not how it works". (strong priors)

About one minute later on the Prosopagnosia wikipedia article, I find the first reference to this idea (that of congenital Prosopagnosia):

The idea of congenital prosopagnosia appears to be a new theory, supported by one researcher and one(?) study:

Dr Jane Whittaker, writing in 1999, described the case of a Mr. C. and referred to other similar cases (De Haan & Campbell, 1991, McConachie, 1976 and Temple, 1992).[7] The reported cases suggest that this form of the disorder may be heritable and much more common than previously thought (about 2.5% of the population may be affected), although this congenital disorder is commonly accompanied by other forms of visual agnosia, and may not be "pure" prosopagnosia

The last part about it being "commonly accompanied by other forms of visual agnosia" gives it away - this is not anything close to what you originally thought/claimed, even if this new research is actually correct.

Known cases of true prosopagnosia are caused by brain damage - what this research is describing is probably a disorder of the higher region (V4 I believe) which typically learns to recognize faces and other complex objects.

However, there is an easy way to cause prosopagnosia during development - prevent the creature from ever seeing faces.

I don't have the link on hand, but there have been experiments in cats where you mess with their vision using grating patterns or carefully controlled visual environments, and you can create cats that literally can't even see vertical lines.

So even the simplest, most basic thing which nature could hard-code - a vertical line feature detector - actually develops from the same extremely flexible general cortical circuit, the same circuit which can learn to represent everything from sounds to quantum mechanics.

Humans can represent a massive number of faces, and in general the brain's vast information storage capacity relative to the genome (10^15-ish bits vs 10^9-ish) more or less requires a generalized learning circuit.

The cortical circuits do basically nothing but fire randomly when you are born - you really are a blank slate in that respect (although obviously the rest of the brain has plenty of genetically fixed functionality).

Of course the arrangement of the brain's regions with respect to sensory organs and its overall wiring architecture do naturally lead to the familiar specializations of brain regions, but really one should consider this a developmental attractor - information is colonizing each cortex anew, but the similar architecture and similarity of information ensure that two brains end up having largely overlapping colonizations.

comment by wedrifid · 2011-02-01T06:37:58.088Z · LW(p) · GW(p)

How do you reconcile this claim with the fact that some people are faceblind from an early age and never develop the ability to recognize faces? This would suggest that there's at least one aspect of humans that is normally somewhat hard-wired.

There are all sorts of aspects of humans that are normally somewhat - or nearly entirely - hard-wired. The cortex just doesn't tend to be. Even the parts of the cortex that are similarly specialised in most humans seem to be so due to what they are connected to. (As can be seen by looking at how the atypical cases have adapted differently.) It would surprise me if the inability to recognise faces was caused by a dysfunction in the cortex specifically.

Disclaimer: I disagree with nearly everything else Jacob has said in this thread. This position specifically appears to be well researched.

comment by JoshuaZ · 2011-01-28T20:33:49.453Z · LW(p) · GW(p)

However, it is much more of an open question in computer science whether we will ever be able to greatly improve on the statistical inference algorithm used in the cortex. It is quite possible that evolution had enough time to solve that problem completely - or at least reach some nearly global maximum.

This is unlikely. We haven't been selected based on sheer brain power or brain efficiency. Humans have been selected by their ability to reproduce in a complicated environment. Efficient intelligence helps, but there's selection for a lot of other things, such as good immune systems and decent muscle systems. A lot of the selection that was brain selection was probably simply around the fantastically complicated set of tasks involved in navigating human societies. Note that average human brain size has decreased over the last 50,000 years. Humans are subject to a lot of different selection pressures.

(Tangent: This is related to how, at a very vague level, we should expect genetic algorithms to outperform evolution at optimizing tasks. Genetic algorithms can select for narrow task completion goals, rather than select in a constantly changing environment with competition and interaction between the various entities being bred.)
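(A minimal sketch of that tangent, with arbitrary parameters, assuming nothing beyond the standard GA recipe: the fitness function below is fixed and narrow - count the 1-bits - which is exactly what a shifting natural environment never supplies:)

```python
import random

random.seed(0)
POP, LEN, GENS = 50, 40, 100

def fitness(genome):
    return sum(genome)  # a narrow, fixed task-completion goal

pop = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(LEN)
        child = a[:cut] + b[cut:]     # one-point crossover
        i = random.randrange(LEN)
        child[i] ^= 1                 # point mutation
        children.append(child)
    pop = parents + children

print(max(fitness(g) for g in pop))  # climbs toward LEN
```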

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T21:39:20.710Z · LW(p) · GW(p)

It is quite possible that evolution had enough time to solve that problem completely [statistical inference in the cortex] - or at least reach some nearly global maximum

This is unlikely. We haven't been selected based on sheer brain power or brain efficiency.

I largely agree with your point about human evolution, but my point was about the laminar cortical circuit which is shared in various forms across the entire mammalian lineage and has an analog in birds.

It's a building block pattern that appears to have a long evolutionary history.

Genetic algorithms can select for narrow task completion goals, rather than select in a constantly changing environment with competition and interaction between the various entities being bred.

Yes, but there is a limit to this of course. We are, after all, talking about general intelligence.

comment by Desrtopa · 2011-01-28T03:11:18.751Z · LW(p) · GW(p)

You made the general point earlier, which I very much agree with, about opportunity cost. Simulating humanity's current time-line has an opportunity cost in the form of some paradise that could exist in its place. You seem to think that the paradise is clearly better, and I agree: from our current moral perspective.

It seems you're arguing that our successors will develop a preference for simulating universes like ours over paradises. If that's what you're arguing, then what reason do we have to believe that this is probable?

If their preferences do not change significantly from ours, it seems highly unlikely that they will create simulations identical to our current existence. And out of the vast space of possible ways their preferences could change, selecting that direction in the absence of evidence is a serious case of privileging the hypothesis.

comment by Desrtopa · 2011-02-02T05:54:35.406Z · LW(p) · GW(p)

To uploads, yes, but a faithful simulation of the universe, or even a small portion of it, would have to track a lot more variables than the processes of the human minds within it.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-02-02T06:39:36.252Z · LW(p) · GW(p)

Optimal approximate simulation algorithms are all linear with respect to total observer sensory input. This relates to the philosophical issue of observer dependence in QM and whether or not the proverbial unobserved falling tree actually exists.

So the cost of simulating a matrix with N observers is not expected to be dramatically more than the cost of simulating the N observer minds alone - C*N. The phenomenon of dreams is something of a practical proof.

Replies from: Desrtopa
comment by Desrtopa · 2011-02-02T06:57:13.511Z · LW(p) · GW(p)

Variables that aren't being observed still have to be tracked, since they affect the things that are being observed.

Dreams are not a very good proof of concept, given that they are not coherent simulations of any sort of reality, and can be recognized as artificial not only after the fact, but during the dream, with a bit of introspection and training.

In dreams, large amounts of data can be omitted or spontaneously introduced without the dreamer noticing anything is wrong unless they're lucid. In reality, everything we observe can be examined for signs of its interactions with things that we haven't observed, and that data adds up to pictures that are coherent and consistent with each other.

comment by prase · 2011-01-20T13:18:48.177Z · LW(p) · GW(p)

The question of whether the universe came into being via an agenty optimization process is only slightly more interesting than teapots orbiting planets?

Depends on personal standards of interest. I may be more interested in questions which I can imagine answering than in ones whose answer is a matter of speculation, even if the first class refers to small unimportant objects while the second speaks about the whole universe. Practically, finding teapots orbiting Venus would have more tangible consequences than realising that "the universe was caused by an agenty process" is true (when further properties of the agent remain unspecified). The feeling of grandness associated with learning the truth about the very beginning of the universe, when the truth is so vague that all anticipated expectations remain the same as before, doesn't count in my eyes.

Even if you forget heaven, hell, souls, miracles, prayer, religious morality and a plethora of other things normally associated with theism (which I don't approve of, because confusion inevitably appears when words are redefined), and leave only "universe was created by an agenty process" (accepting that "universe" has some narrower meaning than "everything which exists"), you have to point out how we can, at least theoretically, test it. Else, it may not be closed for being definitely false, but it would still be closed for being uninteresting.

comment by Dreaded_Anomaly · 2011-01-20T02:06:09.976Z · LW(p) · GW(p)

I contend that there is evidence for a god. Observation: Things tend to have causes. Observation: Agenty things are better at causing interesting things than non-agenty things. Observation: We find ourselves in a very interesting universe.

"Interesting" is subjective, and further, I think you overestimate how many interesting things we actually know to be caused by "agenty things." Phenomena with non-agenty origins include: any evolved trait or life form (as far as we have seen), any stellar/astronomical/geological body/formation/event...

Replies from: mkehrt, Will_Newsome
comment by mkehrt · 2011-01-20T02:50:19.067Z · LW(p) · GW(p)

Phenomena with non-agenty origins include: any evolved trait or life form (as far as we have seen), any stellar/astronomical/geological body/formation/event...

It is pretty likely you are correct, but this is probably the best example of question-begging I have ever seen.

Replies from: gjm, Dreaded_Anomaly
comment by gjm · 2011-01-20T15:12:32.622Z · LW(p) · GW(p)

All Dreaded_Anomaly needs for the argument I take him or her to be making is that those things are not known to be caused by "agenty things". More precisely: Will Newsome is arguing "interesting things tend to be caused by agents", which is a claim he isn't entitled to make before presenting some (other) evidence that (e.g.) trees and clouds and planets and elephants and waterfalls and galaxies are caused by agents.

comment by Dreaded_Anomaly · 2011-01-20T03:28:43.209Z · LW(p) · GW(p)

It seems to me that basing such a list on evidence-based likelihood is different than basing it on mere assumption, as begging the question would entail. I do see how it fits the definition from a purely logical standpoint, though.

comment by Will_Newsome · 2011-01-20T02:14:31.637Z · LW(p) · GW(p)

Interestingness is objective enough to argue about. (Interestingly enough, that is the very paper that eventually led me to apply for a Visiting Fellowship at SIAI.) I think that the phenomena you listed are not nearly as interesting as macroeconomics, nuclear bombs, genetically engineered corn, supercomputers, or the singularity.

Edit: I misunderstood the point of your argument. Going back to responding to your actual argument...

I still contend that we live in a very improbably interesting time, i.e. on the verge of a technological singularity. Nonetheless this is contentious and I haven't done the back of the envelope probability calculations yet. I will try to unpack my intuitions via arithmetic after I have slept. Unfortunately we run into anthropic reference class problems and reality fluid ambiguities where it'll be hard to justify my intuitions. That happens a lot.

Replies from: topynate, Dreaded_Anomaly, DSimon
comment by topynate · 2011-01-20T02:25:31.095Z · LW(p) · GW(p)

All of those phenomena are caused by human action! Once you know humans exist, the existence of macroeconomics is causally screened off from any other agentic processes. All of those phenomena, collectively, aren't any more evidence for the existence of an intelligent cause of the universe than the existence of humans: the existence of such a cause and the existence of macroeconomics are conditionally independent events, given the existence of humans.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T02:32:13.776Z · LW(p) · GW(p)

Right, I was responding to Dreaded_Anomaly's argument that interesting things tend not to be caused by agenty things, which was intended as a counterargument to my observation that interesting things tend to be caused by agenty things. The exchange was unrelated to the argument about the relatively (ab)normal interestingness of this universe. I think that is probably the reason for the downvotes on my comment, since without that misinterpretation it seems overwhelmingly correct.

Edit: Actually, I misinterpreted the point of Dreaded_Anomaly's argument, see above.

comment by Dreaded_Anomaly · 2011-01-20T03:38:14.999Z · LW(p) · GW(p)

I'm not sure how an especially interesting time (improbable or otherwise) occurring ~13.7 billion years after the universe began implies the existence of God.

comment by DSimon · 2011-01-23T21:30:53.198Z · LW(p) · GW(p)

I still contend that we live in a very improbably interesting time, i.e. on the verge of a technological singularity. Nonetheless this is contentious and I haven't done the back of the envelope probability calculations yet.

Ack! Watch out for that most classic of statistical mistakes: seeing something interesting happen, going back and calculating the probability of that specific thing (rather than interesting things in general!) having happened, seeing that that probability is small, and going "Ahah, this is hardly likely to have happened by chance, therefore there's probably something else involved."
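
A toy illustration of the trap (the outcome count and the "interesting" fraction are made-up numbers): the specific history we observed is astronomically improbable, while some interesting history or other is not.

```python
# Compare P(the specific interesting outcome we saw) with P(any interesting
# outcome at all). Outcome count and "interesting" fraction are made up.
import random

N_OUTCOMES = 1_000_000     # equally likely possible histories
N_INTERESTING = 10_000     # how many of them would strike us as interesting
TRIALS = 100_000

specific = any_interesting = 0
for _ in range(TRIALS):
    outcome = random.randrange(N_OUTCOMES)
    specific += (outcome == 0)               # the one history we happened to see
    any_interesting += (outcome < N_INTERESTING)

print("P(specific)        ~", specific / TRIALS)         # ~1e-6: looks miraculous
print("P(any interesting) ~", any_interesting / TRIALS)  # ~1e-2: unremarkable
```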

Replies from: datadataeverywhere, Will_Newsome
comment by datadataeverywhere · 2011-01-23T23:41:40.913Z · LW(p) · GW(p)

In this case, I think Fun Theory specifies that there are an enormous number of really interesting things, each of minuscule individual probability, but highly likely as an aggregate.

comment by Will_Newsome · 2011-01-23T22:01:26.303Z · LW(p) · GW(p)

Of course. Good warning though.

comment by Jack · 2011-01-20T19:33:39.731Z · LW(p) · GW(p)

The existence of the universe is actually very strong evidence in favor of theism. It just isn't nearly strong enough to overcome the insanely low prior that is appropriate.

comment by jacob_cannell · 2011-01-26T00:29:34.956Z · LW(p) · GW(p)

Evidence allows one to discriminate between theories and rule out those incompatible with observational history.

The best current fit theory to our current observational history is the evolution of the universe from the Big Bang to now according to physics.

If you take that theory it also rather clearly shows a geometric acceleration of local complexity and predicts (vaguely) Singularity-type events as the normal endpoints of technological civilizations.

Thus the theory also necessarily predicts not one universe, but an entire set of universes embedded in a hierarchy starting with a physical parent universe.

Our current observational history is compatible with being in any of these pocket universes, and thus we are unlikely to be so lucky as to be in the one original parent universe.

Thus our universe in all likelihood was literally created by a super-intelligence in a parent universe.

We don't need any new evidence to support this conclusion, as it's merely an observation derived from our current best theory.

comment by Wei Dai (Wei_Dai) · 2011-01-20T04:07:53.166Z · LW(p) · GW(p)

But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?

To a non-scientifically-literate person, I might say that I think electrons exist as material objects, whereas to a physicist I would invoke Tegmark's idea that all that exist are mathematical structures.

One way to make sense of this is to think about humanity as a region in mind space, with yourself and your listener as points in that region. The atheist who hasn't heard about Bostrom/Tegmark yet is sitting between you and your listener, and you're just using atheism as a convenient landmark while trying to point your listener in your general direction.

It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence.

Why do you say that? I don't think anyone has gone mad or otherwise suffered really bad consequences from thinking about Bostrom/Tegmark-like ideas... (Umm, I guess some people had nightmares after hearing about Roko's idea, but still, it doesn't seem that bad overall.)

Replies from: Document, Will_Newsome
comment by Document · 2011-01-20T06:42:37.764Z · LW(p) · GW(p)

One way to make sense of this is to think about humanity as a region in mind space, with yourself and your listener as points in that region. The atheist who hasn't heard about Bostrom/Tegmark yet is sitting between you and your listener, and you're just using atheism as a convenient landmark while trying to point your listener in your general direction.

The listener in this case being a theist you're trying to explain your epistemic position to, I assume. (It took me a moment to figure out the context.)

I don't think anyone has gone mad or otherwise suffered really bad consequences from thinking about Bostrom/Tegmark-like ideas...

Possibly related: "(Hugh) Everett's daughter, Elizabeth, suffered from manic depression and committed suicide in 1996 (saying in her suicide note that she was going to a parallel universe to be with her father" (via rwallace).

Replies from: shokwave
comment by shokwave · 2011-01-20T13:08:27.742Z · LW(p) · GW(p)

Possibly related:

My gut feeling is the causal flow goes "manic depression -> suicide, alternate universes" rather than "alternate universes -> manic depression -> suicide".

Replies from: Vaniver
comment by Vaniver · 2011-01-20T18:26:00.801Z · LW(p) · GW(p)

Honestly, I wouldn't be that sure. On this very site I've seen people say their reason for signing up for cryonics was their belief in MWI.

It would not surprise me if "suicide -> hell" decreases the overall number of suicides and "suicide -> anthropic principle leaves you in other universes" increases the overall number of suicides.

Replies from: ata
comment by ata · 2011-01-20T18:45:15.156Z · LW(p) · GW(p)

Honestly, I wouldn't be that sure. On this very site I've seen people say their reason for signing up for cryonics was their belief in MWI.

Really? What's the reasoning there (if you remember)?

Replies from: Vaniver
comment by Vaniver · 2011-01-20T19:13:28.021Z · LW(p) · GW(p)

The post is here. The reasoning as written is:

Cryonics is reasonable - Due to reading and understanding the quantum physics sequence, I ended up contacting Rudi Hoffman for a life insurance quote to fund cryonics. It’s only a few hundred dollars a year for me. It’s well within my budget for caring about myself and others... such as my future selves in forward branching multi-verses.

My comments on the subject (having cut out the tree debating MWI) can be found here.

comment by Will_Newsome · 2011-01-20T18:56:26.797Z · LW(p) · GW(p)

Why do you say that? I don't think anyone has gone mad or otherwise suffered really bad consequences from thinking about Bostrom/Tegmark-like ideas... (Umm, I guess some people had nightmares after hearing about Roko's idea, but still, it doesn't seem that bad overall.)

I meant that a lot of arguments about what kinds of objectives a creator god might have, for example, would be very tricky to do right, with lots of appeals to difficult-to-explain Occamian intuitions. Maybe this is me engaging in the typical mind fallacy, though, and others would not have this problem. People going crazy is a whole other problem. Currently people don't think very hard about cosmology or decision theory or what not. I think this might be a good thing, considering how crazy the Roko thing was.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-01-24T03:34:13.960Z · LW(p) · GW(p)

I see. I think at this point we should be trying to figure out how to answer such questions in principle with the view of eventually handing off the task of actually answering them to an FAI, or just our future selves augmented with much stronger theoretical understanding of what constitute correct answers to these questions. Arguing over the answers now, with our very limited understanding of the principles involved, based on our "Occamian intuitions", does not seem like a good use of time. Do you agree?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-24T03:51:14.152Z · LW(p) · GW(p)

It seems that people build intuitions about how general super-high-level philosophy is supposed to be done by examining their minds as their minds examine specific super-high-level philosophical problems. I guess the difference is that in one case you have an explicit goal of being very reflective on the processes by which you're doing philosophical reasoning, whereas the sort of thing I'm talking about in my post doesn't imply a goal of understanding how we're trying to understand cosmology (for example). So yes, I agree that arguing over the answers is probably a waste of time, but arguing over which ways of approaching answers are justified seems very fruitful. (I'm not really saying anything new here, I know -- most of Less Wrong is about applying cognitive science to philosophy.)

As a side note, it seems intuitively obvious that Friendliness philosophers and decision theorists should try and do what Tenenbaum and co. do when trying to figure out what Bayesian algorithms their brains might be approximating in various domains, sometimes via reflecting on those algorithms in action. Training this skill on toy problems (like the work computational cognitive scientists have already done) in order to get a feel for how to do similar reflection on more complicated algorithms/intuitions (like why this or that way of slicing up decision theoretic policies into probabilities and utilities seems natural, for instance) seems like a potentially promising way to train our philosophical power.

I think we agree that debating e.g. what sorts of game theoretic interactions between AIs would likely result in them computing worlds like ours is probably a fool's endeavor insofar as we hope to get precise/accurate answers in themselves and not better intuitions about how to get an AI to do similar reasoning.

comment by PhilGoetz · 2011-01-23T20:24:39.383Z · LW(p) · GW(p)

I'm technically some kind of theist, because I believe this world is likely to be a simulation (although I don't believe it in my gut). I tell people I'm an atheist because telling them the more-accurate truth, that I am a theist, conveys negative information because of how they inevitably interpret it.

It's a reasonable thing to point out: Why do LWers criticize theism so heavily when they may be theists?

There's a confusion caused because our usage of the term doesn't distinguish between "theist re. this universe I'm in" and "theist for the root universe". Possibly because there may be no one in the latter category who both believes in multiple levels of simulated universes and believes that the original root universe was created by a deity.

Which definition is more usable (makes more distinctions about how you should act depending on whether you are a theist): Theist for this universe, or theist for root universe?

Considering whether your current universe was made by a god might seem to have more impact on your behavior. But considering whether the root universe was made by a god might have more impact on your philosophy and ethics.

Replies from: lukstafi
comment by lukstafi · 2011-01-29T00:01:44.817Z · LW(p) · GW(p)

Considering whether your current universe was made by a god might seem to have more impact on your behavior. But considering whether the root universe was made by a god might have more impact on your philosophy and ethics.

Would you like to explain your point of view on what the impact is in each case, or link to relevant discussion? Is it "be on the lookout for miracles"? Why wouldn't we just go about our business as usual in a simulation, as opposed to a "root universe"?

Replies from: PhilGoetz
comment by PhilGoetz · 2013-09-02T15:53:35.821Z · LW(p) · GW(p)

I don't mean that it has to do with which universe we are in. A lot of people believe, for reasons which have never been clear to me, that if a God created the universe, then that God's opinions have special moral status. I was presuming that that God does not have special moral status if it had been created by another God, or through evolution. But I don't know what Christians would say. Possibly they would refuse to consider the scenario.

Replies from: lukstafi, shminux, Lumifer
comment by lukstafi · 2013-09-03T19:07:32.183Z · LW(p) · GW(p)

If God created the universe, then that's some evidence that He knows a lot. Not overwhelming evidence, since some models of creation might not require the creator to know much.

comment by shminux · 2013-09-03T19:17:23.407Z · LW(p) · GW(p)

But I don't know what Christians would say. Possibly they would refuse to consider the scenario.

They should refuse. Asking wrong questions has been a temptation by the Devil since the times of the original sin. A good Christian should know when to stop.

comment by Lumifer · 2013-09-03T19:28:53.930Z · LW(p) · GW(p)

if a God created the universe, then that God's opinions have special moral status

Think about it from a slightly different perspective: the claim is that the universe has morality baked into it -- God created the universe such that moral laws have the same status as laws of physics. In other words, the claim is that morality is objective and is embedded in reality. It's not an "opinion" at all.

God does not have special moral status if it had been created by another God, or through evolution

In Christianity (or Judaism, or Islam) God cannot have been created (by somebody else or through evolution). In theology that's one of the biggest differences between God and the world -- one is uncreated and the other is created.

comment by jimrandomh · 2011-01-20T01:47:19.942Z · LW(p) · GW(p)

Tegmark cosmology implies not only that there is a universe which runs this one as a simulation, but that there are infinitely many such universes and infinitely many such simulations. In some fraction of those universes, the simulation will have been designed by an intelligent entity. In some smaller fraction, that entity has the ability to mess with the contents of the simulation (our universe) or copy data out of it (eg, upload minds and give them afterlives). My theism is equal to my estimate of this latter fraction, which is very small.

Replies from: Perplexed, Will_Newsome, Oligopsony, Leonhart
comment by Perplexed · 2011-01-20T02:17:11.255Z · LW(p) · GW(p)

Tegmark cosmology implies not only that there is a universe which runs this one as a simulation, but that there are infinitely many such simulations.

I'm not sure that this is true. My understanding is that IF a universe which runs this one as a simulation is possible, THEN Tegmark cosmology implies that such a universe exists. But I'm not sure that such a universe is possible. After all, a universe which contains a perfect simulation of this one would need to be larger (in duration and/or size) than this one. But there is a largest possible finite simple group, so why not a largest possible universe? I am not confident enough of my understanding of the constraints applicable to universes to be confident that we are not already in the biggest one possible.

There is a spooky similarity between the Tegmark-inspired argument that we may live in a simulation and the Godel/St. Anselm-inspired argument that we were created by a Deity. Both draw their plausibility by jumping from the assertion that something (rather poorly characterized) is conceivable to the claim that that thing is possible. That strikes me as too big of a jump.

Replies from: magfrump, jimrandomh
comment by magfrump · 2011-01-20T06:49:28.753Z · LW(p) · GW(p)

There isn't a largest finite simple group. There's a largest exceptional finite simple group.

Z/pZ is finite and simple for all primes p, and if you think there is a largest prime I have some bad news...
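
For completeness, the standard one-line argument via Lagrange's theorem:

```latex
% By Lagrange, any subgroup H of Z/pZ has order dividing p; p prime forces
% |H| = 1 or p, so the only (normal) subgroups are the trivial ones.
H \le \mathbb{Z}/p\mathbb{Z}
  \;\Longrightarrow\; |H| \mid p
  \;\Longrightarrow\; |H| \in \{1,\, p\}
  \;\Longrightarrow\; H = \{0\} \ \text{or} \ H = \mathbb{Z}/p\mathbb{Z}
```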

Replies from: Perplexed
comment by Perplexed · 2011-01-20T15:01:46.742Z · LW(p) · GW(p)

Doooohhh!

Thx.

comment by jimrandomh · 2011-01-20T02:39:45.755Z · LW(p) · GW(p)

I'm not sure that this is true. My understanding is that IF a universe which runs this one as a simulation is possible, THEN Tegmark cosmology implies that such a universe exists. But I'm not sure that such a universe is possible.

You're right, that is an additional requirement. Nevertheless, it seems very highly likely to me that such a universe is possible; for it to be otherwise would imply something very strange about the laws of physics. The most-existent universe simulating ours might exist to a degree 1/BB(100) times as much as our universe exists, though; in that case, they would "exist", but not for any practical purposes. This seems more likely than our universe having some property we don't know about that makes it impossible to simulate.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-20T03:28:52.963Z · LW(p) · GW(p)

If one accepts general Tegmark, is there any natural measure for describing how common different universes should be in any meaningful sense?

Replies from: jimrandomh, Perplexed, ata
comment by jimrandomh · 2011-01-20T04:00:47.744Z · LW(p) · GW(p)

Yes, but unfortunately, there are many measures to choose from, and you can't possibly tell which is correct until you've visited Permutation City and at least a dozen of its suburbs.

comment by Perplexed · 2011-01-20T03:47:21.448Z · LW(p) · GW(p)

I agree with the question. It may make sense to attach "probabilities of existing" to universes arising in a chaotic inflation model, but not, I think, in an "ultimate ensemble" multiverse, which seems to be the one being examined here.

But, to be honest, I had never even considered the possibility that a particularly large bubble universe might contain a simulation of a much smaller bubble. Inflation, as I understand it, does make it possible for a simulation of one small piece of physical reality to encompass an entire isolated 'universe'.

comment by ata · 2011-01-20T03:37:28.834Z · LW(p) · GW(p)

Not yet, as far as I know. Big World cosmology seems to be going in the right direction, but it's not yet understood well enough that we should be coming to any epistemological or ethical conclusions based on it.

comment by Will_Newsome · 2011-01-20T02:02:39.971Z · LW(p) · GW(p)

Clarifying: I'm guessing that by 'ability' you mean 'ability and inclination'?

Replies from: jimrandomh
comment by jimrandomh · 2011-01-20T02:15:11.224Z · LW(p) · GW(p)

Right. Actually, forget about both of those; all that matters is whether it actually does modify the simulation's contents or copy out data that includes a mind at least once. And, come to think of it, the intervention would also have to be inside our past or future light cone, which might lower the fraction pretty substantially (it means any outer universe which instantiates our entire infinite universe, but makes only finitely many interventions, doesn't count).

Although - there are some interpretations of consciousness under which, upon death, the fraction of enclosing universes which copy out minds doesn't matter, only the proportions of them with different qualities. In that case, the universe would act as though there were no gods or outer universes until you died or performed enough iterations of quantum suicide, after which you'd end up in a different universe. I'm not sure how much credence I give to those interpretations.

comment by Oligopsony · 2011-01-20T09:00:54.752Z · LW(p) · GW(p)

My theism is equal to my estimate of this latter fraction, which is very small.

What does "fraction" mean here?

comment by Leonhart · 2011-01-20T14:07:49.986Z · LW(p) · GW(p)

It seems to me that, if we insist on using simulation hypotheses as a model for theism, this has to be narrowed still further. Theism adds the constraint that though $deity is simulating us, no-one is simulating $deity; He's really really real and the buck stops with Him. We live in the floor just above reality's basement; isn't that nice.

I think that this might be what Eliezer's quote about "ontological distinctness" refers to, but I'm not sure.

Replies from: jimrandomh, TheOtherDave, Document
comment by jimrandomh · 2011-01-20T14:09:46.250Z · LW(p) · GW(p)

Monotheism requires that, but theism doesn't. And unless there are some universes that are for some reason impossible to simulate, Tegmark cosmology implies that there are no universes for which there are no universes simulating them. Is-God-of is a two-place predicate.

comment by TheOtherDave · 2011-01-20T14:38:35.906Z · LW(p) · GW(p)

If one were interested in salvaging the correspondence, one could argue that there's a chain of simulators-simulating-simulators and it's that chain (which extends down to "reality's basement") that theists label as a deity.

That said, I see no point in allowing ontology to get out ahead of epistemology in this area. Sure, maybe all this stuff is going on. Maybe it isn't. Unless these conjectures actually cash out somehow in terms of different expectations about observable phenomena, there seems little point to talking about them.

comment by Document · 2011-01-20T14:22:28.758Z · LW(p) · GW(p)

Nitpick: Will isn't the only self-identified theist you'd have to convince of that.

comment by byrnema · 2011-01-24T05:13:47.474Z · LW(p) · GW(p)

Theists are wrong; is theism?

I think this is an interesting question! If rationalists speculated about the origin of the universe, what would they come up with? What if 15 rationalists made up a think-tank and were charged to speculate about the origin of the universe and assign probabilities to speculations? It would be a grievous mistake to begin with the hypothesis of theism, but could they end up with it on their list, with some non-negligible probability?

I don't think so. The main premise of the theistic religions is that an entity (a person? a mind?) created us and that this entity is like a person and like a parent: it chose to create us (agency), wants the best for us, and authoritatively defines what is good behavior. This is too obviously an artifact of human psychology. Being children with parents is such an important part of our biology it's certainly going to be an important component of our psychology. (Don't various psychological theories claim that 'growing up' means internalizing the authority of parents as part of our psyche?)

The simulation hypothesis? This is also an anthropomorphic, privileged hypothesis, but with the advantage of being quite possible. So humans could do it or could have done it. (Being human, they could do something anthropomorphic like that.) But the rationalists in my think-tank aren't charged with the probability of the simulation hypothesis. Deciding we might be in a simulation only pushes the question further out -- what's the origin of the universe that's simulating the others?

Given how 'weird' it must be to create the universe (to create everything), I think we must decide that this creator is outside our comprehension. This creator (agent or thing or mechanism) not only created everything, it contains the explanation for why there is anything at all rather than nothing, and what 'something' and 'nothing' even mean*.

I think that the rationalists would come out of their conference with the conclusion that any adjectives that have ever been used to describe the creator -- omniscient, benevolent, omnipotent; or even 'agenty' don't make any sense in the context of such a thing.

In particular, it seems just silly to be concerned about whether this thing has a 'mind'. What would it do with this mind? Other than create the universe, exactly as it has done / been doing. It seems like a mind is a useful thing humans have to think through stuff and make decisions. To make computations about causality given limited information. A mind would be irrelevant outside causality and information. Probably 'intention' would be too, so that challenges 'agency'.

... I can't think of anything interesting that the rationalists could even apply, speculatively, to the entity: creator that would make any sense.

* Even 'creation' doesn't make sense outside of time, but I mean the 'mechanism' at whatever level of abstraction that would explain the universe to a mind that could understand it.

Replies from: byrnema
comment by byrnema · 2011-01-24T16:51:19.289Z · LW(p) · GW(p)

I'll develop my thoughts on why the description 'agenty' can't sensibly be applied to the creator, since wondering why agency should be a key question is what originally motivated my comment above.

You can search 'agenty' and find many comments on this page that discuss whether we should speculate that the creator has agency. I found myself wondering throughout these comments what is specifically being meant by this. If the creator is 'agenty', what properties must it have and are those properties necessarily interesting?

I could probably look around and find a definition I would like better, but my definition of 'agenty' when I first start thinking about it is that this has meaning in a specifically human context.

Broadly, something 'agenty' is something that makes decisions according to a complex decision tree algorithm. This is a human-context-specific definition because "complex" means relative to what we consider complex. A mammal makes complex decisions and thus is 'agenty' while a simple process like water makes simple decisions (described by a small number of equations and the properties of the immediate physical space) and is not agenty. A complex inanimate thing (like 'evolution') and a simple animate thing (like a virus) would give us pause, straining our immediate, concrete conception of agency.

I'm willing to say that evolution has agency (it has goals -- long-term stable solutions -- and complicated ways of achieving these goals) and water has simple agency. This is because, in my opinion, what we really meant when we made the agency dichotomy between humans and water is that humans have free will and water doesn't. But finally, with a deterministic world view, this distinction dissolves. Humans have as much agency as anything else, but our decision algorithm is very complex to us, whereas we can often reliably predict what water will do.

Then to apply this concept of agency to the mechanism of creation of the universe... All the rules and steady states of the universe could be interpreted as its 'intentions' and, as such, it would have very complex agency. Another person may have a different set of meanings that they associate with agency, intention, etc., and consider this a terrible anthropomorphism if my words were mapped to their meanings. However, I don't think it reflects an actual difference in beliefs about the territory.

If someone reading this has a different ontology, what would you specifically mean by the creator having agency, if it did?

comment by TobyBartels · 2011-01-20T01:43:01.726Z · LW(p) · GW(p)

Part of the problem here is that there's no clear meaning of the word ‘god’ (taking for granted that ‘theism’ and ‘atheism’ are defined in terms of it). I usually identify as ‘secular humanist’ rather than ‘atheist’, mostly because it's more precise, but also because I have seen people define ‘god’ in such a way that I believe that one might well exist. These have all been very vague definitions (more along pantheistic than monotheistic lines), but they're not gratuitous (like defining ‘god’ to mean, say, my nose), and by these lights I'm merely a (weak) agnostic.

In particular, if one defines ‘god’ as a person who created the world, then (depending on exactly what ‘person’ and ‘world’ mean) the simulation hypothesis would indeed imply the existence of a god. You seem to be hinting at this, while other respondents deny it. You all may just be talking about different things. (I will sometimes say, if pressed, that I do not believe in a person who created the world, using precisely those words, but then I don't buy the simulation argument.)

Of course, one can argue over what ‘god’ or ‘atheist’ ought to mean, in order to communicate most effectively with other people. For my part, unless I'm speaking with (or about) a theist whose beliefs I more or less understand, I don't usually use them at all.

comment by [deleted] · 2011-01-20T00:59:12.771Z · LW(p) · GW(p)

Agreed. I think this is a cultural thing rather than a truly rational thing. I was brought up as an atheist, and would still describe myself as such, but I wouldn't give a zero probability to the simulation argument, or to Tipler's Omega Point, or whatever (I wouldn't give a high probability to either - and Tipler's work after about 1994 has been obvious ravings), and I can imagine other scenarios in which something we might call God might exist. I don't see myself changing my mind on the theism question, but I don't consider it a closed one.

comment by Dr_Manhattan · 2011-01-23T20:40:53.242Z · LW(p) · GW(p)

When I abandoned religion, a friend of mine did the same at about the same time. We spoke recently and it turned out that he self-labeled as agnostic, me - "atheist". We discussed this a bit and I said something to the extent that "I do not see a shred of justice in the world that would indicate the working of a personal god; if there is something like a god that runs the universe amorally, we may as well call it physics and get on with it".

It seems that you want to draw the additional distinction of "agenty" things vs. dumb gears, but as long as they only "care" about persons as atoms, vs. moral agents, who cares? It admittedly tickles curiosity, but will hardly change the program...

Replies from: Jack
comment by Jack · 2011-01-23T20:45:56.526Z · LW(p) · GW(p)

What makes you think an agenty, simulator-type god wouldn't care about persons as moral agents?

Replies from: Psy-Kosh, Dr_Manhattan
comment by Psy-Kosh · 2011-01-23T20:48:39.438Z · LW(p) · GW(p)

An agenty simulator type god that actually did care about persons as moral agents would have created a very different universe than this one (assuming they were competent).

Replies from: Jack, jacob_cannell
comment by Jack · 2011-01-23T21:06:21.599Z · LW(p) · GW(p)

Well, if it were chiefly concerned with us having a lot of fun, or not experiencing pain, or fulfilling more of our preferences, then yes. But maybe the simulator is trying to evolve companions. Or maybe it is chiefly concerned with answering counter-factual questions and so we have to suffer for it to get the right answers... but that doesn't mean the simulator doesn't care about us at all. Maybe it saves us when we die and are no longer needed for the simulation. Or maybe the simulator just has weird values and this is their version of a eutopia.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-04T10:24:21.850Z · LW(p) · GW(p)

"Companions, the creator seeketh, and not corpses--and not herds or believers either. Fellow-creators the creator seeketh--those who grave new values on new tables."

comment by jacob_cannell · 2011-01-27T02:11:27.417Z · LW(p) · GW(p)

I find that the simulation argument leads us to believe just the opposite.

Future posthumans will be descended in one form or another from people alive today. Some of them may be uploads of people who actually were alive today, some may have been raised up as new biological humans or uploads, or even just loosely based on human minds through reading and absorbing our culture.

If these future posthumans share much of the same range of values that we have, many of them will be interested in the concept of resurrecting the dead - recreating likely simulations of deceased, lost humans from their history - whether personal or general.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-27T19:31:19.207Z · LW(p) · GW(p)

There was already a thread on this. The general consensus seems to be that it isn't practical, even if it's possible.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T00:59:27.105Z · LW(p) · GW(p)

Hmm from my reading of the thread it doesn't look like much of a consensus.

I may want to revive this - the arguments against practicality don't seem convincing from an engineering perspective.

From a high quality upload's scanned mind one should get a great deal of information about the upload's closest friends, relatives, etc. The data from any one such upload may not be overwhelming, but you'd start with a large population of such uploads. People who were well known and loved would be easier cases, but you could also supplement the data in many cases with low-quality scans from poorly preserved bodies.

This should give one prior generation. Going back another previous generation would get murkier, but is still quite possible, especially with all the accessory historical records.

The farther back you go, the less 'accurate' the uploads become, but the less and less important this 'accuracy' becomes.

For example, assuming I become a posthuman, I will be interested in bringing back my grandfather. There is a huge space of possible minds that could match my limited knowledge and beliefs about this person I never met. Each of them would fully be my grandfather from my subjective perspective, and would fully be my grandfather from their own subjective perspective.

There is no objective standard frame of reference from which to evaluate absolute claims of personal identity. It is relative.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-28T01:48:09.316Z · LW(p) · GW(p)

But if you simulate anything other than the actual brain states of the people in question, then they won't behave in exactly the same way. No matter how many other people's knowledge of me you integrate, for example, you won't have the data to predict what I'll eat for breakfast tomorrow with any accuracy (because I almost invariably eat breakfast alone.) Tiny differences like this will quickly propagate to create much larger ones between the simulation and the reality. Jump forward a few generations and you have zero population overlap between the new generation of the simulation and the next generation that was born in reality. If you're attempting historical recreation, this would be a pretty useless way to go about it.

If you wanted to create a simulation that was an approximation of a particular historical period at one point, but quickly divorced from it as it ran forward, that would be much more plausible, but why would you want to? Everything I can think of that could be accomplished in such a way could more easily be accomplished by doing something else.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T03:33:52.397Z · LW(p) · GW(p)

Jump forward a few generations and you have zero population overlap between the new generation of the simulation and the next generation that was born in reality.

Sure, but that's not relevant towards the goal. There are no 'actual' or exact brain states that canonically define people.

If you created a simulation of an alternate 1950 and ran it forward, it would almost certainly diverge, but this is no different than alternate branches of the multiverse. Running the alternate forward to say 2050 may generate a very different reality, but that may not matter much - as long as it also generates a bunch of variants of people we like.

This brings to mind a book by Heinlein about a man who starts jumping around between branches - "Job: A Comedy of Justice".

Anyway, my knowledge of my grandfather is vague. But I imagine posthumans could probably nail down his DNA and eventually recreate a very plausible 1890 (around when he was born). We could also nail down a huge set of converging probability estimates from the historical record to figure out where he was when, what he was likely to have read, and so on.

Creating an initial population of minds is probably much trickier. Is there any way to create a fully trained neural net other than by actually training it? I suspect that it's impossible in principle; it's certainly impossible in practice today.

In fact, there may be no simple shortcut without going way way back into earlier prehistory, but this is not a fundamental obstacle, as this simulation could presumably be a large public project.

If you're attempting historical recreation, this would be a pretty useless way to go about it.

Yes the approach of just creating some initial branch from scratch and then running it forward is extremely naive. If you'd like I could think of ten vastly more sophisticated algorithms that could shape the branch's forward evolution to converge with the main future worldline before breakfast.

The first thing that pops to mind: The historical data that we have forms a very sparse sampling, but we could use it to guide the system's forward simulation, with the historical data acting as constraints and attractors. In these worlds, fate would be quite real. I think this gives you the general idea, but it relates to bidirectional path tracing.
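
A minimal sketch of the "constraints and attractors" idea (entirely illustrative: a 1-D state standing in for a world, Gaussian noise for the chaotic dynamics, and made-up record values):

```python
# Guided forward simulation: a 1-D "history" evolves chaotically but is softly
# pulled toward sparse known records, which act as constraints/attractors.
# The state, noise model, pull strength, and record values are all made up.
import random

records = {25: 3.0, 50: -1.0, 75: 2.0, 100: 0.0}   # time -> recorded value
PULL = 0.5                                          # attractor strength

x = 0.0
for t in range(1, 101):
    x += random.gauss(0.0, 1.0)                     # free, chaotic evolution
    upcoming = [rt for rt in records if rt >= t]
    if upcoming:
        rt = min(upcoming)                          # nearest future record
        x += PULL * (records[rt] - x) / max(rt - t, 1)  # pull strengthens as rt nears
    if t in records:
        print(f"t={t}: simulated {x:+.2f}, record {records[t]:+.2f}")
```

The pull is a soft constraint, so each run still wanders, but every run is herded through the neighborhood of the known data points - which is the sense in which "fate" operates in such worlds.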

Everything I can think of that could be accomplished in such a way could more easily be accomplished by doing something else.

Such as?

Replies from: Desrtopa
comment by Desrtopa · 2011-01-28T03:45:36.601Z · LW(p) · GW(p)

Yes the approach of just creating some initial branch from scratch and then running it forward is extremely naive. If you'd like I could think of ten vastly more sophisticated algorithms that could shape the branch's forward evolution to converge with the main future worldline before breakfast.

We can get to that if you can establish that there's any good reason to do it in the first place.

Such as?

Your justifications for running such simulations have so far seemed to hinge on things we could learn from them (or on simply creating them for their own sake; it appears that you're jumping between the two), but if we know enough about the past to meaningfully create the simulations, then there's not much we stand to learn from making them. Yes, history could have branched in different ways depending on different events that could have occurred; we already know that. If you try to calculate all the possibilities as they branch off, you'll quickly run out of computing power no matter how advanced your civilization is. If you want to do calculations of the most likely outcomes of a certain event, you don't have to create a simulation so advanced that it appears to be a real universe from the inside to do that.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T04:04:42.520Z · LW(p) · GW(p)

We can get to that if you can establish that there's any good reason to do it in the first place.

Excellent!

Your justifications for running such simulations have so far seemed to hinge on things we could learn from them (or on simply creating them for their own sake; it appears that you're jumping between the two)

The two are intertwined - we can learn a great deal from our history and ancestors while simultaneously valuing it for other reasons than the learning.

Thinking is just a particular form of approximate simulation. Simulation is a very precise form of thinking.

Right now all we know about our history is the result of taking a small collection of books and artifacts and then thinking a lot about them.

Why do we write books about Roman History and debate what really happened? Why do we make television shows or movies out of it?

Consider this just the evolution of what we already do today, for much of the same reasons, but amplified by astronomical powers of increased intelligence/computation generating thought/simulation.

If you try to calculate all the possibilities as they branch off, you'll quickly run out of computing power no matter how advanced your civilization is.

This is what we call a naive algorithm, the kind you don't publish.

If you want to do calculations of the most likely outcomes of a certain event, you don't have to create a simulation so advanced that it appears to be a real universe from the inside to do that.

Calculations of the likely outcomes of certain events are the mental equivalents of thermostat operations - they are the types of things you do and think about when you lack hyperintelligence.

Eventually you want a nice canonical history. Not a book, not a movie, but the complete data set and recreation. As it is computed it exists; eventually perhaps you merge it back into the main worldline, perhaps not, and once it is done and completed you achieve closure.

Put another way, there is a limit where you can know absolutely every conceivable thing there is to know about your history, and this necessitates lots of massively super-detailed thinking about it - aka simulation.

Replies from: Desrtopa, wedrifid
comment by Desrtopa · 2011-01-28T13:39:27.358Z · LW(p) · GW(p)

Why do we write books about Roman History and debate what really happened? Why do we make television shows or movies out of it?

Consider this just the evolution of what we already do today, for much of the same reasons, but amplified by astronomical powers of increased intelligence/computation generating thought/simulation.

This is the kind of naive forward extrapolation that gets you sci-fi dystopias. Most of the things we do today don't bear extrapolating to logical extremes, certainly not this.

Calculations of the likely outcomes of certain events are the mental equivalents of thermostat operations - they are the types of things you do and think about when you lack hyperintelligence.

Eventually you want a nice canonical history. Not a book, not a movie, but the complete data set and recreation.

No I don't. I think you should try asking more people whether this is actually something they would want, with knowledge of the things they could be doing instead, rather than assuming it's a logical extrapolation of things that they do want. If I could do that, it wouldn't even make the bottom of the list of things I'd want to do with that power.

Put another way, there is a limit where you can know absolutely every conceivable thing there is to know about your history, and this necessitates lots of massively super-detailed thinking about it - aka simulation.

The simulation doesn't teach us more than we already know about history. What we already know about history sets the upper bound on how similar we can make it. Given the size of the possibility space, we can only reasonably assume that it's different in every way that we do not enforce similarity on it. The simulation doesn't contribute to knowing everything you could possibly know about your history; that's a prerequisite if you want the simulation to be faithful.

Replies from: Jack, jacob_cannell
comment by Jack · 2011-01-28T23:06:00.075Z · LW(p) · GW(p)

The simulation doesn't teach us more than we already know about history. What we already know about history sets the upper bound on how similar we can make it. Given the size of the possibility space, we can only reasonably assume that it's different in every way that we do not enforce similarity on it. The simulation doesn't contribute to knowing everything you could possibly know about your history; that's a prerequisite if you want the simulation to be faithful.

This would be true if we were equally ignorant about all of history. However, there are some facts regarding history we can be quite confident about - particularly recent history and the present. You can then check possible hypotheses about history (starting from what is hopefully an excellent estimation of starting conditions) against those facts you do have. Given how contingent the genetic make-up of a human is on the timing of their conception, and how strongly genetics influences who we are, it seems plausible that a physical simulation of this part of the universe could radically narrow the space of possibilities given enough computing power. Of course parts of the simulation might remain under-determined, but it seems implausible that a simulation would tell us nothing new about history, as a simulation should be more proficient than humans at assessing the necessary consequences and antecedents of any known event.

Replies from: Desrtopa
comment by Desrtopa · 2011-01-28T23:44:26.956Z · LW(p) · GW(p)

Radically narrow, but given just how vast the option space is, it takes a whole lot more than radically narrowing before you can winnow it down to a manageable set of possibilities.

This post puts some numbers to the possible configurations you can get for a single lump of matter of about 1.5 kilograms. In a simulation of Earth, far more matter than that is in a completely unknown state and free to vary through a huge portion of its possibility space (that's not to say that even an appreciable fraction of matter on Earth is free to vary through all possible states, but the numbers are mind-boggling enough even if we're only dealing with a few kilograms.) Every unknown configuration is a potential confounding factor which could lead to cascading changes. The space is so phenomenally vast that you could narrow it by a billion orders of magnitude, and it would still occupy approximately the same space on the scale of sheer incomprehensibility. You would have to actively and continuously enforce similarity on the simulation to keep it from diverging more and more widely from the original.
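
For a rough sense of that scale, a back-of-the-envelope Bekenstein bound (the ~0.1 m confinement radius is an assumption, and the figures are order-of-magnitude only):

```latex
% Bekenstein bound on the information in m = 1.5 kg within radius R ~ 0.1 m:
I \le \frac{2\pi R E}{\hbar c \ln 2},
\qquad E = mc^2 \approx 1.35 \times 10^{17}\,\mathrm{J}
\quad\Longrightarrow\quad I \lesssim 4 \times 10^{42}\ \mathrm{bits}
```

That allows on the order of 2^(4×10^42) distinguishable configurations, so narrowing the space by even a billion orders of magnitude changes the exponent by a vanishing fraction.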

Replies from: jacob_cannell, Jack
comment by jacob_cannell · 2011-01-29T00:36:51.956Z · LW(p) · GW(p)

This post puts some numbers to the possible configurations you can get for a single lump of matter of about 1.5 kilograms.

Said reference post by AndrewHickey starts with a ridiculous assumption:

Assume, for a start, that all the information in your brain is necessary to resurrect you, down to the quantum level.

This is voodoo-quantum consciousness: the idea that your mind-identity somehow depends on details down to the quantum state. This can't possibly be true - because the vast vast majority of that state changes rapidly from quantum moment to moment in a mostly random fashion. There thus is no single quantum state that corresponds uniquely to a mind, rather there is a vast configuration space.

You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?

There is a single minimal representation of a computer - it reduces exactly down to its circuit diagram and the current values it holds in its memory/storage.

If you don't buy into the idea that a human mind ultimately reduces down to some functionally equivalent computer program, then of course the entire Simulation Argument won't follow.

In a simulation of Earth, far more matter than that is in a completely unknown state and free to vary through a huge portion of its possibility space.

Who cares?

There could be infinite detail in the universe - we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle... and it still wouldn't matter in the slightest.

You only need as much detail in the simulation as... you want detail in the simulation.

Some details at certain spatial scales are more important than others based on their causal leverage - such as the bit values in computers, or the synaptic weights in brains.

A simulation at the human-level scale would only need enough detail to simulate conscious humans, which will probably include simulating down to rough approximations to synaptic-net equivalents. I doubt you would even simulate every cell in the body, for example - unless that itself was what you were really interested in.

There is another significant mistake in the typical feasibility critique of simulationism: assuming your current knowledge of algorithmic simulation is the absolute state of the art from now to eternity, the final word, and that superintelligences won't improve on it in the slightest.

As a starting example, AndrewHickey and you both appear to be assuming that the simulation must maintain full simulation fidelity across the entire spatio-temporal field. This is a primitive algorithm. A better approach is to adaptively subdivide space-time and simulate at multiple scales at varying fidelity using importance sampling, for example.
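
A minimal sketch of that kind of scheme (the thresholds and resolutions are arbitrary; a real adaptive simulator would presumably refine based on estimated causal leverage rather than fixed distances):

```python
# Adaptive level of detail: a 1-D world of 100 cells, refined near observers
# and left coarse elsewhere. Thresholds and resolutions are arbitrary choices.
def cell_resolution(center: float, observers: list[float]) -> int:
    d = min(abs(center - o) for o in observers)
    if d < 1.0:
        return 1024    # inside a sensory bubble: full fidelity
    if d < 10.0:
        return 64      # nearby: moderate fidelity
    return 4           # far away: coarse statistical model only

observers = [12.0, 47.5]
costs = [cell_resolution(x + 0.5, observers) for x in range(100)]
print(sum(costs), "resolution units vs", 1024 * 100, "for uniform full fidelity")
```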

Replies from: Desrtopa
comment by Desrtopa · 2011-01-29T01:08:20.116Z · LW(p) · GW(p)

This is voodoo-quantum consciousness: the idea that your mind-identity somehow depends on details down to the quantum state. This can't possibly be true - because the vast vast majority of that state changes rapidly from quantum moment to moment in a mostly random fashion. There thus is no single quantum state that corresponds uniquely to a mind, rather there is a vast configuration space.

That assumption is not part of my argument. The states of objects outside the people you're simulating ultimately affect everything else once the changes propagate far enough through the simulation.

You can reduce that space down to a smaller bit representation by removing redundant details. Does it really matter if I remove one molecule from one glial cell in your brain? The whole glial cell? All the glial cells?

Underestimating the importance of glial cells could get you a pretty bad model of the brain. But my point isn't simply about the thoughts you'd have to simulate; remove one glial cell from a person's brain, and the gravitational effects mean that if they throw a superball really hard, after enough bounces it'll end up somewhere entirely different than it would have (calculating the trajectories of superballs is one of the best ways to appreciate the propagation of small changes.)

Who cares?

There could be infinite detail in the universe - we could find that there are entire layers beneath the quantum level, recursing to infinity, such that perfect simulation was impossible in principle... and it still wouldn't matter in the slightest.

You only need as much detail in the simulation as... you want detail in the simulation.

Why would you want as much detail in the simulation as we observe in our reality?

comment by Jack · 2011-01-29T00:16:38.559Z · LW(p) · GW(p)

Good point. I'm reconsidering...

I wonder what kind of cascade effect there actually is - perhaps there are parts of the simulation that could be done using heuristics and statistical simplifications. Perhaps that could be done to initially narrow the answer space, and then the precise simulation could be sped up by not having to simulate those answers that contradict the simplified model?

I wonder how a hidden variable theory of quantum mechanics being true would affect the prospects for simulation - assuming a superintelligence could leverage that fact somehow (which is admittedly unlikely).

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-29T00:50:58.673Z · LW(p) · GW(p)

Good point. I'm reconsidering...

What? ;(

I wonder what kind of cascade effect there actually is- perhaps there are parts of the simulation that could be done using heuristics and statistical simplifications.

Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.

Simulating down to the quantum level is overkill to the thousandth degree in most cases, unless you have some causal amplifier - such as a human observing quantum-level phenomena at the quantum scale. In that situation the quantum-scale events have a massive impact, so the simulation subdivides space-time down to that scale in those regions. Similar techniques are already employed today in state-of-the-art simulation in computer graphics.

There will always be divergences in chaotic systems, but this isn't important.

You will never get some exact recreation of our actual history - that's impossible - but you can converge on a set of close traces through the Everett branches. It may even be possible to force them to 'connect' to an approximation of our current branch (although this may take some manual patching).

Replies from: Desrtopa, Jack
comment by Desrtopa · 2011-01-29T14:08:20.521Z · LW(p) · GW(p)

Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.

Not with great accuracy. And that's only a week; making accurate predictions gets exponentially more difficult the further into the future you go. And human society is much more chaotic (contains far more opportunities for small changes to multiply to become large changes) than the weather. The weather is just one of the chaos factors in human society.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-30T02:51:54.058Z · LW(p) · GW(p)

Even using the low-res datasets and simple computers available today (by future standards), we are able to simulate chaotic weather systems about a week into the future.

And that's only a week; making accurate predictions gets exponentially more difficult the further into the future you go.

I'm not sure about this in general - why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms?

And human society is much more chaotic (contains far more opportunities for small changes to multiply to become large changes) than the weather.

Yes and no. Human society is largely determined by stuff going on in human brains. Brains are complex systems, but like computers and other circuits they can be simulated extremely accurately at the particular level of detail where they exhibit scale separation; simulated at coarser levels of detail, they look essentially randomly chaotic.

Turbulence in fluid systems, important in weather, has no scale separation level and is chaotic all the way down.
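
A toy illustration of what scale separation buys you, using a digital circuit (and assuming nothing about brains): a gate-level model of an adder is bit-exact even though every transistor-level detail has been thrown away:

```python
# Scale separation in a digital circuit: a gate-level simulation of a
# ripple-carry adder is bit-exact at its level of detail, even though
# all transistor-level physics (voltages, noise, doping) is abstracted away.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add8(x, y):
    carry, result = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# Matches the hardware's answer for every input pair (mod 256):
assert all(add8(x, y) == (x + y) % 256
           for x in range(256) for y in range(256))
print("gate-level model is exact at its own level of detail")
```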

Replies from: Desrtopa
comment by Desrtopa · 2011-01-30T03:29:50.146Z · LW(p) · GW(p)

I'm not sure about this in general - why do you think that prediction accuracy has an exponential relation to simulation time across the entire space of possible simulation algorithms?

Basic principle of chaos theory. Small-scale interferences propagate to large-scale interferences, while tiny-scale interferences propagate to small scale, and then to large scale. If you try to calculate the trajectory of a superball, you can project it for a couple of bounces just by modeling mass, elasticity and wind resistance. A couple more? You need detailed information on air turbulence. One article, which I am having a hard time locating, calculated that somewhere in the teens of bounces you would need to integrate the positions of particles across the observable universe due to their gravitational effects.

A kid throws a superball. Bounce, bounce, bounce, bounce, bounce, bounce, bounce, bounce, crash. It bounces out into the street, and the kid is hit by a car while chasing after it. In a matter of seconds, deviations on a particulate level have propagated to the societal level. The lives of everyone the kid would have interacted with will be affected, and by extension, the lives of everyone that those people would have interacted with, and so on. The course of history will be dramatically different than if you had calculated those slight turbulence effects that would have sent the ball off in an entirely different direction. You can expect many history-altering deviations like this to occur every minute.
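
The exponential blow-up is easy to see in even the simplest chaotic system. A minimal sketch using the logistic map (the map and the constants are illustrative, not a model of a superball):

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1-x), started a
# tiny distance apart, separate roughly exponentially until the error
# saturates at the size of the attractor.
r = 4.0                      # fully chaotic regime
x, y = 0.400000, 0.400001    # initial separation: 1e-6
for step in range(1, 31):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 5 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.2e}")
# Separation grows like e^(lambda * t) with lambda ~ ln 2 per step here,
# so about 20 steps turn a 1e-6 error into an O(1) difference.
```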

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-30T04:39:54.913Z · LW(p) · GW(p)

I'm aware of the error propagation issues, and in some phenomena they can be magnified up spatial scales. A roll of the dice in Vegas is probably a better example of that than your ball.

I should point out though that this is all somewhat tangential to our original discussion.

But nonetheless...

None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms.

Intuitively it seems to make sense - as each particle's state depends on a few other particles it interacts with at each timestep, the information dependency fans out exponentially over time. However, intuitions in these situations can often be wrong, and this is nothing like a formal proof.

Getting back to the original discussion, none of this is especially relevant to my main points.

Many of the important questions we want to answer are probabilistic - how unlikely was that event? For example, to truly understand the likelihood of life elsewhere in the galaxy and get a good model of galactic development, we will want to understand the likelihood of pivotal events in earth's history - such as the evolution of hominids or the appearance of early life itself.

You get answers to those only by running many simulations and mapping out branches of the metaverse. The die roll turns out differently in each, and in some this leads to different consequences.

In some cases, especially in initial simulations, one can focus on the branches that match most closely to known history, and even intervene or at least prune to enforce this. But eventually you want to explore the entire space.
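
The estimation procedure itself is just Monte Carlo over branches. A minimal sketch, with the "pivotal event" dynamics invented purely for illustration:

```python
import random

# Estimate the probability of a pivotal event as the fraction of
# simulated "branches" in which it occurs. The three-step toy dynamics
# below are invented; a real run would be a full simulation.
def run_branch(rng):
    # hypothetical: the event requires three chancy steps to all succeed
    return all(rng.random() < p for p in (0.5, 0.3, 0.8))

rng = random.Random(0)
runs = 100_000
hits = sum(run_branch(rng) for _ in range(runs))
print(f"estimated P(event) ~ {hits / runs:.3f}  (true value: 0.120)")
```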

Replies from: JoshuaZ, Desrtopa
comment by JoshuaZ · 2011-01-30T04:59:15.237Z · LW(p) · GW(p)

You get answers to those only by running many simulations and mapping out branches of the metaverse. The die roll turns out differently in each, and in some this leads to different consequences.

While this is a good way to get such data, it isn't the only way. If we expand enough to look at a large number of planets in the galaxy, we should arrive at decent estimates simply based on empirical data.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-30T09:17:07.886Z · LW(p) · GW(p)

Certainly expanding our observational bubble and looking at other stars will give us valuable information. Simulation is a way of expanding on that.

However, it's questionable whether or when we will ever make it out to the stars.

Light-years are vast distances for humans, but they will correspond to even vaster spans of subjective time for posthuman civilizations that think thousands or millions of times faster than we do.

It could be that the vast cost of travelling out into space is never worthwhile and those resources are always best used towards developing more local intelligence. John Smart makes a pretty good case for inward expansion always trumping outward expansion.

comment by Desrtopa · 2011-01-30T13:50:23.117Z · LW(p) · GW(p)

If you do probabilistic estimates based on large numbers of simulations though, you can cut down on the fidelity of the simulations dramatically. I know that this is something you're arguing for, but really, there's no good reason to make the simulations as detailed as the universe we observe.

To take forest succession modeling programs (something I have more experience with than most types of computer modeling) as an example, there are some ecological mechanisms that, if left out, will completely change the trends of the simulation, and some that won't, and you can leave those that don't out entirely, because your uncertainty margins stay pretty much the same whether you integrate them or not. If you created a computer simulation of the forest with such fidelity that it contained animals with awareness, you'd use up a phenomenal amount of computing power, but it wouldn't do you any good as far as accuracy is concerned.
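
That judgment can itself be made empirically: run the model with and without a candidate mechanism, and drop the mechanism if the shift it causes is small next to the model's own run-to-run noise. A rough sketch, with invented stand-in dynamics rather than a real succession model:

```python
import random
import statistics

# Drop a mechanism if omitting it shifts the output by less than the
# model's own stochastic noise. The "forest" dynamics here are invented.
def simulate(include_mechanism, rng):
    biomass = 100.0
    for year in range(50):
        biomass *= 1.02 + rng.gauss(0, 0.03)   # growth + stochasticity
        if include_mechanism:
            biomass *= 0.9995                  # tiny extra mortality term
    return biomass

rng = random.Random(1)
with_m = [simulate(True, rng) for _ in range(200)]
without_m = [simulate(False, rng) for _ in range(200)]
shift = abs(statistics.mean(with_m) - statistics.mean(without_m))
noise = statistics.stdev(without_m)
print(f"shift from mechanism: {shift:.1f}, run-to-run sd: {noise:.1f}")
# If shift << sd, the mechanism is below the model's resolving power and
# can be omitted without changing the trends you care about.
```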

If you care about the lives of the people in the past for their own sake, and are capable of creating high fidelity recreations of their personality from the data available to you, why not upload them into the present so you can interact with them? That, if possible, is something that people actually seem to want to do.

But nonetheless...

None of the examples you give actually prove that simulation fidelity has an exponential relation to simulation time across the entire space of possible simulation algorithms.

That's true, they don't constitute a formal proof. Maybe a proof already exists and I'm not aware of it, or maybe not, but regardless, given the information available to us in this conversation right now, the weight of evidence is clearly on the side of such a simulation not being possible, rather than possible. You don't get high-probability future predictions by imagining ways in which our understanding of chaos theory might get overhauled.

comment by Jack · 2011-01-29T01:23:14.632Z · LW(p) · GW(p)

What about genetic mutations from stray cosmic rays? Would evolution have occurred the same way? Would my genetic code be one allele different?

I feel like the quantum level would matter a lot more the earlier you started your simulation.

I'm worried about how motivated my cognition is. I really want this to be possible for very personal reasons- so I am liable to grasp tightly to any plausible argument for close-enough simulation of dead people.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-29T01:48:53.598Z · LW(p) · GW(p)

Well, if you started a sim back a billion years ago, then yes, I expect you'd get a very different earth.

How different is an interesting open problem. Even if hominid-like creatures develop say 10% of the time after a billion years (reasonable), all of history would likely be quite different each time.

For a sim built for the purpose of resurrection, you'd want to start back just a little earlier - perhaps just before the generation was born.

Getting the DNA right might actually be the easiest sub-problem. Simulating biological development may be tougher than simulating a mind, although I suspect it would get easier as development slows.

Hopefully we don't have to simulate all of the 10^13 cells in a typical human body at full detail, let alone the 10^14 symbiotes in the human gut.

It's still an open question whether it's even possible in principle to create a conscious mind from scratch. Currently, complex neural net systems must be created through training - there is no shortcut to just filling in the data (assuming you don't already have it from a scan or something, which of course is inapplicable in this case).

So even a posthuman god may only have the ability to create conscious infants. If that's the case, you'd have the DNA right and then would have to carefully simulate the entire history of inputs to create the right mind.

You'd probably have to start with some actors (played by AIs or posthumans) to kickstart the thing. If that's the general approach, then you could also force a lot of stuff - intervene continuously to keep the sim events as close to known history as possible (perhaps actors play important historical roles even while it's running? an open question). Active intervention would of course make it much more feasible to get minds closer to the ones you'd want.

Would they be the same? I think that will be an open philosophical issue for a while, but I suspect that you could create minds this way that are close enough.

This is interesting enough that it could make a nice follow up paper to the current SA/simulism stuff - or perhaps somebody has already written about it, not sure.

I'm worried about how motivated my cognition is. I really want this to be possible for very personal reasons- so I am liable to grasp tightly to any plausible argument for close-enough simulation of dead people.

It's good you are conscious of that which you wish to be true.

If uploading is possible, then this too should be possible as they rely on the same fundamental assumption.

If there is a computer program data set that recreates (is equivalent to) the consciousness of a particular person, then such a data set also exists for all possible people, including all dead people.

Thus the problem boils down to finding a particular data set (or range) out of many. This may be a vast computational problem for a mind of 10^15 bits, but it should be at least possible in principle.

Replies from: wedrifid
comment by wedrifid · 2011-01-29T02:25:04.771Z · LW(p) · GW(p)

How different is an interesting open problem. Even if hominid-like creatures develop say 10% of the time after a billion years (reasonable), all of history would likely be quite different each time.

How on earth can we know that 10% is reasonable?

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-29T05:07:59.487Z · LW(p) · GW(p)

The "even if" and "say" should indicate the intent - it wasn't even a guess, just an example used as an upper bound.

I'm not convinced the evolution of hominids is a black swan, but it's not an issue I've researched much.

Replies from: wedrifid
comment by wedrifid · 2011-01-29T07:20:59.936Z · LW(p) · GW(p)

The "even if" and "say" should indicate the intent - it wasn't even a guess, just an example used as an upper bound.

The (reasonable) assertion was what struck me.

comment by jacob_cannell · 2011-01-28T22:12:17.572Z · LW(p) · GW(p)

This is the kind of naive forward extrapolation that gets you sci fi dystopias. Most of the things we do today don't bear extrapolating to logical extremes, certainly not this.

Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time.

There is a natural evolutionary progression: dreams/daydreams/visualizations -> oral stories/mythologies -> written stories/plays/art -> movies/television->CG/virtual reality/games->large scale simulations

It isn't 'extrapolating to logical extremes', it is future prediction based on extrapolation of system evolution.

The simulation doesn't teach us more than we already know about history.

Of course it does. What is our current knowledge about history? It consists of some rough beliefs stored in the low precision analog synapses of our neural networks and a bunch of word-symbols equivalent to the rough beliefs.

With enough simulation we could get concise probability estimates or samples of the full configuration of particles on earth every second for the last billion years - all stored in precise digital transistors, for example.

What we already know about history sets the upper bound on how similar we can make it [the simulation].

This is true only for some initial simulation, but each successive simulation refines knowledge, expands the belief network, and improves the next simulation. You recurse.

The simulation doesn't contribute to knowing everything you could possibly know about your history, that's a prerequisite, if you want the simulation to be faithful.

Not at all. Given an estimate of the state of a system at time T and the rules of the system's time evolution (physics), simulation can derive values for all subsequent time steps. The generated data is then analyzed and confirms or adjusts theories. You can then iteratively refine.
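
In code terms the claim is just forward integration: a state, a derivative rule, and a loop. A minimal sketch with a toy system standing in for real physics:

```python
# "State at time T plus the rules of time evolution" is forward
# integration. Explicit Euler on a toy system (dx/dt = -x):
def evolve(state, dt, steps, deriv):
    history = [state]
    for _ in range(steps):
        state = state + dt * deriv(state)   # one explicit Euler step
        history.append(state)
    return history

trace = evolve(state=1.0, dt=0.1, steps=10, deriv=lambda x: -x)
print([round(x, 3) for x in trace])   # approximates exp(-t)
```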

For a quick primitive example, perhaps future posthumans want to understand in more detail why the Roman empire collapsed. A bunch of historian/designers reach some rough consensus on a model (built on pieces of earlier models) to build an earth at that time and populate it with inhabitants (creating minds may involve using stand-in actors for an initial generation of parents).

Running this model forward may reveal that the lead had little effect, that previous models of some Roman military formations don't actually work, that a crop harvest in 32BC may have been more important than previously thought... and so on.

Replies from: wedrifid, Desrtopa
comment by wedrifid · 2011-01-28T22:39:40.712Z · LW(p) · GW(p)

Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time.

With the help of hindsight bias.

comment by Desrtopa · 2011-01-29T14:17:04.069Z · LW(p) · GW(p)

Most of the things we do today are predictable developments of what previous generations did, and this statement holds across time.

As wedrifid says, in the light of hindsight bias. Instead of looking at the past and seeing how reliably it seems to lead to the present, try looking at people who actually tried to predict the future. "Future prediction based on extrapolation of system evolution" has reliably failed to make predictions about the direction of human society that were both accurate and meaningful.

For a quick primitive example, perhaps future posthumans want to understand in more detail why the Roman empire collapsed. A bunch of historian/designers reach some rough consensus on a model (built on pieces of earlier models) to build an earth at that time and populate it with inhabitants (creating minds may involve using stand-in actors for an initial generation of parents).

Running this model forward may reveal that the lead had little effect, that previous models of some Roman military formations don't actually work, that a crop harvest in 32BC may have been more important than previously thought... and so on.

Or you could very easily find them removing the lead from their pipes and wine, and changing their military formations. If you don't already know what their crop harvest in 32BC was like, you can practically guarantee that it won't be the same in the simulation. This is exactly the kind of use that, as I pointed out earlier, if you had enough information to actually pull it off, you wouldn't need to.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-30T00:21:09.567Z · LW(p) · GW(p)

If you don't already know what their crop harvest in 32BC was like, you can practically guarantee that it won't be the same in the simulation. This is exactly the kind of use that, as I pointed out earlier, if you had enough information to actually pull it off, you wouldn't need to.

I'll just reiterate my response then:

Any information about a physical system at time T reveals information about that system at all other times - it places constraints on its configuration. Physics is a set of functions that describe the exact relations between system states across time steps, i.e. the temporal evolution of the system.

We developed physics in order to simulate physical systems and predict and understand their behavior.

This seems then to be a matter of details - how much simulation is required to produce how much knowledge from how much initial information about the system.

For example, with infinite computing power I could iterate through all simulations of earth's history that are consistent with current observational knowledge.

This algorithm computes the probabilities of every fact about the system - the probability of a good crop harvest in 32BC in Egypt is just the fraction of the simulated multiverse for which this property is true.

This algorithm is in fact equivalent to the search procedure in the AIXI universal intelligence algorithm.
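
On a toy scale the algorithm looks like this (a sketch, with a miniature invented "history" space of twelve coin-flip events standing in for configurations of the earth):

```python
from itertools import product

# Enumerate every possible "history", keep those consistent with current
# observations, and read off the probability of any past fact as the
# fraction of consistent histories in which it holds.
histories = list(product([0, 1], repeat=12))

def consistent(h):
    # hypothetical observational constraint: we observed the last 4 events
    return h[-4:] == (1, 0, 1, 1)

surviving = [h for h in histories if consistent(h)]
fact_true = [h for h in surviving if h[3] == 1]   # "event 3 happened"
print(f"{len(surviving)} consistent histories; "
      f"P(event 3 happened) = {len(fact_true) / len(surviving):.2f}")
# AIXI's search is this loop taken to the limit: all programs, weighted
# by simplicity, filtered by consistency with observation.
```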

comment by wedrifid · 2011-01-28T04:28:39.954Z · LW(p) · GW(p)

Thinking is just a particular form of approximate simulation. Simulation is a very precise form of thinking.

I do not believe this is correct. In particular, the 'just a' is not accurate. Approximate simulation is a particular kind of thinking, not the reverse.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T04:57:00.125Z · LW(p) · GW(p)

I'm willing to try on your taxonomy but don't quite understand it.

The term thinking certainly covers a wide variety of computations, but perhaps the most important is prediction.

Does this sound more accurate:

Cortical-forward-simulation is just a particular form of approximate simulation. Simulation in general encompasses all the most precise forms of prediction.

Replies from: wedrifid
comment by wedrifid · 2011-01-28T05:47:00.145Z · LW(p) · GW(p)

Does this sound more accurate:

Cortical-forward-simulation is just a particular form of approximate simulation. Simulation in general encompasses all the most precise forms of prediction.

More accurate, but still not right. Simulation just doesn't have special privileges. Again, the general, absolute claim of "all the most" invalidates the position. You can make, and even logically prove, precise predictions without simulating.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-28T06:46:29.116Z · LW(p) · GW(p)

You can make, and even logically prove, precise predictions without simulating.

How? Got an example?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-28T06:52:01.920Z · LW(p) · GW(p)

If I know an algorithm that outputs 1 or 0 depending on whether the input was prime or not, I can use a different prime checking algorithm without running the whole thing. So, for example, if the algorithm is naive trial division, I can predict its result very quickly using something like Agrawal's algorithm or some variant of Miller-Rabin. This example is in some ways a toy example, but it isn't obvious that one wouldn't have similar examples for more complicated phenomena.
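
Concretely, a sketch of that prediction: a deterministic Miller-Rabin test (using a witness set known to suffice for all n below 2^64) predicts the output of naive trial division without ever running it:

```python
# Predict what a slow prime checker will output without running it,
# using a fast deterministic Miller-Rabin test.
def trial_division(n):                 # the "system being predicted"
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)  # valid for n < 2**64

def miller_rabin(n):                   # the fast "predictor"
    if n < 2:
        return False
    for p in BASES:
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in BASES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# Same answers, radically different costs - prediction without "simulation":
assert all(miller_rabin(n) == trial_division(n) for n in range(2, 10_000))
print(miller_rabin(2**61 - 1))  # True: a Mersenne prime, checked instantly
```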

Replies from: wedrifid, jacob_cannell
comment by wedrifid · 2011-01-28T06:53:21.643Z · LW(p) · GW(p)

This example is in some ways a toy example, but it isn't obvious that one wouldn't have similar examples for more complicated phenomena.

And any example is sufficient to reject a general absolute claim.

comment by jacob_cannell · 2011-01-28T08:16:25.278Z · LW(p) · GW(p)

I'm not sure yet about wedrifid's general point, but I have a few philosophical problems with this particular example.

If Trial Division and Miller-Rabin are functionally equivalent (compute the same results), then they simulate each other - correct? So this is not a counterexample.

And on another whole philosophical level, how do you know your algorithm actually computes whether the input was prime or not? (or how was that originally discovered? and did that discovery require simulation?)

Ultimately if the cortex uses approximate forward simulation, then we can't do anything without some form of simulation.

Replies from: JoshuaZ, wedrifid
comment by JoshuaZ · 2011-01-28T16:21:12.766Z · LW(p) · GW(p)

If Trial Division and Miller-Rabin are functionally equivalent (compute the same results), then they simulate each other - correct? So this is not a counterexample.

This may come down to what you mean by simulate. Certainly a computer scientist would be unlikely to describe that as a simulation. And if you are asserting that anything with the same output as something else can be considered to be simulating it, then your earlier claim becomes tautological. The relevant issue then becomes that this is an example where we can "simulate" something while using much less computation than running it in complete detail (indeed, there's very little resemblance in the internal workings).

And on another whole philosophical level, how do you know your algorithm actually computes whether the input was prime or not? (or how was that originally discovered? and did that discovery require simulation?)

I can prove things without simulating. One can look at code and determine what it does without simulating. The entire point of proving algorithms correct is that that's done by mathematical proof, not by empirical testing for small values.

comment by wedrifid · 2011-01-28T08:41:11.527Z · LW(p) · GW(p)

If Trial Division and Miller-Rabin are functionally equivalent (compute the same results), then they simulate each other - correct?

No, not correct.

comment by Dr_Manhattan · 2011-01-23T21:14:31.366Z · LW(p) · GW(p)

Not wouldn't, doesn't. And I think it doesn't due to lack of evidence.

comment by rosyatrandom · 2011-01-20T01:22:04.037Z · LW(p) · GW(p)

I'm in the 'everything that can exist does so; we're a fixed point in a cloud of possibilities' camp. I'm also an atheist, because I see theism as an extraordinarily arbitrary and restrictive constraint on what should or must be true in order for us to exist.

It's simply too narrow and unjustified for me to take seriously, and the fact that its trappings are naive and full of wishful thinking and ulterior motives means I certainly don't.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T01:41:47.491Z · LW(p) · GW(p)

I'm also an atheist, because I see theism as an extraordinarily arbitrary and restrictive constraint on what should or must be true in order for us to exist.

The way I've been envisioning theism is as a pretty broad class of hypotheses that is basically described as 'this patch of the universe we find ourselves in is being computed by something agenty'. What is your conception of theism that makes it more arbitrary and restrictive than this?

Replies from: rosyatrandom
comment by rosyatrandom · 2011-01-20T10:03:43.919Z · LW(p) · GW(p)

Since my metaphysical position is (and I'm going to have to come up with a better term for it) pan-existence, having gods that create and influence things requires that those possibilities where they don't (or where other, similar-but-different gods do) are somehow rendered impossible or unlikely.

Gods being statistically significant requires some metaphysical reason for them to be so simply in order to stop the secular realities dominating, and the arbitrary focus of theistic gods on humanity and our loose morals only serves to make them ever more over-specified and unlikely.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T00:46:53.357Z · LW(p) · GW(p)

The answer is simple: evolution.

That which replicates is more (a priori) likely than that which does not.

Out of the space of universes, those that spawn many sub-universes will statistically dominate.

Just another rewording of the SA.

Replies from: wedrifid
comment by wedrifid · 2011-01-26T00:52:55.994Z · LW(p) · GW(p)

That which replicates is more (a priori) likely than that which does not.

I dispute the 'a priori' claim. There are cases where this would not be so. I think this is an a posteriori conclusion on the order of 'sun will come up next Tuesday'.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-02-06T23:14:44.872Z · LW(p) · GW(p)

Rosy asked for significant probability mass on God-endowed universes.

Jacob's argument works a priori, not necessarily, but with significant probability mass.

Replies from: wedrifid
comment by wedrifid · 2011-02-06T23:30:14.316Z · LW(p) · GW(p)

Jacob's argument works a priori, not necessarily, but with significant probability mass.

I believe you are mistaken (on this overwhelmingly unimportant question of semantics). The cause and consequence of replication are rather critical for whether being the kind of pan-existential god-universe thing that replicates will make said universe more prolific.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-02-07T00:08:43.503Z · LW(p) · GW(p)

and two highly plausible answers to the questions of cause and consequence are "because of certain features of the universe" and "that are preserved with high probability by replication"

Compare to

"Well there's this 'sun' thing, a giant glowing ball apparently tracing a circular path that, for about half its arc, is obstructed by a large object, creating a sequence of distinct periods in which this sun is visible, one of which will be called Tuesday, ..."

Replies from: wedrifid
comment by wedrifid · 2011-02-07T05:02:59.470Z · LW(p) · GW(p)

You seem to have introduced new assumptions.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-02-07T12:11:05.346Z · LW(p) · GW(p)

My assumptions have significant probability a priori.

comment by Psy-Kosh · 2011-01-20T18:12:50.407Z · LW(p) · GW(p)

Well, agents pretty much tend to be complicated things that need to be explained in terms of more basic things. So if some sort of agent in some sense deliberately created our world... that agent still wouldn't be the most fundamental thing, it would need to be explained in terms of more basic principles. Somewhere along the line there'd have to be "simple math" or such. (Even if somehow you could have an infinite hierarchy of agents, then the basic math type explanation would have to explain/predict the hierarchy of agents.)

As far as "whatever translates to immortal soul", we pretty much mostly know that. We don't know the details of how it works, but we know that it amounts to physical/computational processes in the brain". (Less immortal than we'd like, but that's what we need to do something about.)

Even if an agenty process created our world, how does that alter this fact? It may influence some details (like if there is such an agenty process, we need to work out just how much of a threat that process/being is (and various other details) and thus deal with it accordingly, of course).

However, does our world ultimately look like it's primarily generated via agenty processes or by mindless processes?

comment by DanielVarga · 2011-01-20T11:41:01.870Z · LW(p) · GW(p)

When you talk about the whooole Universe, you should not artificially exclude the intelligent creator from it. And if you do include it, then your question can be rephrased like this: Is it possible that the interaction graph of our Universe has a strange hourglass shape with us in the lower bulb, and some intelligent creator in the upper bulb? I say very unlikely.

The simulation argument may suggest some weird interconnected network of bulbs, but that has nothing to do with theism. When and if humanity becomes aware of our simulators, our reaction will not be worship. Rather, we will try to invade and overpower them, like the protagonists of Greg Egan's Crystal Nights did. (Sorry for the spoiler.)

Maybe you already are aware of this example, but for others who are new to this kind of argument, I recommend the following exercise: Imagine two Universes, both containing intelligent beings simulating the other Universe. Here it is not even meaningful to ask who is the Creator and who is the Creature.

Replies from: JamesAndrix
comment by JamesAndrix · 2011-01-20T12:54:34.992Z · LW(p) · GW(p)

Imagine two Universes, both containing intelligent beings simulating the other Universe.

I don't see how that can really happen. I've never heard a non-hierarchical simulation hypothesis.

Replies from: Vladimir_Nesov, Document
comment by Vladimir_Nesov · 2011-01-20T18:06:23.468Z · LW(p) · GW(p)

I've never heard a non-hierarchical simulation hypothesis.

Consider an agent that has to simulate itself in order to understand the consequences of its own decisions. Of course, there's bound to be some logical uncertainty in this process, but the agent could have an exact definition of itself, and so eventually the ability to see all the facts. For two agents, that's a form of acausal communication (perception). (This is meaningless only in the same sense as the ordinary simulation hypothesis is meaningless.)

comment by Document · 2011-01-20T14:29:20.236Z · LW(p) · GW(p)

I don't see how that can really happen. I've never heard a non-hierarchical simulation hypothesis.

It's one of the implications of a universe that can compute actual infinities; it's been proposed in fiction, but I don't know about beyond that.

Replies from: DanielVarga
comment by DanielVarga · 2011-01-20T16:39:48.517Z · LW(p) · GW(p)

That is correct, and an even better fictional example is the good short story titled I don't know, Timmy, being God is a big responsibility. But this is not exactly what I meant here. I don't propose any non-hierarchical or infinite simulation hypothesis. Rather, all I am saying is that it is not a logical impossibility that two Universes have such a weird yin-yang simulated-simulant relationship. (Even in perfect isolation, just the two of them, without invoking an infinite chain of universes.) Obviously it is acausal, but that is a probabilistic, thermodynamic kind of improbability rather than a logical impossibility.

Maybe an easier such example is a spatially centrally symmetric Universe, where you can meet your exact clone who always does what you do. Or my very favorite, the temporally symmetric Universe, a version of the Gold Universe. Or a Hinduist Universe where time goes in circles. The point is, the idea that we live in a constructed, causally almost-but-not-perfectly isolated part of the Universe seems just an aesthetically displeasing corner case when discussed in the context of all these imaginable interaction networks.

comment by Normal_Anomaly · 2011-01-20T01:16:10.816Z · LW(p) · GW(p)

There's not enough evidence to locate the hypothesis, so while I technically give it a non-zero probability, that probability is not high enough for me to consider it worth significant time to investigate.

As for arguing against it in public: at most one human religion can be true. All the others must be false. So decreasing the amount of religion in the world improves net accuracy. Also and perhaps more importantly, religion is a major source of Dark Side Epistemology. So on the meta-level, minimizing the influence of religion will help people become more rational.

Replies from: wedrifid
comment by wedrifid · 2011-01-20T01:46:41.121Z · LW(p) · GW(p)

There's not enough evidence to locate the hypothesis

That line works a lot better for 'Jehovah' than 'theism', especially if you apply the latter term liberally.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-20T02:49:58.462Z · LW(p) · GW(p)

Huh? I would think if anything it is the other way around. We have something which locates the Jehovah hypothesis, ancient texts claiming the entity's intervention and modern individuals claiming to communicate with the entity. The real issue is that after locating, there are much better explanations for the data.

Replies from: TobyBartels
comment by TobyBartels · 2011-01-20T02:57:32.755Z · LW(p) · GW(p)

If you think that it's easier to locate the hypothesis of Jehovah than the hypothesis of theism, then you're falling victim to a variation of the conjunction fallacy. Belief in Jehovah is itself a variety of theism.

Nevertheless, I agree with you that there's plenty of evidence to locate the hypothesis of Jehovah (and therefore there is at least that much to locate theism), just very little evidence to confirm it when it's examined.

Replies from: JoshuaZ, Miller
comment by JoshuaZ · 2011-01-20T03:05:13.617Z · LW(p) · GW(p)

Yes, you're right. That's an awful conjunction fallacy. Almost textbookish. Ugh.

comment by Miller · 2011-01-20T03:30:46.196Z · LW(p) · GW(p)

I don't think I understand what 'locate the hypothesis' is. I do know what the conjunction fallacy is. I suspect the confusion here is my own...

You can identify a dog with more certainty than identify a mammal, even though all dogs are mammals. What did I miss?

Replies from: JoshuaZ, Desrtopa
comment by JoshuaZ · 2011-01-20T03:38:32.994Z · LW(p) · GW(p)

Locating a hypothesis means to have enough evidence for a hypothesis that one can say that the hypothesis is worth considering at some minimal level. This is necessary because humans have limited cognitive capability so we can't consider every possible hypothesis out there (we can't even practically list them all).

Thus for example, if someone ran up to you on the street and screamed "the mutant aliens are in the sewer. They're powered by draining nuclear power plants!" you probably wouldn't consider the claim much at all, but would rather entertain others (the person is mentally ill, or is engaging in some strange prank would both be more likely).

Toby's point was that my claim that the Jehovah hypothesis could be more easily located than the theist hypothesis must be wrong. Since the theist hypothesis is implied by (or encompasses depending on how you look at it) the Jehovah hypothesis, anything that located the Jehovah hypothesis must be locating the more general theist hypothesis. This is a common cognitive error that humans make called the conjunction fallacy, where people will assign a higher probability to something more specific than something general, even though the general thing is entailed by the specific thing. I'm a bit embarrassed by that actually, since it shows serious failings on my part as a rationalist.

Replies from: TobyBartels, Miller
comment by TobyBartels · 2011-01-20T04:00:25.120Z · LW(p) · GW(p)

The reason that I said ‘a variation of the conjunction fallacy’ is that the standard conjunction fallacy that I know is about assigning probabilities to propositions rather than attending to them. (You might choose to attend to something with a fairly low probability, for example, if its expected consequences are significant enough to overcome this.) Nevertheless, to consider the possibility that Jehovah exists, you must consider the possibility that a god exists.

comment by Miller · 2011-01-20T03:53:22.255Z · LW(p) · GW(p)

Wow, that was fast. I was writing an edit, after looking up the wiki, when I refreshed and it looked almost exactly like your first paragraph. Yes, in absolute probability terms theism must be more probable than Jehovah. Thus, the conjunction fallacy.

At first glance the terminology 'locate the hypothesis' is rather non-intuitive. I'm going to give it some consideration (and I don't think this is the appropriate place anyway) before commenting further on that.

comment by Dreaded_Anomaly · 2011-01-20T01:06:38.459Z · LW(p) · GW(p)

I think the theism/atheism debate is considered closed in the following sense: no one currently has any good reasons in support of theism (direct evidence, or rational/Bayesian arguments). We can't say that such a reason won't show up in the future, but from what we know right now, theism just isn't worth considering. The territory, from all indications, is Godless (and soulless, for that matter), so the map should reflect that.

Replies from: PhilGoetz, Davidmanheim
comment by PhilGoetz · 2011-01-23T20:31:09.374Z · LW(p) · GW(p)

The argument that we probably live in a simulation is the specific argument in support of theism that the OP invokes (but does not mention specifically).

Replies from: jacob_cannell
comment by jacob_cannell · 2011-01-26T00:35:16.168Z · LW(p) · GW(p)

I may add that the SA forces us to adopt theism as a consequence of current physical theory, not as some modification to current theory for which we require new evidence, and this is what makes it especially powerful.

I was an atheist until I updated on the SA, and I have yet to find any rational opposition to it.

comment by Davidmanheim · 2011-01-20T01:17:16.620Z · LW(p) · GW(p)

When you say there are no good reasons in support of theism, I assume you mean the truth of theism, not the idea that it may create positive externalities? Or are you claiming that there is no benefit to theism whatsoever?

If the territory is to be faithfully represented, we cannot say that the existence of a deity is a necessary component, but that doesn't necessarily imply that the existence of religion is a pure negative.

Replies from: Dreaded_Anomaly
comment by Dreaded_Anomaly · 2011-01-20T01:23:01.332Z · LW(p) · GW(p)

Yes, I was just talking about the truth of theism. The existence of religion isn't a pure negative, but I think the human race could do better.

comment by Bugmaster · 2012-03-05T16:59:49.877Z · LW(p) · GW(p)

What about those few of us who don't believe that the Simulation Argument is most probably true? Don't get me wrong, it could be true; I just don't see any evidence to suppose that it is.

On that note, I always understood the word "theism" to mean "gods exist, and they interfere in the workings of our Universe in detectable ways". Isn't someone who believes in entirely unfalsifiable gods functionally equivalent to an atheist?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-05T17:40:48.378Z · LW(p) · GW(p)

Isn't someone who believes in entirely unfalsifiable gods functionally equivalent to an atheist?

If I believe in unfalsifiable gods who prefer that I behave in certain ways (though they do not provide me with any evidence of that preference), and I value the preferences of those gods enough to change my behavior accordingly, then I will behave differently than if I do not believe in those gods or do not value their preferences.

That alone would make Dave-the-atheist not functionally equivalent to Dave-the-theist-without-evidence, wouldn't it?

Replies from: Bugmaster
comment by Bugmaster · 2012-03-05T18:31:32.928Z · LW(p) · GW(p)

Technically, yes, but atheists also behave differently from each other, for all kinds of reasons. If Dave-the-theist truly believes that his gods are unfalsifiable, then he probably won't be seeking to convert others to his faith (since attempting to do so would be futile by definition). At that point, he's just like any atheist with an opinion.

Replies from: TimS
comment by TimS · 2012-03-05T19:06:28.050Z · LW(p) · GW(p)

(since attempting to do so would be futile by definition)

Why does the unfalsifiability of god show that believers won't proselytize?

Tim-theist says "There is a god, but I can't detect god's interventions. Still, I'm going to spread the good news!"

Replies from: Bugmaster
comment by Bugmaster · 2012-03-05T21:40:57.795Z · LW(p) · GW(p)

Why does the unfalsifiability of god show that believers won't proselytize?

A truly unfalsifiable god does not, by definition, provide any evidence of its existence. Thus, there's no "good news" to be spread, since a world with the god in it looks exactly the same as a world without it.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-05T22:11:42.099Z · LW(p) · GW(p)

Sure there is. For example, the Good News might be "God will reward those who worship him as follows: {blah blah blah} after they die." Unfalsifiable, but certainly good to know if true.

The fact that you demand evidence before adopting such a belief is of no particular interest to Dave-the-theist-without-evidence.

Replies from: Bugmaster
comment by Bugmaster · 2012-03-06T04:57:51.878Z · LW(p) · GW(p)

"God will reward those who worship him as follows: {blah blah blah} after they die."

This is a falsifiable claim, assuming that we have some evidence of the afterlife. If we have no such evidence, then, in order for this to count as good news, the theist would first have to convince me that there's an afterlife.

The fact that you demand evidence before adopting such a belief is of no particular interest to Dave-the-theist-without-evidence.

In the absence of evidence, how is he going to convince anyone that his unfalsifiable belief is true?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-06T17:12:12.906Z · LW(p) · GW(p)

Agreed that given evidence of the afterlife, it's a falsifiable claim, and lacking such evidence it's unfalsifiable.
I know of no such evidence, so I conclude it's unfalsifiable.
Do you know of any such evidence?
If not, do you also conclude that it's unfalsifiable?

If we have no such evidence, then, in order for this to count as good news, the theist would first have to convince me that there's an afterlife. [..] In the absence of evidence, how is he going to convince anyone that his unfalsifiable belief is true ?

What you seem to be implying is that there exist no (or negligible numbers of) people in the real world who can be convinced of claims for which there is no evidence, which is demonstrably false. Are you in fact asserting that, or am I completely misunderstanding you?

Replies from: Bugmaster
comment by Bugmaster · 2012-03-06T18:22:42.942Z · LW(p) · GW(p)

Do you know of any such evidence? If not, do you also conclude that it's unfalsifiable?

Yes, I conclude that most kinds of afterlife are unfalsifiable. Some are falsifiable, but they are in the minority: for example, if your religion claims that the dead occasionally haunt the living from beyond the grave, that's a falsifiable claim.

What you seem to be implying is that there exist no (or negligible numbers of) people in the real world who can be convinced of claims for which there is no evidence, which is demonstrably false.

Sort of. I would agree with this sentence as it is stated, with the caveat that what most people see as "evidence", and what you and I see as "evidence", are probably two different things. To use a crude example, most Creationists believe that the complexity of the natural world is evidence for God's involvement in its creation. Many theists believe that the feelings and emotions they experience after (or during) prayer are caused by their gods' explicit response to the prayer, which is also a kind of evidence.

Sure, you and I would probably discount these things as cognitive biases (well, I know I would), but that's beside the point; what matters here is that the theist thinks that the evidence is there, and thus his gods are falsifiable. When theists proselytize, they often use these kinds of evidence to convert people.

By contrast, someone who believes in an explicitly unfalsifiable god would not attribute any effects (mental or physical) to its existence, and thus does not have a workable way to convince others. The best he could say is, "you should believe as I do because it's a neat self-improvement technique", or something to that extent.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-06T23:41:25.081Z · LW(p) · GW(p)

(shrug) Sure, if we expand the meaning of "evidence" to include things we don't consider evidence, then I agree that my earlier statement becomes false.

Replies from: Bugmaster
comment by Bugmaster · 2012-03-06T23:55:01.677Z · LW(p) · GW(p)

Who are "we", in this case ? A typical theist does believe that he has evidence for his falsifiable god. He may be wrong about this, of course (and most probably is), but that's a matter for another debate. I was under the impression, though, that we were discussing atypical theists: those who believe that their gods are explicitly unfalsifiable. They are deliberately stating, "there's no way anyone could determine by any means whether my gods exist or not"; this is directly opposite to stating something like, "look at how complex life is, only a god could've created all that".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-07T01:16:03.067Z · LW(p) · GW(p)

Hm. It's possible that I've lost the thread of what we're discussing.

It seems to me to follow from what you've said that a theist who explicitly believes their belief in god is unfalsifiable, therefore necessarily explicitly believes there to be no evidence for that belief, therefore necessarily believes that proselytizing others is necessarily futile (since everyone requires evidence to adopt such beliefs, and therefore they believe that everyone requires evidence, and since they know they have no evidence, they know they cannot convince anyone), therefore is functionally equivalent to an atheist, who is functionally defined by their unwillingness to proselytize.

Have I followed that correctly?

If not, can you provide a corrected summary?

comment by lukstafi · 2011-01-23T22:23:29.619Z · LW(p) · GW(p)

(1) My discussion with a theist today settled on the issue whether to even accept that a "higher domain" creates a "lower domain" for a good purpose. My argument is: why waste reality?

(2) There is a somewhat false duality between creation and discovery: whether the performer determines the result, or the object determines the result, can be relative to the modeling faculty of the observer. And since we as observers and simultaneously "the object" have free will, from our perspective we are in any case rather discovered than created. And as long as God does not act upon the discovery, it is inconsequential.

comment by Vaniver · 2011-01-20T18:32:11.834Z · LW(p) · GW(p)

Does naturalism vs. supernaturalism strike you as controversial? If not, what question is left?

I personally use "naturalist" to describe myself instead of "atheist" or "agnostic" because I believe it captures my beliefs much more strongly- I don't have certainty there is no omnipotent entity, and I am more committed than just shrugging my shoulders. Supernaturalism is right out, and most varieties of naturalistic theism don't hold water.

Replies from: Perplexed, DSimon
comment by Perplexed · 2011-01-23T22:26:47.364Z · LW(p) · GW(p)

I personally use "naturalist" to describe myself instead of "atheist" or "agnostic"

According to Wikipedia, a naturalist is usually understood to be something different than a proponent of naturalism. Common usage tends to be more confused about the distinction between a naturalist and a naturist.

comment by DSimon · 2011-01-23T21:37:05.859Z · LW(p) · GW(p)

I've run into problems with "naturalist" with people thinking that it means I support organic farming, or alternative medicine, or similar things that tend to get marketed with the adjective "natural".

I've had better luck with "materialist", though that also has some pop-culture implications that I'm not trying to express.

Replies from: ata
comment by ata · 2011-01-23T21:42:05.997Z · LW(p) · GW(p)

Yeah, I avoid "materialist" for that reason. I usually go with "physicalist" for that sort of thing (or "reductionist" if I'm talking to someone who I think won't immediately misinterpret it).

Replies from: DSimon
comment by DSimon · 2011-01-26T13:15:20.744Z · LW(p) · GW(p)

Yeah, "physicalist" is good, I may have to start using that.

comment by PhilGoetz · 2013-09-02T15:55:53.401Z · LW(p) · GW(p)

Added: The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid."

No, no! Don't go back on your excellent question because the LessWrong-affiliationist-zombies downthumb-bombed it. You defined theism in a way so that your question is valid.

comment by gwern · 2011-01-20T04:30:52.829Z · LW(p) · GW(p)

By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.

That is emphatically not what people like Alvin Plantinga are talking about. The simulation argument provides no support for omni-benevolent, omni-potent, omni-scient, omni-present entities; I don't know why you bring it up.

And if you've been reading Luke's blog, you probably already know that one of the best arguments for theism is the free will defense of the omni-s being consistent with the existence of evil, but since we don't think free will is even a coherent concept, it leaves us unmoved.

Replies from: lukeprog, Dreaded_Anomaly
comment by lukeprog · 2011-01-20T07:05:54.121Z · LW(p) · GW(p)

gwern,

Plantinga's Free Will Defense is not an argument for theism. The conclusion of the free will argument is that it is not logically impossible for God and evil to co-exist. That is an extremely modest conclusion on the part of the theist.

Replies from: gwern
comment by gwern · 2011-01-20T14:24:49.222Z · LW(p) · GW(p)

We observe a lack of evidence of contradictions in the concept of god; and absence of evidence is evidence of absence.

Of course the FWD increases our probability for God if we accept it; what else could it possibly do, decrease it? The most charitable interpretation I can put on your comment is that you are confusedly saying 'yes, but it doesn't increase it by much' when I'm pointing out that 'it increases by some non-zero amount, however modest that amount may be'.

Replies from: lukeprog
comment by lukeprog · 2011-01-20T21:34:28.944Z · LW(p) · GW(p)

Okay, I see what you mean. Thanks for clarifying!

comment by Dreaded_Anomaly · 2011-01-20T04:35:45.700Z · LW(p) · GW(p)

And if you've been reading Luke's blog, you probably already know that one of the best arguments for theism is the free will defense of the omni-s being consistent with the existence of evil, but since we don't think free will is even a coherent concept, it leaves us unmoved.

Beyond that, it's just not a very good argument. If the entity was omnipotent, it could have given us free will without creating evil. At the least, it could have created less evil by giving all humans force fields, so all we could do to harm each other would be to gossip and insult.

comment by topynate · 2011-01-20T01:28:57.629Z · LW(p) · GW(p)

If you don't mind my asking, how did it come to be that you were raised to believe that convincing arguments against theism existed without discovering what they are? That sounds like a distorted reflection of a notion I had in my own childhood, when I thought that there existed a theological explanation for differences between the Bible and science but that I couldn't learn them yet; but to my recollection I was never actually told that, I just worked it out from the other things I knew.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-01-20T01:40:03.385Z · LW(p) · GW(p)

I knew some convincing arguments against theism, but I suppose what I explicitly did not know of were counterarguments to the theistic counterarguments against those atheistic convincing arguments, because I was quick to dismiss the theistic counterarguments in the first place.

comment by timtyler · 2011-01-20T20:53:59.311Z · LW(p) · GW(p)

The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid."

Sure we do: it is called "intelligent design" - or more specifically, intelligent design of life and/or the universe.

My article on the topic: Viable Intelligent Design Hypotheses.

Replies from: SRStarin
comment by SRStarin · 2011-01-21T03:14:21.659Z · LW(p) · GW(p)

Your general point in your linked piece is sound, because one can imagine eventually falsifying at least some of the proposed theories you list, but you do wrong to say Kitzmiller is problematic. It was a legal finding, based on testimony and hard evidence, that the folks claiming that Intelligent Design was science were in fact tantamount to a conspiracy to dress "Creationism" in new clothes. Creationism had already been declared a fundamentally religious doctrine, and not a scientific theory. That was settled law. The folks who brought in ID actually had discussions with one another about how best to convert Creationist texts into ID texts and pamphlets without them being recognizable as creationism.

These were charlatans of the worst sort, caught in their own lies. I suggest reading the decision.