[Link] A superintelligent solution to the Fermi paradox

post by Will_Newsome · 2012-05-30T20:08:18.333Z · LW · GW · Legacy · 75 comments

Here.

Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.

The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll likely introduce the blog and describe it more completely in its own discussion post once more posts are up, hopefully including a few from people besides me, and once the archive gives a more informative indication of what to expect from the blog. Despite theism's suspect reputation here at LessWrong, I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take an interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere deep in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the URL.

I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong then that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong should probably be put as comments on this discussion post. Thanks all!

75 comments

Comments sorted by top scores.

comment by Logos · 2012-05-30T21:36:05.034Z · LW(p) · GW(p)

To summarize my reasons for downvoting, after first reading the entire contents of the linked blog:

There are standard scenarios in which our world is a hoax, e.g. a computer simulation or stage-managed by aliens. These are plausible enough to be non-negligible in their most general form, although claims of weird specific hoaxes are unlikely. Given some weird observation, like waking up with a blue tentacle, a claim of a weird specific hoax is the most likely non-delusory explanation.

Because of the schizophrenia you have previously mentioned here, you make a lot of weird observations, and have trouble interpreting mundane coincidences as mundane. You also picked up a lot of ideas from the Less Wrong community. So you reach out to the hoax hypotheses to justify your delusions and hallucinations, and go on to encrust them with theological language. This is both a common tendency in paranoid schizophrenics, and a way to assert opposition to and claim superiority to Less Wrong, per your usual self-admitted trolling.

This approach seems unlikely to lead to fruitful or pleasant reading. And empirically, the ratio of nonsense, "raving crank style," and insanity to interesting ideas (all available elsewhere) is far too high. The situation is sad, but I want to see less of this, including posts linking to it, so I downvoted.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-05-30T23:15:09.147Z · LW(p) · GW(p)

Perhaps I should also note that I disagree with your analysis on various points.

Because of the schizophrenia you have previously mentioned here, you make a lot of weird observations, and have trouble interpreting mundane coincidences as mundane.

I'm schizotypal, I suppose, but not schizophrenic given the standard definition. I don't think I have any trouble interpreting mundane coincidences as mundane.

You also picked up a lot of ideas from the Less Wrong community.

Not especially so, actually.

So you reach out to the hoax hypotheses to justify your delusions and hallucinations

No, I honestly prefer something like Thomism to tricky hoaxes.

go on to encrust them with theological language

At Computational Theology I haven't even really gotten into theology yet, and I certainly haven't claimed that any supposed paranormal influences are or aren't related to God.

This is both a common tendency in paranoid schizophrenics

I'm not sure what "this" is that you're referring to. Theological language? I don't think schizophrenics commonly try to "justify" their delusions by couching them in terms of theological language. What would the point be? I don't get it. Note that talking about the abstract nature of God and so on is completely unrelated to common schizophrenic symptoms like thinking one is God or that one is somehow an ontologically privileged person.

a way to assert opposition to and claim superiority to Less Wrong

No, I don't represent LessWrong as a thing in that way. Some on LessWrong are very interesting, some aren't. I try to only talk to the interesting folk, even if they have serious disagreements with me. I certainly don't think I'm "superior" to sundry people who participate on LessWrong.

per your usual self-admitted trolling.

I rarely troll—few of my LessWrong comments are downvoted. Is trolling relevant to the post? I don't think the writing style and content of the post smacks of superiority, and I don't think it's trolling. It seems to me to be an argument made in good faith in the hopes of calling attention to a hypothesis that is rightly or wrongly seen as neglected.

This approach seems unlikely to lead to fruitful or pleasant reading.

Which approach? I don't think I'm trolling, or condescending. Regarding pleasantness, is there something else wrong with my writing style? Regarding fruitfulness, is it that you're not interested in the things I discuss for whatever reason, or, more likely, is it that I generally don't come up with ideas that catalyze further fruit-bearing insights for you? If the latter, I agree this is a problem, which is why I've created Computational Theology to have some place to plant seeds in the process of conceptual gardening. Hopefully having my own blog will allow me to share various interesting and significant ideas that I've had for a long time but that I've never had a chance to share on LessWrong. Hanging out at SingInst for a few years led me to have a lot of cool thoughts that ideally should be shared with the greater LessWrong community.

And empirically, the ratio of nonsense, "raving crank style," and insanity to interesting ideas (all available elsewhere) is far too high.

What are you referring to? Few of my comments here are downvoted, and many are heavily upvoted. Also, I've put forth many original ideas that have been upvoted by the LessWrong community. Presumably those comments would not be "available elsewhere".

The situation is sad, but I want to see less of this, including posts linking to it, so I downvoted.

Fair enough!

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2012-05-31T01:12:26.619Z · LW(p) · GW(p)

I rarely troll—few of my LessWrong comments are downvoted.

(Empirical data: According to a karma histogram program someone posted some months ago (I saved a copy locally, but regrettably have forgotten the author's identity), 294 of your 2190 recent comments (about 13.4%) have negative karma as of around 1735 PDT today.)

[Edited to add: However, as Will points out in the child, it might be misleading to simply count downvoted comments, because it is believed that some users mass-downvote the comments of certain others rather than judging each comment individually; only 80 out of the 2190 comments under consideration (about 3.7%) were voted to -4 or below.]
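
[For concreteness, a minimal sketch of the sort of tally such a histogram program might run. This is a reconstruction under my own assumptions, not the forgotten original, and the sample scores are placeholders rather than the actual comment data:]

    # Hypothetical reconstruction of the karma tally; `scores` is placeholder
    # data standing in for the 2190 actual comment scores.
    def karma_summary(scores):
        n = len(scores)
        return {
            "negative (< 0)":       sum(s < 0 for s in scores) / n,   # ~13.4% above
            "heavily down (<= -4)": sum(s <= -4 for s in scores) / n, # ~3.7% above
            "heavily up (>= +4)":   sum(s >= 4 for s in scores) / n,  # the 19.2% figure below
        }

    print(karma_summary([5, -1, 0, 12, -6, 3, 2, -4, 7, 1]))  # placeholder scores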

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T01:13:07.378Z · LW(p) · GW(p)

Thanks!

Note that much of that is likely due to karmassassination, not legitimate downvoting.

Replies from: Vladimir_Nesov, Zack_M_Davis
comment by Vladimir_Nesov · 2012-05-31T01:48:07.253Z · LW(p) · GW(p)

Note that much of that is likely due to karmassassination, not legitimate downvoting.

Disagree. I approve of the downvoting of most of your comments that were downvoted to -2 or below, for reasons triggered by those particular comments. This makes it plausible that they were downvoted for similar reasons, rather than in a way insensitive to the qualities of individual comments.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T02:08:57.910Z · LW(p) · GW(p)

Right, but I also know that karmassassination has occurred at various points, and any karmassassination is likely to take up a disproportionate chunk of the downvotes. No?

Zack's statistic of -4 or below is the most pertinent. It's at 3.7%.

People will naturally wish to compare this with the percentage of my comments that are +4 or more. Zack tells us that this percentage is 19.2%.

So there's clearly a very large asymmetry. What one makes of it depends on a lot of other background stuff.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-05-31T02:33:56.552Z · LW(p) · GW(p)

I also know that karmassassination has occurred at various points, and any karmassassination is likely to take up a disproportionate chunk of the downvotes. No?

Not necessarily. Taboo "karmassassination": what were you actually observing? One scenario is that some comments you make draw attention, people look over your N most recent posts and judge them individually, but it turns out that the judgment is mostly negative. Another is that people who want to discourage a certain type of comment downvote multiple already-downvoted posts without paying too much attention, expecting that the downvotes already present carry sufficient evidence in context. Both cases result in surges of negative votes that remain sensitive to the qualities of individual comments.

People will naturally wish to compare this with the percentage of my comments that are +4 or more. Zack tells us that this percentage is 19.2%.

You're drifting from the topic; I'm not discussing a net perception of your participation, only explanations for the negatively judged contributions. Your writing them off as not particularly meaningful (an effect of "karmassassination" rather than of the comments' negative qualities) seems like a rationalization, given the observations above.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-05-31T04:06:17.722Z · LW(p) · GW(p)

Like, I'm not trying to avoid the knowledge that I often make contributions to LessWrong that aren't well-received. It happens, more for me than for others. I was just pointing out that I've also noticed strict karmassassination sometimes, not necessarily often in my 2190 most recent comments. It's just a thing to take into account. The karmassassination I have experience with is often not of the sort that you describe. But I'm perfectly willing to accept such explanations sometimes, and I've already noticed that they explain a few big chunks lost a few months back.

comment by Will_Newsome · 2012-05-31T02:38:37.382Z · LW(p) · GW(p)

I don't write all of them off as meaningless, of course! Didn't mean to imply that. Some comments just aren't positive contributions to LessWrong. It happens, and it happens to me more than to others. I'm not denying that at all.

comment by Zack_M_Davis · 2012-05-31T01:28:45.146Z · LW(p) · GW(p)

Note that much of that is likely due to karmassassination, not legitimate downvoting.

Oh, that's a good point—I've added an addendum to the grandparent.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T01:39:09.801Z · LW(p) · GW(p)

I have a request, which you're not at all obligated to fulfill of course. But could you tell me what percentage of my 2190 most recent comments have received 4 or more upvotes?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2012-05-31T01:50:59.231Z · LW(p) · GW(p)

19.2%

(And I am sorry if it was rude of me to have initiated this exchange at all, but surely it will be understood that this is the type of venue where, if someone uses a word like "most" or "few" and one happens to have the actual data easily available, then one should be encouraged to share it.)

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T02:04:03.243Z · LW(p) · GW(p)

Not at all! I very much appreciate the data. Thank you for sharing.

comment by Will_Newsome · 2012-05-30T21:45:52.305Z · LW(p) · GW(p)

The linked argument doesn't require blue-tentacle-like psi phenomena. See the three bullet points that apply when there's no superintelligent influence. The planetarium hypothesis is completely disjunctive with psi arguments, and explains the Fermi paradox even in the absence of psi. It's also not just my hypothesis—there's historical precedent, as has been linked to in the post. ETA: I hope that the second, Fermi-centric half of the linked post can be judged on its own terms and inspire debate about its arguments, regardless of the various theological or paranormal claims that might exist elsewhere on the blog.

[My primary interpretation of the downvotes for this comment is basically: "I want to discourage people from talking about psi, parapsychology, or anything like that—we all know that magic doesn't exist, so we should try to explain phenomena that actually exist and that are therefore actually interesting. Admittedly you (Will_Newsome) didn't spontaneously bring up psi in your comment, and your comment is a more-or-less reasonable reply to its parent, but downvoting this comment is the easiest way to punish you for associating LessWrong with blatantly irrational speculation."]

Replies from: gwern
comment by gwern · 2012-05-30T22:09:27.371Z · LW(p) · GW(p)

I'm a tad annoyed that it apparently breaks my space bar - arrow keys and pgup/pgdwn work, but space does nothing.

Anyway, my basic reaction is that you give no interesting reasons for preferring a planetarium over a simulation besides philosophy of mind (most theories of which, I believe, would not predict any output difference in the absence of real qualia in a simulation) or efficiency (which, to the extent we can analyze it at all, weighs in strongly for simulation being more efficient).

I also don't understand how such an entity would even build a planetarium in the first place. Wouldn't any physical shell badly interfere with predictions of planetary or cometary orbits? Or cause parallax? etc. What would the timing be, and are there really no natural records that would throw off a planetarium constructed just in time for humans to be fooled (akin to testing the fine structure constant by looking at natural nuclear reactors from millions/billions of years ago)?

Replies from: JoshuaZ, Will_Newsome
comment by JoshuaZ · 2012-05-30T22:17:27.353Z · LW(p) · GW(p)

efficiency (which, to the extent we can analyze it at all, weighs in strongly for simulation being more efficient).

Can you expand on this? This isn't obvious to me.

Replies from: gwern
comment by gwern · 2012-05-30T23:33:06.289Z · LW(p) · GW(p)

Existing matter seems highly redundant, and building a full-scale 1:1 replica, as it were, means that by definition you cannot opt for any amount of approximation or possible optimization.

I would draw an analogy to NP problems: yes, the best way to solve the pathologically hardest instances of any NP problem is brute force, just like there are probably arrangements of matter which cannot be calculated more efficiently by computronium than the actual arrangement of matter. But nevertheless, SAT solvers run remarkably fast on many real-world problems and far faster than anyone focused on the general asymptotic behavior would expect, and we have no reason to believe the world itself is a pathological instance of worlds.
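
(A minimal concrete sketch of that gap, assuming the third-party python-sat package; the instance and its size are illustrative choices of mine. An implication chain over 10,000 variables has 2^10000 assignments naively, yet a CDCL solver dispatches it instantly because it is structured rather than pathological:)

    # Structured SAT instance that brute force could never touch but a
    # CDCL solver handles trivially. Assumes `pip install python-sat`;
    # the variable count and clause shape are illustrative only.
    from pysat.solvers import Glucose3

    n = 10_000
    solver = Glucose3()
    solver.add_clause([1])               # x1 is true
    for i in range(1, n):
        solver.add_clause([-i, i + 1])   # x_i implies x_{i+1}

    print(solver.solve())                # True, near-instantly
    solver.delete()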

Replies from: Will_Newsome, JoshuaZ
comment by Will_Newsome · 2012-05-30T23:42:41.831Z · LW(p) · GW(p)

One possible objection: what if humans are doing hypercomputation? E.g., being created by evolution (which is fundamentally "tied into" reality's computation) lets humans tap into the latent computation of the universe in a way that an algorithmic AI can't emulate, so it keeps humans around to use as hypercomputers. Various people have proposed similar hypotheses. I think this objection can be met, though.

Replies from: gwern
comment by gwern · 2012-05-30T23:49:43.480Z · LW(p) · GW(p)

The usual anti-Penrose point comes to mind: if quantum microtubules are really that useful, we can probably just build them into chips, and better, and the problem goes away.

Unless you mean the "tying into" somehow requires a prefrontal cortex, at least 1 kidney, a working gallbladder, etc., in which case I think that's just sheer privileging of the hypothesis with not a scrap of evidence for it.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-30T23:54:55.265Z · LW(p) · GW(p)

The former, not the latter. And yes, the anti-Penrose point applies, but we can skirt it by postulating that the superintelligence is limited in its decision theory—it can recognize good results when it sees them, much as TDT can recognize that UDT beats it at counterfactual mugging, but it's architecturally constrained not to self-modify into the winning thing. So humans might run native hypercomputation or native super-awesome decision theory that an AI could exploit but that the AI would know it couldn't emulate given its knowledge of its own limited architecture.

Replies from: gwern
comment by gwern · 2012-05-30T23:59:54.429Z · LW(p) · GW(p)

I guess you're distantly alluding to the old discussion of 'what would AIXI do if it ran into a hypercomputing oracle?' in modern guise. I'm afraid I know too little about TDT or UDT to appreciate the point. It just seems a little far-fetched - so not only are we thinking about hypercomputation, which I believe is generally regarded as being orders of magnitude less likely than say P=NP, we're also thinking about a superintelligent and superpowerful agent with a decision theory that just happens to be broken in the right way?

If we were being mined for our computational potential, I can't help but feel human lives ought to be less repetitive than they are.

Replies from: Will_Newsome, Eugine_Nier
comment by Will_Newsome · 2012-05-31T00:10:02.076Z · LW(p) · GW(p)

I believe is generally regarded as being orders of magnitude less likely than say P=NP

Haven't seen any surveys, but I don't think so. I think hypercomputation is considered by some important people to be more likely than P=NP. I believe very few people have really considered it, so you shouldn't take anyone's off-the-cuff impressions as meaning very much unless you know they've thought a lot about the limitations of theoretical computer science. I don't really have any ax to grind on the matter, but I think hypercomputation is neglected.

we're also thinking about a superintelligent and superpowerful agent with a decision theory that just happens to be broken in the right way?

I think my points were supposed to be disjunctive, not conjunctive. A broken decision theory or a limited theory of computation can both result in humans outcompeting superintelligences on certain very specific decision problems or (pseudo-)computations. Wei Dai's "Metaphilosophical Mysteries" is relevant.

If we were being mined for our computational potential, I can't help but feel human lives ought to be less repetitive than they are.

Given some models, yes. Given other models, the AI might not be able to locate what parts of the system have the special sauce and what parts don't, so it's more likely to let humans be.

Replies from: gwern
comment by gwern · 2012-05-31T02:00:50.034Z · LW(p) · GW(p)

Your link isn't to a stupid person, but to some extent, the lack of interest in hypercomputation says what the field thinks of it. Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.

Wei Dai's link is pretty controversial.

Replies from: Will_Newsome, Eugine_Nier, Eugine_Nier
comment by Will_Newsome · 2012-05-31T02:18:11.747Z · LW(p) · GW(p)

Not sure, but it seems that whenever I get into discussions with you it's usually about some potentially-important edge case or something. Strange.

But anyway, yeah. I just want to flag hypercomputation as a speculative thing that it might be worth taking an interest in, much like mirror matter. One or two of my default models are probably very similar to yours when it comes down to betting odds.

comment by Eugine_Nier · 2012-05-31T04:52:59.036Z · LW(p) · GW(p)

Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.

But only after it was discovered that the theory of quantum mechanics implied it was theoretically possible.

comment by Eugine_Nier · 2012-05-31T03:24:49.148Z · LW(p) · GW(p)

Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs.

My understanding of the history is that everyone believed the extended Church-Turing thesis until someone noticed that the (already established) theory of quantum mechanics contradicted it.

Replies from: gwern
comment by gwern · 2012-05-31T03:33:07.529Z · LW(p) · GW(p)

I don't think I've ever seen anyone invoke the extended Church-Turing thesis by either name or substance before quantum computing came around.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-05-31T03:55:28.013Z · LW(p) · GW(p)

People were talking about P-time before quantum computing and implicitly assuming that it applied to any computer they could build.

Replies from: gwern
comment by gwern · 2012-05-31T04:00:52.349Z · LW(p) · GW(p)

I don't see how one would apply "P-time" to "any computer they could build".

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-05-31T04:46:24.974Z · LW(p) · GW(p)

I meant "apply" in the sense that one applies a mathematical model to a phenomenon. Specifically, it was implicitly assumed that the notion of polynomial time captured what was actually possible to compute in polynomial time.

comment by Eugine_Nier · 2012-05-31T03:23:29.349Z · LW(p) · GW(p)

It just seems a little far-fetched - so not only are we thinking about hypercomputation, which I believe is generally regarded as being orders of magnitude less likely than say P=NP

Um, you do realize you're comparing apples and oranges there, since one is a statement about physics and the other a statement about mathematics.

Replies from: gwern
comment by gwern · 2012-05-31T03:30:11.875Z · LW(p) · GW(p)

In this area, I do not think there is such a hard and fast distinction.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-05-31T04:56:52.383Z · LW(p) · GW(p)

So, how would you phrase the existence of hypercomputation as a mathematical statement?

Replies from: gwern
comment by gwern · 2012-05-31T14:13:14.741Z · LW(p) · GW(p)

Presumably something involving recursively enumerable functions...

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-06-01T02:28:25.807Z · LW(p) · GW(p)

As someone who understands computational theory, I strongly suspect you're seriously confused about how computational complexity theory works. As I don't have the time or interest to give a course in computational complexity, might I recommend asking the original question on MathOverflow if you are interested.

Apologies if that came off as rude.

comment by JoshuaZ · 2012-05-30T23:47:57.260Z · LW(p) · GW(p)

I don't find this argument persuasive or even strong. n qubits can't simulate n+1 qubits in general. In fact, n qubits can't even in general simulate n+1 bits. This suggests that if our understanding of the laws of physics is close to correct for our universe and the larger universe (whether holographic planetarium or simulationist), simulation should be tough.
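
(To spell out the counting intuition behind those claims; this is the standard textbook argument, sketched here rather than drawn from any particular source:)

    % An n-qubit pure state is a unit vector in a Hilbert space of
    % dimension 2^n, so n qubits simply lack the state space to track
    % a general (n+1)-qubit state:
    \[
      \dim \mathcal{H}_n = 2^n \;<\; 2^{n+1} = \dim \mathcal{H}_{n+1}.
    \]
    % And by Holevo's bound, at most n classical bits are retrievable
    % from n qubits, so they cannot faithfully stand in for n+1
    % arbitrary classical bits either.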

Replies from: gwern
comment by gwern · 2012-05-30T23:56:58.962Z · LW(p) · GW(p)

That may be, but such a general point would be about arbitrary qubits or bits, when a simulation doesn't have to work over all or even most arrangements.

Replies from: JoshuaZ, JoshuaZ
comment by JoshuaZ · 2012-05-31T14:27:00.719Z · LW(p) · GW(p)

Hmm, so thinking about this more, I think that Holevo's theorem can probably be interpreted in a way that much more substantially restricts what one would need to know about the other n bits in order to simulate them, especially since one is apparently simulating not just bits but qubits. But I don't really have a good understanding of this sort of thing at all. Maybe someone who knows more can comment?

Another issue that backs up simulation being easier: if one cares primarily about life forms, one doesn't need a detailed simulation of the insides of planets and stars. The exact quantum state of every iron atom in the core of the planet, for example, shouldn't matter that much. So if one is mainly simulating the surface of a single planet in full detail, or even just the surfaces of a bunch of planets, that's a lot less computation.

One other issue is that I'm not sure you can have simulations run that much faster than your own physical reality (again assuming that the simulated universe uses the same basic physics as the underlying universe). See for example this paper which shows that most classical algorithms don't get major speedup from a quantum computer beyond a constant factor. That constant factor could be big, but this is a pretty strong result even before one is talking about general quantum algorithms. Of course, if the external world didn't quite work the same (say different constants for things like the speed of light) this might not be much of an issue at all.

comment by JoshuaZ · 2012-05-31T02:45:25.645Z · LW(p) · GW(p)

Hmm, that's a good point. So it would then come down to how much of an expectation of what the simulation is likely to do you need in order to get away with using fewer qubits. I don't have a good intuition for that, but the fact that BQP is likely to be fairly small compared to all of PSPACE suggests to me that one can't really get that much out of it. But that's a weak argument. Your remark makes me update in favor of simulationism being more plausible.

comment by Will_Newsome · 2012-05-30T22:19:00.896Z · LW(p) · GW(p)

I'm a tad annoyed that it apparently breaks my space bar - arrow keys and pgup/pgdwn work, but space does nothing.

Google's fault. Thanks for letting me know, though.

Anyway, my basic reaction is that you give no interesting reasons for preferring a planetarium over a simulation

Right—the argument is pretty modest. It's mostly just that the planetarium hypothesis is on par with other hypotheses like the simulation argument.

I also don't understand how such an entity would even build a planetarium in the first place.

Yeah, I left this to "a wizard did it"—if you accept simulation, then you can mix and match bigger and smaller planetariums around your brain or around the solar system to pose various physical problems. The planetarium hypothesis is sort of continuous with the simulation hypothesis if you like simulationistic assumptions. [ETA: And I didn't address any of those problems at any scale, because there's a problem for each scale. Factor your intuitions about the improbability of actually engineering a planetarium into your a posteriori estimate to get a custom-fit probability.]

comment by Mitchell_Porter · 2012-05-31T00:43:32.818Z · LW(p) · GW(p)

I like the idea, certainly not as a preferred explanation of the Fermi paradox, but as an addition to the list of explanations. But as gwern points out, getting the "planetarium" to work isn't so easy. Comets and planets ought to feel its mass; in fact, comets ought to collide with it on the way out. It has to produce radiation patterned so as to imitate interstellar parallax. And it has to physically emit very high energy particles such as we detect on earth in cosmic rays. It's one form of the hypothesis "there's an invisible wall right there, projecting the appearance of a world beyond." And the main issue facing such a hypothesis is, what about the things that go into or come out of the wall?
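
(A back-of-the-envelope sketch of one of those burdens; the formula is just the definition of annual parallax, and the distances are illustrative choices of mine. Whatever shell the builders erect has to paint apparent shifts of these sizes onto its inner surface, per star, per year:)

    # Annual parallax the planetarium must fake for each star it displays.
    # p [arcsec] = 1 / d [parsec], by the definition of the parsec;
    # the sample distances below are illustrative.
    LY_PER_PARSEC = 3.2616

    def parallax_arcsec(distance_ly):
        return 1.0 / (distance_ly / LY_PER_PARSEC)

    for d in (4.3, 10, 100, 1000):  # light-years; roughly Alpha Centauri outward
        print(f"{d:>6} ly -> {parallax_arcsec(d):.4f} arcsec of fake parallax")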

Replies from: Eugine_Nier, Will_Newsome, Will_Newsome
comment by Eugine_Nier · 2012-05-31T03:37:32.034Z · LW(p) · GW(p)

It has to produce radiation patterned so as to imitate interstellar parallax. And it has to physically emit very high energy particles such as we detect on earth in cosmic rays.

It doesn't have to be "perfect". Keep in mind the old joke about the experimental and theoretical physicist:

Experimental physicist: I did an experiment and the sign on constant X came out positive.

Theoretical physicist: It's easy to see that it should be that way because of reasons Y and Z.

Some time later

E: Oops, turns out there was a mistake in my experiment, the sign on constant X should really be negative.

T: It's even easier to see why that should be the case.

comment by Will_Newsome · 2012-05-31T00:54:40.567Z · LW(p) · GW(p)

I like the idea, certainly not as a preferred explanation of the Fermi paradox, but as an addition to the list of explanations.

That's my take as well. Personally, my pet hypothesis is the Thomistic God, but there are three or so solutions that I treat as live. I'm not committed to any of 'em.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-05-31T01:26:41.536Z · LW(p) · GW(p)

One version of the idea, that I do normally favor, is the "cranium hypothesis", which says that my brain is surrounded by a wall and that everything I experience is a sort of reconstruction of what's on the other side of that wall, rather than being the thing itself. But that doesn't explain the Fermi paradox.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T01:34:37.406Z · LW(p) · GW(p)

But you agree that a significantly bigger wall could explain the Fermi paradox in theory?

Also I figured you might be partial to naive realism. I am, if only because I'd have considered it obviously completely retarded a year ago. IIRC the Thomists have a solution to some problem of intentionality where you directly perceive something's form itself. (Er, it's not a form, what's it called? Weird word, starts with an 'h'.) Seems like it fits well with monadology, but I guess not quantum monadology. ...You know, that monads don't change at all is really quite important. I know you know that, but still, "quantum monadology" is a pretty meh name.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-05-31T05:47:49.788Z · LW(p) · GW(p)

But you agree that a significantly bigger wall could explain the Fermi paradox in theory?

It's certainly a way to have a universe full of dark megastructures efficiently harvesting energy on behalf of ancient superintelligences, coexisting with a planet of yokels who just see a wilderness of stars squandering their radiative output. But I would rate (1) the Great Filter, (2) the "wilderness" actually being alive and busy without the yokels knowing how to see it that way, and (3) appearances being even more thoroughly illusory than in the planetarium scenario, all as more likely.

I figured you might be partial to naive realism.

That would make hallucination impossible. I think we have direct awareness of something, but not the outside world. The "something" is either part of us or it's alongside "us" in the brain.

comment by Will_Newsome · 2012-05-31T00:51:28.896Z · LW(p) · GW(p)

And the main issue facing such a hypothesis is, what about the things that go into or come out of the wall?

Luckily the point at which we start sending conscious beings out beyond the solar system is by one hypothesis the point at which we reach a technological singularity. How a planetarium would interact with an AI, only God knows. But for things like Voyager, it's of course no problem: the superintelligence eats the Voyager, and in its place sends back the signals the Voyager would have sent back if it hadn't gotten eaten.

comment by JoshuaZ · 2012-05-31T00:27:06.795Z · LW(p) · GW(p)

Reading this piece is difficult.

The first sentence of the second paragraph starts off

But because theology has traditionally been mostly Christian

That's not true. It might be that you are only aware of Christian theology, but very similar issues have been extensively discussed in other religions. Islamic theology is a pretty strong example.

I'm going to skip commenting on most of the theological discussion (aside from noting that sentences being grammatically well-formed doesn't mean they have content) and per your request move directly to the part about Fermi issues.

Simultaneous satisfaction of diverse preferences. What if some humans don't want to be affected by otherworldly influences, or even don't want such influences to exist at all, for anyone? Then the utilitarian solution would be to influence the people that want the superintelligence to influence them while simultaneously avoiding any impact on the people that don't want to be influenced. Furthermore, to somewhat satisfy the preferences of those who don't want any influence to exist for anyone at all, the superintelligence could pull off a Necker-cube-like illusion: whether or not you saw the superintelligent influences would depend on what preconceptions you had in mind when interpreting the world.

This sounds extremely close to the claim that miracles happen but non-believers just don't see them or don't want to see them. This ignores the many times people sincerely pray for miracles and nothing happens. Many agnostics and atheists would much rather be in a universe with some sort of powerful intervention, but this one doesn't look like it. Moreover, some people who become irreligious do so precisely because of the apparent absence of miracles.

Your arguments for non-intervention are more interesting. I had not seen the idea of non-intervention as a Schelling point before; that seems novel.

Your claim that we've explored a lot of the answerspace around the Fermi question seems to be highly questionable: the question has only been around for about fifty years, not many people have seriously thought about it, and our actual set of data that is useful for locating or ruling out hypotheses is tiny.

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-05-31T00:38:47.878Z · LW(p) · GW(p)

Reading this piece is difficult.

How so? I'm posting on the blog to practice my writing in preparation for writing a treatise, so any suggestions for improvement would be greatly appreciated. I also wrote it while on Adderall which affects my style in various ways.

that sentences are grammatically well-formed doesn't mean they have content

Are you referring to the sentences I wrote, or to sentences like "I talked to a spirit the other day"?

That's not true. It might be that you are only aware of Christian theology, but very similar issues have been extensively discussed in other religions. Islamic theology is a pretty strong example.

Hm? You disagree with the "mostly"? Maybe you're thinking of the majority—I was thinking of the mode. Do you agree that the mode of theology is Christian, given some informal, intuitive measure?

This sounds extremely close to the claim that miracles happen but non-believers just don't see them or don't want to see them.

Yes, and I'm not sure but I think similar arguments are made by religious folk. It's just a possibility of course, and it relies heavily on the notion that at least on some topics we don't have strong introspective access to our preferences. I'm of course aware of people who search for God or gods in good faith and don't find Him/them, and that is indeed a counterargument, but how strong a counterargument it is depends on other unmentioned variables. I leave it to the reader to fill in the values for those variables.

Your claim that we've explored a lot of the answerspace around the Fermi question seems to be highly questionable

Right, I was just sharing my impression to wrap up the post. It could easily be wrong.

Replies from: Manfred, JoshuaZ
comment by Manfred · 2012-05-31T01:48:32.739Z · LW(p) · GW(p)

Reading this piece is difficult.

How so?

For me, it was primarily because you had large stretches with low communication per word.

For example:

Though Logos is always involved somehow, today's post will be mostly pneumatological. Wik tells us that pneumatology is "the study of spiritual beings and phenomena, especially the interactions between humans and God." In Christian theology pneumatology is always about the Holy Spirit, but here at Computational Theology we're not quite that pigeonholed, so we'll discuss the interactions between humans and all spiritual beings, who may or may not be God. ('Cuz after all, how could you tell? We'll discuss that problem—the problem of discernment—in future posts. Expect some algorithmic information theory.) And if you accept Crowley's rule—to interpret every phenomenon as a particular dealing of God with your soul—then all phenomena are subject to pneumatology anyway.

Compare with

This post will be primarily about the interaction between humans and spirits, e.g. gods or invisibly-acting AIs.

Replies from: Will_Newsome, Will_Newsome, Will_Newsome
comment by Will_Newsome · 2012-05-31T02:30:07.274Z · LW(p) · GW(p)

Also, I have to keep in mind that many people have complained that my writing is much too compressed, relying too much on hidden or external concepts or inferences. Hopefully I can strike a balance between inscrutable esotericity and belaboring the point.

comment by Will_Newsome · 2012-05-31T02:12:50.882Z · LW(p) · GW(p)

Thanks!

Yeah, that's the Adderall talking. I'm planning to write a book (a treatise), where there's more room to expand and explain. But I suppose I should practice my skills on the appropriate medium. So I'll try to cut down on excursions like the above in the future. [ETA: Actually, I won't. There were good reasons to have the quoted part in there.]

comment by Will_Newsome · 2012-06-01T11:29:45.325Z · LW(p) · GW(p)

(Upon further reflection, replied here.)

Replies from: Manfred
comment by Manfred · 2012-06-01T14:43:47.449Z · LW(p) · GW(p)

Aw. How about at least treating my impression as evidence, rather than dismissing it.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-06-01T15:14:55.690Z · LW(p) · GW(p)

Of course I'm treating it as evidence. I'm not insane. For me especially, it's not even possible to dismiss someone's impression without treating it as evidence.

Replies from: Manfred
comment by Manfred · 2012-06-01T20:49:10.799Z · LW(p) · GW(p)

Great :D

Replies from: Will_Newsome
comment by Will_Newsome · 2012-06-01T22:19:38.476Z · LW(p) · GW(p)

Mostly. It also causes a lot of stress, due to, e.g., a total inability to disregard negative social judgments. This has been true my whole life, and it's caused me to become a very strange person. That said, I find it entirely worth it, because I think it makes me a better rationalist and a better person, at least in the limit.

comment by JoshuaZ · 2012-05-31T02:38:13.938Z · LW(p) · GW(p)

Manfred summarized the issues with readability pretty well, but the issue is slightly more complicated. There were also sections in the theology bit especially where it felt like there were a lot of unstated premises.

You disagree with the "mostly"? Maybe you're thinking of the majority—I was thinking of the mode. Do you agree that the mode of theology is Christian, given some informal, intuitive measure?

In that case, I'm not sure, and I suspect that any intuition is going to be drastically impacted by availability bias. For example, I know intellectually that there's a lot of Hindu theology out there, but my rough intuition for how much is out there for different groups is wildly in favor of the Abrahamic religions, with a little bit for Buddhism, and that only because I took an intro Buddhism class in college. I suspect that any sort of judgment about such a mode is more a statement about what religions one has been exposed to than anything else.

Overall, I think this would have been much better received if it had made no mention of theology at all and had just presented the second half as a discussion of variants of the Zoo/Planetarium hypotheses.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T02:45:32.293Z · LW(p) · GW(p)

There were also sections in the theology bit especially where it felt like there were a lot of unstated premises.

I didn't flag them? Usually I'll flag assumptions, and then you can choose to take them on or not. If I'm not flagging them then they shouldn't be used further down in the post. Were they? Sorry if I'm unjustifiably crowdsourcing.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-31T02:58:37.289Z · LW(p) · GW(p)

Well:

In Christian theology pneumatology is always about the Holy Spirit

This seems likely but I know that some denominations have discussed the nature of angels and their interaction with humans.

"Superintelligence" just means an extremely intelligent agent, and gods are, by hypothesis, extremely intelligent agents.

If you said "God" or the "Christian God" here that might be okay, but you seem to be trying to smuggle in a notion about deities that simply isn't true for the lowercase gods.

There were other points I think that showed up the first time I read it, but I'm not reading it as carefully now (reading this is a bit exhausting).

Replies from: Will_Newsome
comment by Will_Newsome · 2012-06-01T11:22:28.889Z · LW(p) · GW(p)

Okay, given those two examples I think your objections are nitpicks. I think you're probably unsatisfied with the piece for other, unmentioned reasons that you might not have introspective access to. Same with the people who upvoted Manfred's comment, which singles out the only paragraph in the piece that could really be interpreted as containing much too much fluff, and even then I explicitly recommended that people who weren't interested in the meta stuff about the blog skip ahead to the discussion of the solution.

Overall, given the criticisms of the piece, I think I should be satisfied that I didn't leave out anything important, and that people who are unsatisfied with it are mostly not the people I want in my audience anyway. I'm left thinking that my primary aim should be to experiment with writing style more.

comment by Will_Newsome · 2012-05-31T07:32:16.937Z · LW(p) · GW(p)

Your arguments for non-intervention are more interesting. I had not seen the idea of non-intervention as a Schelling point before; that seems novel.

It also applies to the AI risk debate. I've made the argument in that context before here on LW. I believe User:Dmytry started to champion it at some point.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-31T14:00:31.654Z · LW(p) · GW(p)

Yes, I've seen it in that sort of context. It seems much less plausible that an AI would try to reach a Schelling point of that sort. It requires the AI to have a very human notion of intervention. While it is plausible that other evolved entities would have such a notion, figuring out how to get an AI to understand it could be extremely difficult.

comment by jacob_cannell · 2012-05-30T23:20:30.333Z · LW(p) · GW(p)

I opened the link to your blog and had an initial negative aesthetic/readability reaction, which is a typical problem I've encountered when jumping away from Less Wrong. LW is highly optimized for clean readability. How does your cathedral background image help quickly communicate the ideas of your post? Also, the italicized text in particular is hard to read. The visual jump from LW to your blog's layout is jarring, and this immediately sets up an internal negative 'ugh' reaction. I'm attentive to these aesthetic details because I've encountered the same problem in my own blog.

Your post is long and intertwines a number of distantly related complex ideas. I scanned it and quickly came to a decision to commit perhaps a couple of minutes to skim/speed read and then reply here.

I am not immediately put off by equating super-intelligences to gods, mixing in some theological references/analogies, or even all the unsubtle connotations of the blog title "Computational Theology". However, I'm pretty certain I am atypical for LW in these regards. I have a general interest in evolution of religions, early Christianity in particular, and the similarity between transhumanist visions of the future and some strains of Christian Eschatology, for strategic reasons if nothing else.

I take Simulism somewhat seriously, and suspect that the Simulist cosmology could be the next major Copernican worldview shift. On the other hand, I don't have much interest in, or place much credence in, psi-phenom. So given all that I jumped down and mainly tried to find your justification for the planetarium hypothesis.

From my understanding of future technological capability, a physical planetarium will always be an extraordinarily expensive endeavor. What's the point?

If the point is intervention, to alter the developmental trajectory of a planet, there are vastly cheaper options: small levers with huge future effects. And even if the desired intervention is of a specific variety along the lines of "let humanity develop as if it was the first civilization", that could be achieved by constructing an alternate universe tweaked for that future at a tiny fraction of the cost of building a planetarium.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-30T23:36:10.745Z · LW(p) · GW(p)

How does your cathedral background image help quickly communicate the ideas of your post?

On the main page it seems unmotivated—vague connections to God and architecture—but if you click "About" you see more of the photo. The photo was selected because it implies God but its emphasis is architecture, and the highly organized structure of the architecture is supposed to evoke formalism and technology, thus linking God to computationalism. I couldn't think of anything better, and honestly I quite like the photo. Any suggestions?

And yes, if you're willing to accept the simulation solution, then the planetarium hypothesis just isn't as good an explanation. The planetarium hypothesis is mostly for people who are skeptical of simulationism, or people who want to have a backup hypothesis in case simulationism doesn't work, like myself. I generally prefer something like simulationism, but the planetarium hypothesis is my second favored hypothesis.

ETA: I've changed the typeface to Times New Roman, which should make the italics easily readable. Thanks for the feedback; I wasn't sure if the olde font was appropriate or not.

Replies from: jacob_cannell
comment by jacob_cannell · 2012-05-31T00:26:51.349Z · LW(p) · GW(p)

To me the full background picture is visually distracting. I find it aesthetically jarring/unpleasing, and probably subconsciously associate that visual style with hastily constructed blogs, or at least blogs outside of my typical reading preference. I prefer the background image to be constrained to just the top of the blog, in the typical fashion of blogs like LW. If you really like the photo, have a link to it or embed it in the article somewhere.

The planetarium hypothesis is mostly for people who are skeptical of simulationism, or people who want to have a backup hypothesis in case simulationism doesn't work, like myself.

Your wording suggests you view these hypotheses as tools required to achieve some predetermined objective, rather than just as beliefs subject to observational revision.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T00:46:37.909Z · LW(p) · GW(p)

Thanks for the feedback. I'll go for a solid background. (ETA: Changed to timeless black to be a little easier on the eyes than some websites. Unfortunately I can't change the page's color to a light grey; I'll have to use some CSS. I'll consider further optimization later.)

Your wording suggests you view these hypotheses as tools required to achieve some predetermined objective, rather than just as beliefs subject to observational revision.

Both and neither. I have many different epistemic practices, and I also try to switch up my epistemological approaches often. Coherentism, pragmatism, correspondence, whatever—ultimately I think the foundations of epistemology are to be found in decision theory, and any other epistemological approaches are just phenomenal shards of the fundamental nature of rationality. Hypotheses can be tools, hypotheses can be correspondences—whatever leads to intellectual fruit. "May we not forget interpretations consistent with the evidence, even at the cost of overweighting them." Similarly, may we not forget epistemologies consistent with potentially optimal decisions, even at the cost of overweighting them. We must be meta, we must be large.

comment by Will_Newsome · 2012-05-30T21:11:12.972Z · LW(p) · GW(p)

This post has thus far gotten an upvote and two [ETA: 3] downvotes. Downvoters: what do you dislike about this post? Please let me know so I can accommodate your discussion-section-content preferences in the future. Thanks for any feedback!

Replies from: FAWS, Oscar_Cunningham
comment by FAWS · 2012-05-31T10:55:21.259Z · LW(p) · GW(p)

You mostly talk about your new blog instead of the idea the post claims to be about, and the post largely sounds like an advertisement. Two paragraphs summarizing your idea and one sentence talking about the blog (preferably worded as a disclaimer instead of an advertisement) would have been better.

comment by Oscar_Cunningham · 2012-05-31T00:34:18.230Z · LW(p) · GW(p)

No rationality info!

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T00:40:37.019Z · LW(p) · GW(p)

Thanks.

comment by Eugine_Nier · 2012-05-31T03:56:28.359Z · LW(p) · GW(p)

By the way, your "Leibniz' monads" link is broken.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-31T04:11:35.397Z · LW(p) · GW(p)

fixed thx

comment by jsalvatier · 2012-05-30T20:30:38.218Z · LW(p) · GW(p)

Aesthetics issue: that slide-up animation at the beginning is bad; it makes me feel a bit queasy.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-05-30T20:35:14.410Z · LW(p) · GW(p)

I don't like it either. I think I'll have to edit the CSS to change it. Hopefully that'll work. Overall Google's aesthetics are pretty decent but sometimes they really goof. The Blogger platform has recently been revamped so hopefully they'll make a few tweaks soon. Overall I like the new platform, and Blogger makes it pretty easy to have guest authors post whenever they want to. (ETA: There are alternative layouts of course, but they're not as beautiful. I'd prefer to keep the current layout but just somehow get rid of those annoying transitions.)