What would an ultra-intelligent machine make of the great filter?

post by James_Miller · 2010-11-28T18:47:52.503Z · score: -3 (8 votes) · LW · GW · Legacy · 10 comments

 

Imagine that an ultra-intelligent machine emerges from an intelligence explosion.  The AI (a) finds no trace of extraterrestrial intelligence, (b) calculates that many star systems should have given birth to star-faring civilizations, so mankind hasn't passed through most of the Hanson/Grace great filter, and (c) realizes that with trivial effort it could immediately send out self-replicating von Neumann machines to make the galaxy more to its liking.

Based on my admittedly limited reasoning abilities and information set, I would guess that the AI would conclude that the zoo hypothesis is probably the solution to the Fermi paradox. And because stars don't appear to have been “turned off,” either free energy is not a limiting factor (so the laws of thermodynamics are incorrect) or we are being fooled into thinking that stars unnecessarily “waste” free energy (perhaps because we are in a computer simulation).

 

10 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2010-11-28T19:16:30.197Z · score: 17 (17 votes) · LW(p) · GW(p)

You are stating "I think ultra-intelligent machine will believe X", but this simply means that you believe X, so why the talk about ultra-intelligent machines? It serves no purpose.

comment by Vladimir_Nesov · 2010-11-28T20:08:34.334Z · score: 6 (6 votes) · LW(p) · GW(p)

(It looks like LW version of the "All reasonable/rational/Scottish people believe X" dark side rhetoric is "Ultra-intelligent machines will believe X".)

comment by timtyler · 2010-11-29T17:51:21.940Z · score: -1 (1 votes) · LW(p) · GW(p)

(b) is counterfactual today. Nobody can calculate how many nearby star systems should have given birth to star-faring civilizations, since nobody knows p(origin of life). We can't even make life from plausible inorganic materials yet. We are clueless - and thus highly uncertain.
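The sensitivity timtyler is pointing at can be made concrete with a toy Drake-style calculation. Every parameter value below is an assumed placeholder chosen for illustration, not a measurement; the point is only that varying p(origin of life) across its plausible range swings the answer by many orders of magnitude, which is why (b) cannot actually be calculated today.

```python
def drake_estimate(rate_star_formation, frac_with_planets, habitable_per_system,
                   p_life, p_intelligence, p_communicative, lifetime_years):
    """Expected number of detectable civilizations in the galaxy
    (Drake-equation form: N = R* · fp · ne · fl · fi · fc · L)."""
    return (rate_star_formation * frac_with_planets * habitable_per_system
            * p_life * p_intelligence * p_communicative * lifetime_years)

# Hold every other factor fixed (all values assumed) and vary only
# p(origin of life), the factor timtyler says nobody knows:
for p_life in (1.0, 1e-3, 1e-9):
    n = drake_estimate(rate_star_formation=1.5,   # stars/year, assumed
                       frac_with_planets=0.9,     # assumed
                       habitable_per_system=0.4,  # assumed
                       p_life=p_life,
                       p_intelligence=0.1,        # assumed
                       p_communicative=0.1,       # assumed
                       lifetime_years=1e6)        # assumed
    print(f"p(life)={p_life:g} -> expected civilizations ~ {n:g}")
```

With these made-up inputs the estimate runs from thousands of civilizations down to effectively zero, purely on the unknown abiogenesis probability.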

comment by James_Miller · 2010-11-28T19:32:17.530Z · score: -2 (2 votes) · LW(p) · GW(p)

OK. I think that if an ultra-intelligent AI determines that (a), (b), and (c) are correct, then the zoo hypothesis is probably the solution to the Fermi paradox. I think this last sentence "serves a purpose" because (a), (b), and (c) seem somewhat reasonable, and thus after reading my post a reader should assign a higher probability to the zoo hypothesis being true.

comment by [deleted] · 2010-11-29T11:34:30.849Z · score: -1 (1 votes) · LW(p) · GW(p)

So you are using the ultra-intelligent AI as a kind of Omega, then? To establish that (a), (b), and (c) are definitely true?

comment by JoshuaZ · 2010-11-28T19:05:02.927Z · score: 3 (3 votes) · LW(p) · GW(p)

This has the major assumption that the AI will conclude that it simply isn't the first to pass the great filter. I suspect that a strong AI in that sort of context would have good reason to think otherwise.

comment by James_Miller · 2010-11-28T19:17:24.899Z · score: 0 (0 votes) · LW(p) · GW(p)

It's not a direct assumption: an implication of (a) and (b) is that the AI is extremely unlikely to be the first to have passed the great filter. But if the AI believes that no other explanation, including the zoo hypothesis, has a non-trivial probability of being correct, then it would conclude that mankind probably is the first to have passed the great filter.

comment by anonym · 2010-12-13T06:30:12.796Z · score: 0 (0 votes) · LW(p) · GW(p)

Why don't you explain your reasoning for your conclusion based on (a), (b), and (c)? Merely saying "I would guess that" is not persuasive.

comment by Vaniver · 2010-11-28T19:17:20.609Z · score: -5 (9 votes) · LW(p) · GW(p)

My guess is, being ultra-intelligent, it will realize that reshaping the galaxy was a primitive urge and focus on local affairs where it can stay connected with its effects. Filter solved!

comment by James_Miller · 2010-11-28T19:18:31.986Z · score: 0 (0 votes) · LW(p) · GW(p)

This would likely mean that free energy wasn't a constraint on the AI's optimization powers or lifespan.