Resolving the Fermi Paradox: New Directions

post by jacob_cannell · 2015-04-18T06:00:33.871Z · LW · GW · Legacy · 59 comments

Contents

  The Past
  The Future(s)
    Collapse/Extinction:
    Biological/Mixed Civilization:
    PostBiological Warm-tech AI Civilization:
    From Warm-tech to Cold-tech
    Stellar Escape Trajectories
    The Great Game 
    Spheres of Influence
    Conditioning on our Observational Data
    Observational Selection Effects
    Rogue Planets
    Conclusion

Our sun appears to be a typical star: unremarkable in age, composition, galactic orbit, or even in its possession of many planets.  Billions of other stars in the Milky Way have similar general parameters and orbits that place them in the galactic habitable zone.  Extrapolations from recent exoplanet surveys indicate that most stars have planets, removing yet another potentially unique dimension for a great filter in the past.

According to Google, there are about 20 billion Earth-like planets in the Galaxy.

A paradox indicates a flaw in our reasoning or our knowledge which, upon resolution, may cause some large update in our beliefs.

Ideally we could resolve this through massive multiscale Monte Carlo computer simulations to approximate Solomonoff Induction on our current observational data.  If we survive and create superintelligence, we will probably do just that.

In the meantime, we are limited to constrained simulations, Fermi estimates, and other shortcuts to approximate the ideal Bayesian inference.
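
As a toy illustration of what such a shortcut looks like in practice, here is a minimal Drake-style Monte Carlo sketch.  Every parameter range below is an illustrative placeholder, not an estimate defended in this post.

```python
import random

def sample_civ_count(rng):
    """One Monte Carlo draw of 'detectable civs in the galaxy' from a toy
    Drake-style factorization.  Every range here is a placeholder."""
    n_stars        = 2e11                       # order-of-magnitude star count
    f_habitable    = rng.uniform(0.05, 0.2)     # stars with a habitable planet
    f_life         = 10 ** rng.uniform(-3, 0)   # habitable planets developing life
    f_intelligence = 10 ** rng.uniform(-3, 0)   # life reaching technological intelligence
    f_detectable   = 10 ** rng.uniform(-6, -2)  # fraction currently detectable
    return n_stars * f_habitable * f_life * f_intelligence * f_detectable

rng = random.Random(0)
samples = sorted(sample_civ_count(rng) for _ in range(100_000))
print("median:", samples[len(samples) // 2])
print("5th / 95th percentile:", samples[5_000], samples[95_000])
```

The output distribution is dominated by whichever factors are allowed to span many orders of magnitude, which is why the historical evidence reviewed next matters so much.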

The Past

While there is still obvious uncertainty concerning the likelihood of the series of transitions along the path from the formation of an earth-like planet around a sol-like star up to an early tech civilization, the general direction of recent evidence favours a strong Mediocrity Principle.

Here are a few highlight developments from the last few decades relating to an early filter:

  1. The time window between the formation of the earth and the earliest life has been narrowed to a brief interval.  Panspermia has also gained ground, with some recent complexity arguments favoring a common origin of life around 9 billion years ago.[1]
  2. The discovery of various extremophiles indicates that life is robust across a much wider range of environments than the norm on earth today.
  3. Advances in neuroscience and studies of animal intelligence lead to the conclusion that the human brain is not nearly as unique as once thought.  It is just an ordinary scaled-up primate brain, with a cortex enlarged to roughly 4x the size of a chimpanzee's.  Elephants and some cetaceans have cortical neuron counts similar to the chimpanzee's, and demonstrate similar or greater levels of intelligence in terms of rituals, problem solving, tool use, communication, and even understanding rudimentary human language.  Elephants, cetaceans, and primates are widely separated lineages, indicating robustness and inevitability in the evolution of intelligence.

So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction - but see this reply for an argument for an early filter).

The Future(s)

When modelling the future development of civilization, we must recognize that the future is a vast cloud of uncertainty compared to the past.  The best approach is to focus on the key general features of future postbiological civilizations, categorize the full space of models, and then update on our observations to determine which ranges of the parameter space are excluded and which regions remain open.

An abridged taxonomy of future civilization trajectories:

Collapse/Extinction:

Civilization is wiped out due to an existential catastrophe that sterilizes the planet sufficiently to kill most large multicellular organisms, essentially resetting the evolutionary clock by a billion years.  Given the potential dangers of nanotech/AI/nuclear weapons - and then aliens - I believe this possibility is significant: i.e. in the 1% to 50% range.

Biological/Mixed Civilization:

This is the old-skool sci-fi scenario.  Humans or our biological descendants expand into space.  AI is developed but limited to roughly human intelligence, like C-3PO.  No or limited uploading.

This eventually leads to slow colonization, terraforming, and perhaps Dyson spheres.

This scenario is almost not worth mentioning: prior < 1%.  Unfortunately, SETI in its current form is still predicated on a world model that assigns a high prior to these futures.

PostBiological Warm-tech AI Civilization:

This is Kurzweil/Moravec's sci-fi scenario.  Humans become postbiological, merging with AI through uploading.  We become a computational civilization that then spreads out at some fraction of the speed of light to turn the galaxy into computronium.  This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.

One of the very few reasonable assumptions we can make about any superintelligent postbiological civilization is that higher intelligence involves increased computational efficiency.  Advanced civs will upgrade into physical configurations that maximize computation capabilities given the local resources.

Thus to understand the physical form of future civs, we need to understand the physical limits of computation.

One key constraint is the Landauer Limit, which states that the erasure (or cloning) of one bit of information requires a minimum of kT ln 2 joules.  At room temperature (293 K), this corresponds to a minimum of 0.017 eV to erase one bit.  Minimum is, however, the keyword here: according to the principle, the probability of the erasure succeeding is only 50% at the limit.  Reliable erasure requires some multiple of the minimal expenditure - a reasonable estimate being on the order of 100 kT (a few eV) per bit erasure at today's levels of reliability.
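
A quick back-of-envelope check of these numbers - a sketch only, with the ~100x figure standing in for the reliability margin described above:

```python
from math import log

K_BOLTZMANN = 1.380649e-23   # J/K
EV = 1.602176634e-19         # joules per eV

def landauer_ev(temp_kelvin, margin=1.0):
    """Energy in eV to erase one bit at the given temperature.
    margin=1 is the bare kT*ln(2) limit; margin~100 approximates the
    reliability overhead assumed in the text."""
    return margin * K_BOLTZMANN * temp_kelvin * log(2) / EV

for temp in (293, 2.7, 0.01):   # room temp, CMB temp, current cryogenic limit
    print(f"T = {temp:>6} K: bare limit = {landauer_ev(temp):.1e} eV, "
          f"with 100x margin = {landauer_ev(temp, 100):.1e} eV")
```

At 293 K the bare limit comes out to roughly 0.017-0.018 eV, consistent with the figure above, and the same formula gives the temperature comparisons used below.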

Now, the second key consideration is that Landauer's Limit does not include the cost of interconnect, which already dominates the energy cost in modern computing.  Just moving bits around dissipates energy.

Moore's Law is approaching its asymptotic end in a decade or so due to these hard physical energy constraints and the related miniaturization limits.

I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.

From Warm-tech to Cold-tech

There is a way forward to vastly increased energy efficiency, but it requires reversible computing (to increase the ratio of computations per bit erasures), and full superconducting to reduce the interconnect loss down to near zero.

The path to enormously more powerful computational systems necessarily involves transitioning to very low temperatures, and the lower the better, for several key reasons:

  1. There is the obvious immediate gain from lowering the cost of bit erasures: a bit erasure at room temperature costs roughly 100 times more than a bit erasure at the cosmic microwave background temperature (~2.7 K), and roughly thirty thousand times more than an erasure at 0.01 K (the current achievable limit for large objects).
  2. Low temperatures are required for most superconducting materials regardless.
  3. The delicate coherence required for practical quantum computation requires or works best at ultra low temperatures.

At a more abstract level, the essence of computation is precise control over the physical configurations of a device as it undergoes complex state transitions.  Noise/entropy is the enemy of control, and temperature is a form of noise.

Assuming large scale quantum computing is possible, then the ultimate computer is thus a reversible massively entangled quantum device operating at absolute zero.  Unfortunately, such a device would be delicate to a degree that is hard to imagine - even a single misplaced high energy particle could cause enormous damage.

In this model, advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible.  The ideal environment for such a device is as far away from hot stars as one can possibly go, and the farther the better.  The extreme energy efficiency of advanced low temperature reversible/quantum computing implies that energy is not a constraint.  These advanced civilizations could probably power themselves using fusion reactors for millions, if not billions, of years.

Stellar Escape Trajectories

For a cold-tech civilization, one interesting long term strategy involves escaping the local star's orbit to reach the colder interstellar medium, and eventually the intergalactic medium.

If we assume that these future civs have long planning horizons (reasonable), we can consider this an investment with an initial cost - the energy required to achieve escape velocity - and a return measured as the future integral of computation gained over the trajectory due to increased energy efficiency.  Expendable boost mass in the system can be used, and domino chains of complex chaotic gravitational-assist maneuvers computed by deep simulations may offer a route to expel large objects using reasonable amounts of energy.[3]
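
A rough sketch of the shape of that trade-off, taking only the physical constants as given and leaving everything civilization-specific as an unknown:

```python
# Back-of-envelope: energy per kilogram to escape the Sun starting from 1 AU,
# versus the reduction in Landauer erasure cost from computing at ~10 K
# instead of ~300 K.  Illustrative only - the actual payoff depends on the
# (unknown) erasure rate per kg of computronium and the planning horizon.
from math import log, sqrt

G     = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU    = 1.496e11       # m
K_B   = 1.380649e-23   # J/K

escape_energy_per_kg = G * M_SUN / AU            # = 0.5 * v_escape^2
v_escape = sqrt(2 * escape_energy_per_kg)

def erasure_cost(temp_k):
    return K_B * temp_k * log(2)                 # J per bit erased (bare limit)

print(f"escape velocity from 1 AU: {v_escape / 1e3:.0f} km/s")
print(f"escape energy: {escape_energy_per_kg:.2e} J/kg")
print(f"erasure-cost ratio, 300 K vs 10 K: {erasure_cost(300) / erasure_cost(10):.0f}x")
```

The one-time cost is fixed at roughly 9e8 J per kilogram (before any gravitational-assist tricks), while the efficiency gain compounds over the entire remaining lifetime of the object - which is why the length of the planning horizon matters.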

The Great Game 

Given the constraints of known physics (ie no FTL), it appears that the computational brains housing more advanced cold-tech civs will be incredibly vulnerable to hostile aliens.  A relativistic kill vehicle is a simple technology that permits little avenue for direct defense.  The only strong defense is stealth.

Although the utility functions and ethics of future civs are highly speculative, we can observe that a very large space of utility functions leads to similar convergent instrumental goals involving control over one's immediate future light cone.  If we assume that some civs are essentially selfish, then the dynamics suggest successful strategies will involve stealth and deception to avoid detection, combined with deep simulation sleuthing to discover potential alien civs and their locations.

If two civs both discover each other's locations around the same time, then MAD (mutually assured destruction) dynamics take over and cooperation has stronger benefits.  The vast distances involved suggest that one-sided discoveries are more likely.

Spheres of Influence

A new civ, upon achieving the early postbiological stage of development (earth in, say, 2050?), should be able to resolve the general answer to the Fermi paradox using advanced deep simulation alone - long before any probes would reach distant stars.  Assuming that the answer is "lots of aliens", then further simulations could be used to estimate the relative likelihood of elder civs interacting with the past lightcone.

The first few civilizations would presumably realize that the galaxy is more likely to be mostly colonized, in which case the ideal strategy probably involves expansion of actuator type devices (probes, construction machines) into nearby systems combined with construction and expulsion of advanced stealthed coldtech brains out into the void.  On the other hand, the very nature of the stealth strategy suggests that it may be hard to confidently determine how colonized the galaxy is. 

For civilizations appearing later, the situation is more complex.  The younger a civ estimates itself to be in the cosmic order, the more likely it becomes that its local system has already come under an alien influence.

From the perspective of an elder civ, an alien planet at a pre-singularity level of development has no immediate value.  Raw materials are plentiful - and most of the baryonic mass appears to be interstellar and free floating.  The tiny relative value of any raw materials on a biological world is probably outweighed - in the long run - by the potential future value of information trade with the resulting mature civ.

Each biological world - or seed of a future elder civ - although perhaps similar in the abstract, is unique in its details.  Each such world is valuable for the potentially unique knowledge/insights it may eventually generate - directly or indirectly.  From a purely instrumental standpoint, there is some value in preserving biological worlds to increase general knowledge of civ development trajectories.

However, there could exist cases where the elder civ may wish to intervene.  For example, if deep simulations predict that the younger world will probably develop into something unfriendly - like an aggressive selfish/unfriendly replicator - then small perturbations in the natural trajectory could be called for.  In short, the elder civ may have reasons to occasionally 'play god'.

On the other hand, any intervention itself would leave a detectable signature or trace in the historical trajectory which in turn could be detected by another rival or enemy civ!  In the best case these clues would only reveal the presence of an alien influence.  In the worst case they could reveal information concerning the intervening elder civ's home system and the likely locations of its key assets.

Around 70,000 years ago, we had a close encounter with Scholz's star, which passed within 0.8 light years of the sun (within the Oort cloud).  If the galaxy is well colonized, flybys such as this have potentially interesting implications (that particular flyby corresponds to the estimated time of the Toba super-eruption, for example).

Conditioning on our Observational Data

Over the last few decades SETI has searched a small portion of the parameter space covering potential alien civs.  

SETI's original main focus concerned the detection of large permanent alien radio beacons.  We can reasonably rule out models that predict advanced civs constructing high energy omnidirectional radio beacons.

At this point we can also mostly rule out large warm-tech civilizations (energy-constrained civilizations) that harvest most of the energy from stars.

Obviously detecting cold-tech civilizations is considerably more difficult, and perhaps close to impossible if advanced stealth is a convergent strategy.

However, determining whether the galaxy as a whole is colonized by advanced stealth civs is a much easier problem.  In fact, one way or another the evidence is already right in front of us.  We now know that most of the mass in the galaxy is dark rather than light.  I have assumed that coldtech still involves baryonic matter and normal physics, but of course there is also the possibility that non-baryonic matter could be used for computation.  Either way, the dark matter situation is favorable.  Focusing on normal baryonic matter, the ratio of dark/cold to light/hot is still large - very favorable for colonization.

Observational Selection Effects

All advanced civs will have strong instrumental reasons to employ deep simulations to understand and model developmental trajectories for the galaxy as a whole and for civilizations in particular.  A very likely consequence is the production of large numbers of simulated conscious observers, a la the Simulation Argument.  Universes with the more advanced low temperature reversible/quantum computing civilizations will tend to produce many more simulated observer moments and are thus intrinsically more likely than one would otherwise expect - perhaps massively so.

 

Rogue Planets


If the galaxy is already colonized by stealthed coldtech civs, then one prediction is that some fraction of the stellar mass has been artificially ejected.  Some recent observations actually point - at least weakly - in this direction.

From "Nomads of The Galaxy"[4]

We estimate that there may be up to ∼10^5 compact objects in the mass range 10^−8 to 10^−2 M⊙ per main sequence star that are unbound to a host star in the Galaxy. We refer to these objects as nomads; in the literature a subset of these are sometimes called free-floating or rogue planets.

Although the error range is still large, it appears that free floating planets outnumber planets bound to stars, and perhaps by a rather large margin.
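
For a rough sense of scale - taking the paper's upper bound at face value and using a round number for the galactic star count:

```python
# Total implied by the quoted upper bound: up to ~1e5 nomads
# (10^-8 to 10^-2 solar masses) per main-sequence star.
stars_in_galaxy = 2e11    # order-of-magnitude estimate
nomads_per_star = 1e5     # the paper's upper bound for the quoted mass range
print(f"upper bound on nomads: ~{stars_in_galaxy * nomads_per_star:.0e} objects")
```

Most of that count presumably sits at the low-mass end of the quoted range, but it would still leave an enormous population of unbound bodies to survey - or to hide among.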

Assuming the galaxy is colonized: it could be that rogue planets form naturally outside of stars and are then colonized.  It could be that they form around stars and are then ejected naturally (and colonized).  Artificial ejection - even if it happens - may be a rare event.  Or not.  But at least a few of these options could potentially be differentiated with future observations - for example, if we find an interesting discrepancy between the rogue planet distribution predicted by simulations (which obviously do not yet include aliens!) and actual observations.

Also: if rogue planets outnumber stars by a large margin, then it follows that rogue planet flybys are more common in proportion.

 

Conclusion

SETI to date allows us to exclude some regions of the parameter space for alien civs, but the regions excluded correspond to low prior probability models anyway, based on the postbiological perspective on the future of life.  The most interesting regions of the parameter space probably involve advanced stealthy aliens in the form of small compact cold objects floating in the interstellar medium.

The upcoming WFIRST telescope should shed more light on dark matter and enhance our microlensing detection abilities significantly.  Sadly, its planned launch date isn't until 2024.  Space development is slow.

 

59 comments

Comments sorted by top scores.

comment by MarsColony_in10years · 2015-04-18T18:32:26.667Z · LW(p) · GW(p)

First, in the interest of full disclosure, the reason I'm here on LW is to maximize my contribution to promoting intelligent life. It currently appears that maximizing the number of Quality Adjusted Life Years integrated over the period from now until the heat death of the universe can only be achieved through spaceflight and spreading life/AI through the solar system, and then the galaxy. This can be done through either directed panspermia or by spreading intelligent life/AI directly. I have spent the last year or so trying to find any flaws in my understanding, and so I'm about to do everything I can to tear your initial argument to shreds. That's not necessarily because I don't agree with you (although my reasoning diverges about halfway through), but rather a concerted effort to avoid confirmation bias. I don't want to devote my entire life to something sub-optimal, just because I'm afraid to put my views under scrutiny.

So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction).

You mentioned several possibilities for a great filter in the past, but that was by no means a comprehensive list. Here's a longer list, off the top of my head:

  1. Habitable stars are rare. (roughly sun-sized, minimal solar flares, etc) Poor candidate, as you point out.

  2. Habitable planets are rare. (Orbit within the habitable zone, liquid H2O, ingredients for life) You touched on this, but our understanding of the source of Earth's water is poor, so I don't think we can discard this as a possibility. We have an oddly large moon, which may have played a role. First, its gravity ensured that the Earth's rotational axis stays roughly perpendicular to its orbital plane most of the time. This means that the planet is baked roughly evenly, rather than spending millions of years with the north pole facing the sun. Tidal forces also affect the mantle, which creates our magnetosphere, which in turn prevents atmospheric loss to space. There are a surprising number of other theories linking the moon to life on Earth.

  3. Panspermia / Abiogenesis is rare. (transport may be limited by radiation/mutations, while genesis of new life may require rare environments or energy sources) We have reasonable evidence that life could survive within rocks blasted off of a planet's surface long enough to seed nearby planets, but not necessarily that life could survive the long voyage between nearby stars. We've demonstrated that most, but not all, essential amino acids can be generated under conditions similar to those of early Earth. Also, there's a weird coincidence where the formation of the first life on earth seems to coincide well with the end of the late heavy bombardment, which might have created conditions conducive to the formation of life late enough after planetary formation that geological activity could settle down a bit. There doesn't seem to be any reason why there should have been a second heavy bombardment period, though, so that may be unique to our solar system.

  4. Either photosynthesis is rare, or the Oxygen Catastrophe generally kills off all species. (High concentrations of oxygen are highly poisonous, which caused a massive extinction event. Additionally, losing all that CO2 from the atmosphere cooled earth tremendously since the sun wasn't so bright. This caused the longest Snowball Earth episode in the planet's history, in which all the planet's oceans froze solid and all the land was covered in one massive glacier.) It seems likely that life could never have recovered from this.

  5. Prokaryotic life is common, but Eukaryotic life is rare. (It's really hard to evolve a cell nucleus.) Eukaryotes only appeared about 2 billion years after Prokaryotes; halfway through the chain of evolution from the first life until today.

  6. Eukaryotic life is common, but multicellular life is rare. We've only had it for ~500 million years.

  7. Multicellular life is common, but complex life on land is rare. It's possible that we could never have developed spines or crawled onto land, or that animal life itself might be rare. This seems much less plausible, since it seems to have sprung directly from the evolution of multicellular life, in a fairly spectacular explosion of complexity.

  8. Complex life is common, but is regularly wiped out before it can become intelligent. There have been 5 big extinction events in earth's history, most recently the meteor that killed the dinosaurs. Although these weren't enough to wipe out all life on earth, there are several cosmic threats that could. These include collision with another planet or other sufficiently large object, which might be caused by orbital periods synching up with Jupiter or by passing stars or black-holes. Additionally, Gamma Ray Bursts are extremely common, and might regularly wipe out all life closer to the galactic core, where the stars are closer together. This would explain why we evolved out on the edge of a spiral arm of the Milky Way, and not closer to the galactic center.

  9. Complex life is common, but intelligent life is rare. There seem to be a lot of somewhat intelligent creatures that aren't closely related to us. (Parrots, octopus, dolphins, etc.) There are even several animals that make limited use of tools. What is rare, however, appears to be the capacity for abstract thought. Chimps can learn from each other by copying, but have a hard time learning or teaching each other without demonstrating. We're also much better at learning by copying others, but we can also learn from abstract symbols written on a piece of paper. This appears to be a result of runaway evolution, where humans selected for mates with a high capacity for abstract thought, perhaps via a high capacity to predict others' actions and plot accordingly.

  10. Intelligent life is common, but technological civilizations are rare. We have had several steady-state conditions over our species' history. We used the first simple stone tools ~2.3 million years ago, and then stood upright and invented fire 1.5 million years ago. We haven't evolved noticeably over the past 200,000 years, and yet we only developed agriculture and colonized the planet 10,000 years ago. Some of that may be due to the most recent ice age, but not all of it. We didn't invent bronze or written language until 5,000 years ago. All the great advanced civilizations made relatively small advances in technology, and put all their efforts into infrastructure rather than R&D. The only thing the Romans invented was concrete; everything else was an adaptation of ideas from other cultures. Western civilization is really the first culture to invest heavily in R&D, and we generally suck at it. Places like silicon valley are the exception to the rule.

Given all this, I wouldn't be so quick to assume that the great filter is in front of us. All this must be weighed against the risks posed by all the various existential risks. Nuclear war was a close call in the cold war, and the risk is an order of magnitude lower now, but is by no means gone. AI gets discussed a lot on here, but I don't think biological warfare gets the attention it deserves. Our understanding of biology is growing rapidly, and I think it may one day be relatively easy for anyone to genetically engineer an unusually dangerous virus or pandemic. Additionally, advanced civilizations in general tend to only last on the order of a few hundred years, according to this paper. That's more or less in line with the Future of Humanity Institute's informal Global Catastrophic Risk Survey. (The mean estimate for humanity's chance of going extinct this century was on the order of 20%.) That said, Nick Bostrom himself appears to think that the great filter is more likely to lie behind us than ahead of us. To me, it seems like it could easily go either way, but since Bostrom has been researching this much longer than I have, I'm inclined to shift my probability estimate a bit further toward the great filter being behind us.

Replies from: MarsColony_in10years, jacob_cannell
comment by MarsColony_in10years · 2015-04-18T20:03:51.186Z · LW(p) · GW(p)

The above dealt primarily with the first half of your post, but let me also address the 2nd half. You've assigned several probability estimates to various outcomes of our civilization:

  1. Collapse/Extinction: "in the 1% to 50% range." I'm inclined to agree with you on this one, as described in the last paragraph of my above post.

  2. Biological/Mixed Civilization: “This scenario is almost not worth mentioning: prior < 1%” I think you've defined this a bit too narrowly. We don't yet see any limiting factor for AI advancement besides physics, but that doesn't mean that one won't make itself apparent. Maybe this factor will turn out to be teraFLOPS (aka limited by Moore's law) or energy (limited by our energy production capacity) or even matter (limited by the amount of rare earth elements necessary to make computronium). But it could also happen that we fail to make a super-intelligence at all, or that AI eventually achieves most, but not all, of humans' mental abilities. The likelihood of a general intelligence increases asymptotically with time, but I think it would be a mistake to assume that it is increasing asymptotically toward 1. It could easily be getting closer and closer to 0.8 or some other value which is hard to calculate. The existence of the human mind shows that consciousness can be built out of atoms, but not necessarily that it can be built out of a string of transistors, or that it is simple enough that we can ever understand it well enough to reproduce it in code. There's also the existential risk of developing a flawed AI. We only have 1 shot at it, and the evidence seems to be against developing one correctly on the first try. I suspect that the supermajority of civilizations that develop AI's develop flawed AIs. Even if 90% develop an AI before going to the stars, perhaps >99.9999% are wiped out by a poorly designed AI. This would lead to many more “Biological/Mixed Civilizations” than AI civilizations, if the flawed AI's tend to wipe themselves out or not to spread out into the universe.

  3. PostBiological Warm-tech AI Civilization: “I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.” This seems slightly low to me, but not by much. “This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.” Although this state doesn't flow from energy being a limiting factor (aka biological/mixed civilizations may also be energy limited) I agree that such a civilization would eventually become energy limited. I see 2 ways of solving this: better harvesting (aka Dyson swarms, since Dyson spheres are likely mass-limited) or broader civilization (if it takes less energy to send a colony to the nearest star, then you do that before you start building a Dyson swarm).

  4. From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass. I'd probably put less, but that's not actually my main contention. I don't buy that this is sufficient reason to travel to the interstellar medium, away from such a ready energy and matter source as a solar system. You list 3 reasons: lower energy bit erasures, superconductivity, and quantum computer efficiency. Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc. Only a few superconductors require temperatures below ~50 Kelvin, and you can get that anywhere perpetually shaded from the sun, such as the craters on the north and south poles of the moon (~30 Kelvin). If you want it somewhere else, stop an asteroid from spinning and build a computer on the dark side. I'm not sure that quantum computers need to be below that either. Anywhere you go, you'll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability? In order to expand exponentially, such a system would still need huge amounts of matter for superconductors and whatever else.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-18T21:07:20.438Z · LW(p) · GW(p)

Collapse/Extinction: "in the 1% to 50% range."

I'm inclined to agree with you on this one, as described in the last paragraph of my above post.

I should have pointed out that even a high probability of collapse is unlikely to act as a filter, because it would have to be convergent across essentially all civs - a single surviving civ can still colonize.

From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass.

It is where I am putting most of my prior probability mass. There are three considerations:

  1. Engineering considerations - the configurations which maximize computation are those where the computational mass is far from heat sources such as stars which limit computation. With reversible computing, energy is unlikely to be a constraint at all, and the best use of available mass probably involves ejecting the most valuable mass out of the system.

  2. Stealth considerations - given no radical new physics, it appears that stealth is the only reliable way to protect a civ's computational brains. Any civ hanging out near a star would be a sitting duck.

  3. Simulation argument selection effects - discussed elsewhere, but basically the coldtech scenario tends to maximize the creation of simulations which produce observers such as ourselves.

After conditioning on observations of the galaxy to date, the coldtech scenario contains essentially all of the remaining probability mass. Of course, our understanding of physics is incomplete, and I didn't have time to list all of the plausible models for future civs. There is the transcension scenario, which is related to my model of coldtech civs migrating away from the galactic disk.

One other little thing I may have forgotten to mention in the article: the distribution of dark matter is that of a halo, which is suspiciously close to what one would expect in the expulsion scenario, where elder civs are leaving the galaxy in directions away from the galactic disk. Of course, that effect is only relevant if a good chunk of the dark matter is usable for computation.

Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc.

No - I should have elaborated on the model more, but the article was already long.

Given some planemo (asteroid,moon,planet whatever) of mass M, we are concerned with maximizing the total quantity of computation in ops over the future that we can extract from that mass M.

If high tech reversible/quantum computing is possible, then the designs which maximize the total computation are all temperature limited, due to Landauer's limit.

Now there are actually many constraints to consider. There is a structural constraint that even if your device creates no heat, there is a limit to the ops/s achievable by one molecular transistor - and this actually is also related to Landauer's principle. Whether the computer is reversible or not, it still requires about 100 kT joules per reliable bitop - the difference is that the irreversible computer converts that energy into heat, whereas the reversible design recycles it.

If reversible/quantum computing is possible, then there is no competition - the reversible designs will scale to enormously higher computational densities (that would result in the equivalent of nuclear explosions if all of those bits were erased).

Temperature then becomes the last key thing you can optimize for, as the background temperature limits your effective cooling capability.

Anywhere you go, you'll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability?

Well - assuming that really powerful reversible computing is possible, then the answer - rather obviously - is yes.

But again energy harvesting is only necessary if energy is a constraint, which it isn't in the coldtech model.

Why not just build an inferior computer design that only achieves 10% of the maximum capacity? Intelligence requires computation. As long as there exists some reasonably low energy technique for ejecting from the solar system, it results in a large payoff multiplier. Of course you can still leave a bunch of stuff in the system, and perhaps even have a form of a supply line - although that could reduce stealth and add risk.

There is admittedly a lot of hand waving going on in this model. If I had more time I would develop a more accurate model focusing on some of the key unknowns.

One key variable is the maximum practical reversibility ratio, which is the ratio of bitops of computation per bitop erased. This determines the maximum efficiency gain from reversible computing. Physics doesn't appear to have a hard limit for this variable, but there will probably be engineering limits.

For example, an advanced civ will at the very least want to store its observational data from its sensors in a compressed form, which implies erasing some minimal number of bits. But if you think about a big civ occupying a sphere, the input bits/s coming in from a few sparse sensor ports on the surface is going to be incredibly tiny compared to the bitop/s rate across the whole volume.
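
A toy illustration of that surface-versus-volume point - the densities below are arbitrary placeholders, chosen only to show the scaling:

```python
from math import pi

def input_to_compute_ratio(radius_m, sensor_bits_per_m2_s=1e12, ops_per_m3_s=1e30):
    """Input that must eventually be stored or erased scales with surface area,
    while internal (potentially reversible) computation scales with volume."""
    surface_input = 4 * pi * radius_m**2 * sensor_bits_per_m2_s
    volume_ops = (4 / 3) * pi * radius_m**3 * ops_per_m3_s
    return surface_input / volume_ops

for r in (1e3, 1e5, 1e7):   # 1 km, 100 km, and planet-scale radii
    print(f"r = {r:.0e} m: input/compute ratio = {input_to_compute_ratio(r):.1e}")

# The ratio falls off as 1/r: the larger the body, the smaller the fraction of
# its computation that is forced to be irreversible just to absorb its input.
```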

Replies from: MarsColony_in10years
comment by MarsColony_in10years · 2015-04-20T02:06:52.663Z · LW(p) · GW(p)

First, let me try to summarize your position formally. Please let me know if I'm misrepresenting anything. We seem to be talking past each other on a couple subtopics, and I thought this might help clear things up.

1 p(type III civilization in milky way) ≈ 1

1.1 p(reversible computing | type III civilization in milky way) ≈ .9

1.1.1 p(¬energy or mass limited | reversible computing) ≈ 1

1.1.1.1 p(interstellar space | ¬ energy or mass limited) is large

1.1.1.2 p(intergalactic space | ¬ energy or mass limited) is very large

1.1.1.3 p( (interstellar space ↓ intergalactic space) | ¬ energy or mass limited) ≈ 0

1.1.2 p(energy or mass limited | reversible computing) ≈ 0

1.2 p(¬reversible computing | type III civilization in milky way) ≈ .1

2 p(¬type III civilizations in milky way) ≈ 0

Note that 1.1.1.1 and 1.1.1.2 are not mutually exclusive, and that ↓ is the joint denial / NOR boolean logic operator. Personally, after talking with you about this and reading through the reversible computing Wikipedia article (which I found quite helpful), my estimates have shifted up significantly. I originally started to build my own sort of probability tree similar to the one above, but it quickly became quite complex. I think the two of us are starting out with radically different structures in our probability trees. I tend to presume that the future has many more unknown factors than known ones, and so is fundamentally extremely difficult to predict with any certainty, especially in the far future.

The only thing we know for sure is the laws of physics, so we can make some headway by presuming that one specific barrier is the primary limiting factor of an advanced civilization, and see what logical conclusions we can draw from there. That's why I like your approach so much; before reading it I hadn't really given much thought to civilizations limited primarily by things like Landauer's limit rather than energy or raw materials. However, without knowing their utility function, it is difficult to know for sure what limits will be their biggest concern. It's not even certain that such a civilization would have one single unified utility function, although it's certainly likely.

If I was in the 18th century and trying to predict what the 21st century would be like, even if I was a near-perfect rationalist, I would almost certainly get almost everything wrong. I would see limiting factors like transportation and food. From this, I might presume that massive numbers of canals, rather than the automobile, would address the need for trade. I would also presume that food limited population growth, and might hypothesize that once we ran out of land to grow food we would colonize the oceans with floating gardens. The 18th century notion of a type I civilization would probably be one that farmed the entire surface of a planet, rather than one that harvested all solar energy. The need for electricity was not apparent, and it wasn't clear that the industrial revolution would radically increase crop yields. Perhaps fusion power will make electricity use a non issue, or perhaps ColdTech will decrease demand to the point where it is a non-issue. These are both reasonably likely hypotheses in a huge, mostly unexplored, hypothesis space.

But let's get to the substance of the matter.

1 and 2: I tried to argue for a substantially lower p value here, and I see that you responded, so I'll answer on that fork instead. This comment is likely to be long enough as is. :)

1.1 and 1.2: I definitely agree with you that a sufficiently advanced civilization would probably have ColdTech, but among many, many other technologies. It's likely to be a large fraction of the mass of all their infrastructure, but I'm not sure if it would be a super-majority. This would depend to a large degree on unknown unknowns.

1.1.1 and 1.1.2: I'm inclined to agree with you that ColdTech technology itself isn't particularly mass or energy limited. You had this to say:

  1. Engineering considerations - the configurations which maximize computation are those where the computational mass is far from heat sources such as stars which limit computation. With reversible computing, energy is unlikely to be a constraint at all, and the best use of available mass probably involves ejecting the most valuable mass out of the system.

I would still think that manufacturing and ejecting ColdTech is likely to be extremely mass and energy intensive. If the civilization expands exponentially limited only by their available resources, the observable effects would look much like other forms of advanced civilizations. Are you arguing that they would stay quite small for the sake of stealth? If so, wouldn't it still make sense to spread out as much as possible, via as many independent production sites as possible? You touch on this briefly:

As long as there exists some reasonably low energy technique for ejecting from the solar system, it results in a large payoff multiplier. Of course you can still leave a bunch of stuff in the system, and perhaps even have a form of a supply line - although that could reduce stealth and add risk.

I don't see any reason not to just keep sending material out in different directions. Perhaps this is the underlying assumption that caused us to disagree, since I didn't make the distinction between manufacturing being mass/energy limited and the actual computation being mass/energy limited. When you say that such a civilization isn't mass/energy limited, are you referring to just the ColdTech, or the production too?

It seems like you could just have the ejected raw materials/ColdTech perform a course correction and series of gravity assists based on the output from a random number generator, once they were out of observational distance from the origin system. This would ensure that no hostile forces could determine their location by finding the production facility still active. Instead of a handful of hidden colonies, you could turn a sizable fraction of a solar system's mass, or even a galaxy's mass, into computronium.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-20T03:03:24.959Z · LW(p) · GW(p)

Hmm, I'm not sure what to make of your probability tree yet... but in general I don't assign such high probabilities to any of these models/propositions. Also, I'm not sure what a type III civilization is supposed to translate to in the cold dark models that are temperature constrained rather than energy constrained. I guess you are using that to indicate how much of the galaxy's usable computronium mass is colonized?

It is probably unlikely that even a fully colonized galaxy would have a very high computronium ratio: most of the mass is probably low value and not worth bothering with.

That's why I like your approach so much; before reading it I hadn't really given much thought to civilizations limited primarily by things like Landauer's limit rather than energy or raw materials

Thanks. I like your analogies with food and other early resources. Energy is so fundamental that it will probably always constrain many actions (construction still requires energy, for example), but it isn't the only constraint, and not necessarily the key constraint for computation.

I would still think that manufacturing and ejecting ColdTech is likely to be extremely mass and energy intensive.

Yes - agreed. (I am now realizing ColdTech really needs a better name)

If the civilization expands exponentially limited only by their available resources, the observable effects would look much like other forms of advanced civilizations.

No, the observable effects vary considerably based on the assumed technology. Let's compare three models: stellavore, BHE (black hole entity) transcension, and CD (cold dark) arcilects.

The stellavore model predicts that civs will create dyson spheres, which should be observable during the long construction period and may be observable afterwards. John Smart's transcension model predicts black hole entities arising in or near stellar systems (although we could combine that with ejection I suppose). The CD arcilect model predicts that civs will cool down some of the planemos in their systems, possibly eject some of those planemos, and then also colonize any suitable nomads.

Each theory predicts a different set of observables. The stellavore model doesn't appear to match our observations all that well. The other two seem to match, although also are just harder to detect, but there are some key things we could look for.

For my CD arcilect model, we already have some evidence for a large amount of nomads. Perhaps there is a way to distinguish between artificial and natural ejections. Perhaps the natural pattern is that ejections tend to occur early in system formation, whereas artificial ejections occur much later. Perhaps we could even get lucky and detect an unusually cold planemo with microlensing. Better modelling of the dark matter halos may reveal a match between ejection models and at least a baryonic component of the halo.

For the CDA model, stars become somewhat wasteful, which suggests that civs may favour artificial supernovas if such a thing is practical. At the moment I don't see how one could get the energy/mass to do such a thing.

Those are just some quick ideas, I haven't really looked into it all that much.

Are you arguing that they would stay quite small for the sake of stealth? If so, wouldn't it still make sense to spread out as much as possible, via as many independent production sites as possible?

No, I agree that civilizations will tend to expand and colonize, and yes stealth considerations shouldn't prevent this.

I don't see any reason not to just keep sending material out in different directions. . .

Thinking about it a little more, I agree. And yes when I mention not being energy constrained, that was in reference only to computation, not construction. I assume efficient construction is typically in place, using solar or fusion or whatever.

It seems like you could just have the ejected raw materials/ColdTech perform a course correction and series of gravity assists based on the output from a random number generator, once they were out of observational distance from the origin system. This would ensure that no hostile forces could determine their location by finding the production facility still active.

Yes, this seems to be on the right track. However, the orbits of planetary bodies are very predictable and gravity assists are reversible operations (I think), which seems to imply that the remaining objects in the system will contain history sufficient for predicting the ejection trajectory (for a rival superintelligence). You can erase the history only by creating heat ... so maybe you end up sending some objects into the sun? :) Yes actually that seems pretty doable.

comment by jacob_cannell · 2015-04-18T20:08:32.817Z · LW(p) · GW(p)

Thanks for writing this up, I'll add a direct link from the main article under the historical model/early filter section.

So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction).

You mentioned several possibilities for a great filter in the past, but that was by no means a comprehensive list.

Yes. The article was already probably too long, and I wanted to focus on the future predictive parts of the model.

Before responding to some of your specific points, I will focus on a couple of key big picture insights that favor "lots of aliens" over any filter at all.

Bayesian Model Selection.

Any model/hypothesis which explains our observations as very rare events is intrinsically less likely than other models that explain our observations as typical events. This is just a simple consequence of Bayesian inference/Solomonoff Induction. A very rare event model is one which has a low P(E|H), which it must overcome with a high prior P(H) to defeat other hypothesis classes which explain the observations as typical (high probability) outcomes.

This is not quite a knockdown argument against the entire class of rare earth models, but it is close.
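
A toy numerical version of the point, with made-up likelihoods and priors purely for illustration:

```python
# H_rare: a civ like ours is a ~1-in-1e9 fluke per habitable planet.
# H_common: it is a ~1-in-10 outcome.  E: at least one civ arose here.
p_e_rare, p_e_common = 1e-9, 1e-1

# Even a prior that heavily favours the rare-earth model...
prior_rare, prior_common = 0.99, 0.01

posterior_rare = (p_e_rare * prior_rare) / (
    p_e_rare * prior_rare + p_e_common * prior_common)
print(f"posterior on the rare-earth model: {posterior_rare:.2e}")
# ...is overwhelmed by the likelihood ratio.  Whether anthropic selection
# effects neutralize this is exactly the objection raised in the reply below.
```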

Observational Selection Effects due to the Simulation Argument

Some physical universes tend to produce tons of simulated universes containing observers such as ourselves. This acts as a very large probability multiplier that strongly favors models which produce tons of simulations. The class of models I propose, where there are 1.) lots of aliens and 2.) strong motivations to simulate the history of other alien civs, describes exactly the types of conditions that maximize the creation of simulations and observers.

Now on to the potential early filter stages:

(1. Habitable stars are abundant (20 to 40 billion suitable candidates in the GHZ of our galaxy)

(2. Habitable planets are rare/abundant. Water is common - mars and many other bodies in our system have significant amounts of water.

We have an oddly large moon,

This is true. Our moon is unusual compared to the moons of other planets we can see. However, from the evidence in our system we can only conclude that our moon is roughly a 1 in 100 or 1 in 1000 event, not a 1 in a billion event. Even so, it is not at all clear that a moon like ours is necessary for life. There are many other means to the same end.

Even if our planet is a typical draw, it is likely to be an outlier in at least a few dimensions.

(3. Panspermia / Abiogenesis

Recent evidence seems to favor panspermia. For example - see the "Life Before Earth" paper and related.

Also, there's a weird coincidence where the formation of the first life on earth seems to coincide well with the end of the late heavy bombardment,

That's only a weird coincidence if one assumes abiogenesis on earth. Panspermia explains that 'coincidence' perfectly.

(5. Prokaryotic -> Eukaryotic

(6. Multicellular

(7. "Complex Land Life"

Again any model that explains these evolutionary developments as rare events is intrinsically less likely than models which explain the developments as likely events. Systemic evolutionary theory - especially its computational and complexity theory variants - explains how variation and selection over time inevitably and automatically explores the genetic search space and moves through a series of attractors of escalating complexity. The events you describe are not rare - they are the equivalent of the main sequence for biology.

(8. Complex life is common, but is regularly wiped out before it can become intelligent.

Of all your points, I think this one is perhaps the most important. Large extinctions have also acted as key evolutionary catalysts, so the issue is somewhat more complex. To understand this issue in more detail, we should build galaxy simulations which model the distribution of these events. This would give us a better understanding of the variance in evolutionary timescales, which could give us a better idea concerning the predicted distribution over the age of civilizations. On worlds that have too many extinction events, life is wiped out. On worlds that have too few, life gets stuck. We can observe only that on our world the exact sequence of extinction events resulted in a path from bacteria to humans that took roughly 4 billion years. It is intrinsically unlikely that our exact sequence was somehow optimal for the speed of evolution, and other worlds could have evolved faster.

(9. Complex life is common, but intelligent life is rare.

I addressed this point specifically. Chimpanzees have about 5 billion cortical neurons, elephants have a little more, some whales/dolphins are comparable. All 3 creatures display comparable very high levels of intelligence. Chimpanzees are very similar to the last common ancestor between ourselves and other primates - essentially they are right on the cusp of evolving into techno-cultural intelligence. So complex intelligence evolved in parallel in 3 widely separated lineages.

This is actually some of the strongest evidence against an early filter - as it indicates that the trajectory towards high intelligence is a strong attractor.

What is rare, however, appears to be the capacity for abstract thought.

This is basically nonsense unless you define 'abstract thought' as 'human language'. Yes language (and more specifically complex lengthy cultural education - as feral humans do not have abstract thought in the way we do) is the key to human 'abstract thought'. However, elephants and chimpanzees (and perhaps some cetaceans) are right on the cusp of being able to learn language. The upper range of their language learning ability comes close to the lower range of our language learning ability.

If you haven't seen it yet, I highly recommend the movie "Project Nim", which concerns an experiment in the 1970's with attempting to raise a chimp like a human, using sign language.

In short, chimpanzee brains are very much like our own, but with a few differences in some basic key variables (tweaks). Our brains are both larger and tuned for slower development (neoteny). A chimpanzee actually becomes socially intelligent much faster than a human child, but the chimp's intelligence also peaks much earlier. Chimps need to be able to survive on their own much earlier than humans. Our intelligence is deeper and develops much more slowly, tuned for a longer lifespan in a more complex social environment.

The reason that we are the only species to evolve language/technology is simple. Language leads to technology which quickly leads to civilization and planetary dominance. It is a winner take all effect.

(10. Technological civilization

Once you have language, technology and civilization follows with high likelihood.

We haven't evolved noticeably over the past 200,000 years, and yet we only developed agriculture and colonized the planet 10,000 years ago.

Hunter gatherers expanded across the globe and lived an easy life, hunting big dumb game until such game became rare, extinct, or adapted defenses. This led to a large extinction of the megafauna about 10,000 years ago, and then agriculture follows naturally once the easy hunting life becomes too hard.

We didn't invent bronze or written language until 5,000 years ago

Follows directly from agriculture leading to larger populations and warring city-states.

Given all this, I wouldn't be so quick to assume that the great filter is in front of us.

I wouldn't be so quick to assume that there is a filter at all - that is the much larger assumption.

Replies from: None, MarsColony_in10years
comment by [deleted] · 2015-04-19T03:20:28.213Z · LW(p) · GW(p)

It should be noted the "life before earth" paper is INFAMOUS amongst bioinformaticists for cherrypicking data to fit an exponential trend, having an incoherent conception of biological complexity, and generally not having anything to do with how evolution actually works. Reading it is PAINFUL.

It truly is 'not even wrong'.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-19T05:06:34.577Z · LW(p) · GW(p)

I agree with that - I mean its main graph has only 5 datapoints.

Still - the general idea (even if poorly executed) is interesting and could be roughly correct - but showing it in the way they intend to will require much more sophisticated computable measures of biological complexity. Machine learning techniques - acting as general compressors - could eventually help with that.

Replies from: None
comment by [deleted] · 2015-04-19T15:38:11.933Z · LW(p) · GW(p)

But any measure of biological complexity you could care to generate can increase or decrease over evolutionary time. Looking at modern organisms doesn't help you.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-19T18:37:29.483Z · LW(p) · GW(p)

But any measure of biological complexity you could care to generate can increase or decrease over evolutionary time.

Only at high frequencies. But at a more general level we have strong reasons to believe that the basic form of the argument is correct - that the overall complexity of the terrestrial biome has generally increased over the course of history from the origin of life up to today. Computational models of evolution more than suggest this - it is almost a given.

The problem of course is in actually quantifying the biome complexity - using, say, KC-type measures, which require sophisticated compression. In fact, one's ability to compute the true KC measure is only achieved in the limit of perfect compression - which incidentally corresponds to perfect understanding of the data! But with more sophisticated compression we could perhaps approach or estimate that limit.

A useful approximate measure would need to consider the full set of DNA in existence across the biome at a certain point in time. Duplications and related transformations are obviously compressible, whereas handling noise-like variation is more of a challenge. One way to handle it is to consider random draws from the implied species-defining distribution. For a species with lots of high variance/noisy (junk) sequences, the high variance sections then become highly compressible because one only has to specify the aggregate distribution (such that draws from that distribution would implement the phenotype). At the limit a sequence which is completely unused and under no selection pressure wouldn't contribute anything to the K-complexity.
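
As a minimal sketch of the compression-as-complexity-proxy idea (toy DNA-like strings only - a real biome-level measure would need far more sophisticated models than an off-the-shelf compressor):

```python
# Compressed size gives a crude upper bound on Kolmogorov complexity.
import random
import zlib

def compressed_size(seq: str) -> int:
    return len(zlib.compress(seq.encode(), level=9))

rng = random.Random(0)
repetitive = "ACGT" * 2500                                    # highly structured
noisy = "".join(rng.choice("ACGT") for _ in range(10_000))    # noise-like "junk"
duplicated = noisy[:5000] * 2                                 # a duplication event

for name, seq in [("repetitive", repetitive), ("random", noisy), ("duplicated", duplicated)]:
    print(f"{name:>10}: raw = {len(seq)}, compressed = {compressed_size(seq)}")

# Duplications compress well, while noise-like variation does not - which is
# why it has to be handled at the level of distributions rather than literal
# sequences, as described above.
```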

comment by MarsColony_in10years · 2015-04-19T07:10:02.189Z · LW(p) · GW(p)

Bayesian Model Selection.

Any model/hypothesis which explains our observations as very rare events is intrinsically less likely than other models that explain our observations as typical events.

This is true for all cases where the observer is not noticeably entangled in a causal manner with the event they are trying to observe. Otherwise, the Observation Selection Effect can contribute false evidence. If we presumed that earth is typical, then there should also be life on Mars, and in most other solar systems. However, we wouldn't ever have asked the question if we hadn't evolved into intelligent life. The same thing that caused us to ask the question also caused the one blue-green data point that we have.

To illustrate: If you came across an island in the middle of the ocean, you might do well to speculate that such islands must be extremely common for you to come across one in the middle of the ocean. However, if you see smoke rising from beyond the horizon, and sail for days until finally reaching a volcanic island, you could not assign the same density to such volcanic islands as to ordinary islands. The same thing that caused you to observe the volcanic island also caused you to search for it in the first place. In the case of observable life, the Observation Selection Effect is much, much stronger because there's no way we could conceivably have asked the question if we hadn't come into existence somehow. P(life is common|life on earth)=P(life is common), because knowing that life did evolve on earth can't give us Bayesian evidence for or against the hypothesis that life is common.

Observational Selection Effects due to the Simulation Argument

Some physical universes tend to produce tons of simulated universes containing observers such as ourselves.

This changes things, potentially. Everything I've said in previous posts has been conditional on the assumption that we don't live in a simulation. If we do, it is likely that our universe roughly resembles the real universe in some aspects. Perhaps they are running a precise simulation based on reality, or perhaps they are running a simulation based on a small change to reality, as an experiment. However, the motives of such a civilization are difficult to predict with any accuracy, so I suspect that the vast majority of possible hypotheses are things we haven't even thought of yet. (unknown unknowns.) So, although your specific hypothesis becomes more likely if we are in a simulation, so do all other possible hypotheses predicting large numbers of simulations.

Now on to the potential early filter stages:

(2) Oops. I should have specified huge amounts of liquid water in the inner solar system. Mars has icecaps, and some of Jupiter's moons are ice-balls, possibly with a liquid center. Earth has rather a lot of water, despite being well inside the frost line. When the planets were forming from an accretion disc, the material close to the sun would have caused any available water to evaporate, for the same reason there isn't much water on the moon (at least outside a couple craters on the poles, which are in continuous shadow). Far enough out, though, the sun's heat is diffuse enough that ice is stable; hence the icy moons of Jupiter. The best hypothesis we have is that some mechanism transported a large amount of water to Earth after it formed, perhaps via comets or asteroids. It just occurred to me that this might have been during the late heavy bombardment, or it might be just another coincidence. As you point out regarding our large moon, complex systems can be expected to have many, many 1-in-100 coincidences, simply because of statistics.

(3) Panspermia / Abiogenesis: it sounds like “Life Before Earth” isn't a mainstream consensus, based on a couple comments below. I do know, however, that mainstream biology does teach Panspermia alongside Abiogenesis, so neither of them appears to be a clear winner by merit of scientific evidence. I'm not even sure of how to practically estimate their respective complexities, in order to use Occam's Razor or Solomonoff Complexity to posit a reasonable prior. It would be nice to bound the problem enough to estimate the probabilities of both with sufficient accuracy to determine which is more likely. Until then though, I guess we'll have to leave it at 50/50%.

Also, there's a weird coincidence where the formation of the first life on earth seems to coincide well with the end of the late heavy bombardment,

That's only a weird coincidence if one assumes abiogenesis on earth. Panspermia explains that 'coincidence' perfectly.

The late heavy bombardment coinciding with the start of life is only explained by panspermia if (1) the rocks came from outside the solar system, which is unlikely given the huge amount of material, or (2) the rocks brought life from another source within our own solar system. This could also be explained if life required the large influx of matter/energy/climate disturbance/heating or whatever, or if life was continuously wiped out by the harsh environment until it finally started flourishing when it ended.

(8) Good point about extinction events being an evolutionary catalyst. Aside from possibly generating the primordial soup for Abiogenesis, snowball earths may have catalyzed early advancements, and mammals wouldn't have been able to supersede dinosaurs without a certain meteor.

(9) Perhaps “abstract thought” isn't the perfect term to use, since it is common enough to have become vague instead of precise. The stress should be on the word “abstract”, not on the word “thought”. Chimps and many other animals do have simple language, although no complex grammar structures. They can't abstract an arbitrary series of motions necessary to make or use a tool into language, and communicate it without showing it. Abstract language is most of what I'm referring to, but not all of it.

Language leads to technology which quickly leads to civilization and planetary dominance. It is a winner take all effect.

This is likely why Neanderthals went extinct, although we coexisted for quite a while. It still doesn't explain why there aren't octopus civilizations, since we hadn't changed that environment much until extremely recently. We haven't evolved noticeably in hundreds of thousands of years, yet we didn't colonize the planet until the last ~16,000 years. If our colonization is the only thing holding back other potential intelligent life, we'd expect to see elephants and parrots at least at the stone tool or fire level of technology. Why don't octopuses hunt with spears or lobster traps?

I skipped over a lot of your good points, largely because I see them as correct. I still don't buy the argument that life is common, though I'd be less confident in any such assertion in either direction if we were in a simulation, just because of the huge amount of uncertainty that adds to things.

Replies from: None, jacob_cannell
comment by [deleted] · 2015-04-19T15:25:17.007Z · LW(p) · GW(p)

The origin of life on earth being coincident with the end of the late heavy bombardment could entirely be an artifact of the fact that no rock from before that time survives to this day. It could well be older on Earth. The reworking of the crust was not complete at any given time, it took hundreds of megayears and at any given time most of the crust would be undisturbed.

Water in the inner system has the complication that not only do you need to get water, you need to hold onto it. Small objects will not hold onto light molecules, simply because of gravity. Mars is not holding onto water or other atmospheric gases well at all because of both its low gravity and solar radiation sputtering the upper atmosphere off into space. Venus has all the gravity it could need, but A - no geomagnetic field (at least at this point in its history) and B - got so hot that all the water went into the atmosphere, where it gets cracked by radiation in the upper atmosphere and the hydrogen leaves (allowing the oxygen to react with volcanic gases, giving you sulfuric acid and phosphoric acid and the like). The same process happens on Earth, but the pool of atmospheric water is SO MUCH SMALLER - most of the water stays liquid, and a temperature trap in our atmosphere makes water vapor condense out before it gets too high up - so the rate of loss is extremely small.

I would say the only time you can call humans 'dominant' is after the widespread adoption of agriculture, which was much more gradual than many people think - people were probably propagating seedless figs 20k years ago and much longer ago were altering the composition of plants and animals in various biomes just via their actions. Since agriculture got big we have become ecosystem engineers in the vein of bears, but rather larger in our effects. We have been creating new large-scale-symbiotic biomes where plants and animals flow matter and energy into each other and where we take care of dispersal rather than the plants themselves doing as much of it, for example. That's the unique aspect of humanity. Since then we have also started breaking into non-biological forms of energy - raw sunlight, water flow, the black rocks that are basically 500 megayears of stored sunlight - and have been using those for our purposes too in addition to the biological energy flow that all other biomes deal with.

It will be very interesting to see how human ecology continues to change after the extremely concentrated energy sources that represent most of the power we have used over the last 200 years go away. The end result might be very big but might not have the sheer flux of inefficient extractive growth - think weeds colonizing a freshly plowed field, versus an old-growth forest.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-04-19T15:36:11.780Z · LW(p) · GW(p)

Dear CellBioGuy, what is your intuition on what preceded prokaryotes?

Replies from: None, None
comment by [deleted] · 2015-04-19T16:31:18.380Z · LW(p) · GW(p)

That is a very interesting question and one which there's constant research going into.

A few initial points. First, it's becoming clearer and clearer that 'prokaryotes' is a very poor grouping to use for much of anything. The bacteria most of us think of are the smaller, faster-replicating members of the eubacteria. There's also the archaebacteria, which are deeply and fundamentally different from the eubacteria in their membrane composition, cell wall structure, DNA organization, and transcription machinery.

Second, it's becoming more and more clear that the eukaryotes are indeed the result of an early union of eubacteria and archaebacteria. I saw some very cool research at a conference last December bolstering the “eocyte hypothesis” - the idea that the Eukaryotic nuclear genome roots in one particular spot of the archaebacterial tree, plus loads of horizontal gene transfer from the eubacteria that became the mitochondria. You can't root it there just by aligning things - this was long enough ago that base sequence is effectively randomized - so you need to look at what sorts of proteins exist, characters that change very rarely as opposed to mere sequence, and it's a very hard question that has required a LOT of sequence data from a LOT of organisms. Most of our DNA structure and transcription and some of our protein processing looks like the archaebacteria, but basically all of our metabolism looks like the eubacteria. This is interesting in the light of recent discoveries of symbiotic pairings of archaebacteria and eubacteria in nature in which they exchange metabolic products.

Anyways, the eubacteria and archaebacteria have deeply different transcription machinery and make their membranes in fundamentally different ways. Central carbon metabolism is all but identical, though, as are a lot of other pathways and the core biochemistry. I've seen work proposing that the eubacteria and archaebacteria may have diverged before living things managed to synthesize their own membrane components rather than scavenging them from the environment. I've also seen interesting work to the effect that certain clay minerals can assemble fatty acids and other such membrane-building substances from acetate under the proper energetic conditions.

There's also a lot of diversity in DNA and RNA processing methods that isn't in any of the cellular life – there are truly bizarre ways of doing this that you only find in viruses. Viruses mutate incredibly rapidly, so you cannot try to root them anywhere; they change too fast. That being said, there are proposals that they may be primordial - elements of the very wide range of possible nucleic acid processing mechanisms that existed before the current forms of cellular life really were established and took off. The eubacterial and archaebacterial models may have taken off with remnants of the rest winding up parasitizing them.

Rampant horizontal transfer of genes, especially early when cell identity might not have been so strong, makes all this very complicated.

There's a school of thought in origin of life research that autocatalytic metabolism was important, and another that replicating polymers were important. The former posits that metal-ion driven cyclical reactions like the citric acid cycle can take off and take over, and wind up producing lots of interesting chemical byproducts that can then capture the cycle and become discrete self-replicating units. The latter points out that elongating polymers in membrane bubbles speed the growth and splitting of these bubbles. They're both probably important. It should be noted too that these ideas intersect – one of the popular metabolic ideas, polyphosphate, is actually represented in our nucleic acids. Polyphosphate is an interesting substance that can be built up by the right chemical reactions, and can drive other ones when it breaks down. Every ATP, GTP, etc. is a nice chemical handle on the end of a chain of three phosphates – a short polyphosphate. By breaking down those polyphosphates you build polymers.

Proteins obviously came very early and gave a huge advantage, and the genetic code is damn near universal, with all deviations from the standard one obviously coming in after the fact. Whatever could make proteins probably took over quickly. The initial frenzy, whatever it was, probably eventually led to a diverse population of compartments processing their nucleic acids in diverse ways and sending pieces of their codes back and forth, which eventually gained advantages by building their own membranes, and eventually cell walls, in different ways. Some of these populations probably took off like mad, making the eubacteria and archaebacteria, and others remained only as horizontally transferred elements like viruses or transposons or the like.

Written in a hurry, may be edited or clarified/extended later.

comment by [deleted] · 2015-05-06T23:30:19.274Z · LW(p) · GW(p)

Of interest!

Even more recent evidence for the eocyte hypothesis!

http://www.the-scientist.com/?articles.view/articleNo/42902/title/Prokaryotic-Microbes-with-Eukaryote-like-Genes-Found/

A clade of archaebacteria found via metagenomics at an undersea vent (uncultured). Contains huge numbers of eukaryote-characteristic genes that are important for formerly eukaryote-specific functions. The eukaryotes cluster within this clade rather than as a sister clade.

comment by jacob_cannell · 2015-04-19T17:20:48.260Z · LW(p) · GW(p)

P(life is common|life on earth)=P(life is common), because knowing that life did evolve on earth can't give us Bayesian evidence for or against the hypothesis that life is common.

That math is rather obviously wrong. You are so close here - just use Bayes.

We have 2 mutually exclusive models: life is common, and life is rare. To be more specific, let's say the 'life is common' theory posits that life is a 1 in 10 event, while the 'life is rare' theory posits that life is a 1 in a billion event.

Let's say that our priors are P(life is common) = 0.09, and P(life is rare) = 0.91

Now, our observation history over this solar system tells us that life evolved on earth - and probably only complex life on earth, although there may be simple life on mars or some of the watery moons.

As we are just comparing two models, we can compare likelihoods

P(life is common | life on earth) ∝ P(life on earth | life is common) P(life is common) = 0.1 * 0.09 = 0.009 ~ 10^-2

P(life is rare | life on earth) ∝ P(life on earth | life is rare) P(life is rare) = 10^-9 * 0.91 ~ 10^-9

To convert to actual probabilities we would need to divide by P(life on earth), but that doesn't really matter because it is a constant normalizing factor.
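Spelled out as a minimal sketch in Python (same toy numbers as above; nothing here is a measured quantity):

```python
# Toy two-model comparison using the numbers above.
priors = {"life is common": 0.09, "life is rare": 0.91}
likelihood = {"life is common": 0.1,    # P(life on earth | life is common)
              "life is rare":   1e-9}   # P(life on earth | life is rare)

# Unnormalized posteriors: P(model | data) is proportional to P(data | model) * P(model)
unnormalized = {m: likelihood[m] * priors[m] for m in priors}

# Dividing by P(life on earth) = sum of the unnormalized terms gives actual probabilities.
evidence = sum(unnormalized.values())
posteriors = {m: u / evidence for m, u in unnormalized.items()}

print(unnormalized)  # ~{'life is common': 9e-3, 'life is rare': 9.1e-10}
print(posteriors)    # 'life is common' ends up with essentially all of the posterior mass
```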

However, the motives of such a civilization are difficult to predict with any accuracy, so I suspect that the vast majority of possible hypotheses are things we haven't even thought of yet. (unknown unknowns.) So, although your specific hypothesis becomes more likely if we are in a simulation, so do all other possible hypotheses predicting large numbers of simulations.

I agree with your general analysis here, although it is important to remember that the full hypothesis space is always infinite. For tractable inference, we focus on a small subset of the most promising theories/models.

When considering the wide space of potential simulators, we must focus on key abstractions. For example, we can focus on models in which advanced civs have convergent instrumental reasons for creating large numbers of simulations. I am currently aware of a couple of wide classes of models that predict lots of sims. Besides aliens simulating other aliens, our descendants could have strong motivations to simulate us - as a form of resurrection for example, in addition to the common motivator for improving world models. There is also the possibility of creating new artificial universes, in which case there may be interesting strong motivators to create lots of universes and lots of simulations as a precursor step.

(3) Panspermia / Abiogenesis: it sounds like “Life Before Earth” isn't a mainstream consensus, based on a couple comments below.

No - that paper is not even really mainstream. I mentioned it as an example of the panspermia model and the resulting potentially expanded timeframe for the history of life. If life is really that old, then it becomes less likely that a single early elder civ colonized the galaxy early and dominated.

Replies from: MarsColony_in10years
comment by MarsColony_in10years · 2015-04-20T04:12:47.502Z · LW(p) · GW(p)

P(life is common|life on earth)=P(life is common), because knowing that life did evolve on earth can't give us Bayesian evidence for or against the hypothesis that life is common.

That math is rather obviously wrong. You are so close here - just use Bayes.

Perhaps I should have used an approximately equal to symbol instead of an equals sign, to avoid confusion. And thanks for the detailed writeup. I would agree 100% if you substituted "planet X" for "earth". Basically, I'm arguing that using ourselves as a data point is a form of the observational selection effect, just like survivorship bias.

As for the math, I'll pull an example from An Intuitive Explanation of Bayes' Theorem:

Similarly, let's suppose that we have a less discriminating test, mammography, that still has a 20% rate of false negatives, as in the original case. However, mammography has an 80% rate of false positives. In other words, a patient without breast cancer has an 80% chance of getting a false positive result on her mammography test. If we suppose the same 1% prior probability that a patient presenting herself for screening has breast cancer, what is the chance that a patient with positive mammography has cancer?

  • Group 1: 100 patients with breast cancer.

  • Group 2: 9,900 patients without breast cancer.

After mammography* screening:

  • Group A: 80 patients with breast cancer and a "positive" mammography*.

  • Group B: 20 patients with breast cancer and a "negative" mammography*.

  • Group C: 7920 patients without breast cancer and a "positive" mammography*.

  • Group D: 1980 patients without breast cancer and a "negative" mammography*.

The result works out to 80 / 8,000, or 0.01. This is exactly the same as the 1% prior probability that a patient has breast cancer! A "positive" result on mammography doesn't change the probability that a woman has breast cancer at all. You can similarly verify that a "negative" mammography also counts for nothing. And in fact it must be this way, because if mammography has an 80% hit rate for patients with breast cancer, and also an 80% rate of false positives for patients without breast cancer, then mammography is completely uncorrelated with breast cancer.
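The quoted numbers check out with a few lines of arithmetic - a minimal sketch using the essay's natural-frequency framing (the 10,000-patient population is just a convenient round number):

```python
# Natural-frequency check of the quoted mammography example.
population = 10_000
with_cancer = round(0.01 * population)       # 100
without_cancer = population - with_cancer    # 9,900

true_positives = round(0.80 * with_cancer)        # 80    (20% false-negative rate)
false_positives = round(0.80 * without_cancer)    # 7,920 (80% false-positive rate)

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(p_cancer_given_positive)  # 0.01 - identical to the 1% prior
```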

In that example, the reason the posterior probability equals the prior probability is that the "test" isn't causally linked with the cancer. You have to assume the same sort of thing for cases in which you are personally entangled. For example, if I watched my friend survive 100 rounds of solo Russian Roulette, then Bayes' theorem would lead me to believe that there was a high probability that the gun was empty or only had 1 bullet. However, if I myself survived 100 rounds, I couldn't afterward draw the same conclusion, because there would be no conceivable way for me to observe anything but 100 wins. I can't observe anything if I'm dead.

Does what I'm saying make sense? I'm not sure how else to put it. Are you arguing that Bayes' theorem can still output good conclusions even if you feed it skewed evidence? Or are you arguing that the evidence isn't actually the result of survivorship bias/observation selection effect?

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-20T04:52:55.808Z · LW(p) · GW(p)

For example, if I watched my friend survive 100 rounds of solo Russian Roulette, then Bayes' theorem would lead me to believe that there was a high probability that the gun was empty or only had 1 bullet. However, if I myself survived 100 rounds, I couldn't afterward draw the same conclusion, because there would be no conceivable way for me to observe anything but 100 wins. I can't observe anything if I'm dead.

Obviously you can't observe anything if you are dead, but that isn't interesting. What matter is comparing the various hypothesis that could explain the events.

The case where you yourself survive 100 rounds is somewhat special only in that you presumably remember whether you put bullets in or not and thus already know the answer.

Pretend, however, that you suddenly wake up with total amnesia. There is a gun next to you, and a TV then shows a video of you playing 100 rounds of roulette and surviving - but doesn't show anything before that (where the gun was either loaded or not).

What is the most likely explanation?

  1. the gun was empty in the beginning
  2. the gun had 1 bullet in the beginning

With high odds, option 1 is more likely. This survivorship bias/observation selection effect issue you keep bringing up is completely irrelevant when comparing two rival hypotheses that both explain the data!
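To make that concrete, here is a minimal sketch of the likelihood comparison, assuming a standard six-chamber revolver with the cylinder re-spun each round (an assumption not stated above):

```python
# Likelihood of surviving 100 rounds under each hypothesis
# (six chambers, cylinder re-spun every round).
p_survive_given_empty = 1.0 ** 100            # 1.0
p_survive_given_one_bullet = (5 / 6) ** 100   # ~1.2e-8

likelihood_ratio = p_survive_given_empty / p_survive_given_one_bullet
print(likelihood_ratio)  # ~8e7: 'empty gun' wins unless its prior is absurdly small
```

Conditioning on the fact that you are alive to ask the question doesn't change this ratio - which is the point about comparing rival hypotheses that both explain the data.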

Here is another, cleaner and simpler example:

Omega rolls a fair die which has N sides. Omega informs you the roll comes up as a '2'. Assume Omega is honest. Assume that dice can be either 10-sided or 100-sided, in about equal numbers.

What is the more likely value of N?

  1. 100

  2. 10

Here is my solution:

priors (unnormalized): P(N=100) = 1, P(N=10) = 1

P(N=100 | roll(N) = 2) ∝ P(roll(N)=2 | N=100) P(N=100) = 0.01

P(N=10 | roll(N) = 2) ∝ P(roll(N)=2 | N=10) P(N=10) = 0.1

So N=10 is 10 times more likely than N=100.
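A quick Monte Carlo check of the same answer (the trial count is arbitrary):

```python
import random

random.seed(0)
trials = 1_000_000
counts = {10: 0, 100: 0}

for _ in range(trials):
    n = random.choice([10, 100])       # equal prior weight on the two die types
    if random.randint(1, n) == 2:      # keep only worlds where the roll came up '2'
        counts[n] += 1

print(counts[10] / counts[100])  # ~10: the 10-sided die is about 10x more likely
```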

comment by see · 2015-04-18T21:11:39.343Z · LW(p) · GW(p)

Er, a few species of placental mammal are hardly "widely separated lineages". Trying to draw conclusions for completely alien biologies by looking at convergent evolution inside a clade with a single common ancestor in the last 2-or-3% of the history of life on Earth is absurd. And the fact that the Placentalia start with an unusually high EQ among vertebrates-as-a-whole makes it a particularly unsuitable lineage for estimating the possibilities of independent evolution of high animal intelligence.

Replies from: jpet, jacob_cannell
comment by jpet · 2015-04-19T01:24:34.801Z · LW(p) · GW(p)

Parrots and other birds seem to be about that intelligent, and octopi are close.

Perhaps that's an argument for the difficulty of the chimp to human jump: we have (nearly) ape-level intelligence evolving multiple times, so it can't be that hard, but most lineages plateaued there.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-19T05:14:22.874Z · LW(p) · GW(p)

The conditions for the chimp to human jump require a series of changes where each brain increase enables better language/tools that pays for the increased costs.

Parrots/birds don't seem to have a feasible path like that - light bodies designed for flight, lack of hands. Cetaceans can easily grow and support large brains, but fire doesn't work under water and most tool potentials are limited. Elephants seem to be the most likely runner up, if primates weren't around - perhaps in a few tens or hundreds of millions of years there could have been a pachyderm civilization.

So yeah - it might be somewhat rare, but it's hard to say, as it didn't take that long on earth.

comment by jacob_cannell · 2015-04-18T23:08:46.958Z · LW(p) · GW(p)

Er, a few species of placental mammal are hardly "widely separated lineages".

Sure they are - given that the placental clade contains most of the extant mammal diversity.

Trying to draw conclusions for completely alien biologies by looking at convergent evolution inside a clade with a single common ancestor in the last 2-or-3% of the history of life on Earth is absurd.

Hardly. Using the "last 2-or-3% of the history of life on Earth" is perhaps disingenuous, as evolution is highly nonlinear. The entire period from the Cambrian explosion to now is what - 15% of the history of life?

More importantly - elephants, cetaceans and primates occupy widely diverse environments and niches.

And the fact that the Placentalia start with an unusually high EQ among vertebrates-as-a-whole

EQ is a rather poor indicator of intelligence compared to total synapse count.

The common Placentalia ancestors are believed to be small rodent-like insectivores which had small brains - presumably on the order of 21 million neurons in the cortex, similar to rats. The fact that brains increased by 2 to 3 orders of magnitude in 3 divergent branches of Placentalia is evidence to me for robustness in selection for high intelligence.

Now of course, it's fairly easy for evolution to just make a brain bigger. The difficulty is in scaling up the brain in the right way to actually increase intelligence. Rodent brains scale better than lizard brains, and elephant, cetacean, and primate brains scale even better still. So evolution found increasingly better scaling strategies over time, and on some occasions in parallel.

Replies from: see
comment by see · 2015-04-19T05:38:11.862Z · LW(p) · GW(p)

Sure they are - given that the placental clade contains most of the extant mammal diversity.

The very issue is that "mammal diversity" is vastly insufficient to make any conclusions about general independent evolutionary trends. The number of potential explanations of the advantages of intelligence derived from features from the recent common evolutionary origin completely overwhelms any evidence for general factors.

For one example, if someone were to demonstrate that intelligence is usually useful for a species of animals where the adults, by a quirk of evolution, have to take active care of their young for an extended time — BOOM. A huge quantity of the "independence" is blown up in favor of a single ancestral cause, the existence of nursing of the young in mammals. And the same happens every other time you can show intelligence specifically helps given an ancestrally-derived feature or is promoted by an ancestrally-derived feature in the whole group. The placental mammals are far, far too alike in life cycle, biochemistry, et cetera for parallel evolution within the group to be good evidence of real evolutionary independence of a trait on a scale of completely separate planetary biome evolutions.

The entire period from the Cambrian explosion to now is what - 15% of the history of life?

That's not disingenuity, that's driving home the point. The octopus, separated by that whole stretch of 15%, is a far better case for evolutionary independence of intelligence than puttering around with various branches of the placental mammals — but still not nearly as good as if we had a non-animal example (or even better, a non-eukaryote). Unless and until we have good evidence of the probability of the evolution of animal-analogues, near-ape-level intelligence being (in general) weakly useful for animals (with Cephalopoda, Aves, and Mammalia being the only three classes we know have it or even strongly suspect from the fossil record have ever had it) is hardly strong evidence that near-ape-or-better intelligence is a highly probable feature of life-in-general.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-19T19:05:56.016Z · LW(p) · GW(p)

Earlier, I said:

The fact that brains increased by 2 to 3 orders of magnitude in 3 divergent branches of Placentalia is evidence to me for robustness in selection for high intelligence.

Which is in error - I should have said "evidence indicating high intelligence is a robust developmental attractor". As you point out, evolution rarely selects for high intelligence in individual lineages.

.. is hardly strong evidence that near-ape-or-better intelligence is a highly probable feature of life-in-general.

No, but this has become a digression. For the potential big picture galaxy models in consideration, we are more concerned with discerning between intelligence being highly improbable or only weakly improbable. The impact arising from the difference between weakly improbable and highly probable is relatively minuscule in comparison.

I am claiming only that the evidence for parallel development of intelligence on earth is sufficient to conclude that intelligence is in the vicinity of weakly improbable to probable, rather than highly improbable.

comment by James_Miller · 2015-04-18T15:46:56.045Z · LW(p) · GW(p)

Our universe might be fine-tuned for life because there are a huge number of universes each with different laws of physics and only under a tiny set of these laws can sentient life exist and we shouldn't be surprised to live in one of these fine-tuned universes. Our universe might also be fine-tuned for the Fermi paradox, especially if advanced civilizations often create paperclip maximizers.

Perhaps if you look at the subset of all possible laws of physics under which sentient life can exist, in a tiny subset of these you will get a Fermi paradox because, say, some quirk in the laws of physics makes interstellar travel very hard or creates a trap that destroys all civilizations before they become spacefaring. Civilizations such as ours will constantly arise in these universes.

In contrast, imagine that in universes fine-tuned for life but not the Fermi paradox civilizations often create some kind of paperclip maximizer that spreads at the maximum possible speed making the development of further life impossible. As a result, these universes tend to contain very few observers such as us. Consequently, even though a far higher percentage of universes might be fine-tuned for just life than for both life and the Fermi paradox, most civilizations might exist in the latter.

Replies from: jacob_cannell, dxu
comment by jacob_cannell · 2015-04-18T18:04:10.705Z · LW(p) · GW(p)

Our universe might be fine-tuned for life because there are a huge number of universes each with different laws of physics and only under a tiny set of these laws can sentient life exist and we shouldn't be surprised to live in one of these fine-tuned universes.

I was assuming Solomonoff Induction over the full space of computable universes, which is a more principled take on fine-tuning and selection effects. We should expect to find ourselves in the universe described by the simplest theory (TOE) which explains our observations.

Our universe might also be fine-tuned for the Fermi paradox, especially if advanced civilizations often create paperclip maximizers.

Paperclip maximizers are a specific absurdity with probability near zero, and I find that discussing them sucks insight out of the discussion.

Perhaps if you look at the subset of all possible laws of physics under which sentient life can exist,

This full set is infinite, complex, and irrelevant - for all we know much of this space could have life radically different than our own. It is more productive to focus on the subset of the multiverse with physics like ours - compatible with our observations. In other words - we are hardly a random observer sampled from the full set of 'sentient life'.

in a tiny subset of these you will get a Fermi paradox because, say, some quirk in the laws of physics makes interstellar travel very hard or creates a trap that destroys all civilizations before they become spacefaring.

Interstellar travel does look pretty hard, but not hard enough to prevent slow colonization. For this argument to apply to our section of the multiverse, it would need to involve new unknown physics. This is one possibility - but it seems low probability compared to other options as discussed in my post.

creates a trap that destroys all civilizations before they become spacefaring. Civilizations such as ours will constantly arise in these universes.

But then they are destroyed. I didn't describe this in my post, but from an observational selection effect, it is crucial to consider the effect of deep simulations. The universes that produce lots of deep simulations with sentient observers will tend to swamp out all other possibilities, such as those where civilizations arise but do not produce deep simulations.

The models I discussed in my post all tend to produce enormous amounts of computation applied to deep simulation - which creates enormous numbers of observers such as ourselves and vastly outweighs universes where civilizations are destroyed.

In contrast, imagine that in universes fine-tuned for life but not the Fermi paradox civilizations often create some kind of paperclip maximizer that spreads at the maximum possible speed making the development of further life impossible.

Right - we don't live in that kind of universe.

Replies from: James_Miller
comment by James_Miller · 2015-04-18T18:25:40.348Z · LW(p) · GW(p)

Paperclip maximizers are a specific absurdity with probability near zero, and I find that discussing them sucks insight out of the discussion.

Strongly disagree, and in general it's dangerous to dismiss an argument by asserting that it's stupid and that merely discussing it is bad.

This full set [of all possible laws of physics under which sentient life can exist] is infinite,

How can you be so sure of this?

Replies from: nbouscal
comment by nbouscal · 2015-04-18T18:33:00.723Z · LW(p) · GW(p)

This full set [of all possible laws of physics under which sentient life can exist] is infinite,

How can you be so sure of this?

Presumably there is some level of resolution at which changes to the fundamental constants no longer have appreciable effects. Maybe it's the thousandth decimal place; maybe it's the googolth decimal place; but it seems extremely unlikely to me that there isn't such a level. Given this assumption, the set of possible laws is clearly infinite.

Replies from: James_Miller
comment by James_Miller · 2015-04-18T18:37:42.376Z · LW(p) · GW(p)

Doesn't this all imply that the set of meaningfully different laws is finite? Also, what if there is a smallest possible level of resolution?

Replies from: Luke_A_Somers, nbouscal
comment by Luke_A_Somers · 2015-04-18T22:26:36.349Z · LW(p) · GW(p)

You are aiming at "meaningfully distinct"; nbouscal is aiming at "functionally equivalent".

comment by nbouscal · 2015-04-18T20:19:40.279Z · LW(p) · GW(p)

It doesn't, because the reals are infinite in two ways: any given interval is infinite, but the number of intervals is also infinite.

Also this only applies to changing the constants but keeping the general structure the same; you can also create further laws by changing the structure of the laws itself. There are a lot of degrees of freedom.

comment by dxu · 2015-04-18T15:58:07.767Z · LW(p) · GW(p)

Our universe might be fine-tuned for life because there are a huge number of universes each with different laws of physics and only under a tiny set of these laws can sentient life exist and we shouldn't be surprised to live in one of these fine-tuned universes.

So what you're saying here is that we can conclude we live in one of these fine-tuned universes simply by updating on the fact of our existence, correct?

Replies from: James_Miller
comment by James_Miller · 2015-04-18T16:30:56.560Z · LW(p) · GW(p)

Our existence + the Fermi paradox, and it's a high probability rather than a certainty.

comment by gwern · 2016-03-15T20:19:47.107Z · LW(p) · GW(p)

Assuming large scale quantum computing is possible, then the ultimate computer is thus a reversible massively entangled quantum device operating at absolute zero. Unfortunately, such a device would be delicate to a degree that is hard to imagine - even a single misplaced high energy particle could cause enormous damage. In this model, advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible. The ideal environment for such a device is as far away from hot stars as one can possibly go, and the farther the better. The extreme energy efficiency of advanced low temperature reversible/quantum computing implies that energy is not a constraint. These advanced civilizations could probably power themselves using fusion reactors for millions, if not billions, of years.

I don't understand why this predicts no Dyson spheres, no visible mega-engineering, etc, and convergent self-limiting to a handful of solar systems and cold brains per civilization.

Computing near the Sun costs more because it's hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem. You say that the reversible brains don't need that much energy. OK, but more computing power is always better, the cold brains want as much as possible, so what limits them? If it's energy, then they will want to pipe in as much energy as possible from their local star. If it's putting matter into the right configuration for cold brains and shielding, then they will... want to pipe in as much matter lifted by energy as possible from their local star so they can build even more cold brains. Space is vast, so it's not like they're going to run out of cold places to put cold brains, and even if they do, well, a Dyson sphere around a star will fix that, so they'll keep expanding with the matter & energy. Interconnects and IO use up a lot of energy? Well, we already know how to solve that. Whatever the binding limit to their computational power is, it seems to be solved by either more matter, more energy, or both, and the largest available source of both is stars, far from being 'trash heaps'.

And since they are already expanding, their massive redundancy and deep space stealth/mobility means relativistic strikes are irrelevant, and so the usual first-mover expansionary convergent argument applies. So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with. This doesn't sound remotely like a Fermi paradox resolution.

Replies from: jacob_cannell
comment by jacob_cannell · 2016-03-16T17:54:37.696Z · LW(p) · GW(p)

Computing near the Sun costs more because it's hotter, sure. Fortunately, I understand that the Sun produces hundreds, even thousands of times more energy than a little fusion reactor does, so some inefficiencies are not a problem.

Every practical computational tech substrate has some error bounded compute/temperature curve, where computational capability quickly falls to zero past some upper bound temperature. Even for our current tech, computational capacity essentially falls off a cliff somewhere well below 1,000K.

My general point is that the really advanced computing tech shifts all those curves over - towards lower temperatures. This is a hard limit of physics; it cannot be overcome. So for a really advanced reversible quantum computer that employs superconductivity and long-coherence quantum entanglement, 1K is just as impossible as 1,000K. It's not entirely a matter of efficiency.

Another way of looking at it - advanced tech just requires lower temperatures - as temperature is just a measure of entropy (undesired/unmodeled state transitions). Temperature is literally an inverse measure of computational potential. The ultimate computer necessarily must have a temperature of zero.

You say that the reversible brains don't need that much energy.

At the limits they need zero. Approaching anything close to those limits they have no need of stars. Not only that, but they couldn't survive any energy influx much larger than some limit, and that limit necessarily must go to zero as their computational capacity approaches theoretical limits.

If it's energy, then they will want to pipe in as much energy as possible from their local star.

No. There is an exact correct amount of energy to pipe in, based on the viable operating temperature of their current tech civ. And this amount goes to zero as you advance up the tech.

It may help to consider applying your statement to our current planet civ. What if we could pipe in 10000x more energy than we currently receive from the sun. Wouldn't that be great? No. It would cook the earth.

The same principle applies, but as you advance up the ultra-tech ladder, the temp ranges get lower and lower (because remember, temp is literally an inverse measure of maximum computational capability).

OK, but more computing power is always better, the cold brains want as much as possible, so what limits them?

Given some lump of matter, there is of course a maximum information storage capacity and a max compute rate - in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which is just bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.

So you should get a universe of Dyson spheres feeding out mass-energy to the surrounding cold brains who are constantly colonizing fresh systems for more mass-energy to compute in the voids with

Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass).

Cold brains need some mass, the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.

So the most valuable mass that gets colonized first would be the rogue planets/nomads - which apparently are more common than attached planets.

If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.

The big unknown variable is again what the end of tech in the universe looks like, which gets back to that new universe creation question. If that kind of ultimate/magic tech is possible, civs will invest everything into that, and you have less colonization, depending on the difficulty/engineering tradeoffs.

Replies from: gwern
comment by gwern · 2016-03-17T00:14:16.809Z · LW(p) · GW(p)

Given some lump of matter, there is of course a maximum information storage capacity and a max compute rate - in a reversible computer the compute rate is bounded by the maximum energy density the system can structurally support, which is just bounded by its mass. In terms of ultimate limits, it really depends on whether exotic options like creating new universes are practical or not. If creating new universes is feasible, there probably are no hard limits; all limits become soft.

This still doesn't answer my question. I understand your points about why colder is better, my question is: why don't they expand constantly with ever more cold brains, which are collectively capable of ever more computation? My smartphone processor is more energy-efficient than my laptop, but that doesn't mean datacenters don't exist or are useless or aren't popping up like mushrooms.

At the limits they need zero.

Correct me if I'm wrong, but zero energy consumption assumes both coldness and slowness, doesn't it? Slowness is a problem for a superintelligence. What good is super-efficiency if it takes millennia to calculate answers which some more energy would have solved quicker? Time is not free.

It may help to consider applying your statement to our current planet civ. What if we could pipe in 10000x more energy than we currently receive from the sun. Wouldn't that be great? No. It would cook the earth.

That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with it which would dissipate that energy productively. Turn it into a Matrioshka brain or something from one of Anders' papers on optimal large-scale computing artifacts.

Dyson spheres are extremely unlikely to be economically viable/useful, given the low value of energy past a certain tech level (vastly lower energy need per unit mass). Cold brains need some mass, the question then is how the colonization value of mass varies across space. Mass that is too close to a star would need to be moved away from the star, which is very expensive.

Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....

If colonization continues long enough, it will spread to lower and lower valued real estate. So eventually smaller rocky bodies in the outer system get stripped away, slowly progressing inward.

Which ends in everything being used up, which even if all that planet engineering and moving doesn't require Dyson spheres, is still inconsistent with our many observations of exoplanets and leaves the Fermi paradox unresolved.

Replies from: jacob_cannell
comment by jacob_cannell · 2016-03-17T05:02:20.752Z · LW(p) · GW(p)

I understand your points about why colder is better, my question is: why don't they expand constantly with ever more cold brains, which are collectively capable of ever more computation?

At any point in development, investing resources in physical expansion has a payoff/cost/risk profile, as does investing resources in tech advancement. Spatial expansion offers polynomial growth, which is pretty puny compared to the exponential growth from tech advancement. Furthermore, the distances between stars are pretty vast.

If you plot our current trajectory forward, we get to a computational singularity long long before any serious colonization effort. Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law. So everything depends on what the endpoint of the tech singularity is. Does it actually end with some hard limit to tech? - If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome. If the tech singularity leads to stronger outcomes a la new universe manipulations, then you never need to colonize; it's best to just invest everything locally. And of course there is the spectrum in between, where you get some colonization, but the timescale is slowed.

Correct me if I'm wrong, but zero energy consumption assumes both coldness and slowness, doesn't it?

No, not for reversible computing. The energy required to represent/compute a 1 bit state transition depends on reliability, temperature, and speed, but that energy is not consumed unless there is an erasure (and as energy is always conserved, erasure really just means you lost track of a bit).

In fact the reversible superconducting designs are some of the fastest feasible in the near term.
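For a sense of scale, the Landauer bound on irreversibly erasing one bit scales linearly with temperature, which is one way to see why colder substrates matter; a minimal sketch (the temperatures are arbitrary reference points):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_erasure_cost(temperature_kelvin: float) -> float:
    """Minimum energy in joules dissipated per *irreversible* bit erasure.
    Reversible operations are not subject to this floor."""
    return K_B * temperature_kelvin * math.log(2)

for temp in (300.0, 77.0, 4.0, 0.1):
    print(f"{temp:>6} K: {landauer_erasure_cost(temp):.2e} J per erased bit")
# The floor shrinks linearly with temperature - one reason cold, mostly-reversible
# substrates get more computation out of a given energy budget.
```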

That would be great. If we had 10,000x more energy (and advanced technology etc), we could disassemble the Earth, move the parts around, and come up with useful structures to compute with it which would dissipate that energy productively.

Biological computing (cells) doesn't work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

Yes, it is expensive. Good thing we have a star right there to move all that mass with. Maybe its energy could be harnessed with some sort of enclosure....

I'm not all that confident that moving mass out system is actually better than just leaving it in place and doing best effort cooling in situ. The point is that energy is not the constraint for advancing computing tech, it's more mass limited than anything, or perhaps knowledge is the most important limit. You'd never want to waste all that mass on a dyson sphere. All of the big designs are dumb - you want it to be as small, compact, and cold as possible. More like a black hole.

Which ends in everything being used up, which even if all that planet engineering and moving doesn't require Dyson spheres, is still inconsistent with our many observations of exoplanets and

It's extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not 'use up' more than a tiny fraction of the matter of earth, and so on.

leaves the Fermi paradox unresolved.

From the evidence for mediocrity, the lower KC complexity of mediocrity, and the huge number of planets in the galaxy, I start with a prior strongly favoring reasonably high number of civs/galaxy, and low odds on us being first.

We have high uncertainty on the end/late outcome of a post-singularity tech civ (or at least I do, I get the impression that people here inexplicably have extremely high confidence in the stellavore expansionist model, perhaps because of lack of familiarity with the alternatives? not sure).

If post-singularity tech allows new universe creation and other exotic options, you never have much colonization - at least not in this galaxy, from our perspective. If it does not, and there is an eventual end of tech progression, then colonization is expected.

But as I argued above, even colonization could be hard to detect - as advanced civs will be small/cold/dark.

Transcension is strongly favored a priori for anthropic reasons - transcendent universes create far more observers like us. Then, updating on what we can see of the galaxy, colonization loses steam: our temporal rank is normal, whereas most colonization models predict we should be early.

For transcension, naturally it's hard to predict what that means... but one possibility is a local 'exit', at least from the perspective of outside observers. Creation of lots of new universes, followed by physical civ-death in this universe, but effective immortality in new universes (a la game-theoretic horse trading across the multiverse). New universe creation could also potentially alter physics in ways that permit further tech progression. Either way, all of the mass is locally invested/used up for 'magic' that is incomprehensibly more valuable than colonization.

Replies from: gwern
comment by gwern · 2016-03-17T16:54:20.050Z · LW(p) · GW(p)

If you plot our current trajectory forward, we get to a computational singularity long long before any serious colonization effort. Space colonization is kind of comical in its economic payoff compared to chasing Moore's Law. So everything depends on what the endpoint of the tech singularity is. Does it actually end with some hard limit to tech? - If it does, and slow polynomial growth is the only option after that, then you get galactic colonization as the likely outcome.

So your entire argument boils down to another person who thinks transcension is universally convergent and this is the solution to the Fermi paradox? I don't see what your reversible computing detour adds to the discussion, if you can't show that making only a few cold brains sans any sort of cosmic engineering is universally convergent.

Biological computing (cells) doesn't work at those temperatures, and all the exotic tech far past bio computers requires even lower temperatures. The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

I never said anything about using biology or leaving the Earth intact. I said quite the opposite.

It's extremely unlikely that all the matter gets used up in any realistic development model, even with colonization. Life did not 'use up' more than a tiny fraction of the matter of earth, and so on.

You need to show your work here. Why is it unlikely? Why don't they disassemble solar systems to build ever more cold brains? I keep asking this, and you keep avoiding it. Why is it better to have fewer cold brains rather than more? Why is it better to have less computational power than more? Why do all this intricate engineering for super-efficient reversible computers in the depths of the void, and only make a few and not use up all the local matter? Why are all the answers to these questions so iron-clad and so universally compelling that none of the trillions of civilizations you get from mediocrity will do anything different?

Replies from: jacob_cannell
comment by jacob_cannell · 2016-03-17T17:44:54.084Z · LW(p) · GW(p)

So your entire argument boils down to another person who thinks transcension is universally convergent and this is the solution to the Fermi paradox?

No... As I said above, even if transcension is possible, that doesn't preclude some expansion. You'd only get zero expansion if transcension is really easy/fast. On the convergence issue, we should expect that the main development outcomes are completely convergent. Transcension is instrumentally convergent - it helps any realistic goals.

I don't see what your reversible computing detour adds to the discussion, if you can't show that making only a few cold brains sans any sort of cosmic engineering is universally convergent.

The reversible computing stuff is important for modeling the structure of advanced civs. Even in transcension models, you need enormous computation - and everything you could do with new universe creation is entirely compute limited. Understanding the limits of computing is important for predicting what end-tech computation looks like for both transcend and expand models. (For example, if end-tech optimal computing were energy limited, this would predict Dyson spheres to harvest solar energy.)

The temperatures implied by 10,000x energy density on earth preclude all life or any interesting computation.

I never said anything about using biology or leaving the Earth intact. I said quite the opposite.

Advanced computation doesn't happen at those temperatures, for the same basic reason that advanced communication doesn't work at extremely low SNR. I was trying to illustrate the connection between energy flow and temperature.

You need to show your work here. Why is it unlikely? Why don't they disassemble solar systems to build ever more cold brains? I keep asking this, and you keep avoiding it.

First let us consider the optimal compute configuration of a solar system without any large-scale re-positioning, and then we'll remove that constraint.

For any solid body (planet, moon, asteroid, etc), there is some optimal compute design given its structural composition, internal temp, and incoming irradiance from the sun. Advanced compute tech doesn't require any significant energy - so being closer to the sun is not an advantage at all. You need to expend more energy on cooling (for example, it takes about 15 kilowatts to cool a single current chip from earth temp to low temps, although there have been some recent breakthroughs in passive metamaterial shielding that could change that picture). So you just use/waste that extra energy cooling the best you can.
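The cooling overhead can be bounded with the ideal (Carnot) coefficient of performance; here is a minimal sketch, where the 100 W heat load and the 4 K / 300 K temperatures are purely illustrative assumptions (real cryocoolers fall well short of this bound):

```python
def min_cooling_power(heat_watts: float, t_cold: float, t_hot: float) -> float:
    """Ideal (Carnot-limited) work rate needed to pump `heat_watts` of heat
    from a cold reservoir at t_cold out to an environment at t_hot."""
    cop = t_cold / (t_hot - t_cold)   # Carnot coefficient of performance
    return heat_watts / cop

# Illustrative: rejecting 100 W of chip heat from 4 K to a 300 K environment
# takes at least ~7.4 kW of work, before any real-world inefficiency.
print(min_cooling_power(100.0, t_cold=4.0, t_hot=300.0))
```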

So, now consider moving the matter around. What would be the point of building a dyson sphere? You don't need more energy. You need more metal mass, lower temperatures and smaller size. A dyson sphere doesn't help with any of that.

Basically, we can rule out config changes for the metal/rocky mass (useful for compute) that: 1) increase temperature, or 2) increase size.

The gradient of improvement is all in the opposite direction: decreasing temperature and size (with tradeoffs of course).

So it may be worthwhile investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worthwhile moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

One of the big unknowns of course being the timescale, which depends on the transcend issue.

Now for the star itself: it has most of the mass, but that mass is not really accessible, and most of it is in low-value elements - we want more metals. It could be that the best use of that matter is simply to continue cooking it in the stellar furnace to produce more metals - as there is no other way, as far as I know.

But doing anything with the star would probably take a very long time, so it's only relevant in non-transcendent models.

In terms of predicted observations, in most of these models there are few if any large structures, but individual planetary bodies will probably be altered from their natural distributions. Some possible observables: lower than expected temperatures, unusual chemical distributions, and possibly higher than expected quantities/volumes of ejected bodies.

Some caveats: I don't really have much of an idea of the energy costs of new universe creation, which is important for the transcend case. That probably is not a reversible op, and so it may be a motivation for harvesting solar energy.

There's also KIC 8462852, of course. If we assume that it is a dyson-swarm-like object, we can estimate a rough model for civs in the galaxy. KIC 8462852 has been dimming for at least a century. It could represent the endphase of a tech civ approaching its final transcend state. Say that takes around 1,000 years (vaguely estimating from the 100 years of data we have).

This dimming star is one out of perhaps 10 million nearby stars we have observed in this way. Say 1 in 10 systems will ever develop life, and the timescale spread is about a billion years; then we should expect to observe about 1 in 10 million endphase dimming stars, given that the phase lasts only 1,000 years. This would of course predict a large number of endstate stars, but given that we just barely detected KIC 8462852 because it was dimming, we probably can't yet detect stars that already dimmed and then stabilized long ago.
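Spelling out that arithmetic explicitly (every input is an assumed round number):

```python
P_LIFE = 0.1             # assumed fraction of systems that ever develop life
ENDPHASE_YEARS = 1e3     # assumed duration of the visible dimming endphase
SPREAD_YEARS = 1e9       # assumed spread in when civilizations arise

# Expected fraction of surveyed stars caught mid-endphase right now.
fraction_dimming_now = P_LIFE * ENDPHASE_YEARS / SPREAD_YEARS
print(fraction_dimming_now)  # 1e-07, i.e. about 1 star in 10 million
```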

Replies from: None, gwern
comment by [deleted] · 2016-03-19T22:04:42.137Z · LW(p) · GW(p)

Advanced computation doesn't happen at those temperatures

Could it make sense to use an enormous amount of energy to achieve an enormous amount of cooling? Possibly using laser cooling or some similar technique?

comment by gwern · 2016-03-18T16:25:35.838Z · LW(p) · GW(p)

Advanced computation doesn't happen at those temperatures, for the same basic reason that advanced communication doesn't work at very low signal-to-noise ratios. I was trying to illustrate the connection between energy flow and temperature.

And I was trying to illustrate that there's more to life than considering one cold brain in isolation in the void without asking any questions about what else all that free energy could be used for.

So, now consider moving the matter around. What would be the point of building a dyson sphere? You don't need more energy. You need more metal mass, lower temperatures and smaller size. A dyson sphere doesn't help with any of that.

A Dyson sphere helps with moving matter around, potentially with elemental conversion, and with cooling. If nothing else, if the ambient energy of the star is a big problem, you can use it to redirect the energy elsewhere away from your cold brains.

But doing anything with the star would probably take a very long amount of time, so it's only relevant in non-transcendent models.

Exponential growth. I think Sandberg's calculated you can build a Dyson sphere in a century, apropos of KIC 8462852's oddly gradual dimming. And you hardly need to finish it before you get any benefits.

So it may be worthwhile investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worthwhile moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

You say 'may', but that seems really likely. After all, what 'complex set of unknowns' will be so fine-tuned that the answer will, for all civilizations, be 0 rather than some astronomically large number? This is the heart of your argument! You need to show this, not handwave it! You cannot show that this resolves the Fermi paradox unless you make a solid case that cold brains will find harnessing solar systems' energy and matter totally useless! As it stands, this article reads like '1. reversible computing is awesome 2. ??? 3. no expansion, hence, transcension 4. Fermi paradox solved!' No, it's not. Stop handwaving and show that more cold brains are not better, that there are zero uses for all the stellar energy and mass, and there won't be any meaningful colonization or stellar engineering.

There's also KIC 8462852, of course. If we assume that it is a dyson-swarm-like object, we can estimate a rough model for civs in the galaxy. KIC 8462852 has been dimming for at least a century. It could represent the endphase of a tech civ approaching its final transcend state. Say that takes around 1,000 years (vaguely estimating from the 100 years of data we have).

Which is a highly dubious case, of course.

we probably can't yet detect stars that already dimmed and then stabilized long ago.

I don't see why the usual infrared argument doesn't apply to them or KIC 8462852.

Replies from: jacob_cannell, jacob_cannell
comment by jacob_cannell · 2016-03-19T04:26:27.434Z · LW(p) · GW(p)

I don't see why the usual infrared argument doesn't apply to them or KIC 8462852.

If by the infrared argument you mean the idea that a dyson swarm should radiate in the infrared, this is probably wrong. It relies on the assumption that the alien civ operates at an Earth-like temperature of 300K or so. As you reduce that temperature down to 3K, the excess radiation diminishes to something indistinguishable from the CMB, so we can't detect large cold structures that way. For the reasons discussed earlier, a non-zero operating temperature would only be useful during initial construction phases, whereas near-zero temperature is preferred in the long term. The fact that KIC 8462852 has no infrared excess makes it more interesting, not less.
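To put numbers on that (standard blackbody scaling; nothing about any particular swarm design is assumed):

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_um(temp_kelvin):
    """Wavelength of peak blackbody emission (Wien's displacement law), in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

print(peak_wavelength_um(300.0))   # ~9.7 um: the mid-infrared excess surveys look for
print(peak_wavelength_um(3.0))     # ~966 um: microwave, buried in the 2.7K CMB
```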

comment by jacob_cannell · 2016-03-18T20:11:19.129Z · LW(p) · GW(p)

A Dyson sphere helps with moving matter around, potentially with elemental conversion, and with cooling.

Moving matter - sure. But that would be a temporary use case, after which you'd no longer need that config, and you'd want to rearrange it back into a bunch of spherical dense computing planetoids.

potentially with elemental conversion

This is dubious. In theory you could reflect/recapture stellar energy to raise the star's temperature and cook up metals faster, but that seems like a huge waste of mass for a small increase in cooking rate. You'd be giving up all of your higher intelligence by not using that mass for small, compact, cold compute centers.

If nothing else, if the ambient energy of the star is a big problem, you can use it to redirect the energy elsewhere away from your cold brains.

Yes, but that's just equivalent to shielding. It only requires redirecting the tiny fraction of the star's output that actually hits the planetary surfaces. It doesn't require any large structures.

Exponential growth.

Exponential growth = transcend. Exponential growth will end unless you can overcome the speed of light, which requires exotic options like new universe creation or altering physics.
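A toy comparison illustrates why (the doubling time and stellar density below are assumptions chosen only for illustration): demand that doubles on a fixed timescale must eventually outrun a light-speed-limited supply, which grows only cubically.

```python
import math

DOUBLING_TIME_YR = 10.0      # assumed doubling time for resource demand
STARS_PER_CUBIC_LY = 0.004   # rough local stellar density (assumption)

def reachable_stars(t_years):
    """Stars inside a sphere expanded at light speed for t years: grows as t^3."""
    return (4.0 / 3.0) * math.pi * STARS_PER_CUBIC_LY * t_years ** 3

def demanded_star_equivalents(t_years):
    """Demand if consumption doubles every DOUBLING_TIME_YR, starting from one star."""
    return 2.0 ** (t_years / DOUBLING_TIME_YR)

for t in (100, 200, 300, 500):
    print(t, f"{reachable_stars(t):.3g}", f"{demanded_star_equivalents(t):.3g}")
```

With these assumptions demand overtakes the reachable supply within a couple of centuries; changing the inputs moves the crossover but not the conclusion.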

I think Sandberg's calculated you can build a Dyson sphere in a century, apropos of KIC 8462852's oddly gradual dimming. And you hardly need to finish it before you get any benefits.

Got a link? I found this FAQ, where he says:

Using self-replicating machinery the asteroid belt and minor moons could be converted into habitats in a few years, while disassembly of larger planets would take 10-1000 times longer (depending on how much energy and violence was used).

That's a lognormal dist over several decades to several millennia. A dimming time for KIC 8462852 in the range of centuries to a millennium is a near-perfect (lognormal) dist overlap.
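One rough way to check that overlap (the percentile anchors below are my own reading of Sandberg's "few years" to "10-1000 times longer" range, so treat them as assumptions):

```python
import math

def lognormal_cdf(x, mu, sigma):
    """CDF of a lognormal with parameters mu, sigma of the underlying normal."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

# Fit a lognormal whose 5th/95th percentiles are ~30 and ~3000 years (assumed anchors).
p05, p95 = 30.0, 3000.0
z95 = 1.645  # standard normal 95th percentile
mu = (math.log(p05) + math.log(p95)) / 2.0
sigma = (math.log(p95) - math.log(p05)) / (2.0 * z95)

# Probability mass falling in the century-to-millennium dimming window.
mass = lognormal_cdf(1000.0, mu, sigma) - lognormal_cdf(100.0, mu, sigma)
print(round(mass, 2))  # ~0.59 with these anchors
```

Under those anchors, the majority of the probability mass falls in the century-to-millennium window.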

So it may be worthwhile investing some energy in collecting small useful stuff (asteroids) into larger, denser computational bodies. It may even be worthwhile moving stuff farther from the star, but the specifics really depend on a complex set of unknowns.

You say 'may', but that seems really likely.

The recent advances in metamaterial shielding suggest that low temperatures could be reached even on Earth without expensive cooling, so the case I made for moving stuff away from the star for cooling is diminished.

Collecting/rearranging asteroids and concentrating rare elements of course still remain viable use cases, but they do not require as much energy, and those energy demands are transient.

After all, what 'complex set of unknowns' will be so fine-tuned that the answer will, for all civilizations, be 0 rather than some astronomically large number?

Physics. It's the same for all civilizations, and their tech paths are all the same. Our uncertainty over those tech paths does not translate into a diversity in actual tech paths.

You cannot show that this resolves the Fermi paradox unless you make a solid case that cold brains will find harnessing solar systems' energy and matter totally useless!

There is no 'paradox'. Just a large high-D space of possibilities, and observation updates that constrain that space.

I never ever claimed that cold brains will "find harnessing solar systems' energy and matter totally useless", but I think you know that. The key question is what are their best uses for the energy/mass of a system, and what configs maximize those use cases.

I showed that reversible computing implies extremely low energy/mass ratios for optimal compute configs. This suggests that advanced civs in the timeframe 100 to 1,000 years ahead of us will be mass-limited (specifically, rare-metal limited) rather than energy-limited, and would sooner convert excess energy into mass than the converse.

Which gets me back to a major point: endgames. For reasons I outlined earlier, I think the transcend scenarios more likely. They have a higher initial prior, and are far more compatible with our current observations.

In the transcend scenarios, exponential growth just continues up until some point in the near future where exotic space-time manipulations - creating new universes or whatever - are the only remaining options for continued exponential growth. This leads to an exit for the civ, where from the outside perspective it either physically dies, disappears, or transitions to some final inert config. Some of those outcomes would be observable, some not. Mapping out all of those outcomes in detail and updating on our observations would be exhausting - a fun exercise for another day.

The key variable here is the timeframe from our level to the final end-state. That timeframe determines the entire utility/futility tradeoff for exploitation of matter in the system, based on ROI curves.

For example, why didn't we start converting all of the useful matter of Earth into Babbage-style mechanical computers in the 19th century? Why didn't we start converting all of the matter into vacuum-tube computers in the 1950s? And so on.

In an exponentially growing civ like ours, you always have limited resources, and investing those resources in replicating your current designs (building more citizens/compute/machines, whatever) always involves complex opportunity-cost tradeoffs. You are also expending resources advancing your tech - the designs themselves - and so you never expend all of your resources on replicating current designs, partly because they are constantly being replaced, and partly because of the opportunity costs between advancing tech/knowledge vs. expanding physical infrastructure.

So civs tend to expand physically at some rate over time. The key question is: for how long? If transcension typically follows 1,000 years after our current tech level, then you don't get much interstellar colonization bar a few probes, but you possibly get temporary dyson swarms. If it only takes 100 years, then civs are unlikely to even leave their home planet.

You only get colonization outcomes if transcension takes long enough, leading to colonization of nearby matter, all of which then transcends roughly within a timeframe set by its distance from the origin. Most of the nearby useful matter appears to be rogue planets, so colonization of stellar systems would take even longer, depending on how far down they are in the value chain.

And even in the non-transcend models (say the time to transcend is greater than millions of years), you can still get scenarios where the visible stars are not colonized much - if their value is really low compared to abundant higher-value cold, dark matter (rogue planets, etc.), colonization is slow/expensive, and the timescale spread across civ ages is low.

comment by Lalartu · 2015-04-20T09:45:21.080Z · LW(p) · GW(p)

One of the most likely candidates for a filter (and a variant of our future) is not mentioned here: technological progress will simply end much sooner than usually expected, without any catastrophic events. There is not a filter, but a solid wall on the way from current technology to dyson spheres and starships.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-20T16:48:16.254Z · LW(p) · GW(p)

I agree this is a possibility - a special subtype of the collapse. It seems unlikely to be a convergent enough, high-probability outcome that it could explain the Fermi paradox.

Replies from: Lalartu
comment by Lalartu · 2015-04-21T08:13:59.052Z · LW(p) · GW(p)

I don't mean collapse, but rather the possibility that the technologies necessary for interstellar flight and megascale engineering are either impossible in themselves or impossible for any civilization to obtain.

comment by Jan_Rzymkowski · 2015-04-18T19:07:44.296Z · LW(p) · GW(p)

Does anybody know if dark matter can be explained as artificial systems built from known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter - most of the universe's mass was already used up by alien civs.

Replies from: None, James_Miller, RomeoStevens
comment by [deleted] · 2015-04-19T03:01:44.979Z · LW(p) · GW(p)

You can't get rid of the waste heat without it being visible. You can't even sequester it - you always need to dump it to a location of lower temperature.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-19T05:03:28.300Z · LW(p) · GW(p)

Apparently the Planck spacecraft cooled itself (or just some instruments?) down to 0.1K for some period of time.

Presumably one could transfer the heat into a fluid and expel that as reaction mass.

From what I understand, using high-albedo/reflective materials, an artificially cooled object could presumably then be maintained at a temperature much lower than the 2.7K background for quite some time.

Replies from: None
comment by [deleted] · 2015-04-19T15:15:36.634Z · LW(p) · GW(p)

The Planck spacecraft had a series of radiative and conductive thermal shields between the spacecraft bus, which contained all the power and control systems, and the instruments, which were the part being cooled. The bus kept the instruments in its shadow as well.

The heat of operation of the instruments had to go SOMEWHERE. A series of active cooling systems acted as heat pumps (generating heat of their own in the process), pulling heat from a constant flow of coolant (pre-cooled to ~4K on Earth and stored in insulated bottles) and dumping it overboard via radiators to bring the instruments down to 0.1K. This ran until the helium coolant ran out; the helium was continuously dumped overboard after absorbing heat from the operating instruments, to avoid having warm pipes near them.

http://sci.esa.int/planck/45498-cooling-system/?fbodylongid=2124

You can't just get rid of heat. To locally cool something, you have to heat up something else by more than the amount you cool the cold thing, so that on net you are actually heating the universe.

http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/heatpump.html

Limiting heat flow in and out of a cold object is quite possible. But if it's DOING anything, it will generate heat.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-19T18:48:45.977Z · LW(p) · GW(p)

You can't just get rid of heat. To locally cool something, you have to heat up something else by more than the amount you cool the cold thing, so that on net you are actually heating the universe.

Of course - which is why I mentioned expelling a coolant/reaction mass. Today's computers use a number of elements from the periodic table, but the distribution is very different from the distribution of matter in our solar system. It would be very unusual indeed if the element distribution of optimal computronium exactly matched that of a typical solar system.

So when constructing an advanced low-temp arcilect, you could transfer heat to whatever mass is the least useful for computation and then expel it.

Limiting heat flow in and out of a cold object is quite possible. But if it's DOING anything, it will generate heat.

In theory, with advanced reversible computing there doesn't seem to be any hard limit on energy efficiency. A big arcilect built on reversible computing could generate extremely little heat even when computing near the maximal possible speed - only what is required for occasional permanent bit erasures and error corrections.
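For reference, the Landauer bound sets the floor for those occasional irreversible erasures, while reversible operations can in principle dissipate arbitrarily less. A minimal sketch:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound_joules(temp_kelvin):
    """Minimum energy dissipated per irreversible bit erasure: k_B * T * ln(2)."""
    return K_B * temp_kelvin * math.log(2.0)

for T in (300.0, 77.0, 4.0, 1.0):
    print(f"{T:6.1f} K -> {landauer_bound_joules(T):.2e} J per erased bit")
```

Even the unavoidable erasure cost drops linearly with temperature, which is one more reason the gradient points toward colder operation.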

Replies from: Gavin
comment by Gavin · 2015-04-20T04:46:02.708Z · LW(p) · GW(p)

It would be very unusual indeed if the element distribution of optimal computronium exactly matched that of a typical solar system.

But if it were not the optimal computronium, but rather the easiest-to-build computronium, it would be made up of whatever was available in the local area.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-20T04:56:25.357Z · LW(p) · GW(p)

Yes - and that is related to my point - the configuration will depend on the matter in the system and the options at hand, and the best development paths are unlikely to turn all of the matter into computronium.

comment by James_Miller · 2015-04-18T19:26:56.754Z · LW(p) · GW(p)

most of the universe's mass was already used up by alien civs.

But then why not all of it? Why leave anything for civs like ours?

Replies from: jacob_cannell
comment by jacob_cannell · 2015-04-18T20:27:35.144Z · LW(p) · GW(p)

Why haven't we turned all of earth into one huge factory/computer/whatever? I discussed some of this in my post.

Mass has some value as raw material, but that does not imply that the mass near stars is the most valuable. On the contrary, the mass near stars is of very low value, because it is far too hot, and cooling it requires an investment of energy.

Most of the mass is actually free-floating, and that is the high-value mass anyway, as it is already colder and/or easier to cool.

Furthermore, early biological civilizations also have present scientific value as objects of study and potential future value as information/knowledge trading partners.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-04-18T22:27:39.407Z · LW(p) · GW(p)

Why haven't we? We are very far from being in a steady state.

Replies from: Gavin
comment by Gavin · 2015-04-20T04:52:43.404Z · LW(p) · GW(p)

Maybe the elder civs aren't either. It might take billions of years to convert an entire light cone into dark computronium. And they're 84.5% of the way done.

I'm guessing the issue with this is that the proportion of dark matter doesn't change if you look at older or younger astronomical features.

comment by RomeoStevens · 2015-04-18T21:14:18.166Z · LW(p) · GW(p)

I like this quote from Next Big Future: "looking on planets and around stars could be like primitives looking into the best caves and wondering where the advanced people are."

Replies from: James_Miller
comment by James_Miller · 2015-04-19T04:57:35.905Z · LW(p) · GW(p)

Spelunking.