Max Tegmark on our place in history: "We're Not Insignificant After All"

post by Roko · 2010-01-04T00:02:04.868Z · LW · GW · Legacy · 88 comments

An uplifting message as we enter the new year, quoted from Edge.org:

We're Not Insignificant After All

Max Tegmark, Physicist, MIT

When gazing up on a clear night, it's easy to feel insignificant. Since our earliest ancestors admired the stars, our human egos have suffered a series of blows. For starters, we're smaller than we thought. Eratosthenes showed that Earth was larger than millions of humans, and his Hellenic compatriots realized that the solar system was thousands of times larger still. Yet for all its grandeur, our Sun turned out to be merely one rather ordinary star among hundreds of billions in a galaxy that in turn is merely one of billions in our observable universe, the spherical region from which light has had time to reach us during the 14 billion years since our big bang. Then there are probably more (perhaps infinitely many) such regions. Our lives are small temporally as well as spatially: if this 14 billion year cosmic history were scaled to one year, then 100,000 years of human history would be 4 minutes and a 100 year life would be 0.2 seconds. Further deflating our hubris, we've learned that we're not that special either. Darwin taught us that we're animals, Freud taught us that we're irrational, machines now outpower us, and just last month, Deep Fritz outsmarted our Chess champion Vladimir Kramnik. Adding insult to injury, cosmologists have found that we're not even made out of the majority substance.

The more I learned about this, the less significant I felt. Yet in recent years, I've suddenly turned more optimistic about our cosmic significance. I've come to believe that advanced evolved life is very rare, yet has huge growth potential, making our place in space and time remarkably significant.

The nature of life and consciousness is of course a hotly debated subject. My guess is that these phenomena can exist much more generally than in the carbon-based examples we know of.

I believe that consciousness is, essentially, the way information feels when being processed. Since matter can be arranged to process information in numerous ways of vastly varying complexity, this implies a rich variety of levels and types of consciousness. The particular type of consciousness that we subjectively know is then a phenomenon that arises in certain highly complex physical systems that input, process, store and output information. Clearly, if atoms can be assembled to make humans, the laws of physics also permit the construction of vastly more advanced forms of sentient life. Yet such advanced beings can probably only come about in a two-step process: first intelligent beings evolve through natural selection, then they choose to pass on the torch of life by building more advanced consciousness that can further improve itself.

Unshackled by the limitations of our human bodies, such advanced life could rise up and eventually inhabit much of our observable universe. Science fiction writers, AI-aficionados and transhumanist thinkers have long explored this idea, and to me the question isn't if it can happen, but if it will happen.

My guess is that evolved life as advanced as ours is very rare. Our universe contains countless other solar systems, many of which are billions of years older than ours. Enrico Fermi pointed out that if advanced civilizations have evolved in many of them, then some have a vast head start on us — so where are they? I don't buy the explanation that they're all choosing to keep a low profile: natural selection operates on all scales, and as soon as one life form adopts expansionism (sending off rogue self-replicating interstellar nanoprobes, say), others can't afford to ignore it. My personal guess is that we're the only life form in our entire observable universe that has advanced to the point of building telescopes, so let's explore that hypothesis. It was the cosmic vastness that made me feel insignificant to start with. Yet those galaxies are visible and beautiful to us — and only us. It is only we who give them any meaning, making our small planet the most significant place in our observable universe.

Moreover, this brief century of ours is arguably the most significant one in the history of our universe: the one when its meaningful future gets decided. We'll have the technology to either self-destruct or to seed our cosmos with life. The situation is so unstable that I doubt that we can dwell at this fork in the road for more than another century. If we end up going the life route rather than the death route, then in a distant future, our cosmos will be teeming with life that all traces back to what we do here and now. I have no idea how we'll be thought of, but I'm sure that we won't be remembered as insignificant.

 

A few thoughts: when considering the heavy skepticism that the singularity hypothesis receives, it is important to remember that there is a much weaker hypothesis, highlighted here by Tegmark, that still has extremely counter-intuitive implications about our place in spacetime. One might call it the bottleneck hypothesis: the hypothesis that 21st-century humanity occupies a pivotal place in the evolution of the universe, simply because we may well be part of the small window of space and time during which it is decided whether Earth-originating life will colonize the universe or not.

The bottleneck hypothesis is weaker than the singularity hypothesis - we can be at the bottleneck even if smarter-than-human AI is impossible or extremely impractical, but if smarter-than-human AI is possible and reasonably practical, then we are surely at the bottleneck of the universe. The bottleneck hypothesis is based upon less controversial science than the singularity hypothesis, and is robust to different assumptions about what is feasible in an engineering sense (AI/no AI, ems/no ems, nuclear rockets/generation ships/cryonics advances, etc) so might be accepted by a larger number of people.

Related is Hanson's "Dream Time" idea.

88 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2010-01-04T16:50:22.306Z · LW(p) · GW(p)

What strikes me about our current situation is not only that we're at an extremely influential point in the history of the universe, but also how few people realize this. It ought to give the few people in the know enormous power (relative to just about anyone else who has existed or will exist) to affect the future, but, even among those who do realize that we're at a bottleneck, few try to shape the future in any substantial way, to nudge it one way or another. Instead, they just go about their "normal" lives, and continue to spend their money on the standard status symbols and consumer goods.

What to make of this? If we follow straight revealed preference, we have to conclude that people have huge discount rates on distance or time, or to put it more straightforwardly, they are simply indifferent about what happens in nearly all of the universe. This is not a very palatable conclusion for those who lean towards preference utilitarianism. Robin's response (in "Dream Time") is to dismiss those preferences as "consequential delusions" and Eliezer's response (in CEV) is to hope that if people were more intelligent and rational they would have more interesting preferences.
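(In the standard exponential form, a preference at temporal or spatial distance d gets weight e^(-kd); a "huge discount rate" means k is large enough that nearly everything outside one's own time and neighborhood gets negligible weight.)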

Personally, I don't know what I want the future to be, but I still find it worthwhile to try to push it in certain directions, directions that I think are likely to be net improvements. And I also puzzle over why I appear to be in such an atypical position.

Replies from: MatthewB, whpearson, byrnema, cabalamat, Roko
comment by MatthewB · 2010-01-05T02:00:35.219Z · LW(p) · GW(p)

What strikes me about our current situation is not only are we at an extremely influential point in the history of the universe, but how few people realize this.

I am amazed at how few people are aware of the consequences of the next few decades. Yet, when I think about other influential points in history, I find that people were either just as ignorant, or actively resisting such change (the saboteurs and Luddites in the textile industries of Holland and England, for instance, or those who opposed industrialization in the USA at the end of the 19th century).

The last really major change, comparable to the one we are now in, was the end of the Industrial Revolution in the early 1900s. It was so very obvious that rural life was becoming a thing of the past, yet people fled into nostalgia and disbelief, talking about the benefits of the pastoral lifestyle (while ignoring the fact that it was backbreaking work that required dawn-to-dusk toil for little gain).

Those very few who were aware that it was indeed a time that would end the world as people knew it were able to improve their lot immensely. A new generation of wealth was created.

This same thing is happening now. And, you are correct:

It ought to give the few people in the know enormous power

Hopefully, this period will also result in the ability of the vast hordes of people living in or below the poverty line to rise above it. Ultimately, we could move into a post-scarcity economy, where all basic needs are fully (and comfortably) met, and thus free people to pursue more fulfilling work and leisure.

Of course, the jury is still out on that one.

And I also puzzle over why I appear to be in such an atypical position.

That is something that I wonder as well. I've spent an inordinate amount of time at my school trying to educate people about the possibilities of the next 2 to 5 decades, yet it has mostly fallen on deaf ears (I would like to say even among the Computer Science/Engineering Professors and students, but it is more like especially among the Comp Sci/Engn Profs & Students). There have been a few who knew about the historical changes that are happening now, but I didn't need to educate them. They already knew. They are also the people whom I noticed that, like myself, were aiming at Berkeley, Stanford, CMU, MIT, etc.

So, maybe it is that those of us in the know should consider ourselves fortunate and make plans to help elevate others in the future who missed the boat (so to speak). I know that if I succeed at my goals, I plan to help out others whom I know have had hard times due to failure to plan well (or who made mistakes, as I did earlier in life, and need a second chance).

Replies from: byrnema
comment by byrnema · 2010-01-05T02:50:41.542Z · LW(p) · GW(p)

Wow: MIT and Berkeley. You guys must have been the group that was right.

Replies from: MatthewB
comment by MatthewB · 2010-01-05T03:01:11.087Z · LW(p) · GW(p)

I should point out that I am the stupid one among them, which is why I have to limit myself to UC (Berkeley or UCSD - as UCSD has a HUGE Cog Sci and AI program that rivals Berkeley's). If I were not disabled (and old enough to be the father of most of the group) I would probably be heading to MIT or CMU as well... Although, Berkeley is not shabby. My GPA suffered horribly when I first went back to school, due to not taking my disability fully into account and not yet knowing my rights as a disabled person. I have finally managed a couple of semesters at a 3+ GPA, but my overall GPA is still slightly below 3. I've been told that I will stand a good chance of getting into Berkeley if I maintain the 3.2 to 3.7 semesters I've been getting since the end of 2008 (I only do 3/4 time, as I discovered in that first semester that I can't manage full time very well).

Thank you for the compliment though. I hope that I continue to be worthy of it.

comment by whpearson · 2010-01-04T17:35:54.318Z · LW(p) · GW(p)

My current position is that I don't know what action to take to nudge the world the way I want. The world seems to be working, somewhat, at this point, and any nudge may send it down a path towards something that doesn't work (even sub-human AI might change the order of the world so much that it stops working).

So my strategy is to try to prepare a nudge that could be used in case of emergencies. Since I am also trying to live a semi-normal life and cope with akrasia etc., it is not going quickly.

Replies from: Wei_Dai, MatthewB
comment by Wei Dai (Wei_Dai) · 2010-01-04T18:04:22.881Z · LW(p) · GW(p)

There are some actions that seem to be clear wins, like fighting against unFriendly AI. I find it difficult to see what kind of nudge you could prepare that would be effective in an emergency. Can you say more about the kind of thing you had in mind?

Replies from: whpearson
comment by whpearson · 2010-01-04T20:12:53.878Z · LW(p) · GW(p)

I think very fast UFAI is unlikely, so I tend to worry about the rest of the bottleneck. Slow AI* has its own dangers and is not a genie I would like to let out of the bottle unless I really need it. Even if the first Slow AI is Friendly, that doesn't guarantee the next 1000 will be, so it depends on the interaction between the AI and the society that makes it.

Not that I expect to code it all myself. I really should be thinking about setting up an institution to develop and hide the information in such a way that it is distributed but doesn't leak. The time to release the information/code would be when there had been a non-trivial depopulation of Earth and it was having trouble re-forming an industrial society (or some other time when industrial Earth was in danger). The reason not to release it straight away would be the hope of gaining a better understanding of the future trajectory of the Slow AIs.

There might be an argument for releasing the information if we could show we would never get a better understanding of the future of the Slow AIs.

*By slow AI I mean AI that has as much likelihood of Fooming as unenhanced humans do, due to sharing similar organization and limitations of intelligence.

comment by MatthewB · 2010-01-04T20:54:46.441Z · LW(p) · GW(p)

Could you define sub-human AI, please?

It seems to me that we already have all manner of sub-human AI. The AIs that deal with telephone traffic, data mining, air-traffic control, the Gov't and Intelligence services, the Military, Universities that have AI programs, Zoos that have breeding programs (and sequence the genomes of endangered animals to find the best mate for the animal), etc.

Are these types of AI far too primitive to even be considered sub-human, in your opinion?

Replies from: whpearson
comment by whpearson · 2010-01-04T23:11:30.711Z · LW(p) · GW(p)

Are these types of AI far too primitive to even be considered sub-human, in your opinion?

Not exactly too primitive, but of the wrong structure. Are you familiar with functional programming type notation? An offline learning system can be considered a curried function of type

classify :: Corpus -> (a -> b)

where a and b are the input and output types, and Corpus is the training data. Consider a chess-playing program that learns from previous chess games (for simplicity):

Corpus -> (ChessGameState -> ChessMove)

or a data mining tool set up for finding terrorists:

Corpus -> ((Passport, FlightItinerary) -> Float)

where the Float is the probability that the person travelling is a terrorist, based on the passport presented and the itinerary.

They can be very good at their jobs, but they are predictable. You know their types. What I was worried about is learning systems that don't have a well-defined input and output over their lifetimes.
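To make this concrete, here is a minimal sketch in Haskell; the memorising "learner" and all the names are illustrative assumptions, not anything from a real system:

-- A fixed-type offline learner: once trained, its input and output
-- types never change, so you always know what it can and cannot do.
type Corpus a b = [(a, b)]

-- classify :: Corpus -> (a -> b), instantiated as a toy memorising learner.
classify :: Eq a => Corpus a b -> (a -> b)
classify corpus = \x -> case lookup x corpus of
    Just y  -> y
    Nothing -> snd (head corpus)  -- crude fallback: the first training label

main :: IO ()
main = do
    let spamFilter = classify [("buy now", True), ("hi mum", False)]
    print (spamFilter "buy now")  -- True; the type String -> Bool is fixed for life

However well it is trained, such a learner can never grow a new input channel; the open-ended systems described below are a different kind of thing.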

Consider the humble PC: it doesn't know how many monitors it is connected to or what will be connected to its USB sockets. If you wanted to create a system that could learn to control it, it would need to map from any type to any type, depending upon what it had connected.* I think humans and animals are designed to be this kind of system, as our brains have been selected to cope with many different types of body with minimal evolutionary change. It is what allows us to add prosthetics and cope with bodily changes over life (growth and limb/sense loss). These systems are a lot more flexible, as they can learn things quickly by restricting their search spaces, but still have a wide range of possible actions.

There are more considerations for an intelligence about the type of the function that determines how the corpus/memory fixes the current input/output mapping, as well. But that is another long reply.

*You can represent a mapping from any type to any other type as a large integer in a finite system. But with the type notation I am trying to indicate what the system is capable of learning at any one point. We don't search the whole space, for computational-resource reasons.

Replies from: MatthewB
comment by MatthewB · 2010-01-05T01:43:22.172Z · LW(p) · GW(p)

Thanks for the reply. It is very helpful.

I am aware of functional programming, but only due to having explored it myself (I am still at City College of San Francisco, and will not be transferring to UC - hopefully Berkeley or UCSD - until this fall). Unfortunately, most Community and Junior Colleges don't teach functional programming, because they are mostly concerned with cranking out code monkeys rather than real Computer Scientists or Cognitive Scientists (My degree is Cog Sci/Computationalism and Computational Engineering - or, the shorter name: Artificial Intelligence. At least that is what most of the people in the degree program are studying. Especially at Berkeley and UCSD, the two places I wish to go).

So, is what you are referring to, with a learning-type system, not sub-human equivalent because it has no random or stochastic processes?

Or, to be a little more clear, they are not sub-human equivalent because they are highly deterministic and (as you put it) predictable.

I get what you mean about human body-type adaptation. We still have the DNA in our bodies for having tails of all types (from reptile to prehensile), and we still have DNA for other deprecated body plans. Thus, a human-equivalent AI would need to be flexible enough to be able to adapt to a change in its body plan and tools (at least this is what I am getting).

In another post (which I cannot find, as I need to learn how to search my old posts better), I propose that computers are another form of intelligence that is evolving with humans as the agent of selection and mutation. Thus, they have a vastly different evolutionary pathway than biological intelligence has had. I came up with this after hearing Eliezer Yudkowsky speak at one of the Singularity Summits (and maybe Convergence 08; I cannot recall if he was there or not). He talks about Mind Space, and how humans are only a point in Mind Space, and that the potential Mind Space is huge (maybe even unbounded. I hope that he will correct me if I have misunderstood this).

comment by byrnema · 2010-01-04T17:11:38.735Z · LW(p) · GW(p)

And I also puzzle over why I appear to be in such an atypical position.

And I was wondering why I was in such an atypical position of not caring.

You write of pushing the universe towards net improvements. By 'improvement', you mean relative to your particular or general human values. At a large and far scale, why should we have any loyalty to those values, especially if they are arbitrary (that is, not sufficiently general across mind space)? If the universe is meant to be dominated by the different values of other minds, why would I do anything but shrug my shoulders about that?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-04T17:49:29.952Z · LW(p) · GW(p)

I think just by asking the question "why should I care?", you probably already care more than most, who just go on doing what they always did without a second thought.

If I ask myself "why do I care?", the answer is that I don't seem to care much about the standard status symbols and consumer goods (bigger houses, faster cars, etc.), so what is left? Well for one thing, I care about knowledge, i.e., finding answers to questions that puzzle me, and I think I can do that much better in some futures than in others.

Replies from: AdeleneDawner, byrnema
comment by AdeleneDawner · 2010-01-04T18:48:35.204Z · LW(p) · GW(p)

Er... if you answered why you care, I'm failing to find where you did so. Listing what you care about doesn't answer the question.

I don't think it's controversial that 'why do you care about that' is either unanswerable, or answerable only in terms of something like evolution or neurochemistry, in the case of terminal values.

Replies from: byrnema, Wei_Dai
comment by byrnema · 2010-01-04T19:43:40.649Z · LW(p) · GW(p)

Listing what you care about doesn't answer the question.

There is a subtext to this question, which is that I believe we typically assume -- until it is demonstrated otherwise -- that our values are similar or overlap significantly, so it is natural to ask 'why do you value this' when maybe we really mean 'what terminal value do you think you're optimizing with this'? Disagreements about policy or 'what we should care about' are then often based on different beliefs about what achieves what, rather than on different values. It is true that if our difference in caring turns out to be based upon different values, or upon weighting values differently, then there's nothing much to discuss. Since I do value knowledge too, I wanted to further qualify how Wei Dai values knowledge, because I don't see how nudging the far future one way or another is going to increase Wei Dai's total knowledge.

comment by Wei Dai (Wei_Dai) · 2010-01-04T19:30:40.864Z · LW(p) · GW(p)

Byrnema had a specific objection to human values that are "arbitrary", and I think my response addressed that. To be more explicit, all values are vulnerable to the charge of being arbitrary, but seeking knowledge seems less vulnerable than others, and that seems to explain why I care more about the future than the average person. I was also trying to point out to Byrnema that perhaps she already cares more about the future than most, but didn't realize it.

comment by byrnema · 2010-01-04T18:03:55.059Z · LW(p) · GW(p)

To what extent does your caring about the future depend upon you being there to experience it?

Then my next question would be, how important is your identity to this value? For example, do you have a strong preference whether it is "you" that gains more and more knowledge of the universe, or any other mind?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-04T19:46:12.346Z · LW(p) · GW(p)

I might change my mind in the future, but right now my answers are "to a large extent" and "pretty important".

Why do you care what my values are, though, or why they are what they are? I find it fascinating that "value-seeking" is a common behavior among rationalist-wannabes (and I'm as guilty of it as anyone). It's almost as if the most precious resource in this universe isn't negentropy, but values.

ETA: I see you just answered this in your reply to Adelene Dawner:

I wanted to further qualify how Wei Dai values knowledge, because I don't see how nudging the far future one way or another is going to increase Wei Dai's total knowledge.

I expect that I can survive indefinitely in some futures. Does that answer your question?

Replies from: byrnema
comment by byrnema · 2010-01-04T19:58:56.897Z · LW(p) · GW(p)

It's almost as if the most precious resource in this universe isn't negentropy, but values.

That's an amusing observation, with some amount of truth to it.

The reason why I was asking is because I was seeking to understand why you care and I don't.

Given your reply, I think our difference in caring can be explained by the fact that when I imagine the far future, I don't imagine myself there. I'm also less attached to my identity; I wouldn't mind experiencing the optimization of the universe from the point of view of an alien mind, with different values. (This last bit is relevant if you want the future to be good just for the sake of it being good, even if you're not there.)

comment by cabalamat · 2010-01-05T07:30:23.698Z · LW(p) · GW(p)

If we follow straight revealed preference, we have to conclude that people have huge discount rates on distance or time, or to put it more straightforwardly, they are simply indifferent about what happens in nearly all of the universe.

Maybe they just think that they can't affect what happens very much.

Replies from: MichaelBishop, Roko
comment by Mike Bishop (MichaelBishop) · 2010-01-06T21:06:16.286Z · LW(p) · GW(p)

People talk about global poverty and other issues they have little influence over. If people would at least talk about the long-term future of our species that would be a start.

comment by Roko · 2010-01-05T11:53:09.745Z · LW(p) · GW(p)

Nope, they haven't thought about it at all, or at least not rationally.

comment by Roko · 2010-01-04T21:08:38.836Z · LW(p) · GW(p)

but, even among those who do realize that we're at a bottleneck, few try to shape the future in any substantial way, to nudge it one way or another. Instead, they just go about their "normal" lives, and continue to spend their money on the standard status symbols and consumer goods.

The human brain is actually highly constrained by the social reality it finds itself in, and by the need to satisfy various biological drives (status, sex, child-rearing, romance, friendship/socializing, signaling).

Furthermore, it is not obvious to most people why they should actually care about other distant humans, rather than just talking about it. Distant future humans in trouble? Um, ok, what's on TV then?

comment by RobinHanson · 2010-01-04T02:29:50.840Z · LW(p) · GW(p)

It isn't clear what weaker-hypothesis tech self-destruction he thinks likely in the next century.

comment by whpearson · 2010-01-04T19:54:04.960Z · LW(p) · GW(p)

As paper-clippers would still be visible astronomically, can we conclude that UFAI isn't very likely to be what wiped out previous oases of life that got to our level of advancement? We really have to hypothesize very low chances of getting to human-level life, or highly non-visible means of stopping human-like things from spreading (nukes, bioterror, etc.?).

Anyone have any favoured previous bottlenecks that we are likely to have dodged? None of the transitions life has gone through seems very special apart from the bottleneck of not getting self-replication started at all. I also don't see earth-like planets being more rare than perhaps 1 in 50 million.

Replies from: Roko, Roko
comment by Roko · 2010-01-04T21:00:01.827Z · LW(p) · GW(p)

As paper-clippers would still be visible astronomically can we conclude that UFAI isn't very likely to be what wiped out previous Oases of life that got to our level of advancement?

It is certainly the case that the Fermi paradox seems to provide little evidence in favor of, or against, UFAI.

comment by Roko · 2010-01-04T21:00:57.294Z · LW(p) · GW(p)

Anyone have any favored previous bottlenecks that we are likely to have dodged?

It could be several. The Drake Equation has a lot of terms in it.
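For reference, the standard form is N = R* · fp · ne · fl · fi · fc · L: the rate of star formation, times the fraction of stars with planets, the number of habitable planets per such star, the fractions of those developing life, intelligence, and detectable communication, and the lifetime of a detectable civilization. Any one of those terms could hide the filter.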

Replies from: thomblake, whpearson
comment by whpearson · 2010-01-04T23:56:51.806Z · LW(p) · GW(p)

Hmm I hadn't looked at the Drake equation.

It is interesting to note L, the lifetime of the species, or how long you can notice a species. For a Dyson sphere blocking the light of a star, we would expect it to block for the lifetime of the star. And if you are making L that large, you have to make all the other terms a lot smaller than the estimates there.

Or were you thinking of a more realistic equation?

Replies from: Roko
comment by Roko · 2010-01-05T11:59:23.406Z · LW(p) · GW(p)

I think that the "L" in the Drake equation is silly; once a civ gets to the level of colonization, it will [EDIT] probably [/EDIT] colonize and last forever.

Replies from: Jack, Alicorn
comment by Jack · 2010-01-05T14:08:01.662Z · LW(p) · GW(p)

Even if that's the case, it just means that L is dominated by the probability that any given civilization gets to the level of colonization. Plenty of civilizations could destroy themselves first.

comment by Alicorn · 2010-01-05T13:49:17.100Z · LW(p) · GW(p)

A colonized civilization that still had contact between its various planets could still be wiped out by a plague or a sufficiently widespread war. Or they could commit mass suicide. Or they could be attacked and killed each and every one by another alien species. Why are you making such confident, general statements about a huge potential class of forms of life you have not met?

Replies from: Roko
comment by Roko · 2010-01-05T15:09:11.245Z · LW(p) · GW(p)

Well, let's take each objection in turn:

various planets could still be wiped out by a plague or a sufficiently widespread war.

The wave of expansion would travel faster than war, or at least at roughly the same speed. The dynamics of war in an infinite space seem to look like Hanson's hardscrapple frontier - everyone trying to move into new territory as quickly as possible. I'm not sure what you mean by plague, but it would have to travel outwards and might struggle to catch the expansion.

Or they could commit mass suicide.

Possible, but that seems like a rather contrived possibility, especially if one is then going to invoke it to explain the Fermi paradox; for one galactic civ to commit suicide seems unlikely, but for billions of them to do it without a single exception is ... well, there would need to be a convergent dynamic.

Or they could be attacked and killed each and every one by another alien species

As I said, the dynamics of war in space seem to be dynamics of expansion. You can just keep running away at .99c into fresh space, always expanding your empire and the amount of energy and matter you control. Of course, your enemy will do exactly the same in the opposite directions.

Why are you making such confident, general statements about a huge potential class of forms of life you have not met?

Well, if you make a statement of the form "at least one alien race will ... " you have a lot of chances to be right. Also, physics and game theory constrain them. Have you read Omohundro's Basic AI drives?

comment by Roko · 2010-01-04T13:51:02.531Z · LW(p) · GW(p)

Since some people have opined that maybe we're not alone in the universe, I'll write down the strongest argument that I can think of in favor of this position. (To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.)

The strongest reason that aliens might be invisible to us is that they are deliberately hiding. In fact I think that this is the only plausible reason.

Why would they be hiding? Well, they might be frightened that they're in a simulation, and that the simulators have some crude algorithms that search massive tracts of space (many, many hubble volumes in size) by simply looking for inhomogeneity on the galaxy or supercluster level. The advanced aliens don't want to get caught by the simulator.

This explanation would still work even if we're not in a simulation: the threat of it is enough. Even a small probability that we're in a simulation given the intelligence and data that an advanced alien civilization would have might be enough to offset the (perhaps small?) advantages of expansion, especially given recent work on bounded utility functions.

EDIT: This could be combined with the singularity hypothesis: perhaps all superintelligences convergently decide not to expand.

Replies from: timtyler, Jach, Jack, Alicorn, MattFisher, Kaj_Sotala
comment by timtyler · 2010-01-04T21:29:00.990Z · LW(p) · GW(p)

It seems like a weak argument to me: living systems typically invade all available niches - if they are here, we would see them.

For a much better argument, consider:

http://en.wikipedia.org/wiki/Fermi_paradox#Intelligent_civilizations_are_too_far_apart_in_space_or_time

Replies from: Roko
comment by Roko · 2010-01-05T11:51:22.583Z · LW(p) · GW(p)

Tim: these civilizations would be superintelligences; they would not behave in a way that is typical of dumb life.

But I agree, I find this argument somewhat weak.

Replies from: timtyler
comment by timtyler · 2010-01-05T18:51:31.699Z · LW(p) · GW(p)

Superintelligent agents can be expected to evolve out of systems that evolved by random mutations.

The systems that they evolved from can be expected to have goals that result in tracking down and utilising every available source of negentropy.

They will build superintelligent agents to help them attain these goals - and so the resulting systems are likely to be even better at tracking down and utilising negentropy than the original systems were - since they will pursue the same ends with greater competence.

Systems with radically different goals are not logically impossible. I call those "handicapped superintelligences". If they ever meet any other agents, it seems that they will be at a disadvantage - since nature disapproves of deviations from god's utility function.

Replies from: Roko
comment by Roko · 2010-01-05T19:24:24.456Z · LW(p) · GW(p)

More important than negentropy is continued existence. If the simulation gets shut down, you lose everything you already have.

Replies from: timtyler
comment by timtyler · 2010-01-05T19:51:00.635Z · LW(p) · GW(p)

Living systems maximise entropy. If the system dies out, it fails in doing that, and entropy increases more slowly. So: self-perpetuation is pretty much an automatic corollary of long-term entropy-maximisation. The best way to flatten those energy gradients is often to have lots of kids - and to let them help you.

comment by Jach · 2010-01-05T21:24:27.425Z · LW(p) · GW(p)

While I'm not in any way an expert in simulation making, wouldn't it seem just a bit too convenient that, with all the monstrous computing power behind making the universe run, the Overlords couldn't devise a pretty clever and powerful algorithm that would have found us already? Maybe you can help me see why there would only be a crude algorithm that superintelligences should fear being caught by, and why they wouldn't have considered themselves caught already.

Apart from this, I'm in agreement with other commenters that a stronger argument is the vastness of space.

Replies from: Roko, Roko
comment by Roko · 2010-01-06T11:16:36.764Z · LW(p) · GW(p)

Overlords couldn't devise a pretty clever and powerful algorithm that would have found us already?

Maybe the overlords are very, very, different life forms than us, and so they don't know what to look for other than large scale statistical anomalies?

But I must admit, this is a weakness of the hypothesis.

comment by Roko · 2010-01-06T11:10:45.248Z · LW(p) · GW(p)

Apart from this, I'm in agreement with other commenters that a stronger argument is the vastness of space.

More space = more stars = bigger problem. There are roughly 10^14 stars within 100 million light years of earth (as far as I can tell), which would make for 10^17 within 1 bn ly, and 10^20 within 10bn ly. The universe is a whole 13bn years old.
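(The scaling is just volume: star counts grow roughly as the cube of the radius, so each factor of 10 in distance multiplies the count by about 1000 - hence 10^14, then 10^17, then 10^20.)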

comment by Jack · 2010-01-04T15:33:23.008Z · LW(p) · GW(p)

I think wireheading, religious seclusion or some other memetic effect could keep them from expanding. I get that there is a selection effect in favor of expansionism but I think aliens could be weird enough that the entire range of variability is within isolationist boundaries. Besides, if you think the aliens can all agree to stop expanding to hide from theoretical simulators then I don't see why some other meme couldn't do the trick too.

Replies from: Roko
comment by Roko · 2010-01-04T20:54:44.281Z · LW(p) · GW(p)

Besides, if you think the aliens can all agree to stop expanding to hide from theoretical simulators then I don't see why some other meme couldn't do the trick too.

I can't think of anything else that would affect every single alien civ out of trillions; in fact, even this explanation seems unlikely to account for no expansion from every single one of trillions of civs.

I think wireheading,

Perhaps we should regard the Fermi problem as evidence in favor of wireheading?

religious seclusion

I think that the "religious seclusion" idea is not a good bet. Remember, it has to work with really rather high fidelity.

or some other memetic effect could keep them from expanding

Some other memetic effect seems more promising.

Replies from: Jordan
comment by Jordan · 2010-01-06T08:42:59.288Z · LW(p) · GW(p)

If the simulators are looking for intelligent life, then they may be removing it when they find it, to allow the rest of the universe to continue on producing other original life forms. Then, there would be intelligent life in hiding, and other intelligent life would be selected against.

comment by Alicorn · 2010-01-04T14:39:19.211Z · LW(p) · GW(p)

The strongest reason that aliens might be invisible to us is that they are deliberately hiding.

Couldn't they just be really, really far away; or have a structure and value system such that their bustling activity looks like natural phenomena; or be trying like hell to get our attention but finding us kind of hard to talk to?

Replies from: Roko
comment by Roko · 2010-01-04T14:59:06.567Z · LW(p) · GW(p)

Couldn't they just be really, really far away

Yes, they could lie beyond our cosmic horizon.

or have a structure and value system such that their bustling activity looks like natural phenomena

No; to see why, you need to think about physics, engineering, thermodynamics and information processing. They'd want to capture low-entropy sources of energy, and radiate away all that energy as high-entropy infra-red radiation. This would be noticeable. Also, it would be an amazing coincidence if alien engineering just happened to look exactly the same as pristine nature.

Replies from: Jack, Cyan
comment by Jack · 2010-01-04T15:25:09.823Z · LW(p) · GW(p)

They'd want to capture low entropy sources of energy, and radiate away all that energy as high-entropy infra red radiation.

Could we detect Dyson spheres if they were out there?

Replies from: whpearson
comment by whpearson · 2010-01-04T16:03:51.395Z · LW(p) · GW(p)

Might we be able to see them if they went in front of galaxies or other highly luminous bodies?

Replies from: Jack, MatthewB
comment by Jack · 2010-01-04T18:01:05.109Z · LW(p) · GW(p)

Come to think of it, wouldn't they look a lot like black holes?

Replies from: whpearson
comment by whpearson · 2010-01-04T18:31:01.876Z · LW(p) · GW(p)

Not really. The black hole might have Hawking radiation; it should also gravitationally lens the surroundings more than a Dyson sphere would.

It might also have an accretion disc.
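(Quantitatively, the Hawking radiation would be hopeless to spot: a solar-mass black hole has a Hawking temperature of about 6x10^-8 K, far colder than the 2.7 K CMB, so the extra lensing and any accretion disc are the practical distinguishing signatures.)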

comment by MatthewB · 2010-01-04T16:37:53.283Z · LW(p) · GW(p)

Or if they entered a region of dust or gas that had a temperature differential to the sphere.

In fact, I just realized that if any such structures like Ringworlds existed, they would be trivially easy to spot if they were at the right aspect to us. It would look like a star with either a band across it, or like the star had a single ring. The ring on the far side of the star would also be tremendously bright (if angled towards our solar system slightly) and much easier to spot with a telescope that blocked off the light of the central star.

Just an OT thought.

Replies from: Jack
comment by Jack · 2010-01-04T18:00:54.693Z · LW(p) · GW(p)

Stars are too bright and too far away and the ringworld too thin (at least I think there would be stability problems if they were built too wide, but I don't know enough). Worse, I thought the way we were trying to detect planets these days was by looking at the gravitational effect the planet has on the star. But wouldn't a ringworld balance out its own gravitational effect on the star?

Replies from: MatthewB
comment by MatthewB · 2010-01-04T20:47:04.807Z · LW(p) · GW(p)

The Kepler Observatory also detects planets by masking the light from the star and then looking for any bright spots (coronagraphic masking).

Even more advanced telescopes will use this technique in combination with the gravity wobble; there are also diffraction and coronagraphic masks.

http://www.iop.org/EJ/article/0004-637X/662/1/738/65461.web.pdf

http://spie.org/x24241.xml?ArticleID=x24241

Why did I get downvoted on that post?

comment by Cyan · 2010-01-04T15:12:20.752Z · LW(p) · GW(p)

you need to think about physics, engineering, thermodynamics and information processing

Like this guy did. Short short version: to get the most out of your star, you drain the energy of the emitted photons until they are (almost) at the temperature of the CMB. Infra-red is still harvestable.
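(The bound here is Carnot efficiency, 1 - T_cold/T_hot: harvesting ~5800 K sunlight against the ~2.7 K CMB allows roughly 99.95% of the energy to be extracted as work before the waste photons approach CMB temperature.)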

Replies from: Roko
comment by Roko · 2010-01-04T20:06:43.089Z · LW(p) · GW(p)

Getting rid of all the stars in one part of space and replacing them with near-CMB emitters doesn't seem to produce what we see: it would produce a raised CMB signal in one direction (would it be strong enough to pick out? How close, exactly, to the CMB would it be?), a reduced star density in one region, and presumably a bunch of half-eaten galaxies on the boundary of the region.

The point that "it would be an amazing coincidence if alien engineering just happened to look exactly the same as pristine nature" seems reasonably convincing to me.

Replies from: Cyan
comment by Cyan · 2010-01-04T20:22:19.490Z · LW(p) · GW(p)

I think the idea is not all the stars in a region -- it's one star per civilization. The basic idea is to maximally exploit energy transfer between the star and interstellar space, so optimal efficiency makes the artifact appear from the outside to be as close to interstellar space as possible.

comment by MattFisher · 2010-01-05T14:35:40.745Z · LW(p) · GW(p)

Another possibility is that they've gone through their own singularity, and no longer have a significant visible (to us) presence in physical space (or at least don't use radio waves any more), i.e. they've transcended.

Naturally we can only speculate about what new laws of physics become identifiable post-singularity, but string theory suggests there are a lot of extra dimensions around to which we don't have direct access (yet?). What if civilisations that transcend tend to find 4D spacetime too limiting, and stretch out into the wider universe, like a small-town kid making it big?

It looks like, if it happens for us, it will happen within the next hundred years or so. Considering that we've only been using radio for a bit over a hundred years, that gives a roughly two-hundred-year window for any two pre-singularity civilisations to spot each other before one or both of them transcend. 0.4 seconds in the year of the cosmos.

comment by Kaj_Sotala · 2010-01-05T09:00:44.040Z · LW(p) · GW(p)

Myself, I suspect the most plausible explanation is that interstellar colonization is simply a lot harder than we think. Nobody yet has managed to build a self-replicating probe that will actually survive through interstellar space intact and manage to set up a colony, and it's not a given that anyone ever will. Add to this the fact that even if it were possible, it could be horribly expensive, with a return on investment at least hundreds of years away.

Replies from: Roko
comment by Roko · 2010-01-05T11:45:56.668Z · LW(p) · GW(p)

Kaj, I'm surprised! I think you should update against this explanation; of all the technologies discussed here (superintelligence, Dyson spheres, megascale engineering, nanotech), space colonization is the only one that is so simple that even contemporary humans have a definite recipe for doing it that would probably work. If humans can do it, to a superintelligence it would almost certainly be trivial.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-01-05T17:34:58.519Z · LW(p) · GW(p)

space colonization is the only one that is so simple that even contemporary humans have a definite recipe for doing it that would probably work

Maybe. The nearest solar system - which would apparently take about a hundred years to reach - is over four light years away, and it isn't even certain whether it has any planets. The closest planet that is actually known is apparently ten light years away, which would then take 250 years to reach. It's obviously not impossible that we - or some future superintelligence - might manage to build a craft that survived that whole journey intact enough to continue with a colonization effort, but it doesn't seem obviously possible either.
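(Both travel-time figures amount to assuming a cruise speed of roughly 0.04c.)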

Replies from: Roko, Roko
comment by Roko · 2010-01-05T19:21:42.495Z · LW(p) · GW(p)

but it doesn't seem obviously possible either.

It seems fairly clearly possible because I can actually think of ways of getting there, including having all those tedious details filled in. Fleets of nuclear powered ships with 10-20 generations of humans on them could colonize thousands of worlds. The details of the nuclear powered ships and the technology to keep humans alive in space have been hashed out. Colonizing once there is tougher; we haven't really done the bulk of the work on how to colonize a new solar system, but it doesn't seem to require magical technology like nanotech to do it, just a large initial ship with a lot of infrastructure on it and lots of fairly tedious planning and preparation.

Furthermore, I anticipate that technology will not stand still for the next millennium. What we can do now with difficulty, expenditure and much loss of life (colonizing the local stellar neighborhood) will, in 1000 years' worth of technological development, be possible without much fuss. Nanotech, good AI systems, near-light-speed Von Neumann probes, and voila.

comment by Roko · 2010-01-05T18:01:41.225Z · LW(p) · GW(p)

might manage to build a craft that survived that whole journey intact enough to continue with a colonization effort

Don't send 1, send 1000.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-01-05T18:38:46.459Z · LW(p) · GW(p)

Granted.

comment by Mitchell_Porter · 2010-01-04T07:13:20.303Z · LW(p) · GW(p)

Tegmark:

this brief century of ours is arguably the most significant one in the history of our universe: the one when its meaningful future gets decided

There are at least two dubious inputs going into a statement like this. (And I see you making such a statement yourself, on your homepage, Roko!)

The first one is definitely a mistake, but perhaps not a very consequential one. Even if we accept, for the sake of argument, the proposition that the state of the visible universe indicates an absence of spacefaring intelligent life elsewhere, that is not the whole universe! All we can see is what is within our past light-cone. We have no information on what has happened in those galaxies since then - and in some cases, we are referring to billions of years. And then there's the whole rest of the universe, possibly infinite in extent, with which we have not yet interacted.

So the idea that the future of the entire cosmos hinges on what happens on Earth in the next few years is hyperbole, even if one accepts the usual premises.

The second dubious ingredient to these judgments is the premise itself. I discern two arguments in favor of the proposition that the universe is empty of intelligent life (among people who make statements like this):

1) The visible universe appears to be explicable entirely in terms of natural astrophysical structures. If there were galactic intelligences, they wouldn't let the stars shine freely into space, they'd surround them with Dyson spheres so as to harvest that radiative output for computational or other purposes, or they would otherwise engage in works of astrophysical engineering.

2) If Earth was in a region of space controlled by an alien intelligence, life, history, and evolution as we know it would not have happened. There would have been some sort of intervention, whether benevolent, or just to appropriate the atoms of this solar system for technological projects and other alien purposes.

Argument 1 is interesting but not exactly conclusive. Lots of the universe is dark matter. It may be that the dark matter has been appropriated by spacefaring intelligences and used so as to make every photon count, and that the regions of the universe which are visibly emitting light are interzones - demilitarized wilderness areas, regions where an aeon-long process of scouting and negotiation is occurring, and so forth. In fact, I find this interpretation rather more sensible than the assumption of a wholly uninhabited universe. But it's still very hypothetical. It's hard to even comprehensively list all the uncertainties which one must guess about in order to produce scenarios like this. You need to guess at the frequency with which spacefaring intelligence appears in the universe, the frequency with which it becomes expansionist, the difficulties associated with expansion, the stability of value systems and civilizational tendencies over million-year periods (if a super-civilization decides to end itself, it should be good at doing so; suicidal tendencies only have to rise to the top once; and millions of years offers a very long time for them to do so).

As for argument 2, well, there are certainly alien value systems which would leave us alone, though I have no idea how likely such value systems are to arise. But in any case, the idea that we are in a wilderness area between the local dark-matter civilizations is sufficient to explain why we've been left alone, without having to posit a completely empty universe.

Replies from: Roko
comment by Roko · 2010-01-04T12:09:19.680Z · LW(p) · GW(p)

and that the regions of the universe which are visibly emitting light are interzones - demilitarized wilderness areas, regions where an aeon-long process of scouting and negotiation is occurring, and so forth

This is not consistent with the spatial placement of bright versus dark matter - you'd expect the interzones to occur as thin surfaces, rather like boundaries between bubbles. In reality, you find a bright galaxy with a doughnut-shaped halo of dark matter around it, and you find that around every visible galaxy, repeated again, and again and again. You never find half a galaxy bright, half dark, cut across the middle. Correct me if I am wrong on the cosmology here.

Furthermore - and more problematically - how is it that every single alien race out there plays exactly this same trick - even races that are at opposite edges of our cosmic horizon, so have had no chance to communicate because of the light speed limit?

it may be that the dark matter has been appropriated by spacefaring intelligences and used so as to make every photon count

What, even the high entropy photons that one would normally expect an information processing entity to excrete as waste heat? Have these aliens broken the second law of thermodynamics (no need to radiate in the infra-red) but not the first?

there are certainly alien value systems which would leave us alone

So, they're nice enough to leave the "wilderness" unspoiled, but horrible enough to let us die, suffer, etc? It seems unlikely to me. Not impossible, but unlikely. The default prediction would be that they would have taken the matter that earth was made of before life even evolved.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-01-05T04:27:24.707Z · LW(p) · GW(p)

Why does every alien race act the same and try to look invisible? Game theory. They all know that evolution will have produced unacceptable value systems elsewhere in the universe. They all know that they risk losing everything if they come into conflict with a superior hostile power. But a strategy of persistent stealth means that at least a certain amount of value will be achieved. So, they live in the dark matter and they don't advertise their presence by visibly messing with the light matter.

I'm not actually saying this is how it is. But it is a hypothesis (one of many, see below) consistent with all the data we have.

even the high entropy photons that one would normally expect an information processing entity to excrete as waste heat?

Maybe they pipe their waste heat into supermassive black holes.

Regarding the cosmic distribution of dark matter: On my reading there are two levels of structure inferred. There is a dark matter halo around most galaxies, and then there are filamentary networks on the supergalactic scale. The most common view is that it consists of exotic particles, not in the Standard Model, which gravitate but do not radiate. The filaments formed after the big bang due to gravitational clumping, and in turn seeded the formation of the galaxies from light matter.

If you accept that view, then the dark matter is not light matter with all the light being harvested, as I originally suggested. It's just a separate type of matter which doesn't glow in the first place. So if I want to promote the universal stealth hypothesis, perhaps I should deemphasize the role of dark matter and just emphasize the imperative to be invisible, to blend in against the natural cosmic backdrop.

However, our theories about the dark matter are rather tentative, and if you did assume a significant population of galactic and intergalactic civilizations with a common invisibility imperative, I wonder if that would make it plausible that far more of the dark matter is just light matter being kept dark. I don't know the astrophysical reasoning well enough to judge. But there is an issue here which I don't think astrophysicists and cosmologists have ever systematically considered: if large portions of the universe are already artificial, how would that affect our cosmological theories? A lot of cosmology involves elaborate inferences from data like observable atomic abundances. If that data is systematically skewed by biogenic or technogenic influences, then so are the inferences.

And speaking of tentativeness, and of elaborate inferences, all these future scenarios and cosmological theories (and the idea that most of the universe is destined to be converted to computronium is a cosmological theory) are hypotheses built on hypotheses. I think we are in a realm here where it is not rational to pretend to be Bayesian and assign these hypotheses probabilities. I suspect that the most we should say is that we have a list of qualitative hypotheses, each of which are internally consistent and consistent with the data, and then perhaps a list of increasingly dubious lesser hypotheses, which we can affirm to be unlikely for some reason or other. But it's not as if we even know the extent of the hypothesis space. An important aspect of the situation is that our list of "possible and not obviously unlikely" scenarios may grow at any time.

However, not only do people not adhere to this standard of objectivity, I doubt that it would even be advisable to enforce this as a norm, except reactively, after new hypotheses are invented and advocated enthusiastically, or new turns of argument are constructed. Thus, I was criticizing Tegmark and you for saying these things about the future as if you really do know them to be likely, but then I went and said I preferred my own idea about how everyone is hiding in the dark matter. Well, that was psychologically accurate, but I "preferred" it because it gave me more to think about - the stratagems and counter-stratagems of these hypothetical civilizations, and so forth. On purely epistemic grounds, I see no reason to prefer that scenario over the one you were originally advancing, namely, there are no civilizations or intelligences of cosmic extent already existing in the observable universe. The Great Filter might really be that strong; I don't know; it's consistent with everything I do know, so I must regard it as possible.

However, if we had both adhered to the principle of being agnostic on matters we don't actually know about, this exchange might never have occurred, and certain possibilities might never have been explored and critiqued. Then again, maybe agnosticism is so psychologically difficult that one should not worry about asserting it too much; the challenge will always be to assert it enough to rein in some of the false claims of knowledge which abound everywhere.

Returning to the original scenario, one more thought. It has a rather more pessimistic interpretation: that perhaps our past light-cone contains many examples of civilizations which got this far and were then destroyed. On that view, the odds are actually against us.

P.S. by referring to galactic "wilderness" I didn't imply a nature-loving value system. Wilderness simply means, a region not transformed by technology or civilized management. Wilderness zones may exist for reasons other than being nice. Maybe they only care about certain phenomena or possibilities which simply don't exist here. The more dubious aspect of this idea is not that they let us suffer, but rather that they are letting us develop technologically, to the threshold of being a threat to their own purposes.

comment by Jordan · 2010-01-06T08:52:05.223Z · LW(p) · GW(p)

Adding insult to injury, cosmologists have found that we're not even made out of the majority substance.

This line seems contrary to the rest of the paragraph. Not being made of the majority substance makes us more exceptional.

comment by PhilGoetz · 2010-01-07T03:49:19.769Z · LW(p) · GW(p)

For starters, we're smaller than we thought.

On the contrary. We now know that we are larger than the universe.

Our solar system has 1 sun and 8 "planets". Our galaxy contains about 300 billion stars. There are about 100 billion galaxies in the universe. That's a mere 3x10^22 stars.

A carbon atom has 1 nucleus and 6 electrons. One human cell contains about 100 trillion atoms. One human contains about 100 trillion cells. I myself contain almost as much mass as 3x10^27 carbon atoms.
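Purely for fun, here's a back-of-the-envelope check of those counts in Python. Every number is just the rough order-of-magnitude figure quoted above, not a measurement:

```python
# Rough counts quoted in the comment above; all order-of-magnitude estimates.
stars_per_galaxy = 3e11      # ~300 billion stars in our galaxy
galaxies = 1e11              # ~100 billion galaxies in the observable universe
stars_in_universe = stars_per_galaxy * galaxies
print(f"stars in the observable universe: {stars_in_universe:.0e}")  # ~3e+22

atoms_per_cell = 1e14        # ~100 trillion atoms per human cell
cells_per_human = 1e14       # ~100 trillion cells per human
atoms_in_human = atoms_per_cell * cells_per_human
print(f"atoms in a human: {atoms_in_human:.0e}")                     # ~1e+28

# A human has ~300,000 times more atomic "parts" than the universe has stars.
print(f"ratio: {atoms_in_human / stars_in_universe:.0e}")            # ~3e+05
```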

I'm also older than the universe. The universe is about 14 billion years old. The natural timescale to use for the universe is hard to estimate. If the solar system is the basic component of our universe with respect to size, then we can use the timescale of the solar system as the basic time unit for the universe.

Suppose we say the unit of time for our solar system is one Jupiter year, or 11.86 earth years. The universe is then 1.2 billion Jupiter-years old.

The natural timescale of me is also hard to estimate. I know I'm going to win, since I'm already more than 1.2 billion seconds old, but let's have a go anyway:

I am a chemical being, so it's fair to consider chemical but not nuclear processes. Hydrogen bonds break and form in less than a nanosecond (CJ Margulis et al. 2002, J. Phys. Chem. B, "Helix Unfolding and Intramolecular Hydrogen Bond Dynamics in Small α-Helices in Explicit Solvent"). So let's consider the nanosecond the natural timescale of me.

This makes my age, on my timescale, just a little less than the square of the universe's age on its timescale.
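A quick sanity check of that claim, assuming a 40-year-old commenter (an assumption; the comment only says "more than 1.2 billion seconds", i.e. 38-odd years):

```python
# Universe's age measured in its "natural" unit: Jupiter years.
universe_age_years = 14e9
jupiter_year = 11.86                             # one Jupiter orbit, in Earth years
universe_age = universe_age_years / jupiter_year
print(f"universe age in Jupiter-years: {universe_age:.2e}")  # ~1.18e+09

# My age measured in my "natural" unit: nanoseconds (hydrogen-bond timescale).
my_age_years = 40                                # assumed; not stated in the comment
seconds_per_year = 365.25 * 24 * 3600
my_age_ns = my_age_years * seconds_per_year * 1e9
print(f"my age in nanoseconds: {my_age_ns:.2e}")             # ~1.26e+18

# Just a little less than the square of the universe's age, as claimed.
print(f"square of universe's age: {universe_age ** 2:.2e}")  # ~1.39e+18
```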

Take that, universe!

And if you want to truly crush the poor universe's spirit... try measuring information content.

comment by timtyler · 2010-01-04T21:23:20.874Z · LW(p) · GW(p)

A bottleneck is - conventionally - a narrow squeeze-through point with wider areas on either side. It is hard to see in what sense the current era represents that sort of "bottleneck".

On the other hand, the founder effect might be relevant in this context.

comment by MatthewB · 2010-01-04T08:15:38.209Z · LW(p) · GW(p)

Isn't there another possibility about intelligent life not having been noticed?

Could it be that they are leaving us alone because they do not recognize us as a useful intelligence, in much the same way that we don't recognize an ant colony as one? (Even though we know an ant colony has a kind of communal intelligence, its structures and presence in the world are just artifacts of what we see as primitive rule-following behavior.)

It might also be the case that they have radically different forms of communication that do not work over EM transmission.

Personally, I don't get too wound up about why there hasn't been a discovery of ET. It's just one of those things that will eventually sort itself out, and in the meantime, I would rather worry about making sure that our world is dressed appropriately for any surprise callers (we really don't want our world to answer the Interstellar Door while in our metaphorical underwear, or during a domestic dispute: husband and wife yelling and throwing things while the kids act up in response).

Replies from: Roko
comment by Roko · 2010-01-04T12:16:03.515Z · LW(p) · GW(p)

See my response to Mitchell_Porter.

Could it be that they are leaving us alone because they do not recognize us as a useful intelligence

But why haven't they already taken all of the matter and energy we're made of? Why is the cosmos in a pristine, isotropic, homogeneous state?

Replies from: SilasBarta, MatthewB
comment by SilasBarta · 2010-01-04T12:57:30.945Z · LW(p) · GW(p)

I agree with most of the points you made in response to Mitchell_Porter, but there's a plausible answer to this:

But why haven't they already taken all of the matter and energy we're made of? Why is the cosmos in a pristine, isotropic, homogeneous state?

Because there are tremendous costs to coming all the way here to take our sun's energy. Any inconvenience we would find in the years needed to travel between the stars would be even worse for a species with superhuman intelligence, because the trip would feel that much longer to them.

I remember a writer on edge.org -- I will find it when I get a chance -- who proposed that a species that reaches a singularity will find a higher return in living in a virtual reality on their home world, where they don't have such communication problems. In that case, they would Dyson-sphere at most their own star, and therefore not interfere with the cosmos's isotropy or homogeneity.

Replies from: Roko, MatthewB
comment by Roko · 2010-01-04T13:38:55.441Z · LW(p) · GW(p)

They would still want to colonize space as quickly as possible, as insurance against some other civilization coming and doing them in with superior resources. They could use non-sentient von Neumann probes to do the colonization; it would be a background process that didn't interfere with their enjoyment of VR.

comment by MatthewB · 2010-01-04T16:30:08.441Z · LW(p) · GW(p)

This was one such idea that I had. I don't get the feeling that a superior intelligence is going to be so petty as to just waltz around the universe sucking up resources. That smacks, to me, of Hollywood horror stories. And considering how poorly we understand game theory in relation to non-human species, it might be that they have a much more cooperative solution to the universe's problems.

After all, any post-Singularity Society should realize that the ultimate goal of their intelligence is to merge with the rest of the universe. Why would that mean that they must eliminate all other intelligences along the way, rather than using those intelligences as a proxy to complete the task of total utilization of the universe's energy?

My point is that we do not yet know enough to even begin speculating on possible motivations for ET, until such a time as we can sort out terrestrial motivations among our respective intelligent species.

And, as I said, we really should be worried about putting our terrestrial house in order first, as this will improve our chances during any first contact situation, whether it be potentially friendly or potentially hostile.


Replies from: AngryParsley
comment by AngryParsley · 2010-01-05T04:32:24.741Z · LW(p) · GW(p)

I don't get the feeling that a superior intelligence is going to be so petty as to just waltz around the universe sucking up resources.

Sucking up resources? Nobody else is using them. They're just being wasted, hastening the heat death. One star will only last billions of years. All the matter in a galaxy could be organized to support a larger civilization for much, much longer.

After all, any post-Singularity Society should realize that the ultimate goal of their intelligence is to merge with the rest of the universe.

You are making a claim about a very large portion of possible minds. Also, I don't even know what you mean by "merge with the rest of the universe."

Why would that mean that they must eliminate all other intelligences along the way, rather than using those intelligences as a proxy to complete the task of total utilization of the universe's energy?

First, aliens that evolved separately wouldn't have similar values. Some of them would actually be threats if they valued paperclips above your civilization. Second, automated probes can do a much better job of organizing matter than some random apes with nukes.

My point is that we do not yet know enough to even begin speculating on possible motivations for ET, until such a time as we can sort out terrestrial motivations among our respective intelligent species.

Actually, yes, we do know enough to speculate. We notice that everywhere we look, there is no evidence of alien intelligences. The lights are on but nobody's home. So there are three possibilities:

  • Every civilization that has ever come into existence destroys itself.

  • Every civilization that has ever come into existence "transcends" in such a way to make the universe indistinguishable from ordinary stars and galaxies.

  • There aren't any other civilizations in our observable universe.

Replies from: Alicorn, MatthewB
comment by Alicorn · 2010-01-05T14:01:46.585Z · LW(p) · GW(p)

Sucking up resources? Nobody else is using them. They're just being wasted...

I'd like to point out that even among the forms of intelligence that we have met, this is not a universally compelling argument. Consider that many people have a strong negative emotional reaction to the idea of putting garbage on the moon (assuming this could be made efficient). Now, objecting to this makes precious little sense. The moon is huge: even if we decide we want a colony there later, we're unlikely to coat its entire surface in waste and leave no room there. In fact, even putting enough trash there to be visible from Earth would take a while if we decided we wanted to avoid that for aesthetic reasons. It just weirds people out to use the moon as a landfill. Why should we expect that aliens wouldn't have analogous reasons to avoid "sucking up" resources that are doing less good by themselves?

Replies from: AngryParsley, pdf23ds
comment by AngryParsley · 2010-01-05T15:57:06.337Z · LW(p) · GW(p)

Sure, if all aliens had values like that, then maybe all of them would stay in their original solar systems. But if even one is different, it would quickly expand.

comment by pdf23ds · 2010-01-06T00:37:03.989Z · LW(p) · GW(p)

It should be noted that right now humans are in a very unstable point in our evolution. Our scruples about garbage are almost certain to be eliminated if we continue to evolve under natural selection. Natural selection rewards the most efficient replicators, period.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-06T00:42:04.526Z · LW(p) · GW(p)

Our scruples about garbage are almost certain to be eliminated if we continue to evolve under natural selection.

By what method? How will agreeing to do things with garbage that we currently prefer not to do help individual humans reproduce more successfully?

comment by MatthewB · 2010-01-05T04:49:38.339Z · LW(p) · GW(p)

My point is that we do not yet know enough to even begin speculating on possible motivations for ET, until such a time as we can sort out terrestrial motivations among our respective intelligent species.

Actually, yes, we do know enough to speculate. We notice that everywhere we look, there is no evidence of alien intelligences. The lights are on but nobody's home. So there are three possibilities:

  • Every civilization that has ever come into existence destroys itself.

  • Every civilization that has ever come into existence "transcends" in such a way to make the universe indistinguishable from ordinary stars and galaxies.

  • There aren't any other civilizations in our observable universe.

OK... Those are all valid reasons why we see no evidence of ETs. I admit that, and I have had some similar thoughts myself.

However, if you will look at what I wrote again:

My point is that we do not yet know enough to even begin speculating on possible motivations for ET, until such a time as we can sort out terrestrial motivations among our respective intelligent species.

This comment is not about whether ET exists or not, but about what might motivate various behaviors, and about the fact that we cannot even sort out the motivations of terrestrial alien species (cetaceans, great apes, birds, elephants, etc.). If we could sort out the motivations of other types of biology for their respective behaviors and goals, it might give us a better frame of reference for why an ET civilization might be acting in a peculiar manner.

Your comment in response to the previous quote of mine (in your post above) actually came closer to addressing what the main thesis of my prior post really was:

First, aliens that evolved separately wouldn't have similar values. Some of them would actually be threats if they valued paperclips above your civilization. Second, automated probes can do a much better job of organizing matter than some random apes with nukes.

This is exactly what I was getting at. An ET that evolved on a different evolutionary path from humanity (and much of Earth life) will have values that we might not be able to recognize or correlate with our own behavior and values.

This was why I said that the consumption of resources might not be their top goal.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-05T05:17:54.027Z · LW(p) · GW(p)

This was why I said that the consumption of resources might not be their top goal.

Addressed at the beginning of my comment. You are making a claim about the values of every technological civilization that has ever existed. More importantly, this value (not utilizing unused resources) is completely the opposite of what most evolved life on Earth does. It's certainly the opposite of what humans do.

Your claim is that every civilization that has come about has said, "Welp, we really like the universe as it already is and we're gonna keep it that way. Moreover, we're so sure that every other alien civilization is going to come to the exact same conclusion as us that we won't even send out some probes to find and sterilize planets where hostile life could evolve."

Replies from: MatthewB
comment by MatthewB · 2010-01-05T06:11:19.272Z · LW(p) · GW(p)

I have made no such universal claim.

I have said that we do not know enough about any potential ET to know that their top priority would be the consumption of resources above all else.

If it has seemed that I made a universal claim about a potential ET, then I retract such a claim.

The only universal that I am claiming is that aliens will be different than humanity (barring a strange situation where all ETs turn out to be humans with latex foreheads), and that we are currently unable to really speculate on what might motivate these aliens.

Not all aliens will have the same motivations, either (unless there is already a universal consciousness of some sort). They are likely to be as varied as life here on Earth, and probably more so.

We may model some of their behavior upon our own, or upon terrestrial non-human intelligences (squid, octopi, elephants, apes, birds...), but all this is likely to do is give us a methodology for interpreting an ET's behavior once one (or many) arrives on Earth.

I am not really even sure what your point is at this juncture, as you have missed the points that I was trying to make in the first posts.

Namely, that we need to put the socio-economic and cultural footings of the Earth into some sort of shape for which we would not be embarrassed. This may make little to no difference to an ET who dropped by (should such ETs even be out there to drop by), but it will make all the difference in the world to us to be able to rapidly bring together a collective to help communicate with such a being.

I am not disagreeing with you about most of your points, only with the assumption that our sun would be sought as a resource by these other intelligences. There may be some ETs who are competitive and antagonistic toward other intelligences, and who will expand from their home making use of whatever they can in the process.

(Pardon me if I am not too clear right now. I have a fever from an infection in one of my legs)

Replies from: thomblake
comment by thomblake · 2010-01-06T16:05:44.334Z · LW(p) · GW(p)

I have a fever from an infection in one of my legs

OT: Skin infection? I get those. I almost died from the first one I got. Left leg doesn't work so well anymore. Be on your guard in the future since it will likely be more vulnerable to infection from now on.

Replies from: MatthewB
comment by MatthewB · 2010-01-07T04:12:39.107Z · LW(p) · GW(p)

Thanks for the warning. I have been living with this for 11 years now, so I understand pretty well how vulnerable I am to infection. In another post somewhere, I may have mentioned that I am disabled because of damage to both legs from a motorcycle accident.

Odds are that I will eventually lose my left leg if things continue the way they are now. It doesn't bother me as much as it used to, and I hope that I will be able to keep my leg.

However, after seeing people like Aimee Mullins, or technologies such as some of the new artificial/bionic legs... I could probably be OK with a fake leg at this point in my life.

After all, if this Singularity business has anything to it, I will probably be getting rid of both of my legs anyway, to replace with cybernetic legs.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-07T16:25:46.265Z · LW(p) · GW(p)

After all, if this Singularity business has anything to it, I will probably be getting rid of both of my legs anyway, to replace with cybernetic legs.

Singularity has nothing to do with "cybernetic legs". See also: Re: Multiple Future Bens on SL4

"So if you're thinking that what you want involves chrome and steel, lasers and shiny buttons to press, neural interfaces, nanotechnology, or whatever great groaning steam engine has a place in your heart, you need to stop writing a science fiction novel with yourself as the main character, and ask yourself who you want to be."

Replies from: Cyan, MatthewB
comment by Cyan · 2010-01-07T16:27:07.206Z · LW(p) · GW(p)

Singularity has nothing to do with "cybernetic legs".

Aw, nuts.

comment by MatthewB · 2010-01-07T16:38:15.036Z · LW(p) · GW(p)

From what I read, the Singularity has to do with all manner of technologies - AI, robotics, nanotechnology (edit: and genetics) - and centers on an explosive growth of intelligence.

There are all manner of scenarios in which this may come about, and in most of them, the peripheral benefits will likely include technologies which will make cybernetic prostheses a pretty trivial thing.

However, all of that aside (and random thoughts by Eliezer from 2002), I am pretty confident that within the next 20 years, technologies such as Dr. Ted Berger's neural prosthesis and the current batch of cybernetic limbs (Google some; they are plentiful, given the wars we are fighting) will give me some form of leg that will allow me to function either as I do now, or better.

This technology may or may not be central to the Singularity (depending upon whose version of it you wish to quote), yet it is technology that will be a consequence of the trip there.

comment by MatthewB · 2010-01-04T12:34:54.856Z · LW(p) · GW(p)

Not a clue. One possibility could be that they do not need to take all of the matter and energy we're made of. Maybe they do not yet have that capability.

I'd rather spend my time worrying about other things. It isn't that I don't find the possibility of ET exciting. I just think that the energy needed to contemplate such "why" questions is better used for other things.