Re: Nanotech

That's exactly my point: if nanotech performs as advertised by its starriest-eyed advocates, then interstellar colonization can be done with small payloads, and energy is cheap enough that they can be launched easily. That is a very big "if," and not one we can shrug off or assume in advance as the underlying principle of all our models.
What if nanotech turns out to have many of the same limits as its closest natural analogue, biological cells? Biotech is great for doing chemistry, but not so great for assembling industrial machinery (like large solar arrays) in a hostile environment.
As for the "nuclear cars and basement reactors" being out of the picture because of politics and not engineering, that's... really quite impressively not true, I think. Fission reactors produce neutrons that slip through most materials like ghosts and will riddle you with radiation unless you stand far away or have excellent shielding. Radioisotope thermoelectric generators require synthetic or refined isotopes that are expensive by nature, because they have to be [i]made[/i], atom by atom... and they're still quite radioactive if they're hot enough to be a useful power source.
The real problem isn't the atomic power source itself; it's the shielding you need to keep it from giving you cancer. There's no easy way to miniaturize that, because neutron capture cross-sections play no favorites and can't be tinkered with.
This stuff is not a toy, and there are very good engineering reasons why it never made the leap from industrial equipment to household use, except at the smallest and most trivial scales (such as the americium in smoke detectors). It's not just about politics.
The way you put it does seem to disparage biologists, yes. The biologists are doing work that is qualitatively different from what physicists do, and that produces results the physicists never will (without the aforementioned thousand tons of computronium, at least). In a very real sense, biologists are exploring an entirely different ideaspace from the one the physicists live in. No amount of investigation into physics in isolation would have given us the theory of evolution, for instance.
And weirdly, I'm not a biologist; I'm an apprentice physicist. I still recognize that they're doing something I'm not, rather than something that I might get around to by just doing enough physics to make their results obvious.
This is profoundly misleading. Physicists already have a good handle on how the things biological systems are made of work, but it's a moot point because trying to explain the details of how living things operate in terms of subatomic particles is a waste of time. Unless you've got a thousand tons of computronium tucked away in your back pocket, you're never going to be able to produce useful results in biology purely by using the results of physics.
Therefore, the actual study of biology is largely separate from physics, except for the very indirect route of quantum physics => molecular chemistry => biochemistry => biology. Most of the research in the field has little to do with that route, and each step in the chain is another level of abstraction that lets you ignore more of the details of how the physics itself works.
I wouldn't have assigned much of a prior probability to either of those common sociobiological beliefs, myself. It would hardly surprise me if they were both complete nonsense.
So what do you mean when you say that these beliefs are "standard" or "widely held?" Obviously, I am not a representative sample of the population, so I may have no opinion on a widely held belief. But I'm not aware of strong evidence that these beliefs are widely held, or at any rate are more widely held than the evidence would warrant.
Or, with tongue firmly in cheek, I claim that I'm presenting counterevidence for the common belief that [insert proposition here] is a common belief...
The catch is that complex models are also usually very wrong. Most possible models of reality are wrong, because there is an infinite legion of models and only one reality. And if you try too hard to create a perfectly nuanced and detailed model because you fear your bias in favor of simple mathematical models, you can fall prey to the opposing bias: the temptation to add an epicycle to your model instead of rethinking your premises. As one of the wiser teachers of one of my wiser teachers said, you can always come up with a function that fits 100 data points perfectly... if you use a 99th-order polynomial.
Naturally, this does not mean that the data are accurately described by a 99th-order polynomial, or that the polynomial has any predictive power worth giving a second glance. Tacking on more complexity and free parameters doesn't guarantee a good theory any more than abstracting them out does.
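Here's a quick sketch of what I mean, in Python, scaled down to 10 points and a 9th-order polynomial so the arithmetic stays numerically well-behaved (the numbers and the sine-plus-noise "reality" are purely my own illustration):

    # An (N-1)-degree polynomial fits N points exactly, and predicts nothing.
    import numpy as np

    rng = np.random.default_rng()
    x = np.linspace(0.0, 1.0, 10)
    y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(10)   # noisy "reality"

    overfit = np.polynomial.Polynomial.fit(x, y, deg=9)   # one coefficient per data point
    modest  = np.polynomial.Polynomial.fit(x, y, deg=3)

    print(float(np.max(np.abs(overfit(x) - y))))   # essentially zero: "explains" every point
    print(overfit(1.1), modest(1.1), np.sin(2 * np.pi * 1.1))
    # Step just past the data and the degree-9 fit typically lands far from the
    # underlying curve, because it has memorized the noise rather than the signal.

The perfect fit buys you exactly nothing once you ask about a point the polynomial hasn't seen.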
From a social psych standpoint, it's very interesting: why do people come up with something, then fail to use it in ways that we would consider obvious and beneficial?
I think a lot of it is hidden infrastructure we don't see, both mental and physical. People need tools to build things, and tools to come up with new ideas: the rules of logic and mathematics may describe the universe, but they are themselves mental tools. Go back to Hellenic civilization and you find a lot of the raw materials for the Industrial Revolution. So what was missing? There are a lot of answers to that question: "cheap slaves messing up the economy," "no precision machining capability," "no mass consumption of timber, coal, and iron in quantities that force the adoption of industrial methods," and so on. They all boil down to "something subtle was missing, so that intelligent people didn't come up with the trick."
I speculate that one of the most important missing pieces was the habit of looking at everything as a source of potential new tricks for changing the world.
I know of no confirmed historical evidence of wheelbarrows being used until around the time of the Peloponnesian War in Greece, and as I understand it they subsequently vanished in the Greco-Roman world for roughly 1600 years until being reintroduced in the Middle Ages. Likewise, wheelbarrows are not evident in Chinese history until the first or second century AD.
So wheelbarrows are an application of wheels, but they're a much later application of the technology, one that did not arise historically until two to four millennia after the invention of the two- or four-wheeled animal-drawn cart.
If we use a broader definition of wheelbarrow as "hand cart," we have older evidence stretching back at least to the ancient Indus Valley some time in the second or third millennium BC.
But if we stick only to inventions we have historical evidence of, there's still a gap of thousands of years between the invention of the wheel and the invention of the hand cart throughout Eurasia. The fact that Montezuma's Aztecs made no use of the wheelbarrow, rickshaw, or hand cart is hardly more remarkable than the fact that Charlemagne's Franks didn't, either.
To make this calculation in a MWI multiverse, you still have to place a zero (or extremely small negative) value on all the branches where you die and take most or all of your species with you. You don't experience them, so they don't matter, right? That's a specialized form of a general question which amounts to "does the universe go away when I'm not looking at it?"
If one can make rational decisions about a universe that doesn't contain oneself in it (and life insurance policies, high-level decorations for valor, and the like suggest this is possible), then outcomes we aren't aware of have to have some nonzero significance, for better or for worse.
As for "question in its own right," I think you misunderstood what I was getting at. If advanced civilizations are probable and all or nearly all of them try to go Omega, and they've all (in our experience, on this worldline) failed, it suggests that the probability must be extremely low, or that the power benefits to be had from going Omega are low enough that we cannot detect them over galaxy-scale distances.
In the first case, the odds of dissenters not drinking the "Omegoid" Kool-Aid increase: the number of people who will accept a multiverse that kills them in 9 branches and makes them gods in the 10th is probably somewhat larger than the number who will accept one that kills them in 999999999 branches and makes them gods in the 10^9th. So you'd expect dissenter cultures to survive the general self-destruction of the civilization and carry on with their existence by mundane means (or try to find a way to improve the reliability of the Omega process).
In the second case (Omega civilizations are not detectable at galactic-scale distances), I would be wary of claiming that the benefits of going Omega are obvious. In which case, again, you'll get more dissenters.
A machine-phase civilization might still find (3a) or (3b) an issue, depending on whether nanotech pans out. We think it will, but we don't really know, and a lot of technologies turn out to be profoundly less capable than the optimists expected back when those technologies were in their infancy. Science fiction authors in the '40s and '50s were predicting that atomic power sources would be heavily miniaturized (amusingly, more so than computing devices); that never happened, and it looks like the minimum size for a reasonably safe nuclear reactor really is a large piece of industrial machinery.
If nanotech does what its greatest enthusiasts expect, then the minimum size of industrial base you need to create a new technological civilization in a completely undeveloped solar system is low (I don't know, probably in the 10-1000 ton range), in which case the payload for your starship is low enough that you might be able to convince people to help you build and launch it. Extremely capable nanotech also helps on the launch end by making the task of organizing the industrial resources to build the ship easier.
But if nanotech doesn't operate at that level, if you actually need to carry machine tools and stockpiles of exotic materials unlikely to be found in asteroid belts and so on... things could be expensive enough that at any point in a civilization's history it can think of something more interesting to do with the resources required to build an interstellar colony ship. Again, if the construction cost of the ship is an order of magnitude greater than the gross planetary product, it won't get built, especially if very few people actually want to ride it.
Also, could you define "singleton" for me, please?
Your aliens are assigning zero weight to their own death, as opposed to a negative weight. While this may be logical, I can certainly imagine a broadly rational intelligent species that doesn't do it.
Consider the problems with doing so. Suppose that Omega offers to give a friend of yours a wonderful life if you let him zap you out of existence. A wonderful life for a friend of yours clearly has a positive weight, but I'd expect you to say "no," because you are assigning a negative weight to death. If you assign a zero weight to an outcome involving your own death, you'd go for it, wouldn't you?
I think a more reasonable weighting vector would say "cessation of existence has a negative value, even if I have no subjective experience of it." It might still be worth it if the probability ratio of "superman to dead" is good enough, but I don't think every rational being would count all the universes without them in it as having zero value.
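For what it's worth, here's a toy version of that weighting in Python, with utility numbers I've made up purely for illustration; the only thing it shows is that whether the gamble looks attractive depends entirely on whether the branches where you cease to exist count as zero or as a genuine loss:

    # Toy expected-value comparison for the "Omega gamble," with invented numbers.
    def gamble_minus_staying_mortal(p_success, u_superman, u_dead, u_mortal):
        """How much better (or worse) taking the gamble is than staying mortal."""
        return p_success * u_superman + (1.0 - p_success) * u_dead - u_mortal

    p = 1e-6            # one branch in a million works out
    u_superman = 1e9    # enormous payoff for godhood
    u_mortal = 100.0    # value of just carrying on as you are

    # Death weighted at zero: the gamble looks good once the payoff is big enough.
    print(gamble_minus_staying_mortal(p, u_superman, u_dead=0.0, u_mortal=u_mortal))    # +900
    # Cessation of existence weighted as a real loss: the same gamble looks terrible.
    print(gamble_minus_staying_mortal(p, u_superman, u_dead=-1e5, u_mortal=u_mortal))   # about -99100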
Moreover, many rational beings might choose to instead work on the procedure that will make them into supermen, hoping to reduce the probability of an extinction event. After all, if becoming a superman with probability 0.0001% is good, how much better to become one with probability 0.1%, or 10%, or even (oh unattainable of unattainables) 1!
Finally, your additional motivation raises a question in its own right: why haven't we encountered an Omega Civilization yet? If intelligence is common enough that our failure to find it demands an explanation, then it is highly unlikely that any Omega Civilizations exist in our galaxy. For going Omega to be tempting enough to justify the risks we're talking about, I'd say it would have to raise your civilization to the point of being a significant powerhouse on an interstellar or galactic scale. In which case it should be far easier for mundane civilizations to detect evidence of an Omega Civilization than to detect ordinary civilizations that lack the resources to do things like juggle Dyson spheres and warp the fabric of reality to their whims.
The only explanation for this is that the probability of some civilization within range of us (either in range to reach us, or to be detected by us) having gone Omega in the history of the universe is low. But if that's true, then the odds of the procedure working are also low enough that I'd expect to see more dissenters within the advanced civilizations that try to ascend, who then proceed to do things the old-fashioned way.
Good points. However: (1) Most of the cataclysms we see are either fairly explicable (supernovae) or seem to occur only at remote points in spacetime, early in the evolution of the universe, when the emergence of intelligent life would have been very unlikely. Quasars and gamma ray bursts cannot plausibly be industrial accidents in my opinion, and supernovae need not be industrial accidents.
(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that "99.9999% death plus 0.0001% superman" is inferior to "continued mortal existence."
(3) Again possible, but there will be a selection effect over time. Eventually, the remaining people (who, you will notice, live in a universe where people who try to ascend to godhood always die) will no longer think ascending to godhood is a good idea. Maybe the ancients were right and there really is a small chance that the ascent process works and doesn't kill you, but you have never seen it work, and you have seen your civilization nearly exterminated by the power-hungry fools who tried it the last ten times.
At what point do you decide that it's more likely that the ancients did the math wrong and the procedure just flat out does not work?
(4) The minority might have no problems with risks that do not have a track record of killing everybody. However, you have a point: a rational civilization that expects the galaxy to be heavily populated might be well advised to hide.
From our perspective, this is indistinguishable from (2): all advanced civilizations die off in massive industrial accidents; God alone knows what they thought they were trying to accomplish.
Also, wouldn't there still be people who chose to stay behind? Unless we're talking about something that blows up entire solar systems, it would remain possible for members of the advanced civilization to opt out of this very tempting choice. And I feel confident that for at least some civilizations, there will be people who refuse to bite and say "OK, you guys go inhabit a tiny subset of all universes as gods; we will stay behind and occupy all remaining universes as mortals."
If this process goes on for a while, you end up with a residual civilization composed overwhelmingly of people who harbor strong memes against taking extremely low-probability, high-payoff risks, even when the probability arithmetic favors doing so.
For your proposal to work, it has to be an all-or-nothing thing that affects every member of the species, or affects a broad enough area that the people who aren't interested have no choice but to play along because there's no escape from the blast radius of the "might make you God, probably kills you" machine. The former is unlikely because it requires technomagic; the latter strikes me as possible only if it triggers events we could detect at long range.
This is my hypothesis (3c), with an implicit overlay of (3a).
Here goes:
Alternate explanations for rarity of intelligence:
3a) Interstellar travel is prohibitively difficult. The fact that the galaxy isn't obviously awash in intelligence is a sign that FTL travel is impossible or utterly impractical.
Barring technology indistinguishable from magic, building any kind of STL colonizer would involve a great investment of resources for a questionable return; intelligent beings might just look at the numbers and decide not to bother. At most, the typical modern civilization might send probes out to the nearest stellar neighbors. If the cost of sending a ton of cargo to Alpha Centauri is, say, 0.0001% of your civilization's annual GDP, you're not likely to see anyone sending million-ton colony ships there. In which case intelligent life might be relatively common in the galaxy without any of it coming here; even the more ambitious cultures that actually did bother to make the trip to the nearest stars would tend to peter out over time rather than going through exponential expansion.
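Just to make the arithmetic behind that figure explicit (these are the same illustrative numbers as above, not serious estimates):

    # Back-of-the-envelope cost of a colony ship, using the illustrative figures above.
    cost_per_ton = 0.0001 / 100          # "0.0001% of annual GDP" per ton, as a fraction
    ship_mass_tons = 1_000_000           # a million-ton colony ship

    print(cost_per_ton * ship_mass_tons) # 1.0 -- an entire year of the civilization's
                                         # total economic output, for a single ship

and that's before counting whatever industrial base the colonists have to bring along.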
3b) Interstellar colonization is prohibitively difficult. If sending an STL colony expedition to another star is hard, sending one with a large enough logistics base to terraform a planet will be exponentially harder.
There are something on the order of 1000 stars within 50 to 60 light years of us. Assuming more or less uniform stellar densities, if the probability of a habitable planet appearing around any given star is much less than 0.1%, it's likely that such planets will remain permanently out of reach for a sublight colony ship. In that case, spreading one's civilization throughout the galaxy depends on being able to terraform planets across interstellar distances before setting up a large population on those worlds. Even if travel across short (~10 ly) interstellar distances is not prohibitively difficult, there might still be little or no incentive to colonize the available worlds beyond one's own star system. After all, if you're going to live in a climate-controlled bunker on an uninhabitable rock where you can't step outside without being freeze-dried or boiled alive, you might as well do it somewhere closer to home.
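A quick sketch of that probability argument, using the rough figures above (about 1000 candidate stars, and treating each star as an independent coin flip, which is of course a simplification):

    # Expected habitable worlds within colony-ship range, for a few per-star probabilities.
    n_stars_in_range = 1000

    for p_habitable in (0.01, 0.001, 0.0001):            # 1%, 0.1%, 0.01% per star
        expected_worlds = n_stars_in_range * p_habitable
        p_at_least_one = 1.0 - (1.0 - p_habitable) ** n_stars_in_range
        print(p_habitable, expected_worlds, round(p_at_least_one, 3))
    # At 1% per star a target in range is essentially certain; at 0.1% it's
    # about 0.63; at 0.01% it's only about 0.10.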
NOTE: This amounts to "super-difficult life," but it does not require that there are few intelligent species in the galaxy. Even if the emergence of life is (for lack of a better term) super-duper-difficult, or most planets are inhospitable enough to make it impossible, a galaxy of a few hundred billion stars could still hold many thousands of intelligent species; they would simply be spread so thin that none of them is likely to reach any of the others.
3c) Interstellar colonization might be "psychologically" difficult. For instance, what if the next logical step in the evolution of modern civilization is an AI singularity, possibly coupled with some kind of uploading of consciousness into machines? Either way, our descendants of 200 years from now might well be, to our eyes, a civilization of robots. To a society of strong AIs, interstellar colonization is liable to look a little different. Traveling to even the nearest stars, you will be cut off from the rest of your civilization by a transmission gap on the order of 10^20 cycles just because of the lightspeed limit.*
That might sound like an even worse idea to them than spending a long lifetime in cryogenic storage with a twenty-year round-trip communication lag to Earth does to us. In which case they're likely to stay at home and come up with elaborate social activities or simulations to occupy their time, because interstellar colonization is just too unpleasant to bear considering.
*Assuming roughly 1 THz computing, for relatively near stellar neighbors. This estimate is probably too low, but I need some numbers and I am nowhere near an expert on artificial intelligence or the probable limits of computer technology.
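Here's roughly where that 10^20 figure comes from, given the 1 THz assumption and a one-way light lag of about 4.4 years to the nearest stellar neighbors:

    # Subjective "cycles" a 1 THz mind spends waiting on one-way light lag.
    seconds_per_year = 3.156e7
    clock_hz = 1e12                  # the assumed ~1 THz machine civilization
    one_way_lag_years = 4.4          # roughly the distance to Alpha Centauri

    print(one_way_lag_years * seconds_per_year * clock_hz)   # ~1.4e20 cycles per message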
I dunno. I mean, a lot of horror stories that are famous for being good talk about stuff that can never be and should never be, but that nonetheless (in-story) is. I think it's that sense of a comforting belief about the world being violated that makes a good horror story, even if the prior probability of that belief being wrong is low.
I think you're misreading the story. It's not an argument in favor of irrationality, it's a horror story. The catch is that it's a good horror story, directed at the rationalist community. Like most good horror stories, it plays off a specific fear of its audience.
You may be immune to the lingering dread created by looking at all those foolish happy people around you and wondering if maybe you are the one doing something wrong. Or the fear that even if you act as rationally as you can, you could still box yourself into a trap you won't be able to think your way back out of. But quite a few of your peers are not so immune. I know I'm not, and that story managed to scare me pretty effectively.
The protagonist isn't an ideal rationalist, and the story isn't trying to assert that this is what the ideal rationalist does. Instead, the protagonist is an adolescent proto-rationalist, of a type many of us are familiar with, with her social instincts sucking her into a trap that a lot of us can understand well enough to dread.
And the reason she thinks and acts like a Hollywood stereotype of an intelligent person is that some intelligent people really do act that way, especially when they're just barely at the age of being able to really think at all. Where do you think Hollywood got the idea for the stereotype in the first place?
I submit that the reason so many average people think intelligent people act that way is that they lose social contact with the geniuses in high school, which is precisely when the geniuses do think and act like that.
For a lot of the smartest people, being socially functional is a learned skill that comes late and not easily.
Countercounterevidence for 3: what are the assumptions made by those models of interstellar colonization?
Do they assume fusion power? We don't know if industrial fusion power works economically enough to power starships. Likewise for nanotech-type von Neumann machines and other tools of space colonization.
The adjustable parameters in any model for interstellar colonization are defined by the limits of capability for a technological civilization. And we don't actually know the limits, because we haven't gotten close enough to those limits to probe them yet. If the future looks like the more optimistic hard science fiction authors suggest, then the galaxy should be full of intelligence and we should be able to spot the drive flares of Orion-powered ships flitting around, or the construction of Dyson spheres by the more ambitious species. We should be able to see something, at any rate.
But if the future doesn't look like that, if there's no way to build cost-effective fusion reactors and the only really worthwhile sustainable power source is solar, if there are hard limits on what nanotech is capable of that limit its industrial applications, and so on... the barrier to entry for a planetary civilization hoping to go galactic may be so high that even with thousands of intelligent species to make the attempt, none of them make it.
This ties back into the hypotheses I left out of my post for the sake of brevity; I'm now considering throwing them in to explain my reasoning a little better. But I'm still not sure I should do it without invitation, because they are on the long side.
One thing that caught my eye is the presentation of "Universe is not filled with technical civilizations..." as data against the hypothesis of modern civilizations being probable.
It occurs to me that this could mean any of three things, only one of which indicates that modern civilizations are improbable.
1) Modern civilizations are in fact as rare as they appear to be because they are unlikely to emerge. This is the interpretation used by this article.
2) Modern civilizations collapse quickly back to a premodern state, either by fighting a very destructive war, by high-probability natural disasters, by running out of critical resources, or by a cataclysmic industrial accident such as major climate change or a Gray Goo event.
This would undermine an attempt to judge the odds of modern civilizations emerging based on a small sample size. If (2) is true, the fact that we haven't seen a modern civilization doesn't mean it doesn't exist; it's more likely to mean that it didn't last long enough to appear on our metaphorical radar. All we know with high confidence is that there haven't been any modern civilizations on Earth before us, which places an upper bound on the likely range of probabilities for it to happen; Earth may be a late bloomer, but it's unlikely to be such a late bloomer that three or four civilizations would have had time to emerge before we got here.
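To put a rough number on that intuition (purely illustrative, and assuming emergences are independent and roughly Poisson-distributed over Earth's habitable history):

    # Chance of seeing zero earlier civilizations, for a given expected number of emergences.
    import math

    for expected_emergences in (0.5, 1.0, 3.0, 4.0):
        print(expected_emergences, round(math.exp(-expected_emergences), 3))
    # 0.5 -> 0.607, 1.0 -> 0.368, 3.0 -> 0.05, 4.0 -> 0.018
    # If conditions had been good enough for three or four civilizations to emerge
    # here before us, observing none (as we do) would be a few-percent coincidence.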
3) The apparent rarity of modern civilizations could just be a sign that we are bad at detecting them. We know that alien civilizations haven't visited us in the historic past, that they haven't colonized Earth before we got here, and that they haven't beamed detectable transmissions at us, but all of those could quite plausibly be explained by other factors. Some hypotheses come to mind for me, but I removed them for the sake of brevity; they're available if anyone's interested.
Anyway, where I was going with all this: I can see a lot of alternate interpretations to explain the fact that we haven't detected evidence of modern civilizations in our galaxy, some of which would make it hard to infer anything about the likelihood of civilizations emerging from the history of our own planet. That doesn't mean I think that considering the problem isn't worthwhile, though.