Could a single alien message destroy us?

post by Writer, Matthew Barnett (matthew-barnett) · 2022-11-25T07:32:24.889Z · LW · GW · 23 comments

This is a link post for https://youtu.be/st9EJg_t6yc



Merely listening to alien messages might pose an extinction risk, perhaps even more so than sending messages into outer space. Our new video explores the threat posed by passive SETI and potential mitigation strategies.

Below, you can find the script of the video. Matthew Barnett, the author of this related post [LW · GW], wrote the first draft. Most of the original draft survives, but I've significantly restructured it and made edits, deletions, and additions.


One day, a few Earthly astronomers discover something truly remarkable. They’ve pointed their radio telescopes at a previously unexplored patch of the night sky and recorded a binary message too inexplicable to have come from any natural source. Curious about what the distant aliens have sent us, the scientists begin trying to decipher the message. After an arduous process of code-breaking, they find that the message encodes instructions for building a device. Unfortunately, the aliens left no description of what the device actually does.

Excited to share their discovery with the world, the astronomers agree to publish the alien instructions on the internet, and send a report to the United Nations. Immediately, the news captivates the entire world. For once, there is indisputable proof that we are not alone in the universe. And what’s more: the aliens have sent us a present, and no one knows what its purpose might be.

In a breathtaking frenzy that surpasses even the Space Race of the 1960s, engineers around the world rush to follow these instructions, to uncover the secrets behind the gift the aliens have left for us.

But soon after, a horrifying truth is revealed: the instructions do not describe a cure for all diseases or a method of solving world hunger. Rather, the aliens have sent us explicit, easy-to-follow instructions on how to build a very powerful bomb: an antimatter explosive device with a yield of over one thousand hydrogen bombs. The most horrifying part is that the instructions require only common household materials, combined in just the right way.

The horror of this development begins to sink in around the world. Many propose censoring the information in an attempt to prevent a catastrophe. But the reality is that the information is already loose. Sooner or later, someone will build the bomb, either out of raw curiosity or deliberate ill intent. And then, right after that, the world will end.

This story is unrealistic. In real life, there’s probably no way to combine common household materials in just the right way to produce an antimatter bomb. Rather, the story illustrates the risk we take by listening for messages in the night sky while being careless about how any messages we find are disseminated.

With this video, we don’t want to argue that humanity will necessarily go extinct if we listen to alien messages, nor that this is necessarily among the biggest threats we’re facing. The probability that humanity goes extinct in exactly this way is small, but the risk we take by listening to alien messages is still worth considering. As with all potential existential threats, the entire future of humanity is at stake.

We’ll model alien civilizations as being “grabby”, in the sense described by Robin Hanson’s paper on Grabby Aliens, which we covered in two previous videos. Grabby civilizations expand at a non-negligible fraction of the speed of light, and occupy all available star systems in their wake. By doing so, every grabby civilization creates a sphere of expanding influence. Together, all the grabby civilizations will one day enclose the universe with technology and intelligently designed structures.

However, since grabby aliens cannot expand at the speed of light, there is a second larger sphere centered around every grabby civilization’s origin, which is defined by the earliest radio signals sent by the alien civilization as it first gained the capacity for deep-space communication. This larger sphere expands at the speed of light, faster than the grabby civilization itself.

Let’s call the space between the first and second spheres the “outer shell” of the grabby alien civilization. If grabby alien civilizations leave highly distinct marks on the galaxies and star systems they’ve occupied, then their civilization should be visible to any observers within this outer shell. As we noted in the grabby aliens videos, if we were in the outer shell of a grabby alien civilization, it would likely appear large in our night sky. On the other hand, if grabby civilizations leave more subtle traces that we can’t currently spot with our technology, that would explain why we aren’t seeing them.

In this video, let’s assume that grabby aliens leave more subtle traces on the cosmos, making it plausible that Earth is in the outer shell of a grabby alien civilization right now without our realizing it. This is a variation on the model, but it leaves the basics of the Grabby Aliens theory intact.

Here’s where things could turn dangerous for humanity. If, for example, a grabby alien civilization felt threatened by competition it might encounter in the future, it could try to wipe out potential competitors inside this outer shell before they ever got the chance to meet physically. It could do this by sending out a manipulative deep-space message: any budding civilization in the outer shell gullible enough to listen would be manipulated into destroying itself.

In our illustrative story, the example was instructions for building an antimatter bomb from household materials. A more realistic possibility might be instructions for building an advanced artificial intelligence that then turns out to be malicious.

We could make a number of plausible hypotheses about the content of the message, but it’s difficult to foresee what it would actually contain, as the alien civilization would be far more advanced than we are and, potentially, millions of years old. They would have much more advanced technology and plenty of time to think carefully about what messages to send to budding civilizations. They could spend centuries crafting the perfect message to hijack or destroy infant civilizations unfortunate enough to tune in.

But maybe you’re still unconvinced. Potential first contact with aliens could even be the best thing ever to happen to humanity. Aliens might be very friendly to us, and could send us information that would help our civilization and raise our well-being to unprecedented levels.

Perhaps this whole idea is rather silly. Our parochial, tribal brains are simply blind to the reality that very advanced aliens would have abandoned warfare, domination, and cold-hearted manipulation long ago, and would instead be devoted to the mission of uplifting all sentient life.

On the other hand, life on other planets probably arose through survival of the fittest, as our species did, which generally favors organisms that are expansionist and greedy for resources. Furthermore, we are more likely to receive a message from an expansionist civilization than from a non-expansionist one, since the latter will command far fewer resources and will presumably be more isolated from one another. This gives us even more reason to expect that any alien civilization we detect might try to initiate a first strike against us.

It’s also important to keep in mind that the risk of a malicious alien message is significant even if we think aliens are likely to be friendly. For instance, even if we believe that 90% of alien civilizations in the universe will be friendly to us in the future, the prospect of encountering the 10% that are unfriendly could be so horrifying that we are better off plugging our ears and tuning out for now, at least until we grow up as a species and figure out how to handle such information without triggering a catastrophe.

But even if SETI is dangerous, banning the search for extraterrestrial intelligence is an unrealistic goal at this moment in time. Even if it were the right thing to do to mitigate risk of premature human extinction, there is practically no chance that enough people will be convinced that this is the right course of action.

More realistically, we should instead think about what rules and norms humanity should adopt to robustly give our civilization a better chance at surviving a malicious SETI attack.

As a start, it seems wise to put in place a policy to review any confirmed alien messages for signs that they might be dangerous, before releasing any potentially devastating information to the public.

Consider two possible policies we could implement concerning how we review alien messages. 

In the first policy, we treat every alien message with an abundance of caution. After a signal from outer space is confirmed to be a genuine message from extraterrestrials, humanity forms a committee with the express purpose of debating whether this information should be released to the public, or whether it should be sealed away for at least another few decades, at which point another debate will take place.

In the second policy, after a signal is confirmed to be a genuine message from aliens, we immediately release all the data publicly, flooding the internet with whatever information aliens have sent us. In this second policy, there is no review process; everything we receive from aliens, no matter the content, is instantly declassified and handed over to the wider world without a moment’s hesitation.

If you are even mildly sympathetic to our thesis here — that SETI is risky for humanity — you probably agree that the second policy would be needlessly reckless, and might put our species in danger. Yet, the second policy is precisely what the influential SETI Institute recommends humanity do in the event of successful alien contact. You can find more information in their document titled Protocols for an ETI Signal Detection, which was adopted unanimously by the SETI Permanent Study Group of the International Academy of Astronautics in 2010.

The idea that SETI might be dangerous is not new. It was perhaps first showcased in the 1961 British drama serial A for Andromeda, in which aliens from Andromeda sent humanity instructions for building an artificial intelligence whose final goal was to subjugate humanity. In the show, humans ended up victorious over the alien artificial intelligence, but we would not be so lucky in the real world.

In intellectual communities and academia, the idea that SETI is dangerous has received very little attention, either positive or negative. Instead, the spotlight has gone to the risk from METI: sending messages into outer space rather than listening for them. This might explain why, as a species, we do not currently appear to be taking the risk from SETI very seriously.

Yet it’s imperative that humanity safeguard its own survival. If we survive the next few centuries, we have great potential as a species. In the long run, we could reach the stars and become a grabby civilization ourselves, potentially expanding into thousands or millions of galaxies and creating trillions of worthwhile lives. Without endangering lives already present in other star systems, of course! To ensure we have a promising future, let’s proceed carefully with SETI. It could end up being the most important decision we ever make.


 

23 comments


comment by Lao Mein (derpherpize) · 2022-11-25T13:25:28.404Z · LW(p) · GW(p)

The band of malicious messages generated by advanced aliens that will both kill us and be detected by a panel of human experts seems extremely narrow. We're talking about the FDA being able to outsmart an AI running on a Matrioshka brain levels of narrow.

We then have to weigh the potential gains we get from utilizing the message without having to wait for the speed of bureaucracy. How many lives could we save by earlier implementation of alien technology? Plus, confirmation of alien civilizations makes existential risk just regular risk. If we can see them, the fate of our light cone depends on their magnanimity anyways.

My calculations put the odds of a committee being able to achieve any good so low that we're better off just not having one.

Replies from: lc
comment by lc · 2022-11-25T13:50:18.184Z · LW(p) · GW(p)

We're talking about the FDA being able to outsmart an AI running on a Matrioshka brain levels of narrow.

To be fair, it's a really hard problem to send messages to arbitrary unknown civilizations that turn their planets into computronium even when they're skeptical. Maybe we'll get lucky and the best possible message is just a spec for some technology we can't verify.

comment by subconvergence · 2022-11-25T22:42:26.791Z · LW(p) · GW(p)

Any pointers on how it would even be possible for an alien civilisation to transmit complex instructions that could be deciphered?

Given a radio signal, I see how you could determine that it’s not natural, and then what?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2022-11-29T02:24:00.220Z · LW(p) · GW(p)

If the alien civilization is no more advanced than we are, then it probably cannot send us a message that would have a drastic effect on us.

But if the aliens can create powerful AI, they can just send that.

Specifically, the alien message might start with the first 50 primes, followed by a computer program that finds prime numbers -- written in some simple programming language. That is followed by a few more simple programs in the same programming language. Then comes the powerful AI (as a computer program in the same simple programming language). There are 100s of 1000s of people with the skills to use the examples in the first part of the message to get the AI in the second part of the message running on a computer here on Earth. 12 hours after such an alien message is published worldwide here on Earth, I expect that dozens of people would have the powerful AI running at good efficiency on their PCs. Obviously, I would prefer for SETI to retract their policy of publishing any alien message they might receive.
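To make the shape of such a bootstrap header concrete, here is a minimal sketch in Python, with Python standing in for whatever simple language the aliens would actually use; the function name and the trial-division approach are purely illustrative assumptions, not anything specified in the comment above:

```python
# Hypothetical sketch of the opening of the bootstrap message described above:
# first the raw data (the first 50 primes), then a short program that
# reproduces it, so recipients can pin down the language's semantics by
# checking the program's output against data they can verify independently.

def first_primes(n):
    """Return the first n prime numbers, found by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

if __name__ == "__main__":
    # The literal list sent at the start of the message would match this output.
    print(first_primes(50))
```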

One simple program that is particularly illuminating for specifying a programming language is a simple interpreter for the same language: educational materials often call such a metacircular evaluator. Here is one example, but there are many. That example of course contains a lot of English prose intended to help the student understand the code. But there are 100s of 1000s of people who don't need the English prose: the code is probably all information they would need to get any program written in the same programming language running at good efficiency on their PC. The only reason I need the qualifier "probably" in the previous sentence is that there is a chance that a single program won't be enough: one or 2 more simple "sample" programs, e.g., for generating the Fibonacci sequence, might be required.
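For a sense of how compact such a sample program can be, here is a rough sketch of a toy interpreter for a tiny Lisp-like language. It is written in Python for readability rather than in the language it interprets, so it is not literally metacircular, and every identifier in it is invented for this illustration:

```python
# A toy interpreter for a tiny Lisp-like language. A true metacircular
# evaluator would be written in the language it interprets; Python is used
# here only to keep the sketch readable.

import operator

def tokenize(src):
    """Split source text into parentheses and atoms."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Read one expression from the token list (consuming tokens in place)."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard ")"
        return expr
    try:
        return int(token)
    except ValueError:
        return token  # a symbol

GLOBAL_ENV = {"+": operator.add, "-": operator.sub,
              "*": operator.mul, "<": operator.lt}

def evaluate(expr, env=GLOBAL_ENV):
    """Evaluate a parsed expression in the given environment."""
    if isinstance(expr, str):              # symbol lookup
        return env[expr]
    if isinstance(expr, int):              # numeric literal
        return expr
    head = expr[0]
    if head == "if":                       # (if test then else)
        _, test, then, alt = expr
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "define":                   # (define name value)
        _, name, value = expr
        env[name] = evaluate(value, env)
        return env[name]
    if head == "lambda":                   # (lambda (params...) body)
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(head, env)               # function application
    return fn(*(evaluate(arg, env) for arg in expr[1:]))

if __name__ == "__main__":
    evaluate(parse(tokenize(
        "(define fib (lambda (n) (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))))")))
    print(evaluate(parse(tokenize("(fib 10)"))))  # prints 55
```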

Keep in mind that the aliens are trying to make it as easy as possible for as many recipients as possible to run their code performantly.

Can the humans with the most skill in computing performantly run a complex computer program if all the information they have to go on is a few simple "sample programs" written in the same never-seen-before-by-humans programming language? -- that is the central issue. The answer is, yes, they can. Many of them can. And some of those many will be reckless enough to run the alien AI.

comment by cata · 2022-11-25T09:20:22.805Z · LW(p) · GW(p)

Related: https://en.wikipedia.org/wiki/His_Master's_Voice_(novel)

Replies from: avturchin
comment by avturchin · 2022-11-25T11:15:22.417Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/A_for_Andromeda 

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2022-11-25T11:20:38.171Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Macroscope_(novel)

Replies from: Darmani
comment by Darmani · 2022-11-25T19:20:05.693Z · LW(p) · GW(p)

https://galciv3.fandom.com/wiki/The_Galactic_Civilizations_Story#Humanity_and_Hyperdrive

comment by avturchin · 2022-11-25T08:41:39.097Z · LW(p) · GW(p)

There is a greater chance of observing self-replicating SETI messages than those that destroy planets

Replies from: MakoYass
comment by mako yass (MakoYass) · 2022-11-25T22:50:53.970Z · LW(p) · GW(p)

This depends on how fast they're spreading physically. If the spread rate is close to c, I don't think that's the case; I think it's more likely that our first contact will come from a civ that hasn't received contact from any other civs yet (and SETI attacks would rarely land: most civs who hear them would be either too primitive or too developed to be vulnerable to them before their senders arrive physically).

Additionally, I don't think a viral SETI attack would be less destructive than what's being described.

Replies from: avturchin
comment by avturchin · 2022-11-26T11:10:55.045Z · LW(p) · GW(p)

Yes, SETI attacks work only if the speed of civ travel is something like 0.5c. In that case the message covers 8 times more volume than physical travel.
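A quick sketch of the arithmetic behind that factor of eight, assuming the radio signal and the physical expansion front both start from the same origin and spread for the same time t:

```latex
% Signal sphere vs. physical-expansion sphere after time t:
%   signal radius  r_s = c t
%   travel radius  r_p = 0.5 c t
\[
  \frac{V_{\text{signal}}}{V_{\text{travel}}}
  = \left(\frac{c\,t}{0.5\,c\,t}\right)^{3}
  = 2^{3}
  = 8
\]
```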

And yes, it would also be destructive, but in a different manner: not bombs, but AIs and self-replicating nanobots would appear.

comment by Foyle (robert-lynn) · 2022-11-25T21:47:08.450Z · LW(p) · GW(p)

The gap between the invention of radio and superintelligent AI in our case (and perhaps in most cases of the evolution of intelligent life) appears to be <150 years. That's a pretty narrow window to hit unless we are being actively observed - and that would likely imply they have had time to notice multicellular life on Earth and get observers to us at low fractions of light speed.

If intelligent (inevitably superintelligent) Aliens exist and care about physical reality beyond their own stellar system then they can and likely will spread out to have a presence in every interesting star system in the galaxy within a million years - and planets with multicellular life are likely highly anomalous and interesting for curious Aliens.

It would be hard to believe that this hasn't already happened given 1-4e11 stars and a 5-10e9 year 'window for life' in the Milky Way, making the zoo hypothesis in my mind the most likely solution to the Fermi paradox (with weak anecdotal evidence in the form of seemingly increasingly furtive UFOs over the last century). Evolution selects for aliens that choose to propagate and endure, and the technology to do so is almost trivially easy once intelligence and superintelligence evolve, so if intelligence has evolved in the Milky Way and cares about other species developing, then it is clearly not hegemonic (evidenced by our continuing existence) and is likely already here.

If all this is the case - and aliens are here watching us - then it also provides an existence proof that Alignment is possible. Conversely, if they are not here, then that is perhaps weak evidence that Alignment is not possible - that superintelligent AI is either auto-extinguishing or almost universally disinterested in biological life.

comment by LGS · 2022-11-26T18:49:07.232Z · LW(p) · GW(p)

If SETI discovers alien messages, it seems likely that a lot of different players around the world will also be able to record these messages (anyone with a sufficiently good receiver -- I'm not sure how many of these there are right now, but surely more will develop within a couple years of the alien-message discovery).

At that point, I'm not sure if preventing the message from leaking is all that plausible. So even conditioned on all your assumptions, your plan buys the world, what, a couple of years? I mean, unless you think the existence of the alien message can be kept secret, but I highly doubt that.

comment by Slider · 2022-11-25T15:03:20.980Z · LW(p) · GW(p)

On the other hand, life on other planets probably arose by survival of the fittest, as our species did, which generally favors organisms that are expansionist and greedy for resources.

What is this based on? It is not survival of the spreadiest but the fittest. And it is not like r-selection is superior to K-selection.

Replies from: conor-sullivan
comment by Lone Pine (conor-sullivan) · 2022-11-26T19:16:04.785Z · LW(p) · GW(p)

Well, in the cosmic scheme, civilizations that are not expansionist and greedy control a smaller share of the universe and therefore matter less.

Replies from: Slider
comment by Slider · 2022-11-26T19:51:36.889Z · LW(p) · GW(p)

The claim is about survival of the fittest being a thing that produces especially many expansionist species.

I could argue in the same way that only species that manage to coordinate enough to get their spaceships flying make it to the stars. So there is a threshold of divisiveness and internal conflict above which we do not need to consider a species at all. This would lead us to think that any intelligent multi-millennia life would be relatively peaceful.

For example, in viruses, while the spreadiest are in fact the annoying ones, a fit virus also tends to be rather mild on those that carry it. If you are a grabby life form which mandatorily needs to acquire more resources, then if you ever get to a point where you can't get them, you will implode. This could happen, for example, by being stuck in globalization before becoming interplanetary, where the strategy of "just walk further toward the horizon" stops working. Rome liked its war loot, but having more and more provinces makes upkeep more and more challenging. The "average state" is not especially explosive. Survival of the fittest by itself is not a reason to think its products would be explosive.

comment by M. Y. Zuo · 2022-11-25T12:44:13.492Z · LW(p) · GW(p)

Does this message also contain an ulterior motive?

If so, or if not, how can we conclusively determine either?

Maybe instead of sharing it over the internet with public access, this should have been debated extensively in an expert committee? 

Replies from: MakoYass
comment by mako yass (MakoYass) · 2022-11-25T23:02:19.092Z · LW(p) · GW(p)

Ack. Despite the fact that we've been having the AI boxing/infohazards conversation for like a decade I still don't feel like I have a robust sense of how to decide whether a source is going to feed me or hack me. The criterion I've been operating on is like, "if it's too much smarter than me, assume it can get me to do things that aren't in my own interest", but most egregores/epistemic networks, which I'm completely reliant upon, are much smarter than me, so that can't be right.

Replies from: Vivek, M. Y. Zuo
comment by Vivek Hebbar (Vivek) · 2022-11-26T01:01:27.636Z · LW(p) · GW(p)

most egregores/epistemic networks, which I'm completely reliant upon, are much smarter than me, so that can't be right

*Egregore smiles*

comment by M. Y. Zuo · 2022-11-26T22:56:15.461Z · LW(p) · GW(p)

The wisest know nothing.

comment by Dagon · 2022-11-28T20:07:12.034Z · LW(p) · GW(p)

We're pretty darned fragile.  Incontrovertible proof of intelligent aliens might destabilize societies and lead to our destruction, regardless of the content of any messages.  In fact, we might destroy ourselves regardless of any aliens or messages.  If it comes temporally after receiving a message, I'm sure some of the aliens will want to take credit.

comment by Ansel · 2022-11-28T16:06:54.929Z · LW(p) · GW(p)

In my opinion, the risk analysis here is fundamentally flawed. Here's my take on the two main SETI scenarios proposed in the OP:

Automatic disclosure SETI - all potential messages are disclosed to the public before analysis. This is dangerous if it is possible to send EDM (Extremely Dangerous Messages - world-exploding/world-hacking), and plausible to expect that they would be sent.

Committee vetting SETI - all potential messages are reviewed by a committee of experts, who have the option of unilaterally concealing information they deem to be dangerous.

The argument in the OP hinges on portraying the first scenario as risky, with the second scenario motivated by avoiding that risk. But the risk to be avoided there is fully theoretical; there's no concrete basis for EDM (obviously, if smart people think there can be or should be a concrete basis for them, I'd love to see it fleshed out).

But the second scenario has much more plausible risk! Conditioned on both scenarios eventually receiving alien messages, the second scenario could still be dangerous even if EDM aren't real. By handling alien messages with unilateral secrecy, you're creating a situation where normal human incentives for wealth, personal aggrandizement, or even altruistic principles could lead a small, insular group to try to seize power using alien technology. The main assumption needed for this risk to be a factor is that aliens sending us messages could have significantly superior technology. This seems more plausible than the existence of EDM, which is after all essentially the same claim but incredibly stronger.

Some people probably even see the ability to seize power with alien tech as a feature. But I think this is an underdiscussed and essential aspect of the analysis of public-disclosure SETI vs secret-committee SETI. To my mind, it dominates the risk of EDM until there's a basis for claiming that EDM are real.

comment by Lao Mein (derpherpize) · 2022-11-25T07:56:55.472Z · LW(p) · GW(p)

I assume this is to promote the infohazard policies that a lot of Rat-associated AI alignment teams are using?