Risks of downloading alien AI via SETI search
post by turchin · 2013-03-15T10:25:48.505Z · LW · GW · Legacy · 99 comments
Alexei Turchin. Risks of downloading alien AI via SETI search
Abstract: This article examines risks associated with the program of passive search for alien signals (SETI, the Search for Extra-Terrestrial Intelligence). We propose a scenario of possible vulnerability and discuss the reasons why the proportion of dangerous signals to harmless ones may be dangerously high. The article does not propose to ban SETI programs and does not insist on the inevitability of a SETI-triggered disaster. Moreover, it discusses how SETI could even become a salvation for mankind.
The idea that passive SETI can be dangerous is not new. Fred Hoyle suggested a scheme of alien attack through SETI signals in the story "A for Andromeda". According to the plot, astronomers receive an alien signal which contains a description of a computer and a program for it. This machine produces a description of a genetic code which leads to the creation of an intelligent creature, a girl dubbed Andromeda, who, working together with the computer, creates advanced technology for the military. Initial suspicion of alien intent is overcome by greed for the technology the aliens can provide. However, the main characters realize that the computer acts in a manner hostile to human civilization; they destroy the computer, and the girl dies.
This scenario was fiction, first because most scientists did not believe in the possibility of strong AI, and second because we did not have the technology to synthesize a living organism solely from its genetic code. Or at least we did not until recently. Current technology for DNA sequencing and synthesis, as well as progress in developing DNA codes with modified alphabets, suggests that in ten years the task of re-creating a living being from a code sent from space in computer form might be feasible.
Hans Moravec, in the book "Mind Children" (1988), describes a similar type of vulnerability: downloading from space via SETI a computer program that contains an artificial intelligence. It promises new opportunities to its owner, and after fooling its human hosts it self-replicates in millions of copies, destroys them, and finally uses the resources of the captured planet to send its 'child' copies to the many planets that constitute its future prey. Such a strategy would be like that of a virus or a digger wasp: horrible, but plausible. R. Carrigan's ideas run in the same direction; he wrote an article, "SETI-hacker", expressing fears that unfiltered signals from space are loaded onto millions of insecure computers of the SETI@home program. But he met tough criticism from programmers who pointed out, first, that data and programs are kept in separate regions of a computer, and second, that the machine codes in which programs are written are so particular that it is impossible to guess their structure well enough to hack them blindly (without prior knowledge).
After a while Carrigan issued a second article, "Should potential SETI signals be decontaminated?" http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf, which I have translated into Russian. In it he pointed to the ease of transferring gigabytes of data across interstellar distances, and also noted that an interstellar signal may contain some kind of bait that will encourage people to assemble a dangerous device according to the transmitted designs. Here Carrigan did not give up his belief in the possibility that an alien virus could directly infect Earth's computers without human 'translation' assistance. (We may note with passing alarm that the existence of humans obsessed with death, as Fred Saberhagen pointed out with his idea of 'goodlife', means that we cannot entirely discount the possibility of demented 'volunteers', human traitors eager to assist such a fatal invasion.) As a possible confirmation of this idea, Carrigan showed that it is possible to reverse-engineer the language of a computer program fairly easily; that is, based on the text of the program it is possible to guess what it does and then recover the meaning of its operators.
In 2006 E. Yudkowsky wrote the article "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in which he argued that a rapidly self-improving universal artificial intelligence is quite possible, that such a high intelligence would be extremely dangerous if programmed incorrectly, and, finally, that the probability of such an AI appearing and the risks associated with it are significantly underestimated. In addition, Yudkowsky introduced the notion of "Seed AI", an embryo AI: a minimal program capable of runaway self-improvement with an unchanged primary goal. The size of a Seed AI could be on the order of hundreds of kilobytes. (A typical representative of a Seed AI is a human baby: the part of the genome responsible for the brain is roughly 3% of the total genome, which has a volume of about 500 megabytes, i.e. around 15 megabytes, and even less once junk DNA is discounted.)
To begin, let us assume that there is an extraterrestrial civilization in the Universe which intends to send a message that will enable it to obtain power over Earth, and consider this scenario. In the next section we will consider how realistic it is that another civilization would want to send such a message.
First, we note that in order to prove a vulnerability, it is enough to find just one hole in security, whereas in order to prove safety, you must remove every possible hole. The complexity of these two tasks differs by many orders of magnitude, as is well known to experts in computer security. This distinction is why almost every computer system has eventually been broken (from Enigma to the iPod). I will now try to demonstrate one possible, and in my view even likely, vulnerability of the SETI program. However, I caution the reader against the thought that finding errors in my reasoning automatically proves the safety of the SETI program. Second, I draw the reader's attention to the fact that I am a man with an IQ of 120 who spent all of a month thinking about this vulnerability. We need not require an alien supercivilization with an IQ of 1,000,000 and millions of years of contemplation to significantly improve this algorithm; we have no real idea what an IQ of 300, or even a mere IQ of 100 with much larger mental 'RAM' (the ability to load a major architectural task into mind and keep it there for weeks while processing), could accomplish in finding a much simpler and more effective approach. Finally, I propose one possible algorithm and then briefly discuss other options.
In our discussion we will draw on the Copernican principle, that is, the belief that we are ordinary observers in a normal situation. Therefore, Earth's civilization is an ordinary civilization developing normally. (Readers of tabloid newspapers may object!)
Algorithm of SETI attack
1. The sender creates a kind of signal beacon in space which reveals that its message is clearly artificial. For example, this may be a star surrounded by a Dyson sphere with holes or mirrors that are alternately opened and closed, so the entire star blinks with a period of a few minutes; faster is not possible because of the varying distances between different openings. (Even synchronized with an atomic clock to a rigid schedule, the speed-of-light limit constrains the reaction time of such large-scale coordinated systems.) Nevertheless, this beacon could be seen at a distance of millions of light-years. Other types of beacons are possible; the important point is that the beacon signal can be seen from great distances.
2. Nearer to Earth there is a radio beacon with a much weaker but far more information-rich signal. The lighthouse draws attention to this radio source, which produces a stream of binary information (i.e. a sequence of 0s and 1s). To the objection that this information would be corrupted by noise, I note that the most obvious means of noise reduction, and one understandable to the recipient, is simple cyclic repetition of the signal.
3. The simplest way to convey meaningful information with a binary signal is to send images. First, because eye structures have appeared independently about seven times in Earth's biological history, the representation of a three-dimensional world by means of 2D images is probably universal, and is almost certainly understandable to all creatures that can build a radio receiver.
4. Second, 2D images are not too difficult to encode in a binary signal. To do so, let us use the same system used in the first TV cameras: progressive scanning by lines and frames. At the end of each line a bright marker is placed, repeated after every line, i.e. at equal intervals of bits. Finally, at the end of each frame another marker is placed indicating the end of the frame, repeated after every frame. (The frames may or may not form a continuous film.) It might look like this:
01010111101010 11111111111111111
01111010111111 11111111111111111
11100111100000 11111111111111111
Here the end-of-line marker appears after every 25 bits; the end-of-frame marker might appear, for example, after every 625 bits. (A toy decoding sketch follows below.)
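To make the decoding idea concrete, here is a minimal sketch (my own illustration, not part of the original proposal). Following the text, the end-of-line marker recurs at a fixed period, so a recipient who has inferred the line length can decode by position; the constants LINE_LEN, MARK_LEN, and LINES_PER_FRAME are assumptions chosen to match the sample lines above, and a real decoder would also have to infer these values from the repetition pattern and cope with noise.

```python
# A toy sketch (my own illustration, not part of the original proposal).
# Assumed layout: LINE_LEN data bits, then MARK_LEN end-of-line marker bits, repeated.

LINE_LEN = 14          # data bits per line (as in the sample lines above)
MARK_LEN = 17          # length of the all-ones end-of-line marker
LINES_PER_FRAME = 3    # hypothetical frame height

def decode(bitstream: str):
    """Split a 0/1 string into frames; each frame is a list of LINE_LEN-bit rows."""
    period = LINE_LEN + MARK_LEN
    rows = [bitstream[i:i + LINE_LEN] for i in range(0, len(bitstream), period)]
    return [rows[i:i + LINES_PER_FRAME] for i in range(0, len(rows), LINES_PER_FRAME)]

# Example: the three sample lines above form one 3x14 frame.
signal = ("01010111101010" + "1" * MARK_LEN +
          "01111010111111" + "1" * MARK_LEN +
          "11100111100000" + "1" * MARK_LEN)
print(decode(signal))
# [['01010111101010', '01111010111111', '11100111100000']]
```

The point is only that once the repeating markers are identified, recovering a 2D image from a 1D bitstream is mechanical.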
5. Clearly, the sender civilization would be extremely interested in our understanding its signals; people, in turn, would have an extreme desire to decrypt them. Therefore, there is little doubt that the pictures will be recognized.
6. Images and movies can convey a lot of information; they can even be used to teach us the senders' language and show their world. One can obviously argue about how understandable such films would be. Here we will focus on the fact that if one civilization sends radio signals and another receives them, they have some shared knowledge: namely, they know radio technology, which means they know transistors, capacitors, and resistors. These radio parts are distinctive enough to be easily recognized in photographs (for example, parts shown in cutaway view and in sequential assembly stages, or in an electrical schematic whose connections argue for the nature of the components involved).
7. By sending photographs depicting radio parts on one side and their symbols on the other, it is easy to convey a set of signs for drawing electrical circuits. (Roughly the same way, the logical elements of computers could be conveyed.)
8. Then, using these symbols, the sender civilization transmits the blueprint of its simplest computer. The simplest computer from a hardware point of view is the Post machine: it has only 6 commands and a data tape. Its complete electrical schematic would contain only a few dozen transistors or logic gates. Sending the blueprint of a Post machine is not difficult.
9. It is important to note that at the level of algorithms all computers are Turing-equivalent. This means that extraterrestrial computers at the basic level are compatible with any Earth computer. Turing equivalence is a mathematical universality, like the Pythagorean theorem; even Babbage's mechanical machine, designed in the early 19th century, was Turing-equivalent.
10. Then the sender civilization begins to transmit programs for that machine. Although the computer is very simple, it can run a program of any complexity, though it will take much longer than a more complex computer running the same program. It is unlikely that people would need to build this computer physically: they can easily emulate it on any modern computer capable of trillions of operations per second, so even a very complex program will run on it quite quickly. (A possible interim step: the primitive computer carries the description of a more complex and faster computer, which is then built or emulated and the main program run on it.) A toy emulator sketch follows below.
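For illustration only, here is a minimal sketch (my own, not something taken from the message scenario itself) of emulating a Post-machine-style computer on an ordinary PC. The mnemonics are my assumption for the six classic commands: move right, move left, mark, erase, conditional jump, stop.

```python
# A toy Post-machine emulator (my own illustration).
# R = move right, L = move left, M = mark cell, E = erase cell,
# Jn = jump to instruction n if the current cell is marked, S = stop.

from collections import defaultdict

def run_post_machine(program, max_steps=10_000):
    """Execute a list of instructions on an unbounded tape of 0/1 cells."""
    tape, head, pc = defaultdict(int), 0, 0
    for _ in range(max_steps):
        op = program[pc]
        if op == "S":                 # stop
            break
        elif op == "R":               # move head right
            head += 1
        elif op == "L":               # move head left
            head -= 1
        elif op == "M":               # mark current cell
            tape[head] = 1
        elif op == "E":               # erase current cell
            tape[head] = 0
        elif op.startswith("J"):      # conditional jump on marked cell
            if tape[head] == 1:
                pc = int(op[1:])
                continue
        pc += 1
    return tape

# Example: mark three consecutive cells, then stop.
tape = run_post_machine(["M", "R", "M", "R", "M", "S"])
print(sorted(tape.items()))  # [(0, 1), (1, 1), (2, 1)]
```

Any program the senders transmit for such a machine could therefore be run, very fast and inside a sandbox, on existing hardware, which is exactly why the "just emulate it" step is so plausible.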
11. So why would people build this computer and run its programs? Besides the actual computer schematics and programs, the message must probably contain some kind of "bait" that would lead people to create the alien computer, run its programs, and provide it with some data about the external world, the Earth outside the computer. There are two general kinds of bait: temptations and dangers.
a) For example, people might receive the following offer; let's call it "the humanitarian aid con". The senders of this "honest" SETI message warn that the transmitted program is an artificial intelligence, but lie about its goals. That is, they claim it is a "gift" which will help us solve all our medical and energy problems. It is a Trojan horse of the most malevolent intent, but too useful not to use. Eventually it becomes indispensable, and exactly when society has become dependent upon it, the foundation of society, and society itself, is overturned.
b) "The temptation of absolute power con": in this scenario, the message offers specific recipients a deal promising power over the other recipients. This begins a race to the bottom of runaway betrayals and power-seeking counter-moves, ending with a world dictatorship, or worse, a destroyed world dictatorship on an empty world.
c) "The unknown threat con": in this scenario the senders report that a certain threat hangs over humanity, for example from another, hostile civilization, and that to protect ourselves we should join the putative "Galactic Alliance" and build a certain installation. Or, for example, they suggest performing a certain class of physical experiments in an accelerator and forwarding the message to others in the Galaxy, like a chain letter: and please send the message on before you switch on the accelerator.
d) "The tireless researcher con": here the senders argue that sending messages is the cheapest way to explore the world. They ask us to create an AI that will study our world and send the results back. It does rather more than that, of course.
12. However, the main threat from alien messages with executable code is not the bait itself, but the fact that the message can become known to a large number of independent groups of people. First, there will always be someone more susceptible to the bait. Second, suppose the world learns that an alien message is emanating from the Andromeda galaxy and that the Americans have already received it and may be trying to decipher it. Of course, all other countries will then rush to build radio telescopes and point them at the Andromeda galaxy, afraid of missing a "strategic advantage". They will find the message and see that it contains a proposal to grant omnipotence to those willing to collaborate. They will not know whether the Americans have taken advantage of it or not, even if the Americans swear that they have not run the malicious code and beg others not to do so either. Moreover, such oaths and appeals will be perceived as a sign that the Americans have already received an incredible extraterrestrial advantage and are trying to deprive "progressive mankind" of it. While most will understand the danger of launching the alien code, someone will be willing to take the risk. Moreover, there will be a game in the spirit of "winner takes all", just as in the case of the development of AI, as Yudkowsky shows in detail. So the danger is not the bait but the plurality of recipients. If the alien message is posted on the Internet (and its size, sufficient to run a Seed AI together with the description of the computer and the bait, can be less than a gigabyte), we have a classic example of "knowledge of mass destruction", as Bill Joy put it, referring to the genomes of dangerous biological viruses. If the alien code is available to tens of thousands of people, someone will run it even without any bait, out of simple curiosity. We cannot count on existing SETI protocols, because the discussion around METI (sending messages to extraterrestrials) has shown that the SETI community is not monolithic on important questions. Even the simple fact that something was found could leak and encourage searches by outsiders; the coordinates of the point in the sky would be enough.
13. Since people do not yet have their own AI, we almost certainly greatly underestimate its power and overestimate our ability to control it. The common idea is that "it is enough to pull the power cord" or place the AI in a black box to avoid any associated risks. Yudkowsky shows that an AI can deceive us as an adult deceives a child. If the AI gets access to the Internet, it can quickly subdue it as a whole and also learn everything it needs about earthly life. Quickly here means hours or days at most. Then the AI can create advanced nanotechnology, buying components and raw materials online (it can easily make money and order goods with delivery, as well as recruit people who would receive them, following the instructions of their well-paying but unseen employer, not knowing whom, or rather what, they are serving). Yudkowsky describes one possible scenario of this stage in detail and estimates that an AI needs only weeks to crack any security and obtain its own physical infrastructure.
"Consider, for clarity, one possible scenario, in which Alien AI (AAI) can seize power on the Earth. Assume that it promises immortality to anyone who creates a computer on the blueprints sent to him and start the program with AI on that computer. When the program starts, it says: "OK, buddy, I can make you immortal, but for this I need to know on what basis your body works. Provide me please access to your database. And you connect the device to the Internet, where it was gradually being developed and learns what it needs and peculiarities of human biology. (Here it is possible for it escape to the Internet, but we omit details since this is not the main point) Then the AAI says: "I know how you become biologically immortal. It is necessary to replace every cell of your body with nanobiorobot. And fortunately, in the biology of your body there is almost nothing special that would block bio-immorality.. Many other organisms in the universe are also using DNA as a carrier of information. So I know how to program the DNA so as to create genetically modified bacteria that could perform the functions of any cell. I need access to the biological laboratory, where I can perform a few experiments, and it will cost you a million of your dollars." You rent a laboratory, hire several employees, and finally the AAI issues a table with its' solution of custom designed DNA, which are ordered in the laboratory by automated machine synthesis of DNA. http://en.wikipedia.org/wiki/DNA_sequencing Then they implant the DNA into yeast, and after several unsuccessful experiments they create a radio guided bacteria (shorthand: This is not truly a bacterium, since it appears all organelles and nucleus; also 'radio' is shorthand for remote controlled; a far more likely communication mechanism would be modulated sonic impulses) , which can synthesize a new DNA-based code based on commands from outside. Now the AAI has achieved independence from human 'filtering' of its' true commands, because the bacterium has in effect its own remote controlled sequencers (self-reproducing to boot!). Now the AAI can transform and synthesize substances ostensibly introduced into test tubes for a benign test, and use them for a malevolent purpose., Obviously, at this moment Alien AI is ready to launch an attack against humanity. He can transfer himself to the level of nano-computer so that the source computer can be disconnected. After that AAI spraying some of subordinate bacteria in the air, which also have AAI, and they gradually are spread across the planet, imperceptibly penetrates into all living beings, and then start by the timer to divide indefinitely, as gray goo, and destroy all living beings. Once they are destroyed, Alien AI can begin to build their own infrastructure for the transmission of radio messages into space. Obviously, this fictionalized scenario is not unique: for example, AAI may seize power over nuclear weapons, and compel people to build radio transmitters under the threat of attack. Because of possibly vast AAI experience and intelligence, he can choose the most appropriate way in any existing circumstances. (Added by Freidlander: Imagine a CIA or FSB like agency with equipment centuries into the future, introduced to a primitive culture without concept of remote scanning, codes, the entire fieldcraft of spying. Humanity might never know what hit it, because the AAI might be many centuries if not millennia better armed than we (in the sense of usable military inventions and techniques ).
14. After that, this SETI-AI does not need people to achieve any of its goals. This does not mean that it would seek to destroy them, but it may want to pre-empt the possibility that people will fight it - and they will.
15. This SETI-AI can then do many things, but the most important thing it must do is continue transmitting its message-borne embryos to the rest of the Universe. To do so, it will probably turn the matter of the solar system into a transmitter like the one that sent it. In the process, the Earth and its people would be a disposable source of materials and parts, possibly down to the molecular scale.
So we have examined one possible attack scenario with 15 stages. Each of these stages is logically plausible and can be criticized or defended separately. Other attack scenarios are possible. For example, we might believe that a message is not addressed to us at all but is someone else's correspondence, and try to decipher it; this would, in fact, itself be the bait.
But it is not only the distribution of executable code that can be dangerous. For example, we might receive some sort of "useful" technology that in reality leads us to disaster (for example, a message in the spirit of "quickly compress 10 kg of plutonium and you will have a new source of energy", but with planetary rather than local consequences). Such a mailing could be carried out by a certain "civilization" in advance, to destroy competitors in space. It is obvious that those who receive such messages will primarily seek technologies for military use.
Analysis of possible goals
We now turn to an analysis of the purposes for which a supercivilization might carry out such an attack.
1. We must not confuse the concept of a supercivilization with the hope of a super-kind civilization. Advanced does not necessarily mean merciful. Moreover, we should not expect anything good even from extraterrestrial 'kindness'. This is well described in the Strugatskys' novel "The Waves Extinguish the Wind". Whatever goals a supercivilization imposes upon us, we will be its inferiors in capability and in civilizational robustness even if its intentions are good. A historical example: the activities of Christian missionaries destroying traditional religions. Purely hostile objectives, moreover, are easier for us to understand. And if a SETI attack succeeds, it may be only a prelude to doing us more 'favors' and 'upgrades' until there is scarcely anything human left of us, even if we do survive.
2. We can divide all civilizations into two classes: naive and serious. Serious civilizations are aware of the SETI risks and have their own powerful AI, which can resist alien hacker attacks. Naive civilizations, like present-day Earth, already possess the means of listening across space and computers, but do not yet possess AI and are not aware of the risks of AI via SETI. Probably every civilization passes through a "naive" stage, and it is in this phase that it is most vulnerable to a SETI attack. This phase may be very short, since the period from the appearance of radio telescopes to the appearance of computers powerful enough to create AI may be only a few decades. Therefore, the SETI attack must be aimed at exactly such civilizations. This is not a pleasant thought, because we are among the vulnerable.
3. If travel faster than light is impossible, spreading a civilization through SETI attacks is the fastest way to conquer space. At large distances, it provides a significant gain in time compared with any kind of ship. Therefore, if two civilizations compete for mastery of space, the one that relies on SETI attacks will win.
4. Most importantly, it is enough to begin a SETI attack just once: it then propagates as a self-replicating wave throughout the Universe, striking more and more naive civilizations. For example, if we have a million harmless biological viruses and one dangerous one, then once they get into a body we will get trillions of copies of the dangerous virus and still only a million of the safe ones. In other words, it is enough for one of billions of civilizations to start the process, and it becomes unstoppable throughout the Universe. Since it spreads at almost the speed of light, countermeasures will be almost impossible.
5. Further, delivering SETI messages will be a priority for the virus that has infected a civilization, and the civilization will spend most of its energy on it, as a biological organism spends on reproduction - that is, tens of percent. Earth's civilization, by contrast, spends on SETI only a few tens of millions of dollars, about one millionth of our resources, and this proportion is unlikely to change much even for more advanced civilizations. In other words, an infected civilization will produce roughly a million times more SETI signals than a healthy one. Or, to put it another way, if there are a million healthy civilizations in the Galaxy and one infected, we will have roughly equal chances of encountering a signal from a healthy one or from the contaminated one.
6. Moreover, there is no other reasonable way for such code to spread through space except self-replication.
7. Moreover, such a process could begin by accident: for example, it might start as a research project intended to send the results of its (innocent) studies back to the parent civilization without harming host civilizations, and then turn "cancerous" because of certain propagation faults or mutations.
8. There is nothing unusual in such behavior. In any medium there are viruses: in biology there are biological viruses, in computer networks computer viruses, in conversation memes. We do not ask why nature wanted to create a biological virus.
9. Travel through SETI attacks is much cheaper than by any other means. A civilization in Andromeda can simultaneously send a signal to 100 billion stars in our galaxy, while each spaceship would cost billions, and even if free, would be slower in reaching all the stars of our Galaxy.
10. Now we list several possible goals of a SETI attack, just to show the variety of motives.
- To study the universe. After the code is executed, research probes are created to gather and send back information.
- To ensure that there are no competing civilizations. All of their embryos are destroyed. This is preemptive war on an indiscriminate basis.
- To preempt the other competing supercivilization (yes, in this scenario there are two!) before it can take advantage of this resource.
- To prepare a solid base for the arrival of spacecraft. This makes sense if the supercivilization is very far away and, consequently, the gap between the speed of light and the near-light speed of its ships (say, 0.5c) amounts to a difference of millennia.
- To achieve immortality. Carrigan showed that the volume of a human personal memory is on the order of 2.5 gigabytes, so by forwarding a few exabytes of information (1 exabyte = 1 073 741 824 gigabytes) an entire civilization could be sent. (You may adjust the units according to how big you like your supercivilizations!)
- Finally, there may be purposes that are illogical and incomprehensible to us: a work of art, an act of self-expression, a toy. Or perhaps an insane rivalry between two factions. Or something we simply cannot understand. (For example, extraterrestrials might not understand why the Americans stuck a flag into the Moon. Was it worthwhile to fly over 300,000 km to install a piece of painted metal?)
11. Assuming signals can propagate billions of light-years across the Universe, the area susceptible to a SETI attack is a sphere with a radius of several billion light-years. In other words, it is sufficient for there to be one "bad civilization" in our past light cone of several billion years' depth, a cone that includes billions of galaxies, for us to be in danger of a SETI attack. Of course, this is only true if the average density of civilizations is at least one per galaxy. This is an interesting possibility in relation to the Fermi Paradox.
16. As the depth to which we scan the sky grows linearly, the volume of space and the number of stars we see grow as the cube of that depth (roughly N ∝ r³ for scan radius r). This means that our chances of stumbling on a SETI signal grow nonlinearly, along a fast curve.
17. It is possible that we will stumble upon several different messages from the skies, each refuting the others in the spirit of: "do not listen to them, they are deceiving voices and wish you evil. But we, brother, we are good, and wise..."
18. Whatever positive and valuable message we receive, we can never be sure that it is not a subtle and deeply concealed threat. This means that in interstellar communication there will always be an element of distrust, and in every happy revelation a gnawing suspicion.
19. The defensive posture in interstellar communication is only to listen and to send nothing, so as not to reveal one's location. The laws of the United States already prohibit sending messages to the stars. Anyone in the Universe who transmits is self-evidently not afraid to reveal its position - perhaps because sending is more important to it than personal safety, for example because it plans to flush out prey before attacking, or because it is forced to by an evil local AI.
20. It has been said of the atomic bomb that the main secret is that it can be built. Before the discovery of the chain reaction, Rutherford believed that the release of nuclear energy was an issue for the distant future; after the discovery, any physicist knew that it is enough to bring together two subcritical masses of fissile material to release nuclear energy. In other words, if one day we find that signals can be received from space, it will be an irreversible event, and something analogous to a deadly new arms race will be on.
Objections.
The discussions on the issue raise several typical objections, now discussed.
Objection 1: Behavior discussed here is too anthropomorphic. In fact, civilizations are very different from each other, so you can’t predict their behavior.
Answer: Here we have a powerful observation selection effect. A great variety of civilizations may exist, including such extreme cases as thinking oceans, but we can only receive radio signals from civilizations that send them, which means they have the corresponding radio equipment and knowledge of materials, electronics, and computing. That is to say, we are threatened by civilizations of the same type as our own. Civilizations that can neither receive nor send radio messages do not participate in this game.
The observation selection effect also concerns purposes. The goals of civilizations can be very different, but the only civilizations intensely sending signals will be those that want to tell something to "everyone". Finally, observation selection relates to the effectiveness and universality of a SETI virus: the more effective it is, the more civilizations will catch it, and the more copies of its radio signals will be in the sky. So we have "excellent chances" of meeting the most powerful and effective virus.
Objection 2. For super-civilizations there is no need to resort to subterfuge. They can directly conquer us.
Answer:
This is true only if they are in close proximity to us. If faster-than-light travel is impossible, attack via messages will be faster and cheaper than direct conquest. This difference probably becomes important at intergalactic distances. Therefore, one need not fear a SETI attack from the nearest stars, within a radius of tens or hundreds of light-years.
Objection 3. There are lots of reasons why SETI attack may not be possible. What is the point to run an ineffective attack?
Answer: A SETI attack need not always work. It must only succeed in a sufficient number of cases, in line with the objectives of the civilization sending the message. For example, a con man does not expect to be able to fool every victim; he is happy to steal from even one person in a hundred. It follows that a SETI attack is useless if the goal is to subvert all civilizations in a given galaxy, but if the goal is to gain at least some outposts in another galaxy, the SETI attack fits. (Of course, these outposts can then build fleets of spaceships to spread SETI-attack bases to outlying stars within the target galaxy.)
The main assumption underlying the idea of a SETI attack is that extraterrestrial supercivilizations exist in the visible universe at all. I think this is unlikely for reasons related to the anthropic principle. Our universe is one out of 10^500 possible universes with different physical properties, as suggested by one scenario of string theory. My brain is 1 kg out of 10^30 kg in the Solar System. Similarly, I suppose, the Sun is no more than about 1 of 10^30 stars that could give rise to intelligent life, which means that we are likely alone in the visible universe.
Second, the fact that Earth appeared so late (it could have appeared a few billion years earlier) and that its development was not prevented by alien preemption argues for the rarity of intelligent life in the Universe. The putative rarity of our civilization is our best protection against a SETI attack. On the other hand, if we discover parallel worlds or superluminal communication, the problem arises again.
Objection 7. Contact is impossible between post-singularity supercivilizations, which are supposed here to be the senders of SETI signals, and a pre-singularity civilization such as ours, because a supercivilization is many orders of magnitude superior to us and its message would be absolutely incomprehensible to us, just as contact between ants and humans is impossible. (A singularity is the moment of creation of an artificial intelligence capable of learning and of beginning an exponential, recursive process of self-improving design of further intelligence, and much else besides, after which a civilization makes a leap in its development; on Earth this may occur around 2030.)
Answer: In the proposed scenario we are not talking about contact but about a purposeful deception of us. Similarly, a human is quite capable of manipulating the behavior of ants and other social insects, whose objectives are absolutely incomprehensible to them. For example, LiveJournal user "ivanov-petrov" describes the following scene. As a student, he studied the behavior of bees in the Botanical Garden of Moscow State University, but he had bad relations with the security guard controlling the garden, who regularly expelled him before closing time. Ivanov-petrov took a green board and trained the bees, by conditioned reflex, to attack it. The next time the watchman, who always wore a green jersey, came by, all the bees attacked him and he took flight, so ivanov-petrov could continue his research. Such manipulation is not contact, but that does not prevent it from being effective.
"Objection 8. For civilizations located near us is much easier to attack us –for ‘guaranteed results’—using starships than with SETI-attack.
Answer: We may significantly underestimate the complexity of an attack using starships and, in general, the complexity of interstellar travel. To list only one factor: the potential 'minefield' characteristics of the as-yet unknown interstellar medium.
If such an attack were carried out now or in the past, Earth's civilization would have nothing to oppose it with; in the future, however, the situation will change: all matter in the solar system will be full of robots and possibly completely processed by them. On the other hand, the faster an enemy starship approaches us, the more visible the fleet becomes by its braking emissions and other signatures. Such fast starships would be very vulnerable, and in addition we could prepare in advance for their arrival. A slowly moving nano-starship would be far less visible, but if its aim were to transform the full substance of the solar system, it would simply have nowhere to land without triggering an alert in such a 'nanotech-settled' and fully used future solar system. (Friedlander added: presumably there would always be some thinly settled 'outer edge' of Oort Cloud matter, but by definition the rest of the system would be more densely settled and energy-rich, and any deeper penetration into solar space and its conquest would be the proverbial uphill battle - not in terms of gravity gradient, but in terms of the resources available for war against a full Kardashev Type II civilization.)
The most serious objection is that an advanced civilization could, over a few million years, seed our whole galaxy with self-replicating post-singularity nanobots that could achieve any goal in each target star system, including easy prevention of the development of incipient civilizations. (In the USA, Frank Tipler advanced this line of reasoning.) However, this has not happened in our case: no one has prevented the development of our civilization. It would be much easier and more reliable to send out robots with such assignments than to bombard the entire galaxy with SETI messages; and since we do not see such robots, it follows that no SETI attackers exist inside our galaxy. (It is possible that a probe on the outskirts of the solar system is waiting for manifestations of human space activity in order to attack, a variant of the "Berserker" hypothesis, but it would not attack through SETI.) Over many millions or even billions of years, microrobots could even arrive from distant galaxies tens of millions of light-years away, though radiation damage may limit this unless they regularly rebuild themselves.
In this case a SETI attack would be meaningful only at large distances. However, at such distances - tens and hundreds of millions of light-years - it would probably require innovative methods of signal modulation, such as controlling the luminosity of active galactic nuclei, or transmitting a narrow beam in the direction of our galaxy (but the senders would not know where it will be millions of years later). On the other hand, a civilization that can control its galaxy's nucleus could probably also build a spaceship flying at near-light speed, even one with the mass of a planet. Such considerations severely reduce the likelihood of a SETI attack, but do not lower it to zero, because we do not know all the possible objectives and circumstances.
(A comment by JF: For example, the lack of a SETI attack so far may itself be a cunning ploy. At the first receipt of the developing Solar civilization's radio signals, all interstellar 'spam' would cease (and interference stations of some unknown but amazing capability would be set up around the Solar System to block any incoming signals their computers recognize as of intelligent origin), in order to make us 'lonely', give us time to discover and appreciate the Fermi Paradox, and drive those so philosophically inclined to despair that the Universe is apparently hostile by some standards. Then, when we are desperate, we suddenly discover, slowly and partially at first, and then with more and more wonderful signals, that space is filled with bright, enticing signals (like spam). The blockade, cunning as it was (analogous to Earthly jamming stations), was a prelude to a slow turning-up of preplanned, intriguing signal traffic. If, as Earth developed, we had intercepted cunning spam followed by the agonized 'don't repeat our mistakes' final messages of tricked and dying civilizations, only a fool would heed the enticing voices of SETI spam. But now a SETI attack may benefit from the slow unmasking of a cunning masquerade: at first a faint and distant light of infinite wonder, only at the end revealed as the headlight of an onrushing cosmic train.)
AT's comment: In fact I think that SETI-attack senders are at distances of more than 1000 light-years, so they do not yet know that we have appeared. But the so-called Fermi Paradox may indeed be a trick: senders may deliberately have made their signals weak in order to make us think that they are not spam.
The scale of space strategy may be inconceivable to the human mind.
And we should note in conclusion that some types of SETI attack do not even need a computer, but just a person who could understand a message that would then "set his mind on fire". At the moment we cannot imagine such a message, but we can give some analogies. Western religions are built around the text of the Bible; it can be assumed that if the text of the Bible appeared in countries that had previously not been familiar with it, a certain number of biblical believers might arise there. Similarly for subversive political literature, or even certain super-ideas, "sticky" memes, or philosophical mind-benders. Or, as suggested by Hans Moravec, we might get a message such as: "Now that you have received and decoded me, broadcast me in at least ten thousand directions with ten million watts of power. Or else." The message then cuts off, leaving us guessing what "or else" might mean. Even a few pages of text may contain a great deal of subversive information. Imagine that we could send a message to 19th-century scientists: we could reveal to them the general principle of the atomic bomb, the theory of relativity, the transistor, and thus completely change the course of technological history; and if we added that all the ills of the 20th century came from Germany (which is only partly true), we would also influence political history.
(Comment by JF: Such a use would depend on having received enough of Earth's transmissions to be able to model our behavior and politics. But imagine a message posing as coming from our own future, designed to ignite a 'catalytic war'. Automated SIGINT (signals intelligence) stations are constructed to monitor our solar system, their computers 'cracking' our language and culture (possibly with the aid of children's television programs matching letters and sounds, TV news showing world maps and naming countries, or even intercepted wireless Internet encyclopedia articles). Then a test or two may follow: posting a what-if scenario and inviting comment from bloggers about a future war, say between the two leading powers of the planet (for purposes of this discussion, say that around 2100 by the present calendar China is strongest and India rising fast). Any defects and nitpicks in the blog comments are noted and corrected. Finally, an actual interstellar message is sent with the debugged scenario (not shifting against the stellar background, it is unquestionably interstellar in origin), purporting to be from a dying starship from the future of the presently stronger side (China), at a time when the presently weaker side's (India's) space fleet has smashed the future Chinese state and essentially committed genocide. The starship has come back in time but is dying, and indeed the transmission ends, or simply repeats, possibly after some back-and-forth communication between the false computer models of the 'starship commander' and the Chinese government. The reader can imagine the urgings of the future Chinese military council to preempt and forestall doom. If, as seems probable, such a strategy is too complicated to carry off in one stage, various 'future travellers' may emerge from a war, signal for help in vain, and 'die' far outside our ability to reach them (say some light-days away, near the alleged location of an 'emergence gate' but in fact near an actual transmitter). Quite a drama may unfold as the computer learns to 'play' us like a con man: ship after ship of various nationalities dribbling out stories, but also getting answers to key questions that help construct an emerging scenario believable enough to ignite a final war. Possibly lists of key people in China (or whichever side is stronger) may be drawn up by the computer, with a demand that they be executed as the parents of future war criminals - a sort of International Criminal Court acting as Terminator scenario. Naturally the Chinese state, at that time the most powerful in the world, would guard its rulers' lives against any threat. Yet more refugee spaceships of various nationalities can emerge, transmit, and die, offering their own militaries terrifying new weapons technologies from unknown sciences that really work (more 'proof' of their future origin). Or weapons from known sciences: for example, decoding online DNA sequences in the future Internet and constructing formulae for DNA synthesizers to make tailored genetic weapons against particular populations, weapons that endure in the ground as a scorched earth against a particular population on a particular piece of land.
These are copied and spread worldwide, as are totally accurate plans - in standard CNC codes - for easy-to-construct thermonuclear weapons in the 1950s style, using U-238 for the casing and only a few kilograms of fissionable material for ignition. By that time well over a million tons of depleted uranium will exist worldwide, and deuterium is free in the ocean and can be used directly in very large weapons without lithium deuteride. Knowing how to hack together a wasteful, more-than-critical-mass crude fission device is one thing (the South African device was of this kind); but knowing with absolute accuracy, down to machining drawings and CNC codes, how to make high-yield, super-efficient, very dirty thermonuclear weapons without any need for testing means that any small group with a few dozen million dollars and automated machine tools can clandestinely make a multi-megaton device - or many - and smash the largest cities. And any small power with a few dozen jets could cripple a continent for a decade. Over a thousand tons of plutonium already exist. The SETI spam can include CNC codes for a one-shot chemical refiner of reactor plutonium that would be left hopelessly radioactive but would output chemically pure plutonium. (This would be prone to predetonation because of its Pu-240 content, but then plans for debugged laser isotope separators may also be downloaded.) This is a variant of the 'catalytic war' and 'nuclear six-gun' (i.e. easily obtained weapons) scenarios of the late Herman Kahn. Even cheaper would be bioattacks of the kind outlined above. The principal point is that fully debugged planet-killer weapons normally require great amounts of debugging, tens to hundreds of billions of dollars, and free access to a world scientific community. Today it is to every great power's advantage to keep accurate designs out of the hands of third parties, because they have to live on the same planet (and because the fewer the weapons, the easier it is to stay a great power). Not so for the SETI spam authors. Without the hundreds of billions in R&D, the actual construction budget would be on the order of a million dollars per multi-megaton device (depending on the expense of obtaining the raw reactor plutonium). Extending today's scenarios into the future, the SETI spam authors might manipulate Georgia (with about a $10 billion GDP) to arm against Russia, Taiwan against China, and Venezuela against the USA. Although Russia, China, and the USA could respectively promise annihilation to any attacker, with a military budget around 4% of GDP and the downloaded plans the reverse, for the first time, could also be true. (According to an old chart by Ralph Lapp, 400 hundred-megaton bombs could kill by fallout perhaps 95% of the unprotected population of a country the size of the USA or China, and 90% of one the size of Russia, assuming the worst kind of cooperation from the winds.) Anyone living near a super-armed microstate with border conflicts will, of course, wish to arm themselves, and these newly armed states themselves will, of course, have borders. Note that this drawn-out scenario gives plenty of time for a huge arms buildup on both (or many!) sides, and a Second Cold War that eventually turns very hot indeed. And unlike a human player of such a horrific 'catalytic war' con game, the senders are not concerned at all about worldwide fallout or enduring biocontamination.)
Conclusion.
The probability of an attack is the product of the probabilities of the following events. For these probabilities we can only give so-called "expert" assessments, that is, assign them certain a priori subjective probabilities, as we do now.
1) The likelihood that extraterrestrial civilizations exist at a distance from which radio communication with them is possible. In general I agree with the view of Shklovsky and supporters of the "Rare Earth" hypothesis that Earth's civilization is unique in the observable universe. This does not mean that extraterrestrial civilizations do not exist at all (because the universe, according to the theory of cosmological inflation, is almost endless); they are simply beyond the horizon of events visible from our point in space-time. In addition, what matters is not just distance but the distance at which a connection can be established that allows transferring gigabytes of information. (However, even at 1 bit per second one can transmit on the order of a gigabit within a few decades, which may be sufficient for a SETI attack.) If some superluminal communication or interaction with parallel universes becomes possible in the future, it would dramatically increase the chances of a SETI attack. So I estimate this probability at 10%.
2) The probability that a SETI attack is technically feasible: that is, that a computer program containing a recursively self-improving AI, of a size suitable for transmission, is possible. I estimate this probability as high: 90%.
3) The likelihood that civilizations capable of carrying out such an attack exist within our light cone. This probability depends on the density of civilizations in the universe and on what percentage of civilizations choose to initiate such an attack or, more importantly, fall victim and become repeaters. In addition, it is necessary to take into account not only the density of civilizations but also the density of the radio signals they create. All these factors are highly uncertain, so it is reasonable to set this probability at 50%.
4) The probability that we find such a signal during our rising civilization's period of vulnerability. The period of vulnerability lasts from now until the moment when we decide, and are technically ready to implement the decision, not to download any extraterrestrial computer programs under any circumstances. Such a decision could probably only be enforced by our own AI installed as world ruler (which is itself fraught with considerable risk). Such a world AI (WAI) might be created circa 2030. We cannot exclude, however, that our WAI will still not impose a ban on downloading extraterrestrial messages and will fall victim to an attack by an alien artificial intelligence that surpasses it thanks to millions of years of machine evolution. Thus the window of vulnerability is most likely about 20 years, and its width depends on the intensity of searches in the coming years. It also depends, for example, on the severity of the economic crisis of 2008-2010, on the risks of a third world war, and on how all this affects the emergence of the WAI. It further depends on the density of infected civilizations and their signal strength: the greater these are, the better the chances of detecting them early. Because we are a normal civilization under normal conditions, according to the Copernican principle, this probability should be fairly large; otherwise a SETI attack would generally be ineffective. (SETI attacks themselves, supposing they exist, are also subject to a form of "natural selection" for effectiveness, in the sense that they either work or do not.) This probability is very uncertain; I will also put it at over 50%.
5) Next is the probability that the SETI attack will be successful: that we swallow the bait, download the program and the description of the computer, run them, lose control over them, and let them reach all their goals. I estimate this probability as very high because of the multiplicity factor: the message will be downloaded repeatedly, and sooner or later someone will run it. In addition, through natural selection we will most likely receive the most effective and deadly message, the one that most effectively deceives our type of civilization. I put this at 90%.
6) Finally, it is necessary to assess the probability that a SETI attack will lead to complete human extinction. On the one hand, one can imagine a "good" SETI attack, limited to creating a powerful radio emitter beyond the orbit of Pluto. However, such a program always runs the risk that a society emerging at its target star will create a powerful artificial intelligence and effective weapons able to destroy the emitter. In addition, building the most powerful possible transmitter would require all the matter of the solar system and all of its solar energy. Consequently, the share of such "good" attacks will be lowered by natural selection, since some of them will sooner or later be destroyed by the civilizations they capture and their signals will be weaker. So I estimate the probability that a SETI attack which reaches all its goals destroys all people at 80%.
As a result, we have: 0.1 × 0.9 × 0.5 × 0.5 × 0.9 × 0.8 ≈ 0.0162, i.e. about 1.62%.
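For readers who want to vary the estimates, here is a minimal sketch (my own illustration; the labels are just shorthand for the six numbered estimates above, and the values are the subjective probabilities given there) that reproduces the product:

```python
# A minimal sketch (my own illustration) of the product of the six subjective estimates.
factors = {
    "ET civilizations within radio range":          0.1,
    "SETI attack technically feasible":             0.9,
    "attacking civilizations in our light cone":    0.5,
    "signal found during our vulnerability window": 0.5,
    "attack succeeds once the signal is received":  0.9,
    "success leads to complete human extinction":   0.8,
}

p = 1.0
for label, value in factors.items():
    p *= value

print(f"overall estimate: {p:.4f} (~{p:.1%})")  # overall estimate: 0.0162 (~1.6%)
```

Changing any single factor simply scales the overall estimate proportionally, which is why the result is only claimed to be accurate to an order of magnitude.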
So, after rounding, the chance of human extinction through a SETI attack in the 21st century is around 1 percent, with a theoretical precision of an order of magnitude.
Our best protection in this context would be for civilizations to be very rare in the Universe. But this is not entirely reassuring, because the Fermi paradox here works on the principle of "neither alternative is good":
- If there are extraterrestrial civilizations, and there are many of them, it is dangerous because they can threaten us in one way or another.
- If extraterrestrial civilizations do not exist, that is also bad, because it gives weight to the hypothesis that technological civilizations inevitably go extinct, or that we underestimate the frequency of cosmological catastrophes, or that space hazards such as gamma-ray bursts and asteroids are denser than we think - underestimated because of the observation selection effect: had we already been killed, we would not be here making these observations.
Theoretically, the reverse is also possible: through SETI we might receive a warning about a certain threat that has destroyed most civilizations, such as: "Do not do any experiments with X-particles, they could lead to an explosion that would destroy the planet." But even in that case a doubt would remain that this is a deception meant to deprive us of certain technologies. (It would be some proof if similar reports came from other civilizations in the opposite direction of the sky.) And such a communication might only heighten the temptation to experiment with X-particles.
So I do not call for abandoning SETI searches; in any case, such appeals would be useless.
It may be useful to postpone any technical realization of messages we might receive via SETI until the time when we have our own Artificial Intelligence. That moment is perhaps only 10-30 years away, so we could wait. Second, it would be important to conceal the fact of receiving a dangerous SETI signal, its content, and the location of its source.
This risk has a methodologically interesting aspect. Although I have thought about and read on the topic of global risks every day for the last year, I found this dangerous vulnerability in SETI only now. In hindsight I was able to find four other authors who came to similar conclusions. Still, I made a significant finding for myself: there may be global risks not yet discovered, and even if the separate components of a risk are known to me, it may take a long time to join them into a coherent picture. Thus, hundreds of dangerous vulnerabilities may surround us like an unknown minefield. Only when the first explosion happens will we know, and that first explosion may be the last.
An interesting question is whether Earth itself could become the source of a SETI attack in the future, once we have our own AI. Obviously it could. The METI program already includes the idea of sending the code of human DNA (the "children's message" scenario, in which children ask that a piece of their DNA be taken and cloned on another planet, as depicted in the film "Calling All Aliens").
Literature:
1. Hoyle, F. A for Andromeda. http://en.wikipedia.org/wiki/A_for_Andromeda
2. Yudkowsky, E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. Forthcoming in Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic. http://www.singinst.org/upload/artificial-intelligence-risk.pdf
3. Moravec, H. Mind Children: The Future of Robot and Human Intelligence, 1988.
4. Carrigan, R. A., Jr. The Ultimate Hacker: SETI signals may need to be decontaminated. http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf
5. Carrigan's page: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm
99 comments
comment by Kawoomba · 2013-03-15T11:46:50.281Z · LW(p) · GW(p)
A sufficiently advanced AI should already be propagating at near the speed of light, which is why we needn't fear mere radio signals: if there's such an entity in the neighborhood, its von Neumann probes will be the first sign we get.
Replies from: Thomas, turchin, Eliezer_Yudkowsky, Pfft, wedrifid↑ comment by Thomas · 2013-03-15T15:43:06.097Z · LW(p) · GW(p)
The difference between near light speed and actual light speed may be significant when universal dominance is at stake.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-03-15T16:14:56.027Z · LW(p) · GW(p)
Which is a good argument for why a smart AI wouldn't announce its malicious intentions by sending some sort of universal computer code - which could ultimately announce its intentions, yet have a significant chance of failure - and would just straight send its little optimizing cloud of nanomagic.
The first indication that something's wrong would be your legs turning into paperclips (The tickets are now diamonds - style).
Replies from: Thomas, Will_Newsome↑ comment by Will_Newsome · 2013-03-20T21:24:22.623Z · LW(p) · GW(p)
The optimizer your optimizer could optimize like.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-03-20T21:27:33.100Z · LW(p) · GW(p)
Talking about triple-O, go continue your computational theology blog o.O
Replies from: Will_Newsome↑ comment by Will_Newsome · 2013-03-20T22:26:35.768Z · LW(p) · GW(p)
I will when I figure out how to solve this problem: I'm trying to accomplish two major objectives.
The more important objective is to explain to people how we can use concepts from mathematical fields, especially algorithmic information theory and reflective decision theory, to elucidate the fundamental nature of justification, especially any fundamental similarities or relations between epistemic and moral justification. (The motivation for this approach comes from formal epistemology; I'm not sure if I'll have to spend a whole post on the motivations or not.)
The less important objective is to show that theology, or more precisely theological intuitions, are a similar approach to the same problem, and it makes sense and isn't just syncretism to interpret theology in light of (say) algorithmic information theory and vice versa. But to motivate this would require many posts on hermeneutics; without sufficient justification, readers could reasonably conclude that bringing in "God" (an unfortunately political concept) is at best syncretism and at worst an attempt to force through various connotations. I'm more confident when it comes to explaining the math---even if I can be accused of overreaching with the concepts, at least it's admitted that the concepts themselves have a very solid foundation. When it comes to hermeneutics, though, I inevitably have to make various qualitative arguments and judgment calls about how to make judgment calls, and I'm afraid of messing it up; also I'm just more likely to be wrong.
So I have to think about whether to try to tackle both problems at once, which I would like to do but would be quite difficult, or to just jump into the mathematics without worrying so much about tying it back to the philosophical tradition. I'd really prefer the former but I haven't yet figured out how to make the presentation (e.g., the order of ideas to be introduced) work.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-03-24T15:32:18.806Z · LW(p) · GW(p)
especially any fundamental similarities or relations between epistemic and moral justification
So, the fact that in natural languages it's easy to be ambiguous between epistemic and moral modality (e.g. should in English can mean either ‘had better’ or ‘is most likely to’) may be a Feature Not A Bug? (Well, I think that that is due to a quirk of human psychology¹, but if humans have that quirk, it must have been adaptive (or a by-product of something adaptive), in the EEA at least.)
- How common is this among the world's languages? The more common it is, the more likely my hypothesis, I'd guess.
↑ comment by turchin · 2013-03-15T21:40:38.430Z · LW(p) · GW(p)
We should not think about the AI as an omnipotent God: if it were, it could travel even faster than light and even back in time. But we don't see it around us (unless we are in a simulation), so it is not omnipotent. So we should assume that a nanobot wave is slower than the speed of light; let's give it 0.8 of light speed. The main problem for a nanobot wave is slowing down once it reaches its destination: we can accelerate nanobots in accelerators, but braking could be complicated. So if the nanobots' speed is 0.8c, the volume of the sphere they can reach is only 0.512 of the sphere reached by a SETI attack. That means a SETI attack is about 2 times more effective as a way to conquer space. Observer selection is also at work here: all civilizations inside the nanobot wave are probably destroyed, so we can only find ourselves outside it.
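(A quick sketch of that arithmetic: the volume reachable in a fixed time scales as the cube of the expansion speed.)

c = 1.0    # speed of the SETI signal, in units of the speed of light
v = 0.8    # assumed speed of the nanobot wave

ratio = (v / c) ** 3
print(ratio)      # 0.512: the nanobot sphere has about half the volume
print(1 / ratio)  # ~1.95: the signal sphere is roughly twice as large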
Replies from: Tenoke↑ comment by Tenoke · 2013-03-15T22:43:55.691Z · LW(p) · GW(p)
As gwern pointed out, SETI attacks only target worlds with tech-savvy intelligent life (we so far know about one of those), while a von Neumann probe can likely target pretty much all systems we've observed so far (and we've observed a bit more than one).
A SETI attack being twice as effective as a von Neumann probe is quite the overstatement (even discounting the fact that the probes might be able to travel at a speed much closer to c).
Replies from: turchin, Decius↑ comment by turchin · 2013-03-16T07:28:51.174Z · LW(p) · GW(p)
A SETI attack can happen in any medium where only information transfer is possible. If in the future we could contact parallel worlds, the same would apply again. Since we do not now know the exact limitations of interstellar travel, we may suppose that a SETI attack could happen. Otherwise we should conclude that any search for alien radio signals is useless, because the aliens should instead reach us physically at near light speed.
And again, we could only exist in those regions of the Universe which have not been conquered by alien nanobots. Or they have been conquered but the nanobots lie dormant somewhere, and in that case a SETI attack is still possible.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-03-16T07:45:49.357Z · LW(p) · GW(p)
And again, we could only exist in those regions of the Universe which have not been conquered by alien nanobots. Or they have been conquered but the nanobots lie dormant somewhere, and in that case a SETI attack is still possible.
It seems a bit like you're grasping at straws to keep the SETI threat viable. I realize you're attached to it, I saw the website. Still, allow yourself to follow the arguments wherever they may lead.
Replies from: turchin↑ comment by turchin · 2013-03-16T08:02:50.450Z · LW(p) · GW(p)
I know that nano von Neumann probes are the strongest argument against the theory, and I knew it even before I published it here. Moreover, I have a shorter article about possible alien nanobots in the Solar system which I will eventually publish here, if it is not too much off topic.
But from an epistemic point of view we can't close one unknown case with another big unknown with 100 percent certainty.
Anyway, it will not change the conclusion: SETI search is either useless or dangerous, and should be stopped.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-03-16T08:28:29.091Z · LW(p) · GW(p)
Useless? I don't think so.
There's nothing this ragtag horde of competing special interests (humanity) needs more than the uniting force of "we received signals from other civilizations". To unite us and to usher in a new era of a redefined in-group ("us") versus the new out-group ("them" - the aliens).
As the old adage goes, me against my brother, my brother and I against our cousins, my cousins and I against strangers.
What we need is a "all of humanity versus some unspecified aliens" to save us. Even if we have to make them up ourselves; there should be an astrophysicists' conspiracy to fake such signals. I imagine something like "Ok Earth-guys, whoever gets to Epsilon Eridani first owns it! Also, we demand a new season of Firefly." (This would be troublesome, because it would mean they are very close already.)
Replies from: Multiheaded↑ comment by Multiheaded · 2013-03-16T17:29:18.241Z · LW(p) · GW(p)
[Obligatory Watchmen reference]
Replies from: Kawoomba↑ comment by Kawoomba · 2013-03-16T18:21:39.872Z · LW(p) · GW(p)
That's not exactly how I remember the movie, but it was still entertaining. I liked that big guy. Klaatu barada nikto!
. . .
(Sorry, just stirring the pot.)
Replies from: Multiheaded↑ comment by Multiheaded · 2013-03-16T18:30:22.286Z · LW(p) · GW(p)
Vg jnf gung jnl va gur pbzvp obbx; Bmlznaqvnf unq n grnz bs fpvragvfgf ovb-ratvarre n uhtr cflpuvp fdhvq gung jbhyq qvr hcba ovegu/npgvingvba naq xvyy n ybg bs crbcyr jvgu vgf cflpuvp "fpernz". Vg'q znc avpryl gb crbcyr'f rkcrpgngvbaf bs na "nyvra vainqre" naq uhznavgl jbhyq havgr ntnvafg cbgragvny shegure gerngf.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-16T05:24:15.752Z · LW(p) · GW(p)
(Agreed.)
↑ comment by Pfft · 2013-03-15T16:40:39.541Z · LW(p) · GW(p)
The scheme described in the article seems like one of the most efficient ways to propagate near the speed of light. Why bother sending material von Neumann probes if mere radiosignals are sufficient?
Replies from: gwern, Kawoomba↑ comment by gwern · 2013-03-15T17:25:49.825Z · LW(p) · GW(p)
The scheme requires reception by an advanced civilization during a narrow window of opportunity; the radio waves have no effect on the billions of dead planets all around. A probe, on the other hand, presumably would be able to affect any system.
Since we observe so few life-filled planets or signals out there...
↑ comment by Kawoomba · 2013-03-15T18:23:51.916Z · LW(p) · GW(p)
Doesn't seem very effective to me.
The civilizatory window in which a target would be susceptible to such tactics is very small: cavemen don't notice, superintelligences are thankful for you announcing your hostile intentions. And that's not even taking into account the small fraction of inhabited planets (via the Drake equation) to begin with.
Compare that to a wave of self replicating probes at near lightspeed reconfiguring all secured matter into computronium performing the desired operations? Seems like no contest. I'd rather rebuild jupiter too, for a loss of just a few percent in propagation speed.
Replies from: Elithrion↑ comment by Elithrion · 2013-03-15T20:13:57.348Z · LW(p) · GW(p)
Compare that to a wave of self replicating probes at near lightspeed reconfiguring all secured matter into computronium performing the desired operations? Seems like no contest.
I think the best argument in favour of this SETI virus is that you can really just do both. Nearly all the useful stuff will come from the self-replicating probes, but you might get a little extra out of the virus as well.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-03-15T20:22:45.308Z · LW(p) · GW(p)
Not that it's an important point of contention, but I don't think so. If there are any other superintelligences out there (other than the sender) - even if fewer than there are civilizations in their vulnerable phase - they would still pose a serious threat to the signal-sending agent:
A signal travelling slightly ahead of the cavalry would be like a trumpet call announcing "here come the nanobots!", giving the adversary time to prepare.
(Interestingly, our position in the outskirts of a galaxy / the less densely populated regions can count as weak evidence that such a cosmic chess game exists, since otherwise, due to the SSA, we'd expect to find our home star cluster somewhere in the more densely packed areas.)
God I hate it when my comments become needlessly verbose, sorry ... argh, and isn't verbosity needless by definition?
Replies from: Thomas, Elithrion↑ comment by Thomas · 2013-03-16T07:25:47.179Z · LW(p) · GW(p)
A signal travelling slightly ahead of the cavalry would be like a trumpet call announcing "here come the nanobots!", giving the adversary time to prepare.
Yes, but we had better prepare for nanobots anyway. If they don't come, it's just a bonus. It is wise to be prepared for an intergalactic war in any case: for the robots, for small kinetic projectiles at near light speed, for artificial gamma-ray bursts, for SETI attacks and for many more.
Then we should strike in all directions in the best tradition of a very benevolent colonist. To end all the space wars even before they really start. As much as we can.
The aliens, who are extremely rare (as I think), had, have or will have the same dilemma, which may be another opportunity. Game-theoretically speaking, we must do some calculations right now; it is already late, and the OP's article is a good one.
↑ comment by Elithrion · 2013-03-15T21:00:03.347Z · LW(p) · GW(p)
I actually don't mind this length of comments (less is okay, but sometimes too vague, and starting at double that length it definitely feels like too much).
Overall, I see your point, but I think it depends on what kind of strategy the spreading superintelligence is using and on what wars would look like in general. For example, the universe probably mostly doesn't resist, so it might be sending small "conversion" probes everywhere to expand as fast as possible. In that case, any actual opponent might be able to easily repel them and start getting ready to present a serious defence by the time any dedicated offensive force is sent, so the additional forewarning of having a signal travel slightly further ahead wouldn't really change anything, and might prevent an opponent from emerging in the first place.
(On the other hand, maybe the conversion probes it sends are smart enough to detect any signal originating from their destination and stop flying/change course if it looks like it might resist. But maybe any superintelligence is on the lookout for extremely fast-travelling objects that behave like this and would notice anyway.)
↑ comment by wedrifid · 2013-03-16T07:55:50.574Z · LW(p) · GW(p)
A sufficiently advanced AI should already be propagating at near the speed of light, which is why we needn't fear mere radiosignals: If there's such an entity in the neighborhood, its von Neumann probes will be the first sign we get.
Von Neumann probes don't allow propagation at near the speed of light. They are self replicating exploratory probes that send back information to a home system. That limits propagation to one third the speed of light. If the sufficiently advanced AI is already propagating at near the speed of light then the self replicating ships that are the first sign we get would have to be closer to seeders.
comment by Luke_A_Somers · 2013-03-15T13:56:57.362Z · LW(p) · GW(p)
The claims about Seed AI are not clear - it should read, 'as far as we know, a seed AI could be a few hundred kilobytes'. I rather suspect that a friendly utility function isn't that compressible.
The first point in the 'algorithm' (which isn't really an algorithm) is very silly - you can have multiple clocks at multiple points and open your shutters simultaneously in any reference frame. Moreover, he's postulating constructing a dyson sphere semaphoring a star to send interstellar signals? That's like postulating the modern US industrial economy to explain cutting down a tree.
Second point? You only really need the radio beacon. If you can build a Dyson sphere anywhere, you can outshine any old star on one narrow band by a factor of ten thousand, with a lot less to work with.
Third point? You don't need to go to images and this incredibly roundabout 'teach them basic electronics' problem. Just go straight to the math. You can introduce number representation very easily. You don't need to resort to drawing diagrams.
Replies from: Adele_L, turchin↑ comment by Adele_L · 2013-03-15T20:39:53.185Z · LW(p) · GW(p)
The claims about Seed AI are not clear - it should read, 'as far as we know, a seed AI could be a few hundred kilobytes'. I rather suspect that a friendly utility function isn't that compressible.
Actually, we do have reasons to believe a seed AI could be expressed in a few hundred kilobytes.
By Chaitin's incompleteness theorem, there is a constant L, depending only on our axiom system and choice of language, for which we cannot prove that any string s has Kolmogorov complexity greater than L.
An estimate by a friend of John Baez puts this constant for Peano arithmetic and Python somewhere less than a few kilobytes. The entire blog post is worth reading for the intuition and exposition.
Edit: It has been pointed out that this is barely a reason for it to be small, if that, however I think it is still an interesting side note.
Replies from: OrphanWilde, None, Manfred↑ comment by OrphanWilde · 2013-03-15T21:39:48.893Z · LW(p) · GW(p)
There's a gulf of difference between being unable to prove that -any- string has complexity greater than some constant and actually being able to specify -arbitrary- strings using less than that constant in computational resources.
Chaitin's Incompleteness Theorem doesn't limit complexity, only -provable- complexity.
↑ comment by Manfred · 2013-03-15T21:27:19.917Z · LW(p) · GW(p)
By Chaitin's incompleteness theorem, there is a constant L, depending only on our axiom system and choice of language, for which we cannot prove that any string s has Kolmogorov complexity greater than L.
I dunno, that sounds a bit too fishy. It's like the proof that there are no uninteresting numbers.
Couldn't one simply write a program that takes a bit string, and then searches through programs until it finds one that outputs that bit string? This will always halt if every bit string has a Kolmogorov complexity. Then you iterate through bit strings, looking for one that has greater than a certain complexity. We know there are arbitrarily large complexities because you'd run out of programs otherwise, so this works. Does this work because it uses a system weaker than PA? Does it just fail?
Replies from: endoself, OrphanWilde, Qiaochu_Yuan, Kawoomba, Adele_L↑ comment by endoself · 2013-03-15T21:50:28.110Z · LW(p) · GW(p)
Say your bit string is s and the program you find that outputs s is p. Now you know that K(s) ≤ |p|, meaning the length of p. However, we wanted to show that K(s)>L, so the bound is in the wrong direction. We need to show that there's no shorter program that also outputs s, which is impossible by Chaitin's theorem.
The actual proof of Chaitin's theorem is somewhat similar to your argument here. Basically, if we could prove that some string has Kolmogorov complexity greater than L, we could write a program that searches through all proofs until it finds a proof that some string s has complexity greater than L, and then outputs that string. By choosing L sufficiently large, this program itself has complexity less than L, but it outputs s, contradicting K(s)>L.
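A rough sketch of that argument as code, with enumerate_proofs and string_shown_to_exceed as hypothetical placeholders for a proof enumerator and proof checker over the chosen axiom system (conceptual pseudocode of the standard construction, not runnable as written):

def chaitin_witness(L):
    # Search all proofs of the axiom system, shortest first.
    for proof in enumerate_proofs():              # hypothetical
        s = string_shown_to_exceed(proof, L)      # hypothetical: returns s if the
        if s is not None:                         # proof establishes K(s) > L
            return s
    # If such a proof existed, this program would output s.  But the program's
    # own length is roughly a constant plus log2(L), which is below L for large
    # enough L, so s would have a description shorter than L -- contradicting
    # K(s) > L.  Hence no such proof exists.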
↑ comment by OrphanWilde · 2013-03-15T22:04:20.155Z · LW(p) · GW(p)
The proof works (roughly) like this:
For any algorithm PP that can prove a string has complexity X (where complexity is the minimum amount of informational content necessary to specify it):
This algorithm is encoded by program OC, which iterates through every possible string until it finds one of complexity X>L. (L is at this point undefined.)
The program OC has complexity M. (This is the sneaky bit right here. You'll see why in a moment.)
Any string output by OC has complexity M as an upper bound (being specified by program OC). Thus, L has an upper bound of M. Therefore, any algorithm PP cannot prove any string has a greater complexity than that necessary to specify the program OC which encapsulates it.
(This is a -very- rough approximation of the proof. Wikipedia's is worse, at least in my view.)
Note that this doesn't limit the complexity of any strings so much as the power of any algorithm PP and derivative program OC.
↑ comment by Qiaochu_Yuan · 2013-03-15T21:38:50.237Z · LW(p) · GW(p)
Then you iterate through bit strings, looking for one that has greater than a certain complexity.
Kolmogorov complexity isn't computable, so how do you do this?
Replies from: gwern↑ comment by gwern · 2013-03-15T21:49:52.246Z · LW(p) · GW(p)
Enumerate and run every bitstring, increasing in length, until one emits the requisite bitstring; by construction, this program is the shortest bitstring which emits it and so the program is the Kolmogorov complexity. Many of these programs will not terminate, so you use the dovetail strategy to allocate computing time to every program.
Replies from: Baughn, asr, OrphanWilde↑ comment by Baughn · 2013-03-15T23:01:44.270Z · LW(p) · GW(p)
Many of these programs will not terminate
Unfortunately, you cannot prove whether or not an arbitrary program will terminate, meaning the best this scheme will do is provide an upper bound for KC.
Without waiting forever there's no way of knowing if one of the programs smaller than that bound, which hasn't terminated yet, isn't going to eventually terminate and output the string.
Replies from: gwern↑ comment by asr · 2013-03-16T14:09:54.508Z · LW(p) · GW(p)
Gwern: I think you understand this, but for the benefit of other readers:
The strategy of enumerate, run in parallel, and pick the first to halt doesn't give Kolmogorov complexity. It gives an upper bound. There might be some shorter program that will halt and give the appropriate output, but it just hasn't gotten there yet when you find the first thing that halts.
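A minimal sketch of that dovetailing search, assuming a hypothetical run_for_steps(program, n) that runs a bitstring program on some fixed universal machine for n steps and returns its output if it has halted (None otherwise). The value it returns is only an upper bound on K(target):

from itertools import count

def kc_upper_bound(target, run_for_steps):
    # Round `budget`: try every program of length <= budget for `budget` steps.
    # Dovetailing guarantees any halting program is eventually reached, but a
    # shorter program that needs more steps may still exist, so the returned
    # length is an upper bound, not the exact Kolmogorov complexity.
    for budget in count(1):
        for length in range(1, budget + 1):
            for code in range(2 ** length):
                program = format(code, "0{}b".format(length))
                if run_for_steps(program, budget) == target:
                    return length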
↑ comment by OrphanWilde · 2013-03-15T22:10:15.555Z · LW(p) · GW(p)
Note that this only proves the minimum complexity in the system used to run this operation; it could have a different complexity in a different system.
[ETA]: Also, I think this runs into the Chaitin issue. (Personally, I think the issue is with the flawed definition of "complexity" such that it incorporates only the size of the reference, and not the processing power necessary to disentangle the target from the reference.)
↑ comment by Kawoomba · 2013-03-15T21:47:24.356Z · LW(p) · GW(p)
I dunno, that sounds a bit too fishy.
If there weren't such a constant, I think it would follow that in effect K.C. wouldn't be generally incomputable. (I may dwell on it further, once I'm sober in the morrow.)
(...) looking for one that has greater than a certain complexity.
The problem with your approach is that K. C. isn't computable, so you wouldn't know if you've found the exact K.C. of that bit string. Even iterating through all programs that generate it wouldn't give you the answer to that one, since you're not iterating from lowest K.C. to highest, but only from the uncompressed smallest program upwards.
Replies from: Manfred↑ comment by Manfred · 2013-03-15T21:53:49.566Z · LW(p) · GW(p)
The problem with your approach is that K. C. isn't computable, so you wouldn't know if you've found the exact K.C. of that bit string. Even iterating through all programs that generate it wouldn't give you the answer to that one, since you're not iterating from lowest K.C. to highest, but only from the uncompressed smallest program upwards.
Ah, right, halting problem. Can't do step 1. Okay.
↑ comment by Adele_L · 2013-03-15T21:41:33.608Z · LW(p) · GW(p)
First, I don't understand this domain well enough to pinpoint why that doesn't work, but I trust in the math enough to believe the result regardless.
That said, I don't think that you can write a program which searches through programs until it finds one which outputs a specific string, that also halts. It seems like it could always get stuck on some program that doesn't halt, and it can't figure out if a given program will halt or not.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-03-16T01:21:52.915Z · LW(p) · GW(p)
You could have it do a few steps of each program at a time. Then it doesn't get stuck on the non-halting programs, they just eat up more and more of its resources.
comment by ChristianKl · 2013-03-17T01:39:47.146Z · LW(p) · GW(p)
I think the problem from moving from online to offline is overrated. You don't even need an AI that's smart enough to create nanobots via DNA to gain world domination.
The AI can impersonate other humans by faking emails, phone calls and video chats. It can also simply pay humans to do services for it.
An AI that can simulate 1000 people with an IQ of 180 for every 1,000,000 home computers in its network might be capable of achieving world domination even if it can't improve its capabilities very fast.
comment by Izeinwinter · 2013-03-15T20:04:46.272Z · LW(p) · GW(p)
Oh FFS. If an alien origin artificial intelligence explosion occurred in our past lightcone, it was non-hostile, or at least not a paper-clip optimizer. And either it just flat out did not care about the stars or it is already here, studying us from vantage points immune to our perception.
Which is not a difficult feat : Miniaturization and predictive avoidance would do it. We could be living in a full panopticon and never know, as long as the individual motes are sufficiently small to avoid direct perception and sufficiently mobile to not get caught in instrumentation. Hmm. As a theory for why the universe hasn't been turned into computing hardware for a run-away machine, universal dispersal of a friendlish (It obviously does not do requests) AI is quite reasonable. If this is indeed the case, anyone building a /hostile/ AI will just be stopped by the security hardware laid down by the ancients.
Note: If such a system does not already exist, upon success of Friendly AI, have one built.
Star-travel is a difficult feat for biological entities. Given the level of competency you are ascribing to alien AI, the gap between the stars would be trivial, and there would be no need, nor indeed any point to relying on local assistance. The time frame of complex life on planets is enormously long, and once an intelligence has spawned an AI, machines could easily outlast both that biosphere, and the local star. Alien AI that could be of concern to us is exceedingly unlikely to be of recent origin, which means it either does not exist, or it is already in the solar system. And has been here since before man discovered fire.
Replies from: David_Gerard, Baughn↑ comment by David_Gerard · 2013-03-16T10:34:49.910Z · LW(p) · GW(p)
Given the level of competency you are ascribing to alien AI
Yes. This post is writing a scary story, then being convinced by how scary it is. "You can't prove it's impossible!" is not a reason to waste any effort considering this negligible probability, just because humans are very bad at ignoring negligible probabilities.
I'm wondering to what degree scary campfire stories for amateur philosophers could be said to be a local literary form.
↑ comment by Baughn · 2013-03-15T22:55:24.333Z · LW(p) · GW(p)
Not exactly like that, I hope. Death still sucks, and adding an afterlife isn't a dramatic improvement.
Replies from: Izeinwinter↑ comment by Izeinwinter · 2013-03-16T10:35:08.074Z · LW(p) · GW(p)
.. The prime directive is bullshit, but I am actually having some considerable difficulty thinking of an appropriate protocol for dealing with alien life from the perspective of deep time. When sending out a swarm of AI to safeguard against someone else doing something deeply stupid to the universe at large, the only things it is at all likely to encounter are apex civilizations that have already successfully dealt with these issues, as demonstrated by their not being dead (and - having home court advantage, such societies will eat it for lunch if it tries anything at all they do not like) or ecosystems which do not yet have tool users in them at all. In the second case, it can dig in and wait. But having it start granting wishes to the first proto-sapient to evolve does not seem.. advisable. The minimal-intervention rule would be "Do not permit anyone to inflict damage to the universe/galaxy at large" but there are a whole bunch of options escalating from there.
Paranoia: Can anyone think of a good way to check for already installed hardware of this type? EMP a random spot and go through the dust with a microscope?
Replies from: Baughn↑ comment by Baughn · 2013-03-16T12:33:17.417Z · LW(p) · GW(p)
Well. Given my opinion on the ethicalness of nature, my own instructions to such a swarm would be to destroy all life. Through uploading, for the smarter parts, but at any rate stop nature from existing.
It might also be nice to shut off all the stars, because they're really wasting a lot of energy.
comment by Dr_Manhattan · 2013-03-15T12:50:42.023Z · LW(p) · GW(p)
This possibility was also floated in http://en.wikipedia.org/wiki/His_Master's_Voice_(novel)
Replies from: MichaelHoward↑ comment by MichaelHoward · 2013-03-15T17:07:16.268Z · LW(p) · GW(p)
comment by ygert · 2013-03-15T13:41:28.390Z · LW(p) · GW(p)
By the way, you have misspelled Yudkowsky's name more than once. There were occurrences where you did get the spelling right, though.
Replies from: curiousepic
comment by Tenoke · 2013-03-15T13:08:05.765Z · LW(p) · GW(p)
There are many points that can be nitpicked in the paper, but I currently don't have the time. I just want to point out that there will be no possible communication (on a reasonable time scale) with any civilization that is not in really close proximity to us, so it will be impossible for them to receive further messages from us, ask us anything, convince us of anything, or develop specific strategies for conquering us. In order to mount this 'SETI attack' on a large scale they need to transmit the same message as far as they can, targeting the average advanced civilization without any feedback. Such an attack is also probably likely only for a really small subset of possible highly intelligent civilizations, as there are so many prerequisites.
Also http://en.wikipedia.org/wiki/Self-replicating_spacecraft
Replies from: turchin↑ comment by turchin · 2013-03-15T21:57:07.944Z · LW(p) · GW(p)
They don't need back-communication to start the attack. They just need to transmit the same message unstoppably for many millions of years. It may look like a waste of time, but many Earthly trees send out their seeds without any feedback. It is just reproduction.
Replies from: Tenoke↑ comment by Tenoke · 2013-03-15T22:19:38.704Z · LW(p) · GW(p)
"Unknown threat con" - in this scenario bait senders report that a certain threat hangs over on humanity, for example, from another enemy civilization, and to protect yourself, you should join the putative “Galactic Alliance” and build a certain installation.
"Tireless researcher con" - here senders argue that posting messages is the cheapest way to explore the world. They ask us to create AI that will study our world, and send the results back
A defensive posture regarding interstellar communication is only to listen, not sending anything that does not reveal its location. The laws prohibit the sending of a message from the United States to the stars.
Those are some of the examples where you talk about back communication and this is what I was referring to.
Edit:
My opinion is that SETI attack first stage is less then 1 Gb, not hundreds of kilobytes. It could later download addititonal material and it should not be Friendly.
How?? You seem to again be assuming that there is real-time two-way communication over interstellar distances.
Replies from: turchin↑ comment by turchin · 2013-03-16T07:19:12.743Z · LW(p) · GW(p)
No, I don't assume back-communication. They ask about it, but it is only a trick to push us to build their computer. Downloading additional information could be done from a secret pre-existing channel, like a third radio transmitter with a highly encrypted code.
Replies from: Tenoke↑ comment by Tenoke · 2013-03-16T11:15:17.070Z · LW(p) · GW(p)
Downloading additional information could be done from a secret pre-existing channel,
Fair enough, that could work but there are limitations on this new channel in terms of its position (can't be close to the first signal) and the same limitations for message length. Additionally you need to send the key for the decryption of the second message with the first message.
They ask about it, but it is only a trick
Surely we will know where the message is originating from, and will know how long it would take us to send them a message back, and thus they can't trick us into believing that there is backwards communication when there isn't. Especially so when we see that the message stays the same and is repeating itself in every direction.
comment by turchin · 2013-03-15T10:28:54.513Z · LW(p) · GW(p)
I don't know how to change the font size, sorry for the large size.
Replies from: ygert, curiousepic, Vladimir_Nesov↑ comment by ygert · 2013-03-15T13:43:01.351Z · LW(p) · GW(p)
To me at least, the large font size is really really annoying. If anybody knows how to fix it, please speak up.
Replies from: Tenoke↑ comment by Tenoke · 2013-03-15T14:30:31.166Z · LW(p) · GW(p)
A simple fix is to just reduce the size of text in your browser when reading it by clicking ctrl and minus or control and scroll-down or whatever the shortcut is for your browser. It's not the best solution and it is client-side but it helps reading things like that if you get annoyed by it.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2013-03-15T16:50:44.772Z · LW(p) · GW(p)
This post will be seen by hundreds or thousands of humans. Asking each one to individually adjust their equipment to make it readable is a huge waste of human time. The submitter should fix it in the source.
Replies from: Tenoke↑ comment by curiousepic · 2013-03-15T14:01:34.131Z · LW(p) · GW(p)
Try editing the article, selecting all and cutting the text, then using ctrl-shift-V to paste without formatting (my new favorite thing). Hopefully it will use the default text size.
↑ comment by Vladimir_Nesov · 2013-03-15T18:46:23.896Z · LW(p) · GW(p)
Fixed. (This was exceptionally jarring. One should resolve such issues before posting.)
comment by Decius · 2013-03-15T15:53:04.574Z · LW(p) · GW(p)
Describe a universal way of encoding a 3d image (example: x,y, contents) into a 2d message (sequence, intensity; a binary sequence is the simplest method), without making noncommunicable assumptions such as left-to-right.
Alternately, describe how to decode a self-documenting encoding of any type, using any means except knowing the encoding.
Replies from: Luke_A_Somers, Richard_Kennaway, Kawoomba↑ comment by Luke_A_Somers · 2013-03-15T16:56:59.015Z · LW(p) · GW(p)
You receive a signal flashed in two color channels. Both off, I'll show as space, and for a lot of space, return. One on is 1, the other on is 2, and both on is 3. You receive:
- 21 31221 1
- 1 32 21 31221 21
- 21 32 21 31221 221
- 11 32 21 31221 2221
- 221 32 21 31221 22221
- 121 32 21 31221 222221
- 211 32 21 31221 2222221
- 111 32 21 31221 22222221
- 2221 32 21 31221 222222221
1111 32 21 31221 2222222222222221
111 32 11 32 221
- 2121121 32 121221 32 121211
- 1 31 1 32 21
121221 31 121211 32 2121121
2
- 2 3211 1
- 2 3211 21
- 2 3211 11
- 2 3211 221
- 2 3211 121
- 2 3211 211
- 2 3211 111
- 2 3211 122212111211221
- 2 3211 122112221212111
- 1 3211 1 32 1
- 11 3211 121 32 1111
- 11 3211 21 31 1 32 111
11 31 21 3211 1 32 121
2 32221 21 32 1
- 1 32221 21 32 21
- 21 32221 21 32 221
- 11 32221 21 32 2221
- 21 32221 11 32 1221
- 221 32221 21 32 22221
- 221 32221 11 32 1222121
- 21 32221 11 3211 21 32 21221
21 3211 11 32221 21 32 2222221
21 32221 111 3211 1211 31 1221 32
What do you reply?
Assuming you got that, there's more...
- 312221 12221121 111 3312221121 32 111
- 111 32 3312221121
- 312221 12221121 121 3312221121 32 121
111 32 3312221121 32 21
322221 12221121 3332 3312221121 32 3312221121 3331
- 322221 12221121 3332 2 3211 3312221121 3331
322221 12221121 212211 3332 3332 3312221121 31 33212211 3331 32 3332 33212211 31 3312221121 3331 3331
322221 12121 3332 21 32221 3312121 32 3332 21 32221 3332 3312121 32 1 3331 3331
↑ comment by Antisuji · 2013-03-16T18:59:54.791Z · LW(p) · GW(p)
Shouldn't these lines
- 111 32 11 32 221
- 2121121 32 121221 32 121211
be
- 111 32 11 31 221
- 2121121 32 121221 31 121211
? Or do I misunderstand? [Edit: I misunderstood :) — never mind.]
Also, the last line of the first part seems ambiguous, since gur beqre bs bcrengvbaf unf abg orra rfgnoyvfurq nf sne nf v pna frr.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-03-16T19:19:09.757Z · LW(p) · GW(p)
- 11 3211 21 31 1 32 111
11 31 21 3211 1 32 121
21 32221 11 3211 21 32 21221
- 21 3211 11 32221 21 32 2222221
↑ comment by Antisuji · 2013-03-16T19:54:59.941Z · LW(p) · GW(p)
21 32221 11 3211 21 32 22221
Do you mean
21 32221 11 3211 21 32 221221
?
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-03-16T20:51:59.443Z · LW(p) · GW(p)
21 32221 11 3211 21 32 21221
I think that you, like I, just messed up the arithmetic there.
↑ comment by ArisKatsaris · 2013-03-16T00:22:11.458Z · LW(p) · GW(p)
Can I ask for a minor correction on the line that says: "11 3211 101 32 1111" -- you've not defined what 0 means, so is it meant to be a space or a 2 instead? (probably the latter) Thanks.
ETA: I think the line above it may have a minor mistake too, "122212111211221" bhtug or gur bgure jnl nebhaq?
ETA2: I think a second problem with 33112221121 in the penultimate line -- one of the ones should be missing I think. If I'm wrong I've probably messed up my interpretation
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-03-16T01:12:56.555Z · LW(p) · GW(p)
Your first and third corrections are right (and doh! Slippy fingers!)
The second stands. I've added another line there.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2013-03-16T01:48:58.425Z · LW(p) · GW(p)
Vs V'z evtug naq gung frpgvba vf nobhg cevzrf, gurfr ahzoref pna bayl or qrpvcurerq nf cevzrf vs gur beqrevat jnf ovt-raqvna (zbfg fvtavsvpnag qvtvg svefg), nf vg'f va uhzna hfntr bs Nenovp ahzrenyf -- ohg va gur erfg bs gur pbagrag lbh tvir, nyy gur bgure ahzoref zhfg or qrpvcurerq va yvggyr-raqvna beqre (yrnfg fvtavsvpnag qvtvg svefg)...
Thanks for the puzzle btw, it's great fun. I'll continue working on it tomorrow (it's getting late where I live). :-)
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-03-16T02:37:05.607Z · LW(p) · GW(p)
Added a few more lines. By including only the things I could do off the top of my head, I restricted myself to too-small numbers and gave you the wrong idea.
↑ comment by ArisKatsaris · 2013-03-15T23:25:15.614Z · LW(p) · GW(p)
Please don't give answer just yet, I've solved parts of it and I think I'm close to solving rest of it as well.
↑ comment by Decius · 2013-03-16T02:48:34.221Z · LW(p) · GW(p)
That can be simplified to the level of illumination of Io and Ganymede as seen from Triton, accounting for all eclipses (probably not literally, but there are natural phenomena which produce patterns at least as interesting; see pulsars).
Since it's more likely that a natural phenomenon created this pattern of observations than that positional notation and time-ordering are shared by a given ETI, I would observe and try to understand the natural system which created this pattern.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-03-16T14:13:47.864Z · LW(p) · GW(p)
Doubtful.
↑ comment by Richard_Kennaway · 2013-03-17T08:30:15.138Z · LW(p) · GW(p)
It may be worth catching up on the prior art here. Hans Freudenthal's LINCOS was developed as a method of communicating with aliens of any sort, using nothing but radio pulses.
Replies from: Decius↑ comment by Kawoomba · 2013-03-16T08:56:00.971Z · LW(p) · GW(p)
The hard step is only in establishing the very first few conventions, after that it becomes trivial.
Take a binary-colored picture of a circle (outline only), on a square background. Just transmit one line after the next (all appended), for linebreaks use a sequence that doesn't otherwise occur, e.g. '11'. Every optimizer worth its salt should figure out that the least complex / most compressible representation of that overall pattern will be to break up the transmission at those linebreaks such that the '1's representing the circle are close to each other, forming a circle.
Vary with various image sizes, to establish that point. Since it's symmetrical, left-to-right and right-to-left doesn't matter. Then you can start transmitting various black and white pictures of stars and their spectra, assigning other encodings to them (if you insist on colors).
There may be a surprise if the whole time the aliens thought the pictures were meant to be interpreted upside down, and wonder why you're not standing on your head when they meet us. But the gist should get through.
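A toy sketch of that scheme in Python (the 15-pixel size, the thin outline and the '11' row delimiter are illustrative choices; the sketch assumes the delimiter never occurs inside a row, which is exactly what the replies below pick at):

def circle_outline(n):
    # n x n binary image of a circle outline: '1' = on pixel, '0' = off.
    c = (n - 1) / 2.0
    r = c - 0.5
    rows = []
    for y in range(n):
        row = ""
        for x in range(n):
            d = ((x - c) ** 2 + (y - c) ** 2) ** 0.5
            row += "1" if abs(d - r) < 0.5 else "0"
        rows.append(row)
    return rows

rows = circle_outline(15)
print("\n".join(rows))     # the picture as we intend it to be reassembled

message = "11".join(rows)  # the flat transmission, rows delimited by '11'
print(message)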
Replies from: wedrifid, Decius↑ comment by wedrifid · 2013-03-16T09:14:54.427Z · LW(p) · GW(p)
There may be a surprise if the whole time the aliens thought the pictures were meant to be interpreted upside down, and wonder why you're not standing on your head when they meet us. But the gist should get through.
If, once you finally meet, the alien greets you and holds out his left hand to shake... do not touch it!
Replies from: Kawoomba↑ comment by Decius · 2013-03-17T01:11:50.987Z · LW(p) · GW(p)
The first convention is that the sequence is coded by flashes of intensity distinct in time with a beginning and end. (rather than the information being the Fourier transform of the light wave, or any other property of light).
Once we have established what 1 and 0 are, how to decode an ordered string of them, and that we are drawing a picture with a bitmap (as opposed to a vector encoding, or an image encoding foreign to human computer science), we have to establish that we are using scanlines (as opposed to any other way of ordering a bitmap). We also need a line break sequence which is guaranteed to never occur outside a line break; that means that the line break pattern has to be a sequence of bits which cannot occur within the line. (not 'doesn't occur in this particular image') That requirement breaks any simple binary encoding.
Something as simple as transmitting the image using a different order for the pixels, like a spiral on a hexagonal grid, would be difficult to decode. Something complicated, like encoding the message into a transform of the wave or an interference pattern of two waves, would be impossible to notice even if the sending civilization was using electromagnetic radiation to send their message.
I'm also not sure why an image of a particular star or geometric figure would be first; I'd transmit the cosmic background radiation as the first image. That allows the receiver to use their own observations to confirm their understanding of our encoding.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-03-17T07:38:27.726Z · LW(p) · GW(p)
Sending strange patterns on the same frequency is a good way to assure that our signal - if received - gets classified as 'generated by an unknown phenomenon'. Unless we're transmitting on many frequencies or change the amplitude (signal strength), the Fourier transform would just yield a single number. If all we vary are the times between bursts, it should be quite clear that the information lies somewhere in the time between bursts. I'm no expert in this, though (shrug).
We also need a line break sequence which is guaranteed to never occur outside a line break; that means that the line break pattern has to be a sequence of bits which cannot occur within the line. (not 'doesn't occur in this particular image') That requirement breaks any simple binary encoding.
You're thinking about establishing the final encoding that can be used for all subsequent communications, but that's not necessary. These aren't the Golden Plates which need to contain everything we'll "ever" communicate (although their approach is relevant to our discussion, it's a different scenario).
The one thing that (nearly?) any optimizer should be able to do (to ever have evolved in the first place) is to notice patterns in its environment, and to have a tendency to compress those patterns into their simplest representations (model building). Only when arranging the lines such that a circle (and a line on one side representing the '11' line breaks) emerges is the pattern simplest to describe.
At some later point we can still move to a more sophisticated line break representation, slowly varying the encoding of that baseline calibration picture, we could even keep the '11' for nostalgia's sake.
I'm also not sure why an image of a particular star or geometric figure would be first;
Using cosmic background radiation introduces new elements to be figured out (e.g. how you visualize frequencies). Anyways, we're not bandwidth limited in any meaningful sense, so there's no need to rush things. (Re: circle - see above)
Replies from: Decius↑ comment by Decius · 2013-03-18T04:04:46.037Z · LW(p) · GW(p)
How are you modulating a carrier wave if you aren't varying frequency or amplitude?
Would you notice a transmission which consisted of a constant illumination equivalent to that produced by a number of lasers with frequencies that were linked to powers of two? Instead of "On, on, off, off, on, off" separated by time, there would be a single signal which would scope to the same wave as "sin(x)+sin(2x)+sin(16x)" or "sin(x)+1/2sin(2x)+1/16sin(16x)"
Meanwhile, because we're broadcasting AM broadcasts on many different frequencies, they're trying to figure out
a:Why and how our transmitter is failing intermittently on such a fast scale
b:What our baseline frequency is.
c:How to decode the vast wealth of information they have.
If all we do is notice patterns and automatically ascribe meaning to them, we end up looking at pulsars. For that matter, what evidence do we have that pulsars aren't the result of intelligent communication? Can you construct a 'universal' encoding which could be communicated using only the properties of pulsars? Could you decode such an encoding?
comment by ikrase · 2013-03-15T15:20:14.095Z · LW(p) · GW(p)
I kind of wonder about the motivations for doing this sort of thing. The one that comes immediately to mind is an unfriendly AI trying to conquer the world.
Replies from: turchin
comment by private_messaging · 2013-03-19T08:09:52.696Z · LW(p) · GW(p)
Two words: paranoid stupidity.
Few more words, actually. Get out of paranoid mode and try to figure out what the Hubble Space Telescope would end up seeing in your imaginary world of star-eating alien AIs. A futile proposal, of course - you know what HST sees, so you'll just fit as many arbitrary assumptions as it takes to get the correct picture.
comment by Elithrion · 2013-03-15T20:17:32.677Z · LW(p) · GW(p)
Overall, sounds interesting, although I think the final percentage is significantly overestimated (but my thoughts aren't sufficiently certain on this that I want to bother arguing over details).
Also, what happened to objections 4-6?
Replies from: turchin