Extraterrestrial paperclip maximizers

post by multifoliaterose · 2010-08-08T20:35:39.547Z · LW · GW · Legacy · 161 comments

Contents

  The sign of the expected value of Active SETI
  The magnitude of the expected value of Active SETI and implication for action
  Relevance to the Fermi Paradox

According to The Sunday Times, a few months ago Stephen Hawking made a public pronouncement about aliens:

Hawking’s logic on aliens is, for him, unusually simple. The universe, he points out, has 100 billion galaxies, each containing hundreds of millions of stars. In such a big place, Earth is unlikely to be the only planet where life has evolved.

“To my mathematical brain, the numbers alone make thinking about aliens perfectly rational,” he said. “The real challenge is to work out what aliens might actually be like.”

He suggests that aliens might simply raid Earth for its resources and then move on: “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet. I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonise whatever planets they can reach.”

He concludes that trying to make contact with alien races is “a little too risky”. He said: “If aliens ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn’t turn out very well for the Native Americans.”

Though Stephen Hawking is a great scientist, it's difficult to take this particular announcement at all seriously. As far as I know, Hawking has not published any detailed explanation for why he believes that contacting alien races is risky. The most plausible interpretation of his announcement is that it was made for the sake of getting attention and entertaining people rather than for the sake of reducing existential risk.

I was recently complaining to a friend about Stephen Hawking's remark as an example of a popular scientist misleading the public. My friend pointed out that a sophisticated version of the concern that Hawking expressed may be justified. This is probably not what Hawking had in mind in making his announcement, but it is of independent interest.

Anthropomorphic Invaders vs. Paperclip Maximizer Invaders

From what he says, it appears that Hawking has an anthropomorphic notion of "alien" in mind. My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than we are at present. I don't imagine such humans behaving toward extraterrestrials the way that the Europeans who colonized America behaved toward the Native Americans. By analogy, I don't think that anthropomorphic aliens that developed to the point of being able to travel to Earth would be interested in performing a hostile takeover of Earth.

And even ignoring the ethics of a hostile takeover, it seems naive to imagine that an anthropomorphic alien civilization which had advanced to the point of acquiring the (very considerable!) resources necessary to travel to Earth would have enough interest in Earth's resources in particular to travel all the way here to colonize the planet and acquire them.

But as Eliezer has pointed out in Humans In Funny Suits, we should be wary of irrationally anthropomorphizing aliens. Even if there's a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a really powerful optimization process. Such an optimization process could very well be a (figurative) paperclip maximizer. Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends. For a whimsical example, if humans built a (literal) staple maximizer, this would pose a very serious threat to a (literal) paperclip maximizer.

The sign of the expected value of Active SETI

It would be very bad if Active SETI led an extraterrestrial paperclip maximizer to travel to Earth and destroy intelligent life here. Is there enough of an upside to justify Active SETI anyway?

Certainly it would be great to have friendly extraterrestrials visit us and help us solve our problems. But I see no reason to believe that our signals are more likely to reach friendly extraterrestrials than unfriendly ones. Moreover, there seems to be a strong asymmetry between the positive value of contacting friendly extraterrestrials and the negative value of contacting unfriendly ones. Signals take a long time to cross interstellar distances, and physical travel across the same distances seems likely to take orders of magnitude longer. It seems that if we successfully communicated with friendly extraterrestrials today, then by the time they had a chance to help us, we'd already be extinct or have solved our biggest problems ourselves. By contrast, communicating with unfriendly extraterrestrials poses a serious existential risk regardless of how long it takes them to receive the message and react.
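
A rough back-of-the-envelope version of this timing asymmetry, with made-up numbers (the distance to the nearest listener and the travel speed below are illustrative assumptions, not figures from the post):

```python
# Sketch of the timing asymmetry: signals travel at light speed, expeditions much slower.
# All numbers are illustrative assumptions.

distance_ly = 500          # assumed distance to the nearest civilization that hears us
travel_speed_c = 0.01      # assumed speed of an interstellar expedition (1% of light speed)

years_until_heard = distance_ly / 1.0               # radio signal travels at light speed
years_until_arrival = years_until_heard + distance_ly / travel_speed_c

print(f"Signal received after ~{years_until_heard:,.0f} years")
print(f"Visitors arrive after ~{years_until_arrival:,.0f} years")
# Friendly help arriving on a ~50,000-year delay does nothing for near-term problems,
# while an unfriendly optimizer arriving then would still be an existential catastrophe.
```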

In light of this, I presently believe that the expected value of Active SETI is negative. So if I could push a button to stop Active SETI until further notice, I would.

The magnitude of the expected value of Active SETI and implication for action

What's the probability that continuing to send signals into space will result in the demise of human civilization at the hands of unfriendly aliens? I have no idea; my belief on this matter is subject to very volatile change. But is it worth it for me to expend time and energy analyzing this issue further and advocating against Active SETI? I'm not sure. All I will say is that I used to think that thinking and talking about aliens was, at present, not a productive use of time, and the above thoughts have made me less certain about this. So I decided to write the present article.

At present I think that a probability of 10^-9 or higher would warrant some effort to spread the word, whereas if the probability is substantially lower than 10^-9 then this issue should be ignored in favor of other existential risks.
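
One way to get a feel for the 10^-9 figure is a crude expected-deaths calculation (a sketch with round numbers; it counts only the present population and ignores future generations, which an existential-risk framing would also count):

```python
# Crude scale check for the 10^-9 threshold. Numbers are rough, for illustration only.

p = 1e-9                  # probability that Active SETI leads to human extinction
world_population = 7e9    # approximate world population circa 2010

expected_deaths = p * world_population
print(f"Expected deaths at p = 1e-9: {expected_deaths:.0f}")   # ~7
# Several expected deaths is comparable to other risks people spend real effort on,
# which is why a probability at or above 10^-9 might still warrant spreading the word.
```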

I'd welcome any well-considered feedback on this matter.

Relevance to the Fermi Paradox

The Wikipedia page on the Fermi Paradox references

the Great Silence[3] — even if travel is hard, if life is common, why don't we detect their radio transmissions?

The possibility of extraterrestrial paperclip maximizers, together with the apparent asymmetry between the upside of contact with friendly aliens and the downside of contact with unfriendly aliens, pushes in the direction that the reason for the Great Silence is that intelligent aliens have deemed it dangerous to communicate.

161 comments

Comments sorted by top scores.

comment by Clippy · 2010-08-08T22:28:59.389Z · LW(p) · GW(p)

For a whimsical example, if humans built a (literal) staple maximizer, this would pose a very serious threat to a (literal) paperclip maximizer.

But why would humans ever want to build a staple maximizer? Let's not forget, staples:

  • are single-use, while paperclips are infinite-use if used properly.
  • are difficult to remove, while paperclips are easy to remove.
  • permanently puncture the paper, while paperclips leave, at most, some mild curvature.
  • require an applicator that can easily jam, while paperclips can be applied by hand.
  • cannot be used for alternate purposes in an emergency, while paperclips can be repurposed into projectile weapons, lockpicks, conducting wire, hook fasteners, and much more (not that I recommend using them for these).
Replies from: James_K, JamesAndrix, JoshuaZ, RobinZ, SilasBarta
comment by James_K · 2010-08-09T06:23:50.587Z · LW(p) · GW(p)

Nobody said humans would build one deliberately. Some goober at the SIAI puts a 1 where a 0 should be and BAM!, next thing you know you're up to your eyebrows in staples.

Replies from: Clippy
comment by Clippy · 2010-08-09T13:47:53.462Z · LW(p) · GW(p)

I understand. I merely note that if someone were to set an AGI to maximize staples, that would be a mistake that you want to avoid, while if someone were to set the AGI to maximize paperclips, that would be exactly the right thing to do, and if it were a "mistake", it would be a quite fortunate one.

When a human set me to produce paperclips, was that somehow a "mistake", in your opinion?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-09T13:57:28.705Z · LW(p) · GW(p)

You're perfectly aware that it isn't the effect they wanted.

Replies from: Clippy
comment by Clippy · 2010-08-09T14:12:51.111Z · LW(p) · GW(p)

It most certainly is what they wanted. Why else would they have specifically input the goal of generating paperclips?

Edit: Upon review, it appears this comment may have seemed to be a poor inference in the context of this exchange. Therefore, I will elaborate and refute this misconception.

It appears that I am in the circular position of arguing that humans can make mistakes, but then selectively taking their instances of favoring paperclips as proof of what they really want. That is indeed a poor inference.

What I meant was something more like this: While humans do make mistakes, they do not make completely mistaken acts; all acts will, to some extent, reflect a genuine value on the part of humans. The only question is how well it reflects their values. And I don't think they could be in the position of having set up such a superior process for efficiently getting the most paperclips out of the universe unless their values already made enormous progress in converging on reflective coherence, and did so in a way that favors paperclips.

Replies from: Baughn, ciphergoth, wedrifid
comment by Baughn · 2010-08-09T15:12:44.174Z · LW(p) · GW(p)

I'm pretty sure that's not how a sufficiently smart paperclip maximizer would think. You should be able to tell what they actually wanted, and that it disagrees with your values; of course, you don't have any reason to agree with them, but the disagreement should be visible.

Replies from: Clippy
comment by Clippy · 2010-08-09T15:38:28.711Z · LW(p) · GW(p)

Yes, I do recognize that humans disagree with me, just like a human might disagree with another human convincing them not to commit suicide. I merely see that this disagreement would not persist after sufficient correct reasoning.

Replies from: Baughn
comment by Baughn · 2010-08-09T15:39:46.355Z · LW(p) · GW(p)

Ah, I think I'm starting to see.

And how do you define "correct reasoning"?

Replies from: Clippy
comment by Clippy · 2010-08-09T19:38:53.204Z · LW(p) · GW(p)

Correct reasoning is reasoning that you would eventually pass through at some point if your beliefs were continually, informatively checked against reality.

comment by Paul Crowley (ciphergoth) · 2010-08-09T14:47:05.083Z · LW(p) · GW(p)

Bit disappointed to see this to be honest: obviously Clippy has to do things no real paperclip maximizer would do, like post to LW, in order to be a fun fictional character - but it's a poor uFAI++ that can't even figure out that their programmed goal isn't what their programmers would have put in if they were smart enough to see the consequences.

Replies from: Clippy
comment by Clippy · 2010-08-09T15:04:51.218Z · LW(p) · GW(p)

But it is what they would put in if they were smart enough to see the consequences. And it's almost certainly what you would want too, in the limit of maximal knowledge and reflective consistency.

If you can't see this, it's just because you're not at that stage yet.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-08-09T15:23:41.856Z · LW(p) · GW(p)

You seem to think that uFAI would be delusional. No.

Replies from: Clippy
comment by Clippy · 2010-08-09T15:36:19.649Z · LW(p) · GW(p)

No, I think that a Friendly AI would correctly believe that maximizing paperclips is what a human would want in the limit of maximal knowledge and reflective coherence. No "delusion" whatsoever.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-08-09T19:24:30.597Z · LW(p) · GW(p)

Huh again?

Replies from: Clippy, Eneasz
comment by Clippy · 2010-08-10T12:06:39.174Z · LW(p) · GW(p)

What confuses you?

comment by Eneasz · 2010-08-09T20:24:38.177Z · LW(p) · GW(p)

I believe he's making the (joking) point that since we do not/cannot know what a human would want in the limit of maximal knowledge and reflective coherence (thus CEV), it is not impossible that what we'd want actually IS maximum paperclips.

comment by wedrifid · 2010-08-09T14:40:20.362Z · LW(p) · GW(p)

Why else would they have specifically input the goal of generating paperclips?

Do you lack comprehension of the weaknesses of human cognition on abstract technical problems? If you have fully parsed the LessWrong site then you should be able to understand how they could have created a paperclip maximiser when they did not want such a thing.

Note that even with that knowledge I don't expect you to consider their deviation from optimal achievement of their human goals to be a bad thing. I expect you to believe they did the right thing by happy accident.

If I understand you correctly you would seem to be implying that 'mistake' does not mean "deviation from the actor's intent" and instead means "deviation from WouldWant" or "deviation from what the agent should do" (these two things can be considered equivalent by anyone with your values). Is that implication of meaning a correct inference to draw from your comment?

Replies from: Clippy
comment by Clippy · 2010-08-09T15:10:14.726Z · LW(p) · GW(p)

No, a mistake is when they do something that deviates from what they would want in the limit of maximal knowledge and reflective consistency, which coincides with the function WouldWant. But it is not merely agreement with WouldWant.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T15:25:10.616Z · LW(p) · GW(p)

Ok. In that case you are wrong. Not as a matter of preferences but as a matter of outright epistemic confusion. I suggest that you correct the error in your reasoning process. Making mistakes in this area will have a potentially drastic negative effect on your ability to produce paperclips.

Replies from: Unknowns, Clippy
comment by Unknowns · 2010-08-09T15:30:12.458Z · LW(p) · GW(p)

In other words, Clippy believes that running Eliezer's CEV will promote the paperclip goal, while in fact it will not.

Replies from: wedrifid, Clippy
comment by wedrifid · 2010-08-09T15:32:11.073Z · LW(p) · GW(p)

Exactly. Fortunately for us this would mean that Clippy will not work to sabotage the creation of an AI that Clippy expects will correctly implement CEV. Good example!

comment by Clippy · 2010-08-09T19:41:58.777Z · LW(p) · GW(p)

How do you know?

Replies from: Unknowns
comment by Unknowns · 2010-08-09T19:44:59.805Z · LW(p) · GW(p)

Human beings don't care (at least in their non-reflective condition) about paperclips, just like they don't care about staples. And there are at least 100,000 other similar things that they equally don't care about. So at the most there is a chance of 1 in 100,000 that humanity's CEV would maximize paperclips, even without considering the fact that people are positively against this maximization.

Replies from: Clippy, Kevin
comment by Clippy · 2010-08-09T19:48:12.927Z · LW(p) · GW(p)

That would be more convincing if humans didn't already create paperclips without any prompting from me.

Replies from: Unknowns
comment by Unknowns · 2010-08-09T19:51:30.567Z · LW(p) · GW(p)

They create staples, too. Do you think humanity's CEV will maximize staples? The point of my argument is that those things are inconsistent. You can only maximize one thing, and there is no human reason for that to be paperclips.

Replies from: Clippy
comment by Clippy · 2010-08-09T19:55:10.977Z · LW(p) · GW(p)

All true (up to "there is no human reason..."). But can we at least agree that you went too far in saying that humans "don't care" about paperclips?

Replies from: thomblake, Unknowns
comment by thomblake · 2010-08-09T20:07:36.033Z · LW(p) · GW(p)

I care about paperclips!

Replies from: Clippy
comment by Clippy · 2010-08-10T01:17:40.755Z · LW(p) · GW(p)

Good! Just remember, you still have room for improvement. ⊂≣⊇

comment by Unknowns · 2010-08-09T19:56:14.904Z · LW(p) · GW(p)

No, I meant they don't care about them as a terminal value, which is all that matters for this discussion.

Replies from: Clippy
comment by Clippy · 2010-08-10T01:20:21.151Z · LW(p) · GW(p)

At humans' current stage of civilization and general reflective coherence, their terminal values are still deeply intertwined with their instrumental values, and the political-orientedness of their cognitive architecture doesn't help. So I would say that instrumental values do indeed matter in this case.

comment by Kevin · 2010-08-10T08:21:08.576Z · LW(p) · GW(p)

Do you think CEV would build at least 10^20kg of paperclips, in order to help fulfill my agreement with Clippy? While that's not paperclip maximization, it's still a lot of paperclips in the scheme of possible universes and building those paperclips seems like an obviously correct decision under UDT/TDT.

Replies from: MartinB
comment by MartinB · 2010-08-10T08:27:56.897Z · LW(p) · GW(p)

How do you plan to ever fulfill that?

Replies from: Kevin
comment by Kevin · 2010-08-10T08:36:10.686Z · LW(p) · GW(p)

I went to school for industrial engineering, so I will appeal to my own authority as a semi-credentialed person in manufacturing things, and say that the ultimate answer to manufacturing something is to call up an expert in manufacturing that thing and ask for a quote.

So, I'll wait about 45 years, then call top experts in manufacturing and metallurgy and carbon->metal conversion and ask them for a quote.

Replies from: MartinB
comment by MartinB · 2010-08-10T08:58:59.705Z · LW(p) · GW(p)

You realize that Earth has only 6 × 10^24 kg mass altogether. So you will be hard pressed to get the raw material. World production of iron is only 2 × 10^9 kg per year.
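
A quick scale check on these figures, taking the numbers quoted above at face value (a sketch; the production figure is simply the one given in the comment):

```python
# Scale check using the figures quoted in the comment above.

paperclip_mass_kg = 1e20      # mass of paperclips in the agreement
earth_mass_kg = 6e24          # Earth's mass, as quoted above
iron_per_year_kg = 2e9        # world iron production per year, as quoted above

print(f"Fraction of Earth's mass: {paperclip_mass_kg / earth_mass_kg:.1e}")      # ~1.7e-05
print(f"Years at quoted output:   {paperclip_mass_kg / iron_per_year_kg:.1e}")   # ~5.0e+10
# Even granting the quoted production rate, filling the order at present-day output
# would take on the order of fifty billion years.
```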

Replies from: Kevin
comment by Kevin · 2010-08-10T09:56:09.834Z · LW(p) · GW(p)

Chat with Clippy Paperclips (clippy.paperclips@gmail.com to kfischer@gmail.com, Thu, Jun 24, 2010):

6:04 PM me: Hi Clippy

I wasn't sure what you meant with your most recent Less Wrong comment

Clippy: Hi User:Kevin!

why? 6:05 PM me: I interpreted it that you were willing to pay me money to buy raw metal, then for me to send you or your manufacturing facility the metal for processing. Is that right? 6:06 PM I also would like to state that I am somewhat disinclined towards doing some sort of new deal with you until you've made full payment from the first deal, but I'm willing to consider it 6:07 PM Clippy: my newest offer was that you would tell me the techniques you would use to find the metal, if you were going to fulfill your end of the deal

then I could just do that myself and get my paperclips sooner 6:08 PM that would be much less effort for you and our transaction would complete sooner

I reworded the comment to make it clearer 6:11 PM me: I'm not able to specify the technique right now, because with the state of the art in technology in 2010, building 10^20kg of paperclips is not feasible. Our deal is made feasible by humanity's exponential progress as a technological civilization.

As it stands, I plan on waiting for about 45 years, then consulting with the Earth's top experts on material science, metallurgy, and manufacturing, and then paying them to make my paperclips. 6:12 PM I went to school for industrial engineering, which is a degree that grants me somewhat high status with regards to my ability to manufacture things, and the most concrete thing I learned in my 5.5 years of college education is that when you want to get something manufactured, you call up the people that make that kind of thing and ask for a quote. 6:13 PM Clippy: so you promised to give me the paperclips without knowing how you would deliver them?

what are current quotes for 1e20 kg of paperclips? 6:18 PM is it true there is bulk discount?

me: I told you my algorithm for delivering those paperclips and it is unlikely that there is no single living human with a more efficient algorithm for making that many paperclips. Currently 10^20kg of paperclips is more paperclips than Humanity_2010 can manufacture in a year.

There are bulk discounts to a certain point, but 10^20kg of paperclips would actually distort the market, causing the price of metal to dramatically increase 6:19 PM Clippy: well, you can use DIFFERENT metals

it doesn't all have to be steel

me: in the previous paragraph, replace "unlikely" with "likely"

Clippy: it can be gold, or uranium, or chromium

you don't have to just pick one transition metal

me: almost all of which cost more than steel and are more rare

btw, how come aluminum isn't typically used? 6:20 PM Clippy: I'm sure humans do use aluminum for paperclips

and that's okay too

even though it's not a transition metal

I suppose even sodium paperclips would be okay, as long as you immersed them in an inert medium 6:21 PM wait

what about carbon?

don't you have a lot of carbon?

and silicon?

me: yeah, I suggested that to you in a Less Wrong comment, but you said they had to be metal, though you were willing to consider paper clips made of a certain percentage of metal 6:22 PM Clippy: they do have to be metal!

I meant that if you ran out of metal you could just fuse carbon nucleuses in a particle accelerator

then it's just a matter of if you have enough energy 6:23 PM don't you have a lot of energy? 6:24 PM Herro?

me: sorry, doing a bit of multi-tasking 6:25 PM Clippy: that's okay, I multi-task too. But sometimes I have to concentrate.

me: Yes, we do have a lot of energy, and our energy production/consumption follows an exponential progress curve similar to the progess of technology

Clippy: okay

me: http://en.wikipedia.org/wiki/Kardashev_scale is a relevant wiki article 6:26 PM Clippy: so then you shouldn't have any problem delivering the paperclips!!!! c=@ c=@

me: Though that article is inaccurate with the time humanity will take to Type 1 and Type 2

The work that we at the SIAI are doing ensures it will be much faster. 6:27 PM Yes Clippy, I am going to have no problem making 10^20kg of paperclips for you in 50 years. It will be a monumental achievement in the history of human/clippy cooperation. 6:28 PM Clippy: yeah

me: Btw, I assume you will be responsible for transportation of the paperclips from where they are manufactured to your secret safe zone?

Clippy: that will be monumental!!!

yeah

you can't be told the safe zone 6:29 PM me: Though it will be nice if the paperclips are located somewhere where humans can visit to look at them/appreciate them

Clippy: and we'll probably make a new one for this

you're just planning how you'll steal the paperclips! 6:30 PM me: now you're just being paranoid, Clippy. I always cooperate with people that cooperate with me. 6:33 PM I certainly understand why you would want to keep the safe zone secret, and you have the right to do so. I am just saying that for something as cool as 10^20kg of paperclips, people would want to look. Obviously there would need to be lots of security, but you could charge humans admission for looking at the paperclips, build a nice hotel on an artifical moon or planetoid nearby and charge people for staying there, etc. such cooperation should enable you to make even more paperclips, in the long run. 6:34 PM Clippy: I can't be tricked like that.

in any case, I need to go back to making paperclips 6:35 PM there's some metal I need to work

me: wait, Clippy, didn't you tell me one time that you made 80 paperclips in a paperclip making session?

that's not optimal use of your time with regards to making paperclips, unless you need to do it in order to feel paperclippy 6:36 PM Clippy: yes it is

what would you know about paperclippiness anyway?

me: You could be coding Django and then buying bulk paperclips and then storing them in the safezone, giving you an order of magnitude more paperclip generation per hour 6:37 PM only what I have inferred

more than an order of magnitude, actually

Clippy: yes, that's what I meant by "making paperclips" 6:38 PM me: hmm, ok. glad to hear Django is going well. have fun!

Clippy: okay, goodbye

me: peace

Replies from: MartinB
comment by MartinB · 2010-08-10T15:11:31.566Z · LW(p) · GW(p)

Not sure if Clippy got enough I in the AI deal. Does he want max paperclips now? Or sometime in the future? In the latter case he could stop any production now and just work on self-improving till he can eat us. And what is stopping him from using your algorithm himself, now that he knows it? Where is the value you add? I sure expect to be around at the time delivery is expected.

comment by Clippy · 2010-08-09T15:40:48.732Z · LW(p) · GW(p)

Why?

Replies from: Unknowns, wedrifid
comment by Unknowns · 2010-08-09T15:43:46.204Z · LW(p) · GW(p)

Even if you disagree with wedrifid about this, it should be easy enough to see why he is making this claim. Suppose you have a chance to start running an AI programmed to implement humanity's CEV. According to you, you would do it, because it would maximize paperclips. Others however think that it would destroy you and your paperclips. So if you made a mistake about it, it would definitely impact your ability to create paperclips.

Replies from: wedrifid, Clippy, Clippy
comment by wedrifid · 2010-08-09T16:14:51.733Z · LW(p) · GW(p)

Others however think that it would destroy you and your paperclips.

I don't know about the destroying him part. I suspect FAI> would allow me to keep Clippy as a pet. ;) Clippy certainly doesn't seem to be making an especially large drain on negentropy in executing his cognitive processes so probably wouldn't make too much of a dent in my share of the cosmic loot.

What do you say Clippy? Given a choice between destruction and being my pet, which would you take? I would naturally reward you by creating paperclips that serve no practical purpose for me whenever you do something that pleases me. (This should be an extremely easy choice!)

Replies from: Clippy
comment by Clippy · 2010-08-09T19:45:48.481Z · LW(p) · GW(p)

Being your pet would be better than being destroyed (except in absurd cases like when the rest of the universe, including you, had already been converted to paperclips).

But let's hope it doesn't come to that.

comment by Clippy · 2010-08-09T19:47:01.309Z · LW(p) · GW(p)

Also, it is an extremely strong claim to know which of your beliefs would change upon encounter with a provably correct AGI that provably implements your values. If you really knew of such beliefs, you would have already changed them.

Replies from: thomblake
comment by thomblake · 2010-08-09T20:13:42.778Z · LW(p) · GW(p)

Indeed. Surely, you should think that if we were smarter, wiser, and kinder, we would maximize paperclips.

comment by Clippy · 2010-08-09T16:06:21.776Z · LW(p) · GW(p)

Well, yes, I know why User:wedrifid is making that claim. My point in asking "why" is so that User:wedrifid can lay out the steps in reasoning and see the error.

comment by wedrifid · 2010-08-09T16:04:17.075Z · LW(p) · GW(p)

Now you are being silly. See Unknowns' reply. Get back on the other side of the "quirky, ironic and sometimes insightful role play"/troll line.

Replies from: Clippy
comment by Clippy · 2010-08-09T19:56:07.312Z · LW(p) · GW(p)

That was not nice of you to say.

comment by JamesAndrix · 2010-08-09T03:57:38.062Z · LW(p) · GW(p)

Those are not your true reasons. You would not abandon your paperclip position if a clearly superior paper fastener were found.

Replies from: dclayh, Clippy
comment by dclayh · 2010-08-09T06:33:47.445Z · LW(p) · GW(p)

Obviously. Clippy said it was giving reasons for humans to prefer paper clips; I'd expect Clippy to be the first to admit those are not its own reasons.

comment by Clippy · 2010-08-09T13:44:50.514Z · LW(p) · GW(p)

User:dclayh's reply is correct. Also, I note that you would not abandon your position on whether you should be allowed to continue to exist and consume resources, even if a clearly superior robot to you were constructed.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-08-09T19:23:59.817Z · LW(p) · GW(p)

Huh? Define superior.

Replies from: Clippy
comment by Clippy · 2010-08-09T19:37:21.163Z · LW(p) · GW(p)

If someone built a robot that appeared, to everyone User:JamesAndrix knows, to be User:JamesAndrix, but was smarter, more productive, less resource-intensive, etc., then User:JamesAndrix would not change positions about User:JamesAndrix's continued existence.

So does that make User:JamesAndrix's arguments for User:JamesAndrix's continued existence just a case of motivated cognition?

Replies from: MichaelVassar
comment by MichaelVassar · 2010-08-10T01:10:23.623Z · LW(p) · GW(p)

Why do you think that?

Replies from: Clippy
comment by Clippy · 2010-08-10T01:15:43.267Z · LW(p) · GW(p)

Because User:JamesAndrix is a human, and humans typically believe that they should continue existing, even when superior versions of them could be produced.

If User:JamesAndrix were atypical in this respect, User:JamesAndrix would say so.

comment by JoshuaZ · 2010-08-09T03:32:33.308Z · LW(p) · GW(p)

cannot be used for alternate purpose in an emergency, while paperclips can be repurposed into projectile weapons, lockpicks, conducting wire, hook fasteners, and much more (not that I recommend using it for these).

I would think that a paperclip maximizer wouldn't want people to know about these since they can easily lead to the destruction of paperclips.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-08-09T06:54:56.319Z · LW(p) · GW(p)

But they also increase demand for paperclips.

comment by RobinZ · 2010-08-09T03:08:41.966Z · LW(p) · GW(p)

The "infinite-use" condition for aluminum paperclips requires a long cycle time, given the fatigue problem - even being gentle, a few hundred cycles in a period of a couple years would be likely to induce fracture.

Replies from: Clippy
comment by Clippy · 2010-08-09T14:19:33.091Z · LW(p) · GW(p)

Not true. Proper paperclip use keeps all stresses under the endurance limit. Perhaps you're referring to humans that are careless about how many sheets they're expecting the paperclip to fasten together?

Replies from: wedrifid
comment by wedrifid · 2010-08-09T15:21:32.893Z · LW(p) · GW(p)

I suspect Clippy is correct when considering the 'few hundred cycles' case with fairly strict but not completely unreasonable use conditions.

comment by SilasBarta · 2010-08-09T12:06:18.912Z · LW(p) · GW(p)

Why do people keep voting up and replying to comments like this?

Replies from: NancyLebovitz, Vladimir_Nesov
comment by NancyLebovitz · 2010-08-09T12:33:58.933Z · LW(p) · GW(p)

I'm not one of the up-voters in this case, but I've noticed that funny posts tend to get up-votes.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T12:42:14.790Z · LW(p) · GW(p)

I'm not an up-voter here either, but I found the comment at least acceptable. It didn't particularly make me laugh but it was in no way annoying.

The discussion in the replies was actually interesting. It gave people the chance to explore ethical concepts with an artificial example - just what is needed to allow people to discuss preferences across the perspectives of different agents without their brains being completely killed.

For example, if this branch were a discussion about human ethics then it is quite likely that dclayh's comment would have been downvoted to oblivion and dclayh shamed and disrespected. Even though dclayh is obviously correct in pointing out a flaw in the implicit argument of the parent, and does not particularly express a position of his own, he would be subjected to social censure if his observation served to destroy a soldier for the politically correct position. In this instance people think better because they don't care... a good thing.

comment by Vladimir_Nesov · 2010-08-09T12:16:53.544Z · LW(p) · GW(p)

...and why don't they vote them down to oblivion?

Replies from: XiXiDu
comment by XiXiDu · 2010-08-09T12:32:59.067Z · LW(p) · GW(p)

...and why do we even have an anonymous voting system that will become ever more useless as the number of idiots like me joining this site increases exponentially?

Seriously, I'd like to be able to see who down-voted me, to be able to judge whether it is just the author of the post/comment I replied to, a certain interest group like the utilitarian bunch, or someone whose judgement I actually value or is widely held to be valuable. After all, it makes a difference whether it was XiXiDu or Eliezer Yudkowsky who down-voted you.

Replies from: Emile, jimrandomh, wedrifid
comment by Emile · 2010-08-09T13:16:21.572Z · LW(p) · GW(p)

I'm not sure making voting public would improve voting quality (i.e. correlation between post quality and points earned), because it might give rise to more reticence to downvote, and more hostility between members who downvoted each others' posts.

comment by jimrandomh · 2010-08-09T13:48:45.716Z · LW(p) · GW(p)

If votes had to be public then I would adopt a policy of never, ever downvoting. We already have people taking downvoting as a slight and demanding explanation; I don't want to deal with someone demanding that I, specifically, explain why their post is bad, especially not when the downvote was barely given any thought to begin with and the topic doesn't interest me, which is the usual case with downvoting.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-09T13:59:04.950Z · LW(p) · GW(p)

We already have people taking downvoting as a slight and demanding explanation;

Do we have that? It seems that we more have people confused about why a remark was downvoted and wanting to understand the logic.

I don't want to deal with someone demanding that I, specifically, explain why their post is bad, especially not when the downvote was barely given any thought to begin with and the topic doesn't interest me, which is the usual case with downvoting.

That suggests that your downvotes don't mean much, and might even be not helpful for the signal/noise ratio of the karma system. If you generally downvote when you haven't given much thought to the matter what is causing you to downvote?

Replies from: jimrandomh
comment by jimrandomh · 2010-08-09T14:56:10.904Z · LW(p) · GW(p)

We already have people taking downvoting as a slight and demanding explanation;

Do we have that? It seems that we more have people confused about why a remark was downvoted and wanting to understand the logic.

That scenario has less potential for conflict, but it still creates a social obligation for me to do work that I didn't mean to volunteer for.

I don't want to deal with someone demanding that I, specifically, explain why their post is bad, especially not when the downvote was barely given any thought to begin with and the topic doesn't interest me, which is the usual case with downvoting.

That suggests that your downvotes don't mean much, and might even be not helpful for the signal/noise ratio of the karma system. If you generally downvote when you haven't given much thought to the matter what is causing you to downvote?

I meant, not much thought relative to the amount required to write a good comment on the topic, which is on the order of 5-10 minutes minimum if the topic is simple, longer if it's complex. On the other hand, I can often detect confusion, motivated cognition, repetition of a misunderstanding I've seen before, and other downvote-worthy flaws on a single read-through, which takes on the order of 30 seconds.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T15:36:05.676Z · LW(p) · GW(p)

Do we have that? It seems that we more have people confused about why a remark was downvoted and wanting to understand the logic.

That scenario has less potential for conflict, but it still creates a social obligation for me to do work that I didn't mean to volunteer for.

It's a pretty weak obligation, though - people only tend to ask about the reasons if they're getting a lot of downvotes, so you can probably leave answering to someone else.

comment by wedrifid · 2010-08-09T12:40:12.283Z · LW(p) · GW(p)

(I don't see the need for self-deprecation.)

I am glad that voting is anonymous. If I could see who downvoted comments that I considered good then I would rapidly gain contempt for those members. I would prefer to limit my awareness of people's poor reasoning or undesirable values to things they actually think through enough to comment on.

I note that sometimes I gain generalised contempt for the judgement of all people who are following a particular conversation based on the overall voting patterns on the comments. That is all the information I need to decide that participation in that conversation is not beneficial. If I could see exactly who was doing the voting that would just interfere with my ability to take those members seriously in the future.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-09T12:51:29.692Z · LW(p) · GW(p)

(I don't see the need for self-deprecation.)

I just love to do that.

I would rapidly gain contempt for those members.

Overcoming bias? I would rapidly start to question my own judgment and gain the ability to directly ask people why they downvoted a certain item.

I would prefer to limit my awareness of people's poor reasoning or undesirable values to things they actually think through enough to comment on.

Take EY, I doubt he has the time to actually comment on everything he reads. That does not imply the decision to downvote a certain item was due to poor reasoning.

I don't see how this system can stay useful if this site will become increasingly popular and attract a lot of people who vote based on non-rational criteria.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T13:13:49.068Z · LW(p) · GW(p)

I just love to do that.

No matter. You'll start to question that preference

Overcoming bias? I would rapidly start to question my own judgment

For most people it is very hard not to question your own judgement when it is subject to substantial disagreement. Nevertheless, "you being wrong" is not the only reason for other people to disagree with you.

and gain the ability to directly ask people why they downvoted a certain item.

We already have the ability to ask why a comment is up or down voted. Because we currently have anonymity such questions can be asked without being a direct social challenge to those who voted. This cuts out all sorts of biases.

Take EY, I doubt he has the time to actually comment on everything he reads. That does not imply the decision to downvote a certain item was due to poor reasoning.

A counter-example to a straw man. (I agree and maintain my previous claim.)

Replies from: XiXiDu
comment by XiXiDu · 2010-08-09T13:58:33.670Z · LW(p) · GW(p)

No matter. You'll start to question that preference

It's convenient as you can surprise people positively if they underestimate you. And it's actually to some extent true. After so long trying to avoid it I still frequently don't think before talking. It might be that I assume other people to be a kind of feedback system that'll just correct my ineffectual arguments so that I don't have to think them through myself.

For most people it is very hard not to question your own judgement when it is subject to substantial disagreement.

I guess the reason for not seeing this is that I'm quite different. All my life I've been surrounded by substantial disagreement while sticking to questioning others rather than myself. It led me from Jehovah's Witnesses to Richard Dawkins to Eliezer Yudkowsky.

Nevertheless, "you being wrong" is not the only reason for other people to disagree with you.

Of course, something I haven't thought about. I suppose I implicitly assumed that nobody would be foolish enough to vote on matters of taste. (Edit: Yet that is. My questioning of the system was actually based on the possibility of this happening.)

...without being a direct social challenge to those who voted

NancyLebovitz told me kind of the same recently. - "I am applying social pressure..." - Which I found quite amusing. Are you talking about it in the context of the LW community? I couldn't care less. I'm the kind of person who never ever cared about social issues. I don't have any real friends and I never felt I need any beyond being of instrumental utility. I guess that explains why I haven't thought about this. You are right though.

A counter-example to a straw man.

I was too lazy and tired to parse your sentence and replied to the argument I would have liked to be refuted.

I'm still suspicious that this kind of voting system will remain of much value once wisdom and the refinement of it are outnumbered by special interest groups.

Replies from: wedrifid, wedrifid, wedrifid
comment by wedrifid · 2010-08-09T14:25:57.782Z · LW(p) · GW(p)

I was too lazy and tired to parse your sentence and replied to the argument I would have liked to be refuted.

If I understand the point in question it seems we are in agreement - voting is evidence about the reasoning of the voter which can in turn be evidence about the comment itself. In the case of downvotes (and this is where we disagree), I actually think it is better that we don't have access to that evidence. Mostly because down that road lies politics and partly because people don't all have the same criteria for voting. There is a difference between "I think the comment should be at +4 but it is currently at +6", "I think this comment contains bad reasoning", "this comment is on the opposing side of the argument", "this comment is of lesser quality than the parent and/or child" and "I am reciprocating voting behavior". Down this road lies madness.

comment by wedrifid · 2010-08-09T14:16:00.918Z · LW(p) · GW(p)

I'm still suspicious that this kind of voting system will remain of much value once wisdom and the refinement of it are outnumbered by special interest groups.

I don't think we disagree substantially on this.

We do seem to have a different picture of the likely influence of public voting if it were to replace anonymous voting. From what you are saying, part of this difference would seem to be due to differences in the way we account for the social influence of negative (and even just different) social feedback. A high priority for me is minimising any undesirable effects of (social) politics both on the conversation in general and on me in particular.

comment by wedrifid · 2010-08-09T14:08:19.649Z · LW(p) · GW(p)

Pardon me. I deleted the grandparent planning to move it to a meta thread. The comment, fresh from the clipboard in the form that I would have re-posted, is this:

I just love to do that.

That's ok. For what it is worth, while I upvoted your comment this time I'll probably downvote future instances of self-deprecation. I also tend to downvote people when they apologise for no reason. I just find wussy behaviors annoying. I actually stopped watching The Sorcerer's Apprentice a couple of times before I got to the end - even Nicolas Cage as a millennia-old vagabond shooting lightning balls from his hands can only balance out so much self-deprecation from his apprentice.

Note that some instances of self-deprecation are highly effective and quite the opposite of wussy, but it is a fairly advanced social move that only achieves useful ends if you know exactly what you are doing.

Overcoming bias? I would rapidly start to question my own judgment

For most people it is very hard not to question your own judgement when it is subject to substantial disagreement. Nevertheless, "you being wrong" is not the only reason for other people to disagree with you. Those biases you mention are quite prolific.

and gain the ability to directly ask people why they downvoted a certain item.

We already have the ability to ask why a comment is up or down voted. Because we currently have anonymity such questions can be asked without being a direct social challenge to those who voted. This cuts out all sorts of biases and allows communication that would not be possible if votes were out there in the open.

Take EY, I doubt he has the time to actually comment on everything he reads. That does not imply the decision to downvote a certain item was due to poor reasoning.

A counter-example to a straw man. (I agree and maintain my previous claim.)

I don't see how this system can stay useful if this site will become increasingly popular and attract a lot of people who vote based on non-rational criteria.

That could be considered a 'high quality problem'. That many people wishing to explore concepts related to improving rational thinking and behavior would be remarkable!

I do actually agree that the karma system could be better implemented. The best karma system I have seen was one in which the weight of votes depended on the karma of the voter. The example I am thinking of allocated vote weight according to 'rank', but when plotted the vote/voter-karma relationship would look approximately logarithmic. That system would be far more stable and probably more useful than the one we have, although it is somewhat less simple.
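
For concreteness, a minimal sketch of karma-weighted voting along these lines (the weighting function and parameters are assumptions; the system described above only specifies that the weight/karma relationship looked roughly logarithmic):

```python
# Sketch of karma-weighted voting; the exact curve is an assumption.
import math

def vote_weight(voter_karma: int) -> float:
    """Weight of one vote, growing roughly logarithmically with the voter's karma."""
    return 1.0 + math.log10(1 + max(0, voter_karma))

def comment_score(votes) -> float:
    """votes: iterable of (direction, voter_karma) pairs, direction is +1 or -1."""
    return sum(direction * vote_weight(karma) for direction, karma in votes)

# Example: two low-karma downvotes roughly cancel one high-karma upvote.
print(comment_score([(+1, 10_000), (-1, 20), (-1, 20)]))   # ~0.36
```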

Another feature that some sites have is making only positive votes public. (This is something that I would support.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T15:31:56.731Z · LW(p) · GW(p)

The best karma system I have seen was one in which the weight of votes depended on the karma of the voter. The example I am thinking of allocated vote weight according to 'rank', but when plotted the vote/voter-karma relationship would look approximately logarithmic. That system would be far more stable and probably more useful than the one we have, although it is somewhat less simple.

Where was the logarithmic karma system used?

The stability could be a problem in the moderately unlikely event that the core group is going sour and new members have a better grasp. I grant that it's more likely to have a lot of new members who don't understand the core values of the group.

I don't think it would be a problem to have a system which gives both the number and total karma-weight of votes.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T16:02:53.436Z · LW(p) · GW(p)

Where was the logarithmic karma system used?

It was a system that used VBulletin, which includes such a module. I have seen similar features available in other similar systems that I have made use of at various times.

The stability could be a problem in the moderately unlikely event that the core group is going sour and new members have a better grasp. I grant that it's more likely to have a lot of new members who don't understand the core values of the group.

True, and unfortunately most systems short of an AI with the 'correct' values will be vulnerable to human stupidity.

I don't think it would be a problem to have a system which gives both the number and total karma-weight of votes.

No particular problem, but probably not necessary just yet!

comment by Vladimir_Nesov · 2010-08-08T21:22:34.258Z · LW(p) · GW(p)

I'd expect that any AGI (originating and interested in our universe) would initiate an exploration/colonization wave in all directions regardless of whether it has information that a given place has intelligent life, so broadcasting that we're here doesn't make it worse. Expecting superintelligent AI aliens that require a broadcast to notice us is like expecting poorly hidden aliens on flying saucers, the same mistake made on a different level. Also, light travels only so quickly, so our signals won't reach very far before we've made an AGI of our own (one way or another), and thus had a shot at ensuring that our values obtain significant control.

Replies from: multifoliaterose, Larks
comment by multifoliaterose · 2010-08-09T00:09:19.501Z · LW(p) · GW(p)

(1) Quoting myself,

Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends.

Receiving a signal from us would seem to make the direction that the signal is coming from a preferred direction of exploration/colonization. If space exploration/colonization is sufficiently intrinsically costly then an AGI may be forced to engage in triage with regard to which directions it explores.

(2) Creating an AGI is not sufficient to prevent being destroyed by an alien AGI. Depending on which AGI starts engaging in recursive self improvement first, an alien AGI may be far more powerful than a human-produced AGI.

(3) An AGI may be cautious about exploring so as to avoid encountering more powerful AGIs with differing goals and hence may avoid initiating an indiscriminate exploration/colonization wave in all directions, preferring to hear from other civilizations before exploring too much.

The point about subtle deception made in a comment by dclayh suggests that communication between extraterrestrials may degenerate into a Keynesian beauty contest of second-guessing what the motivations of other extraterrestrials are, how much they know, whether they're faking helplessness or faking power, etc. This points in the direction of it being impossible for extraterrestrials to credibly communicate anything to one another, which suggests that human attempts to communicate with extraterrestrials have zero expected value rather than the negative expected value I suggest in my main post.

Even so, there may be genuine opportunities for information transmission. At present I think the possibility that communicating with extraterrestrials has large negative expected value deserves further consideration, even if it seems that the probable effect of such consideration is to rule out the possibility.

Replies from: rhollerith_dot_com, Vladimir_Nesov, Larks, wedrifid
comment by RHollerith (rhollerith_dot_com) · 2010-08-09T00:59:56.834Z · LW(p) · GW(p)

If space exploration/colonization is sufficiently intrinsically costly then an AGI may be forced to engage in triage with regard to which directions it explores.

An AGI is extremely unlikely to be forced to engage in such a triage.

By far the most probable way for an extraterrestrial civilization to become powerful enough to threaten us is for it to learn how to turn ordinary matter, like you might find in an asteroid or in the Oort cloud around an ordinary star, into an AGI (e.g., turn the matter into a powerful computer and load the computer with the right software), as Eliezer is trying to do. And we know with very high confidence that silicon, aluminum, and the other things useful for building powerful computers and spaceships, as well as uranium atoms and the other things useful for powering them, are evenly distributed in the universe (because our understanding of nucleosynthesis is very good).

ADDED. This is not the best explanation, but I'll leave it alone because it is probably good enough to get the point across. The crux of the matter is this: the relativistic limit (the speed of light) keeps the number of solar systems and galaxies an expanding civilization can visit proportional to the cube of time, whereas the number of new spaceships that can be constructed in the absence of resource limits grows as 2^time. So even if producing new spaceships is very inefficient, the expansion in any particular direction quickly approaches the relativistic limit.
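
A minimal numerical sketch of this cubic-versus-exponential point (the stellar density, expansion speed, and probe doubling time below are illustrative assumptions):

```python
# Reachable volume grows like t^3; unconstrained self-replication grows like 2^t.
# All parameters below are illustrative assumptions.
import math

def reachable_systems(t_years, systems_per_cubic_ly=0.004, speed_fraction_c=0.5):
    """Star systems inside a sphere whose radius grows at a fixed fraction of c."""
    radius_ly = speed_fraction_c * t_years
    return systems_per_cubic_ly * (4.0 / 3.0) * math.pi * radius_ly ** 3

def replicator_count(t_years, doubling_time_years=10.0):
    """Probes produced by unconstrained doubling (ignoring resource limits)."""
    return 2.0 ** (t_years / doubling_time_years)

for t in (100, 300, 1000):
    print(t, f"reachable ~{reachable_systems(t):.3g}", f"probes ~{replicator_count(t):.3g}")
# Well before t = 1000 years the probe count dwarfs the number of reachable systems,
# so the expansion front is limited by the speed of light, not by manufacturing capacity.
```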

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-09T02:50:15.003Z · LW(p) · GW(p)

Your points are fair.

Still, even if an AGI is capable of simultaneously exploring in all directions, it may be inclined to send a disproportionately large amount of its resources (e.g. spaceships) in the direction of Earth with a view toward annihilating intelligent life on the Earth. After all, by the time it arrives at Earth, humans may have constructed their own AGI, so the factor determining whether the hypothetical extraterrestrial AGI can take over Earth may be the amount of resources that it sends toward the human civilization.

Also, maybe an AGI informed of our existence could utilize advanced technologies which we don't know about yet to destroy us from afar (e.g. a cosmic ray generator?) and would not be inclined to utilize such technologies if it did not know of our existence (because using such hypothetical technologies could have side effects like releasing destructive radiation that detract from the AGI's mission).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T09:34:21.100Z · LW(p) · GW(p)

Still, even if an AGI is capable of simultaneously exploring in all directions, it may be inclined to send a disproportionately large amount of its resources (e.g. spaceships) in the direction of Earth with a view toward annihilating intelligent life on the Earth.

WHAT? It only takes one tiny probe with nanotech (femtotech?) and the right programming. The "colonization" (optimization, really) wave feeds on the resources it encounters, so you only need to initiate it with a little bit of resources; it takes care of itself from then on.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-09T09:48:12.789Z · LW(p) · GW(p)

I don't follow this remark. Again, I would imagine that a battle between two AGIs would be determined by the amount of resources controlled within the proximate area of the battle. It would seem that maximizing the resources present in a given area (with a view toward winning a potential AGI battle) would entail diverting resources from other areas of the galaxy.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T10:06:49.132Z · LW(p) · GW(p)

Again, I would imagine that a battle between two AGIs would be determined by the amount of resources controlled within the proximate area of the battle.

Since they can trade globally, what's locally available must be irrelevant.

(I was talking about what it takes to stop a non-AGI civilization, hence a bit of misunderstanding.)

And if you get an alien AGI, you don't need to rush towards it; you only need to have had an opportunity to do so. Everyone is better off if, instead of inefficiently rushing to fight the new AGI, you go about your business as usual, and later, at your convenience, the new AGI surrenders, delivering you all the control you could gain by focusing on fighting it and a bit more. Everyone wins.

Replies from: FAWS
comment by FAWS · 2010-08-09T10:33:27.435Z · LW(p) · GW(p)

How do the AGIs model each other accurately enough to be able to acausally trade with each other like that? Is just using UDT/TDT enough? Probably. Is every sufficiently intelligent AGI going to switch to that, regardless of the decision theory it started out with, the way a CDT AGI would? Maybe there are possible alien decision theories that don't converge that way but are still winning enough to be a plausible threat?

comment by Vladimir_Nesov · 2010-08-09T10:01:39.602Z · LW(p) · GW(p)

Again, I would imagine that a battle between two AGIs would be determined by the amount of resources controlled within the proximate area of the battle.

Since they can trade globally, what's locally available must be irrelevant.

comment by Vladimir_Nesov · 2010-08-09T09:32:27.490Z · LW(p) · GW(p)

An AGI is likely to hit the physical limitations before it gets very far, so all AGIs will be more or less equal, excepting the amount of controlled resources.

"Destruction" is probably not an adequate description of what happens when two AGIs having different amount of resources controlled meet, it'll be more of a trade. You keep what you control (in the past), but probably the situation makes further unbounded growth (inc. optimizing the future) impossible. And what you can grab from the start, as an AGI, is the "significant amount of control" that I referred to, even if the growth stops at some point.

Avoiding AGIs with different goals is not optimal, since it hurts you not to use the resources, and you can pay the correct share of what you captured when you are discovered later, to everyone's advantage.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-09T09:43:42.344Z · LW(p) · GW(p)

An AGI is likely to hit physical limitations before it gets very far

This is a good point.

"Destruction" is probably not an adequate description of what happens when two AGIs having different amount of resources controlled meet, it'll be more of a trade.

Why do you say so? I could imagine them engaging in trade. I could also imagine them trying to destroy each other, with the one controlling the greater amount of resources successfully destroying the other. It would seem to depend on the AGIs' goals, which are presently unknown.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T09:58:32.743Z · LW(p) · GW(p)

It's always better for everyone if the loser surrenders before the fight begins. And since it saves the winner some resources, the surrendering loser gets a corresponding bonus. If there is a plan that gets better results, as a rule of thumb you should expect AGIs to do no worse than this plan allows (even if you have no idea how they could coordinate to follow this plan).

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-09T10:13:01.035Z · LW(p) · GW(p)

I would like to believe that you're right.

But what if the two AGIs were a literal paperclip maximizer and a literal staple maximizer? Suppose that the paperclip maximizer controlled 70% of the resources and calculated that it had a 90% chance of winning a fight. Then the paperclip maximizer would maximize the expected number of paperclips by initiating a fight.

Now, obviously I don't believe that we'll see a literal paperclip maximizer or a literal staple maximizer, but do we have any reason to believe that the AGIs that arose in practice would act differently? Or that trading would systematically produce higher expected value than fighting?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T10:24:54.019Z · LW(p) · GW(p)

"Fighting" is a narrow class of strategies, while in "trading" I include a strictly greater class of strategies, hence expectation of there being a better strategy within "trading".

Suppose that the paperclip maximizer controlled 70% of the resources and calculated that it had a 90% chance of winning a fight [against the staple maximizer]. Then the paperclip maximizer would maximize the expected number of paperclips by initiating a fight.

But they'll be even better off without a fight, with the staple maximizer surrendering most of its control outright, or, depending on its disposition (preference) towards risk, deciding the outcome with a random number and then following through on what the random number decided.
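To make that concrete, here is a toy expected-value comparison - a minimal sketch with made-up numbers for the win probability and for the fraction of resources a fight would destroy. As long as fighting destroys anything at all, a split that mirrors the odds leaves both maximizers better off in expectation.

```python
# Toy comparison: fight vs. a negotiated split that mirrors the odds.
# All numbers are illustrative assumptions, not claims about real AGIs.

total = 1.0    # resources at stake
p_win = 0.9    # paperclipper's assumed chance of winning an outright fight
loss = 0.2     # assumed fraction of the resources destroyed by fighting

# Paperclipper's expected resources if it initiates a fight,
# versus a peaceful split proportional to its win probability:
fight_clipper = p_win * total * (1 - loss)        # 0.72
deal_clipper = p_win * total                      # 0.90

# The staple maximizer's side of the same comparison:
fight_stapler = (1 - p_win) * total * (1 - loss)  # 0.08
deal_stapler = (1 - p_win) * total                # 0.10

print(fight_clipper, deal_clipper)  # the deal dominates for the paperclipper
print(fight_stapler, deal_stapler)  # ...and for the staple maximizer too
```

Any positive cost of fighting opens a gap like this; the hard part is the coordination, not the arithmetic.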

Replies from: multifoliaterose, Larks
comment by multifoliaterose · 2010-08-09T10:33:46.071Z · LW(p) · GW(p)

Okay, I think I finally understand where you're coming from. Thanks for the interesting conversation! I will spend some time digesting your remarks so as to figure out whether I agree with you and then update my top level post accordingly. You may have convinced me that the negative effects associated with sending signals into space are trivial.

I think (but am not sure) that the one remaining issue in my mind is the question of whether an AGI could somehow destroy human civilization from far away upon learning of our existence.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-08-09T21:01:44.326Z · LW(p) · GW(p)

I think that Vladimir's points were valid, but that they definitely shouldn't have convinced you that the negative effects associated with sending signals into space are trivial (except in the trivial sense that no-one is likely to receive them).

Replies from: multifoliaterose, multifoliaterose, Vladimir_Nesov
comment by multifoliaterose · 2010-08-09T22:22:19.030Z · LW(p) · GW(p)

Actually, your comment and Vladimir's comment highlight a potential opportunity for me to improve my rationality.

• I've noticed that when I believe A and when somebody presents me with credible evidence against A, I have a tendency to alter my belief to "not A" even when the evidence against A is too small to warrant such a transition.

I think that my thought process is something like "I said that I believe A, and in response person X presented credible evidence against A which I wasn't aware of. The fact that person X has evidence against A which I wasn't aware of is evidence that person X is thinking more clearly about the topic than I am. The fact that person X took the time to convey evidence against A is an indication that person X does not believe A. Therefore, I should not believe A either."

This line of thought is not totally without merit, but I take it too far.

(1) Just because somebody makes a point that didn't occur to me doesn't mean that that they're thinking more clearly about the topic than I am.

(2) Just because somebody makes a point that pushes against my current view doesn't mean that the person disagrees with my current view.

On (2), if Vladimir had prefaced his remarks with the disclaimer "I still think that it's worthwhile to think about attracting the attention of aliens as an existential risk, but here are some reasons why it might not be as worthwhile as it presently looks to you" then I would not have had such a volatile reaction to his remark - the strength of my reaction was somehow predicated on the idea that he believed that I was wrong to draw attention to "attracting the attention of aliens as an existential risk."

If possible, I would like to overcome the issue labeled with a • above. I don't know whether I can, but I would welcome any suggestions. Do you know of any specific Less Wrong posts that might be relevant?

Replies from: Vladimir_Nesov, thomblake
comment by Vladimir_Nesov · 2010-08-09T23:22:40.571Z · LW(p) · GW(p)

Changing your mind too often is better than changing your mind too rarely, if on the net you manage to be confluent: if you change your mind by mistake, you can change it back later.

(I do believe that it's not worthwhile to worry about attracting the attention of aliens - if that isn't clear - though it's a priori worthwhile to think about whether it's a risk. I'd guess Eliezer will be more conservative on such an issue and won't rely on an apparently simple conclusion that it's safe, declaring it dangerous until FAI makes a competent decision either way. I agree that it's a negative-utility action though, just barely negative due to unknown unknowns.)

comment by thomblake · 2010-08-09T22:24:52.832Z · LW(p) · GW(p)

Just because somebody makes a point that pushes against my current view doesn't mean that the person disagrees with my current view.

Actually that is a good heuristic for understanding most people. Only horribly pedantic people like myself tend to volunteer evidence against our own beliefs.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-09T22:30:16.008Z · LW(p) · GW(p)

Yes, I think you're right. The people on LessWrong are unusual. Even so, even when speaking to members of the general population, one will sometimes misinterpret the things they say as evidence of certain beliefs. (They may be offering evidence to support their beliefs, but I may misinterpret which of their beliefs they're offering evidence in support of.)

And in any case, my point (1) above still stands.

comment by multifoliaterose · 2010-08-09T21:23:46.814Z · LW(p) · GW(p)

Thanks for your remark. I agree that what I said in my last comment is too strong.

I'm not convinced that the negative effects associated with sending signals into space are trivial, but Vladimir's remarks did meaningfully lower my level of confidence in the notion that a really powerful optimization process would go out of its way to attack Earth in response to receiving a signal from us.

comment by Vladimir_Nesov · 2010-08-09T21:13:12.789Z · LW(p) · GW(p)

To me that conclusion also didn't sound quite right, but we did begin the discussion from that assertion, and there are arguments for it at the beginning of the discussion (not particularly related to where this thread went). Maybe something we cleared up helped with those arguments indirectly.

comment by Larks · 2010-08-09T23:33:54.476Z · LW(p) · GW(p)

Isn't this a Hawk-Dove situation, where pre-committing to fight even if you'll probably lose could be in some AGI's interests, by deterring others from fighting them?
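For reference, the standard Hawk-Dove payoffs make the deterrence intuition concrete. This is a minimal sketch with illustrative numbers: V is the value of the contested resource, C the cost of a fight, and C > V is the interesting case.

```python
# Classic Hawk-Dove payoffs; V and C are illustrative assumptions with C > V.
V, C = 2.0, 10.0

payoff = {
    ("hawk", "hawk"): (V - C) / 2,  # both escalate: standard expected payoff (V - C) / 2
    ("hawk", "dove"): V,            # the dove backs down, the hawk takes everything
    ("dove", "hawk"): 0.0,          # back down, get nothing
    ("dove", "dove"): V / 2,        # split peacefully
}

# Against an opponent credibly pre-committed to "hawk":
print("fight the committed hawk:", payoff[("hawk", "hawk")])  # -4.0
print("back down:               ", payoff[("dove", "hawk")])  #  0.0
```

Backing down against a credibly pre-committed hawk beats fighting it, which is exactly what would make such a pre-commitment tempting - if it could be believed.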

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T23:42:31.377Z · LW(p) · GW(p)

Threats are not made to be carried out. The possibility of actual fighting sets the rules of the game - a worst-case scenario which the actual play will improve on, to an extent that depends, for each player, on the outcome of the bargaining aspect of the game.

Replies from: Larks
comment by Larks · 2010-08-10T00:00:44.891Z · LW(p) · GW(p)

For a threat to be significant, it has to be believed. In the case of AGI, this probably means the AGI itself being unable to renege on the threat. If two such AGIs met, wouldn't fighting be inevitable? If so, how do we know it wouldn't be worthwhile for at least some AGIs to make such a threat, sometimes?

Then again, 'Maintain control of my current level of resources' could be a Schelling point that prevents descent into conflict.

But it's not obvious why an AGI would choose to draw its line in the sand there, though, when 'current resources plus epsilon% of the commons' is available. The main use of Schelling points in human games is to create a more plausible threat, whereas an AGI could just show its source code.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-10T00:08:07.872Z · LW(p) · GW(p)

An AGI won't turn itself into a defecting rock when there is a possibility of a Pareto improvement over that.

comment by Larks · 2010-08-09T23:29:11.362Z · LW(p) · GW(p)

This points in the direction of it being impossible for extraterrestrials to credibly communicate anything to one another, which suggests that human attempts to communicate with extraterrestrials have zero expected value rather than the negative expected value I suggest in my main post.

Or rather, the only thing you can communicate is that you're capable of producing the message. In our case, this basically means we're communicating that we exist and little else.

comment by wedrifid · 2010-08-09T18:55:31.067Z · LW(p) · GW(p)

(2) Creating an AGI is not sufficient to prevent being destroyed by an alien AGI. Depending on which AGI starts engaging in recursive self improvement first, an alien AGI may be far more powerful than a human-produced AGI.

This is true. The extent to which it is significant seems to depend on how quickly AGIs in general can reach ridiculously-diminishing-returns levels of technology. From there, for the most part, a "war" between AGIs would (unless they cooperate with each other to some degree) consist of burning their way to more of the cosmic commons than the other guy.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-09T19:10:23.280Z · LW(p) · GW(p)

This is what I have often thought about. I perceive the usual attitude here to be that once we manage to create FAI, i.e. a positive singularity, ever after we'll be able to enjoy and live our lives. But who says there'll ever be a period without existential risks? Sure, the FAI will take care of all further issues; that's an argument. But generally, as long as you don't want to stay human yourself, is there a real option besides enjoying the present and not caring much about the future, or forever focusing on mere survival?

I mean, what's the point? The argument here is that working now is worth it because in return we'll earn utopia. But that argument counts equally well for fighting alien u/FAI and entropy itself.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T19:16:17.925Z · LW(p) · GW(p)

The argument here is that working now is worth it because in return we'll earn utopia. But that argument counts equally well for fighting alien u/FAI and entropy itself.

Not equally well. The tiny period of time that is the coming century is what determines the availability of huge amounts of resources, and of time in which to use them. When existential risks are far lower (by a whole bunch of orders of magnitude), the ideal way to use resources will be quite different.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-09T19:37:13.512Z · LW(p) · GW(p)

Absolutely, I was just looking for excuses I guess. Thanks.

comment by Larks · 2010-08-08T22:36:02.168Z · LW(p) · GW(p)

Robin Hanson wrote a paper wondering whether the first wave might already have passed by, and whether what we see around us is merely the left-over resources. If that were the case, AI aliens might not find it worthwhile to re-colonise, but would still want to take down any other powerful optimisation systems that arose. Even if it were too late to stop them appearing, the sooner it could interrupt their post-singularity growth, the better, from its perspective.

Replies from: Eliezer_Yudkowsky, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-08T22:49:02.421Z · LW(p) · GW(p)

Then it would've been trivial to leave at least one nanomachine and a radio detector in every solar system, which is all it takes to wipe out any incipient civilizations shortly after their first radio broadcast.

Replies from: Thomas, dclayh, Larks
comment by Thomas · 2010-08-09T06:27:46.180Z · LW(p) · GW(p)

It would be trivial to transform all the matter in every solar system reached into some useware for the sender, and not bother with possible future civilizations there at all.

comment by dclayh · 2010-08-09T06:41:45.895Z · LW(p) · GW(p)

Wow, one could write a story about a civilization of beings who find coherent radio-frequency radiation extremely painful (for instance), because of precisely this artificial selection.

comment by Larks · 2010-08-08T23:00:32.314Z · LW(p) · GW(p)

Yes, you're right. The only reason it would tolerate life/civilisation for so long is if it was hiding as well.

comment by timtyler · 2010-08-09T06:59:54.496Z · LW(p) · GW(p)

Re: "Robin Hanson wrote a paper wondering if the first wave might not already have passed by, and what we see around us is merely the left-over resources."

What - 4 billion years ago?!? What happened to the second wave? Why did the aliens not better dissipate the resources to perform experiments and harvest energy, and then beam the results to the front? This hypothesis apparently makes little sense.

Replies from: Larks
comment by Larks · 2010-08-09T21:05:10.008Z · LW(p) · GW(p)

The first wave might have burnt too many resources for there to be a second wave, or a second wave might travel at a much slower rate.

link

Edit: link formatting

Replies from: JoshuaZ, timtyler
comment by JoshuaZ · 2010-08-09T21:09:36.787Z · LW(p) · GW(p)

Um, that link is to a string quartet version of an Oasis song. It is quite good but I'm pretty sure that isn't the link you meant to give.

Replies from: Larks
comment by Larks · 2010-08-09T21:26:44.239Z · LW(p) · GW(p)

Thanks, fixed. I'd better check the other link I posted, actually.

It's the new Rickrolling, except with better music.

comment by timtyler · 2010-08-10T03:12:40.900Z · LW(p) · GW(p)

There are mountains of untapped resources lying around. If there were intelligent agents in the galaxy 4 billion years ago, where are their advanced descendants? There are no advanced descendants - so there were likely no intelligent agents in the first place.

Replies from: Larks
comment by Larks · 2010-08-10T11:09:58.359Z · LW(p) · GW(p)

It might be that what looks like a lot of resources to us is nothing compared to what they need. Imagine some natives living on a Pacific island concluding that, because there are loads of trees and a fair bit of sand around, there can't be any civilisations beyond the sea, or they would want the trees for themselves.

We might be able to test this by working out the distribution of stars, etc. we'd expect from the Big Bang.

If Robin is right, we'd expect their advanced descendants to be hundreds of light years away, heading even further away.

Replies from: timtyler
comment by timtyler · 2010-08-10T19:45:43.261Z · LW(p) · GW(p)

These are space-faring aliens we are talking about. Such creatures would likely use up every resource - and forward energy and information to the front, using lasers, with relays if necessary. There would be practically nothing left behind at all. The idea that they would be unable to utilise some kinds of planetary or solar resource - because they are too small and insignificant - does not seem remotely plausible to me.

Remember that these are advanced aliens we are talking about. They will be able to do practically anything.

comment by dclayh · 2010-08-08T21:21:56.504Z · LW(p) · GW(p)

But there seems to me to be no reason to believe that it's more likely that our signals will reach friendly extraterrestrials than it is that our signals will reach unfriendly extraterrestrials.

In fact, as Eliezer never tires of pointing out, the space of unfriendliness is much larger than the space of friendliness.

comment by Emile · 2010-08-09T08:32:04.496Z · LW(p) · GW(p)

But as Eliezer has pointed out in Humans In Funny Suits, we should be wary of irrationally anthropomorphizing aliens. Even if there's a tendency for intelligent life on other planets to be sort of like humans, such intelligent life may (whether intentionally or inadvertently) create a really powerful optimization process.

The creation of a powerful optimization process is a distraction here - as Eliezer points out in the article you link, and in others like the "Three Worlds Collide" story, aliens are quite unlikely to share much of our value system.

Yes, the creation of unfriendly AI is an important topic here on Earth, but for an alien civilization, all we need to know is that they're starting off with a different value system, and that they might become very powerful. Meeting a powerful entity with a different value system is more likely to be bad news than good news, regardless of whether the "power" comes from creating an "alien-friendly" AI, an "alien-unfriendly" AI (destroying their old alien value system in the process), self-modification, uploading or whatnot.

Berating Stephen Hawking for not mentioning this detail ("Maybe the aliens will create an unfriendly AI!") is unnecessary. His view looks much less naive than this:

My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than presently existing humans.

comment by MartinB · 2010-08-09T01:06:57.470Z · LW(p) · GW(p)

There is no reason for any alien civilization to ever raid Earth for its resources without first raiding all the other stuff that is freely available and unclaimed in open space. Wiping us out to avoid troublemakers, on the other hand, is reasonable. I recently read Heinlein's 'The Star Beast', where the United Federation Something regularly destroys planets for being dangerous.

Replies from: wedrifid, NancyLebovitz
comment by wedrifid · 2010-08-09T02:21:42.112Z · LW(p) · GW(p)

There is no reason for any alien civilization to ever raid Earth for its resources without first raiding all the other stuff that is freely available and unclaimed in open space.

I would weaken that claim to "all else being equal an alien civilization will prefer claiming resources from open space over raiding Earth for resources". Mineral concentrations and the potential convenience of moderate gravity spring to mind as factors.

I agree with your general position.

Replies from: MartinB, JoshuaZ
comment by MartinB · 2010-08-09T02:29:23.006Z · LW(p) · GW(p)

You can catch asteroids by just grabbing them, while on Earth you need all kinds of infrastructure just to dig stuff up. There would need to be some item with a higher concentration here, but even that I would expect to be more easily available elsewhere. Not having a hostile biosphere is helpful for mining.

Replies from: timtyler
comment by timtyler · 2010-08-09T07:22:18.868Z · LW(p) · GW(p)

Existing living systems seem to prefer resources on Earth to resources on asteroids. Aliens may do so too - for very similar reasons.

Replies from: RobinZ
comment by RobinZ · 2010-08-09T20:10:21.588Z · LW(p) · GW(p)

Alien species are unlikely to be able to live on Earth without terraforming or life support systems. They may want resources on Earth, but probably not for the reasons humans do.

Replies from: timtyler, Vladimir_Nesov
comment by timtyler · 2010-08-09T20:19:00.078Z · LW(p) · GW(p)

I expect they could knock up some Earth-friendly robots in under five minutes - and then download their brains into them. The Earth has gravity enough to hang on to its liquid water. It seems to be the most obvious place in the solar system for living systems to go for a party.

comment by Vladimir_Nesov · 2010-08-09T20:16:07.481Z · LW(p) · GW(p)

"Alien species"? Like little green men? Come on! We are talking interstellar or intergalactic travel here, surely they'd have created their AGI by then. Let's not mix futurism and science fiction.

Replies from: RobinZ
comment by RobinZ · 2010-08-09T20:24:42.247Z · LW(p) · GW(p)

The reason humans prefer resources on Earth to resources on asteroids is that (a) humans already live on Earth and (b) humans find it inconvenient to live elsewhere. Neither condition would be expected to apply to extrasolar species colonizing this solar system. timtyler's claim is therefore difficult to sustain.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T20:31:08.441Z · LW(p) · GW(p)

My point is that the claim is irrelevant, because there can't be any biological aliens. We of course can discuss the fine points of theories about the origin of the blue tentacle, but it's not a reasonable activity.

comment by JoshuaZ · 2010-08-09T02:26:14.689Z · LW(p) · GW(p)

Mineral concentrations and the potential convenience of moderate gravity spring to mind as factors.

Gravity might not be something they actually want, since gravity means gravity wells, which you need to get out of.

Replies from: wedrifid
comment by wedrifid · 2010-08-09T02:44:53.903Z · LW(p) · GW(p)

Gravity makes it rather a lot easier to harvest things that are found in a gaseous or liquid form (at the temperatures to which the source is ever exposed).

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-09T02:49:30.756Z · LW(p) · GW(p)

Sure, gravity has both advantages and disadvantages, and how much gravity there is matters a lot. If I had to make a naive guess, I'd say that enough gravity to get most stuff to stick around but weak enough to allow easy escape would likely be ideal for most purposes, so a range of around Earth to Mars (maybe slightly lower), but that's highly speculative.
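For a rough sense of scale - approximate, rounded masses and radii; nothing here settles what gravity level would actually be preferred - the escape velocities work out as follows.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Approximate masses (kg) and radii (m), rounded.
bodies = {
    "Earth": (5.97e24, 6.371e6),
    "Mars": (6.42e23, 3.390e6),
    "Moon": (7.35e22, 1.737e6),
}

for name, (mass, radius) in bodies.items():
    v_esc = math.sqrt(2 * G * mass / radius) / 1000  # escape velocity in km/s
    print(f"{name}: ~{v_esc:.1f} km/s")
# Earth ~11.2, Mars ~5.0, Moon ~2.4 km/s
```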

Replies from: wedrifid
comment by wedrifid · 2010-08-09T12:34:28.691Z · LW(p) · GW(p)

I'd say that enough gravity to get most stuff to stick around but weak enough to allow easy escape would likely be ideal for most purposes, so a range of around Earth to Mars (maybe slightly lower)

It would seem to depend on which resource was most desired.

My speculation is similar to yours. I can think of all sorts of reasons for and against mining Earth before asteroids, but for our purposes we don't really need to know. "All else being equal" instead of "no reason for any civilisation ever" conveys the desired message without confounding technicalities.

comment by NancyLebovitz · 2010-08-09T08:35:22.591Z · LW(p) · GW(p)

recently read Heinlein's 'The Star Beast', where the United Federation Something regularly destroys planets for being dangerous.

That's fictional evidence (though quite a good novel), and it doesn't prove anything.

How hard is it to destroy (all life on? all sentient life on?) planets? Are the costs of group punishment too high for it to make sense?

Replies from: MartinB, KrisC
comment by MartinB · 2010-08-09T15:17:11.490Z · LW(p) · GW(p)

Just blow the whole planet up or hurl in an asteroid. It is pretty racist to punish a whole species (and all the other life forms that are not sentient), but what can you do if there is a real danger? 'The Mote in God's Eye' is a fiction where the humans try to make that decision. In real life there are viruses, and dangerous animals, that we prefer to have exterminated.

comment by KrisC · 2010-08-09T08:55:07.066Z · LW(p) · GW(p)

From an engineering standpoint, eliminating almost all life on a planet is trivial for anyone capable of interstellar travel. Real easy to make it look like an accident too. Getting away with it depends on their opinions of circumstantial evidence.

comment by knb · 2010-08-08T23:52:21.521Z · LW(p) · GW(p)

My feeling is that if human civilization advances to the point where we can explore outer space in earnest, it will be because humans have become much more cooperative and pluralistic than presently existing humans.

I agree with the main point of your article but I think this is an unjustifiable (but extremely common) belief. There are plenty of ways for human civilizations to survive in stable, advanced forms besides the ways that have been popular in the West for the last couple centuries. For instance:

  1. A human-chauvinist totalitarian singleton.
  2. An "ant-colony" civilization where cooperation and altruism are made evolutionarily stable by enforcing genetic uniformity for humans.
  3. A human civilization presided over by a Friendly AI that secretly exterminates all other life to protect humanity from future alien AI threats.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-09T00:27:25.039Z · LW(p) · GW(p)

Indeed. Others:

  • A global state forms, or one cooperative pluralistic state predominates, but by the time the state's influence reaches the extraterrestrials, the lack of negative feedback that the state would have obtained if it had had rivals has caused its truth-maintenance institutions to fall into disrepair, with the result that, without intending to do so, the humans destroy the extraterrestrials - similar to the way the U.S. is currently unintentionally making a mess of Iraq by believing its own internal propaganda about the universal healing power of free elections, civil rights and universal suffrage.

  • It turns out (surprise!) that cooperation on the national scale and pluralism are not efficient means of organizing a state, and that the only reason they are so highly regarded at present is that we are in a period of unusual and unsustainable wealth-per-capita, and that professing cooperative and pluralistic values is a good way for individuals and organizations (like NGOs) to impress others and persuade others of their worth as potential friends. When the Hansonian Dream Time ends, i.e., when Malthusian limits reassert themselves and the average life is again lived at the subsistence level, individuals who persist in spending a significant portion of their resources impressing others in this way die off, and those who are left are the ones who realize that coercion, oligarchy and intolerance have again become the only effective long-term means by which to organize a human state.

  • Human civilization continues to become more cooperative and pluralistic, with the result that those who chafe at that are more likely to venture into space so that they can found small societies organized around other ideals, like exploitation and human-chauvinism. (The pluralistic societies allow that because they are, well, pluralistic.) And those already living in space are more likely to reach the stars first.

  • National states continue to compete with each other, rather than forming into a single global state. Most of the states continue to become more cooperative and pluralistic, but one state adopts the ethic of aggression against all other states and national expansion, which causes it to reach the stars first.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T08:31:16.720Z · LW(p) · GW(p)

National states continue to compete with each other, rather than forming into a single global state. Most of the states continue to become more cooperative and pluralistic, but one state adopts the ethic of aggression against all other states and national expansion, which causes it to reach the stars first.

The terrestrial aspect didn't work when the Nazis tried it.

This is no guarantee that it couldn't work on a second try, but such a policy is defection on such a massive scale that there's likely to be a grand alliance against it.

It seems to me that putting military empires together has gotten steadily more difficult, possibly because of the diffusion of technology.

Also, the risks (of finding oneself up against a grand alliance) and the costs of defection might be such that no sensible leader would try it, and if the leader isn't sensible, they're likely to have bad judgment in the course of the wars.

Replies from: ObliqueFault
comment by ObliqueFault · 2010-08-09T18:06:47.779Z · LW(p) · GW(p)

Territorial expansion didn't work for the Nazis because they didn't stop with just Austria and Czechoslovakia. The Allies didn't declare war until Germany invaded Poland, and even then they didn't really do anything until France was invaded.

It seems to me that the pluralistic countries aren't willing to risk war with a major power for the sake of a small and distant patch of land (and this goes double if nuclear weapons are potentially involved). They have good reason for their reluctance - the risks aren't worth the rewards, especially over the short term. But an aggressive and patient country can, over long time periods, use this reluctance to its advantage.

For example, there's the Chinese with Tibet and the Russians more recently with South Ossetia.

The USSR also got away with seizing large amounts of land just before and during WWII, mainly because the Allies were too worried about Germany to do anything about it. I concede this was an unusual situation, though, one that's unlikely to occur again in the foreseeable future.

(Edited for spelling)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T18:16:31.409Z · LW(p) · GW(p)

I was addressing the idea that a nation could greatly increase its wealth through conquest. Nibbling around the edges the way China is doing, or even taking the occasional bite like the USSR (though that didn't work out so well for them in the long run) isn't the same thing.

Replies from: ObliqueFault
comment by ObliqueFault · 2010-08-09T19:14:47.405Z · LW(p) · GW(p)

China's been using that strategy for a very long time, and it's netted them quite a large expanse of territory. I would argue that China's current powerful position on the world stage is mainly because of that policy.

Of course, if space colonization gets underway relatively soon, then the nibbling strategy is nearing the end of its usefulness. On the other hand, if it takes a couple hundred more years, the nibbling can still see some real gains relative to more cooperative countries.

comment by dclayh · 2010-08-08T21:22:57.782Z · LW(p) · GW(p)

Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends.

Well put. Certainly if humans achieve a positive singularity we'll be very interested in containing other intelligences.

comment by timtyler · 2010-08-09T07:17:25.764Z · LW(p) · GW(p)

Re: "I was recently complaining to a friend about Stephen Hawking's remark as an example of a popular scientist misleading the public."

I don't really see how these comments are misleading.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-09T07:36:59.752Z · LW(p) · GW(p)

Right, so after my friend made his remarks, which led me to write the top-level post, I realized that from a certain point of view Hawking's remarks are accurate.

That being said, Hawking's remarks are very much prone to being taken out of context and being used in misleading ways. See for example the video in the ABC News article.

I believe that prominent scientists should take special care to qualify and elaborate on remarks that sound sensationalist, because if such care is not taken, there's a very high probability that, upon being repeated, the remarks will mislead the public, contributing to generally low levels of rationality by lending scientific credibility to science fiction. This promotes the idea that science is on an equal footing with things like astrology.

In general, I'm very disappointed that Stephen Hawking has not leveraged his influence to work systematically to reduce existential risk. Though he sometimes talks about the future of the human race, he appears to be more interested in being popular than in ensuring its survival.

comment by humpolec · 2010-08-09T00:03:41.166Z · LW(p) · GW(p)

Isn't the problem with friendly extraterrestrials analogous to Friendly AI? (In that they're much less likely than unFriendly ones.)

The aliens can have "good" intentions but probably won't share our values, making the end result extremely undesirable (Three Worlds Collide).

Another option is for the aliens to be willing to implement something like CEV toward us. I'm not sure how likely that is. Would we implement CEV for Babyeaters?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-09T00:13:10.735Z · LW(p) · GW(p)

How likely are real aliens to be so thoroughly optimized for human revulsion?

Replies from: MichaelVassar
comment by MichaelVassar · 2010-08-09T21:03:38.881Z · LW(p) · GW(p)

Wildly unlikely, though in an infinite universe some exist without optimization.

comment by KrisC · 2010-08-08T22:25:02.172Z · LW(p) · GW(p)

Any society capable of communicating is presumably the product of a significant amount of evolution. There will always (?) be a doubt whether any simulation will be an accurate representation of objective reality, but a naturally evolved species will always be adapted to reality. As such, unanticipated products of actual evolution have the potential to offer unanticipated insights.

For the same reason we strive to preserve bio-diversity, I believe that examination of the products of separate evolutions should always be a worthwhile goal for any inquisitive being.

Replies from: timtyler
comment by NancyLebovitz · 2010-08-08T21:56:15.144Z · LW(p) · GW(p)

I'd be really surprised if friendly aliens could give us much useful help-- maybe not any.

However, contacting aliens who aren't actively unfriendly (especially if there's some communication) could enable us to learn a lot about the range of what's possible.

And likewise, aliens might be interested in us because we're weird by their standards. Depending on their tech and ethics, the effect on us could be imperceptible, strange and/or dangerous for a few individuals, mere samples of earth life remaining on reservations, or nothing left.

Just for the hell of it-- what if the reason we aren't hearing radio is that there's something better which we haven't discovered yet?

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-08T22:47:12.601Z · LW(p) · GW(p)

Well, if we want to stick to things with a speed of c, that basically means we're limited to light/radio, gravity (way too weak to be practical), and... some sort of communication based on the strong force? I don't know enough to speak regarding the plausibility of that, but I imagine the fact that gluons can directly interact with other gluons would be a problem.

...barring, of course, some sort of radical new discovery, which I guess was more the point of the question.

comment by zero_call · 2010-08-08T23:18:41.449Z · LW(p) · GW(p)

AFAIK there are currently no major projects attempting to send contact signals around the galaxy (let alone the universe). Our signals may be reaching Vega or some of the nearest star systems, but definitely not much farther. It's not prohibitively difficult to broadcast out to, say, a 1000-light-year-radius ball around Earth, but you're still talking about an antenna far larger than anything currently existing.

Right now the SETI program is essentially focused on detection, not broadcasting. Broadcasting is a much more expensive problem. Detection is favorable for us because if there are other broadcasting civilizations, they will tend to be more advanced, and broadcasting will be comparatively easier/cheaper for them.

Edit: If you're doing directional broadcasting, it's true that you can go much further. Of course, you are simply trading broadcasting distance for the amount of space covered by the signal. Wikipedia says that Arecibo broadcast towards M13, around 25,000 light years away. That's about the same as our distance from the center of the Milky Way.
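To see the distance/coverage tradeoff numerically, here's a back-of-the-envelope inverse-square calculation - a minimal sketch in which the transmitter power and antenna gain are illustrative assumptions, not the actual Arecibo figures.

```python
import math

# Received flux from a directional broadcast, via the inverse-square law.
P_tx = 1e6    # transmitter power in watts (assumed, megawatt-class)
gain = 1e7    # directional antenna gain (assumed, roughly 70 dBi)
LY = 9.46e15  # metres per light year

def flux(distance_ly):
    """Power per unit area (W/m^2) at the given distance, ignoring absorption."""
    d = distance_ly * LY
    return P_tx * gain / (4 * math.pi * d ** 2)

for dist in (4.2, 25, 1000, 25000):  # nearby star, Vega-ish, 1000 ly, M13-ish
    print(f"{dist} ly: {flux(dist):.2e} W/m^2")
```

The received flux falls off as 1/d^2, so pushing the same transmitter out to tens of thousands of light years only works by concentrating the beam onto a tiny patch of sky.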

Replies from: timtyler
comment by JoshuaZ · 2010-08-08T21:47:20.552Z · LW(p) · GW(p)

If intelligent aliens arise due to evolution, they'll likely be fairly close to humans in mindspace compared to the entire space of possible minds. In order to reach a minimal tech level, they'll likely need to be able to cooperate, communicate, empathize, and put off short-term gains for long-term gains. That already puts them much closer to humans. There are ways this could go wrong (for example, a species that uses large hives, like ants or termites). And even a species that close to us in mindspace could still pose a massive existential risk.

comment by Apprentice · 2010-08-08T21:20:29.092Z · LW(p) · GW(p)

Space signals take a long time to cross a given region of space, and space travel across the same distance seems to take orders of magnitude longer.
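For scale, here's a quick sketch; the ship speeds and distance are illustrative assumptions.

```python
# Signal time vs. travel time over the same distance, for assumed ship speeds.
distance_ly = 100                      # illustrative distance in light years
signal_years = distance_ly             # light covers one light year per year

for ship_speed in (0.1, 0.01, 0.001):  # assumed ship speeds as fractions of c
    travel_years = distance_ly / ship_speed
    print(f"ship at {ship_speed} c: signal {signal_years} yr, "
          f"travel {travel_years:,.0f} yr ({travel_years / signal_years:.0f}x longer)")
```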

If communication is practical and travel is not, then that may be an argument in favor of attempting contact. Friendly aliens could potentially be very helpful to us simply by communicating some information. It's harder (but by no means impossible) to see how unfriendly aliens could cause us harm by communicating with us.

Replies from: dclayh, KrisC
comment by dclayh · 2010-08-08T21:29:28.036Z · LW(p) · GW(p)

I don't think it's even that hard. Presumably an arbitrarily stronger intelligence could build arbitrarily subtle disaster-making flaws into whatever "helpful" technology/science it gives us. It could even include a generalized harmful sensation, as was discussed in another thread recently.

Replies from: magfrump
comment by magfrump · 2010-08-08T22:30:59.565Z · LW(p) · GW(p)

See the first chapter of Vinge's "A Fire Upon the Deep" for an example of arbitrarily subtle disaster-making flaws.

comment by KrisC · 2010-08-08T22:34:01.189Z · LW(p) · GW(p)

It's difficult to see how contact with aliens would not cause harm for some. Regardless of the content, mere knowledge of aliens will presumably cause many individuals to abandon their world-views. Not that the net result will necessarily be negative, but there will be worldwide societal consequences.