Comments

Comment by lightwave2 on How Many LHC Failures Is Too Many? · 2008-09-21T08:08:24.000Z · score: 1 (1 votes) · LW · GW

I bet the terrorists would target the LHC itself, so after the terrorist attack there's nothing left to turn on.

Comment by lightwave2 on A Prodigy of Refutation · 2008-09-18T09:04:02.000Z · score: 1 (1 votes) · LW · GW

"Surely no supermind would be stupid enough to turn the galaxy into paperclips; surely, being so intelligent, it will also know what's right far better than a human being could."

Sounds like Bill Hibbard, doesn't it?

Comment by lightwave2 on The Truly Iterated Prisoner's Dilemma · 2008-09-05T12:59:25.000Z · score: 0 (0 votes) · LW · GW

There's a dilemma or a paradox here only if both agents are perfectly rational intelligences. In the case of humans vs. aliens, the logical choice would be "cooperate on the first round, and on succeeding rounds do whatever your opponent did last round". The risk of losing the first round (1 million people lost) is worth taking because of the extra 98-99 million people you can potentially save if the other side also cooperates.
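The strategy described above is tit-for-tat. A minimal sketch of how it plays out, using an assumed illustrative payoff matrix (the specific numbers are not from the original post):

```python
# Tit-for-tat: cooperate on the first round, then mirror the
# opponent's previous move.

def tit_for_tat(opponent_history):
    """opponent_history: list of the opponent's past moves ('C' or 'D')."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Return player A's total payoff over the iterated game."""
    # Assumed payoff matrix: (my move, their move) -> my payoff.
    payoff = {('C', 'C'): 2, ('C', 'D'): 0, ('D', 'C'): 3, ('D', 'D'): 1}
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a = strategy_a(hist_b)
        b = strategy_b(hist_a)
        total += payoff[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return total

# Mutual tit-for-tat locks in cooperation for the whole game;
# against a pure defector, tit-for-tat loses only the first round.
print(play(tit_for_tat, tit_for_tat))    # 200
print(play(tit_for_tat, always_defect))  # 99
```

This illustrates the comment's point: the worst case against a defector is a one-round loss, while mutual cooperation pays off every round.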

Comment by lightwave2 on Rationality Quotes 13 · 2008-09-03T13:50:38.000Z · score: 0 (1 votes) · LW · GW

The soldier protects your right to do any of those actions, and since there are always people who want to take that right away from you, it is the soldier who stops them from doing so.

Comment by lightwave2 on Qualitative Strategies of Friendliness · 2008-08-30T11:24:56.000Z · score: 0 (2 votes) · LW · GW

Just like you wouldn't want an AI to optimize for only some of the humans, you wouldn't want an AI to optimize for only some of the values. And, as I keep emphasizing for exactly this reason, we've got a lot of values.

What if the AI emulates some/many/all human brains in order to get a complete list of our values? It could design its own value system better than any human.

Comment by lightwave2 on Magical Categories · 2008-08-26T14:58:41.000Z · score: 0 (0 votes) · LW · GW

I wonder if you'd consider a superintelligent human to have the same flaws as a superintelligent AI (and to eventually destroy the world). What about a group of superintelligent humans (assuming they have to cooperate in order to act)?