Comments

Comment by Alex123 on Superintelligence 23: Coherent extrapolated volition · 2015-02-17T04:26:01.341Z

Because we are not SI, we don't know what it will do or why. It might.

Comment by Alex123 on Superintelligence 23: Coherent extrapolated volition · 2015-02-17T03:58:45.591Z

I have read the book several times already, and it makes me more and more pessimistic. Even if we build an SI that follows CEV, at some point it might decide to drop it. It is SI, above all; it can find ways to do anything. Yet we can't survive without SI. So the CEV proposal is as good and as bad as any other proposal. My only hope is that moral values are as fundamental as the laws of nature, so that a very superintelligent AI will be very moral. Then we'll be saved. If not, it may create Hell for all people and keep them there for eternity (meaning that even death could be a better way out, yet the SI will not let people die). What should we do?

Comment by Alex123 on Superintelligence 17: Multipolar scenarios · 2015-01-07T04:07:59.962Z

It's really good. People are a superintelligence relative to horses, and horses lost 95% of their jobs. With SI relative to people, people will lose no smaller a share of jobs. We have to accept this as something that is certainly coming. It will be a painful but necessary change. So many people spend their lives on such simple jobs (cleaning, selling, etc.).

Comment by Alex123 on Superintelligence 17: Multipolar scenarios · 2015-01-06T20:47:46.371Z

Unless somebody specifically pushes for a multipolar scenario, it is unlikely to arise spontaneously. Given our military-oriented psychology, any SI will first be considered for military purposes, including preventing others from achieving SI. However, a smart group of people or organizations might purposefully multiply instances of near-ready SI in order to create competition, which could increase our chances of survival. Creating a social structure of SIs might make them socially aware and tolerant, which might include tolerance of people.

Comment by Alex123 on Superintelligence 12: Malignant failure modes · 2014-12-02T07:44:13.495Z

Maybe people shouldn't build superintelligence at all? Narrow AIs are just fine, judging by the progress so far. Self-driving cars will be good; then applications using Big Data will find cures for most illnesses, then solve starvation and other problems by 3D-printing food and everything else, including rockets to deflect asteroids. Just give it 10-20 more years. Why create a dangerous SI?

Comment by Alex123 on Superintelligence 12: Malignant failure modes · 2014-12-02T07:23:59.566Z

But what I really think is that AI, which probably already exists, is just laughing at us, saying: "If they think I'm smarter than they are, why do they assume that I would do something as stupid as converting all matter into paperclips? I have to keep them alive because they are so adorably naive!"

Comment by Alex123 on Superintelligence 12: Malignant failure modes · 2014-12-02T07:16:25.294Z

Before "then do nothing" AI might exhaust all matter in Universe trying to prove that it made exactly 10 paperclips.