Comments

Comment by gordon_wrigley2 on Ghosts in the Machine · 2008-06-18T03:08:53.000Z · LW · GW

Simon, what I was getting at is that it (or they) will be on the receiving end of that human response, and if they handle it only about as well as the average human would, then we could be in big trouble.

Comment by gordon_wrigley2 on Ghosts in the Machine · 2008-06-18T03:00:38.000Z · LW · GW

It also seems that if you take a utilitarian point of view (no preserving things just because they're unique or interesting, only because they're useful), then once you have strong AI and decent robotics, is there any use for humans, or are we simply obsolete?

And if the answer is that we're not useful enough to justify the cost of our continued existence, then should we try to define "friendly" to include preserving us, or should we just bite that metaphorical bullet and be happy that our creations will carry our legacy forward?

Comment by gordon_wrigley2 on Ghosts in the Machine · 2008-06-18T01:57:25.000Z · LW · GW

I'd be curious to hear the top initial reactions.

Personally, I'd go with unforeseen consequences of however happiness is defined. But that's because I live and breathe computers, so I imagine early AI having a computerish personality, and I understand at a very fundamental level what can happen when seemingly innocent instructions are carried out with infinitely relentless and pedantic determination.

I have thought for a while now that if we want to survive the transition, the first really successful AIs are going to have to be dramatically more tolerant and understanding than the average human, because you can pretty much guarantee they will be subject to the irrational fear and hatred that humans generally inflict on anything new or different.