Singularity Cost

post by davidschiffer · 2010-10-08T02:01:09.521Z · LW · GW · Legacy · 2 comments

I don’t think that AI is an existential risk. It is going to be more of a golden opportunity, for some but not for all.

Given that most people oppose AI on various grounds (religious, economic), chances are it will be implemented within a small group, and very few people will get to benefit from it. Wealthy people would probably be the first to use it.

This isn’t a regular technology, and it will not go first to the rich and then to everybody else within a couple of decades, as happened with phones or computers. This is where Kurzweil is wrong.

Can someone imagine the dynamics of a group that has access to AI for 20-30 years?

I doubt that after 20 or 30 years, heck, even after 10 years, they would need any money, so the assumption that it will be shared with the rest of the world for financial reasons doesn’t seem well founded.

So I am trying to save up and figure out what the cost of entry into this club would be.

Any thoughts on that?

2 comments


comment by magfrump · 2010-10-08T07:06:50.089Z · LW(p) · GW(p)

If an AI is developed and run in such a way that it serves the interests of a select group of rich folk and no one else, then:

a) the friendliness problem has essentially been solved. That's great! I don't think it's likely that this will happen, though.

b) the power of the AI will likely come primarily from research and inventions, which will be sold to the general public, resulting in a general increase in welfare. If this is not the case, then we may have different definitions of AI, or the people using it are not very creative, in which case someone more creative will approach them, and the opportunity will be worth so much money to them that they will start. This is largely speculative, but I don't think it's particularly controversial.

c) the source code will get leaked, governments will require the group to turn over their results, or a significant conflict between this group and other global power groups will erupt.

d) all of this would easily happen within 5 years, if not 1. Talking about a single AI existing for 20 to 30 years (if "single AI" even has meaning! Our intelligence is highly modular.) is complete nonsense, or at least highly confused about the definition of AI.

There is a huge amount of thinking on this topic by highly intelligent people. If you're interested in updating on their beliefs, here is a link to the Hanson-Yudkowsky AI-Foom debate, which contains a great deal of discussion of possible futures that seem to me much more likely and sophisticated than yours, even if I don't entirely agree with them.

comment by jimrandomh · 2010-10-08T12:59:14.811Z · LW(p) · GW(p)

AIs aren't like tools; they're like agents. If you want future AIs to talk to you, do impressive things that will make them think you're worth talking to. If access to an AI is something bought with money, then something has already gone horribly wrong.