The problem with proportional extrapolation

post by pathos_bot · 2024-01-30T23:40:02.431Z


I think a big bias preventing adequate appraisal of AI safety is that people generally perceive future issues as simply scaled-up versions of current issues, and assume that current issues will scale up at the same rate relative to each other.

Of course, this varies from person to person, but the gravity this default presumption exerts over speculative imagination limits consideration of long-tailed future worlds, in which some issue making up only 0.0001% of current concern becomes 99.9999% of what matters for continued prosperous human lives.

People have generally seen technology empower humans in proportion to its advancement. So, extrapolating into the future, they imagine technology will continue to empower humans as it becomes more advanced. This bias prevents them from seriously considering the currently tiny issue of control turning into essentially the only important issue in the future.

When SOTA AI is 1/3 as smart as humans, it is massively more beneficial to humanity for it to become 10% smarter than for it to become 10x less inclined to harm humans. Current LLMs could produce millions of pages of text about doing bad things to humans, yet their impact amounts to nothing, while marginal top-level improvements in cognition yield far greater impact and benefit for humans. This reality is beginning to instill a "make it smarter, things will sort themselves out" mentality among accelerationist types, Meta, etc. They see that improvements have only brought benefits, and they use that natural, intuitively developed understanding to bolster the emotional weight of their efforts and investments.

However, when AI is 10x smarter than humans, it will be a massively greater improvement in human welfare for that AI to be 10% more favorably disposed toward humans than for it to become 10x smarter still.
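
To make the flip concrete, here is a minimal toy model of my own (not from the post): assume the benefit an AI delivers grows linearly with capability, while the harm a misaligned AI can cause grows much faster with capability (cubically here, an arbitrary assumption) because more capable systems have far more leverage. The function name, numbers, and exponent are all illustrative choices, not anything claimed in the post.

```python
def net_impact(capability: float, misalignment: float) -> float:
    """Net impact on humans, in arbitrary units: benefit minus potential harm."""
    benefit = capability                   # value delivered scales with smarts (assumption)
    harm = misalignment * capability ** 3  # damage scales steeply with smarts (assumption)
    return benefit - harm

# Sub-human AI (roughly "1/3 as smart as humans"):
base = net_impact(0.3, 0.1)
print(net_impact(0.33, 0.1) - base)   # +10% capability: ~+0.03  -> clearly worth it
print(net_impact(0.3, 0.01) - base)   # 10x better alignment: ~+0.002 -> barely matters

# Strongly superhuman AI (10x human level):
base = net_impact(10.0, 0.1)
print(net_impact(11.0, 0.1) - base)   # +10% capability: ~-32 -> makes things worse
print(net_impact(10.0, 0.01) - base)  # 10x better alignment: ~+90 -> dominates everything
```

Under these assumptions, which marginal improvement matters more reverses entirely between the two regimes, which is the asymmetry the proportional-extrapolation intuition misses.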

This bias, I believe, is somewhat similar to the "end of history" illusion, along with humans' general inability to gauge exponentials while inside them.

It's also why I believe the only thing that will lead to serious consideration of AI safety and the control problem is an incident that seriously harms a large number of people, because then the general consensus will have an anchor from which to extrapolate: "AI at its current level killed X people, so when it becomes 10x more societally integrated, it could kill 10*X people."

Anchors for extrapolation, I believe, are the most efficient way to get people to appraise future risks without the kind of extreme speculation and long-tailed thinking that rationalist types are more prone to.
