For what do we need Superintelligent AI?

post by avturchin · 2019-01-25T15:01:01.772Z · LW · GW · 15 comments

This is a question post.


Most human problems could be solved by humans or slightly-above-human AIs. Which practically useful tasks – ones we could define now – require a very high level of superintelligence? I see two main such tasks:

1) Prevention of the creation of other potentially dangerous superintelligences.

2) Solving the task of indefinite life extension, which includes solving aging and uploading.

I could imagine other tasks, like near-light-speed space travel, but they are neither urgent nor necessary.

For which other tasks do we need superintelligence?

Answers

answer by Vanessa Kosoy · 2019-01-26T12:59:38.909Z · LW(p) · GW(p)

The difference between "slightly above human" and "very high level of superintelligence" is difficult to grasp, because we don't have a good way to quantify intelligence and don't have a good way to predict how much intelligence you need to achieve something. That said, some plausible candidates (in addition to the two you mentioned, which are reasonable) are:

  1. Solving all other X-risks
  2. Constructing a Dyson sphere or something else that will allow much more efficient and massive conversion of physical resources to human flourishing
  3. Solving all problems of society/government/economics, except to the extent we want to solve them ourselves
  4. Creating a way of life for everyone which is neither oppressive (like having to work in a boring and/or unpleasant job) nor dull or meaningless
  5. Finding the optimal way to avert a Malthusian catastrophe while satisfying the human preferences for reproduction and immortality
  6. Allowing us to modify/improve the minds of ourselves and our descendants, and/or create entirely new kinds of minds, while protecting us from losing our values and identities, or unintentionally triggering a moral catastrophe
  7. Solving all moral conundrums involving animals, wild nature and other non-human minds, if such exist
  8. Negotiating with aliens, if such exist (but that is probably very non-urgent)

Regarding near-light-speed space travel (and space colonization), it does seem necessary if you want to make the best use of the universe.

Also, I think Gurkenglas has a very good point regarding acausal trade.

answer by Shmi (shminux) · 2019-01-26T06:49:47.521Z · LW(p) · GW(p)

Not even a superintelligent AI, but an Alpha*-level AI could do a lot of good now, if it learned to understand humans without falling prey to human biases. For example, an AI friend who knows just the right words to say in a given situation, never loses patience, and never has an agenda of its own would make the world a much better place almost instantly.

answer by Gurkenglas · 2019-01-25T21:00:57.902Z · LW(p) · GW(p)

Producing a strategic advantage for any party at all that is decisive enough to safely disarm the threat of nuclear war.

Acausal trade on an even footing with distant superintelligences.

If our physics happens to allow an easy way to destroy the world, then, the way we do science, someone will think of it, someone will talk, and someone will try it. If one superintelligent polymath did our research instead, we wouldn't automatically lose if some configuration of magnets, copper, and glass could ignite the atmosphere.

15 comments


comment by Gordon Seidoh Worley (gworley) · 2019-01-25T18:47:16.914Z · LW(p) · GW(p)

Not really an answer, but there's also the point that with superintelligence humans no longer have to do the things they otherwise would do, because we've built such a general tool that it eliminates the need for other tools. This is pretty appealing if, like me, you want to be free to do things because you want to do them, rather than because you need to do them to satisfy some value.

Replies from: avturchin
comment by avturchin · 2019-01-25T20:16:21.210Z · LW(p) · GW(p)

Yes, but it is not necessarily a good thing, as it may cause unemployment. People like to do things, and even for many unpleasant jobs there are people who like to do them. For example, I knew a bus driver who fell into a severe depression after retirement, which ended only after he took up making pictures.

Replies from: Aiyen
comment by Aiyen · 2019-01-25T22:01:41.362Z · LW(p) · GW(p)

Most people seem to need something to do to avoid boredom and potentially outright depression. However, it is far from clear that work as we know it (which is optimized for our current production needs, and in no way for the benefit of the workers as such) is the best way to solve this problem. There is likely a need to develop other things for people to do alongside alleviating the need for work, but simply saying "unemployment is bad" would seem to miss that there may be better options than either conventional work or idleness.

comment by Chris_Leong · 2019-01-25T15:07:13.781Z · LW(p) · GW(p)

I've actually had similar thoughts myself about why developing AI sooner wouldn't be that good. In most places, technology isn't the barrier to human flourishing; governance is.

Prevention of the creation of other potentially dangerous superintelligences

Solving existential risks in general

Replies from: avturchin, Aiyen, avturchin
comment by avturchin · 2019-01-25T15:58:48.323Z · LW(p) · GW(p)

For x-risk prevention, we would have to assume that the risk of quickly creating AI is lower than that of all other x-risks combined, and that estimate is highly uncertain on both sides. For example, I think that biorisks are underestimated in the long run.

But to solve many x-risks we probably don't need full-blown superintelligence, just a good global control system, something that combines ubiquitous surveillance and image recognition.

Replies from: Chris_Leong
comment by Chris_Leong · 2019-01-25T16:30:26.241Z · LW(p) · GW(p)

"But to solve many x-risks we don't probably need full-blown superintelligence, but just need a good global control system, something which combines ubiquitous surveillance and image recognition" - unlikely to happen in the forseeable future

Replies from: avturchin
comment by avturchin · 2019-01-25T16:35:34.644Z · LW(p) · GW(p)

Not everywhere, but China is surprisingly close to it. However, the most difficult question is how to put such a system in every corner of the earth without starting a world war. Oops, I forgot about Facebook.

comment by Aiyen · 2019-01-25T21:57:52.834Z · LW(p) · GW(p)

Where governance is the barrier to human flourishing, doesn't that mean that using AI to improve governance is useful? A transhuman mind might well be able to figure out not only better policies but how to get those policies enacted (persuasion, force, mind control, incentives, something else we haven't thought of yet). After all, if we're worried about a potentially unfriendly mind with the power to defeat the human race, the flip side is that if it's friendly, it can defeat harmful parts of the human race, like poorly-run governments.

comment by avturchin · 2019-01-25T15:54:13.669Z · LW(p) · GW(p)

Most AI work in life extension could be done by narrow AIs, like the data-crunching needed for modelling genetic networks or the control of medical nanobots. A quick ascent of a self-improving - and benevolent - AI may be the last chance of survival for an old person who will not live long enough to see these narrow AI services, but then again, such a person could make a safer bet on cryonics.

Replies from: Aiyen
comment by Aiyen · 2019-01-25T21:52:23.737Z · LW(p) · GW(p)

Safer for the universe maybe, but perhaps not for the old person themselves. Cryonics is highly speculative - it *should* work, given that if your information is preserved it should be possible to reconstruct you, and cooling a system enough should reduce thermal noise and reactivity enough to preserve information... but we just don't know. From the perspective of someone near death, counting on cryonics might be as risky as a quick AI, or more so.

comment by Donald Hobson (donald-hobson) · 2019-01-25T23:43:23.412Z · LW(p) · GW(p)

We don't know what we are missing out on without superintelligence. There might be all sorts of amazing things that we would just never consider making, or would dismiss as obviously impossible, without superintelligence.

I am pointing out that being able to make an FAI that is a bit smarter than you (smartness isn't really on a single scale when cognitive architectures differ vastly - is Deep Blue smarter than a horse?) involves solving almost all the hard problems in alignment. Once we have done all that hard work, we might as well tell it to make itself a trillion times smarter; the cost to us is negligible, and the benefit could be huge.

AI can also serve as a values repository. In most circumstances, values are going to drift over time, possibly due to evolutionary forces. If we don't want to end up as hardscrapple frontier replicators, we need some kind of singleton. Most types of government or committee have their own forms of value drift, and couldn't keep an absolute enough grip on power to stop any rebellions for billions of years. I have no ideas other than Friendly ASI oversight for how to stop someone in a cosmically vast society from creating a UFASI. Sufficiently draconian banning of anything at all technological could stop anyone from creating UFASI long term, but it would also stop most things invented since the industrial revolution.

The only reasonable scenario I can see in which FAI is not created and the cosmic commons still gets put to good use is if a small group of like-minded individuals, or a single person, gains exclusive access to self-replicating nanotech and mind uploading. They then use many copies of themselves to police the world. They do all programming and only run code they can formally prove isn't dangerous. No one is allowed to touch anything Turing-complete.

Replies from: volodymyr-frolov
comment by Volodymyr Frolov (volodymyr-frolov) · 2019-01-26T00:24:44.238Z · LW(p) · GW(p)

That's right. We need superintelligence to solve the problems that we don't even understand. For such problems, we might not even be able to understand their very definition, let alone find a good solution.

comment by seomi · 2019-01-25T20:33:02.190Z · LW(p) · GW(p)

Most human problems could be solved by humans or slightly-above-human AIs

Every task (mainly engineering problems) that is currently solved by humans could be optimized to a staggering degree by a strong AI - think microprocessors.

Then there is the long list of coordination problems that exist in human communities.

The fact that humans are capable of solving some problems now (e.g. food production) is hardly sufficient. The problem is currently solved at immense human cost.

But the main problem is that even though humans are capable of solving some problems, they are inherently selfish, so they will only solve problems for themselves. For this reason, there are billions of people on this planet lacking the basic (and more complex) necessities of life.

Of course, whether an AI will actually help all these people will depend on the governing structure into which it is integrated. But even if it comes from a corporation in a capitalist system, it will still help by dramatically driving costs down.

In other words, I think of an AI as a massively better tool for problem solving, a much more dramatic jump than the switch from horses to automobiles and planes for transportation.

Replies from: avturchin
comment by avturchin · 2019-01-25T20:50:20.579Z · LW(p) · GW(p)

That seems reasonable, but maybe around-human-level AI will be enough to automate food production, and superintelligence is not needed for it? With GMO crops and robotic farms in the oceans, we could provide much more food for everybody.

Replies from: Aiyen
comment by Aiyen · 2019-01-25T22:17:20.070Z · LW(p) · GW(p)

It depends on the goal. We can probably defeat aging without needing much more sophisticated AI than AlphaFold (a recent DeepMind AI that partially cracked the protein folding problem). We might be able to prevent the creation of dangerous superintelligences without AI at all, just with sufficient surveillance and regulation. We very well might not need very high-level AI to avoid the worst immediately unacceptable outcomes, such as death or X-risk.

On the other hand, true superintelligence offers both the ability to be far more secure in our endeavors (even if human-level AI can mostly secure us against X-risk, it cannot do so nearly as reliably as a stronger mind) and the ability to flourish up to our potential. You list near-light-speed space travel as "neither urgent nor necessary", and that's true - a world without near-lightspeed travel can still be a very good world. But eventually we want to maximize our values, not merely avoid the worst ways they can fall apart.

As for truly urgent tasks, those would presumably revolve around avoiding death by various means: anti-aging research, anti-disease/trauma research, gaining security against hostile actors, ensuring access to food/water/shelter, and detecting and avoiding X-risks. The last three may well benefit greatly from superintelligence, as comprehensively dealing with hostile actors is extremely complicated and is also likely necessary for food distribution, and there may well be X-risks a human-level mind can't detect.