How far is AGI?

post by Roko Jelavić (roko-jelavic) · 2020-07-05T17:58:30.224Z · LW · GW · No comments

This is a question post.

The ability to complete sequences is equivalent to prediction. GPT-3 completes sequences by predicting what the next token will be and then outputting that prediction. The same approach can be applied to images.
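
The prediction model inside GPT-3 is a large transformer, but the completion loop around it is simple. Here is a minimal sketch in Python, with a toy bigram counter standing in for the network; the corpus and names are purely illustrative, not anything GPT-3 actually uses:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, n_steps):
    """Autoregressive completion: predict the next token,
    output it, append it to the context, and repeat."""
    out = list(prompt)
    for _ in range(n_steps):
        followers = counts.get(out[-1])
        if not followers:
            break  # no prediction available for this token
        next_token, _ = followers.most_common(1)[0]  # greedy prediction
        out.append(next_token)
    return out

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(" ".join(complete(model, ["the"], 5)))
```

GPT-3 replaces the bigram table with a learned probability distribution over a large token vocabulary and usually samples from that distribution rather than always taking the top token, but the predict-output-append cycle is the same.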

In general, the agent takes all of its input data up to some point and tries to predict future data. If it can predict its own input data reliably, that means it has a model of the world which is similar to reality. This is similar to Solomonoff induction.
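
For reference, Solomonoff induction makes this precise: every program that reproduces the data seen so far counts as a candidate world-model, weighted by its length, and prediction is conditioning on that mixture:

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}, \qquad M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\,x_{t+1})}{M(x_{1:t})}$$

where $U$ is a universal prefix Turing machine, $\ell(p)$ is the length of program $p$ in bits, and $U(p)=x*$ means the output of $p$ starts with $x$.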

Once you have a good approximation of Solomonoff induction (which is itself uncomputable), you combine that approximation (somehow) with reinforcement learning and expected utility maximization to get an approximation of AIXI.
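
Roughly, in Hutter's notation, AIXI chooses the action that maximizes expected future reward under that same length-weighted mixture over programs:

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The inner sum over programs $q$ is the Solomonoff-style world-model; the outer expectimax over actions and observations is the expected-utility-maximization part.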

Since I'm not an expert in reinforcement learning, I'm not sure which part is harder, but my intuition says the hard part of all of this is approximating Solomonoff induction; once you have a good world-model, it seems relatively straightforward to maximize utility. I hope I'm wrong. (If you think I am, please explain why.)

Answers

answer by Adam Zerner (adamzerner) · 2020-07-05T18:11:53.645Z · LW(p) · GW(p)

See FAQ #4 on MIRI's website, quoted below.

Edit: It was written in 2013, so it is probably best viewed as a jumping-off point from which you can make further updates based on what has happened in the world since then.

In early 2013, Bostrom and Müller surveyed the one hundred top-cited living authors in AI, as ranked by Microsoft Academic Search. Conditional on “no global catastrophe halt[ing] progress,” the twenty-nine experts who responded assigned a median 10% probability to our developing a machine “that can carry out most human professions at least as well as a typical human” by the year 2023, a 50% probability by 2048, and a 90% probability by 2080.

Most researchers at MIRI approximately agree with the 10% and 50% dates, but think that AI could arrive significantly later than 2080. This is in line with Bostrom’s analysis in Superintelligence:

My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI [human-level machine intelligence] not having been developed by 2075 or even 2100 (after conditionalizing on “human scientific activity continuing without major negative disruption”) seems too low.

Historically, AI researchers have not had a strong record of being able to predict the rate of advances in their own field or the shape that such advances would take. On the one hand, some tasks, like chess playing, turned out to be achievable by means of surprisingly simple programs; and naysayers who claimed that machines would “never” be able to do this or that have repeatedly been proven wrong. On the other hand, the more typical errors among practitioners have been to underestimate the difficulties of getting a system to perform robustly on real-world tasks, and to overestimate the advantages of their own particular pet project or technique.

Given experts’ (and non-experts’) poor track record at predicting progress in AI, we are relatively agnostic about when full AI will be invented. It could come sooner than expected, or later than expected.

Experts also reported a 10% median confidence that superintelligence would be developed within 2 years of human equivalence, and a 75% confidence that superintelligence would be developed within 30 years of human equivalence. Here MIRI researchers’ views differ significantly from AI experts’ median view; we expect AI systems to surpass humans relatively quickly once they near human equivalence.

comment by Roko Jelavić (roko-jelavic) · 2020-07-05T18:16:35.175Z · LW(p) · GW(p)

Yes, but GPT-3 offers us new evidence we should try to update on. It's debatable how many bits of evidence it provides, but we can also update based on “Discontinuous progress in history: an update”:

Growth rates sharply changed in many trends, and this seemed strongly associated with discontinuities. If you experience a discontinuity, it looks like there’s a good chance you’re hitting a new rate of progress, and should expect more of that.

AlphaGo was something we saw before we expected it. The GPT-3 text generator was something we saw before we expected it. They were discontinuities.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2020-07-05T19:44:26.828Z · LW(p) · GW(p)

I agree. I'm not sure how much to update on the things you mention or on other things that have happened since 2013, so I think my answer serves as more of a jumping-off point than something authoritative. I edited it to mention that.

answer by George3d6 (George) · 2020-07-06T04:26:30.857Z · LW(p) · GW(p)

1. The problem with theories in the vein of AIXI is that they assume exploration is simple (as it is, in RL), but exploration is very expensive IRL.

So if you want to think within that framework, then AGI is as far away as it takes to build a robust simulation of the world in which we want it to operate (very far away).

2. In the world of mortals, I would say AGI is basically already here, but it's not obvious because its impact is not that great.

We have ML-based systems that could in theory do almost any job; the real problem is that they are much more expensive than humans to "get right", and in some cases (e.g. self-driving) there are regulatory hurdles to cross.

The main problem with a physical human-like platform running an AGI is not that designing the algorithms for it to perform useful tasks is hard; the problem is that designing a human-like platform is impossible with current technology, and the closest alternatives we've got are still more expensive to build and maintain than just hiring a human.

Hence, companies are buying checkout machines to replace employees rather than buying checkout robots.

3. If you're referring to "superintelligence"-style AGI, i.e. something that is much more intelligent than a human, I'd argue we can't tell how far away this is or whether it can even exist (i.e. I think it's non-obvious that the bottleneck at the moment is intelligence rather than physical limitations, see 1, plus corrupt incentive structures, aka why smart humans are still not always used to their full potential).

comment by Donald Hobson (donald-hobson) · 2020-07-06T23:23:22.061Z · LW(p) · GW(p)
1. The problem with theories in the vein of AIXI is that they assume exploration is simple (as it is, in RL), but exploration is very expensive IRL.

I'm not sure what you mean by this. Does RL mean reinforcement learning, but IRL mean "in real life"? AIXI would be very efficient at exploration, using the minimum possible amount. (And a lot of exploration can be done cheaply. There is a lot of data online that can be downloaded for the cost of bandwidth, and sending a network packet to see what you get back is exploration.)
