Why some people believe in AGI, but I don't.

post by cveres · 2022-10-26T03:09:17.498Z · LW · GW · 6 comments

comment by ChristianKl · 2022-10-26T14:41:07.650Z · LW(p) · GW(p)

> The problem with this argument is that it is very similar to arguments made in the late 1950s in a previous wave of AI optimism.

Why is that a problem? A lot of technologies that do get developed arise in a similar situation, where earlier dreams of developing the technology failed.

Replies from: cveres
comment by cveres · 2022-10-26T20:21:01.348Z · LW(p) · GW(p)

Yes, but when it does finally succeed, SOMETHING must be different.

That is what I go on to discuss. That something, of course, is the invention of DL. So my claim is that if DL is really no better than symbol systems, then the argument will come to the same inglorious end this time.

comment by shminux · 2022-10-26T08:09:33.395Z · LW(p) · GW(p)

Hmm, so is your argument basically "human-level intelligence is so hard, machines will not get there in the foreseeable future, so there is no need to worry about AI alignment"? Or is there something else?

Replies from: cveres
comment by cveres · 2022-10-26T08:48:40.086Z · LW(p) · GW(p)

No, I don't think it is. AI systems can influence decisions even in their fairly primitive state, and we must think carefully about how we use them. But my position is that we don't need to worry about these machines developing extremely sophisticated behaviours any time soon, which keeps the alignment problem somewhat in check.

comment by SD Marlow (sd-marlow) · 2022-10-26T03:44:26.465Z · LW(p) · GW(p)

First, we can all ignore LeCun, because despite his born-again claims, he wants to solve all the problems of the DL he has been pushing (and winning awards for) with more DL (absolutely not neuro-symbolic, despite borrowing related terms).

Second, I made the case that amazing progress in ML is only good for more ML, and that AGI will come from a different direction. Soliciting such views seems to have been the stated aim of this contest, but the distribution of posts indicates a strong confirmation bias toward more, faster, sooner, danger!

Third, I think most people understand your position, but you have to understand the audience. Even if there is no AGI by 2029, on a long enough time scale we don't just reach AGI; a likely outcome is that intelligent machines exist for tens of thousands of years longer than the human race (and they will be the ones to make first contact with another race of intelligent machines from halfway across the galaxy; and yes, it's interesting to consider future AIs contemplating the Fermi Paradox long after we have died off).

Replies from: cveres
comment by cveres · 2022-10-26T06:05:23.340Z · LW(p) · GW(p)

I love it! Ignore LeCun. Unfortunately, he is pushing roughly the same line as Bengio and is actually less extreme than Hinton. The heavyweights are on his side.

So yes, maybe from some direction, one day we will have intelligent machines. But for a funding agency it is not nearly clear enough what that direction is. It is certainly not the kind of DL responsible for the current success, for example transformers.