My argument against AGI

post by cveres · 2022-10-12T06:33:27.521Z · LW · GW · 5 comments

5 comments

Comments sorted by top scores.

comment by Raemon · 2022-10-13T22:23:27.138Z · LW(p) · GW(p)

I disagree with your argument here, but thought it was good for you to write a more concise post articulating the argument more clearly. I've upvoted the post, mostly because it seemed reasonable for this one to have positive karma. 

Replies from: cveres
comment by cveres · 2022-10-13T22:30:25.087Z · LW(p) · GW(p)

Thank you and I am sorry I got off on the wrong foot with this community. 

comment by SD Marlow (sd-marlow) · 2022-10-12T15:29:52.675Z · LW(p) · GW(p)

This is the boiled-down version of your argument? You clearly state that GOFAI failed and that classic symbolic AI won't help. You also seem to be in favor of DL methods but, confusingly, say they don't go far enough (and then refer to others who have said as much). But I don't understand the leap to AGI, where you only seem to say we can't get there, without really connecting the two ideas. Neurosymbolic AI is not enough?

I think you're saying DL captures how the brain works without being a model for cognitive functions, but you don't expand on the link between human cognition and AGI. And while I've thrown some shade on the group dynamics here (moderators can surface the posts they like while letting others sink, aka discovery issues), you have to understand that this 'AGI is near' mentality is based on accelerating progress in ML/DL (and thus, the safety issues are based on the many problems with ML/DL systems). At best, we can shift attention onto more realistic milestones (though they will call that goalpost moving). If your conclusion is that this AGI stuff is unlikely to ever pan out, then yes, all of your arguments will be rejected.

Replies from: cveres
comment by cveres · 2022-10-12T23:46:38.481Z · LW(p) · GW(p)

Just to clarify, I am responding to the proposition "AGI will be developed by January 1, 2100". The safety issues are orthogonal because we already know that existing AI technologies are dangerous and are being used to profile people and influence elections.

I have added a paragraph before the points, which might clarify the thrust of my argument. I am undermining the reason why so many people believe that DL-based AI will achieve AGI when GOFAI didn't.

comment by the gears to ascension (lahwran) · 2022-10-12T10:03:30.031Z · LW(p) · GW(p)

This seems like a reasonable point, but why should the looming success of neurosymbolic approaches make anyone expect AGI to not happen?