Some humans care about humans. Some humans are mass-murdering sociopaths. It only takes a small number of those sociopaths getting their hands on weapons of mass destruction to cause disaster. A cursory reading of history (and current events!) confirms this.
That's a very well-argued point. I have precisely the opposite intuition, of course, but I can't deny the strength of your argument. I tend to be less interested in tasks that are well-bounded than in those that are open-ended and uncertain. I agree that much of what we call intelligence might be much simpler. But then I think common-sense reasoning is much harder. I think maybe I'll try to draw up my own list of tasks for AGI :)
I think you need to be sceptical about what kind of reasoning these systems are actually doing. My contention is that it is all shallow. A system trained on a near-infinite training set can look indistinguishable from one that can do deep reasoning while in fact just pattern-matching. Or might be. This paper is very pertinent, I think:
https://arxiv.org/abs/2205.11502
short summary: train a deep network on examples from a logical reasoning task, obtain near-perfect validation accuracy, but find it hasn't learnt the task at all! It has learned arbitrary statistical properties of the dataset, completely unrelated to the task. Which is what deep learning does by default. That isn't going to go away with scale - if anything, it will get worse. And if we say we'll fix it by adding 'actual reasoning', well... good luck! AI spent two decades trying to build symbolic reasoning systems, and getting that to work is incredibly hard.
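To make the failure mode concrete, here's a minimal toy sketch of my own (not the paper's setup, which I'd encourage reading directly): the true task is XOR of two bits, but the training distribution contains a spurious feature that happens to equal the label. A model that latches onto that shortcut scores near-perfectly on a held-out validation split drawn the same way, and collapses to chance the moment the artefact is removed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, leak=True):
    a = rng.integers(0, 2, n)
    b = rng.integers(0, 2, n)
    y = a ^ b                                        # the actual task: XOR of two bits
    spurious = y if leak else rng.integers(0, 2, n)  # dataset artefact: column 3 equals the label
    X = np.stack([a, b, spurious], axis=1)
    return X, y

X_tr, y_tr = make_data(5000, leak=True)
X_val, y_val = make_data(1000, leak=True)    # same distribution as training
X_te, y_te = make_data(1000, leak=False)     # spurious correlation removed

model = LogisticRegression().fit(X_tr, y_tr)  # stand-in for the deep net
print("validation accuracy:", model.score(X_val, y_val))  # ~1.0 - looks "solved"
print("test accuracy:      ", model.score(X_te, y_te))    # ~0.5 - learned nothing about XOR
```

The point is just that near-perfect held-out accuracy is entirely compatible with having learned nothing about the task itself.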
Now, I haven't actually read up on the Minerva results yet, and will do so, but I do think we need to exercise caution before attributing reasoning to something if there are dumber ways to get the same behaviour.
To me, all this says is that we need an entirely new paradigm to get anywhere close to AGI. That's not impossible, but it makes me fairly confident that it will take decades, if not a couple of centuries.
I agree it's an attempt to poke Elon, although I suspect Marcus knew he'd never take the bet. I also agree that anything involving real-world robotics in unknown environments is massively more difficult. Having said that, the criteria from Effective Altruism here:
for any human who can do any job, there is a computer program (not necessarily the same one every time) that can do the same job for $25/hr or less
do say 'any job', and we often seem to forget how many jobs require insane levels of dexterity and dealing with the unknown. We could think about the difficulty of building a robot plasterer or car mechanic, for example, and see similar levels of complexity if we pay attention to all the tasks they actually have to do. So I think it fair to make it part of AGI. I do agree that more detailed predictions would be hugely helpful. Marcus's colleague, Rodney Brooks, has a fun scorecard of predictions for robotics and AI here:
https://rodneybrooks.com/predictions-scorecard-2022-january-01/
which I think is quite useful. As an aside, I had a fun 20-minute chat with GPT-3 today and convinced myself that it doesn't have the slightest understanding of meaning at all! Can send the transcript if interested.
Quite possibly. I just meant: you can't conclude from the bet that AGI is even more imminent.
Genuinely, I would love to hear people's thoughts on Marcus's 5 conditions, and hear their reasoning. For me, the condition of a robot cook that can work in pretty much anyone's kitchen is a severe test, and a long way from current capabilities.
"If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world."
One problem I have with these scenarios is that they always rely on some lethal means for the AGI to actually kill people. And those lethal means are also available to humans, of course. If it's possible for an AGI to 'simply' hack into a nuclear base and launch all its missiles, it's possible for a human to do the same - possibly using AI to assist them. I would wager that it's many orders of magnitude more likely for a human to do this, given our long history of killing each other. Therefore a world in which an AGI can actually kill all of us is a world in which a rogue human can as well. It feels to me that we're worrying about the wrong thing in these scenarios - we are the bigger threat.
He offered that bet because Elon Musk had predicted that we'd likely have AGI by 2029, so you're drawing the wrong conclusion from it. Other people joined in with Marcus to push the wager up to $500k, but Musk didn't take the bet, of course, so you might infer something from that!
The bet itself is quite insightful, and I would be very interested to hear your thoughts on its 5 conditions:
https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
In fact anyone thinking that AGI is imminent would do well to read it - it focusses the mind on specific capabilities and how you might build them, which I think is more useful than thinking in vague terms like 'well, AI has this much smartness already, how much will it have in 20 / 80 years!'. I think it's useful and necessary to understand things at that level of detail; otherwise we might be watching someone build a taller and taller ladder and somehow thinking that's going to get us to the moon.
FWIW, I work in DL, and I agree with his analysis.