"On the Impossibility of Superintelligent Rubik’s Cube Solvers", Claude 2024 [humor]

post by gwern · 2024-06-23T21:18:10.013Z · LW · GW · 6 comments

This is a link post for https://gwern.net/rubiks-cube


In recent years, a number of prominent computer scientists and roboticists have suggested that artificial intelligence may one day solve Rubik's Cubes faster than humans. Many have further argued that AI could even come to exceed human Rubik's Cube-solving abilities by a significant margin. However, there are at least twenty distinct arguments that preclude this outcome. We show that it is not only implausible that AI will ever exceed human Rubik's Cube-solving abilities, but in fact impossible.

6 comments

Comments sorted by top scores.

comment by Jiro · 2024-06-23T21:59:10.308Z · LW(p) · GW(p)

I am generally not thrilled with the idea of humor where the joke is how badly you can strawman your opponent [LW · GW]. You also end up with Schroedinger's comedian, a phenomenon that happens a lot with real-world comedians too: the comedian is making an insightful comment about a real-world issue, up until someone points out that they are saying something flawed or outright false, to which the reply is "it's just humor. It doesn't have to be accurate".

Replies from: gwern
comment by gwern · 2024-06-23T23:20:41.555Z · LW(p) · GW(p)

I don't think a lot of the jokes are strawmen, any more than I think all of the jokes in the original 'Impossibility' paper are strawmen. (The power consumption ones might as well have been copy-pasted straight from the original criticisms of GPT-3 from many award-winning academics back in 2020-2023 for all that Claude-3 wound up modifying them.) The arguments by people like Penrose against strong AI really were that bad and relied on insane premises, like 'humans are logically omniscient and never make mistakes', that have no steelman. And there have been countless equally rubbish arguments made in all seriousness by serious people against AI scaling. It is not strawmanning to point out how breathtakingly bad those were. (Remember when image-gen models were useless because 'they could never generate a character consistently'? Or 'they will never generate realistic hands'? Or 'the fact that they can't generate text inside images proves deep learning has hit a wall'?)

In any case, the main interest of this demo is in showing how easy it is now to generate a pretty coherent and not-too-uncreative/ChatGPTesque 23,000+ word essay by scaffolding an inner-monologue prompt with a cheap, publicly-accessible long-context-window LLM - which is a long way from the 1,000-word GPT-3 attempts I made 4 years ago. (It was a trial run for a poetry project I hope will be more worth reading on its merits rather than merely as a tech demo.)
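[Editor's sketch of what such a scaffold might look like, not gwern's actual pipeline: an outline pass, then a loop that feeds the outline plus the entire draft-so-far back into a long-context model for each new section, with an explicit "think first" inner-monologue instruction. The `complete()` wrapper is a hypothetical stand-in for whatever LLM API is used.]

```python
def complete(prompt: str) -> str:
    """Hypothetical wrapper around a long-context-window LLM API."""
    raise NotImplementedError("plug in your LLM client of choice here")

def write_essay(topic: str, n_sections: int = 20) -> str:
    # First pass: get a numbered outline for the whole essay.
    outline = complete(
        f"Write a detailed {n_sections}-section outline for a long satirical "
        f"essay titled '{topic}'. Number each section."
    )
    draft = ""
    for i in range(1, n_sections + 1):
        # Inner-monologue step: the model sees the outline and everything
        # written so far, reasons about what comes next, then drafts it.
        draft += complete(
            f"Outline:\n{outline}\n\nEssay so far:\n{draft}\n\n"
            f"Think step by step about what section {i} should cover so that "
            f"it stays coherent with the outline and the essay so far, then "
            f"write that section in full."
        ) + "\n\n"
    return draft
```

The point of the long context window is that the full outline and draft can be carried forward on every call, which is what keeps a 23,000-word output coherent; the old 1,000-word GPT-3 attempts could not hold anywhere near that much in context.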

Replies from: xXAlphaSigmaXx, xXAlphaSigmaXx
comment by xXAlphaSigmaXx · 2024-06-24T10:14:45.358Z · LW(p) · GW(p)

The power consumption ones might as well have been copy-pasted straight from the original criticisms of GPT-3 from many award-winning academics back in 2020-2023 for all that Claude-3 wound up modifying them.

That problem hasn't gone away?

Classic singularitarian, simply ignoring inconvenient evidence. (In 1996, your cult leader hero Eliezer Yudkowsky predicted the singularity would occur by 2021. In 2005, shadow demon Ray Kurzweil predicted AGI by 2029. Notice a pattern? Just like fusion power and mind uploading, the singularity is always about 20 years away.)

comment by xXAlphaSigmaXx · 2024-06-24T10:04:23.002Z · LW(p) · GW(p)

Image generation still struggles with hands, text, and consistency. (AI "art" has failed every time it's tried to compete with artists. They're getting laid off because AI is cheaper than stock images and better than nothing.) Google's most "advanced" text predictor just told millions of people to eat rocks and put glue on pizza. (Maybe that's the eeevil superintelligence trying to destroy us?)

GPT-3 was around in 2018 and GPT-4, Claude etc. are minuscule improvements. So much for "exponential progress" - in fact, AI is getting dumber and we are running out of data to fix it. AI writing is still trash regardless of the model and it is incredibly easy to tell. Yes, that includes your "essay". I've read it. (Hint: If AI was really that good, people would be using it for more than spam websites. And no, AI is not being adopted in the workplace, even after three years of you guys shoving Blockchain 2.0 down everyone's throat. Don't believe everything OpenAI's marketing tells you.)

The oncoming AIpocalypse looks more like the Willy Wonka Experience than Terminator to me. It indeed poses a serious threat to our society, but only through sheer incompetence.

comment by Ben (ben-lang) · 2024-06-26T16:57:40.877Z · LW(p) · GW(p)

A housemate of mine at university had a project to build a Rubik's cube solving machine with a group as part of his course.

The "human hands are actually really good and hard to replace mechanically" would be a sentiment he could sympathise with. All other aspects of the project (the solving code, the camera data interpretation) were negligible in comparison to making hands that could turn the faces they wanted to turn, and turn them in increments of 90 degrees (more or less causes the next turn to "jam up" the cube). I think in the end they got it to the point it would usually work with a brand new cube that had never been used before, but after a few hundred moves had been made on any given cube the stiffness of the turns would have changed in an inconsistent way and jams would occur.

Replies from: gwern
comment by gwern · 2024-06-26T20:16:28.825Z · LW(p) · GW(p)

I'd say that anecdote illustrates that human hands are not very good at the problem of Rubik's-cubing, if it is a reasonable task to assign in a single course. No matter how much speedcubers practice, they're not going to be able to solve a cube in <0.4s. Looking at the slo-mo version, it seems like the robot might be able to go even faster, but it's approaching the physical limits of regular cubes not exploding... (WP tells me the human all-time record stands at >7x slower.) Even with unmodified cubes, robots are still a lot faster (using Legos, of all things). Such high-speed robotics is a good example of why chaos doesn't matter [LW(p) · GW(p)].