Steven Pinker on ChatGPT and AGI (Feb 2023)
post by Evan R. Murphy · 2023-03-05T21:34:14.846Z · LW · GW · 8 comments
This is a link post for https://news.harvard.edu/gazette/story/2023/02/will-chatgpt-replace-human-writers-pinker-weighs-in/
While I disagreed with a lot of Robin Hanson's latest take on AI risk [LW · GW], I am glad he came out with an updated position. I think with everything that's happened in the past 6-12 months, it's a good time for public intellectuals and prominent people who have previously commented on AGI and AI risk to check in again and share their latest views.
That got me curious whether Steven Pinker had made any recent statements. I found this article in the Harvard Gazette from last month (Feb 2023), which I couldn't find posted on LessWrong before:
Article link
Will ChatGPT supplant us as writers, thinkers?
Q&A with Steven Pinker
by Alvin Powell
Feb 14, 2023
Summary
Here's a summary of the article that ChatGPT generated for me just now (bold mine):
Steven Pinker, a psychology professor at Harvard, has commented on OpenAI’s ChatGPT, an artificial intelligence (AI) chatbot that can answer questions and write texts. He is impressed with the AI's abilities, but also highlights its flaws, such as a lack of common sense and factual errors. Pinker believes that ChatGPT has revealed how statistical patterns in large data sets can be used to generate intelligent-sounding text, even if it does not have understanding of the world. He also believes that the development of artificial general intelligence is incoherent and not achievable, and that current AI devices will always exceed humans in some challenges and not others. Pinker is not concerned about ChatGPT being used in the classroom, as its output is easy to unmask as it mashes up quotations and references that do not exist.
Note that while he comments on AGI being an incoherent idea, he doesn't speak specifically about existential risk from AI misalignment. So it's not totally clear, but I think we can infer Pinker considers the risk very low, since he doesn't think AGI is possible in the first place.
8 comments
Comments sorted by top scores.
comment by JNS (jesper-norregaard-sorensen) · 2023-03-06T09:36:22.406Z · LW(p) · GW(p)
This is not me hating on Steven Pinker, really it is not.
PINKER: I think it’s incoherent, like a “general machine” is incoherent. We can visualize all kinds of superpowers, like Superman’s flying and invulnerability and X-ray vision, but that doesn’t mean they’re physically realizable. Likewise, we can fantasize about a superintelligence that deduces how to make us immortal or bring about world peace or take over the universe. But real intelligence consists of a set of algorithms for solving particular kinds of problems in particular kinds of worlds. What we have now, and probably always will have, are devices that exceed humans in some challenges and not in others.
This looks to me like someone who A) is talking outside of their wheelhouse and B) has not given what they say enough thought.
It's all over the map: superheroes vs. superintelligence. "General machine" is incoherent (?)
And then he goes completely bonkers and says the bolded part. Maybe Alvin Powell got it wrong, but if not, then I can only conclude that whatever Steven Pinker has to say about (powerful) general systems is bunk, and I should pay no attention.
So I didn't finish the article.
The only thing it did was solidify my perception of public talk/discourse on (powerful) general systems. I think it is misguided to such a degree that any engagement with it leads to frustration.[1]
[1] I think this explains why EY at times seems very angry and/or frustrated. Having done what he has done for many years now in an environment like that must be insanely depressing and frustrating.
↑ comment by Lone Pine (conor-sullivan) · 2023-03-06T10:50:53.739Z · LW(p) · GW(p)
Either you believe in the Church-Turing thesis or you don't, it seems. General machines have existed for over 70 years (every universal, general-purpose computer is one)! I wonder how these people will pivot once there are human-like full agents running around (assuming we live to see it).
comment by Augustine Esterhammer-Fic (augustine-esterhammer-fic) · 2023-03-31T14:53:13.355Z · LW(p) · GW(p)
I'm sure this talking point has been done to death, but if it's true that ChatGPT (in an experimental setting) was capable of deceiving someone on TaskRabbit into solving a CAPTCHA for it, and ChatGPT is only a language model, then we have already far surpassed the kinds of capabilities Pinker has been dismissing for years.
It's similar to his writing on how language models will always be bad at the nuances of translating languages. I study Indonesian and Spanish, and recently had a conversation on character.ai switching between them. Unimaginable four years ago.
I think Pinker has an idea of how AI can and can't operate that is rapidly becoming out of date, especially for someone who is so publicly vocal on the topic.
Kind of feels irresponsible to downplay safety issues.
comment by habryka (habryka4) · 2023-03-05T21:46:15.608Z · LW(p) · GW(p)
Mod note: I edited the title to say "Feb 2023" instead of "Feb 2022", because well, the thing happened in Feb 2022 (and indeed, I was very surprised to see the original title since this would have somehow implied that Steven Pinker had access to ChatGPT months before it was released).
↑ comment by Evan R. Murphy · 2023-03-05T21:49:12.452Z · LW(p) · GW(p)
Oops, thanks for catching that!
because well, the thing happened in Feb 2022
You mean Feb 2023, right? (Are we in a recursive off-by-one-year discussion thread? 😆)
↑ comment by habryka (habryka4) · 2023-03-05T22:58:41.881Z · LW(p) · GW(p)
You mean Feb 2023, right? (Are we in a recursive off-by-one-year discussion thread? 😆)
Yes, exactly, sorry, I meant to say that the thing happened in Feb 2022, of course.
↑ comment by Radford Neal · 2023-03-05T23:20:54.876Z · LW(p) · GW(p)
I'm completely confused. Maybe you should just make a fresh start, and say whatever you actually intend to say, without reference to what you said before?
↑ comment by Evan R. Murphy · 2023-03-05T23:48:38.484Z · LW(p) · GW(p)
Haha, sorry about that - the Too Confusing; Didn't Read is:
- The article is from Feb 2023 (one month ago), but I initially had a typo in the title saying it was from Feb 2022
- Habryka fixed the typo, so now it correctly reads Feb 2023
- The rest is just comments from me and Habryka making more accidental date typos, as well as some intentional ones for confusion-inducing comic relief