Questions for old LW members: how have discussions about AI changed compared to 10+ years ago?
post by Expertium (lavrov-andrey) · 2025-04-27T16:11:57.595Z · LW · GW · 4 comments
I wasn't on LessWrong 10 years ago, during the pre-ChatGPT era, so I'd like to hear from people who were here back in the day. I want to know how AI-related discussions have changed and how people's opinions have changed.
- Are you leaning more towards "alignment is easy" or towards "alignment is hard", compared to many years ago? Is the same true for the community as a whole?
- Which ideas and concepts ("pivotal act", CEV, etc.) that were discussed years ago do you think are still relevant, and which ones do you think are obsolete?
- Is there some topic that was popular 10+ years ago that barely anybody talks about these days?
- Are you leaning more towards "creating AGI is easy" or towards "creating AGI is hard", compared to many years ago? Is the same true for the community as a whole?
- What are you most surprised about? To put it another way, is there something about modern AI that you could not have predicted in advance in 2015?
I am surprised that LLMs can write code yet cannot reliably count the number of words in a short text. Recently, I asked Gemini 2.5 Pro to write a text with precisely 269 words (and even specified that spaces and punctuation don't count as words), and it gave me a text with 401 words. Of course, there are lots of other examples where LLMs fail in surprising ways [LW · GW], but this is probably the most salient example for me.
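For context on how trivial the check itself is, here is a minimal Python sketch using one simple (and admittedly crude) regex definition of a "word"; the helper name is made up for illustration:

```python
import re

def count_words(text: str) -> int:
    """Count words, treating whitespace and punctuation as separators."""
    return len(re.findall(r"[A-Za-z0-9']+", text))

draft = "Exactly five words right here."
print(count_words(draft))  # prints 5
```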
If in 2015 I had been trying to predict what AI would look like in 2025, I definitely would not have predicted that AI would be unable to count to a few hundred yet able to write Python code.
4 comments
comment by RHollerith (rhollerith_dot_com) · 2025-04-27T20:40:01.409Z · LW(p) · GW(p)
In 2015, I didn't write much about AI on Hacker News because even just explaining why it is dangerous tends to spark enthusiasm for it in some people (people attracted to power, who notice that since it is dangerous, it must be powerful). These days, I don't let that consideration stop me from writing about AI.
comment by Brendan Long (korin43) · 2025-04-27T17:58:19.379Z · LW(p) · GW(p)
- I think alignment is easier than I used to think, since we can kind-of look into LLMs and find the concepts, which might let us figure out the knobs we need to turn even though we don't know what they are right now (i.e. weirdly enough, there might be a "lie to humans" button and we can just prevent the AI from pushing it; see the toy sketch after this list). I still think it's unclear whether we'll actually do the necessary research fast enough, though. Alignment-by-default also seems more likely than I would have expected, although it does seem to be getting worse as we make LLMs larger. I'm not really sure how this has changed within the community, since people who don't think AI is a problem don't really post about it.
- I think older posts about AI were mostly arguments about whether things could happen (could you make an oracle that's not an agent, could you keep the AI in a box [? · GW], is AI even possible, etc.), and now that the AI doomers have conclusively won all of those arguments, the discussions are more concrete (discussion of actually-existing AI features).
- It depends on what you mean by easier, but my timelines are shorter than they used to be, and I think most people's are.
- I'm definitely surprised that glorified decompression engines might be sufficient for AGI. The remaining problems don't really surprise me on top of knowing how they're trained[1]. I'm guessing the evolutionary AI people are feeling very vindicated though.
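To make the "knobs we need to turn" idea above concrete: a toy numpy sketch of projecting a hidden-state vector off a single concept direction, assuming some interpretability method has already found that direction. The vectors and the "deception" direction here are made up for illustration, not taken from any real model.

```python
import numpy as np

def ablate_direction(hidden: np.ndarray, concept_dir: np.ndarray) -> np.ndarray:
    """Remove the component of a hidden-state vector along a given concept direction."""
    d = concept_dir / np.linalg.norm(concept_dir)
    return hidden - np.dot(hidden, d) * d

# Toy 4-dimensional "residual stream" vector and a made-up "deception" direction.
h = np.array([0.5, -1.2, 0.3, 2.0])
deception_dir = np.array([0.0, 1.0, 0.0, 1.0])
print(ablate_direction(h, deception_dir))  # component along the direction is now zero
```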
1. ^ There's lots of coding training data and not very much training data for creating documents of a specific length. I think if we added a bunch of "Write ### words about X" training data, LLMs would suddenly be good at it.
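As a rough illustration of what that kind of training data could look like, here is a minimal sketch that turns existing passages into "write exactly N words" pairs. The function name, the example passage, and the crude regex definition of a word are assumptions for illustration, not a claim about how such datasets are actually built.

```python
import re

def make_length_example(topic: str, passage: str) -> dict:
    """Turn an existing passage into a 'write exactly N words' training pair."""
    n = len(re.findall(r"[A-Za-z0-9']+", passage))
    return {
        "prompt": f"Write exactly {n} words about {topic}.",
        "response": passage,
    }

example = make_length_example(
    "spaced repetition",
    "Spaced repetition schedules reviews at increasing intervals.",
)
print(example["prompt"])  # Write exactly 7 words about spaced repetition.
```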
↑ comment by Expertium (lavrov-andrey) · 2025-04-27T18:13:45.327Z · LW(p) · GW(p)
> There's lots of coding training data and not very much training data for creating documents of a specific length. I think if we added a bunch of "Write ### words about X" training data, LLMs would suddenly be good at it.
My point was that it's surprising that AI is so bad at generalizing to tasks that it hasn't been trained on. I would've predicted that generalization would be much better (I also added a link to a post with more examples). This is also why I think creating AGI will be very hard, unless there is a massive paradigm shift (some new NN architecture or a new way to train NNs).
EDIT: It's not "Gemini can't count how many words it has in its output" that surprises me, it's "Gemini can't count how many words it has in its output, given that it can code in Python and in a dozen other languages and can also do calculus".
comment by Said Achmiz (SaidAchmiz) · 2025-04-27T18:52:30.572Z · LW(p) · GW(p)
The biggest change is that AI is discussed much more today than it was 10 years ago.