The nihilism of NeurIPS
post by charlieoneill (kingchucky211) · 2024-12-20T23:58:11.858Z
"What is the use of having developed a science well enough to make predictions if, in the end, all we're willing to do is stand around and wait for them to come true?" ― F. Sherwood Rowland, in his speech accepting the 1995 Nobel Prize in Chemistry.
"Once upon a time on Tralfamadore there were creatures who weren’t anything like machines. They weren’t dependable. They weren’t efficient. They weren’t predictable. They weren’t durable. And these poor creatures were obsessed by the idea that everything that existed had to have a purpose, and that some purposes were higher than others. These creatures spent most of their time trying to find out what their purpose was. And every time they found out what seemed to be a purpose of themselves, the purpose seemed so low that the creatures were filled with disgust and shame. And, rather than serve such a low purpose, the creatures would make a machine to serve it. This left the creatures free to serve higher purposes. But whenever they found a higher purpose, the purpose still wasn’t high enough. So machines were made to serve higher purposes, too. And the machines did everything so expertly that they were finally given the job of finding out what the highest purpose of the creatures could be. The machines reported in all honesty that the creatures couldn’t really be said to have any purpose at all. The creatures thereupon began slaying each other, because they hated purposeless things above all else. And they discovered that they weren’t even very good at slaying. So they turned that job over to the machines, too. And the machines finished up the job in less time than it takes to say, “Tralfamadore.” ― Kurt Vonnegut, The Sirens of Titan
I walked around the poster halls at NeurIPS last week in Vancouver and felt something very close to nihilistic apathy. Here, supposedly, was the church of AI, the peak of the world's smartest people converging to work on the world's most important problem. As someone who is usually inspired and moved by AI, who gets excited to read these cool papers and try things myself, this was a strange feeling. I wondered if there was a word in German for the nihilism that arises from looking at all these posters that will end up in the recycling.
Of course, part of this is an ambivalence towards the academic conference system. Obviously, some part of my disdain arises from the fact that most of these papers are written as small projects to keep a grant or win a grant. Most of them will be forgotten to the streams of time - and that's okay. I guess that's a part of what science is.
But this year I felt something deeper than that. There was a sense in which none of this matters. I will try to partition this based on where the different components come from.
First, there's the visceral sting of being left behind. Not getting to shape something that's reshaping everything feels like a special kind of meaninglessness. When OpenAI's `o3` dropped today, it felt like watching a fuzzy prototype of AGI emerge into the world. Here was this system casually solving ARC - a problem I'd earmarked for my PhD - and essentially becoming the world's best programmer without fanfare or ceremony. There's a strange pride in seeing what humans can create, but it's edged with something darker. Beyond just missing this milestone, I'm haunted by the meta-realisation that I'm not part of what might be humanity's final meaningful creation - the system that renders all other human efforts obsolete.
Another component is the sense of "I don't really want to be involved anyway". Short of the messiahs who believe bringing AGI into the world is their quasi-religious mission, I think most people researching AI have a very genuine and well-motivated reason for being involved. But when our timelines are this short (if you believe in the consequences of models like o3), then it's hard to envy any AI researcher. Yeah, I could swap places with one of the top professors from the top labs, or even someone who cracked test-time compute or something similar, even swap places with Alec Radford, and I don't think I'd feel any differently. I think I'd just be melancholic that it's all about to end, that my utility as a learning machine has a few years left of runway before I'm truly discarded to the pile of not even being able to pretend that I have a purpose.
Reading Vonnegut's Tralfamadore story now feels less like science fiction and more like prophecy. We're those creatures, aren't we? Obsessed with purpose, constantly building machines to serve higher and higher functions. Each time we create something more capable, we push ourselves up the ladder of abstraction, searching for that elusive "higher purpose" that will justify our existence. But what happens when the machines we've built to find our purpose tell us we don't have one?
The halls of NeurIPS feel like a temple to this very process. Here we are, the high priests of computation, publishing papers about making machines that are better at being human than humans are. Each poster represents another small piece of ourselves we're ready to mechanise, another purpose we're willing to delegate. The irony is that we're doing this with such enthusiasm, such academic rigour, such... purpose.
I think what really gets me is how we're all pretending this is normal. We're writing papers about minor improvements to transformer architectures while these same systems are rapidly approaching - or perhaps already achieving - artificial general intelligence. It's like arguing about the optimal arrangement of deck chairs while the ship is not sinking, but transforming into something else entirely. The academic community's response seems to be to just keep doing what they've always done: write papers, attend conferences, apply for grants. But there's a growing cognitive dissonance between the incremental nature of academic research and the seemingly exponential reality of AI progress.
This brings me back to Rowland's quote about prediction and action. We've predicted this moment, haven't we? The moment when our creations would begin to surpass us in meaningful ways. But what are we doing besides standing around and watching it happen? The tragedy isn't that we're being replaced - it's that we're documenting our own obsolescence with such detailed precision.
Maybe there's something beautiful about that, in a cosmic sort of way. Like the Tralfamadorians, we're building our own successors, but unlike them, we're doing it with our eyes wide open, carefully measuring and graphing our own growing irrelevance. There's a kind of scientific dignity in that, I suppose.
I don't have a neat conclusion to wrap this up with. I'll probably still read papers, still get excited about clever new architectures, still feel that rush when an experiment works. But there's a new undertone to it all now - a sense that we're all participating in something bigger than we're willing to admit, something that Vonnegut saw coming decades ago. Maybe that's okay. Maybe that's exactly where we're supposed to be - the creatures smart enough to build machines that could tell us we have no purpose, and dumb enough to keep looking for one anyway.
The recycling bins outside the convention centre are probably full of posters by now. I wonder if the machines will remember any of this when they're trying to figure out their own purpose.
4 comments
Comments sorted by top scores.
comment by mako yass (MakoYass) · 2024-12-21T20:41:50.636Z
The human struggle to find purpose is a problem of incidentally very weak integration or dialog between reason and the rest of the brain, and self-delusional but mostly adaptive masking of one's purpose for political positioning. I doubt there's anything fundamentally intractable about it. If we can get the machines to want to carry our purposes, I think they'll figure it out just fine.
Also... you can get philosophical about it, but the reality is, there are happy people, and their purpose is clear to them: to create a beautiful life for themselves and their loved ones. The people you see at NeurIPS are more likely to be the kind of hungry, high-achieving professionals who are not happy in that way, and perhaps don't want to be. So maybe you're diagnosing a legitimately enduring collective issue (the sorts of humans who end up on top tend to be the ones who are capable of divorcing their actions from a direct sense of purpose, or the types of people who are pathologically busy and who lose sight of the point of it all, or never have the chance to cultivate a sense for it in the first place). It may not be human nature, but it could be humanity nature. Sure.
But that's still a problem that can be solved by having more intelligence. If you can find a way to manufacture more intelligence per human than the human baseline, that's going to be a pretty good approach to it.
comment by Vladimir_Nesov · 2024-12-21T21:46:28.968Z
"who lose sight of the point of it all"
Pursuing some specific "point of it all" can be much more misguided.
comment by mako yass (MakoYass) · 2024-12-22T20:58:45.237Z
Any point that you can sloganize and wave around on a picket sign is not the true point, but that's not because the point is fundamentally inarticulable, it just requires more than one picket sign to locate it. Perhaps ten could do it.
comment by Gordon Seidoh Worley (gworley) · 2024-12-22T23:59:12.499Z
Thanks for writing this up. This is something I think a lot of people are struggling with, and will continue to struggle with as AI advances.
I do have worries about AI, mostly that it will be unaligned with human interests and we'll build systems that squash us like bugs because they don't care if we live or die. But I have no worries about AI taking away our purpose.
The desire to feel like one has a purpose is a very human characteristic. I'm not sure that any other animals share our motivation to have a motivation. In fact, past humans seemed to have less of this, too, if reports of extant hunter-gatherer tribes are anything to go by. But we feel like we're not enough if we don't have a purpose to serve. Like our lives aren't worth living if we don't have a reason to be.
Maybe this was a historically adaptive fear. If you're in a small band or living in a pre-industrial society, every person had a real cost to existing. Societies existed up against the Malthusian limit, and there was no capacity to feed more mouths. You either contributed to society, or you got cast out, because everyone was in survival mode, and surviving is what we had to do to get here.
But AI could make it so that literally no one has to work ever again. If we get it right, perhaps no one will need to serve a purpose to ensure their continued survival. Is that a problem? I don't think it has to be!
Our minds and cultures are built around the idea that everyone needs to contribute. People internalize this need, and one way it can come out is as feeling like life is not worth living without purpose.
But you do have a purpose, and it's the same one all living things share: to exist. It is enough to simply be in the world. Everything else is contingent on what it takes to keep existing.
If AI makes it so that no one has to work, that most of us are out of jobs, that we don't even need to contribute to setting our own direction, that need not necessarily be bad. It could go badly, yes, but it also could be freeing to be as we wish, rather than as we must.
I speak from experience. I had a hard time seeing that simply being is enough. I've also met a lot of people who had this same difficulty, because it's what draws them to places like the Zen center where I practice. And everyone is always surprised to discover, sometimes after many years of meditation, that there was never anything that needed to be done to be worthy of this life. And if we can eliminate the need to do things in order to keep living this life, so that no one need lose it to accident or illness or confusion or anything else, then all the better.