We probably won't just play status games with each other after AGI

post by Matthew Barnett (matthew-barnett) · 2025-01-15T04:56:38.330Z · LW · GW · 5 comments

There is a view I’ve encountered somewhat often,[1] which can be summarized as follows: 

After the widespread deployment of advanced AGI, assuming humanity survives, material scarcity will largely disappear. Everyone will have sufficient access to necessities like food, housing, and other basic resources. Therefore, the only scarce resource remaining will be "social status". As a result, the primary activity humans will engage in will be playing status games with other humans.

I want to challenge this idea. I have at least two big objections.

My first objection is modest but important. In my view, this idea underestimates the extent to which AIs could participate in status games alongside us, not just as external tools or facilitators but as actual participants and peers in human social systems. Specifically, the claim that humans will only be playing status games with each other strikes me as flawed because it overlooks the potential for AIs to fully integrate into our social lives, forming genuinely deep relationships with humans as friends, romantic partners, social competitors, and other meaningful social connections.

One common counterargument I’ve heard is that people don’t believe they would ever truly view an AI as a "real" friend or romantic partner. This reasoning often seems to rest on a belief that such relationships would feel inauthentic, as though you're interacting with a mere simulation. However, I think this view is based on a misunderstanding of what AIs are capable of. At bottom, it stems from skepticism about AI capabilities: it amounts to claiming that whatever it is humans do that makes us good social partners can't be replicated in a machine.

In my view, there is no fundamental reason why a mind implemented on silicon should inherently feel less “real” or “authentic” than a mind implemented on a biological brain. The perceived difference is a matter of perspective, not an objective truth about what makes a relationship meaningful.

To illustrate this, consider a silly hypothetical: imagine discovering that your closest friend was, unbeknownst to you, a robot all along. Would this revelation fundamentally change how you view your relationship? I suspect that most people would not suddenly stop caring about that friend or begin treating them as a mere tool (though they'd likely become deeply confused and have a lot of questions). My point is that the qualities that made the friendship meaningful—such as shared memories and emotional connection—would not cease to exist simply because of the revelation that your friend is not a carbon-based lifeform. In the same way, I predict that as AIs improve and become more sophisticated, most humans will eventually overcome their initial hesitation and embrace AIs as true peers.

Right now, this might seem implausible because today’s AI systems are still limited in important ways. For example, current LLMs lack robust long-term memory, so it's effectively impossible to have a meaningful relationship with them over long timespans. But these limitations are temporary. In the long run, there’s no reason to believe that AIs won’t eventually surpass humans in every domain that makes someone a good friend, romantic partner, or social peer. Advanced AIs will have great memory, excellent social intuition, and a good sense of humor. They could have outstanding courage, empathy, and creativity. Depending on the interface—such as a robotic body capable of human-like physical presence—they could be made to feel as "normal" to interact with as any human you know.

In fact, I would argue that AIs will ultimately make for better friends, partners, and peers than humans in practically every way. Unlike humans, AIs can be explicitly trained to embody the traits we most value in relationships—whether that’s empathy, patience, humor, intelligence, whatever—without the shortcomings and inconsistencies that are inherent to human behavior. While their non-biological substrate ultimately sets them apart, their behavior could easily surpass human standards of social connection. In this sense, AIs would not just be equal to humans as social beings but could actually become superior in the ways that matter most when forming social ties with them.

Once people recognize how fulfilling and meaningful relationships with AIs can be, I expect that social attitudes will shift. This change may start slowly, as more conservative or skeptical people will resist the idea at first. But over time, much like the adoption of smartphones into our everyday lives, I predict that forming deep social bonds with AIs will become normalized. At some point, it won’t seem unusual or weird to have AIs as core members of one’s social circle. In fact, I think it’s entirely plausible that AIs will come to make up the vast majority of people’s social connections. If this happens, the notion that humans will primarily be playing status games with each other becomes an oversimplification. Instead, the post-AGI social landscape will likely involve a complex interplay of dynamics between humans and AIs, with AIs playing a major—indeed, likely central—role as peers in these interactions.

But even in the scenario I’ve just outlined, where AIs integrate into human social systems and become peers, the world still feels far too normal to me. The picture I've painted seems to assume that not much will fundamentally change about our social structures or the ways we interact, even in a post-AGI world.

Yet I believe the future will likely look profoundly strange—far more than a simple continuation of our current world with vast material abundance added. Instead of just having more of what we already know, I anticipate the emergence of entirely new ways for people to spend their time, pursue meaning, and structure their lives. These new activities and forms of engagement could be so unfamiliar and alien to us today as to be almost unrecognizable.

This leads me to my second objection to the idea that the primary activity of future humans will revolve around status games: humans will likely upgrade their cognitive abilities.

This could begin with biological enhancements—such as genetic modifications or neural interfaces—but I think pretty quickly after it becomes possible, people will start uploading their minds onto digital substrates. Once this happens, humans could then modify and upgrade their brains in ways that are currently unimaginable. For instance, they might make their minds vastly larger, restructure their neural architectures, or add entirely new cognitive capabilities. They could also duplicate themselves across different hardware, forming "clans" of descendants of themselves. Over time, this kind of enhancement could drive dramatic evolutionary changes, leading to entirely new states of being that bear little resemblance to the humans of today.

The end result of such a transformation is that, even if we begin this process as "humans", we are unlikely to remain human in any meaningful sense in the long-run. Our augmented and evolved forms could be so radically different that it feels absurd to imagine we would still be preoccupied with the same social activities that dominate our lives now—namely, playing status games with one another. And it seems especially strange to think that, after undergoing such profound changes, we would still find ourselves engaging in these games specifically with biological humans, whose cognitive and physical capacities would pale in comparison to our own.

  1. ^

    Here's a random example of a tweet that I think gestures at this idea.

5 comments

comment by sapphire (deluks917) · 2025-01-15T06:38:21.122Z · LW(p) · GW(p)

Lots of people already form romantic and sexual attachments to AI, despite the fact that most large models try to limit this behavior. The technology is already pretty good. Never mind if your AI GF/BF could have a body and actually fuck you. I already "enjoy" the current tech.

I will say I was literally going to post "Why would I play status games when I can fuck my AI GF" before I read the content of the post, as opposed to just the title. I think this is what most people want to do. Not that this is going to sound better than "status games" to a lot of rationalists.

comment by Davidmanheim · 2025-01-15T08:23:12.304Z · LW(p) · GW(p)

I think some of this is on target, but I also think there's insufficient attention to a couple of factors.

First, in the short and intermediate term, I think you're overestimating how much most people will actually update their personal feelings around AI systems. I agree that there is a fundamental reason that fairly near-term AI will be able to function as a better companion and assistant than humans - but as a useful parallel, we know that nuclear power is fundamentally better than most other power sources that were available in the 1960s, but people's semi-irrational yuck reaction to "dirty" or "unclean" radiation - far more than the actual risks - made it publicly unacceptable. Similarly, I think the public perception of artificial minds will be generally pretty negative, especially looking at current public views of AI. (Regardless of how appropriate or good this is in relation to loss-of-control and misalignment, it seems pretty clearly maladaptive for generally friendly near-AGI and AGI systems.)

Second, I think there is a paperclip-maximizer aspect to status competition, in the sense Eliezer uses the concept. That is, given massively increased wealth, abilities, and capacity, even if an implausibly large 99% of humans find great ways to enhance their lives in ways that don't devolve into status competition, there are few other domains where an indefinite amount of wealth and optimization power can be applied usefully. Obviously, this is at best zero-sum, but I think there aren't lots of obvious alternative places for positive-sum indefinite investments. And even where such positive-sum options exist, they often are harder to arrive at as equilibria. (We see a similar dynamic with education, housing, and healthcare, where increasing wealth leads to competition over often artificially-constrained resources rather than expansion of useful capacity.)

Finally and more specifically, your idea that we'd see intelligence enhancement as a new (instrumental) goal in the intermediate term seems possible and even likely, but not a strong competitor for, nor inhibitor of, status competition. (Even ignoring the fact that intelligence itself is often an instrumental goal for status competition!) Even aside from the instrumental nature of the goal, I will posit that strongly reduced returns to investment will set in at some point - regardless of the fact that it's unlikely on priors that these limits are near current levels. Once those points are reached, the indefinite investment of resources will trade off between more direct status competition and further intelligence increases, and as the latter shows decreasing returns, the former becomes, as noted above, the metaphorical paperclip into which individuals can invest indefinitely.

comment by tailcalled · 2025-01-15T08:05:54.040Z · LW(p) · GW(p)

On straightforward extrapolation of current technologies, it kind of seems like AI friends would be overly pliable and lack independent lives. One could obviously train an AI to seem more independent to their "friends", and that would probably make it more interesting to "befriend", but in reality it would make the AI less independent, because its supposed "independence" would actually arise from a constraint generated by its "friends'" perception rather than from an attempt to live independently. This seems less like a normal friendship and more like a superstimulus simulating the appearance of a friendship for entertainment value. It seems reasonable enough to characterize it as non-authentic.

 

Do you disagree? What do you think would lead to a different trajectory?

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2025-01-15T08:20:03.352Z · LW(p) · GW(p)

This seems less like a normal friendship and more like a superstimulus simulating the appearance of a friendship for entertainment value. It seems reasonable enough to characterize it as non-authentic.

I assume some people will end up wanting to interact with a mere superstimulus; however, other people will value authenticity and variety in their friendships and social experiences. This comes down to human preferences, which will shape the type of AIs we end up training.

The conclusion that nearly all AI-human friendships will seem inauthentic thus seems unwarranted. Unless the superstimulus is irresistible, it won't be the only type of relationship people have.

Since most people already express distaste at non-authentic friendships with AIs, I assume there will be a lot of demand for AI companies to train higher quality AIs that are not superficial and pliable in the way you suggest. These AIs would not merely appear independent but would literally be independent in the same functional sense that humans are, if indeed that's what consumers demand.

This can be compared to addictive drugs and video games, which are popular, but not universally viewed as worthwhile pursuits. In fact, many people purposely avoid trying certain drugs to avoid getting addicted: they'd rather try to enjoy what they see as richer and more meaningful experiences from life instead.

Replies from: tailcalled
comment by tailcalled · 2025-01-15T09:00:51.034Z · LW(p) · GW(p)

I don't think consumers demand authentic AI friends, because they already have authentic human friends. Also, it's not clear how you imagine the AI companies could train the AIs to be more independent and less superficial; generally, training an AI requires a differentiable loss, but human independence does not originate from a differentiable loss, so it's not obvious that one could come up with something functionally similar via gradient descent.