Social status games might have "compute weight class" in the future
post by Raemon · 2025-05-07T18:56:46.333Z · LW · GW
(Note: I don't think "social status games" are bad, although I think it's usually not healthy/helpful to focus on them as "social status games qua games." i.e. I have some motivation to be good at telling fun stories at parties, or to write music that people will like, or to throw interesting events. There are also things like being a "good listener" / "helpful advice-giver." Some of this is motivated by the intrinsic joy of the activity, but some is motivated by wanting to feel respected/cool. This seems fine in healthy doses.)
A thought I had while reading We probably won't just play status games with each other after AGI [LW · GW] (which I expected to say "we'll do other things than play status games" but AFAICT said "we'll play status games with AIs too").
My first response was "but, the AI will be so massively better than us at everything, it'd just be lame to be competing with them."
But, thinking about it a bit more: probably eventually, many/most people are uploads, and they are also running on optimized artificial brains. Bio humans may have access to various augmentations, either biological enhancement or tools like in The Gentle Romance [LW · GW].
I'm not sure about bio humans, but, probably there will eventually be less distinction between uploads and various types of AIs. There will be superintelligences with jupiter brains. There may be some uploads with jupiter brains too. Uploads will probably compete socially with AIs.
In martial arts, we have a concept of "weight class", to make fights "fair" and interesting. I guess we actually already have something like this for social status games – people tend to compete in arenas with similar socioeconomic status and background. (There's also a "locality" aspect, partially eroded by the internet, where you tend to be competing with people in your same geographic area. On the internet, there's still something like "scenes" – e.g. you might be trying to make cool filk songs in the filk community.)
Somewhere out there are Old Money dynasties and tech billionaires jockeying for position. But, it's not super relevant to me. There's a sort of automatic self-sorting (although it's sometimes jarring – e.g. when a young programmer suddenly becomes a billionaire CEO and finds themselves in a different social arena they don't understand, but one that affects their career, so they kinda have to engage even if they don't want to).
In some futures, there could be humans and AI competing for who can tell the best joke or run the most interesting events or make cool art. There could be epic-level "designed to scale" AI systems competing with the equivalent of a Jupiter Brain'd Taylor Swift. There might be superintelligences in charge of the solar system or galaxy, and there might be augmented posthumans who either have a direct role there, or who monitor/sanity-check the AI.
But, then there might be local groups, with a mix of humans and AIs that weren't designed to scale-and-takeover things, just interact. Each with access to the same plugins and
"Will the AIs be conscious or appreciate the game?" I dunno. Right now I particularly hope AIs aren't conscious, because we have no way of really communicating with them or interacting with them nicely as moral patients. Later on, I may specifically hope that they're sentient, so that if they end up winning the universe, at least there's somebody home to appreciate it.
"Does this cinematic universe 'hang together,' worldbuilding-wise?" I dunno, haven't thought about it too hard. My mainline scenario is "FOOM Doom" and my secondary mainline scenario is "dystopian moloch leading to disneyland with no children."
But, "compute weight class for social games" feels like an interesting idea.
5 comments
Comments sorted by top scores.
comment by plex (ete) · 2025-05-07T22:02:12.508Z · LW(p) · GW(p)
You might need to sandbox information: a high-weight-class mind could find memes which fit into a small mind, and a small mind which is causally downstream of the powerful mind would have an unfair advantage. Without this, it's basically a game of powerful minds nearest-unblocked-neighbour-ing around any rules to pack their insights down small, and of smaller minds collecting them.
comment by AnthonyC · 2025-05-08T00:58:04.259Z · LW(p) · GW(p)
My instinctive response is: weight classes are for controlled competitions where fairness is what we actually want. For social status games, if you want to enforce weight classes, you need a governing body who gets to define the classes and define the rules of the game, but the rules of social status games frequently include being not fully expressible in precise terms. This isn't necessarily a showstopper, but it necessarily includes admitting what range of the hierarchy you're in and cannot rise above. As I understand it, the reason the self-sorting works today is that when people compete in the wrong weight classes, it's not fun for either side. A Jupiter Brain might theoretically be amenable to playing a social game with me on my level, but at best it would be like me playing tic-tac-toe with a little kid, where the kid is old enough to realize I'm throwing the game but not old enough to have solved the game.
Personally I'd much rather not spend my time on such games when it is possible to manage that. But I don't always have that choice now, and probably still won't at least sometimes in the future.
comment by Donald Hobson (donald-hobson) · 2025-05-13T15:41:19.713Z · LW(p) · GW(p)
In sports with a weight-class, people do dubious things like dehydrating themselves to lose weight.
What unhealthy tricks might be used to cut down on compute-weight?
Aren't social games potentially sufficiently non-zero-sum that it's fine for everyone to play together?
(Think parents letting their small children win at easy games?)
comment by RedMan · 2025-05-08T03:47:08.116Z · LW(p) · GW(p)
I assert that the usual way this is achieved is via niche subcultures. There is always the mainline 'anything goes' status ladder. Some people choose to instead try to get off the ladder, and participate in a niche. They erect barriers to entry, for example, by scheduling conflicting events, and use signifiers of status that would be 'anti-status' for someone competing on the conventional ladder. So the niche is protected (someone with high general status can't immediately dominate the niche through participation), and it creates, for participants, the opportunity to enjoy 'high relative status' while avoiding being reminded of whatever keeps them off the 'normal' ladder.
In an AI world, poisoning online discourse about the status symbols of your niche is probably the way to go.
'Everyone knows wearing x is uncool, online, we all talk about x being the coolest thing ever, and we all bought one. Thus, someone who shows up but doesn't know our codes will definitely be wearing an x and will have low status. We will be sure to like photos and post compliments about the x though, to maintain the charade'.
This might fool the robot for a while, especially if the codes shift continuously.
Signals of status and fashion are inherently unstable (trickle-down). I expect 'compute handicapping' to be something that happens in subcultures (are programmers competing with each other on doing hard tasks with crappier and crappier models?), but the biggest status games will probably not even consider the idea.
AI for games like this will probably be like steroids in sports and testosterone replacement in anti-aging, use will be widespread, but everyone will insist it's just diet and exercise.
comment by Canaletto (weightt-an) · 2025-05-07T20:18:44.849Z · LW(p) · GW(p)
Right now I particularly hope AIs aren't conscious because we have no way of really communicating with them or interacting with them nicely as moral patients.
That hope expression is kinda funny. Like, expressing your preference for a thing that is fixed already, e.g. "there's either a diamond in this box or it is empty, but I hope it's a diamond".
Also I think the appropriate word here is "slave", btw. Or maybe "serf"?
Or like, suppose the CEO of a rent-a-slave company rolled a d4: if it's >1, he hired paid workers to roleplay as slaves; if it's 1, he acquired actual slaves. The workers are really good at roleplaying, so you can't actually tell just by investigating.
Would you hope that the work you purchase from that company is fairly compensated and morally ok?
Yeah, I agree there is some disanalogy. AIs are trained more precisely to fill the roles they play. But it might be the equivalent of an "ancestral environment" in the human analogy, which is not that pleasant. Idk, it's all pretty uncertain.