The other side of the tidal wave
post by KatjaGrace · 2023-11-03T05:40:05.363Z · LW · GW · 85 comments
I guess there’s maybe a 10-20% chance of AI causing human extinction in the coming decades, but I feel more distressed about it than even that suggests—I think because in the case where it doesn’t cause human extinction, I find it hard to imagine life not going kind of off the rails. So many things I like about the world seem likely to be over or badly disrupted with superhuman AI (writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn’t conscious), and I don’t trust that the replacements will be actually good, or good for us, or that anything will be reversible.
Even if we don’t die, it still feels like everything is coming to an end.
85 comments
Comments sorted by top scores.
comment by lbThingrb · 2023-11-03T17:34:54.621Z · LW(p) · GW(p)
I'm middle-aged now, and a pattern I've noticed as I get older is that I keep having to adapt my sense of what is valuable, because desirable things that used to be scarce for me keep becoming abundant. Some of this is just growing up, e.g. when I was a kid my candy consumption was regulated by my parents, but then I had to learn to regulate it myself. I think humans are pretty well-adapted to that sort of value drift over the life course. But then there's the value drift due to rapid technological change, which I think is more disorienting. E.g. I invested a lot of my youth into learning to use software which is now obsolete. It feels like my youthful enthusiasm for learning new software skills, and comparative lack thereof as I get older, was an adaptation to a world where valuable skills learned in childhood could be expected to mostly remain valuable throughout life. It felt like a bit of a rug-pull how much that turned out not to be the case w.r.t. software.
But the rise of generative AI has really accelerated this trend, and I'm starting to feel adrift and rudderless. One of the biggest changes from scarcity to abundance in my life was that of interesting information, enabled by the internet. I adapted to it by re-centering my values around learning skills and creating things. As I contemplate what AI can already do, and extrapolate that into the near future, I can feel my motivation to learn and create flagging.
If, and to the extent that, we get a "good" singularity, I expect that it will have been because the alignment problem turned out to be not that hard, the sort of thing we could muddle through improvisationally. But that sort of singularity seems unlikely to preserve something as delicately balanced as the way that (relatively well-off) humans get a sense of meaning and purpose from the scarcity of desirable things. I would still choose a world that is essentially a grand theme park full of delightful experience machines over the world as it is now, with all its sorrows, and certainly I would choose theme-park world over extinction. But still ... OP beautifully crystallizes the apprehension I feel about even the more optimistic end of the spectrum of possible futures for humanity that are coming into view.
Replies from: ErickBall↑ comment by ErickBall · 2023-11-06T23:54:35.132Z · LW(p) · GW(p)
But that sort of singularity seems unlikely to preserve something as delicately balanced as the way that (relatively well-off) humans get a sense of meaning and purpose from the scarcity of desirable things.
I think our world actually has a great track record of creating artificial scarcity for the sake of creating meaning (in terms of enjoyment, striving to achieve a goal, sense of accomplishment). Maybe "purpose" in the most profound sense is tough to do artificially, but I'm not sure that's something most people feel a whole lot of anyway?
I'm pretty optimistic about our ability to adapt to a society of extreme abundance by creating "games" (either literal or social) that become very meaningful to those engaged in them.
comment by lc · 2023-11-03T12:50:18.843Z · LW(p) · GW(p)
Thank you for saying this. I have tried several times to explain something like it in a post, but I don't think I have the writing skill to convey effectively how deeply distressed I am about these scenarios. It's essential to my ability to enjoy life that I be useful, have political capital, can effect meaningful change throughout the world, can compete in status games with others, can participate in an economy of other people like me, and can have natural and productive relationships with unartificial people. I don't understand at all how I'm supposed to be excited by the "good OpenAI ending" where every facet of human skill and interaction gets slowly commoditized, and that ending seems strictly worse to me in a lot of ways than just dying suddenly in an exploding ball of fire.
Replies from: Viliam, aphyer, tachikoma, nikita-sokolsky↑ comment by Viliam · 2023-11-03T13:14:54.557Z · LW(p) · GW(p)
be useful, have political capital, can effect meaningful change throughout the world, can compete in status games with others, can participate in an economy of other people like me
How large part of this are zero-sum games, and the part that makes you happy is that you are winning? Would the person who is losing feel the same? What is the good ending for them?
Replies from: lc↑ comment by lc · 2023-11-03T13:34:28.458Z · LW(p) · GW(p)
WRT status games: I enjoy playing such games more when everybody agrees to the terms of the game and has a relatively even footing at the beginning and there are resets throughout. "Having more prestige" is great, but it's more important that I get to interact with other people in a meaningful way like that at all. The respect and prestige components people usually associate with winning status games are also not inherently zero-sum. It's possible to respect people even when they lose.
WRT political capital: Maybe it would be clearer if I said that I want to live in a world where humans have agency, and there's a History that feels like it's being shaped by actual people and not by Brownian motion, and where the path to power is not always to subjugate your entire life and psychology to a moral maze. While most people won't outright endorse things like Prigozhin's coup, because they realize it might end up being a lot more bad than good, they are obviously viscerally excited by the possibility that outsiders can win through gutsy action, and get depressed when they realize that's untrue. Contrast this with the default scenario of "some coalition of politicians and AGI lab heads and lobbyists decide how everything is going to be forever".
WRT everything else: Those things aren't zero sum at all. My laptop is useful and so am I. A laborer in Egypt is still participating in the economy.
Replies from: Viliam↑ comment by Viliam · 2023-11-04T15:23:51.215Z · LW(p) · GW(p)
Thank you! I agree. Things called "zero-sum" often become something else when we also consider their impact on third parties, i.e. when we model them as games of 3 players (Player 1, Player 2, World). It may be that the actions of Player 1 negate the actions of Player 2 from their relative perspectives (if we are in a running competition, and I start running faster, I get an advantage, but if you also start running faster, my advantage is lost), but both work in the same direction from the perspective of the World (if both of us run faster, the competition is more interesting to watch for the audience).
In some status games the effect on the third party is mostly "resources are wasted". (I try to buy a larger gold chain, you try to buy a larger gold chain, resources are wasted on mining gold and making chains.)
But if we compete at producing value for the third party, whether it is making jokes, or signaling wealth by sending money to charity, the effect on the third party is the value produced. Such games are good! If we could make producing value for the third party the only status game in town, the world would probably be a much nicer place.
That said, the concept of "useful" seems intrinsically related to "scarcity". The laborer in Egypt does something that wouldn't get done otherwise (or it would get done anyway, but at the cost of something else not done). If we get to see a positive singularity, some kind of future where all important things get done by a Friendly AI, then only the unimportant things are left for us. For example, the AI will provide healthcare, and we will play computer games. (I wanted to say "...and we will make art or tell jokes", but actually the AI will be able to do that much better; unless we choose to ignore that, and make sure that no human in our group is cheating by asking the AI for help.)
The possibility of a coup is, of course, a two-sided coin. If things can get surprisingly better, they can also get surprisingly worse. If all possibilities are open, then so is the possibility of Hell. Someone will try to find a way to sacrifice everything to Moloch in return for being the ruler of that Hell. So other people will have to spend a lot of time trying to prevent that, and we get a sword of Damocles above our heads.
Replies from: lc↑ comment by lc · 2023-11-05T23:27:16.310Z · LW(p) · GW(p)
The possibility of a coup is, of course, a two-sided coin. If things can get surprisingly better, they can also get surprisingly worse.
I have long wanted a society where there is a "constitutional monarchy" position that is high status and a magnet for interesting political skirmishes but doesn't have much control over public policy, and alongside that a "head of government" who is a boring accountant type and by law doesn't get invited to any of the interesting parties or fly around in a fancy jet.
↑ comment by Tachikoma (tachikoma) · 2023-11-03T22:45:44.917Z · LW(p) · GW(p)
How distressed would you be if the "good ending" were opt-in and existed somewhere far away from you? I've explored the future and have found one version that I think would satisfy your desire but I'm asking to get your perspective. Does it matter whether there are super-intelligent AIs but they leave our existing civilization alone and create a new one out on the fringes (the Arctic, Antarctica, or just out in space) and invite any humans to come along to join them without coercion? If you need more details, they're available at the Opt-In Revolution, in narrative form.
↑ comment by Nikita Sokolsky (nikita-sokolsky) · 2024-08-19T06:59:22.980Z · LW(p) · GW(p)
It's essential to my ability to enjoy life
This assumes that we'll never have the technology to change our brain's wiring to our liking? If we live in the post-scarcity utopia, why won't you be able to just go change who you are as a person so that you'll fully enjoy the new world?
↑ comment by lc · 2024-08-19T07:25:08.263Z · LW(p) · GW(p)
https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence [LW · GW]
Replies from: andrei-alexandru-parfeni↑ comment by sunwillrise (andrei-alexandru-parfeni) · 2024-08-21T11:52:15.719Z · LW(p) · GW(p)
But you yourself also wrote [LW(p) · GW(p)] a couple of years ago:
if aligned AGI gets here I will just tell it to reconfigure my brain not to feel bored, instead of trying to reconfigure the entire universe in an attempt to make monkey brain compatible with it. I sorta consider that preference a lucky fact about myself, which will allow me to experience significantly more positive and exotic emotions throughout the far future, if it goes well, than the people who insist they must only feel satisfied after literally eating hamburgers or reading jokes they haven't read before.
And indeed, when talking specifically about the Fun Theory sequence itself, you said [LW(p) · GW(p)]:
I think Eliezer just straight up tends not to acknowledge that people sometimes genuinely care about their internal experiences, independent of the outside world, terminally. Certainly, there are people who care about things that are not that, but Eliezer often writes as if people can't care about the qualia - that they must value video games or science instead of the pleasure derived from video games or science.
Do you no longer endorse this?
comment by Ben Pace (Benito) · 2023-11-03T18:59:25.549Z · LW(p) · GW(p)
Rah to bringing back the short LessWrong post!
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2023-11-05T05:44:39.451Z · LW(p) · GW(p)
Bringing back? When were there ever short LessWrong posts?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-11-05T18:01:38.663Z · LW(p) · GW(p)
The first person that comes to mind for me with this is Wei Dai — here's a 5 paragraph post [LW · GW] of theirs from 2010, and here's a 5-paragraph post of theirs from 2020 [LW · GW]. But also Hal Finney's historic Dying Outside [LW · GW] is 6 paragraphs. Psychohistorian's short story about life extension [LW · GW] is also 5-6 paragraphs. PhilGoetz's great post Have no heroes, and no villains [LW · GW] is just 6 short paragraphs. On Saying The Obvious [LW · GW] is under 500 words.
comment by Ben Pace (Benito) · 2023-11-03T18:59:10.083Z · LW(p) · GW(p)
I don't currently share this sense of distress.
Insofar as we don't all die and we broadly continue to have agency over the world, I am kind of excited and up for the challenge of the new age. Given no-extinction and no-death and lots of other improvements to wealth and health and power, I'm up for the challenges and pains and difficulties that come with it.
I am further reminded of this quote [LW(p) · GW(p)].
Personally, I've been hearing all my life about the Serious Philosophical Issues posed by life extension, and my attitude has always been that I'm willing to grapple with those issues for as many centuries as it takes.
— Patrick Nielsen Hayden
Replies from: kave, xiann
↑ comment by kave · 2023-11-03T21:23:59.356Z · LW(p) · GW(p)
I'd guess maybe @Katja Grace [LW · GW] doesn't expect improvements to power (in the sense of human agency) in the default non-extinction future.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-11-03T22:01:47.785Z · LW(p) · GW(p)
I would be interested in slightly more detail about what Katja imagines that world looks like.
Replies from: KatjaGrace, TrevorWiesinger↑ comment by KatjaGrace · 2023-11-06T01:07:37.871Z · LW(p) · GW(p)
Seems like there are a lot of possibilities, some of them good, and I have little time to think about them. It just feels like a red flag for everything in your life to be swapped for other things by very powerful processes beyond your control while you are focused on not dying. Like, if lesser changes were upcoming in people's lives such that they landed in near mode, I think they would be way less sanguine—e.g. being forced to move to New York City.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-11-06T01:53:32.646Z · LW(p) · GW(p)
I agree it's very jarring. Everything you know is going to stop and a ton of new things will be happening instead. I can see being upset over the things that ended (friendships and other joys) and it hurting to learn the new ways that life is just harder now.
That said, I note I don't feel bothered by your example. In the current era of video calls and shared slacks and LW dialogues I don't think I'd personally much mind being forced to move to New York and I might actually be excited to explore that place and its culture (acknowledging there will be a lot of friction costs as I adjust to a new environment).
Even without that, if I was basically going to live more than a trillion lifetimes, then being forced to move cities would just be a new adventure!
I have a possibly-related not-bothered attitude in that I am not especially bothered about the possibility of my own death, as long as civilization lives on.[1] I am excited for many more stories to be lived out, whether I'm a character in them or not. This is part of why I am so against extinction.
[1] Not that I wouldn't leap on the ability to solve aging and diseases.
↑ comment by trevor (TrevorWiesinger) · 2023-11-04T02:00:28.346Z · LW(p) · GW(p)
I can't speak for Katja, but the impression I get is that she thinks some of the challenges of slow takeoff might be impossible or unreasonably difficult for humans to overcome.
I've written about clown attacks and social media addiction optimization, but I expect resisting clown attacks and quitting social media to be the fun kind of challenge.
Mitigating your agency loss from things like sensor exposure based influence and human lie detection [LW · GW] will not be so fun, or even possible at all.
↑ comment by xiann · 2023-11-05T08:18:20.741Z · LW(p) · GW(p)
I agree, I'm reminded of the quote about history being the search for better problems. The search for meaning in such a utopic world (from our perspective) thrills me, especially when I think about all the suffering that exists in the world today. The change may be chaotic & uncomfortable, but if I consider my personal emotions about the topic, it would be more frightening for the world to remain the same.
comment by Richard_Ngo (ricraz) · 2023-11-03T19:12:23.655Z · LW(p) · GW(p)
I feel pretty wary of the alignment community becoming temperamentally pessimistic, in a way that colors our expectations and judgments. I note this post as a fairly central example of that. (May say more about this later, but just wanted to register disagreement.)
Replies from: TrevorWiesinger↑ comment by trevor (TrevorWiesinger) · 2023-11-04T01:51:34.043Z · LW(p) · GW(p)
I think that the tradeoffs of making posts and comments vs. staying quiet are rather intensely complicated, e.g. you might think that clarifying your feelings through a keyboard or a conversation is worse than staying quiet about your feelings, because if you stay quiet then you aren't outputting your thoughts as tokens.
But if you attempt to make that calculation with only that much info, you'll be blindsided by game-changing unknown unknowns outside your calculation (e.g. staying quiet means your feelings might still be revealed without your knowledge as you scroll through social media [LW · GW], except as linear algebra instead of tokens).
comment by Tamsin Leake (carado-1) · 2023-11-03T12:09:43.798Z · LW(p) · GW(p)
i've written before [LW · GW] about how aligned-AI utopia can very much conserve much of what we value now, including putting in effort to achieve things that are meaningful to ourselves or other real humans. on top of alleviating all (unconsented) suffering and (unconsented) scarcity and all the other "basics", of course.
and without aligned-AI utopia we pretty much die-for-sure. there aren't really attractors in-between those two.
Replies from: Joe_Collman↑ comment by Joe Collman (Joe_Collman) · 2023-11-10T18:19:11.343Z · LW(p) · GW(p)
That's my guess too, but I'm not highly confident in the [no attractors between those two] part.
It seems conceivable to have a not-quite-perfect alignment solution with a not-quite-perfect self-correction mechanism that ends up orbiting utopia, but neither getting there, nor being flung off into oblivion.
It's not obvious that this is an unstable, knife-edge configuration. It seems possible for correction/improvement to be easier at a greater distance from utopia (whether that correction/improvement is triggered by our own agency or by other systems).
If stable orbits exist, it's not obvious that they'd be configurations we'd endorse (or that the things we'd become would endorse them).
Replies from: carado-1↑ comment by Tamsin Leake (carado-1) · 2023-11-10T21:01:01.724Z · LW(p) · GW(p)
okay, thinking about it more, i think the reason i believe this is because of a slack-vs-moloch situation.
if we get a lesser utopia, do we have enough slack to build up a better utopia, even slowly? if not, do we have enough slack to survive-at-all?
i feel like "we have exactly enough slack to live but not improve our condition" is a pretty unlikely state of affairs; most likely, either we don't have enough slack to survive (and we die, though maybe slowly) or we have more than enough to survive (and we improve our condition, though maybe slowly, all the way to the greater-utopia-we-didn't-start-with).
comment by Stephen McAleese (stephen-mcaleese) · 2023-11-03T22:31:15.088Z · LW(p) · GW(p)
No offense but I sense status quo bias in this post.
If you replace "AI" with "industrial revolution" I don't think the meaning of the text changes much and I expect most people would rather live today than in the Middle Ages.
One thing that might be concerning is that older generations (us in the future) might not have the ability to adapt to a drastically different world in the same way that some old people today struggle to use the internet.
I personally don't expect to be overly nostalgic in the future because I'm not that impressed by the current state of the world: factory farming, the hedonic treadmill, physical and mental illness, wage slavery, aging, and ignorance are all problems that I hope are solved in the future.
Replies from: Viliam, peterbarnett↑ comment by Viliam · 2023-11-04T15:54:04.704Z · LW(p) · GW(p)
With adapting, the important question is what happens if you don't. If it only means you will miss out on some fun, I don't mind. Kids these days use Instagram and TikTok; I... don't really understand the allure of that, so I stay away. I may change my mind in the future, so it feels like I am choosing between two good things: the convenience of ignoring the new stuff, and the possible advantages of learning it.
It is different when the failure to adapt will make your life actively worse. Like people today who are old but not retired yet, who made the choice to ignore all that computer stuff, and now they can't get a job. Or the peasants during the industrial revolution who made a bet that "people will always need some food, so my job is safe, regardless of all this new stuff", and then someone powerful just took their fields and built a factory there, and let them starve to death (because they couldn't get a job in that factory).
If the future will have all the problems solved, including the problem of "how can I get food and healthcare in a society where a robot can do literally anything much better and cheaper than me", then... I will find a hobby; I never had a problem with that.
(I really hope the solution will not be "create stressful bullshit jobs".)
A separate question is whether I can survive the point that is halfway between "here" and "there".
↑ comment by peterbarnett · 2023-11-04T05:00:44.748Z · LW(p) · GW(p)
I'm pretty worried about the future where we survive and build aligned AGI, but we don't manage to fully solve all the coordination or societal problems. Humans as a species still have control overall, but individuals don't really.
The world is crazy, and very good on most axes, but also disorienting and many people are somewhat unfulfilled.
It doesn't seem crazy that people born before large societal changes (e.g. the industrial revolution, the development of computers) do feel somewhat alienated from what society becomes. I could imagine some pre-industrial-revolution farmer kind of missing the simplicity and control they had over their life (although this might be romanticizing the situation).
comment by aysja · 2023-11-03T23:46:36.580Z · LW(p) · GW(p)
Yeah :/ I've struggled for a long time to see how the world could be good with strong AI, and I've felt pretty alienated in that. Most of the time when I talk to people about it they're like "well the world could just be however you like!" Almost as if, definitionally, I should be happy because in the really strong success cases we'll have the tech to satisfy basically any preference. But that's almost the entire problem, in some way? As you say, figuring things out for ourselves, thinking and learning and taking pride in skills that take effort to acquire... most of what I cherish about these things has to do with grappling with new territory. And if I know that it is not in fact new, if all of it could be easier were I to use the technology right there... it feels as though something is corrupted... The beauty of curiosity, wonder, and discovery feels deeply bound to the unknown, to me.
I was talking to a friend about this a few months ago and he suggested that because many humans have these preferences, that we ought to be able to make a world where we satisfy them—e.g., something like "the AI does its thing over there and we sit over here having basically normal human lives except that death is a choice and sometimes it helps us figure out hard coordination problems or whatever." And I can almost get behind this, but something still feels off to me. Like how when people get polarized through social media it almost seems like there's no going back? How do we know strange spirals won't happen with an even more advanced technology? It's hard to escape the feeling that a dystopia lurks. Hard to escape the feeling that all the people I know and love might change quickly and radically, that I might change radically, in ways that feel alien to me now. I want to believe that strong AI would be great, and perhaps it would be, perhaps I'm missing something here. But a part of me is terrified.
↑ comment by dirk (abandon) · 2024-08-19T08:05:12.974Z · LW(p) · GW(p)
As you say, figuring things out for ourselves, thinking and learning and taking pride in skills that take effort to acquire... most of what I cherish about these things has to do with grappling with new territory. And if I know that it is not in fact new, if all of it could be easier were I to use the technology right there... it feels as though something is corrupted... The beauty of curiosity, wonder, and discovery feels deeply bound to the unknown, to me.
This is a very strange mindset. It's already not new! Almost everything you can learn is already known by other people; most thoughts you can think have been thought before; most skills, other people have mastered more thoroughly than you're likely to. (If you only mean new to you in particular, on the other hand, AI can't remove the novelty; you'd have to experience it for it to stop being novel). Why would you derive your value from a premise that's false?
↑ comment by whestler · 2024-04-03T14:15:33.729Z · LW(p) · GW(p)
I realise this is a few months old, but personally my vision for utopia looks something like the Culture in the Culture novels by Iain M. Banks. There's a high degree of individual autonomy and people create their own societies organically according to their needs and values. They still have interpersonal struggles and personal danger (if that's the life they want to lead), but in general if they are uncomfortable with their situation they have the option to change it. AI agents are common, but most are limited to approximately human level or below. Some superhuman AIs exist, but they are normally involved in larger civilisational manoeuvring rather than the nitty-gritty of individual human lives. I recommend reading it.
Caveats-
1: yes, this is a fictional example so I'm definitely in danger of generalising from fictional evidence. I mostly think about it as a broad template or cluster of attributes society might potentially be able to achieve.
2: I don't think this level of "good" AI is likely.
↑ comment by Roman Leventov · 2023-11-04T07:06:19.024Z · LW(p) · GW(p)
Discovering and mastering one's own psychology may still be a frontier where the AI could help only marginally. So, more people will become monks or meditators?
comment by Viliam · 2023-11-03T13:07:43.139Z · LW(p) · GW(p)
Ah, if things go well, it will be an amazing opportunity to find out how much of our minds was ultimately motivated by fear. Suppose you are effectively immortal and other people can't hurt you: what would you do? Would you still want to learn? Would you bother keeping friends? Or would you maybe just simulate a million kinds of experience, and then get bored and decide to die or wirehead yourself?
I think I want to know the answer. If it kills me, so be it... the universe didn't have a better plan for me anyway.
It would probably be nicer to take things slowly. Stop death and pain, and then let people slowly figure out everything. That would keep a lot of life normal. The question is whether we could coordinate on that, because it would be tempting to cheat. If we all voluntarily slow down and try to do "life as normal, but without pain", a little bit of cheating ("hey AI, give me 20 extra IQ points and all university-level knowledge as of 2023, but don't tell anyone; otherwise give me life as normal but without pain") would keep a lot of the benefits of life as normal, but also give one a relative advantage and higher status. It's not even necessary to give me all the knowledge, just make me run 10 times faster when no one is looking, and I will study it myself.
Maybe humanity will split into different bubbles, depending on how fast they want to take it, with AI keeping the boundaries between them, so that the slower ones are protected from interference by the faster ones.
Probably the difficult choice will be whether to keep our relative disadvantages, especially if they can be fixed just by asking the AI nicely. We could probably agree on "AI, don't tell us the Theory of Everything, we want to figure it out ourselves, and now we have enough time to do so", or at least, to make the rule that anyone who wants to hear the answer from the AI is allowed to, but then is prevented from giving spoilers to others. But it would feel unfair e.g. to require people with low IQ to stay that way. However, the more differences we remove, the less sense remains for the division of labor, which seems like an important part of our relationships. (Not just literally "labor", but even things like "this person typically makes the jokes, because they are better at making jokes; and this person typically says something empathic; and this person typically makes the decision when the others hesitate...".)
Replies from: M. Y. Zuo↑ comment by M. Y. Zuo · 2023-11-03T14:09:44.019Z · LW(p) · GW(p)
(Not just literally "labor", but even things like "this person typically makes the jokes, because they are better at making jokes; and this person typically says something empathic; and this person typically makes the decision when the others hesitate...".)
Why would it be desirable to maintain this kind of 'division of labor' in an ideal future?
Replies from: Viliam
comment by romeostevensit · 2023-11-03T15:19:51.139Z · LW(p) · GW(p)
I think this might be mapping a regular getting older thing to catastrophe? Part of aging is the signaling landscape that one trained on changing enough that many of the things one once valued don't seem broadly valued any more.
Replies from: Nonlocal
comment by GeneSmith · 2023-11-04T02:59:54.204Z · LW(p) · GW(p)
This is more or less why I chose to go into genetics instead of AI: I simply couldn't think of any realistic positive future with AGI in it. All positive scenarios rely on a benevolent dictator, or some kind of stable equilibrium with multiple superintelligent agents whose desires are so alien to mine, and whose actions are so unpredictable that I can't evaluate such an outcome with my current level of intelligence.
Replies from: lc
comment by Odd anon · 2023-11-03T06:39:43.490Z · LW(p) · GW(p)
I've actually been moving in the opposite direction, thinking that the gameboard might not be flipped over, and actually life will stay mostly the same. Political movements to block superintelligence seem to be gaining steam, and people are taking it seriously.
(Even for more mundane AI, I think it's fairly likely that we'll be soon moving "backwards" on that as well, for various reasons which I'll be writing posts about in the coming week or two if all goes well.)
Also, some social groups will inevitably internally "ban" certain technologies if things get weird. There's too much that people like about the current world, to allow that to be tossed away in favor of such uncertainty.
Replies from: carado-1↑ comment by Tamsin Leake (carado-1) · 2023-11-03T12:16:14.366Z · LW(p) · GW(p)
these social movements only delay AI. unless you ban all computers in all countries, after a while someone, somewhere will figure out how to build {AI that takes over the world} in their basement, and the fate of the lightcone depends on whether that AI is aligned or not.
comment by Lucius Bushnaq (Lblack) · 2023-11-06T06:18:55.558Z · LW(p) · GW(p)
I am not a fan of the current state of the universe. Mostly the part where people keep dying and hurting all the time. Humans I know, humans I don't know, other animals that might or might not have qualia, possibly aliens in distant places and Everett branches. It's all quite the mood killer for me, to put it mildly.
So if we pull off not dying and not turning Earth into the nucleus of an expanding zero-utility stable state, superhuman AI seems great to me.
comment by mako yass (MakoYass) · 2023-11-05T01:24:16.238Z · LW(p) · GW(p)
Did you miss transhumanism? If it's truly important to you, to be useful, alignment would mean that superintelligence will find a way to lift you up and give you a role.
I suppose there might be a period during which we've figured out existential security but the FASI hasn't figured out human augmentation beyond the high priority stuff like curing aging. I wouldn't expect that period to be long.
comment by O O (o-o) · 2023-11-03T07:07:38.856Z · LW(p) · GW(p)
I can only say there was probably someone in every rolling 100-year period who thought the same about the next 100 years.
Replies from: florian-habermacher, KatjaGrace, Seth Herd, TrevorWiesinger↑ comment by FlorianH (florian-habermacher) · 2023-11-03T07:15:55.964Z · LW(p) · GW(p)
I think this time is different. The implications are simply so much broader, so much more fundamental.
Replies from: daniel-kokotajlo, shankar-sivarajan↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-11-03T14:21:26.533Z · LW(p) · GW(p)
Also, from a zoomed-out-all-of-history-view, the pace of technological progress and social change has been accelerating. The difference between 500 AD and 1500 AD is not 10x the difference between 1900 and 2000, it's arguably less than 1x. So even without knowing anything about this time, we should be very open to the idea that this time is more significant than all previous times.
↑ comment by Shankar Sivarajan (shankar-sivarajan) · 2023-11-04T16:16:45.491Z · LW(p) · GW(p)
I think this time is different.
That's what people said last time too. And the time before that.
Replies from: florian-habermacher↑ comment by FlorianH (florian-habermacher) · 2023-11-04T17:03:30.071Z · LW(p) · GW(p)
That's so correct. But still so wrong - I'd like to argue.
Why?
Because replacing the brain is simply not the same as replacing just our muscles. In all the past we've only augmented our brain, with stronger muscle or calculation or writing power etc., using all sorts of dumb tools. But the brain remained the crucial, all-central point for all action.
We will now have tools that are smarter, faster, more reliable than our brains. Probably even more empathic. Maybe more loving.
Statistics cannot be extrapolated when there's a visible structural break. Yes, it may have been difficult to anticipate, 25 years ago, that computers that calculate so fast would not quickly change society all that fundamentally (although quite fundamentally), so the 'this time is different' guys 25 years ago were wrong. But in hindsight it is not so surprising: as long as machines were not truly smart, we could not change the world as fundamentally as we now foresee. But this time, we seem to be about to get the truly smart ones.
The future is a miracle; we cannot truly fathom how exactly it will look. So nothing is absolutely sure, indeed. But merely looking back to the period where mainly muscles were replaceable but not brains is simply not a way to extrapolate into a future where something qualitatively entirely new is about to be born.
So you need something more tangible, something more reliable, to rebut the hypothesis underlying the article. And the article beautifully, concisely explains why we're awaiting something rather unimaginably weird. If you have something to show where specifically it seems wrong, it'd be great to read that.
↑ comment by KatjaGrace · 2023-11-06T01:18:03.362Z · LW(p) · GW(p)
Not sure about this, but to the extent it was so, often they were right that a lot of things they liked would be gone soon, and that that was sad. (Not necessarily on net, though maybe even on net for them and people like them.)
↑ comment by Seth Herd · 2023-11-03T18:01:02.471Z · LW(p) · GW(p)
There was, but their arguments for it were terrible. If there are flaws in the superintelligence argument, please point them out. It's hard to gauge when, but with GPT-4 being smarter than a human at most things, it's tough to imagine we won't have filled in its gaps (memory, using its intelligence to direct continuous learning) within a couple of decades.
↑ comment by trevor (TrevorWiesinger) · 2023-11-04T02:04:35.655Z · LW(p) · GW(p)
Modern civilization is pretty OOD compared to the conditions that formed it over the last 100 years. Just look at the current US-China-Russia conflict, for example. Unlike the original Cold War, this current conflict was not started with intent to carpet bomb each other with nukes (carpet bombing was the standard with non-nuclear bombs during WW2, so when the Cold War started they assumed that they would do the carpet bombing with nukes instead).
comment by Seth Herd · 2023-11-06T21:20:26.510Z · LW(p) · GW(p)
Aligned AGI means people getting more of what they want. We'll figure out where to get challenge and fulfillment.
People will have more friends, not fewer, once we have more time for them, and more wisdom about how to get and keep them. And we will still want human support and advice, even if machines can do it better.
People who want to learn skills and knowledge will still do that, and create spaces to compete and show them off. I'm thinking of the many competitions that already happen with rules about not getting outside help.
Most humans have always lived their lives knowing they'll never be the best at anything or accomplish anything significant on a large scale. Many of them still find happiness in small projects and personal connections.
In the long term, if alignment is right, we'll have something as marvelous as hinted in Richard Ngo's The Soul Key [LW · GW].
comment by [deactivated] (Yarrow Bouchard) · 2023-11-03T06:30:30.831Z · LW(p) · GW(p)
What do you make of the prospect of neurotech, e.g. Neuralink, Kernel, Openwater, Meta/CTRL-Labs, facilitating some kind of merge of biological human intelligence and artificial intelligence? If AI alignment is solved and AGI is safely deployed, then "friendly" or well-aligned AGI could radically accelerate neurotech. This sounds like it might obviate the sort of obsolescence of human intelligence you seem to be worried about, allowing humans alive in a post-AGI world to become transhuman or post-human cyborg entities that can possibly "compete" with AGI in domains like writing, explanation, friendship, etc.
Replies from: StartAtTheEnd↑ comment by StartAtTheEnd · 2023-11-03T20:26:54.427Z · LW(p) · GW(p)
I don't think rational human beings are human beings at all. Why wouldn't we make ourselves psychopaths in order to reduce suffering? Why would we not reduce emotions in order to become more rational?
Friendships are a human thing, an emotional thing, and something which is ruined by excess logic and calculation.
I argue that everything pretty in life, and everything optimal in life, are at odds with each other. That good things are born from surplus, and even from wasting this surplus (e.g. parties, festivals, relaxation). And that everything ugly in life stems from optimization (exploitation, manipulation, bad faith).
I don't even wish to become more rational than I am now. I can feel how it's making me more nihilistic, how I must keep myself from thinking about things in order to enjoy them, how mental models get in the way of experiencing and feeling life, and living in the moment. I'd even argue that seeking one's own advantage in every situation is a symptom of bad health, a feeling of desperation and neediness, a fear of inadequacy.
Replies from: Viliam, cousin_it↑ comment by Viliam · 2023-11-04T16:07:27.797Z · LW(p) · GW(p)
That good things are born from surplus, and even from wasting this surplus (e.g. parties, festivals, relaxation). And that everything ugly in life stems from optimization (exploitation, manipulation, bad faith).
Yet I would think that there is some positive relation between "optimization" and "surplus". You can enjoy wine at the party, because someone spent a lot of time thinking how to produce and distribute it cheaply.
Replies from: StartAtTheEnd↑ comment by StartAtTheEnd · 2023-11-05T08:41:57.292Z · LW(p) · GW(p)
That's true. But somebody who optimizes excessively would consider it irrational to purchase any wine. This was just an example, it's not very valuable on its own, but if you generalize the idea or isolate the mechanism which leads to it, I think you will find that it's rather pervasive.
To illustrate my point differently: Mass-producing cubes is much more efficient than building pretty housing with some soul and aesthetic value. So optimization is already, in some sense, in conflict with human values. Extrapolating the current development of society, I predict that the already lacking sense of humanity and realness is going to disappear completely. You may think that the dead internet theory and such are unintended side effects that we will deal with in time, but I believe that they're mathematically unavoidable consequences. "Human choice" and "optimal choice" go in different directions, and our human choices are being corrected in the name of optimization and safety.
Being unoptimal is almost treated as a form of self-harm nowadays, but a life which is not genuine is not life at all, in my eyes. So I'm not deriving any benefits from being steered in such a mechanical direction (I'm not accusing you of doing this).
comment by GoteNoSente (aron-gohr) · 2023-11-04T21:22:55.241Z · LW(p) · GW(p)
I do not see why any of these things will be devalued in a world with superhuman AI.
At most of the things I do, there are many other humans who are vastly better at doing the same thing than me. For some intellectual activities, there are machines that are vastly better than any human. Neither of these stops humans from enjoying improving their own skills and showing them off to other humans.
For instance, I like to play chess. I consider myself a good player, and yet a grandmaster would beat me 90-95 percent of the time. They, in turn, would lose on average 8.5-1.5 in a ten-game match against a world-championship-level player. And a world champion will lose almost all of their games against Stockfish running on a smartphone. Stockfish running on a smartphone, in turn, will lose most of its games against Stockfish running on a powerful desktop computer or against Leela Chess Zero running on something that has a decent GPU. I think those opponents would probably, in turn, lose almost all of their games against an adversary that has infinite retries, i.e. one that can target and exploit weaknesses perfectly. That is how far I am away from playing chess perfectly.
And yet, the emergence of narrow superintelligence in chess has increased and not diminished my enjoyment of the game. It is nice to be able to play normally against a human, and to then be able to find out the truth about the game by interactively looking at candidate moves and lines that could have been played using Leela. It is nice to see a commented world championship game, try to understand the comments, make up one's own mind about them, and then explore using an engine why the alternatives that one comes up with (mostly) don't work.
If we get superintelligence, that same accessibility of tutoring at beyond the level of any human expert will be available in all intellectual fields. I think in the winning scenario, this will make people enjoy a wide range of human activities more, not less.
comment by jmh · 2023-11-03T20:46:51.466Z · LW(p) · GW(p)
I kind of understand where that sentiment comes from but I do think it is "wrong". Wrong in the sense that it is neither a necessary position to hold nor a healthy one. There are plenty of things I do today in which I get a lot of satisfaction even though existing machines, or just other people, can do them much better than I can. The satisfaction comes from the challenge to my own ability level rather than some comparison to something outside me -- be it machine, environment or another person.
comment by AnthonyC · 2023-11-07T13:31:59.976Z · LW(p) · GW(p)
To me it sounds like you're dividing possible futures into extinction, dystopia, and utopia, and noticing that you can't really envision the latter. In which case, I agree, and I think if any of us could, we'd be a lot closer to solving alignment than we actually are.
Where my intuition cuts differently is that I think most seemingly-dystopian futures, where humans exist but are disempowered and dissatisfied with our lives and the world, are unstable or at best metastable, and will eventually give way to one of the other two categories. I'm sure stable dystopias are possible, of course, but ending up in one seems like it would require getting almost all of the steps of alignment right, but failing the last step of the grail quest.
Yes, this means I think most of the non-extinction futures you're considering are really extinction-but-with-a-longer-delay-and-lots-of-suffering futures. But I also think there's a sizable set of utopian-futures-outside-my-capacity-to-concretely-envision such that my P(doom) isn't actually close to 100%.
comment by Stephen Fowler (LosPolloFowler) · 2023-11-04T03:47:13.108Z · LW(p) · GW(p)
I predict most humans choose to reside in virtual worlds and possibly have their brain altered to forget that it's not real.
comment by Roko · 2023-11-05T10:33:55.621Z · LW(p) · GW(p)
You can just create personalized environments to your preferences. Assuming that you have power/money in the post-singularity world.
Replies from: KatjaGrace↑ comment by KatjaGrace · 2023-11-06T01:20:11.017Z · LW(p) · GW(p)
Assuming your preferences don't involve other people or the world
Replies from: Roko↑ comment by Roko · 2023-11-07T11:59:44.554Z · LW(p) · GW(p)
Most people, ultimately, do not care about something that abstract and will be happy living in their own little Truman Show realities that are customized to their preferences.
Personally I find The World to be dull and constraining, full of things you can't do because someone might get offended or some lost-purposes system might zap you. Did you fill in your taxes yet!? Did you offend someone with that thoughtcrime?! Plus, there are the practical downsides like ill health and so on.
I'd be quite happy to never see 99.9999999% of humanity ever again, to simply part ways and disappear off into our respective optimized Truman Shows.
And honestly I think anyone who doesn't take this point of view is being insane. Whatever it is you like, you can take with you. Including select other people who mutually consent.
comment by pathos_bot · 2023-11-03T23:29:56.700Z · LW(p) · GW(p)
It really is. My conception of the future is so weighed by the very likely reality of an AI transformed world that I have basically abandoned any plans with a time scale over 5 years. Even my short term plans will likely be shifted significantly by any AI advances over the next few months/years. It really is crazy to think about, but I've gone over every single aspect of AI advances and scaling thousands of times in my head and can think of no reality in the near future not as alien to our current reality as ours is to pre-eukaryotic life.
comment by Mitchell_Porter · 2023-11-03T06:18:08.088Z · LW(p) · GW(p)
My taxonomy of possible outcomes is x-risk (risk of extinction), s-risk (risk of suffering), w-risk (risk of a "weirdtopia"), and success. It seems like what you are worried about is a mix of s-risk and w-risk, maybe along lines that no-one has clearly conceptualized yet?
Replies from: Raemon↑ comment by Raemon · 2023-11-03T06:47:24.599Z · LW(p) · GW(p)
I mean there’s also like ‘regular ol’ (possibly subtle) dystopia?’ Like, it might also be a weirdtopia but it doesn’t seem necessary in the above description. (I interpret weirdtopia to mean ‘actually good, overall, but in a way that feels horrifying or strange’. If the replacements for friendship etc aren’t actually good, it might just be bad)
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2023-11-03T07:01:03.067Z · LW(p) · GW(p)
I interpret weirdtopia to mean ‘actually good, overall, but in a way that feels horrifying or strange’
This could be a reason for me not to call it a "w-risk". But this also highlights the slippery nature of some of the boundaries here.
My central idea of a w-risk and a weirdtopia, is that it's a world where the beings in it are happy, because it's being optimized/governed according to their values - but those values are not ours, and yet those beings are us, and/or our descendants, after being changed by some process to which we would not have consented beforehand, if we understood its nature.
On the other hand, your definition of weirdtopia could also include futures in which our present values are being satisfied, "but in a way that feels horrifying or strange" if it's described to us in the present. So it might belong to my fourth category - all risks successfully avoided - and yet we-in-the-present would reject it, at least at first.
comment by martinkunev · 2024-09-01T14:21:21.361Z · LW(p) · GW(p)
Superhuman chess AI did not remove people's pleasure from learning/playing chess. I think people are adaptable and can find meaning. Surely, the world will not feel the same, but I think there is significant potential for something much better. I wrote about this a little on my blog:
https://martinkunev.wordpress.com/2024/05/04/living-with-ai/
comment by dirk (abandon) · 2024-08-19T07:52:17.781Z · LW(p) · GW(p)
If superhuman AI would prevent you from thinking, learning, or being proud of yourself, that seems to me like the result of some sort of severe psychological issue. I'm sorry that you have that going on, but... maybe get help?
comment by Review Bot · 2024-04-01T17:04:30.030Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by RogerDearnaley (roger-d-1) · 2023-11-12T04:50:15.071Z · LW(p) · GW(p)
I sort of agree. But there is clearly a potential flip side: we quite likely get to be post-scarcity on a lot of things (modulo fairness of distribution and absence of tyranny), including: customized art and stories, scientific and mathematical discoveries, medical care, plus any sort of economic goods that depend more on knowledge and technological inputs than material resources. So the video games will be awesome. We might even be somewhat less constrained on material resources, if there are rapid technological improvements in renewable energy, environmental remediation after mining, asteroid mining, or things like that.
comment by Michał Zdunek (michal-zdunek) · 2023-11-04T01:30:14.968Z · LW(p) · GW(p)
This makes an interesting point about scarcity. On one hand, it sucks to be limited in the amount of stuff you have. On the other hand, struggling through adversity or having scarce skills can give people a sense of meaning. We know that people whose skills are automated can suffer a lot from it.
I think that even once all of humans' cognitive skills can be replaced by AI, we will still be useful to one another. We will still relate to each other on account of our shared biological nature. I think that people will not for the most part have the same emotional relation with silicon-based intelligence as we have with other homo sapiens, because it will not share our emotions and flaws. I think a big part of why we like interacting with other people is because we can see ourselves in them, which we will not be able to do with AI. This is why I support developing AI capabilities, as long as we can keep control.
comment by StartAtTheEnd · 2023-11-03T20:11:28.060Z · LW(p) · GW(p)
Thanks for writing this. I fully agree, by the way.
Anything like a utopia requires a form of stability. But we're going to speed everything up, by a lot. Nothing can change and yet remain the same. And I think it's silly to assume that optimization, or simply just improving things, can co-exist with things remaining as they are. We're necessarily something intermediate, so speeding things up doesn't seem like a good idea.
Furthermore, it seems that slowing down technological advancement is almost impossible, and that keeping people from making optimal choices for themselves is almost impossible (even if they know it's harmful to the whole). And there are two further dangers here: the first is technology completely removing our freedoms and privacy, and the second is technology being used to manipulate human behaviour, erasing our humanity. This second point is almost guaranteed, as human well-being is not possible in a world without freedom, agency, and privacy; these are core needs. So why not effectively turn the population into drones by modifying them? We're already doing this with Adderall and SSRIs, and since it makes people suffer less, most people support it.
That first possibility, technology taking away our agency, freedoms, and privacy, is basically guaranteed as well. It has actually already happened; it will just get much worse. This is because technology makes it easier to cause damage, and while it also makes it easier to defend against and prevent damage, there's an inequality between the two growth rates.
All the increases in mental health issues are a direct consequence of modern life, so it's also extremely naive to assume that more technology and more "progress" is going to fix it. Why eliminate struggle? Even struggle is essential to life and well-being! Are intellectuals really so disconnected from reality that they don't realize this?
comment by Quadratic Reciprocity · 2023-11-03T19:06:48.906Z · LW(p) · GW(p)
In my head, I've sort of just been simplifying to two ways the future could go: human extinction within a relatively short time period after powerful AI is developed or a pretty good utopian world. The non-extinction outcomes are not ones I worry about at the moment, though I'm very curious about how things will play out. I'm very excited about the future conditional on us figuring out how to align AI.
I'm curious about, for people who think similarly to Katja, what kind of story are you imagining that leads to that? Does the story involve authoritarianism (though I think even then, a world in which the leader of one of the current leading labs has total control and a superintelligent AI that does whatever they want is probably much, much more fun and exciting for me than the present, and I like my present life!)? Does it involve us being only presented with pretty meh options for how to build the future because we can't agree on something that wholly satisfies everyone? Does it involve multi-agent scenarios with the AIs or the humans controlling the AIs being bad at bargaining, so we end up with meh futures that no one really wants? I find a bunch of stories pretty unlikely after I think about them, but maybe I'm missing something important.
This is also something I'd be excited to have a Dialogue with someone about. Maybe just fleshing out what kind of future you're imagining and how you're imagining we end up in that situation.
comment by nim · 2023-11-03T16:21:55.532Z · LW(p) · GW(p)
writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn’t conscious
Used to be, we enjoyed doing those things ourselves through a special uniquely-human flavor of being clever.
Seems like post-AI, we'll get to those things through a special uniquely-human flavor of being un-clever.
It doesn't make sense to judge human accomplishment and effort differently from that of AI -- but humans are great at choosing to do things which don't make sense when it suits us to. Living a generally enjoyable life despite not being particularly good at things compared to the other entities around us is a skill rather antithetical to this community's values, but it does exist in plenty of cultures of the human population. Call it instrumental irrationality, perhaps.
Those of us who've pinned our self-worth on being better at computers than the computers are, though, are in for a bad time of a magnitude that may warrant replacing that entire part of the worldview to reach happiness.
comment by jedharris · 2023-11-03T06:34:54.566Z · LW(p) · GW(p)
The Sea of Faith
Was once, too, at the full, and round earth’s shore
Lay like the folds of a bright girdle furled.
But now I only hear
Its melancholy, long, withdrawing roar,
Retreating, to the breath
Of the night-wind, down the vast edges drear
And naked shingles of the world.
Ah, love, let us be true
To one another! for the world, which seems
To lie before us like a land of dreams,
So various, so beautiful, so new,
Hath really neither joy, nor love, nor light,
Nor certitude, nor peace, nor help for pain;
And we are here as on a darkling plain
Swept with confused alarms of struggle and flight,
Where ignorant armies clash by night.
comment by Gesild Muka (gesild-muka) · 2023-11-03T15:33:40.935Z · LW(p) · GW(p)
Even if we don’t die, it still feels like everything is coming to an end.
Everything? I imagine there will be communities/nations/social groups that completely ban AI and those that are highly dependent on AI. There must be something between those two extremes.
Replies from: cousin_it↑ comment by cousin_it · 2023-11-04T10:50:32.531Z · LW(p) · GW(p)
This is like saying "I imagine there will be countries that renounce firearms". There aren't such countries. They got eaten by countries that use firearms. The social order of the whole world is now kept by firearms.
The same will happen with AI, if it's as much a game changer as firearms.
Replies from: gesild-muka↑ comment by Gesild Muka (gesild-muka) · 2023-11-04T15:00:27.960Z · LW(p) · GW(p)
I think I understand, we're discussing with different scales in mind. I'm saying individually (or if your community is a small local group) nothing has to end but if your interests and identity are tied to sizeable institutions, technical communities etc. many will be disrupted by AI to the point where they could fade away completely. Maybe I'm just an unrealistic optimist, I don't believe collective or individual meaning has to fade away just because the most interesting and cutting edge work is done exclusively by machines.
comment by sapphire (deluks917) · 2023-11-03T19:11:57.083Z · LW(p) · GW(p)
All things arise and all things pass away.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-11-05T06:08:24.323Z · LW(p) · GW(p)
that seems like saying "alignment will not be solved" to me.