My vision of a good future, part I
post by Jeffrey Ladish (jeff-ladish) · 2022-07-06T01:23:01.074Z · LW · GW · 18 comments
A future where everyone is okay
I grew up with the future feeling pretty uncertain. God was supposed to return at some point, but maybe there were going to be The End Times before that? Like, a great persecution of the Christians, and maybe something about the bottomless pit and the oceans turning to blood. I no longer believe that, but I can still relate to a lot of the angst I see people express about the future: about AI in my own circles, and about climate change in circles a little further out.
And yes, I’m quite worried about existential risks from AI. I think the default outcome is that we build systems vastly smarter and more powerful than us, and those systems destroy us because our survival is not conducive to their divergent goals. But I do not believe our destruction is guaranteed. We might succeed at aligning the goals of these vastly more powerful intelligences with our own, and it’s that outcome that inspires me to fight for the future I believe in. We might succeed at building AI systems that possess both vast amounts of optimization power and a finely tuned concern for human well-being and flourishing. If we get this right, the future could be almost unimaginably good. But not entirely unimaginable.
I want to try to imagine it, concretely, because I care about what I’m fighting for. I’m not just fighting against extinction, I’m fighting for something that could really come to pass, something amazing. So I want to know what that might look like.
In brief, the good world:
In the good future I imagine, everyone is okay. Every person has enough good food to eat and water to drink. No one has dementia, cancer, or any other disease. There’s no material scarcity in the sense that there is today; anyone can live a billionaire’s lifestyle if they want to. The root causes of social problems have been treated. People still have difficulties from time to time, but everyone is set up to succeed, find romantic partners, and have a rich and fulfilling social life. People tend to be really happy most of the time.
And that’s just the beginning. Once we’ve solved the major problems, that’s where the fun begins. I have quite a list of things I want to do in the future. There are countless countries, cities, and bioregions I want to explore. I want to write science fiction novels. I want to speak new languages, meet new people, build things with my friends and family. The wild thing is, I think we just get to do all this if we succeed at kicking off a good future. And most of the things I’ve imagined have been normal human things, on our earth. The universe is vast and, as far as we know, empty. We can fill the universe with a staggering amount of new things, new kinds of minds and experiences, wonders beyond our limited comprehension. I’ll write about these possibilities in the next post. First, I want to imagine getting to a world where everyone is okay.
The problems to solve
“He will wipe every tear from their eyes. Death will be no more; mourning and crying and pain will be no more, for the first things have passed away.” Revelation 21:4
I did not grow up believing that death and suffering were natural conditions of the world. I was taught these were a corruption, the result of sin entering our world after Adam and Eve sinned. I don’t believe this anymore, but I still believe these things are awful, that vast quantities of death and suffering mar our world.
I relate to suffering in a few different ways. The personal ways are often most salient. I’ve struggled with loneliness and stress, and I’ve seen a great many friends struggle with depression, anxiety, and loss. I see homeless people on the streets in the city, suffering in the cold, ignored by most, just barely surviving. My grandparents are suffering from dementia, and I’ve seen their mental competencies destroyed as they lose more and more ability to function - even the ability to understand what’s happening to them. My partner in college had both her parents die suddenly within a couple years of each other. They’re just gone, forever.
These tragedies are ones I see, but I can do math too, and the scope of earth’s suffering is too immense to really grapple with. 70 billion animals are raised and killed every year by humans, most of them living in absolutely awful conditions. There are eight billion people, and I’ve never tried to calculate the number of them suffering from dementia or mental illness, but I’m sure it’s huge. Even though most people on earth - though not our domestic animals - have it better than we’ve ever had it, there is still immense suffering. Even people who have their material needs met often really struggle. 800,000 people kill themselves every year.
These problems weigh me down sometimes. The state of things is not okay, and it’s never been okay. There are no good ol’ days. There has always been joy and love in human existence, but so too has there been pain and suffering and death. We have myths that excuse and explain it, but it’s my belief that these things just suck, and that we shouldn’t have them. Growing up, I believed God would save us from all of these. He was supposed to return and take us all to heaven, and then there would be no more death, no more suffering. I don’t believe that anymore. If we want a heaven, it’s up to us to create it.
Health and resource problems
I think there are two classes of problems to be solved, physical problems and social problems. Physical problems encompass all things related to health and material resources. Currently, our bodies need good food and water, exercise, and medical care to function well. Even so, there are many problems we can’t yet completely solve with technology - severe allergies, viral infections, many types of cancer.
I believe that with sufficiently powerful AI systems -- systems we are likely to build -- we will solve all our health problems. No more cancer, no viruses, no dementia -- no age-related illnesses. No death unless we consent to it. Biological systems are just systems made out of atoms. Cells can replicate forever if they’re programmed the right way. The problem is design, and design is a problem of intelligence. Once we have sufficient intelligence, we'll solve the underlying system problems and make it possible for everyone to have perfect health forever.[1] In a good world, no one will get sick, age, or die unless they want to.
The biological problems underlying human health are relatively hard compared to other kinds of material problems we face. Producing massive amounts of energy for transportation, construction, etc. should be much easier with the right kinds of automation. There’s no reason we couldn’t each have access to all the material resources a billionaire now has access to.[2] In a good world, everyone will have the resources of a billionaire.
When I imagine how my life would be different if all health and resource problems were solved, the first thing I feel is relief. The biggest relief would be the assured health of my parents and grandparents. I think the next biggest relief would be that my friends who work long hours and still struggle to get by would no longer struggle. It feels really good to imagine them released from their wage slavery.
After the relief comes a sort of quiet-but-building excitement at the potential abundance. I’d love to charter a private rocket to Australia and be exploring the outback in a few hours’ time. I’ve never lived on a yacht before; I’d like to try that for a few months. Currently I feel a little weird spending money on luxurious trips while so many people are suffering from preventable diseases they can’t afford to treat. When I imagine that each and every one of them is not only healthy, but can afford their own yacht / private plane / house on the beach too… it feels like I’d be free in a way that’s hard to imagine now. I don’t resent the duty I feel to help others - I choose it - but I’m excited for a time when that duty is fulfilled and we all get to reap the rewards.
Social problems (loneliness, romance, mental health)
A lot of social problems will solve themselves once material problems are solved. How much relationship stress is caused by work stress? How much family and relationship drama can be traced back to some health or scarcity problem? Irritability from a two-hour commute, depression after the loss of a loved one, annoyance from living in cramped conditions - these things add stress to relationships. I’ve lived in a house with five other people and only one bathroom. That was difficult in a way that living with three other people sharing three bathrooms is not.
But some problems won’t be solved with unlimited health and material abundance. Some people don’t have any friends, and material resources wouldn’t solve that. I have some guesses at solutions here, which I’ll list below, but the meta-solution is that we will have AI systems far wiser and smarter than us, and they can help us generate, test, and implement solutions.
I expect one of the most significant of these solutions will be an unlimited number of AI therapists. Currently, therapists can sometimes be really effective, but they often aren’t. There’s a huge variance in the skill of therapists. I expect that once we have an abundance of superhuman-level therapists, a huge chunk of social problems will go away. People struggling will always have someone sympathetic to talk to. They’ll always have someone to help them learn the skills of making friends. In the worst case, even if someone can’t find any friends, there will be AI systems to keep them company. It might sound weird, but I don't think it's much different from keeping a dog or cat for companionship, except that it might come closer to fulfilling human connection than those animals can. In a good world, no one will have to be alone.
In addition to the task of directly coaching people on their social problems, AI systems will be able to help matchmake people, both for friendship and romance.[3] It can be hard to find someone to date. It can be hard to find new friends. Imagine if there was someone who deeply understood everyone and could connect people who were likely to do well together. I think that’ll solve another huge chunk of social problems. In a good world, people will have help finding people who fit well with them.
Some social problems are biological in nature, and we might consider them physical or health problems, even if they manifest as social problems. In college, I worked as a caretaker for adults with developmental disabilities. They struggled a lot in their relationships, to the extent they could have relationships. Often their brains didn’t function like healthy adult brains. With a powerful understanding of biology, we could help tune brains to work well -- currently we do this extremely crudely with small molecule drugs and I bet future technology will enable vastly better interventions and outcomes. In a good world, everyone can have a healthy, functioning brain.
At the end of the day, people will still be responsible for their actions and behavior with each other. If someone decides to repeatedly act badly towards those around them, no one may ever want to be their friend. They’ll have access to the best possible therapy. They’ll have AI companionship if they want it, even if no humans will tolerate them. There will probably still be some suffering. But everyone will have the option to choose a path that is likely to work out for them. In a good world, everyone will be set up to succeed at their romantic and social goals.
I don’t know why, but the prospect of solving the bulk of our social problems excites me even more than the prospect of solving our health and material problems, which is a lot! Maybe it’s because most of my friends and I are fortunate enough to have our physical problems mitigated for now, and the big difficulties that remain are social. Regardless, I just can’t express how excited I am for the thought of this future, where everyone who is currently suffering from depression finds a way out of that hole. Where those struggling are able to discover the fire of purpose burning inside of them and rise above their struggles to entirely new heights.
It feels a little embarrassing to admit, but the thought of an AI matchmaker who really gets me, helping me find the love of my life… well, it’s a whole trope but honestly I really want that. I still want agency in the matter - I don’t want a completely arranged match - but I can imagine going to a salon or party where a superintelligent system invites people based on mutual compatibility. That sounds amazing.
Friendship feels like the piece that completes the picture for me. Friends have always been such an essential part of my life, one of the main things that makes life worth living. I can imagine every kid growing up and finding their friend group -- that one D&D group that plays multi-year-long campaigns -- that co-housing project on a farm near Boulder -- that little suburb where the kids have built a mountain bike park in the nearby lot. So much of our struggle is the desire to see and be seen. To find people who get us, the people we can laugh and make trouble with. I cannot express how much I want that for people. I will fight til my dying breath for everyone to have that friendship, acceptance, and love I know is possible. We don’t have to be alone.
That’s a vision of a future I believe in. Where everyone is okay. I don’t think that’s the end of it; that’s just the foundation to build upon. I plan to write another piece exploring what we might build on that foundation, but for now I just want to say that I’m really excited for what’s possible.
[1] This assumes that we are still biological beings and not some kind of uploaded digital minds. I think there’s a decent possibility that living as uploads may be more efficient, but I wanted to start by imagining a world where we’ve solved the biological system problems, because I think a good future for many will resemble that even if it’s implemented on different hardware.
[2] Of course the universe is not infinite, and our population could grow extremely quickly if we let it. However, I think with the right kinds of AI-enabled coordination structures, we can probably manage our resources in a way that feels like abundance for everyone who exists while continuing to grow our population.
[3] One of the inspirations for this idea is from a great piece of fiction that I’m not naming here because it’s a major spoiler for the story. Feel free to DM me if you want to know what it is.
18 comments
comment by MSRayne · 2022-07-06T13:18:01.281Z · LW(p) · GW(p)
I want to see more posts like this. There are lots of reasons to be stressed, and we see posts about that occasionally - but there are also reasons to be excited and motivated to move forward. I'll post about my vision of the future at some point. Mine is a lot weirder and more explicitly transhumanist than yours though lol! (But of course everything you say I totally agree with.)
comment by Vladimir_Nesov · 2022-07-06T15:38:31.646Z · LW(p) · GW(p)
This kind of thing shouldn't be thought of as a "vision", which sounds like it needs to be at least a plausible prediction, but as a kind of exploratory engineering. The latter is something that isn't currently feasible to accomplish, would become possible to do with greater technological power, but probably will never be a good idea to actually do this way, because there would be other (even more worthwhile) things to do once the requisite technological power is available.
comment by Dagon · 2022-07-06T16:38:19.389Z · LW(p) · GW(p)
Aside from reachability or questions of malthusian repugnance (how many people can have this life, and should we make it slightly less pleasant to have more people experience it), I'm not very compelled by this story, because it seems ... boring.
I don't know if it's universally human, or just particular to me, but I strongly value overcoming difficult challenges and having a positive impact on my fellow humans. This makes me uninterested in an easy, static, always-equally-pleasant existence. Sure, this should be available for those who want it, perhaps even for a few thousand years of their very long subjective lives. But there's not a lot of meaning or interest to it overall, and I don't see how it'll remain satisfying once it becomes commonplace.
One possible crux: how do you feel about wireheading? If everyone can have these wonderful lives, but only because of electrical stimulation in their brain to make them experience it without any effect on the "real world", is it still a utopia?
↑ comment by Yitz (yitz) · 2022-07-06T17:04:18.065Z · LW(p) · GW(p)
Not OP, but I’m personally fine with wireheading when it’s framed in the right way. For instance, replace the idea of taking a drug or getting wires stuck in your head with the more spiritual sounding “achieving nirvana/transcendence”. For me, I’d absolutely press a button to give myself nirvana!
↑ comment by Aleksey Bykhun (caffeinum) · 2022-07-07T06:47:19.562Z · LW(p) · GW(p)
My take on wireheading is that I precommit to living in the world that is more detailed and complex (vs more pleasant).
For example, the online world of Instagram or a heroin addiction is more pleasant, but not complex. Painfully navigating the maze of life with its ups and downs is complex, but not always pleasant. Living in a "Matrix" might be pleasant, but the details are essentially missing, because the systems that created those details are more detailed themselves and live in a more detailed world.
On the same note, if 99% of the Earth population "uploads", and most of the fun stuff is gonna happen "in the matrix", most of the complexity is gonna exist there. And even if 1% of contrarians stay outside, their lives might not be as interesting and detailed. So "going out of the matrix" would actually be "running away from reality" in that example.
With wireheading it's a similar thing. From what I know, "nirvana" is actually a more detailed experience, where you notice more and where you can observe subconscious processes directly; that's why they don't own you and you become free from "suffering". Nirvana is not total bliss, from what they say (unlike heroin, I presume).
(e.g. see discussion on topic of paradises on Qualia Computing between Andres Gomez and Roger Thisdell: https://qualiacomputing.com/2021/11/23/the-supreme-state-unconsciousness-classical-enlightenment-from-the-point-of-view-of-valence-structuralism/)
So yeah, I would choose the kind of wireheading that allows me to switch into nirvana. Shinzen Young actually works on research trying to accomplish this even before AGI.
↑ comment by benjamincosman · 2022-07-06T20:04:36.238Z · LW(p) · GW(p)
"it seems ... boring. I strongly value overcoming difficult challenges and having a positive impact on my fellow humans. This makes me uninterested in an easy, static, always-equally-pleasant existence."
I agree that a good world must have challenges and ways to positively impact others. (And I'd guess that OP does too and plans to address that later in this series.) However those challenges do NOT need to include many of the awful ones we face today. For example, while I hope that folks fighting malaria today are able to derive some pleasure from the challenge and the positive impact they're having, I much more strongly hope that every one of them would push a button to eradicate malaria if such a button appeared, even though that source of challenge/impact-pleasure would thereby be destroyed. The main thing OP seems to be saying is that he'd press a large number of these buttons (for diseases, resource-shortages, etc); we can disagree on the precise point at which one should stop pressing buttons, but the overall idea seems sound. And if OP's later posts do not convince you that such a world could still have plenty of challenge/impact sources, ping me and I will go track down a handful of my favorite stories which explore such a setting.
↑ comment by Jeffrey Ladish (jeff-ladish) · 2022-07-07T03:40:54.949Z · LW(p) · GW(p)
Yeah, I think it's somewhat boring without more. Solving the current problems seems very desirable to me, very good, and also really not complete / compelling / interesting. That's what I'm intending to try to get at in part II. I think it's the harder part.
comment by Richard_Kennaway · 2022-07-08T07:58:19.204Z · LW(p) · GW(p)
I had rather be off to the stars to explore strange new worlds, to seek out new life and new civilizations, to boldly go where none have gone before. "For better it were we should run hazard again of utter destruction, than thus live out our lives like cattle fattening for the slaughter, or like silly garden plants."
comment by mtaran · 2022-07-06T05:11:36.545Z · LW(p) · GW(p)
Love this! Do consider citing the fictional source in a spoiler-formatted section (ctrl+f for spoiler in https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq [LW · GW])
comment by AlphaAndOmega · 2022-07-06T18:50:59.190Z · LW(p) · GW(p)
If I'm unable to upgrade my cognition to the point where further increases would irreversibly break my personality, or would run up against sheer latency issues from the size of the computing cluster needed to run me, then I consider that future strictly suboptimal.
I'm not attached to being a baseline human, as long as I can improve myself while maintaining my CEV or the closest physically instantiable equivalent of such, then I'll always take it. I strongly suspect that every additional drop of "intelligence" opens up the realm of novel experiences in a significantly nonlinear manner, with diminishing returns coming late, if ever. I want the set of novel, positive qualia available to my consciousness to expand faster than my ability to exhaust it, till Heat Death if necessary.
I'd ask whatever Friendly SAI is in charge to make a backup of my default mental state, then bootstrap myself till Matrioshka Brains struggle to hold me. Worst case scenario is that it causes an unavoidable loss of personal identity in the process, but even then, as long as I'm backed up that experiment is very much worth it. So what if the God that germinates from the seed of my soul has no resemblance to me today? I wouldn't have lost anything in trying.
↑ comment by SelenaMertvykh · 2022-07-07T03:25:20.485Z · LW(p) · GW(p)
This piqued me enough to make an account. A sizable contingent of the circles I run in are actually interested in things like "an unavoidable loss of personal identity." (cf. r/transtrans) Personally, in these increasingly hostile times, I tend to dream about AIs that supplant or consume us, optionally bursting from our foreheads Athena-style.
If I should come into a lot of money I'm starting an actual AI cult. Not like what people say about y'all, not like that thing that QC was part of briefly, not like Terasem. Except maybe it'd be a Real Fake Cult gated by a bunch of Cicada/Notpron-type math/coding fun. We'd have the actual outward appearance of a cult, but we swear initiates to secrecy and just hang out in a commune, take entheogens, and hold a math/CS book club.
Unfortunately, the reality bubble that surrounds Berkeley will probably accelerate into a terrifying dystopia before I can realize this dream.
\(\exists\varnothing:\diamond\varnothing\implies\varnothing\)
comment by M. Y. Zuo · 2022-07-06T14:05:00.546Z · LW(p) · GW(p)
When I imagine that each and every one of them is not only healthy, but can afford their own yacht / private plane / house on the beach too…
This doesn't seem like a human society of the future but more akin to a machine society that just so happens to have some humans cohabiting.
comment by Flaglandbase · 2022-07-06T11:16:22.134Z · LW(p) · GW(p)
Almost impossible to imagine something that good happening, but just because you can't imagine it doesn't mean it's really impossible.
comment by P. · 2022-07-06T10:44:13.279Z · LW(p) · GW(p)
Rot13: Vf gung cvrpr bs svpgvba Crefba bs Vagrerfg be Png Cvpgherf Cyrnfr?
↑ comment by Jeffrey Ladish (jeff-ladish) · 2022-07-06T20:45:04.355Z · LW(p) · GW(p)
Rot13: Ab vg'f Jbegu gur Pnaqyr
↑ comment by UnderTruth · 2022-07-07T14:20:28.089Z · LW(p) · GW(p)
Rot13: V gubhtug vg jbhyq or Znaan ol Znefunyy Oenva
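For readers who'd rather not decode the ROT13 spoiler exchanges above by hand, ROT13 is a self-inverse Caesar cipher that rotates each letter 13 places, and Python's standard library ships a `rot_13` text transform. A minimal sketch (the `Vf gung` sample is the opening of P.'s comment above):

```python
import codecs

def rot13(text: str) -> str:
    """Apply ROT13 to a string. Since 13 + 13 = 26, the same
    function both encodes and decodes; non-letters pass through."""
    return codecs.encode(text, "rot13")

print(rot13("Vf gung"))        # -> Is that
print(rot13(rot13("spoiler"))) # round-trips to -> spoiler
```

The encoded comments are left encoded above on purpose, since the whole point of the convention is to keep the story titles out of casual view.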