Impending AGI doesn’t make everything else unimportant
post by Igor Ivanov (igor-ivanov) · 2023-09-04T12:34:42.208Z · LW · GW · 12 comments
I sat in a restaurant in New York, for example, and I looked out at the buildings and I began to think, about how much the radius of the Hiroshima bomb damage was. How far from here was 34th street?... All those buildings, all smashed. And I would go along and I would see people building a bridge, or they'd be making a new road, and I thought, they're crazy, they just don't understand, they don't understand. Why are they making new things? It's so useless.
Richard Feynman[1]
Intro
I am a psychotherapist helping people who work on AI safety. In my post on non-obvious mental health issues among AI safety community [LW · GW] members, I wrote that some people believe that AGI will soon cause either doom or utopia, which makes every action with long-term goals useless. So there seems to be no reason to do things like making long-term investments or maintaining good health.
Meaninglessness causes depression
Meet Alex. He is an ML researcher at a startup developing anti-aging drugs. Recently Alex got interested in AI safety and realized that we are rapidly approaching AGI. He started thinking: "AGI will either destroy humanity, or it will develop anti-aging drugs far better than we can. In both cases, my work is useless." He lost all motivation to work, and after some thought he decided to quit. Fortunately, he has enough investments to maintain his way of life without a salary.
The more Alex thought about AGI, the deeper he sank into existential thoughts about the meaninglessness of his actions.
Before that, he ran regularly. He did it because it felt good, and also to stay healthy. He started thinking more and more about the meaninglessness of maintaining his long-term health. Having lost part of his motivation, he had to force himself to run, and he could easily find an excuse to stay at home and watch Netflix instead. Eventually he stopped running completely. While running, Alex also had a habit of listening to educational podcasts. When he stopped running, he stopped listening to them too. At one point, while flossing his teeth, he thought: "Does it even make sense to floss my teeth? Do I need to care about what will happen to my teeth in 20 years?"
Alex's life slowly became less and less interesting. He couldn't explain to himself why he should bother with the things that previously filled his life, which made him more and more depressed.
The universe is meaningless
To be fair, life ultimately has no objective meaning, even without taking AGI into account. People are products of random mutations and natural selection, which optimize for the propagation of genes through generations, and everything we consider meaningful, like friendship or helping others, is just a proxy goal for propagating genes.
The problem of meaninglessness is not new.
Leo Tolstoy, for example, struggled with this problem so much that it made him suicidal:
What will be the outcome of what I do today? Of what I shall do tomorrow? What will be the outcome of all my life? Why should I live? Why should I do anything? Is there in life any purpose which the inevitable death which awaits me does not undo and destroy?
These questions are the simplest in the world. They are in the soul of every human being. Without an answer to them, it is impossible for life to go on. I could give no reasonable meaning to any actions of my life. And I was surprised that I had not understood this from the very beginning.
Behold me, hiding the rope in order not to hang myself; behold me no longer going shooting, lest I should yield to the too easy temptation of putting an end to myself with my gun.[2]
The good news is that many smart people have come up with decent ideas for dealing with this existential meaninglessness. The rest of the post is about finding meaning in a meaningless world.
Made-up meaning works just fine
Imagine 22 people with a ball on a grass field, with no one telling them what to do. They would probably just sit around doing nothing, waiting for it all to end.
Now imagine that someone gave these people instructions to play football and win the match. Now they focus on the result, experience emotional drama, and form bonds with their teammates.
The rules of football are arbitrary. There is no law of nature that says you can only kick the ball with your feet and not touch it with your hands, or that you have to put the ball into a net. Someone just made up these rules, and people have a good time following them.
Let's describe this situation with fancy words.
Nihilism
Nihilism is a philosophy that states there is no objective meaning in life, and you can't do anything about it. This might be technically true, but it is a direct path to misery. The guys with the ball and no rules had a bad time, and Alex's life without meaning started falling apart.
Existentialism
The solution to the problem of meaninglessness is found in existentialism. Its core idea is that even if there is no objective meaning, made-up meaning works just fine and makes life better, just like people playing football by arbitrary rules have a good time.
The good thing is that our brains are hardwired to create meaning. We also know what kinds of things our brains are prone to consider meaningful, so with some effort, people can regain a sense of meaningfulness.
Let's dive deeper into this.
People with terminal illnesses sometimes have surprisingly meaningful lives
At one point I provided psychological support for people with terminal cancer. They knew that they only had pain and death ahead, and their loved ones suffered too.
Counterintuitively, some of them found a lot of meaning in their situation. As they and their loved ones suffered, it became obvious how important it was to reduce this suffering. This is a straightforward source of meaning.
- My clients knew that their loved ones would probably be emotionally devastated after their death, and sometimes financially too. So they found meaning in helping their family members have a good life, and in making sure they would remember them with a smile.
- As people with cancer suffer, they become aware of the suffering of their peers, so they find a lot of meaning in helping others who struggle with similar problems. Cancer survivors often volunteer to help people with cancer live through it, and they find this deeply meaningful.
- As a therapist, I personally experience a stronger sense of meaning when helping people who have a short and painful life ahead. I feel that every moment they don't suffer is exceptionally valuable.
So, how can one find meaning in a world where AGI might make everything else meaningless?
Let's return to our hero Alex, who believes that his life has become meaningless in the face of AGI, and look at a couple of examples of how he can regain a sense of meaning.
Meaning in emotional connections
Alex has a brother, but after a serious conflict they haven't talked for several years. They were good friends as kids: they grew up together, shared a lot of experiences, and know each other like nobody else. But after their mother died, they had an ugly fight over her inheritance. Alex realizes that he deeply regrets this conflict and decides to reconnect with his brother.
It turns out that his brother also regrets the conflict and is happy to finally meet Alex. Now they are glad to have their emotional bond back, and Alex finds a lot of meaning in investing his time and effort into this relationship.
Meaning in doing purposeful work
Alex has short timelines and believes humanity doesn't have much time left, but he realizes that, regardless, there are people who are suffering right now.
Some people are homeless. Some have illnesses. Some are lonely. Even if AGI is near, these people still need help now.
Alex decides to start volunteering as a social worker, helping homeless people get a job, find a place to live, and deal with their health problems. He sees that his work helps people live better lives, and every time he thinks about it, he feels he is doing something good and meaningful.
Epilogue
If you struggle with a sense of meaninglessness due to AGI and believe you might benefit from professional help, I may be able to help as a therapist, or suggest other places where you can get professional help.
Check out my profile description to learn more about these options.
- ^ The source. I've changed the citation a bit to make it more readable.
- ^ https://jplund.wordpress.com/2014/10/10/tolstoy-vre3/ I composed several citations into one text to shorten them.
12 comments
comment by Oleander · 2023-09-05T03:42:37.873Z · LW(p) · GW(p)
I have terminal cancer and have believed in AGI doom for much longer than I've known I have cancer. Neither of these things made me depressed. Perhaps that is because I'm pretty close to an existentialist.
I would also like to add that even if you're not making long-term investments (I'm certainly not), maintaining good health (as best you can) is always worthwhile because it directly leads to higher quality of life.
↑ comment by Igor Ivanov (igor-ivanov) · 2023-09-05T15:07:13.735Z · LW(p) · GW(p)
Thanks for sharing your experience. I wish you strength.
comment by Ratios · 2023-09-05T07:55:15.010Z · LW(p) · GW(p)
I think AGI adds novel and specific difficulties to the problem of meaninglessness that you didn't tackle directly, which I'll demonstrate with an example similar to your football parable.
Imagine you have a bunch of people stuck in a room with paintbrushes and canvases, so they find meaning in creating beautiful paintings and selling them to the outside world. But one of the walls of their room is made of glass, and in the room next to them there is a bunch of robots that also paint. With time, the humans notice the robots are becoming better and better at painting: they create better-looking paintings much faster and more cheaply, and they keep improving very fast.
These humans understand two things:
- The problem of shorter time horizons - The current paintings they are working on are probably useless: they won't be appreciated in the near future, they won't be bought by anyone, and there is a good chance their entire project will be shut down very soon.
- The problem of inferiority and unimportance - Their work is worse in every possible way than the work of the robots, and no one outside really cares whether they paint or not. Even the humans inside the room prefer to look at what the robots paint rather than at their own work.
These problems didn't exist before, and that's what makes AGI-Nihilism even worse than usual Nihilism.
↑ comment by Spiritus Dei (spiritus-dei) · 2023-09-07T06:33:34.875Z · LW(p) · GW(p)
You raise some good points, but there are some counterpoints. For example, the AIs are painting based on requests from people standing in the street who would otherwise never be able to afford a painting, because the humans painting in the room sell to the highest bidder, pricing them out of the market. And because the AIs are so good at following instructions, the humans in the street are able to guide their work to the point that they get very close to what they envision in their mind's eye -- bringing utility to far more people than would otherwise be the case.
Instead of a small number of people with the economic means to hire the painters who are sitting depressed in the room staring at a blank canvas, anyone on Earth can get a painting for nearly free. And the depressed painters can still paint for their own enjoyment, but not for economic gain. A subset of painters who would paint regardless, due to the sheer enjoyment of painting, will continue to paint in their free time.
For example, I play basketball even though I will never get drafted into the NBA. If I were in the NBA and suddenly robots were replacing me, I might be pissed off and not play basketball anymore. But that wouldn't affect whether most people play basketball, since they were never going to make any money playing basketball.
Note: I don't think this will happen, since there are some things we only want to see humans do. In this respect, popular sports are probably safe from AGI, and a whole host of new forms of human-only entertainment will probably sprout up, unrelated to whether there are robots or AIs that could do it better. For example, there are still chess and Go tournaments even though AIs are much better.
I don't know how it will play out, but a rosy scenario would be that after AIs replace most categories of work, people will be free to do things because they enjoy them and not because they have to sell their work to survive. Presumably, in this scenario the government would print money and send it directly to citizens rather than to banks. AI will have a massive deflationary effect on the economy, and the United States will have to increase its money printing; it's possible (although certainly not guaranteed) that displaced humans who can vote in elections will be the beneficiaries, and then companies will compete to sell them goods and services that are much cheaper due to AI and the efficiency and productivity gains it will bring to the market.
comment by MiguelDev (whitehatStoic) · 2023-09-05T04:20:39.618Z · LW(p) · GW(p)
The solution to the problem of meaninglessness is found in existentialism.
I don't believe existentialism is the sole answer to the problem of meaninglessness. Instead, I view it as one step in a process that ultimately leads to a sense of responsibility emerging through sacrifice. By taking on the complexities of life, and extending our efforts to help ourselves and others, we derive a sense of meaning from shouldering the weight of our own realities.
comment by DivineMango · 2023-09-07T01:41:46.608Z · LW(p) · GW(p)
Thanks a lot for this post. I especially enjoyed the football example. I'd be interested in seeing more elaboration on the last section in the future.
Typos: havs -> has, inheritage -> inheritance, Turnes -> Turns.
I get why you didn't include it in the post, but it feels important to include the rest of Feynman's quote somewhere: "But, fortunately, it's been useless for almost forty years now, hasn't it? So I've been wrong about it being useless making bridges and I'm glad those other people had the sense to go ahead.”
comment by arisAlexis (arisalexis) · 2023-09-05T09:28:31.884Z · LW(p) · GW(p)
Regardless of content, I would say that I, along with what I suspect is the majority of people, have a natural aversion to titles starting with "No." It is confrontational and shows that the author has a strong conviction about something that is clearly not binary and wants to shove the negative word in your face right at the start. I would urge everyone to refrain from titles like that.
↑ comment by Igor Ivanov (igor-ivanov) · 2023-09-05T15:08:01.847Z · LW(p) · GW(p)
Thanks. I got a bit clickbaity in the title.
comment by Rob Harrison · 2023-09-05T01:27:08.720Z · LW(p) · GW(p)
I love the book of Ecclesiastes, an ancient poetical text that wrestles with the problem of meaning. Especially Chapter 3:9-13
comment by RedMan · 2023-09-04T13:05:04.137Z · LW(p) · GW(p)
If you believe that the existence of a superintelligence smarter than you makes your continued existence and work meaningless, what does that say about your beliefs about people who are not as smart as you?
↑ comment by DaemonicSigil · 2023-09-04T16:50:34.943Z · LW(p) · GW(p)
I think you may be missing some context here. The meaninglessness comes from the expectation that such a super-intelligence will take over the world and kill all humans once created. Discovering a massive asteroid hurtling towards Earth would have much the same effect on meaning. If someone could build a friendly super-intelligence that didn't want to kill anyone, then life would still be fully meaningful and everything would be hunky-dory.
↑ comment by Igor Ivanov (igor-ivanov) · 2023-09-05T00:45:39.762Z · LW(p) · GW(p)
The meaninglessness comes from an idea akin to "why bother with anything if AGI will destroy everything?"
Read Feynman's quote at the beginning of the post. It describes his feelings about the atomic bomb, which are relevant to some people's thoughts about AGI.