"No-one in my org puts money in their pension"
post by Tobes (tobias-jolly) · 2024-02-16T18:33:28.996Z · LW · GW · 16 comments
This is a link post for https://seekingtobejolly.substack.com/p/no-one-in-my-org-puts-money-in-their
Epistemic status: the stories here are all as true as possible from memory, but my memory is so-so.
This is going to be big
It’s late Summer 2017. I am on a walk in the Mendip Hills. It’s warm and sunny and the air feels fresh. With me are around 20 other people from the Effective Altruism London community. We’ve travelled west for a retreat to discuss how to help others more effectively with our donations and careers. As we cross cow field after cow field, I get talking to one of the people from the group I don’t know yet. He seems smart, and cheerful. He tells me that he is an AI researcher at Google DeepMind. He explains how he is thinking about how to make sure that any powerful AI system actually does what we want it to. I ask him if we are going to build artificial intelligence that can do anything that a human can do. “Yes, and soon,” he says, “And it will be the most important thing that humanity has ever done.”
I find this surprising. It would be very weird if humanity were on the cusp of the most important world-changing invention ever, and so few people were seriously talking about it. I don’t really believe him.
This is going to be bad
It is mid-Summer 2018 and I am cycling around Richmond Park in South West London. It’s very hot and I am a little concerned that I am sweating off all my sun cream.
After having many other surprising conversations about AI, like the one I had in the Mendips, I have decided to read more about it. I am listening to an audiobook of Superintelligence by Nick Bostrom. As I cycle in loops around the park, I listen to Bostrom describe a world in which we have created superintelligent AI. He seems to think the risk that this will go wrong is very high. He explains how scarily counterintuitive the power of an entity that is vastly more intelligent than a human is. He talks about the concept of “orthogonality”: the idea that there is no intrinsic reason that the intelligence of a system is related to its motivation to do things we want (e.g. not kill us). He talks about how power-seeking is useful for a very wide range of possible goals. He also talks through a long list of ways we might try to avoid it going very wrong. He then spends a lot of time describing why many of these ideas won’t work. I wonder if this is all true. It sounds like science fiction, so while I notice some vague discomfort with the ideas, I don’t feel that concerned. I am still sweating, and am quite worried about getting sunburnt.
It’s a long way off though
It’s still Summer 2018 and I am in an Italian restaurant in West London. I am at an event for people working in policy who want to have more impact. I am talking to two other attendees about AI. Bostrom’s arguments have now been swimming around my mind for several weeks. The book’s subtitle is “Paths, Dangers, Strategies” and I have increasingly been feeling the weight of the middle one. The danger feels like a storm. It started as vague clouds on the horizon and is now closing in. I am looking for shelter.
“I just don’t understand how we are going to set policy to manage these things,” I explain.
I feel confused and a little frightened.
No-one seems to have any concrete policy ideas. But my friend chimes in to say that while, yeah, there’s a risk, it’s probably pretty small and far away at this point.
“Experts think it’ll take at least 40 more years to get really powerful AI,” she explains. “There is plenty of time for us to figure this out.”
I am not totally reassured, but the clouds retreat a little.
This is fine
It is late January 2020 and I am at after-work drinks in a pub in Westminster. I am talking to a few colleagues about the news. One of my colleagues, an accomplished government economist, is reflecting on the latest headlines about a virus in China.
“We are overreacting,” he argues, “just as we did with SARS. The government feels like it has to look like it’s doing something, but it’s too much.”
He goes on to talk about how the virus is mostly just media hype. Several other colleagues agree.
I don’t respond to this. I have a half-formed thought along the lines of “even if there is only a small chance that this becomes a big pandemic, it might be worth reacting strongly. The cost of underreacting is much greater than the cost of overreacting”. But I don’t say this. I am a little bit reassured by his confidence that all will be well.
A few weeks later, I come home to see my housemate carrying crates of canned food to his room. He has spent a lot of time on internet rationality forums and at their suggestion has started stockpiling. His cupboard is filled with huge quantities of soup. I think that this is pretty ridiculous. My other housemates and I watch him as he lugs the crates up the stairs to his room; we look at each other and roll our eyes.
It’s probably something else in your life
It’s Spring 2021. I am at a dinner hosted by a friend in south London. I am talking to an acquaintance, a PhD student studying machine learning at a London university. I ask him how he has been doing.
“Not very good to be honest,” he says glumly. “I am pretty worried about AI and no one in my lab seems to take this seriously. They all think it’s science fiction. It makes me feel really lonely and a bit crazy.”
He looks dejected.
“It’s not just the ideas though. I think this thing is actually going to kill me! It’s horrifying”.
I commiserate with him. It must be hard to be surrounded by people who dismiss your ideas all day. I wonder if his anxiety has more to do with lock-down-exacerbated loneliness than with the whole AI apocalypse thing.
A year later we will see each other at another party and he will tell me that he has now quit the PhD and is feeling much better. He will say that he still thinks it’s likely that AI will kill him.
No-one in my org puts money in their pension
It’s Summer 2022 and I am at a party in central London. I am talking to an AI alignment researcher in the kitchen. He is upbeat and is excitedly talking about some ideas that might ensure the alignment of superintelligent AI with human values.
“We just need to create this small AI that we can fully understand and verify is aligned,” he explains, “and then we use that AI to validate that the super huge massive superintelligence is aligned with our values… It’s a bit like creating Godzilla to fight Mega-Godzilla!”
“That is not a reassuring metaphor,” I say.
“Oh yeah, we’re not super hopeful, but we’re trying,” he states, somehow still cheerfully.
Later in the conversation, he will explain that despite the government incentive systems, no-one at his place of work takes anything above the minimum possible pension. They all doubt they will need it.
Over the next few weeks, I will wonder if I should be putting less money into my pension.
Doom-vibes
It is February 2023. I now have a vague sense of impending doom most mornings. I’m not sure why. It might be the combination of my pre-existing beliefs about the difficulty of AI alignment and the holy-shit-rapid-advances vibe that OpenAI has been producing. It might be that I have been reading post after post talking about the timelines to transformative AI and the probability of doom. Or it might be all the people talking about how AI alignment seems really hard and how there is now a Molochian race to the bottom between AI labs to produce the biggest intelligence as quickly as possible.
The doom feeling might also be the mid-winter darkness and cold, alongside my general tendency towards anxious thoughts and feelings. It could also be that I am yet to secure future funding for the project that is my current livelihood. All I know is that I regularly feel hopeless.
Each morning, I go to the window of my East London flat. Many days I look out over the rooftops and picture the swarms of kill-drones on the horizon. Maybe that’s how it will happen? That, or a mysterious and terrifying pandemic? I will see my friends freaking out about it on social media one day and then a day later my partner and I will be coughing up blood. Or maybe it’ll be quicker. I’ll be blinded by the initial flash of the bomb, then a fraction of a second of extreme heat before the end. The fear isn’t sharp, just a dull empty sense that there is no future.
Maths might help
It’s March 2023 and I am in my flat, talking to a friend. We have agreed to meet to spend some time figuring out our thoughts on the actual risk from AI. We have both been reading a lot about it, but still feel very confused, so we wanted to be more deliberate.
We spend some time together writing about and discussing our thoughts. I am still very confused. I mostly seem to be dancing between two ideas:
On one side there is the idea that the base-rate for catastrophic risk is low. New technology is usually good on balance and no new tech has ever killed humanity before. Good forecasters should need a lot of evidence to update away from that very low prior probability of doom. There isn’t much hard evidence that AI is actually dangerous, and it seems very possible that we just won’t be able to create superintelligence for some reason anyway.
On the other side is the idea that intelligence creation is just categorically different from other technology. Intelligence is the main tool for gaining power in the world. This makes the potential impact of AI completely historically unprecedented. And if you are able to take the perspective of smaller groups of humans (e.g. many groups of indigenous people), powerful agents unexpectedly causing sudden doom is actually very very precedented. Oh, and the power of these things is growing extremely fast.
“So what do you think?” my friend asks, “What’s your probability of doom?”
“I really don’t know,” I sigh. “One part of me is saying this is all going to be fine, things are usually fine, and another part of me is saying that this is definitely going to be terrible.” I pause…
“So maybe like 20%?”
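(A very crude way to make that kind of gut-level split explicit is to treat it as a mixture of the two views. The sketch below is purely illustrative; the numbers are hypothetical placeholders, not the ones we actually used.)

```python
# Purely illustrative sketch: treating the two competing views as a simple mixture.
# All numbers here are hypothetical placeholders, not figures from the conversation.

p_doom_if_things_are_usually_fine = 0.02   # the "low base rate, tech is usually good" view
p_doom_if_categorically_different = 0.50   # the "intelligence creation is unprecedented" view
weight_on_second_view = 0.4                # credence given to the second view

p_doom = (1 - weight_on_second_view) * p_doom_if_things_are_usually_fine \
    + weight_on_second_view * p_doom_if_categorically_different

print(f"Rough overall p(doom): {p_doom:.0%}")  # ~21% with these placeholder numbers
```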
A problem shared is…
It is Easter 2023 and I am at my aunt's house. We have just finished a large lunch and I am sitting at the dinner table with my parents, aunts, uncles and cousins. Someone asks me what I am working on at the moment. I explain the personal career review I am doing.
“I want my career to be impactful, I want to help others,” I explain. “And in terms of the positive impact I could have on the world, I am really worried about the risks from advanced AI.”
My uncle asks me why I’m so worried.
I respond:
“These orgs have many billions of dollars in funding and are building things that are unimaginably powerful, on purpose. If something much smarter than a human can be built, and there doesn’t seem to be a reason why it won’t be, it will be massively powerful. Intelligence is what allowed humanity to become the dominant species on this planet. It’s the reason we exterminate ants, not the other way around. They are building something that, in terms of raw power, could be to us as we are to ants.”
I go on…
“Lots of people think that this might be enough; that the main risk is that we will build a thing, it will have goals that are different to ours, and then game over. That seems possible and scary. But humans don’t even need to lose control for this to go very badly. Superintelligence is a superpower, and big changes to the global structure of power can be unpredictable and terrifying. So I don’t know what happens next. It might be the worst wars imaginable or AI-powered global totalitarianism. But whatever does happen seems like it has a decent chance of killing us or making life for most or all of humanity terrible”.1
My family shows concern, maybe some confusion, but definitely concern. It feels like a relief to express it. I have always been stoical about my pain and anxieties. As a child and teenager, I never wanted to bother others with my stuff. It’s nice to be able to express to them that I am scared about something. Talking about the risk of AI doom feels easier than discussing my career worries.
Hope
It is June 2023. In the months prior, the heads of the leading AI labs have been talking to world leaders about the existential risks from AI. They, along with many prominent AI researchers and tech leaders, have signed a statement saying that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
Today I am hosting an event in East London on AI governance. I have lined up 8 speakers from the UK government, think tanks and academia, and around 70 people have turned up to watch. I’m proud of my work organising the event, but am feeling pretty nervous and hoping it goes well.
To inspire discussion, I have created a graph on the wall by the entrance to collect and present attendees’ views on AI. The two axes are “how long will it be until we have Artificial General Intelligence?” and “the probability of AGI catastrophe”. As people come through the door they add a post-it note to the place on the graph that approximates their viewpoint. Most people have put their post-its in a cluster centred at a 25% chance of catastrophe in around 20 years. I stare at the second largest cluster in the top left of the graph. Several people here think there is around a 75% chance of everyone being killed in around 7 years. I think about the kill-drones. I think about the flash of the bomb. I think about whether my pension is a waste of money. I think about the unreal shock of the first days of lockdown in 2020. I feel a pang of fear in my stomach.
I look around at the crowded room of people. They are snacking on the fruit and hummus I bought and are talking eagerly. I look with some trepidation at the lectern where I will soon be standing in front of everyone to introduce the first speaker. They will be talking positively and insightfully about how we could govern advanced AI and what policies and institutions we need to ensure AI safety. Right now I focus on my breath and try to let go of the fear for a moment. I start to walk towards the front of the room. I really, really, really hope that this goes well.
16 comments
Comments sorted by top scores.
comment by Jacob G-W (g-w1) · 2024-02-16T19:33:20.632Z · LW(p) · GW(p)
I really enjoyed this post. Thank you for writing it!
I also have no clue what is going to happen. I predict that it will be wild, and I also predict that it will happen in <=10 years. Let's fight for the future we want!
comment by Karl von Wendt · 2024-02-17T07:56:12.990Z · LW(p) · GW(p)
Thank you for being so open about your experiences. They mirror my own in many ways. Knowing that there are others feeling the same definitely helps me cope with my anxieties and doubts. Thank you also for organizing that event last June!
comment by Ben Pace (Benito) · 2024-02-17T20:21:13.126Z · LW(p) · GW(p)
<3
comment by Vincent Fagot (vincent-fagot) · 2024-02-18T10:33:41.752Z · LW(p) · GW(p)
Thank you for sharing this.
comment by whestler · 2024-03-11T16:12:17.451Z · LW(p) · GW(p)
I had a similar emotional response to seeing these same events play out. The difference for me is that I'm not particularly smart or qualified, so I have an (even) smaller hope of influencing AI outcomes, plus I don't know anyone in real life who takes my concerns seriously. They take me seriously, but aren't particularly worried about AI doom. It's difficult to live in a world where people around you act like there's no danger, assuming that their lives will follow a similar trajectory to their parents'. I often find myself slipping into the same mode of thought.
comment by Review Bot · 2024-02-17T13:26:20.803Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by Blacknsilver · 2024-05-18T10:44:42.945Z · LW(p) · GW(p)
This is propaganda and alarmism.
Edit: I spent the past 20 minutes thinking about the best way to handle this type of situation. I could make a gigantic effortpost, pointing out the millions/billions of people who are currently suffering and dying. People whose lives could be improved immeasurably by AI that is being slowed down (a tiny bit) by this type of alarmism. But that would be fighting propaganda with propaganda.
I could point out the correct way of handling these types of thoughts using CBT and similar strategies. Again, it would be a huge, tremendously difficult endeavour and it would mean sacrificing my own free time to most likely get downvotes and insults in return (I know because I've tried enlightening people in the past elsewhere).
Ultimately, I think the correct choice for me is to avoid lesswrong and adjacent forums because this is not the first or second time I've seen this type of AI doomerism and I know for a fact depression is contagious.
↑ comment by Gurkenglas · 2024-05-18T11:15:53.748Z · LW(p) · GW(p)
Sure, he's trying to cause alarm via alleged excerpts from his life. Surely society should have some way to move to a state of alarm iff that's appropriate; do you see a better protocol than this one?
↑ comment by Tobes (tobias-jolly) · 2024-05-31T09:33:46.019Z · LW(p) · GW(p)
I'd appreciate seeing the post that you mentioned, and part of me does worry that you are right.
Part of me worries that this is all just a form of group mental illness. That I have been sucked into a group that was brought together through a pathological obsession with groundless abstract prediction and a sad-childhood-memories-induced intuition that narratives about the safety of powerful actors are usually untrustworthy. That fears about AI are an extreme shadow of these underlying group beliefs and values. That we are just endlessly group-reinforcing our mental-ill-health-backed doomy predictions about future powerful entities. I put weight on this part of me having some or all of the truth.
But I have other parts that tell me that these ideas just all make sense. In fact, the more grounded, calm and in touch with my thoughts and feelings I am, the more I think/feel that acknowledging AI risk is the healthiest thing that I do.
↑ comment by kromem · 2024-05-31T10:13:19.594Z · LW(p) · GW(p)
In mental health circles, the general guiding principle for whether a patient needs treatment is whether the train of thought is interfering with their enjoyment of life.
Do you enjoy thinking about these topics and discussing them?
If you don't - if it just stresses you out and makes the light of life shine less bright, then it's not a bad idea to step away from it or take a break. Even if AI is going to destroy the world, that day isn't today and arguably the threat of that looming over you sooner than a natural demise increases the value of the days you have that are good. Don't squander a limited resource.
But if you enjoy the discussions and the debates, if you find the topic stimulating and the problem space interesting - you're going to whittle your days away doing something no matter how you spend your time. It might as well be working on something fun that you believe in and feel may make a difference to the world. Even if your worries are overblown, time spent on something you enjoy with people you respect isn't time wasted.
Health is a spectrum and too much of a good thing isn't good at all. But only you can decide what's too much and what's the right amount. So if you feel it's too much, you can scale it back. And if you feel it's working out well for you, more power to you - the sense of feeling in the right place at the right time (even if under perceived dire circumstances) is a bit of a rarity in the human experience.
In general - enjoy life while it lasts. No matter your objective p(doom), your relative p(doom) is 100%. Make the most of the time you have.
↑ comment by Seth Herd · 2024-05-31T10:40:26.934Z · LW(p) · GW(p)
This is good advice, but you must recognize that it's also advice to be selfish. Many rationalists believe in utilitarianism, which preaches near zero selfishness. This is an immense source of stress and unhappiness.
This is particularly problematic when combined with the historically under-recognized importance of the alignment problem. There's been a concern that each individual's efforts might have a nontrivial influence on the odds of a good future for a truly vast number of sentient beings.
Fortunately, AI alignment/outcomes is being steadily better recognized, so individuals can step away a little more easily, knowing someone else will do similar work.
But this does not fully solve the problem. Pretending it doesn't exist and advising someone to be selfish when they have complex, well-thought-out reasons not to be is not going to help those individuals.
↑ comment by quetzal_rainbow · 2024-05-31T11:37:16.471Z · LW(p) · GW(p)
I feel weird reading this. Like, preventing a planetary catastrophe from killing you is pretty much selfish. On the other hand, increasing your own happiness is just as good a method of increasing total utility as any other. So the real question is "am I capable of creating an impact on the AI-risk issue, given such-and-such tradeoffs on my happiness?"
↑ comment by Seth Herd · 2024-05-31T18:21:57.354Z · LW(p) · GW(p)
I totally agree that increasing your own happiness is a valid way to pursue utilitarianism. I think this is often overlooked. (although let's bear in mind that almost nobody actually earns-to-give and so almost nobody walks the talk of being fully utilitarian; the few I know of who do have made a career of it, keeping their true motives in question)
I think rationalists are aware of the following calculus: My odds of actually saving my own life by working on AGI alignment are very small. There are thousands of people involved; the odds of my making the critical contribution are tiny, on the order of maybe 1/10000 at most. But the payoff could be immense; I might live for a million years and expand my mind to experience much more happiness per year, if this all goes very well.
For anyone who does that calculus, it is worth being quite unhappy now to have that less than 1/10000 chance of achieving so much more happiness.
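(A back-of-the-envelope version of that calculus; the 1/10000 and million-year figures are from the comment above, while the cost figure is a purely hypothetical placeholder:)

```python
# Rough expected-value sketch of the calculus described above.
# The 1/10000 and million-year figures come from the comment; the cost figure is a placeholder.

p_critical_contribution = 1 / 10_000   # chance your work is the critical contribution
payoff_happy_years = 1_000_000         # years of much happier life if things go very well
cost_unhappy_years = 40                # hypothetical: decades of reduced happiness spent now

expected_gain = p_critical_contribution * payoff_happy_years   # = 100 expected happy years
print(expected_gain > cost_unhappy_years)                      # True with these numbers
```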
I don't think that's how everyone thinks of it, and probably not most of them. I suspect that even rationalist utilitarians don't have it all spelled out in mathematical detail. I certainly don't.
But my point is, just telling them "hey, you should do something that makes you happy" doesn't address the reasons most alignment people are doing what they are; they have very specific logic behind it.
On the other hand, some of them did just start out thinking "this sounds fun" and have found out it's not, and reminding them to ask if that's the case could make them happy.
And slightly reduce our odds of a grand future...
↑ comment by DPiepgrass · 2024-05-22T00:50:42.184Z · LW(p) · GW(p)
I can't recall another time when someone shared their personal feelings and experiences and someone else declared it "propaganda and alarmism". I haven't seen "zero-risker" types do the same, but I would be curious to hear the tale and, if they share it, I don't think anyone will call it "propaganda and killeveryoneism".
↑ comment by kromem · 2024-05-31T10:03:13.050Z · LW(p) · GW(p)
It's not propaganda. OP clearly believes strongly in the sentiments discussed in the post, and it's mostly a timeline of personal responses to outside events rather than a piece meant to misinform or sway others regarding those events.
And while you do you in terms of your mental health, people who want to actually be "less wrong" in life would be wise to seek out and surround themselves with ideas different from their own.
Yes, LW has a certain broad bias, and so ironically for most people here I suspect it serves this role "less well" than it could in helping most of its users be less wrong. But particularly if you disagree with the prevailing views of the community, that makes it an excellent place to spend your time in listening, even if it can create a somewhat toxic environment for partaking in discussions and debate.
It can be a rarity to find spaces where people you disagree with take time to write out well written and clearly thought out pieces on their thoughts and perspectives. At least in my own lived experiences, many of my best insights and ideas were the result of strongly disagreeing with something I read and pursuing the train of thought resulting from that exposure.
Sycophantic agreement can give a bit of a dopamine kick, but I tend to find it next to worthless for advancing my own thinking. Give me an articulate and intelligent "no-person" any day over a "yes-person."
Also, very few topics are actually binaries, even if our brains tend towards categorizing them as such. Data doesn't tend to truly map to only one axis, and even when it is mapped to a single axis it typically falls along a spectrum. It's possible to disagree about the spectrum of a single axis of a topic while finding insight and agreement about a different axis.
Taking what works and leaving what doesn't is probably the most useful skill one can develop in information analysis.