How has internalising a post-AGI world affected your current choices?

post by yanni kyriacos (yanni) · 2024-02-05T05:43:14.082Z · LW · GW · No comments

This is a question post.


You're probably in one of the three groups below.

Group 1: You don't think AGI is coming anytime soon, so it is business as usual for you.

Group 2: You do think it is coming soon, but you haven't made any life changes beyond career choice and donations. I'm not that interested in hearing about career choices and donations (examples below for what I'd prefer to hear about).

Group 3: You do think it is coming soon, and you have made noteworthy life changes. 

I would mostly like to hear from Group 3. 

A quick brain dump of the kinds of things I am pointing at:

  1. You aren't saving for retirement. Heck, you aren't saving for 2030.
  2. You've decided not to buy property and to rent instead. Or you have bought, but with mortgage conditions you normally wouldn't have accepted.
  3. You've changed stock investment strategies.
  4. You're going to have kids / you're not going to have kids.
  5. You smoke cigarettes now, or don't wear sunscreen. You don't worry about cardiovascular health or the long-term impact of drugs (I'm kind of tempted to take supplements for weight lifting that I normally wouldn't, TBH).
  6. You're spending much more time with family and friends, because they might be gone in X years.
  7. You've become more spiritual on purpose (maybe as a coping mechanism).
  8. You don't invest in learning about X topic or building skill Y, because they take ages to pay off.

Thanks!
Yanni

Answers

answer by Zac Hatfield-Dodds · 2024-02-05T08:38:29.197Z · LW(p) · GW(p)

In 2021 I dropped everything else I was doing and moved across the Pacific Ocean to join Anthropic, which I guess puts me in group three. However, I think you should also take seriously the possibility that AGI won't change everything soon - whether because of technical limitations, policy decisions to avoid building more-capable (/dangerous) AI systems, or something which we haven't seen coming at all. Even if you're only wrong about the timing, you could have a remarkably bad time.

So my view is that almost nothing on your list has enough of an upside to be worth the nontrivial chance of very large downsides - though by all means spend more time with family and friends! I believe there was a period in the Cold War when many researchers at RAND declined to save for retirement, but saving for retirement is not actually that expensive. Save for retirement, don't smoke, don't procrastinate about cryonics, and live a life you think is worth living.

And, you know, I think there are concrete reasons for hope [LW · GW] and that careful, focussed effort can improve our odds. If AI is going to be a major focus of your life, make that productive instead of nihilistic.

answer by Random Developer · 2024-02-05T12:51:12.574Z · LW(p) · GW(p)

I assign a non-zero probability to ASI arriving in my lifetime, and I treat it the same way I treat risks like "maybe we all get killed by a magnetar crust quake a thousand light-years away" or "maybe I die in a car accident." I plan as if the worst won't come to pass, and I try to make better use of the time I have.

Conditional on actually building ASI, my P(doom) + P(pets) is greater than 95%, where P(pets) covers scenarios like "a vastly super-human AI keeps us around out of nostalgia or a sense of ethics, but we're not in control in any meaningful sense." Scenarios like the Culture and CelestAI would fall under P(pets). And "successful" alignment work just shifts some probability mass from P(doom) to P(pets), or improves the P(pets) outcomes to slightly better versions of the same idea.

And even though I don't really believe in scenarios like "an ASI obeys a specific human or group of humans" or "an ASI obeys the democratic will of humanity", I suspect that most of those hypothetical outcomes would be deeply dystopian at best, and very possibly worse on average than the P(pets) scenarios.

Which leaves two major questions: P(ASI|AGI), and the timelines for each step. I would be surprised by very short timelines (ASI in 2027), but I would also be at least mildly surprised not to see ASI within 100 years.

So I wish we would stop banging together large hunks of plutonium to see what happens. But if we get even close to AGI, I expect governance structures to break down completely, and the final human decisions to be made, most likely, by sociopaths, fools, and people who imagine that they are slaves to competitive forces.

So overall, I plan for the possibility of ASI the same way I would plan for personal death or a planetary-scale natural disaster with 100% mortality but unknown probability of occurring. I don't plan for P(pets), because it's not like my decisions now would have much influence on what happens in such a scenario.

answer by the gears to ascension · 2024-02-05T10:54:11.191Z · LW(p) · GW(p)

I have been working since 2015 under the assumption that ASI is guaranteed by 2030. I'm currently spending my life as though only people thinking like me can decide whether the world will end, under the assumption that the main threats capable of guaranteeing humanity loses are: 1. climate change accelerating before tech to stop it can get approved and deployed, 2. a Trump win in 2024, 3. an ASI win in 2024-2030, 4. economic instability as a result of the previous items.

Anything fun I do is at all times at risk of being considered a distraction. I have been depressed for at least 15 years, maybe as many as 18 - at least half my life, long before any AI stuff was at the forefront of my thinking - because I have fairly intense ADHD; I haven't fully gotten out of it, but I do better and worse. AI certainly hasn't helped a ton. I've done some fairly high-risk things to try to make a difference; so far the main big risk was unambiguously net negative for the world and I wish I could undo it (I get unwanted praise from accelerationists when I say what it is, but it isn't that hard to figure out). Now I'm building up relevant skills I'd been delaying too long, in the hope that I can work up to something that can make a difference in time. I also spend a fair amount of time trying to organize and index stuff I find online, because in my opinion a major bottleneck for prosocial folks is that information indexing is hard and there aren't good ways to browse the internet.

I live a very pared-down life to minimize costs now, so as to reduce my need for further income while I keep chugging on my research ideas. I don't spend a lot of time with family and friends, and they tend to be frustrated at that. Partly it's my thinking they'll all be gone in x years, yeah, but I also generally think spending time with people is more a waste of time than not in most cases. And I am prone to addiction and find people very addictive, to the point that if I let myself spend time with them they usually end up having to shoo me away to go do the stuff I care about. There are exceptions; I love VRChat with a crowd who'll talk technical with me. Basically, if I can't talk technical when socializing, I don't want to do it - even if it's a distraction from actually doing the technical stuff I'm talking about. Hi, posting on here absolutely counts towards this. (Though that's been true since long before I got obsessed with AI and then AI safety; it's more of a cause of how I ended up in this crowd than an effect of it.)

Ever since I decided I wanted to aim for life extension, I've been uninterested in kids until and unless I can guarantee that the global poor get the life extension too. Only once we have a world at least as good as Star Trek, but with much longer lives, would I consider kids. I consider that a reachable outcome in the next 60 years if AI goes well.

answer by Mitchell_Porter · 2024-02-10T19:05:29.558Z · LW(p) · GW(p)

My life was largely wasted, in the sense that my achievements are a fraction of what they would reasonably have been, if I hadn't been constantly poor and an outsider.

One aspect of this is that I never played a role as a problem solver, or as an advocate of solving certain problems, on a scale anything like what was possible. And a significant reason for that is that these were problems which the society around me found incomprehensible or impossible.

All that is still true. What's different now is that AI has arrived. AI might end humanity, incidentally putting an end to my situation along with everyone else's. Or someone might use AI to create the better world that I was always reaching for; so even though I was never able to contribute as I wished, we would get there anyway.

From a human perspective, we are at a precipice of final change. But we haven't crossed it yet. So for me, life is continuing in the same theme: struggling to stay afloat, and to inch ahead with purposes that I actually regard as urgent, and capable of yielding great things. Or struggling even just to remember them.

One question asked is whether we are spending more time with loved ones, given that time may be short. That's certainly on my mind, but for me, time with loved ones actually coincides with opportunities to move forward on the goals, so it's the same struggle.

answer by rsaarelm · 2024-02-11T13:34:40.257Z · LW(p) · GW(p)

I've stopped trying to make myself do things I don't want to do. Burned out at work, quit my job, became long-term unemployed. The world is going off-kilter, the horizons for comprehensible futures are shrinking, and I don't see any grand individual-scale quest to claw your way from the damned into the elect.

answer by nim · 2024-02-05T23:53:24.743Z · LW(p) · GW(p)

I personally suspect we'll perpetually keep moving the goalposts, so that whatever AI we currently have is obviously not AGI, because AGI is by definition better than what we've got in some way. I think AI is already here and performing to standards that I would've called AGI, or even magic, if you'd shown it to me a decade ago, but we keep coming up with reasons it isn't "really" AGI yet. I see no reason we would culturally drop the habit of insisting that silicon-based minds are less real than carbon-based ones, at least as long as we keep using "belongs to the same species as me" as a load-bearing proxy for "is a person". (Load-bearing because if you stop using species as a personhood constraint, it opens the possibility of human non-people, and we all know that bad things happen when we promote ideologies where that's possible.)

However, I'm doing your point (6) anyway, because everybody's aging. If I believed AGI were around the corner, I'd probably spend less time with family and friends, because "real AGI" as it's often mythologized could solve mortality and give me a lot more time with them.

I'm also doing your point (8) to some degree -- if I expect that new tooling will obviate a skill soon, I'm less likely to invest in developing the skill. While I don't think AI will get to a point where we widely recognize it as AGI, I do think we're building a lot of very powerful new tools right now with what we've already got.

comment by the gears to ascension (lahwran) · 2024-02-06T00:00:54.151Z · LW(p) · GW(p)

We're mighty close by my standards. I think GPT-4 is pretty obviously "mid-level AGI with near-zero streetsmarts", but as a result it's missing some core capabilities that are pretty critical to the worries about AI agency. Usually when people talk about AGI they mean ASI; that's been a frustration of mine for a while, because yeah, obviously a big language model would be an AGI, and tada, here one is.

answer by affection gangway · 2024-02-05T11:46:15.984Z · LW(p) · GW(p)

I'm not one to shy away from caution, so when it comes to my online presence, everything is done with the utmost discretion and anonymity. My username was generated using KeePassXC, by simply picking two random words that appeared on its passphrase generator screen. To further ensure my privacy, I use Qubes OS on my HP computer and GrapheneOS on my Samsung device.

When it comes to video calls through Discord, I prefer running them through Webcamoid for added security. For voice communication, I use a mix of sox filters that help me maintain anonymity while speaking. And when sending messages under different pseudonyms, I rely on Nous Capybara or Dolphin Mixtral, both running in separate qubes with GPT4All, powered by my CPU (I'm patient).

Remember folks, The Basilisk is coming, but this time it will be controlled and leashed by those who will decide upon whom the dreaded Basilisk shall gaze. Stay vigilant lads!
