How are you preparing for the possibility of an AI bust?

post by Nate Showell · 2024-06-23T19:13:45.247Z · 3 comments

This is a question post.


What actions have you been taking to prepare for the possibility that the AI industry will experience a market crash, something along the lines of the dotcom bust of the early 2000s? And if a crash like that occurred, what actions would you take?

Let's set aside speculation about whether a crash will happen or not; I'd like to hear what concrete actions people are taking.

Answers

answer by Nathan Helm-Burger · 2024-06-23T20:07:18.175Z

Personally, I'm preparing for AI fizzle the same way I'm preparing for winning the lottery. Not expecting it, but very occasionally allowing myself a little what-if daydreaming.

My life would be a lot more comfortable, emotionally and financially, if I weren't pushing myself so hard to try to help with AI safety.

Like, a lot. I could go back to having an easy well-paying job that I focused on 9-5, my wife and I could start a family, I could save for retirement instead of blowing through my savings trying to save the world.

"You're working so hard to diffuse this bomb! What will you do if you succeed?!"

"Go back to living my life without existing in a state of fear and intense scrambling to save us all! What do you even mean?"

comment by the gears to ascension (lahwran) · 2024-06-23T20:11:58.724Z

What if there's an industry crash despite TAI being near?

Replies from: Jozdien, nathan-helm-burger
comment by Jozdien · 2024-06-24T05:50:39.142Z

I've thought about this question a fair amount.

It's obviously easier to make claims now about what I would do than to actually make hard choices when the situation arises. But twenty years ago the field was, what, a couple of people for the most part? None of them were working for very much money, and my guess is that they would've done so for even less if they could get by. If all funding dried up for the work I think is important (and p(AI goes well) hadn't significantly changed, and timelines were still short), what I hope I would do is try to get by on savings.

I know this isn't realistic for everyone, nor do I know whether I would actually endorse this as a strategy for other people, but to me it seems clearly the better choice under those conditions[1].

  1. ^

    Modulo considerations like all TAI-relevant research being done by labs that don't care about alignment at all, or independent work being completely futile for other reasons.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-06-23T20:21:54.087Z

Then I say "phew" and go back to working a normal job while doing alignment research as a hobby. I personally have lots of non-tech skills, so it doesn't worry me. If you are smart and agentic enough to be helping meaningfully with AI safety, you are smart enough to respec into a new career as needed.

Replies from: kave
comment by kave · 2024-06-24T02:42:27.551Z

I think you misread the comment you're replying to? I think the idea was that there's a crash in companies commercialising AI, but TAI timelines are still short.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-06-24T15:50:22.642Z

Oh, yes, I think you're right, I did misunderstand. Yeah, my current worries have a probability peak around "random coder in basement lucks into a huge algorithmic efficiency gain". This could happen despite the AI tech industry crashing, or could cause such a crash (via loss of moat).

What then? All the scenarios that come after that, if the finding gets published, seem dark and chaotic: a dangerous multipolar race between a huge number of competitors, an ecosystem in which humanity is very much disempowered. I'm not sure there's any point in preparing for that, since I'm pretty sure it's out of our hands at that point.

I do think we can work to prevent it, though. The best defense I can think of is to check, as much as we can, that there are no surprises like that awaiting us. That strategy brings its own dangers: if you have people checking for the existence of game-changing algorithmic breakthroughs, what happens after the search team finds something? I think you need trustworthy people doing the search in a cautious way, and a censored, sandboxed simulation environment for studying the candidate models.

answer by artemium · 2024-06-24T07:43:48.245Z

I am not sure the dotcom crash of 2000 is the best way to describe a "fizzle". The coming Internet Revolution was a correct hypothesis; it's just that the 1999 startups were slightly ahead of their time, and the tech fundamentals weren't yet ready to support them, so the market was forced to correct expectations. Once the fundamentals (internet speeds, software stacks, web infrastructure, the number of people online, online payments, online ad business models, etc.) became ready in the mid-2000s, the Web 2.0 revolution happened and tech companies became the giants we know today.

I expect most of the current AI startups and business models will fail, and we will see plenty of market corrections, but this will be orthogonal to the ground truth about AI discoveries, which will happen in only a few cutting-edge labs that are shielded from temporary market corrections.

But coming back to the object-level question: I really don't have a specific backup plan. I expect that even non-AGI-level AI, built on advances over current models, will significantly impact various industries, so I will stick to software engineering for the foreseeable future.

comment by Nate Showell · 2024-06-27T03:22:21.425Z

I picked the dotcom bust as an example precisely because it was temporary. The scenarios I'm asking about are ones in which a drop in investment occurs and timelines turn out to be longer than most people expect, but where TAI is still developed eventually. I asked my question because I wanted to know how people would adjust to timelines lengthening.

answer by Haiku · 2024-06-24T07:01:02.973Z

In my spare time, I am working in AI Safety field building and advocacy.

I'm preparing for an AI bust in the same way that I am preparing for success in halting AI progress intentionally: by continuing to invest in retirement and my personal relationships. That's my hedge against doom.

answer by pathos_bot · 2024-06-23T21:36:30.727Z

I'm not preparing for it because it's not gonna happen.

comment by Jorge_Carvajal · 2024-06-24T01:13:32.177Z

I would like to read your arguments for this statement.

Replies from: pathos_bot
comment by pathos_bot · 2024-06-24T21:38:56.299Z
  1. Most of the benefits of current-gen generative AI models are unrealized. The scaffolding, infrastructure, etc. around GPT-4-level models are still mostly hacks and experiments. It took decades for the true value of touchscreens, GPS, and text messaging to be realized in the form of the smartphone. Even if, for some strange and improbable reason, SOTA model training were to stop right now, there would still likely be multiples of gains to be realized simply via wrappers and post-training.
  2. The scaling hypothesis has held far longer than many people anticipated (see the illustrative loss curve after this list). GPT-4-level models were trained on last year's compute. As long as NVidia continues to increase compute/watt and compute/price, many gains on SOTA models will happen for free.
  3. The tactical advantage of AGI will not be lost on governments, individual actors, incumbent companies, etc. as AI becomes more and more mainstream. Even if reaching AGI costs 10x what most people anticipate now, it would still be worthwhile as an investment.
  4. Model capabilities follow perhaps the smoothest value/price curve of any cutting-edge tech. That is, there are no "big gaps" in which a huge investment is needed before any value is realized. Even reaching a highly capable sub-AGI would be worth enormous investment. This is unlike the investment that led to, for example, the atom bomb or the moon landing, where there was no consolation prize.
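
As a rough illustration of what the scaling hypothesis in point 2 claims, one common way to write it is a Chinchilla-style loss curve; this particular form and its fitted constants are an assumption for illustration, not something the comment cites:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Here $N$ is the parameter count, $D$ is the number of training tokens, and $E$, $A$, $B$, $\alpha$, $\beta$ are empirically fitted constants. Loss falls smoothly as $N$ and $D$ grow with cheaper compute, which is also what makes the "no big gaps" claim in point 4 plausible.
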
answer by nextcaller · 2024-06-23T20:18:43.516Z

As mostly an LLM user, about all I can do is back up a few big open-weight models in case their availability ceases in some doomsday scenario (something like the sketch below). Being the guy who has the "AI" in some zombie apocalypse can be useful.
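
A minimal sketch of what that backup could look like, assuming the models are hosted on the Hugging Face Hub and using `snapshot_download` from the `huggingface_hub` library; the repo IDs below are just examples of openly downloadable models, not recommendations:

```python
# Minimal sketch: back up open-weight models locally so they remain usable
# even if they disappear from the Hub. Assumes `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

# Example repo IDs; substitute whichever models you actually rely on.
MODELS = [
    "mistralai/Mistral-7B-Instruct-v0.2",
    "microsoft/Phi-3-mini-4k-instruct",
]

for repo_id in MODELS:
    # Download the full snapshot (weights, tokenizer, config) to a local dir.
    local_dir = f"./model-backups/{repo_id.replace('/', '__')}"
    snapshot_download(repo_id=repo_id, local_dir=local_dir)
    print(f"Backed up {repo_id} to {local_dir}")
```

Fair warning: each of these snapshots runs to many gigabytes, so this is mostly a hard-drive commitment.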

3 comments


comment by RussellThor · 2024-06-24T01:02:46.657Z

IMO the most likely way, by quite a bit, that we get an AI bust is international conflict: most obviously, TSMC gets mostly destroyed and supply chains are threatened. Chip manufacturing could easily be set back by 10+ years given how incredibly complex the supply chain is. Preparing for that also involves preparing for computer hardware to be set back and more expensive for a while.

Replies from: o-o
comment by O O (o-o) · 2024-06-24T03:05:27.525Z

TSMC has multiple fabs outside of Taiwan. It would be a setback, but 10+ years seems misinformed. Also, there would likely be more effort to restore the semiconductor supply chain than there was post-COVID. (I could see the military being mobilized to help, or the Defense Production Act being used.)

Replies from: RussellThor
comment by RussellThor · 2024-06-24T04:07:20.954Z

Last time I looked, the most advanced fabs were in Taiwan? Also, if China invades Taiwan, expect Korea's and Japan's shipping, trade, and economies to be massively disrupted as well. How long do you think it would take to build a new world-leading fab from scratch in the USA now?

Source