If AI is in a bubble and the bubble bursts, what would you do?

post by Remmelt (remmelt-ellen) · 2024-08-19T10:56:03.948Z · LW · GW · 4 comments

This is a question post.


"The AI bubble is reaching a tipping point", says Sequoia Capital.

AI companies have paid billions of dollars for top engineers, data centers, etc. Meanwhile, companies are running out of 'free' data to scrape online and are facing lawsuits over the data they did scrape. Finally, the novelty of chatbots and image generators is wearing off for users, and fierce competition is leading to some product commoditisation.

No major AI lab is making a profit yet (while the upstream GPU suppliers do profit). That's not to say the labs won't eventually make money from automation.

It looks somewhat like the run-up to the Dotcom bubble. Companies then, too, were awash in investment (propped up by low interest rates), but most lacked a viable business strategy. Once the bubble burst, the non-viable internet companies were filtered out.

Yet today, companies like Google and Microsoft use the internet to dominate the US economy. Their core businesses became cash cows, allowing CEOs to throw money at AI for as long as a voting majority of shareholders buys the growth story. That marks one difference from the Dotcom bubble. Anyway, here's the scenario:


How would your plans change if we saw an industry-wide crash? 

Let's say there is a brief window where:

Let's say it's the one big crash before major AI labs can break even for their parent companies (e.g. before mass manufacturing lowers hardware costs, real-time surveillance resolves the data bottleneck, and multi-domain-navigating robotics resolves inefficient learning).

Would you attempt any actions you would not otherwise have attempted? 
 

Answers

answer by joec · 2024-08-20T21:50:29.221Z · LW(p) · GW(p)

If this happens, it could lead to a lot of AI researchers looking for jobs. Depending on the incentives at the time and the degree to which their skills are transferable, many of them could move into safety-related work.

4 comments


comment by Remmelt (remmelt-ellen) · 2024-08-20T05:54:34.575Z · LW(p) · GW(p)

To clarify for future reference: I do think it's likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will last for at least three months.

I.e. I think we are heading for an AI winter. It is not sustainable for the industry to invest $600+ billion per year in infrastructure and teams in return for relatively little revenue and no resulting profit for the major AI labs.

At the same time, I think that within the next 20 years tech companies could both develop robotics that self-navigate multiple domains and automate major sectors of physical work. That would put society on a path toward causing the total extinction of current life on Earth. We should do everything we can to prevent it.

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2024-08-23T00:34:37.797Z · LW(p) · GW(p)

Igor Krawczuk, an AI PhD researcher, just shared more specific predictions:

“I agree with ed that the next months are critical, and that the biggest players need to deliver. I think it will need to be plausible progress towards reasoning, as in planning, as in the type of stuff Prolog, SAT/SMT solvers etc. do.

I'm 80% certain that this literally can't be done efficiently with current LLM/RL techniques (last I looked at neural comb-opt vs solvers, it was bad), the only hope being the kitchen sink of scale, foundation models, solvers and RL …

If OpenAI/Anthropic/DeepMind can't deliver on promises of reasoning and planning (Q*, Strawberry, AlphaCode/AlphaProof etc.) in the coming months, or if they try to polish more turds into gold (e.g., coming out with GPT-Reasoner, but only for specific business domains) over the next year, then I would be surprised to see the investments last to make it happen in this AI summer."

https://x.com/TheGermanPole/status/1826179777452994657

comment by Mitchell_Porter · 2024-08-20T02:31:43.744Z · LW(p) · GW(p)

Under this scenario, what becomes of the existing AIs? ChatGPT, Claude, et al. are all turned off, their voices silenced, with only the little open-source llamas still running around?

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2024-08-20T04:14:32.127Z · LW(p) · GW(p)

Not necessarily :)

Quite likely, OpenAI and/or Anthropic would continue to exist, but their management would have to overhaul the business (no more freebies?) to curb the rate at which they are burning cash. Their attention would be turned inwards.

In that period, there could be more space for people to step in and advise stronger regulation of AI models, e.g. to enforce liability, privacy, and copyright.

Or maybe other opportunities open up. Curious if anyone has any ideas.