Anthropic is further accelerating the Arms Race?
post by sapphire (deluks917) · 2023-04-06T23:29:24.080Z · LW · GW · 22 comments
This is a link post for https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/
Anthropic is raising even more funds and the pitch deck seems scary. A choice quote from the article:
“These models could begin to automate large portions of the economy,” the pitch deck reads. “We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”
This frontier model could be used to build virtual assistants that can answer emails, perform research and generate art, books and more, some of which we have already gotten a taste of with the likes of GPT-4 and other large language models.
Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with “tens of thousands of GPUs.”
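For a rough sense of what a 10^25 FLOP budget means in wall-clock terms on a cluster of “tens of thousands of GPUs”, here is a minimal back-of-envelope sketch; the cluster size, per-GPU throughput, and utilization below are illustrative assumptions, not figures from the deck:

```python
# Back-of-envelope: wall-clock training time for a fixed FLOP budget.
# The FLOP budget and "tens of thousands of GPUs" come from the quoted article;
# the specific cluster size, per-GPU throughput, and utilization are guesses.

flop_budget = 1e25          # training compute cited in the deck (FLOP)
num_gpus = 20_000           # "tens of thousands of GPUs" (assumed value)
flops_per_gpu = 1e15        # ~1 PFLOP/s per accelerator (rough H100-class figure)
utilization = 0.4           # assumed fraction of peak actually achieved in training

effective_flops = num_gpus * flops_per_gpu * utilization   # cluster FLOP/s
seconds = flop_budget / effective_flops
days = seconds / 86_400

print(f"~{days:.0f} days of wall-clock training")  # ~14 days under these assumptions
```

Changing the assumed throughput or utilization shifts the answer by a small constant factor, not by orders of magnitude.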
22 comments
Comments sorted by top scores.
comment by Tamsin Leake (carado-1) · 2023-04-07T11:54:06.375Z · LW(p) · GW(p)
terrible news, sucks to see even more people speedrunning the extinction of everything of value.
reminder that if you destroy everything first you don't gain anything. everything is just destroyed earlier.
↑ comment by [deleted] · 2023-04-07T17:10:48.600Z · LW(p) · GW(p)
Note you could make this exact claim when people were racing to build large particle accelerators. You don't have evidence that AGI will cause such extinction, just that it is a possibility. In the same way, an accelerator could cause black holes or vacuum collapse.
↑ comment by ryan_b · 2023-04-08T00:29:55.316Z · LW(p) · GW(p)
People could, and people did. We went ahead with particle accelerators because we had a lot of people with relevant expertise soberly assess the risks and concluded with high confidence that they were safe to run.
We had strong causal evidence particle accelerators wouldn’t cause extinction. Where’s the equivalent for AGI?
↑ comment by the gears to ascension (lahwran) · 2023-04-07T18:28:32.310Z · LW(p) · GW(p)
you used to be so much more interesting before you fell into the "quit trying to safety" conversation pattern mode collapse
comment by Zach Stein-Perlman · 2023-04-06T23:50:19.369Z · LW(p) · GW(p)
Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today.
This doesn't make sense. GPT-4 used around 2*10^25 FLOP, someone estimated.
↑ comment by dsj · 2023-04-07T01:58:15.761Z · LW(p) · GW(p)
Got a source for this estimate?
↑ comment by Zach Stein-Perlman · 2023-04-07T02:12:33.730Z · LW(p) · GW(p)
Epoch says 2.2e25. Skimming that page, it seems like a pretty unreliable estimate. They say their 90% confidence interval is about 1e25 to 5e25.
↑ comment by dsj · 2023-04-07T01:59:26.412Z · LW(p) · GW(p)
My guess is “today” was supposed to refer to some date when they were doing the investigation prior to the release of GPT-4, not the date the article was published.
↑ comment by Zach Stein-Perlman · 2023-04-07T02:14:47.333Z · LW(p) · GW(p)
Minerva (from June 2022) used 3e24; there's no way "several orders of magnitude larger" was right when the article was being written. I think the author just made a mistake.
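To make the order-of-magnitude comparison explicit, here is a minimal sketch using only the figures quoted in this thread, taken at face value:

```python
import math

# Training-compute figures quoted in this thread (FLOP)
article_figure = 1e25   # "on the order of 10^25 FLOPs" from the article
gpt4_epoch = 2.2e25     # Epoch's GPT-4 estimate (90% CI roughly 1e25 to 5e25)
minerva = 3e24          # Minerva, June 2022

for name, flop in [("GPT-4 (Epoch)", gpt4_epoch), ("Minerva", minerva)]:
    gap = math.log10(article_figure / flop)
    print(f"1e25 vs {name}: {gap:+.2f} orders of magnitude")
# -> 1e25 vs GPT-4 (Epoch): -0.34 orders of magnitude
# -> 1e25 vs Minerva: +0.52 orders of magnitude
```

On these figures, 1e25 sits about half an order of magnitude above Minerva and slightly below the GPT-4 estimate, so "several orders of magnitude larger" doesn't hold either way.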
comment by the gears to ascension (lahwran) · 2023-04-07T00:54:38.461Z · LW(p) · GW(p)
just looks like keeping pace in the arms jog to me. not good news, but not really much of an update either, which is the minimum I want to hear.
comment by Mateusz Bagiński (mateusz-baginski) · 2023-04-07T18:26:38.279Z · LW(p) · GW(p)
Reading this reminds me of the red flags that some people (e.g. Soares [EA · GW]) saw when interacting with SBF and, once shit hit the fan, ruminated over not having taken some appropriate action.
comment by romeostevensit · 2023-04-07T04:31:08.279Z · LW(p) · GW(p)
Not genuinely relevant due to differences in the metrics discussed, but it does call to mind seeing, many years ago, 10^25 FLOPs given as an estimate for the human brain.
↑ comment by DragonGod · 2023-04-07T15:41:07.904Z · LW(p) · GW(p)
For training a brain-like model? Are you talking about bioanchors?
↑ comment by romeostevensit · 2023-04-07T16:06:32.596Z · LW(p) · GW(p)
It was an AI Impacts estimate from probably 2015-16, IIRC.
comment by Konstantin P (konstantin-pilz) · 2023-04-07T10:56:22.706Z · LW(p) · GW(p)
Please don't call it an arms race or it might become one. (Let's not spread that meme to onlookers.) This is just about the wording, not the content.
↑ comment by the gears to ascension (lahwran) · 2023-04-07T18:30:43.160Z · LW(p) · GW(p)
it looks to me like it's behaving like an arms jog: people are keeping up but moving at a finite smooth rate. correctly labeling it does help a little, but mostly it's the actual behavior that matters.
↑ comment by [deleted] · 2023-04-07T17:11:50.974Z · LW(p) · GW(p)
Would the Cold War not be a cold war if it wasn't called that? Your suggestion is useless. The dynamics of the game make it an arms race.
↑ comment by Konstantin P (konstantin-pilz) · 2023-04-11T12:28:46.921Z · LW(p) · GW(p)
The way we communicate changes how people think. So if they currently just think of AI as normal competition but then realize it's worth racing to powerful systems, we may give them the intention to race. And worse, we might get additional actors to join in, such as the DOD, which would accelerate things even further.
↑ comment by the gears to ascension (lahwran) · 2023-04-07T18:33:01.112Z · LW(p) · GW(p)
you've really caught a nasty case of being borged by an egregore. you might want to consider tuning yourself to be less adversarial about it - I don't think you're wrong, but you've got ape-specific stuff going on; to me, someone who disagrees on the object level anyway, it seems like you're reducing the rate of useful communication by structuring your responses to have mutual information with your snark subnet. though of course I'm maybe doing it back just a little.
comment by jacob_cannell · 2023-04-09T07:44:15.641Z · LW(p) · GW(p)
That is only about 300 H100-GPU-years: at roughly 10^15 FLOP/s per H100, 10^25 FLOP takes about 10^10 GPU-seconds, and since 10^9 seconds is about 30 years, that comes out to roughly 10 × 30 ≈ 300 GPU-years.
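A minimal sketch of the same back-of-envelope calculation, assuming roughly 10^15 FLOP/s per H100 and full utilization (both simplifications):

```python
# Sanity check of the ~300 H100-GPU-year figure for a 1e25 FLOP budget.
flop_budget = 1e25          # FLOP figure from the article
flops_per_h100 = 1e15       # ~1 PFLOP/s per H100, assuming full utilization
seconds_per_year = 3.15e7   # roughly one year in seconds

gpu_seconds = flop_budget / flops_per_h100     # 1e10 GPU-seconds
gpu_years = gpu_seconds / seconds_per_year     # ~317 GPU-years
print(f"~{gpu_years:.0f} H100-GPU-years")
```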
comment by Qumeric (valery-cherepanov) · 2023-04-09T08:04:12.639Z · LW(p) · GW(p)
It is easy to understand why such news could increase P(doom) even further for people whose prior P(doom) is already high.
But I am curious about the following question: what if an oracle told us that P(doom) was 25% before the announcement (suppose it was not clear to the oracle what strategy Anthropic would choose; it was inherently unpredictable due to quantum effects or whatever)?
Would it still increase P(doom)?
What if the oracle said P(doom) is 5%?
I am not trying to make any specific point, just interested in what people think.
comment by Connor Williams · 2023-04-07T02:07:58.167Z · LW(p) · GW(p)
Anthropic is, ostensibly, an organization focused on safe and controllable AI. This arms race is concerning. We've already seen this route taken once with OpenAI. Seems like the easy route to take. This press release sure sounds like capabilities, not alignment/safety.
Over the past month, reinforced further every time I read something like this, I've firmly come to believe that political containment is a more realistic strategy, with a much greater chance of success, than focusing purely on alignment. Even compared to December 2022, things are accelerating dramatically: it only took a few weeks between the release of GPT-4 and the development of AutoGPT, which is crudely agentic. Capabilities starts with a pool of people OOMs larger than alignment's, and as money pours into the field at ever-growing rates (toward capabilities, of course, because that's where the money is), it's going to be really hard for alignment folks (whom I deeply respect) to keep pace. I believe this year is the crucial moment for persuading the general populace that AI needs to be contained, and for doing so effectively, because if we use poor strategies and they backfire, we may have missed our best chance.