Anthropic is further accelerating the Arms Race?
post by sapphire (deluks917)
This is a link post for https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/
Anthropic is raising even more funds and the pitch deck seems scary. A choice quote from the article:
“These models could begin to automate large portions of the economy,” the pitch deck reads. “We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”
This frontier model could be used to build virtual assistants that can answer emails, perform research and generate art, books and more, some of which we have already gotten a taste of with the likes of GPT-4 and other large language models.
Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with “tens of thousands of GPUs.”
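To make the article's "computation time depends on the speed and scale of the system" point concrete, here is a rough back-of-envelope sketch. The FLOP budget and "tens of thousands of GPUs" come from the deck as quoted; the per-GPU throughput and utilization figures are illustrative assumptions (roughly an A100 at BF16 with a typical training-efficiency factor), not numbers from the article.

```python
# Back-of-envelope: how long does ~1e25 FLOP of training take on
# "tens of thousands of GPUs"? Hardware numbers below are assumptions.
flop_budget = 1e25       # training compute figure quoted from the deck
num_gpus = 10_000        # low end of "tens of thousands"
peak_flops = 3.12e14     # per-GPU peak FLOP/s, roughly an A100 at BF16
utilization = 0.4        # assumed fraction of peak actually achieved

seconds = flop_budget / (num_gpus * peak_flops * utilization)
print(f"~{seconds / 86_400:.0f} days")  # on the order of a few months
```

Under these assumptions the run takes roughly three months; doubling the cluster size or the per-GPU throughput halves it, which is why the deck's cluster scale matters as much as the FLOP count.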
Comments sorted by top scores.
comment by Tamsin Leake (carado-1) ·
2023-04-07T11:54:06.375Z · LW(p) · GW(p)
terrible news, sucks to see even more people speedrunning the extinction of everything of value.
reminder that if you destroy everything first you don't gain anything. everything is just destroyed earlier.
Replies from: gerald-monroe
↑ comment by Gerald Monroe (gerald-monroe) ·
2023-04-07T17:10:48.600Z · LW(p) · GW(p)
Note that you could have made this exact claim when people were racing to build large particle accelerators. You don't have evidence that AGI will cause such extinction, only that it is a possibility. In the same way, an accelerator could have caused black holes or vacuum collapse.
Replies from: ryan_b, lahwran
↑ comment by ryan_b ·
2023-04-08T00:29:55.316Z · LW(p) · GW(p)
People could, and people did. We went ahead with particle accelerators because we had a lot of people with relevant expertise soberly assess the risks and concluded with high confidence that they were safe to run.
We had strong causal evidence particle accelerators wouldn’t cause extinction. Where’s the equivalent for AGI?
comment by Zach Stein-Perlman ·
2023-04-06T23:50:19.369Z · LW(p) · GW(p)
Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today.
This doesn't make sense. GPT-4 used around 2*10^25 FLOP, someone estimated.
Replies from: dsj
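The arithmetic behind this objection is worth spelling out. Taking the deck's figure at face value and using the external ~2×10^25 FLOP estimate for GPT-4 cited above (an unofficial community estimate, not a confirmed number):

```python
# Sanity check on the compute figures quoted above.
anthropic_claim = 1e25  # FLOP for the planned frontier model, per the deck
gpt4_estimate = 2e25    # FLOP for GPT-4, unofficial external estimate

ratio = anthropic_claim / gpt4_estimate
print(f"Planned model / GPT-4 estimate: {ratio:.1f}x")
# 0.5x -- roughly the same order of magnitude, not "several orders
# of magnitude larger than even the biggest models today"
```

If both figures are even roughly right, the planned model would be comparable in scale to GPT-4, so either the 10^25 figure or the "several orders of magnitude" framing in the article must be off.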
comment by Konstantin P (konstantin-pilz) ·
2023-04-07T10:56:22.706Z · LW(p) · GW(p)
Please don't call it an arms race or it might become one. (Let's not spread that meme to onlookers)
(This is just about the wording, not the content.)
Replies from: lahwran, gerald-monroe
↑ comment by Gerald Monroe (gerald-monroe) ·
2023-04-07T17:11:50.974Z · LW(p) · GW(p)
Would the Cold War not have been a cold war if it hadn't been called that? Your suggestion is useless. The dynamics of the game make it an arms race.
Replies from: konstantin-pilz, lahwran
↑ comment by Konstantin P (konstantin-pilz) ·
2023-04-11T12:28:46.921Z · LW(p) · GW(p)
The way we communicate changes how people think. If people currently think of AI as normal competition but then come to see it as worth racing toward powerful systems, we may give them the intention to race. Worse, we might draw in additional actors such as the DOD, which would accelerate things even further.
↑ comment by the gears to ascension (lahwran) ·
2023-04-07T18:33:01.112Z · LW(p) · GW(p)
you've really caught a nasty case of being borged by an egregore. you might want to consider tuning yourself to be less adversarial about it. I don't think you're wrong, but you've got ape-specific stuff going on; to me, someone who disagrees on the object level anyway, it seems like you're reducing the rate of useful communication by structuring your responses to have mutual information with your snark subnet. though of course I'm maybe doing it back just a little.
comment by Qumeric (valery-cherepanov) ·
2023-04-09T08:04:12.639Z · LW(p) · GW(p)
It is easy to understand why such news could increase P(doom) even further for people with a high prior P(doom).
But I am curious about the following question: what if an oracle had told us that P(doom) was 25% before the announcement (suppose it was not clear to the oracle what strategy Anthropic would choose; it was inherently unpredictable due to quantum effects or whatever)?
Would it still increase P(doom)?
What if the oracle said P(doom) is 5%?
I am not trying to make any specific point, just interested in what people think.
comment by Connor Williams ·
2023-04-07T02:07:58.167Z · LW(p) · GW(p)
Anthropic is, ostensibly, an organization focused on safe and controllable AI, which makes this arms race concerning. We've already seen OpenAI take this route; it seems like the easy route to take. This pitch deck sure sounds like capabilities, not alignment/safety.
Over the past month, reinforced every time I read something like this, I've come to firmly believe that political containment is a more realistic strategy, with a much greater chance of success, than focusing purely on alignment. Even compared to December 2022, things are accelerating dramatically: it took only a few weeks between the release of GPT-4 and the development of AutoGPT, which is crudely agentic. Capabilities starts with a pool of people OOMs larger than alignment's, and as money pours into the field at ever-growing rates (toward capabilities, of course, because that's where the money is), it's going to be really hard for alignment folks (whom I deeply respect) to keep pace. I believe this year is the crucial moment for persuading the general populace that AI needs to be contained, and for doing so effectively, because if we use poor strategies that backfire, we may have missed our best chance.