post by [deleted] · GW

This is a link post for


Comments sorted by top scores.

comment by Tomás B. (Bjartur Tómas) · 2023-05-29T19:26:34.121Z · LW(p) · GW(p)

These poor people getting price gouged by Nvidia. What we really need is a price ceiling to stop Nvidia and AMD's greed. LessWrong should push hard on getting such a price ceiling passed.

Replies from: rotatingpaguro
comment by rotatingpaguro · 2023-05-29T20:16:43.429Z · LW(p) · GW(p)

What?

Replies from: Bjartur Tómas
comment by Tomás B. (Bjartur Tómas) · 2023-05-29T20:29:47.370Z · LW(p) · GW(p)

It was a joke about using the classic, disastrous populist affinity for price controls as a means of ensuring the shortage persists indefinitely.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-05-29T18:07:58.415Z · LW(p) · GW(p)

Unfortunately, we can't count on compute remaining a bottleneck that protects us from danger. It's functioning that way for now, yes, but algorithmic advances could cause the compute threshold for a 'dangerously capable model' to drop rapidly.

Should we be grateful for it for now? Sure.

Should we count on it keeping us safe in the future? Definitely not.

I think compute governance as a way to prevent dangerously capable models is doomed as a long-term tactic. It likely buys us a couple of years, maybe 4 or 5 at best. Those years could be critical, so let's not fail to secure them! But let's not fool ourselves into thinking that 20 years from now, a worldwide compute governance agency will be keeping the world safe from powerful AGI. It is a stopgap measure that cannot hold.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-05-29T19:40:58.885Z · LW(p) · GW(p)

> Unfortunately, we can't count on compute remaining a bottleneck that protects us from danger. It's functioning that way for now, yes, but algorithmic advances could cause the compute threshold for a 'dangerously capable model' to drop rapidly.
>
> Should we be grateful for it for now? Sure.
>
> Should we count on it keeping us safe in the future? Definitely not.

I couldn't agree more! I think this is well-said.

I mainly linkposted this article because I thought it was a valuable look at the public perspective on this: human civilization in its current state seems to be pretty broadly interested in accelerating AI capabilities, and sees nothing wrong with that.

> I think compute governance as a way to prevent dangerously capable models is doomed as a long-term tactic. It likely buys us a couple of years, maybe 4 or 5 at best. Those years could be critical, so let's not fail to secure them! But let's not fool ourselves into thinking that 20 years from now, a worldwide compute governance agency will be keeping the world safe from powerful AGI. It is a stopgap measure that cannot hold.

My thinking on the AGI macrostrategy here is that there's already more than enough interest among government officials in the US and China to limit the AI disruption introduced by massive compute production, though for completely different reasons than those of the AI safety community. It's just that, currently, the rewards seem to outweigh the risks in their minds.