Comments

Comment by wobblz on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T12:05:02.964Z · LW · GW

My point is that, as you said, when you don't know what the others will do, you take the safest route: the option that is best for you and, most importantly, guaranteed. You serve some years, and yes, you give up the chance of walking away without doing any time, but at least you stay in complete control of your situation. Just imagine a PD with 500 actors... I know what I'd pick.
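The "guaranteed" reasoning here is the maximin rule: pick the action whose worst-case payoff is best, since you can't predict the other actors. A minimal sketch using standard textbook prisoner's dilemma payoffs (illustrative numbers of my own, not anything from the comment):

```python
# Maximin in the prisoner's dilemma: "C" = cooperate, "D" = defect.
# Standard textbook payoffs for the row player (illustrative values):
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker's payoff).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def worst_case(action):
    """Payoff you are guaranteed no matter what the other player does."""
    return min(PAYOFF[(action, other)] for other in ("C", "D"))

# Defecting guarantees at least 1; cooperating can leave you with 0.
assert worst_case("D") == 1
assert worst_case("C") == 0
assert worst_case("D") > worst_case("C")  # maximin picks defection
```

With 500 actors the same logic holds: the guaranteed floor of defecting beats the floor of cooperating, which is why the safe choice wins when you can't coordinate.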

Comment by wobblz on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T11:41:37.022Z · LW · GW

"The moratorium on new large training runs needs to be indefinite and worldwide."

Here lies the crux of the problem. It's a classical prisoner's dilemma, where individuals receive the greatest payoff by betraying the group rather than cooperating. In this case, a bad actor would have the time to leapfrog the competition and be the first to cross the line to super-intelligence, which would be an even worse outcome.
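The payoff structure being described can be sketched with the standard textbook numbers (illustrative values of my own, not from the comment):

```python
# Classical two-player prisoner's dilemma. "C" = cooperate (honor the
# moratorium), "D" = defect (keep training in secret). Payoffs are the
# standard textbook values: T=5, R=3, P=1, S=0, as (me, other) tuples.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_payoff(me, other):
    return PAYOFF[(me, other)][0]

# Whatever the other player does, defecting strictly pays more...
for other in ("C", "D"):
    assert my_payoff("D", other) > my_payoff("C", other)

# ...yet mutual defection (1, 1) leaves both players worse off than
# mutual cooperation (3, 3) -- that is the dilemma.
assert PAYOFF[("D", "D")] < PAYOFF[("C", "C")]
```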

The genie is out of the bottle. Given how (relatively) easy it is to train large language models, it is safe to assume that the field is now uncontrollable: every actor with enough data and processing power can give it a go. You, me, anyone. Contrast this with advanced semiconductor manufacturing, where controlling ASML, the only company sufficiently advanced to build the photolithography machines used to produce cutting-edge chips, amounts to effectively overseeing the entire chip-making industry.