comment by the gears to ascension (lahwran) · 2023-04-16T10:11:55.230Z · LW(p) · GW(p)
I upvoted to zero because this is a reasonable idea, but I wouldn't upvote further right now: I don't see how this strongly impacts the superintelligence safety timeline. It doesn't seem like a particularly high-impact path from my current context, since it isn't reliable in any meaningful sense even at current models' capability level.
comment by Bary Levy (bary-levi) · 2023-04-16T10:47:57.671Z · LW(p) · GW(p)
I actually don't think it has much impact on superintelligence. I shared this mostly because I thought it was a cool idea that we can implement now and that can later be turned into a policy. Compared to existing policy proposals that don't limit training or usage, I think this can have a much larger impact.