GPT2, Five Years On

post by Joel Burget (joel-burget) · 2024-06-05T17:44:17.552Z

This is a link post for https://importai.substack.com/p/import-ai-375-gpt-2-five-years-later

Jack Clark's retrospective on GPT-2 is full of interesting policy thoughts; I recommend reading the whole thing. One excerpt:

I've come to believe that in policy "a little goes a long way" - it's far better to have a couple of ideas you think are robustly good in all futures and advocate for those than to make a confident bet on ideas custom-designed for one specific future - especially if that bet is based on a very confident risk model that sits at some unknowable point in front of you.

Additionally, the more risk-oriented you make your policy proposal, the more you tend to assign a huge amount of power to some regulatory entity - and history shows that once we assign power to governments, they're loath to subsequently give that power back to the people. Policy is a ratchet and things tend to accrete over time. That means whatever power we assign governments today represents the floor of their power in the future - so we should be extremely cautious in assigning them power, because I guarantee we will not be able to take it back.

For this reason, I've found myself increasingly at odds with some of the ideas being thrown around in AI policy circles, like those relating to needing a license to develop AI systems; ones that seek to make it harder and more expensive for people to deploy large-scale open source AI models; shutting down AI development worldwide for some period of time; the creation of net-new government or state-level bureaucracies to create compliance barriers to deployment (I take as a cautionary lesson the Nuclear Regulatory Commission and its apparent chilling effect on reactor construction in the USA); the use of the term 'safety' as a catch-all term to enable oversight regimes which are not - yet - backed up by quantitative risks and well-developed threat models, and so on.

I'm not saying any of these ideas are without redeeming qualities, nor am I saying they don't nobly try to tackle some of the thornier problems of AI policy. I am saying that we should be afraid of the power structures encoded by these regulatory ideas and we should likely treat them as dangerous things in themselves. I worry that the AI policy community that aligns with long-term visions of AI safety and AGI believes that, because it assigns an extremely high probability to a future AGI destroying humanity, any action in the present is justified - after all, if you thought you were fighting for the human race, you wouldn't want to compromise! But I think that along with this attitude comes a certain unwillingness to confront just how unpopular many of these ideas are, or how unreasonable they might sound to people who don't share similar intuitions about the technology and its future - and therefore an ensuing blindness to the costs of counter-reaction to these ideas. Yes, you think the future is on the line and you want to create an army to save the future. But have you considered that your actions naturally create and equip an army from the present that seeks to fight for its rights?

Is there anything I'm still confident about? Yes. I hate to seem like a single-issue voter, but I had forgotten that in the GPT-2 post we wrote "we also think governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems." I remain confident this is a good idea! In fact, in the ensuing years I've sought to further push this idea forward via, variously, Regulatory Markets as a market-driven means of doing monitoring; articulating why and how governments can monitor AI systems; advocating for the US to increase funding for NIST; laying out why Anthropic believes third-party measurement of AI systems is very important for policy and state capacity; and a slew of other things across Senate and Congressional testimonies, participation in things like the Bletchley and Seoul safety summits, helping to get the Societal Impacts and Frontier Red Teams at Anthropic to generate better evidence for public consumption here, and so on. So much of the challenge of AI policy rests on different assumptions about the rate of technological progression for certain specific capabilities, so it seems robustly good in all worlds to have a greater set of people, including those linked to governments, tracking these evolving capabilities. A good base of facts doesn't guarantee a sensible discussion, but it does seem like a prerequisite for one.
