Nitpick: I believe you meant to say last updated Apr 14, not Mar 14.
Anthropic is, ostensibly, an organization focused on safe and controllable AI. This arms race is concerning. We've already seen OpenAI go down this path; it seems like the easy route to take. This press release sure sounds like capabilities, not alignment/safety.
Over the past month, reinforced every time I read something like this, I've firmly come to believe that political containment is a more realistic strategy, with a much greater chance of success, than a pure focus on alignment. Even compared to December 2022, things are accelerating dramatically: it took only a few weeks between the release of GPT-4 and the development of AutoGPT, which is crudely agentic. Capabilities starts with a pool of people OOMs larger than alignment's, and as money pours into the field at ever-growing rates (toward capabilities, of course, because that's where the money is), it's going to be really hard for alignment folks (whom I deeply respect) to keep pace. I believe this year is the crucial moment for persuading the general populace that AI needs to be contained, and for doing so effectively, because if we use poor strategies and they backfire, we may have missed our best chance.
Very well put, and I couldn't agree more with this. I've been reading and thinking more and more about the AI situation over the past year or so, starting when that AI researcher at Google became convinced he had created a conscious being. Things are now accelerating at a shocking pace, and what once seemed like distant speculation is now crucially urgent. Time is of the essence. Moreover, I'm becoming increasingly convinced that AI containment, if it is achieved, will be achieved through political solutions rather than technological ones. Things are just moving too fast, and I don't see how technical alignment will keep up when the pool of alignment researchers is so tiny compared to the enormous number of capabilities researchers.
For those of us deeply worried about AI risk, we're going to have to prepare for a rapid change in the discourse. Public persuasion will be crucial: if we win, it will be through a combination of public persuasion and effective wielding of the levers of power. This means a paradigm shift in how capital-R Rationalists talk about this issue. Rationalists have a very distinctive mode of discourse which, despite its undeniable benefits, is fundamentally incongruent with more typical modes of thinking. We need to be willing to meet people where they are, empathize with their concerns (including worries about AI taking their jobs or making life meaningless, which seem to be quite common), and adopt non-Rationalist methods of persuasion and of wielding power that are known to work. Memetic warfare, one could call it. This will probably feel very dirty to some, and understandably so, but the paradigm has shifted and now is the time.
The methods of Rationality can still be very useful here: they're effective tools for interrogating one's own assumptions and preexisting biases. But people have to be willing and able to use those methods in the service of effective persuasion. Keeping our eyes on the prize will also be crucial: if this new limelight ends up being used to advance other popular Rationalist causes and viewpoints, such as atheism and wild animal suffering, I do not see how this could possibly go well.