Posts

Why was the AI Alignment community so unprepared for this moment? 2023-07-15T00:26:29.769Z

Comments

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-18T22:02:46.735Z · LW · GW

Thank you for the reply. This has been an important takeaway from this post: there are significant groups (or at least informal networks) doing meaningful work that don't congregate primarily on LW or Twitter. As I said in another comment, that is encouraging! I wish this were more explicit knowledge within LW; it might give things more of a sense of hope around here.

The first question that comes to mind: is there any sense of direction on policy proposals that might actually have a chance of getting somewhere? For example, does something like "regulating card production" have momentum, or anything like that?

Are the policy proposals floating around even the kind that would help with not killing everyone? Or is it more "Mundane Utility" type stuff, to steal Zvi's term?

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-18T14:27:29.917Z · LW · GW

This is fantastic information, thank you for taking the time.

One of my big takeaways from all of the comments on this post is a big update to my understanding of the "AI Risk" community: LW was not actually the epicenter, and there were significant efforts being made elsewhere that didn't necessarily circle back to LW.

That is very encouraging actually!

The other big update is what you say: There were just so few people with the time and ability to work on these things.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T20:56:35.526Z · LW · GW

Someone else said something similar about the basement possibility, which I did not know.

It does raise an interesting question, though: even if it wasn't clear until GPT, wouldn't that still have left something like 2-3 years?

Granted, that is not 10-20 years.

It seems we all, collectively, did not update nearly enough on GPT-2?

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T20:53:42.449Z · LW · GW

Point taken! This I just plain did not know and I will update based on that.

It does not make sense to focus on public policy if the basement guy is the primary actor.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T20:27:59.102Z · LW · GW

The UK funding is far and away the biggest win to date, no doubt.

And all this is despite the immediately-preceding public relations catastrophe of FTX!


Do you feel that FTX and EA are that closely tied in the public mind, and that FTX was a major setback for AI alignment? That is not my model at all.

We all know they are inextricably tied, but I suspect that if you asked the very people in those same polls whether they knew SBF supported AI risk research, they wouldn't know or care.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T20:18:55.698Z · LW · GW

I've added an Edit to the post to include that right up front.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T20:10:05.370Z · LW · GW

I asked this of another commenter, but I will ask you too:

Do you feel it is accurate to say that many or most people working on this (including and especially Eliezer) considered nuts-and-bolts alignment work to be the only worthwhile path, given what info was available at the time?

And that wide-scale public persuasion / Overton window shifting / policymaking was not likely to matter, as most scenarios were Foom-based?

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T20:07:04.772Z · LW · GW

You reminded me of that famous tweet:

Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale 

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus

But more seriously, I think this is a real point that has not been explored enough in alignment circles.

I have encountered a large number of people - in fact, probably almost all people I discuss AI with - whom I would call "normal people": just regular, moderately intelligent people going about their lives, for whom "don't invent a God-like AI" is so obvious it is almost a truism.

It is just patently obvious, based on their mental model of Skynet, the Matrix, etc., that we should not build this thing.

Why are we not capitalizing on that?

This deserves its own post, which I might try to write, but I think it boils down to condescension.

  • LWers know Skynet / the Matrix is not really how it works under the hood
  • How it really works under the hood is really, really complicated
  • Skynet / the Matrix is a poor mental model
  • Using poor mental models is bad; we should not do that, and we shouldn't encourage other people to do that
  • In order to communicate AI risk, we need to simplify it enough to make it accessible to people
  • <produces 5000-word blog post that requires years of implicit domain knowledge to parse>
  • Ironically, most people would be closer to the truth with a Skynet / Matrix model, which is the one they already have installed.

We could win by saying: Yes, Skynet is actually happening, please help us stop this.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T19:53:04.151Z · LW · GW

I'm starting to draw a couple of conclusions for myself from this thread as I get a better understanding of the history.

Do you feel it is accurate to say that many or most people working on this (including and especially Eliezer) considered nuts-and-bolts alignment work to be the only worthwhile path, given what info was available at the time?

And that wide-scale public persuasion / Overton window shifting / policymaking was not likely to matter, as most scenarios were Foom-based?

It is pretty interesting that the previous discussion, in all these years, kind of zoomed in on only that.

Maybe someone more experienced than me will do a post-mortem on why it did not work out like that at all, and why we seem not to have seen that coming or even given it meaningful probability.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T16:56:35.232Z · LW · GW

You say that we have no plan to solve AI risk, so we cannot communicate that plan. That is not the same as not being able to take any beneficial actions.

"We do not know the perfect thing to do, therefore we cannot do anything"

Do we not consider timeline-extending things to be worthwhile?

This is a genuine question: is it the prevailing wisdom that ultimately solving AI X-risk is the only worthwhile path and the only work worthy of pursuing? This seems to have been Eliezer's opinion prior to GPT-3ish. That would answer the questions of my original post.

For example: MIRI could have established, funded, integrated, and entrenched a think tank / policy group in DC with the express goal of being able to make political movement when the time came.

Right now today they could be using those levers in DC to push "Regulate all training runs" or "Regulate Card production". In the way that actually gets things done in DC, not just on Twitter.

Clearly those are not going to solve the X-risk, but it also seems pretty clear to me that at the present moment something like those things would extend timelines.

To answer my own question you might say:

  • Prior to 2022 it was not obvious that the political arena would be a leverage point against AI risk (neither for an ultimate fix nor even for extending timelines).
     
  • MIRI/CFAR did not have the resources to commit to something like that. It was considered but rejected in favor of what we thought were higher-leverage research possibilities.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T16:14:09.914Z · LW · GW

Are these "DC" people you are talking about organized somewhere? Or is this a more hidden / informal type of thing?

I ask because I have seen both Zvi and Eliezer make comments to the effect of: "There is no one special behind the curtain working on this - what you see on Twitter is what there is" (my paraphrasing).

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T16:09:01.788Z · LW · GW

Great websites! 

I find it interesting that you are the second commenter (after Dan H above) to jump in and explicitly say: I have been doing that!

and point to great previous work doing exactly these things. But from my perspective, that work does not seem widely known or supported within the community here (I could be wrong about that).

I am starting to feel that I have a bad map of the AI Alignment/Safety community. My previous impression was that LessWrong / MIRI was mostly the epicenter, and that if much of anything was being done it was coming from there, or at least was well known there. That seems not to be the case - which is encouraging! (I think)

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T16:04:32.977Z · LW · GW

100% agreed - I thought I had flagged the complete hindsight bias by saying that it is obvious only in retrospect.

The post was a genuine attempt to ask why it was not a clear path before.

Comment by Ras1513 on Why was the AI Alignment community so unprepared for this moment? · 2023-07-17T15:58:51.111Z · LW · GW

I completely agree that it made no sense to divert qualified researchers away from actually doing the work. I hope my post did not come across as suggesting that.