AI coordination needs clear wins

post by evhub · 2022-09-01T23:41:48.334Z · LW · GW · 16 comments

Thanks to Kate Woolverton and Richard Ngo for useful conversations, comments, and feedback.

EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs. So far, however, despite that investment, it doesn’t seem to me like we’ve had that many big coordination “wins” yet. I don’t mean that to imply that our efforts have failed—the obvious other hypothesis is just that we don’t really have that much to coordinate on right now, other than the very nebulous goal of improving our general coordination/cooperation capabilities.

In my opinion, however, our lack of clear wins is actually a pretty big problem—not just because I think there are useful things that we can plausibly coordinate on right now, but also because I expect our lack of clear wins now to limit our ability to get the sort of cooperation we need in the future.

In the theory of political capital, it is a fairly well-established fact that “Everybody Loves a Winner.” That is: the more you succeed at leveraging your influence to get things done, the more influence you get in return. This phenomenon is most thoroughly studied in the context of the ability of U.S. presidents to get their agendas through Congress. Contrary to a naive model that might predict that legislative success uses up a president’s influence, what is actually found is the opposite: legislative success engenders future legislative success, greater presidential approval, and long-term gains for the president’s party.

I think many people who think about the mechanics of leveraging influence don’t really understand this phenomenon: they conceptualize their influence as a finite resource to be saved up over time and then spent down all at once when it matters most. But I think that is just not how it works: if people see you successfully leveraging influence to change things, you come to be seen as someone who has influence, can change things, and can get things done, in a way that gives you more influence in the future, not less.

Of course, you do have to actually succeed to make this work—if you try to spend your influence to make something happen and fail, you get the opposite effect. This suggests the obvious strategy, however, of starting with small but nevertheless clear coordination wins and working our way up towards larger ones—which is exactly the strategy that I think we should be pursuing.[1]


  1. In that vein, in a follow-up post [LW · GW], I will propose a particular clear, concrete coordination task that I think might be achievable soon given the current landscape, would generate a clear win, and that I think would be highly useful in and of itself. ↩︎

16 comments

comment by Rohin Shah (rohinmshah) · 2022-09-03T14:27:17.507Z · LW(p) · GW(p)

EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs.

Wait, really? Can you name some examples? I thought this was mostly being left to the big AI labs. Maybe I should be talking to the people investing these resources.

comment by Kaj_Sotala · 2022-09-02T20:11:08.158Z · LW(p) · GW(p)

The one big coordination win I recall us having was the 2015 Research Priorities document, which among other things talked about the threat of superintelligence. The open letter it was attached to was signed by over 8,000 people, including prominent AI and ML researchers.

And then there's basically been nothing of equal magnitude since then.

comment by Davidmanheim · 2022-09-06T07:13:54.848Z · LW(p) · GW(p)

Is the best way to suggest how to do political and policy strategy, or coordination, to post it publicly on LessWrong? This seems obviously suboptimal, and I'd think that you should probably ask for feedback and look into how to promote cooperation privately first.

That said, I think everything you said here is correct on an object level, and worth thinking about.

Replies from: evhub
comment by evhub · 2022-09-07T02:17:35.878Z · LW(p) · GW(p)

I'd think that you should probably ask for feedback and look into how to promote cooperation privately first.

I have done this also.

comment by Thane Ruthenis · 2022-09-02T02:07:39.322Z · LW(p) · GW(p)

Agreed, though I think there's an additional factor to consider: what goes into ensuring that you succeed. I view it in terms of power expansion and power consolidation.

When you try to get something unusual done, you "stake" some amount of your political capital on this. If you win, you "expand" the horizon of the socially acceptable actions available to you. You start being viewed as someone who can get away with doing things like that, you get an in with more powerful people, people are more tolerant of you engaging in more disruptive action.

But if you try to immediately go for the next, even bigger move, you'll probably fail. You need buy-in from other powerful actors, some of whom have probably only now become willing to listen to you and entertain your more extreme ideas. You engage in politicking with them, arguing with them, feeding them ideas, establishing your increased influence and stacking the deck in your favor. You consolidate your power.

Then you stake it to expand your action-space even more, and so on.

comment by Prometheus · 2022-09-09T11:05:19.820Z · LW(p) · GW(p)

I agree that we need clear wins, but I also think that most people in the AI Safety community agree that we need clear wins. Would you be interested in taking ownership of this, speaking with various people in the community, and writing up a blog post with what you think would characterize a clear action plan, with transparent benchmarks for progress? I think this would be very beneficial, both on the Alignment side and the Governance side.

Replies from: evhub
comment by evhub · 2022-09-09T20:03:16.433Z · LW(p) · GW(p)

In that vein, in a follow-up post [LW · GW], I will propose a particular clear, concrete coordination task that I think might be achievable soon given the current landscape, would generate a clear win, and that I think would be highly useful in and of itself.

comment by Shiroe · 2022-09-01T23:49:50.492Z · LW(p) · GW(p)

I'm looking forward to your follow-up post.

comment by habryka (habryka4) · 2024-01-15T07:30:42.382Z · LW(p) · GW(p)

I disagree with the conclusion of this post, but still found it a valuable reference for a bunch of arguments I do think are important to model in the space.

comment by hunterglenn · 2022-10-24T18:43:32.317Z · LW(p) · GW(p)

Maybe we should see if, out of the population of those who need to coordinate, we can convince several of them to try to pair up and coordinate with one other member of the same population. It's a small start, but it's a start.

comment by Phil Tanny · 2022-09-02T14:32:11.614Z · LW(p) · GW(p)

EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs.

Are you having any luck finding cooperation with Russian, Chinese, Iranian and North Korean labs?

Replies from: lc, Algon, kave
comment by lc · 2022-09-02T17:42:31.273Z · LW(p) · GW(p)

Are you having any luck finding innovative Russian, Chinese, Iranian, or North Korean labs?

comment by Algon · 2022-09-02T19:26:10.469Z · LW(p) · GW(p)

Upvoted because I think this comment is a reasonable question, and shouldn't be getting this many downvotes. Your latter comment in the thread wasn't thought-provoking, as it felt like a non-sequitur, though still not really something I'd downvote. I would encourage you to share your model for why a lack of cooperation with labs in three likely-inconsequential-to-AI states and one likely-consequential-to-AI state implies that well-intended intellectuals in the West aren't likely to have control over the future of AI.

After all, a substantial chunk of the most capable AI companies take alignment risks fairly seriously (DeepMind, OpenAI sort-of), and I mostly think AGI will arrive in a decade or two. Given that Chinese companies don't seem interested in building AGI and still aren't producing research as high-quality as the West's, and given China's slowing economic growth, I think it probable that the West will play a large role in the creation of AGI.

Replies from: ChristianKl
comment by ChristianKl · 2022-09-03T14:22:14.867Z · LW(p) · GW(p)

It's not a reasonable question because the premise of the OP is that there currently isn't any cooperation no matter the nationality. 

It also ignores that the Chinese Communist Party [LW(p) · GW(p)] does take actions in regard to AI safety, and that practically matters more than any cooperation with North Korean AI labs.

There's an odd background framing that implies that the Chinese somehow don't care about the public good while Westerners do. The CCP is perfectly willing to engage in heavy regulation of its tech industry, provided it believes that the regulation will protect the public good. There's much more potential for Chinese actors to not follow economic imperatives when their government believes that doing so is a bad idea.

comment by kave · 2022-09-02T14:50:32.746Z · LW(p) · GW(p)

OP writes that there have been no big cooperation wins, so a fortiori, there have been no big cooperation wins with the countries you mention.

Replies from: Phil Tanny
comment by Phil Tanny · 2022-09-02T15:43:15.148Z · LW(p) · GW(p)

Nor is there likely to ever be such cooperation. Thus, well-intended intellectual elites in the West are not in a position to decide the future of AI. I shoulda just said that.