AGI Timelines in Governance: Different Strategies for Different Timeframes

post by simeon_c (WayZ), AmberDawn · 2022-12-19T21:31:25.746Z · LW · GW · 28 comments

Contents

  Summarization Table
      Probability estimates in the "Promising Strategies" category have to be interpreted as the likelihood that this strategy/consideration is more promising/important under timelines X than timelines Y.
  Introduction
  If AGI is developed before 2030, the following is more likely to be true:
    AGI will be built by an organization that’s already trying to build it (85%)
    Compute will still be centralized at the time AGI is developed (60%)
    National government policy won’t have strong[5] positive effects (70%)
    The best strategies will have more variance (75%)
  If you think that AGI will be developed before 2030, it would make sense to:
    Aim to promote a security mindset in the companies currently developing AI (85%)
    Prioritize targeted outreach to highly motivated young people and senior researchers (80%)
    Avoid publicizing AGI risk among the general public (60%)
    Beware of large-scale coordination efforts (80%)
    Focus on corporate governance (75%)
  If AGI is developed after 2030, the following is more likely to be true:
    Some governments will be in the race (80%)
    More companies will be in the race (90%)
    China is more likely to lead (85%)
    There will be more compute suppliers[12] (90%)
  If you think that AGI will be developed after 2030,  it would make sense to:
    Focus on general community building (90%)
    Build the AI safety community in China (80%)
    Coordinate with national governments (65%)
  Conclusion
28 comments

Summarization Table

| Timelines | Pre-2030 | Post-2030 |
|---|---|---|
| Expectations | AGI will be built by an organization that’s already trying to build it (85%) | Some governments will be in the race (80%) |
| | Compute will still be centralized at the time AGI is developed (60%) | More companies will be in the race (90%) |
| | National government policy won’t have strong positive effects (70%) | China is more likely to lead than pre-2030 (85%) |
| | The best strategies will have more variance (75%) | There will be more compute suppliers[1] (90%) |
| Comparatively More Promising Strategies (under timelines X)[2] | Aim to promote a security mindset in the companies currently developing AI (85%) | Focus on general community building (90%) |
| | Focus on corporate governance (75%) | Build the AI safety community in China (80%) |
| | Target outreach to highly motivated young people and senior researchers (80%) | Coordinate with national governments (65%) |
| | Avoid publicizing AGI risk (60%) | |
| | Beware of large-scale coordination efforts (80%) | |

Probability estimates in the "Promising Strategies" category have to be interpreted as the likelihood that this strategy/consideration is more promising/important under timelines X than timelines Y.

Introduction

Miles Brundage recently argued that AGI timeline discourse might be overrated. [EA · GW] He makes a lot of good points, but I disagree with one thing. Miles says: “I think the correct actions are mostly insensitive to timeline variations.”

Unlike Miles, I think that once timeline differences are greater than a couple of years, the choice of actions does depend on them[3]. In particular, our approach to governance should be very different depending on whether we think that AGI will be developed in ~5-10 years or after that. In this post, I list some of the likely differences between a world in which AGI is developed before ~2030 and one in which it is developed after, and discuss how those differences should affect how we approach AGI governance. I discuss most of the strategies and considerations in relative terms, i.e. I argue why they’re likely to be significantly more crucial under certain timelines than others. I discuss these specific strategies and considerations because I believe they are important for AI governance, or at least likely to be effective under one of the two timelines I am considering.

I chose 2030 as a cut-off point because it is easy to remember and because it seems to make sense to differentiate between the actions that should be prioritized if AGI arrives before 2030 (~5-10 year timelines) and those that should be prioritized if it arrives after 2030 (timelines of 15-20 years and beyond). But perhaps a better way to read this post is ‘the sooner you think AGI will be developed, the more likely my points about the pre-2030 AGI world are to be true’, and vice versa.

 

Epistemic status and reasoning behind publishing this:

Thanks to Nicole Nohemi, Felicity Reddel, Andrea Miotti, Fabien Roger and Gerard van Smeden for the feedback on this post.

If AGI is developed before 2030, the following is more likely to be true:

AGI will be built by an organization that’s already trying to build it (85%)

Building huge AI models takes a lot of accumulated expertise. The organizations currently working on AI rely on huge internal libraries and repositories of tricks that they’ve built up over time. It’s unlikely that a new organization or actor, starting from scratch, could achieve this within a couple of years[4]. This means that if AGI is developed before 2030, it’s likely to be first developed by one of the (<15) companies that are currently working on it. 

In decreasing order:

This is relevant because DeepMind and OpenAI are more concerned about safety than others. They both have alignment teams and their leaders have expressed commitments to safety, whereas (for example) Meta and Amazon seem less interested in safety.

Compute will still be centralized at the time AGI is developed (60%)

The compute supply chain is currently highly centralized at several points. This is partly because, even though selling compute is lucrative, the machines and fabs needed to make chips are extremely expensive, so companies need to make a massive capital investment just to get started. On top of that, the required initial R&D investments are huge.

This is relevant because we can leverage the compute supply chain for AI governance. For example, we could encourage suppliers to put on-chip safety mechanisms in place. However, this is more likely to work if there are fewer companies in the supply chain.

National government policy won’t have strong[5] positive effects (70%)

Governments are slow, and the policy cycle is long. Advocacy efforts usually take years to bear fruit. First, advocates have to raise awareness and shift public opinion in the right direction, and politicians will only take note if their constituents care. If you think that AGI will be developed by 2030, there is likely not enough time to influence national governments in a way that leads them to take strong measures, so governance interventions that rely on national policy or law are less likely to be useful.

I think this matters particularly for the US because the contribution of the US government seems indispensable for most policies that can significantly impact AGI timelines or AGI governance. However, the US government will likely only get involved in X-risk related topics if there is strong support from the public. Achieving the necessary levels of support would require a significant shift in public opinion. Unfortunately, such major shifts probably take more than 7 years and are highly uncertain processes.

The best strategies will have more variance (75%)

If timelines are short, we should be more willing to tolerate variance[6], since we have much less time to explore the possible strategies and can’t wait for slower, less risky strategies to pan out. Timelines seem crucial for calibrating our risk aversion, especially for funders. I think this consideration is one of the most important effects of timelines on macro-strategy.

Here are some decisions on which this should have a significant effect:

If you think that AGI will be developed before 2030, it would make sense to:

Aim to promote a security mindset in the companies currently developing AI (85%)

Some governance strategies involve pushing for a security mindset among AI developers (using outreach) so that they voluntarily decide to do things that make AGI less dangerous. Any researcher at DeepMind, Google Brain, or OpenAI who starts taking AI risks seriously is a huge win because it:

If you think AGI will be developed very soon, these strategies are more likely to be promising since there are still relatively few companies aiming to develop AGI. Later, there will be more such companies, which increases coordination difficulty and decreases the cost-effectiveness of efforts that target companies individually. For example, you might encourage companies to create an alignment team if they don’t have one or, if they do have one, to increase its funding.

Prioritize targeted outreach to highly motivated young people and senior researchers (80%)

If timelines are short, prioritize outreach to senior researchers[8], or to people who’ll be able to make contributions within the next few years, i.e. highly motivated young people[9]. Community building that focuses on undergraduates who want to complete a standard curriculum before working on the problem is likely to have a much lower EV under short timelines[10]. General EA community building would also have much less time to pay off than more targeted AI safety outreach.

Avoid publicizing AGI risk among the general public (60%)

It’s difficult to explain why AI is dangerous without also explaining why it’s powerful. This means that if you try to mitigate risk by raising awareness, it might backfire: you might inadvertently persuade governments to enter the race sooner than they otherwise would. Governments and national defense have a worrying track record of not caring whether a powerful technology they are developing is dangerous. If your timelines are short, it therefore might make sense not to publicize AGI risk to the general public so widely that governments are prompted to enter the race. On the other hand, if your timelines are longer, governments will likely become aware of AGI’s power anyway, and thus it might make more sense to publicize AGI, putting the emphasis on its risks[11].

Note that this advice holds only when governments don’t know a lot about AGI. If AGI is already being discussed or is already an important consideration, then talking about accident risks is likely a good strategy.

Beware of large-scale coordination efforts (80%)

Large-scale coordination efforts involving many actors usually take a lot of time to have effects. Therefore, if you’re relying on such a mechanism in your governance plan for pre-2030 timelines, you should probably begin implementing the plan in the next few years, and thus start building now the coalitions you would need to succeed. Preferring actors that move faster might also be good.

Focus on corporate governance (75%)

There will be more AI companies in the future, and governments will also be in the race. This means that achieving cultural change and coordination among AI companies by leveraging corporate governance, compared to governance that involves governments, is a much less promising strategy under longer timelines than under shorter ones. On the other hand, some of the top labs’ governance teams are genuinely concerned about AGI risks and seem to be acting to make AGI development as safe as possible. Engaging with these actors and ensuring that they have all the tools and ideas they need to actually mitigate the right risks therefore seems promising to me.

If AGI is developed after 2030, the following is more likely to be true:

Some governments will be in the race (80%)

National governments are likely to eventually realize that AGI is incredibly powerful and will try to build it. In particular, national defense organizations may try to develop it. If you believe that AGI will be developed after 2030, it is possible that it will be developed by a government, as governments may have had time to catch up with the organizations currently working on it by that point.

More companies will be in the race (90%)

If AGI is developed later than 2030, it may be developed by a new company that has not yet started building it. Given the number of companies that started racing in 2022, it seems plausible that in 2030 there will be more than 50 companies in the race.

China is more likely to lead (85%)

Chinese companies and the government are currently lagging in AI development, but they’re making progress quite quickly. I think they’re decently likely to catch up to Western companies eventually (I’d put 35% by 2035). The recent export controls on semiconductors may have made that a lot more difficult, but they’ll probably try harder than ever to develop their own chip supply chain. This seems to be a crucial consideration because Chinese AI developers currently don’t care much about safety, and the safety community doesn’t have much influence in China.

There will be more compute suppliers[12] (90%)

Despite the high barriers to entry, there are likely to be more companies at all stages of the compute supply chain in the future, because it has become clear in recent years that compute is hugely important. For example, the Chinese government is currently trying to build its own compute supply chain, and startups such as Cerebras are trying to enter the market. This means that compute governance strategies that rely on coordinating with compute companies will probably be less promising.

If you think that AGI will be developed after 2030,  it would make sense to:

Focus on general community building (90%)

The later AGI is developed, the more useful it is to do community building now, because community building takes a while to bear fruit. If a community builder gets an undergraduate computer scientist interested in AI safety, it may be many years before they make their greatest contributions. Great community builders also recruit and/or empower new community builders, who go on to form their own cohorts, which means that a community builder today might be counterfactually responsible for many new AI safety researchers in 20 years. If you think that AGI won’t be developed for at least 10 years, building the AGI safety community (or the EA community in general) is probably one of the most effective things you can do.

Note that community building is promising even on shorter time scales but is particularly exciting under post-2030 timelines (potentially more than anything else).

Build the AI safety community in China (80%)

If your timelines are longer, AGI is more likely to be developed by the Chinese government or a Chinese company. There is currently not a large EA or AI safety community in China. So if you think AGI will be developed after 2030, you should try to build bridges with Chinese ML researchers and AI developers[13]. It’s especially important not to frame AI governance questions adversarially, as ‘US vs China’, as this could make it harder for the US and European safety communities to build alliances with Chinese developers. AI safety might then become politicized as ‘an annoying thing that domineering Americans are trying to impose on us’ rather than as common sense.

Coordinate with national governments (65%)

This is a more promising strategy if your timelines are longer, because national governments are more likely to be both developing AGI themselves and generally interested in AGI policy. One way you might be able to have some influence on AGI governance in national governments is by being a civil servant or politician; another is to become a recognized expert in AGI governance in the relevant country.

I am unsure whether theories of change that rely on compliance mechanisms will be more or less effective after 2030. The lengthy process of policy development, including setting standards and establishing an audit system that prevents loopholes, suggests that compliance mechanisms may be more effective after 2030. However, the possibility that China may be in a leadership position could mean that compliance mechanisms would rely heavily on the Brussels effect, which is not very reliable.

I would say that post-2030 timelines probably favor these theories of change, but not very confidently.

Conclusion

To summarize, whether you have a 5-10 year timeline or a 15-20 year timeline changes the strategic landscape in which we operate and thus changes some of the strategies we should pursue.

Under pre-2030 timelines:

I'm looking forward to reading your comments and disagreements on this important topic. I'm also happy to have a call if you want to talk about it more in depth (https://calendly.com/simeon-campos/). 

 

 

This post was written collaboratively by Siméon Campos and Amber Dawn Ace. The ideas are Siméon’s; Siméon explained them to Amber, and Amber wrote them up. Then Siméon partly rewrote the post on that basis. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise.

If you would be interested in working with Amber to write up your ideas, fill out this form.

 

  1. ^

     This is a prediction about the number of suppliers that represent more than 1% of the market they operate in, not the size of the market or the total production. Some events could lead to some supply chain disruptions that could overall decrease the total production of chips.

  2. ^

     Probability estimates in this category have to be interpreted as the likelihood that this strategy/consideration is more promising/important under timelines X than timelines Y.

  3. ^

     Naturally, if timelines turn out to be longer, the same "couple of years" estimation differences make a smaller difference in what actions would be best.

  4. ^

     Main caveat: recent startups such as Adept.ai and Cohere.ai were built by team leads or major researchers from leading labs. Thanks to their expertise, they’re fairly likely to reach the state of the art in at least one subfield of deep learning. That said, most of these organizations are unlikely to have the compute and money that OpenAI and DeepMind have.

  5. ^

     By strong, I mean measures in the reference class of “Constrain labs to airgap and box their SOTA models while they train them”.

  6. ^

     In the exploration vs. exploitation dilemma, you should start exploiting earlier and thus tolerate a) more downside risk and b) a higher chance of not having chosen the maximum.

  7. ^

     And wants to contribute to solving alignment.

  8. ^

     The senior researchers who are most relevant are probably those working in top labs and those who are highly regarded in the ML community. Reaching them is much less tractable than reaching young people, but over the next 5 years it’s probably at least 10 times more valuable to have a senior researcher start caring about AI safety than a junior one. Thus, I’d expect this intervention to be highly valuable under short timelines.

  9. ^

     Obviously, how talented the people are matters a lot. I mostly want to underline the fact that for someone to start contributing in the next couple of years, the most important factor is probably motivation.

  10. ^

     Note that under post-2030 timelines, the effect of having a lot more PhD students in AI safety in the next few years is probably quite high, mostly due to the cultural effect of making AI safety legible and a big thing in academia.

  11. ^

     One key consideration here is the medium you use to publicize these risks. AI alignment is a very complex problem, so you need to find media that maximize the amount of complexity you can successfully transmit. Movies seem to be a promising avenue in that respect.

  12. ^

     This is a prediction about the number of suppliers that represent more than 1% of the market they operate in, not the size of the market or the total production. Some events could lead to some supply chain disruptions that could overall decrease the total production of chips.

  13. ^

     Note that it’s recommended to talk to people with experience on the topic if you want to do that.

28 comments

Comments sorted by top scores.

comment by Alex Lawsen (alex-lszn) · 2022-12-20T09:10:35.989Z · LW(p) · GW(p)

[crossposting my comment from the EA forum as I expect it's also worth discussing here]

whether you have a 5-10 year timeline or a 15-20 year timeline

Something that I'd like this post to address, and that it doesn't, is that having "a timeline" rather than a distribution seems ~indefensible given the amount of uncertainty involved. People quote medians (or modes, and it's not clear to me that they reliably differentiate between these) ostensibly as a shorthand for their entire distribution, but then discussion proceeds based only on the point estimates.

I think a shift of 2 years in the median of your distribution looks like a shift of only a few % in your P(AGI by 20XX) numbers for all 20XX, and that means discussion of what people who "have different timelines" should do is usually better framed as "what strategies will turn out to have been helpful if AGI arrives in 2030".
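
To illustrate with a minimal sketch (the lognormal shape and the parameters below are purely illustrative assumptions, not anyone's actual forecast): shifting the median of a reasonably wide arrival distribution by two years only moves the cumulative probabilities by a few percentage points.

```python
# Minimal sketch: compare P(AGI by year X) under two forecasts whose medians
# differ by two years. The distribution shape and sigma are illustrative assumptions.
from scipy import stats

def p_agi_by(year, median_year, sigma=0.8, now=2023):
    """P(AGI arrives before `year`), modeling years-from-now as lognormal."""
    scale = median_year - now  # a lognormal's median equals its scale parameter
    return stats.lognorm.cdf(year - now, s=sigma, scale=scale)

for target in (2030, 2035, 2040, 2050):
    a = p_agi_by(target, median_year=2038)
    b = p_agi_by(target, median_year=2040)
    print(f"P(AGI by {target}): median 2038 -> {a:.2f}, median 2040 -> {b:.2f}, diff {a - b:+.2f}")
```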

While this doesn't make discussion like this post useless, I don't think this is a minor nitpick. I'm extremely worried by "plays for variance", some of which are briefly mentioned above (though far from the worst I've heard). I think these tend to look good only on worldviews which are extremely overconfident and treat timelines as point estimates/extremely sharp peaks. More balanced views, even those with a median much sooner than mine, should typically realise that the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don't. This is in addition to the usual points about co-operative behaviour when uncertain about the state of the world, adverse selection, the unilateralist's curse etc.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-20T10:05:05.026Z · LW(p) · GW(p)

[Cross-posting my answer]
Thanks for your comment! 
That's an important point that you're bringing up. 

My sense is that at the movement level, the consideration you bring up is super important. Indeed, even though I have fairly short timelines, I would like funders to hedge for long timelines (e.g.  fund stuff for China AI Safety). Thus I think that big actors should have in mind their full distribution to optimize their resource allocation. 

That said, despite that, I have two disagreements: 

  1. I feel like at the individual level (i.e. people working in governance for instance, or even organizations), it's too expensive to optimize over a distribution and thus you should probably optimize with a strategy of "I want to have solved my part of the problem by 20XX". And for that purpose, identifying the main characteristics of the strategic landscape at that point (which this post is trying to do) is useful.
  2. "the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don't." I disagree with this statement, even at the movement level. For instance, I think that the trade-off of "should we fund this project which is not the ideal one but still quite good?" is one that funders often encounter, and I would expect funders to have more risk aversion than necessary, because when you're not highly time-constrained it's probably the best strategy (i.e. in every field except AI safety, it's probably a way better strategy to trade off a couple of years against better founders).

 

Finally, I agree that "the best strategies will have more variance" is not good advice for everyone. The reason I decided to write it rather than not is that I think the AI governance community tends to have too high a degree of risk aversion (which is a good feature in their daily job), which mechanically penalizes a decent number of actions that are way more useful under shorter timelines. 

comment by Evan R. Murphy · 2022-12-20T09:22:40.269Z · LW(p) · GW(p)

I've heard people talk vaguely about some of these ideas before, but this post makes it all specific, clear and concrete in a number of ways. I'm not sure all the specifics are right in this post, but I think the way it's laid out can help advance the discussion about timeline-dependent AI governance strategy. For example, someone could counter this post with a revised table that has modified percentages and then defend their changes.

comment by peterslattery · 2022-12-19T22:04:28.825Z · LW(p) · GW(p)

Thanks for writing this up Simeon, it's given me a lot to think about. The table is particularly helpful.

comment by Igor Ivanov (igor-ivanov) · 2022-12-20T21:04:04.535Z · LW(p) · GW(p)

First, your article is very insightful and well-structured, and I totally like it.

But there is one thing that bugs me.

I am new to the AI alignment field, and recently I realized (maybe mistakenly) that it is very hard to find a long-term, financially stable, full-time job in AI field-building. 

To me, it basically means that only a tiny number of people consider AI alignment important enough to pay money to decrease P(doom). And at the same time, here we are talking about the possibility of doom within the next 10 or 20 years. To me it is all a bit crazy.

I also think that sooner or later, as AIs become more and more capable, either some large Chernobyl-like tragedy caused by AI will happen, or some AI will become so powerful that it will horrify people. In my opinion, the probability of that is very high. I already see how ChatGPT has spread some fear. And fear might spread like wildfire. If this happens too late for governments to react thoughtfully, it will introduce a large amount of risk and uncertainty. In my opinion, too much risk and uncertainty.

So, in my opinion, even if we educate the public and promote government regulation, and AGI appears before 2030, government policies might suck. But if we don't do it, they might suck much more, which is even more dangerous.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-20T21:15:52.239Z · LW(p) · GW(p)

Thanks for your comment! 

I see your point about fear spreading causing governments to regulate. I basically agree that if that's what happens, it's good to be in a position to shape the regulation in a positive way, or at least to try to. I'm still more optimistic about corporate governance, which seems more tractable than policy governance to me. 

comment by Karl von Wendt · 2022-12-20T10:35:05.022Z · LW(p) · GW(p)

I strongly disagree with "Avoid publicizing AGI risk among the general public" (disclaimer: I'm a science fiction novelist about to publish a novel about AGI risk, so I may be heavily biased). Putin said in 2017 that "the nation that leads in AI will be the ruler of the world". If anyone who could play any role at all in developing AGI (or uncontrollable AI [LW · GW] as I prefer to call it) isn't trying to develop it by now, I doubt very much that any amount of public communication will change that. 

On the other hand, I believe our best chance of preventing or at least slowing down the development of uncontrollable AI is a common, clear understanding of the dangers, especially among those who are at the forefront of development. To achieve that, a large amount of communication will be necessary, both within development and scientific communities and in the public. 

I see various reasons for that. One is the availability heuristic: People don't believe there is an AI x-risk [EA · GW] because they've never seen it happen outside of science fiction movies and nobody but a few weird people in the AI safety community is talking seriously about it (very similar to climate change a few decades ago).  Another reason is social acceptance: As long as everyone thinks AI is great and the nation with the most AI capabilities wins, if you're working on AI capabilities, you're a hero. On the other hand, if most people think that strong AI poses a significant risk to their future and that of their kids, this might change how AI capabilities researchers are seen, and how they see themselves. I'm not suggesting disparaging people working at AI labs, but I think working in AI safety should be seen as "cool", while blindly throwing more and more data and compute at a problem and see what happens should be regarded as "uncool". 

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-20T11:47:27.113Z · LW(p) · GW(p)

Thanks for your comment! 

First, keep in mind that when people in industry and policymaking talk about "AI", they usually have mostly non-deep-learning or vision deep learning techniques in mind, simply because they mostly don't know the academic ML field but have heard that "AI" is becoming important in industry. So this sentence is little evidence that Russia (or any other country) is trying to build AGI, and I'm at ~60% that Putin wasn't thinking about AGI when he said that. 

If anyone who could play any role at all in developing AGI (or uncontrollable AI [LW · GW] as I prefer to call it) isn't trying to develop it by now, I doubt very much that any amount of public communication will change that. 

I think that you're deeply wrong about this. Policymakers and people in industry, at least until ChatGPT, had no idea what was going on (e.g. at the AI World Summit two months ago, very few people even knew about GPT-3). SOTA large language models are not really properly deployed, so nobody cared about them or even knew about them (till ChatGPT at least). The level of investment right now in top training runs probably doesn't go beyond $200M. The GDP of the US is $20 trillion; likewise for China. Even a country like France could unilaterally put $50 billion into AGI development and accelerate timelines quite a lot within a couple of years. 

Even post-ChatGPT, people are very bad at projecting what it means for the coming years and still have a prior that human intelligence is very specific and can't be beaten, which prevents them from realizing the full power of this technology.

I really strongly encourage you to go talk to actual people from industry and policy to get a sense of their knowledge on the topic. And I would strongly recommend not publishing your book until you have done that. I also hope that a lot of people who have thought about these issues have proofread your book, because it's the kind of thing that could really increase P(doom) substantially.

I think that to make your point, it would be easier to defend the line that "even if more governments got involved, that wouldn't change much". I don't think that's right because if you gave $10B more to some labs, it's likely they'd move way faster. But I think that it's less clear. 

a common, clear understanding of the dangers

I agree that it would be something good to have. But the question is: is it even possible to have such a thing? 

I think that within the scientific community, it's roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think that it's ~impossible. Climate change, a problem which is much easier to understand and explain, is already way too complex for the general public to have a good idea of what the risks are and which solutions to those risks are promising (e.g. many people's top priorities are to eat organic food, recycle, and decrease plastic consumption).

I agree that communicating with the scientific community is good, which is why I said that you should avoid publicizing only among "the general public". If you really want to publish a book, I'd recommend targeting the scientific community, which is a very different audience from the general public. 

 

"On the other hand, if most people think that strong AI poses a significant risk to their future and that of their kids, this might change how AI capabilities researchers are seen, and how they see themselves"

I agree with this theory of change, and I think that it points a lot more towards "communicate in the ML community" than "communicate towards the general public". Publishing great AI capabilities is mostly cool for other AI researchers and not that much for the general public. People in San Francisco (where most of the AGI labs are) also don't care much about the general public and whatever it thinks; the subculture there, and what is considered to be "cool", is really different from what the general public thinks is cool. As a consequence, I think they mostly care about what their peers think of them. So if you want to change the incentives, I'd recommend focusing your efforts on the scientific and tech communities. 

Replies from: Karl von Wendt
comment by Karl von Wendt · 2022-12-22T10:49:35.642Z · LW(p) · GW(p)

Policymakers and people in industry, at least till ChatGPT had no idea what was going on (e.g at the AI World Summit, 2 months ago very few people even knew about GPT-3). SOTA large language models are not really properly deployed, so nobody cared about them or even knew about them (till ChatGPT at least).

As you point out yourself, what makes people interested in developing AGI is progress in AI, not the public discussion of potential dangers. "Nobody cared about" LLMs is certainly not true - I'm pretty sure the relevant people watched them closely. That many people aren't concerned about AGI or doubt its feasibility by now only means that THOSE people will not pursue it, and any public discussion will probably not change their minds. There are others who think very differently, like the people at OpenAI, Deepmind, Google, and (I suspect) a lot of others who communicate less openly about what they do.

I agree that [a common understanding of the dangers] would be something good to have. But the question is: is it even possible to have such a thing? 

I think that within the scientific community, it's roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think that it's ~impossible.

I don't think you can easily separate the scientific community from the general public. Even scientific papers are read by journalists, who often publish about them in a simplified or distorted way. Already there are many alarming posts and articles out there, as well as books like Stuart Russell's "Human Compatible" (which I think is very good and helpful), so keeping the lid on the possibility of AGI and its profound impacts is way too late (it was probably already too late when Arthur C. Clarke wrote "2001: A Space Odyssey"). Not talking about the dangers of uncontrollable AI for fear that this may lead to certain actors investing even more heavily in the field is both naive and counterproductive in my view.

And I would strongly recommend not publishing your book as long as you haven't done that.

I will definitely publish it, but I doubt very much that it will have a large impact. There are many other writers out there with a much larger audience who write similar books.

I also hope that a lot of people who have thought about these issues have proofread your book because it's the kind of thing that could really increase P(doom) substantially.

I'm currently in the process of translating it to English so I can do just that. I'll send you a link as soon as I'm finished. I'll also invite everyone else in the AI safety community (I'm probably going to post an invite on LessWrong).

Concerning the Putin quote, I don't think that Russia is at the forefront of development, but China certainly is. Xi has said similar things in public, and I doubt very much that we know how much they currently spend on training their AIs. The quotes are not that relevant, though; I just mentioned them to make the point that there is already a lot of discussion about the enormous impact AI will have on our future. I really can't see how discussing the risks should be damaging, while discussing the great potential of AGI for humanity should not.

comment by konstantin (konstantin@wolfgangpilz.de) · 2022-12-20T11:41:26.319Z · LW(p) · GW(p)

Thank you for writing this up. I think I agree with the general direction of your takes, but you imply high certainty that I often don't share. This may lead people unfamiliar with the complexity of AI governance to update too strongly.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-20T11:57:00.111Z · LW(p) · GW(p)

Have you read note 2? If note 2 were made more visible, would you still think that my claims imply too high a degree of certainty? 

Replies from: konstantin@wolfgangpilz.de
comment by konstantin (konstantin@wolfgangpilz.de) · 2022-12-22T16:45:01.046Z · LW(p) · GW(p)

I didn't read it, this clarifies a lot! I'd recommend making it more visible, e.g., putting it at the very top of the post as a disclaimer. Until then, I think the post implies unreasonable confidence, even if you didn't intend to.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-12-31T07:12:07.671Z · LW(p) · GW(p)

I disagree with some of the numbers, but overall quite like this way of framing the situation. I think having three divisions of strategy discussion makes sense here: 1-5 years, 5-10 years, 10-20 years. Also, there is another important axis: are we compute/data limited, so that big labs are the most probable origin, or can a lucky small research group make a dramatic breakthrough? I think the cases and timelines I mention here are plausible enough to be worth planning for.

comment by Faustine Li (faustine-li) · 2022-12-22T17:13:21.995Z · LW(p) · GW(p)

I liked the format, but let me pick on a particular point. What makes you confident that in seven years China will be meaningfully ahead of the West? My intuition is that the West still has the best education and economic centers to drive R&D, and those have significant moats that don't get shaken up that quickly. You're pretty vague about your justifications other than impressive levels of progress. I see it as a "rising tides float all boats" situation where progress is being accelerated everywhere by open sharing, an economically conducive environment for AI research, and availability of compute.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-27T23:59:26.339Z · LW(p) · GW(p)

What I'm confident in is that they're more likely to be ahead then than they are now or within a couple of years. Otherwise, as I said, my confidence is ~35% that China catches up (or becomes better) by 2035, which is not huge. 

My reasoning is that they've been better than the US at optimizing ~everything, mostly because of their centralization and norms (not caring too much about human rights helps with optimizing), which is why I think it's likely that they'll catch up. 

comment by Donald Hobson (donald-hobson) · 2022-12-21T22:34:31.164Z · LW(p) · GW(p)

This is a more promising strategy if your timelines are longer, because national governments are more likely to be both, developing AGI themselves and generally interested in AGI policy.

I am not quite sure why you think this is true. I kind of expect national governments to still be slow, lumbering, and stupid in 2040.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-27T23:54:45.814Z · LW(p) · GW(p)

Mostly because they have a lot of resources and thus can weigh a lot in the race once they enter it. 

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-12-28T00:01:25.259Z · LW(p) · GW(p)

Sure, governments have a lot of resources. What they lack is the smarts to effectively turn those resources into anything. So maybe some people in government think AI is a thing, while others think it's still mostly hype. The government crafts a bill. Half the money goes to artists put out of work by Stable Diffusion. A big section details insurance liability regulations for self-driving cars. Some more funding is sent to various universities. A committee is formed. This doesn't change the strategic picture much.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-28T00:13:17.570Z · LW(p) · GW(p)

I guess I'm a bit less optimistic about the ability of governments to allocate funds efficiently, but I'm not very confident in that. 

A fairly dumb-but-efficient strategy that I'd expect some governments to take is "give more money to SOTA orgs" or "give some core roles to SOTA orgs in your Manhattan Project". That seems likely to me, and it would have substantial effects. 

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-12-28T00:21:17.495Z · LW(p) · GW(p)

They may well have some results. Dumping money on SOTA orgs just bumps compute a little higher (and maybe data, if you are hiring lots of people to make data). 

It isn't clear why SOTA orgs would want to be in a government Manhattan Project. It also isn't clear whether any modern government retains the competence to run one. 

I don't expect governments to do either of these. You generated those strategies by sampling "dumb but effective" strategies. I tried to sample from "most of the discussion got massively sidetracked into the same old political squabbles and distractions." 

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-28T10:42:11.935Z · LW(p) · GW(p)

The idea that EVERY government is dumb and won't figure out a not-too-bad way to allocate its resources toward AGI seems highly unlikely to me. There seem to be many mechanisms by which this could turn out otherwise (e.g. national defense is highly involved and a bit more competent, or the strategy is designed in collaboration with some competent people from the private sector, etc.). 

To be more precise, I'd be surprised if none of these 7 countries had an ambitious plan that meaningfully changed the strategic landscape post-2030: 

  • US 
  • Israel 
  • UK
  • Singapore
  • France
  • China 
  • Germany
comment by konstantin (konstantin@wolfgangpilz.de) · 2022-12-20T11:36:40.944Z · LW(p) · GW(p)

National government policy won’t have strong[5] effects (70%)

This can change rapidly, e.g., if systems suddenly get much more agentic and become more reliable decision-makers or if we see incidents with power-seeking AI systems. Unless you believe in takeoff speeds of weeks, governments will be important actors in the time just before AGI, and it will be essential to have people working in relevant positions to advise them.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-20T11:54:59.307Z · LW(p) · GW(p)

To be honest, I hesitated to decrease the likelihood on that one based on your consideration, but I still think that a 30% chance of strong effects is quite a lot, because, as you mentioned, it requires the intersection of many conditions. 

In particular, you don't mention which interventions you expect from them. If you take the intervention I used as a reference class ("Constrain labs to airgap and box their SOTA models while they train them"), do you think there are measures that are as "extreme" as this, or more so, and that are likely?

What might be misleading in my statement is that it could be understood as "let's drop national government policy", whereas it's more "I think that currently too many people are focused on national government policy and not enough are focused on corporate governance, and that puts us in a fairly bad position for pre-2030 timelines". 

Replies from: Koen.Holtman
comment by Koen.Holtman · 2022-12-20T17:36:29.576Z · LW(p) · GW(p)

I think you are ignoring the connection between corporate governance and national/supra-national government policies. Typically, corporations do not implement costly self-governance and risk management mechanisms just because some risk management activists have asked them nicely. They implement them if and when some powerful state requires them to implement them, requires this as a condition for market access or for avoiding fines and jail-time.

Asking nicely may work for well-funded research labs who do not need to show any profitability, and even in that special case one can have doubts about how long their do-not-need-to-be-profitable status will last. But definitely, asking nicely will not work for your average early-stage AI startup. The current startup ecosystem encourages the creation of companies that behave irresponsibly by cutting corners. I am less confident than you are that Deepmind and OpenAI have a major lead over these and future startups, to the point where we don't even need to worry about them.

It is my assessment that, definitely in EA and x-risk circles, too few people are focussed on national government policy as a means to improve corporate governance among the less responsible corporations. In the case of EA, one might hope that recent events will trigger some kind of update.

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-20T20:52:42.149Z · LW(p) · GW(p)

The points you make are good, especially in the second paragraph. My model is that if scale is all you need, then it's likely that indeed smaller startups are also worrying. I also think that there could be visible events in the future that would make some of these startups very serious contenders (happy to DM about that). 

Having a clear map of who works on corporate governance and who works more towards policy would be very helpful. Is there anything like a "map/post of who does what in AI governance"? 

Replies from: Koen.Holtman
comment by Koen.Holtman · 2022-12-23T10:57:24.507Z · LW(p) · GW(p)

Thanks!

I am not aware of any good map of the governance field.

What I notice is that EA, at least the blogging part of EA, tends to have a preference for talking directly to (people in) corporations when it comes to the topic of corporate governance. As far as I can see, FLI is the AI x-risk organisation most actively involved in talking to governments. But there are also a bunch of non-EA related governance orgs and think tanks talking about AI x-risk to governments. When it comes to a broader spectrum of AI risks, not just x-risk, there are a whole bunch of civil society organisations talking to governments about it, many of them with ties to, or an intellectual outlook based on, Internet and Digital civil rights activism.

comment by konstantin (konstantin@wolfgangpilz.de) · 2022-12-22T16:53:42.850Z · LW(p) · GW(p)

Compute is centralized and thus leaves room for compute governance

[under pre 2030 timelines]

Unfortunately, good compute governance takes time. E.g., if we want to implement hardware-based safety mechanisms, we first have to develop them, convince governments to implement them, and then they have to be put on the latest chips, which take several years to dominate compute. 

So large parts of compute gov will probably take longer to yield meaningful results.
 

(Also note that compute governance likely requires government levers, so this clashes a bit with your other statement.)

Replies from: WayZ
comment by simeon_c (WayZ) · 2022-12-28T00:09:42.154Z · LW(p) · GW(p)

Unfortunately, good compute governance takes time. E.g., if we want to implement hardware-based safety mechanisms, we first have to develop them, convince governments to implement them, and then they have to be put on the latest chips, which take several years to dominate compute. 

This is a very interesting point. 

I think that some "good compute governance" such as monitoring big training runs doesn't require on-chip mechanisms but I agree that for any measure that would involve substantial hardware modifications, it would probably take a lot of time.

note that compute governance likely requires government levers, so this clashes a bit with your other statement

I agree that some governments might be involved, but I think that it will look very different from "national government policy". My model of international coordination is that there are a couple of people involved in each government, and what's needed to move the position of these people (and thus essentially of a country) is not comparable with national policy.