Will AI Resilience protect Developing Nations?

post by ejk64 · 2025-01-21T15:31:32.378Z · LW · GW

Position Piece: Most of the developing world lacks the institutional capacity to adapt to powerful, unsecured AI systems by 2030. Incautious model release could disproportionately harm these regions. Enhanced societal resilience in frontier AI states consequently provides no blank cheque for incautious release.

‘We should develop more societal resilience to AI-related harms’ is now a common refrain in AI governance. The argument goes that we should try to reduce the risks of large frontier models as far as possible, but some amount of harm is inevitable, and technological developments will always see potentially harmful models fall outside the regulatory net. The countermeasure, born both of practicality and necessity, is to develop societal resilience through adaptation to avoid, defend against and remedy harms from advanced AI systems. 

This is a sensible idea, and one I think more people should work on.

However, even if societal resilience against the harms of advanced AI systems (from here on, ‘societal resilience’) in frontier AI states were to improve significantly, the rest of the world is unlikely to develop it at the same speed. Many societal resilience mechanisms depend on foundations that regions in the developing world do not have. If powerful models were used in these regions, they could still be used to perpetrate significant harms.

So if you take away only two things from this post, let them be these:

  1. Increased societal resilience in some areas neither precludes the risk of AI models being misused in other areas nor, therefore, provides a blank cheque for deploying those models.
  2. Regulation and deployment decisions premised on the resilience of developed-world states, while ignoring the vulnerability of nations that cannot implement similar protections, could cause severe harm.

This short exploratory post aims to do three things:

  1. Examine some foundations of societal resilience and why they may be lacking in the developing world
  2. Suggest some potential harms arising from policies assuming such resilience, and
  3. Explore some avenues to mitigate these risks. 

This is still a new research area, however, so I’m very keen to hear any feedback you might have. 

1. ‘Societal Resilience’ requires strong foundations

Successful implementation also depends on the existence of appropriate institutions for resolving collective action problems, and on organisations’ technical, financial and institutional capacities for monitoring and responding to AI risk. 

-- From Bernardi et al., ‘Societal Adaptation to Advanced AI’

The requirements for effectively adapting society to advanced AI are demanding. At a quick glance, here are a few things that might be useful. We don’t yet have enough of these in the developing world, but there has been a start: 

On the other hand, many countries in the developing world lack these services. Notably, few of these foundations are straightforwardly technical: they rest on institutional trust, social networks, and civic infrastructure that is famously difficult to simply ‘accelerate’ into resilient configurations. Moreover, I worry that many regions are beset by additional challenges, such as:

These are currently tentative beliefs. I intend to a) revise this post as soon as I can find more evidence for/against these claims and b) learn more about successful case studies to better understand how these situations might change over time. If you have strong opinions here, let me know!

I strongly believe that it is essential for the Global South to actively participate in shaping the development of AI and to share in its benefits. However, I am equally concerned about the potential consequences of a company with a myopic frontier-AI-state perspective inadvertently releasing a model that falls into the hands of insurgents in conflict-affected regions, leading to catastrophic outcomes.

Here are some arguments for fast societal resilience building in the developing world I’m not convinced by:

Admittedly, there’s a risk of motte-and-bailey arguments with cases like these. ‘The developing world’ is a broad concept, and one risks accidentally using it to conflate India (where some societal adaptation would no doubt work) with the Democratic Republic of the Congo (where I would be a lot more skeptical).

I’m hoping to be directionally correct, but I think the details of these distinctions are important and something I would be keen to explore further. For instance, there might be a band of the developing world where technical literacy is low enough to reduce the risks of misuse to almost nothing, even though that region would in theory be extremely vulnerable to cyberattacks.

2. AI diffusion in the developing world could create notable risks

The risks from powerful AI systems have been well covered elsewhere, and the risks at the intersection of AI and biosecurity in the Global South particularly so. Here, I just want to note two additional stories that I think could result from the premature diffusion of broadly capable, accessible, and unsecured AI in developing regions.

To be clear, I don’t claim these scenarios will happen, and in some cases the benefits might nonetheless outweigh the risks, but they seem worth being mindful of. I would be grateful for proxies or analogies that could help build these intuitions further.

Developmental Lock-in

Several regions in Africa are experiencing developmental lock-in, in which opposing factions each try to undermine the other, keeping progress minimal. Unsecured AI systems could exacerbate this.

Consider a regional group that wants to build a local digital banking system. Any attempt it makes, however, is vulnerable to hacking attempts involving AI systems accessed via the dark web, or supplied by external organisations, which enemies in the region employ. Digital theft therefore predominates until the idea of creating a banking system is scrapped altogether, keeping the region bound to physical currency. Later, an external third party might step in with a fully formed, off-the-shelf, AI-protected banking system, which the region might choose to adopt. But this means it has outsourced the technical literacy required to build such a system to a third party, and it may be vulnerable to extortion or manipulation by that party in future.

(Note: in my mind, a lot of what it takes to build a preliminary state over the next 50 years will involve building digital infrastructure, both to coordinate actors in diverse areas and to organise information for decision-making purposes, à la James C. Scott. In that sense, I think the nation-building efforts of 2030-2050 are likely to be even more vulnerable to cyberespionage and attack than those of previous generations, and I’m also assuming the base rate of cyberattacks will rise.)

Undermining Nascent Governments

The AI arms race is often framed as an AGI race between the US and China, but I expect there will eventually also be a race for AI-supported epistemic disruption tools (as well as military tools) across the rest of the digitalising, war-waging world, one that policymakers in the West might be well advised to delay.

Consider the position of an aspirationally democratic government in an unstable state that has experienced notable civil wars. An insurgent group uses a foreign-developed generative model to produce disinformation about the government in the many hundreds of languages spoken in the region, which proliferates online and sows dissent. Unlike the well-established democratic governments to the north, this government has only recently taken power and lacks the connections with large technology corporations needed to stem the flow; nor is its population familiar enough with fake news, many having obtained internet access for the first time only recently. It has little choice but to suppress the insurrection violently, lowering it to the brutality of the regime it sought to replace.

3. What can we do about this? 

We also need to build adaptation and resilience infrastructure and ensure that better tech diffuses *faster* and *wider*.

The current trajectory is to devote more people and resources to developing programmes for societal adaptation to advanced AI in developing-world societies, and to track how effective these might be. The obvious extension is to explore how viable such policies would be in different parts of the developing world, and whether governments could implement them before, say, 2030. Both seem worthwhile.

Here are some less obvious things that we might prioritise: 

Of course, another option would be to buy time against diffusion by limiting the sharing and leaking of key information about models and imposing better access protocols. I’ve written about that here and here.

Conclusion

The developing world has a huge amount to gain from advanced AI technologies. I’m genuinely excited by the prospect that advanced AI might bring a degree of growth and economic independence that helps countries in the developing world achieve sovereignty from external powers for the first time in centuries.

However, it would be a shame for the post-AGI era of development studies to be one focused entirely on dealing with the aftershocks of sharing powerful AI tools with warring nations (perhaps not unlike dropping the idea of nation states on countries that lacked the resilience infrastructure), in which states moving towards independence once again find themselves beholden to the developed nations that promise to pull them out of it.

This would not just be a worse world for the global majority to live in; it would also risk being a far more polarised world, divided between AGI superpowers.

Nonetheless, I suspect that there are significant parts of this story left out, and I welcome comments and corrections that could help build this line of thinking or counterarguments against it. 

Thanks to Jamie Bernardi for suggestions. All errors/opinions my own. 
