Will AI Resilience protect Developing Nations?
post by ejk64 · 2025-01-21T15:31:32.378Z
Position Piece: Most of the developing world lacks the institutional capacity to adapt to powerful, insecure AI systems by 2030. Incautious model release could disproportionately affect these regions. Enhanced societal resilience in frontier AI states consequently provides no ‘blank cheque’ for incautious release.
‘We should develop more societal resilience to AI-related harms’ is now a common refrain in AI governance. The argument goes that we should try to reduce the risks of large frontier models as far as possible, but some amount of harm is inevitable, and technological developments will always see potentially harmful models fall outside the regulatory net. The countermeasure, born both of practicality and necessity, is to develop societal resilience through adaptation to avoid, defend against and remedy harms from advanced AI systems.
This is a sensible idea, and one I think more people should work on.
However, even if societal resilience against the harms of advanced AI systems (from here on, ‘societal resilience’) were to improve significantly in frontier AI states, the rest of the world is unlikely to develop it at the same speed. Many societal resilience mechanisms depend on foundations that regions in the developing world do not have. If powerful models were used in these regions, they could still be used to perpetrate significant harms.
So if you take away only two things from this blog, make them these:
- Increased societal resilience in some areas does not 1) preclude the risk of AI models being misused in other areas, and therefore does not 2) provide a blank cheque for those models to be deployed.
- Regulation and deployment decisions based on the resilience of developed-world states, ignoring the vulnerability of nations that cannot implement similar protections, could cause severe harm.
This short exploratory post aims to do three things:
- Examine some foundations of societal resilience and why they may be lacking in the developing world
- Suggest some potential harms arising from policies assuming such resilience, and
- Explore some avenues to mitigate these risks.
This is still a new research area, however, so I’m very keen to hear any feedback you might have.
1. ‘Societal Resilience’ requires strong foundations
Successful implementation also depends on the existence of appropriate institutions for resolving collective action problems, and on organisations’ technical, financial and institutional capacities for monitoring and responding to AI risk.
-- From Bernardi et al., ‘Societal Adaptation to Advanced AI’
The requirements for effectively adapting society to advanced AI are demanding. At a quick glance, here are a few things that might be useful. We don’t yet have enough of these in the developing world, but there has been a start:
- Systems oversight: National reporting and monitoring of AI systems use; good awareness of critical vulnerabilities or misalignment failure modes
- Cyberattack durability: Good enough technical expertise in critical service infrastructure like healthcare, transportation or emergency services, or access to software consultancies with this expertise, to protect against cyberattacks
- Bioattack durability: Good healthcare systems, good quality sanitation, vaccination protocols and pandemic response plans, good supply chains for critical products during periods of quarantine and labor shortages
- Lab Affiliations with Governments: Good personal connections and networks to decision-makers at top labs to work with governments at short notice where lab actions might otherwise undermine security
- Trusted, competent institutions: Government institutions with sufficient talent, high-quality information, and clear responsibilities for dealing with different varieties of AI-related harms
On the other hand, many countries in the developing world lack these foundations. Notably, few of them are straightforwardly technical: they relate to institutional trust, social networks, and civic infrastructure that is famously difficult to just ‘accelerate’ into resilient configurations. Moreover, I worry that many regions are beset by other features like:
- Violence and division: Civil and national wars might be theatres in which militants jerry-rig state-of-the-art multimodal models to support military use cases; terrorists might use generative scientific tools or coding systems to perpetrate cyberattacks on unsecured but entirely critical developing infrastructure (e.g. preventing access to Starlink). This might make AI model misuse more probable than one would expect given …
- Low levels of technical knowledge: National governments might lack the state capacity to benefit from or oversee AI developments in their regions, making it difficult to predict, defend against and remedy emerging threats. Even if governments are well-informed, they might struggle to coordinate a response.
- Poverty and precarity: Once a technical system has been hacked by malware, it might be far more difficult to restore it in an environment with limited technical expertise. If a cyberattack took down the grid in a critical area with fewer electrical engineers, it could take a lot longer to fix the issues.
These are currently tentative beliefs. I intend to a) revise this post as soon as I can find more evidence for/against these claims and b) learn more about successful case studies to better understand how these situations might change over time. If you have strong opinions here, let me know!
I strongly believe that it is essential for the Global South to actively participate in shaping the development of AI and to share in its benefits. However, I am equally concerned about the potential consequences of a company with a myopic frontier-AI-state perspective inadvertently releasing a model that falls into the hands of insurgents in conflict-affected regions, leading to catastrophic outcomes.
Here are some arguments for fast societal resilience building in the developing world I’m not convinced by:
- Some adaptations might be easy to develop: It might be that governments in developing countries can get reasonably far by implementing one or two solutions, e.g. the way that most airports in the world copied the security protocols adopted after 9/11. I can’t think of any that would be enough to deal with the more catastrophic risks from misuse or misalignment, like cyber/bio/chem attacks.
- Some adaptations are hard to develop, but easy to copy: A world in which defensive technologies or protocols are easy to copy would see far less disparity between developed and developing nations. These tend (to my mind) to be code- or protocol-based: e.g. if one country cracked watermarking, it might be a useful epistemic technology that could be rolled out widely straight away (see the sketch after this list for why the detection side, at least, is cheap to copy). Unfortunately, I think most adaptation is not like this: it involves complex infrastructure.
- Some adaptations are hard to develop, but easy to buy: It might be that the future looks a lot like developed-world corporations selling solutions to developing-world governments to protect them against developed-world risk mismanagement. I don’t like this future from a political sovereignty point of view (not to mention that this could be a key engine for the polarisation of the world into US/China blocs), but I can see how it might reduce risks.
Admittedly, there’s a risk of motte-and-bailey arguments with cases like these. ‘The developing world’ is a broad concept, and one risks accidentally using it to conflate India (where some societal adaptation would no doubt work) with the Democratic Republic of the Congo (where I would be a lot more skeptical).
I’m hoping to be directionally correct, but I think the details of these distinctions are important and something I would be keen to explore further. For instance, there might be a band of the developing world where technical literacy is low enough to reduce the risks of misuse to almost nothing, even though that region would in theory be extremely vulnerable to cyberattacks.
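To make the ‘easy to copy’ point concrete, here is a minimal sketch of the detection side of a green-list statistical text watermark, loosely in the style of Kirchenbauer et al. (2023). Everything here (the toy vocabulary, the hashing scheme, the threshold) is my own illustrative assumption, not any lab’s actual scheme; the point is just that a detector like this is a few dozen lines of portable code, unlike hospital cybersecurity or pandemic logistics.

```python
import hashlib
import math

# Toy vocabulary standing in for a real tokeniser's vocabulary.
VOCAB = [chr(c) for c in range(97, 123)]  # 'a'..'z'

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudorandomly split the vocabulary into 'green' tokens, seeded by the previous token."""
    greens = set()
    for tok in VOCAB:
        digest = hashlib.sha256((prev_token + tok).encode()).digest()
        if digest[0] < fraction * 256:
            greens.add(tok)
    return greens

def watermark_z_score(tokens, fraction: float = 0.5) -> float:
    """z-score of the observed green-token count against the chance rate."""
    hits = sum(
        tok in green_list(prev, fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    n = len(tokens) - 1
    expected = n * fraction
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std

# Usage: a z-score near 0 looks like ordinary text; a score of 4+ is
# strong evidence the text came from a watermarked generator.
if __name__ == "__main__":
    sample = list("thequickbrownfox")
    print(round(watermark_z_score(sample), 2))
```

A real deployment would operate over a model’s actual token vocabulary and require the generator to bias sampling toward the green list, but the copyable artefact, the detector, looks much like this.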
2. AI diffusion in the developing world could create notable risks
The risks from powerful AI systems have been well-covered elsewhere, the risks of AI and biosecurity in the global south particularly so. Here, I just want to note two additional stories that I think could result from the premature development of broadly capable, accessible, and unsecured AI in developing regions.
Note that I don’t know whether these will happen, and the benefits might nonetheless outweigh the risks in some cases, but they seem worth being mindful of. I would be grateful for proxies or analogies that could help build these intuitions further.
Developmental Lock-in
Several regions in Africa are experiencing developmental lock-in, where rival parties each try to undermine the other, keeping the level of progress minimal. Unsecured AI systems could exacerbate this.
Consider a regional group that wants to build a local digital banking system. However, any attempt they make is vulnerable to hacking attempts involving AI systems accessed via the dark web, or supplied by external organisations, which enemies in the region employ. This means that digital forms of theft predominate until the idea of creating a banking system is scrapped altogether, keeping the region bound to physical currency. Later, an external third party might step in with a fully formed, off-the-shelf, AI-protected banking system, which the region might choose to adopt. However, this means they have outsourced the technical literacy required to build such a system to a third party, and they may be vulnerable to extortion or manipulation by that group in future.
(Note: in my mind, a lot of what it takes to build a preliminary state over the next 50 years will relate to building digital infrastructure, both to coordinate actors in diverse areas and to organise information for decision-making purposes, à la James C. Scott. In that way, I think the nation-building efforts of 2030-2050 are likely to be even more vulnerable to cyberespionage and attack than those of previous generations, and I’m also assuming the base rate of cyberattacks will rise.)
Undermining Nascent Governments
The AI arms race is often presented as an AGI race between the US and China, but I expect an eventual race for AI-supported epistemic disruption tools (as well as military tools) across the rest of the digitalising, war-waging world, one that policymakers in the West might be well advised to delay.
Consider the position of an aspirationally democratic government in an unstable state that has experienced notable civil wars. An insurgent group is using a foreign-developed generative model to generate disinformation about the government across the many hundreds of languages spoken in the region, which proliferates online and sows dissent. Unlike the well-established democratic governments to the north, this government has newly taken power and lacks the connections with large technology corporations needed to stem the flow; nor is its population familiar enough with fake news, many of them having recently obtained internet access for the first time. The government has little choice but to take violent action to suppress the insurrection, lowering it to the brutality of the regime it sought to replace.
3. What can we do about this?
We also need to build adaptation and resilience infrastructure and ensure that better tech diffuses *faster* and *wider*.
-- Lennart Heim (LinkedIn post)
The current trajectory is to devote more people and resources to developing programmes for societal adaptation to advanced AI in developing-world societies, and to tracking how effective these might be. The obvious extension is to explore how viable these policies might be in different areas of the developing world, and whether governments would be able to implement them before e.g. 2030. Both seem worthwhile.
Here are some less obvious things that we might prioritise:
- Prioritise using AI to develop defensive technologies that aren’t AI: AI research into sustainable, resilient crops; AI for biosecurity research, like universal antivirals or wiping out diseases like malaria; and more efficient desalination should all be understood as occupying a distinct and useful niche in societal resilience conversations. Windfall clauses are a good general idea, but regions in the developing world might benefit more from concrete embodiments of that value in technology rather than cash (although cash might be hard to outcompete at a local level).
- Developing partnerships between AI firms and developing-world corporations: Done carefully, this would help to onshore key talent, protect the sovereignty of developing nations, and guard against extractionism and potential extortion. The alternative is sponsoring a world where the only option for developing-world governments is to side with a major AI power like the USA or China, which would not only increase polarisation but create potential mineral-rich military hot zones for AI-assisted conflicts to bloom, Balkan-style (what if the only place with the element for a next-generation AGI chip were a mine in Chinese-controlled Congo? Would you rather decide who gets it through military force, or have the option of markets?). (I am definitely oversimplifying here.)
- Doing more research into what developmental resilience under short AI timelines might look like: In my experience, not enough development studies researchers take a post-2030 future with AGI seriously. It may be that a lot of work should be done now in developing economies to strengthen them against these risks. Research and funding organisations like Open Philanthropy, with dual AI-development teams, should encourage this cross-pollination.
- Doing more research into the ‘offense-defense’ balance and how it might play out in different developing regions: Thinking in this field is still very nascent with respect to the intersection between AI technologies and different types of society. I don’t know enough about the harms of disinformation on non-Western, non-digital audiences; I don’t know enough about how the different levels of cyberinfrastructure a developing nation might be expected to build would vary in their exposure to cyberattacks.
Of course, another thing to do would be to buy time against diffusion by limiting the sharing and leaking of key information about models and imposing better access protocols. I’ve written about that here and here.
Conclusion
The developing world has a huge amount to gain from advanced AI technologies. I'm truly excited by the prospect that advanced AI might bring a degree of growth and economic independence that helps countries in the developing world achieve sovereignty from external powers for the first time in centuries.
However: it would be a shame for the post-AGI era of development studies to be one entirely focused on dealing with the aftershocks of sharing powerful AI tools with warring nations (perhaps not unlike dropping the idea of the nation state on countries that lacked the resilience infrastructure), where states moving towards independence once again find themselves beholden to the developed nations that promise to pull them out of it.
This would not just be a worse world for the global majority to live in; it would also risk being a world far more polarised between AGI superpowers.
Nonetheless, I suspect that there are significant parts of this story left out, and I welcome comments and corrections that could help build this line of thinking or counterarguments against it.
Thanks to Jamie Bernardi for suggestions. All errors/opinions my own.