Protectionism Will Slow the Deployment of AI
post by bgold · 2023-01-07T20:57:11.644Z · LW · GW · 6 comments
I believe I’m more optimistic than the average LWer that regulation will be passed that slows down AI capabilities research. The current capabilities of language models threaten the interests of the politically salient professional managerial middle class, which will cause constituencies in the government to pass protectionist, anti-AI policies.
I don’t think these will be well-targeted or well-designed regulations from an alignment point of view, but they will nevertheless slow capabilities research and deployment and alter the strategic AI development landscape.
—
- ChatGPT was a fire alarm for AI and, contra the original post, people have noticed. The capabilities of ChatGPT, and its accessible interface, mean many people have experienced it or soon will. This is already causing a general reaction among the chattering classes that I’d sum up as an awareness that this is cool and yet also terrifying.
- The government is a combination of elected politicians and non-elected ‘civil servant types’ (bureaucrats, NGOs, the permanent government)
- The government is ineffective at long-term strategic thinking that maximizes the general good of the public, no argument there. However, that’s the wrong lens to view things through - from a public choice/class perspective, the government is quite responsive to certain specific interest groups.
- The professional managerial class (PMC) is a term of analysis/too-online slur[1] for the ‘class’ of professionals and managers who, in late-2010s and 2020s America, tend to be distinguished by things like having gone to university, having a generic technocratic liberal bent, and working in high-status professions (tech, academia, journalism, legal, medicine, parts of finance, etc.).
- The government is very responsive to PMC interests; the civil service, almost by definition, comes entirely from the PMC. Democratic and Republican congresspeople are primarily from professional managerial backgrounds, and the donor and chattering classes also come from or are of this class.
- ChatGPT is, rightfully, very scary to PMCs. There are headlines about it replacing writers, serving as medical counsel, doing the work of lawyers; right now, these are easily tuned out because there are always such stories (there were tons of op-eds seven years ago about truck drivers being replaced).
- However, I think this will be different: ChatGPT gives that experience, of being replaced, directly to many people, who can try it at home. It will be a lot easier to deploy versions of this into applications that end up causing some form of job loss to automation, in visible ways. And PMCs have much greater direct contact with political operators.
- I don’t think this will fall into the culture war swamp where nothing happens. There are many actions, outside the classic red-blue dynamic, that Congress or, importantly, government agencies can take to slow down AI capability developments to benefit favored classes:
- Ban its use in certain professions (“How can you trust an AI to give medical advice?”)
- More occupational licensing
- Protect the children by slowing tech (“you can generate what with image models??”)
- Put liability on the manufacturers of the models
- Requiring licensed development of AI - general anti-competitive bills that benefit established large companies that agree to de facto limit certain use cases.
An admittedly vague scenario[2] that I see as likely: some type of PMC work is automated, this causes a lot of distress, and then pressure from political actors starts being ratcheted up in various ways, maybe not directly attributable to job loss but invoking all the various sins (e.g. the four horsemen of the internet apocalypse).
Again, it won't be 'good regulation', but it will be a set of measures that in aggregate slow deployment and likely push things into a more legible ecosystem model, primarily in response to concerns from the professional managerial class.
- ^
I considered using a different term because it’s rarely used outside of tracts criticizing that group. However, alternatives didn’t seem to really fit - 'text manipulation professions' seemed almost more insulting.
- ^
Suggestions for operationalized questions for forecasting are appreciated.
6 comments
comment by James_Miller · 2023-01-07T22:29:21.142Z · LW(p) · GW(p)
You might be right, but let me make the case that AI won't be slowed by the US government. Concentrated interests beat diffuse interests, so an innovation that promises to slightly raise economic growth but harms, say, lawyers could be politically defeated by lawyers, because they would care more about the innovation than anyone else. But, ignoring the possibility of unaligned AI, AI promises to give significant net economic benefit to nearly everyone, even those whose jobs it threatens; consequently, there will not be coalitions to stop it unless the dangers of unaligned AI become politically salient. The US, furthermore, will rightfully fear that if it slows the development of AI, it gives the lead to China, and this could be militarily, economically, and culturally devastating to US dominance. Finally, big tech has enormous political power through its campaign donations and control of social media, so politicians are unlikely to go against the will of big tech on something big tech cares a lot about.
Replies from: memeticimagery, bgold
↑ comment by memeticimagery · 2023-01-08T00:11:14.548Z · LW(p) · GW(p)
Your last point seems like it agrees with point 7e becoming reality, where the US govt essentially allows existing big tech companies to pursue AI within certain 'acceptable' confines it thinks of at the time. In that case, how much AI might be slowed depends entirely on how tight a leash it puts them on. I think that scenario is actually quite likely, given I am sure there is considerable overlap between US alphabet agencies and sectors of big tech.
↑ comment by bgold · 2023-01-08T15:00:13.110Z · LW(p) · GW(p)
I agree that competition with China is a plausible reason regulation won't happen; that will certainly be one of the arguments advanced by industry and NatSec as to why it should not be throttled. However, I'm not sure it will be stronger than the protectionist impulses, and currently don't think it will be. Possibly it will exacerbate the "centralization" of AI dynamic that I listed in the 'licensing' bullet point, where large existing players receive money and de facto license to operate in certain areas and then avoid others (as memeticimagery points out [LW(p) · GW(p)]). So, for instance, we see more military-style research, and GooAmBookSoft tacitly agree not to deploy AI that would replace lawyers.
To your point on big tech's political influence: they have, in some absolute sense, a lot of political power, but relative to peer industries their political influence is much weaker. I think they've benefited a lot from the R-D stalemate in DC; I'm positing that this will go around/through that stalemate, and I don't think they currently have the soft power to stop it.
Replies from: James_Miller
↑ comment by James_Miller · 2023-01-08T17:31:58.382Z · LW(p) · GW(p)
Greatly slowing AI in the US would require new federal laws, meaning you need the support of the Senate, House, presidency, courts (to not rule them unconstitutional), and bureaucracy (to actually enforce them). If big tech can get at least one of these five power centers on its side, it can block meaningful change.
Replies from: bgold
↑ comment by bgold · 2023-01-08T18:11:56.970Z · LW(p) · GW(p)
This seems like an important crux to me, because I don't think greatly slowing AI in the US would require new federal laws. I think many of the actions I listed could be taken by government agencies that over-interpret their existing mandates, given the right political and social climate. For instance, the eviction moratorium during COVID obviously should have required congressional action, but was done by fiat through an over-interpretation of authority by an executive-branch agency.
What they do or do not do seems mostly dictated by that socio-political climate, and by the courts, which means fewer veto points for industry.
comment by DragonGod · 2023-01-07T22:18:51.219Z · LW(p) · GW(p)
Upvoted.
I agree that sociopolitical obstacles to the deployment of AI technologies are a significant factor influencing timelines/takeoff dynamics, and one that is insufficiently appreciated on LW.
I'm not confident the general scenario you painted will play out, but do expect some sort of regulatory speed bumps to AI capabilities progress.