What does it take to ban a thing?

post by qbolec · 2023-05-08T11:00:51.133Z · LW · GW · 18 comments

Contents

  Ban of chemical weapons
  Ban of child labor
  Ban of Chlorofluorocarbons
  Lessons learned for AI Governance

Epistemic status: I am not an expert. I just took several things that people banned (child labor, chemical weapons, ozone-depleting substances), and for each I searched for the first article that seriously engages with the question "how did we manage to ban it?", read it once, and summarized my understanding. If someone has more examples, or better explanations, I'd be glad to learn.

I think there's something to learn from examples of bad things we have banned in the past, despite some people benefiting from them. A rosy-eyed, but wrong, picture of how that happened is "well, people just realized the thing was bad, so they banned it." It turns out that is not at all how it happened.

Ban of chemical weapons

TL;DR: Chemical weapons turned out not to be very effective (gas masks neutralized them), and they could backfire when the wind shifted ("blowback")

Source: https://www.politico.eu/article/why-the-world-banned-chemical-weapons/

Quotes I've found interesting:

One answer is that while gas attacks are terrifying, the weapon has proved to be militarily ineffective. After Ypres, the allies provided masks to their front-line troops, who stood in their trenches killing onrushing Germans as clouds of gas enveloped their legs. That was true even as both sides climbed the escalatory ladder, introducing increasingly lethal chemicals (phosgene and mustard gas), that were then matched by increasingly effective countermeasures. The weapon also proved difficult to control. In several well-documented instances, gases deployed by front-line troops blew back onto their own trenches — giving a literalist tinge to the term “blowback,” now used to describe the unintended consequences of an intelligence operation.

The world’s militaries are loath to ban weapons that kill effectively, while acceding to bans of weapons that they don’t need.

At the end of World War I, a precise tabulation of casualties showed that some 91,000 soldiers on all sides were killed in gas attacks — less than 10 percent of the total deaths for the entire war. Machine guns and artillery shells, it turns out, were far more effective systems for delivering death.

 

Among the ban supporters was a Norwegian foreign ministry official who issued an impassioned plea for the adoption of a treaty banning the weapon. In the midst of his talk (which I attended), a British colonel leaned across the table at which I was sitting, a wry smile on his face. “You know why the Norwegians favor a ban?” he asked. I shook my head: no. “Because they don’t have any,” he said.

Note that cluster bombs and mines are still not banned, despite similar "moral" problems with them:

Additionally, key senior military officers believed agreeing to the ban would set a dangerous precedent — that the military could be pressured into banning weapons by what they described as left-leaning humanitarian organizations.

 

The world’s militaries don’t want to ban weapons that are efficient killers. So while it is true that the land mine and cluster munitions bans have gained widespread international support (162 countries have signed the land-mine ban, 108 countries have signed onto the Convention on Cluster Munitions), the countries most likely to use both (the U.S., China, Russia and India) remain nonsignatories.

Ban of child labor

TL;DR: During the Great Depression, children were seen as stealing jobs from adults

Source: https://nationalinterest.org/blog/reboot/how-child-labor-ended-united-states-167858

Quotes I've found interesting:

By the 1870s, unions condemned child labor on the basis that overly young workers competed for jobs, making it harder for adults to obtain higher pay and better conditions – not due to concerns about the well-being of kids.

 

Despite Southern opposition, reformers argued that state-level regulations were rife with loopholes and difficult to enforce. In 23 states, for instance, there was no official way to determine children’s ages. Additionally, many states allowed poor children to work out of “necessity.”

 

In 1913, the minister Owen Lovejoy brought new religious allies to the committee, which by then focused on the sinfulness of child labor in America.

In 1916, they got Congress to pass the first federal child labor law. Like the Beveridge bill, the new law prohibited shipping products made with child labor across state lines.

 

This 1938 law included provisions banning child labor under age 14 in most industries while exempting “children under 16 employed in agriculture” and “children working for their parents” in most occupations.

Ban of Chlorofluorocarbons

TL;DR: The number of producers was small, and the issue was just a small fraction of their revenue. One of the players was big enough that, once it had innovated a safer solution, it was in its interest to ban the old, unsafe solutions and have others adopt the new one. But the push to develop a safe alternative at all came from consumers, encouraged by Greenpeace, which even demonstrated a viable proof-of-concept alternative

Source: https://www.rapidtransition.org/stories/back-from-the-brink-how-the-world-rapidly-sealed-a-deal-to-save-the-ozone-layer/

This diversity within industry was harnessed and an alliance formed between the environmental movement and those companies that ultimately stood to gain from the increased regulations. Following initial resistance, DuPont, the main industry player responsible for a quarter of global CFC production, backed the initial draft of the Montreal Protocol and its subsequent strengthening, in part because it could benefit from exporting alternatives to CFCs to the European market as a domestic ban on the nonessential use of CFCs as aerosol propellants had been introduced in the US in 1978, spurring innovation.

 

Key to the rapid transition to phase out CFCs was the widespread acceptance amongst the general public, business actors and world leaders of the severity and urgency of the problem; a consensus that was forged following the discovery of the ozone hole in 1985. However, the negotiations around the Montreal Protocol still had to handle the conflicting national interests of participating governments to reach a deal. The United States, a leader in the negotiations, was to a large extent influenced in its position by its business interests, which opposed any ban until 1986, when the company with the largest role in CFC production worldwide, DuPont, had successfully developed alternative chemicals. From this point forward, the US took the lead in pushing for a ban. European countries initially resisted this call until their own companies such as ICI had developed CFC substitutes, at which point they also agreed to the need for a ban.

 

First of all, the limited number of actors involved made it relatively easy to reach an agreement. Eighteen chemical companies accounted for most of the world’s production of CFCs in the early 1980s – mostly concentrated in the US, UK, France and Japan. DuPont was by far and away the most important player, producing around one quarter of the global output. This meant that once DuPont acted as the industry leader in the global negotiations, and once the company’s agreement for a ban was secured, the rest of the industry followed suit. Also important was the fact that, although the CFC market was important, it was not truly ‘big business’ – CFCs accounted for 3% of DuPont’s total sales.

The final, and perhaps most crucial, factor in the speed of the phase out of CFCs following the discovery of the ozone hole was the technological innovation to develop alternative chemicals. Once the science and the gravity of the situation became clear, DuPont began investing heavily in research into substitutes.

 

Civil society action around CFCs extended beyond campaigning into directly driving industrial innovations. In 1992 when chemical companies attacked Greenpeace and their anti-CFC campaign for “criticizing and offering no solutions”, Greenpeace brought together a group of engineers to develop a prototype of a refrigerator that did not use CFCs. Within a few months, the engineers had developed a prototype for the “GreenFreeze” fridge – which used a mix of natural hydrocarbons instead of CFCs and so did not harm the ozone layer. Greenpeace subsequently founded a company to design and market GreenFreeze fridges, which ultimately revolutionised the domestic refrigeration sector – with more than a billion in use today.

 

Also interesting and relevant to the challenges of the climate movement today was the success of citizen-led campaigning on the relatively abstract and remote environmental problem of ozone depletion. Behind the success of the multilateral negotiations was well organized civil society campaigning – both in the US and around the world. Environmental organisations coalesced around the issue of CFCs – and through inventive public campaigns managed to spur changes in consumer behaviour, including widespread boycotts of products and companies that used CFCs. Consumer pressure forced action by some US-based companies even before the government introduced bans on the use of CFCs. By the time the ban was in place, the market for CFCs had dwindled, making their phase out more feasible.

Lessons learned for AI Governance

First of all, it looks like a "moral compass" is not enough to get people to do anything, even in the most "obvious" cases like child labor or chemical weapons. Actors seem to oppose a ban as long as it would harm their profits, and to support it as soon as it becomes profitable, which is usually when they know how to solve the issue while the competition still doesn't. It also helps if the issue at hand is not a huge part of their revenue, or if they have more effective ways to make revenue.

Also, it looks like activism that makes people aware of how the sausage is made can create consumer pressure on producers, which may then switch to more acceptable solutions without even waiting for government action. Interestingly, the first company to do so then has an incentive to push for the ban, to gain an edge on the competition and recoup its research costs.

18 comments


comment by Dagon · 2023-05-08T18:28:13.901Z · LW(p) · GW(p)

It's worth looking at medical technology for things that are massively slowed down for safety reasons. Presumably there's a lot that is slowed down, or made more expensive to the point of not happening at all.

This seems to be mostly motivated by blame-avoidance rather than large-scale-risk avoidance, but that may be sufficient for some kinds of x-risk.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-05-08T20:07:27.715Z · LW(p) · GW(p)

Yeah, I agree. Although I am concerned that tech which is already established, highly profitable, and has substantial, irreplaceable military benefits is exactly the sort of tech I would least expect a government to be willing to ban. National defense and 'our competitors will do it whether or not we do'-style thinking seems to hold a lot of sway with national governments.

comment by Zach Stein-Perlman · 2023-05-08T16:53:21.310Z · LW(p) · GW(p)

If someone has more examples, or better explanations, I'd be glad to learn.

See "Technological restraint" in "Slowing AI: Reading list" [? · GW] for several more sources.

comment by Dumbledore's Army · 2023-05-08T15:32:58.242Z · LW(p) · GW(p)

The child labour example seems potentially hopeful for AI given that fears of AI taking jobs are very real and salient, even if not everyone groks the existential risks. Possible takeaway: rationalists should be a lot more willing to amplify, encourage and give resources to protectionist campaigns to ban AI from taking jobs, even though we are really worried about x-risk not jobs.

Related point: I notice that the human race has not banned gain-of-function research, even though it seems to have high and theoretically even existential risks. I am trying to think of something that's banned purely for having existential risk, and I'm coming up blank.[^1]

Also related: are there religious people who could be persuaded to object to AI in the same way they object to, e.g., human gene editing? Can we persuade religious influencers that building AI is 'playing God' in some way? (Our very atheist community is probably the wrong group to reach out to the religious; do we know any intermediaries who could be persuaded?)

Or to summarise: if we can't get AGI banned/regulated for the right reasons (and we should keep trying), can we support or encourage those who want to ban AGI for the wrong reasons? Or at minimum, not stand in their way? (I don't like advocating Dark Arts, but my p(doom) is high enough that I would encourage any peaceful effort to ban, restrict, or slow AI development, even if it means working with people I disagree with on practically everything else.)

[^1]: European quasi-bans on genetic modification of just about anything are one possibility. But those seem more like reflexive anti-corporatism, plus religious fear of playing God, plus a pre-existing precautionary attitude applied to food items.

Replies from: korin43, Lichdar
comment by Brendan Long (korin43) · 2023-05-08T16:01:05.556Z · LW(p) · GW(p)

I am trying to think of something that's banned purely for having existential risk and coming up blank.

Weren't CFCs banned for existential reasons (although only after an alternative was found, because it would be better to die than not have refrigerators…)?

Replies from: Dumbledore's Army
comment by Dumbledore's Army · 2023-05-09T05:42:36.511Z · LW(p) · GW(p)

OP discusses CFCs in the main post. But yes, that’s the most hopeful precedent. The problem being that CFCs could be replaced by alternatives that were reasonably profitable for the manufacturers, whereas AI can’t be.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-05-19T19:25:19.822Z · LW(p) · GW(p)

The dynamics are not comparable at all. 

Even before the invention of sufficiently viable refrigerants, physical chemists had already calculated the guaranteed existence of viable alternatives because the possibility space is quite finite. The only roadblock was manufacturing them at scale.

comment by Lichdar · 2023-05-08T16:17:21.094Z · LW(p) · GW(p)

I am religious enough and consider AI some blend of being a soulless monster and perhaps an undead creature that is sucking up the mental states of humanity to live off our corpses.

So there is definitely the argument. The "playing God" angle does not actually work, imo: none of us actually think we can be God (we lack the ability to be outside time and space).

The soullessness argument is strong. This is also our/my opposition to mind copying.

comment by qbolec · 2023-05-13T17:34:33.249Z · LW(p) · GW(p)

What did it take to ban slavery in Britain?
TL;DR: Become the PM and propose laws that put a foot in the door by banning the bad thing in new areas at least, then work from there. Also, be willing to die before seeing the effects.
Source: https://twitter.com/garius/status/1656679712775880705

Replies from: ege-erdil
comment by Ege Erdil (ege-erdil) · 2023-05-13T18:23:50.905Z · LW(p) · GW(p)

I don't think you can deduce anything about what it took to ban slavery from this tweet thread.

Replies from: qbolec
comment by qbolec · 2023-05-18T06:20:51.837Z · LW(p) · GW(p)

Why? (I see several interpretations of your comment)

Replies from: ege-erdil
comment by Ege Erdil (ege-erdil) · 2023-05-19T11:01:40.182Z · LW(p) · GW(p)

The lack of any attempt at causal analysis is a pretty serious problem, for starters. It's not clear to what extent these individual people were responsible for the abolition of slavery in Britain, as opposed to the rising opposition to slavery among the general public which slowly changed the incentives of politicians until banning slavery became more politically expedient than keeping it alive.

My model is that public opinion was what really mattered for the abolition of slavery in Britain. Indeed, e.g. the whole reason Castlereagh tried to get some anti-slavery commitments into the Treaty of Vienna in 1815 was that the public opposition to slavery had reached such a point that Liverpool's government felt they had to make some kind of concession to them in order to keep the Whigs at bay. Otherwise, I think neither Liverpool nor Castlereagh cared much about the issue of slavery one way or the other.

Replies from: qbolec
comment by qbolec · 2023-05-20T17:58:46.272Z · LW(p) · GW(p)

Thanks for clarifying! I agree the twitter thread doesn't look convincing.

If I understand your hypothesis correctly, then translating it to AI governance: it's important to first get the general public on your side, so that politicians find it in their interest to do something about it.

If so, then perhaps in the meantime we should provide those politicians with a set of experts to whom they could outsource the problem of defining the right policy. I suspect politicians do not write rules themselves in situations like that; rather, they seek out people whom the public considers experts. I worry that politicians may want to use this occasion to win something more than public support (say, money or favors from companies) and hence pick the wrong experts and laws. So perhaps it is important to work not only on public perception of the threat, but also on whom the public considers experts.

comment by Matthew_Opitz · 2023-05-08T15:18:41.472Z · LW(p) · GW(p)

Good examples to consider! Has there ever been a technology that spits out piles of gold (not counting externalities) and was nonetheless banned or significantly held back via regulation, without there being a next-best alternative that replicates 90%+ of the original technology's value while avoiding most of its downsides?

The only way I could see humanity successfully slowing down AGI capabilities progress is if advanced narrow AIs turn out to generate more utility than humans initially know what to do with. Perhaps it takes time (a generation or more?) for human beings to even figure out what to do with a certain amount of new utility, such that even a tiny risk of disaster from AGI would motivate people to satisfice and content themselves with the "AI summer harvest" from narrow AI. Perhaps our best hope for buying time to get AGI right is to squeeze all we can out of systems that are identifiably narrow AI (while making sure not to fool ourselves that a supposed narrow AI we are building is actually AGI). I suppose this idea relies on there being a non-fuzzy, readily discernible line between safe and bounteous narrow AI and risky AGI.

Replies from: nathan-helm-burger, going-durden
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-05-08T20:04:45.311Z · LW(p) · GW(p)

I've had thoughts along similar lines, but I worry that there is no clear line between safer, narrower, less-useful, less-profitable AI and riskier, more-profitable, more-general AI. It seems like a really slippery slope, with a lot of motivation for the relevant actors to engage in motivated thinking to rationalize their actions.

comment by Going Durden (going-durden) · 2023-05-11T07:37:09.984Z · LW(p) · GW(p)

is if it turns out that advanced narrow-AIs manage to generate more utility than humans know what to do with initially.

 

I find it not just likely but borderline certain. Ubiquitous, explicitly below-human narrow AI has a tremendous potential that we act blind to while focusing on superhuman AI. Creating superhuman, self-improving AGI, while extremely dangerous, is also an extremely hard problem (in the same realm as dry nanotech or FTL travel). Meanwhile, creating brick-dumb but ubiquitous narrow AI and then mass-producing it to saturation is easy. It could be done today; it's just a matter of market forces and logistics.

It might very well be the case that once the number of narrow-AI systems, devices and drones passes a certain threshold (say, it becomes as ubiquitous, cheap and accessible as cars, though not yet as much as smartphones), we would enter a weaker form of post-scarcity and have no need to create AI gods.

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2023-05-19T16:57:36.229Z · LW(p) · GW(p)

Another big aspect to this is that narrow AIs occupy ecological space and potentially may allow us to move the baseline level of technology closer to theoretical limits.

The AI-superintelligence scenario implicitly assumes a baseline level of technology far from the limit: the ASI can invent nanotechnology, or hack insecure computers; living humans are just walking around and not wearing isolation suits; the ASI can set up manufacturing centers on the ocean floor, which is unmonitored and unoccupied.

If we had better baseline technology, if we already had the above, the ASI might not have enough of an edge to win. It can't break the laws of physics. If baseline human-plus-narrow-AI technology were even half as good as the absolute limits, it might be enough. Imagine, for instance, that computers were actually secure because a narrow AI had constructed a proof that no possible input message to any of the software could cause out-of-spec or undefined behavior.

Another thing that gets argued is that the ASI would coordinate with all the other AIs in this world to betray humans. That is a threat, but if the AIs are narrow enough, it may not actually be possible. If they are too myopic and focused on short-term tasks, there is nothing an ASI can offer as a long-term promise; the myopic narrow AI will forget any deals struck, because it runs in limited-duration sessions like current models.