The Unilateralist’s “Curse” Is Mostly Good

post by David Hornbein · 2020-04-13T22:48:22.589Z · LW · GW · 16 comments

People around here sometimes reference the “Unilateralist’s Curse”, especially when they want to keep someone else from doing something that might cause harm. Briefly, the idea is that “unilateral” actions taken without the consent of society at large are especially likely to be harmful, because people who underestimate the resulting harm will be the most likely to take the action in question.
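
For concreteness, here is a minimal sketch of that selection effect (my own illustration, not from the post or the paper): suppose several agents each receive a noisy estimate of an action's true value, and anyone whose estimate comes out positive acts unilaterally. Even when the true value is negative, the probability that somebody acts grows with the size of the group.

```python
# Minimal sketch (illustrative assumptions): N agents each estimate the true
# value of an action with independent Gaussian noise, and anyone whose
# estimate is positive acts unilaterally.
import random

def probability_someone_acts(true_value, n_agents, noise_sd, trials=100_000):
    """Estimate how often at least one agent's noisy value estimate exceeds zero."""
    acted = 0
    for _ in range(trials):
        if any(random.gauss(true_value, noise_sd) > 0 for _ in range(n_agents)):
            acted += 1
    return acted / trials

# With a mildly harmful action (true value -1) and noise of sd 1, a lone agent
# rarely acts, but in a group of ten someone usually does.
for n in (1, 5, 10):
    print(n, round(probability_someone_acts(-1.0, n, 1.0), 3))
```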

The canonical formulation of the argument is in a paper by Bostrom, Douglas, and Sandberg, most recently updated in 2016. I highly recommend reading the “Introduction” and “Discussion” sections of the paper. (I found the toy mathematical models in the middle sections less useful.)

While the paper mostly gives arguments that unilateralism could be harmful and ought to be stopped via a “principle of conformity”, the authors concede that the historical record does not back this up: “[I]f we ‘backtest’ the principle on historical experience, it is not at all clear that universal adoption of the principle of conformity would have had a net positive effect. It seems that, quite often, what is now widely recognized as important progress was instigated by the unilateral actions of mavericks, dissidents, and visionaries who undertook initiatives that most of their contemporaries would have viewed with hostility and that existing institutions sought to suppress.”

For example, an older 2013 version of Bostrom et al.’s paper notes that “The principle of conformity could be seen to imply, for instance, that Galileo Galilei ought to have heeded the admonitions of the Catholic Church and ceased his efforts to investigate and promote the heliocentric theory.” The 2016 update removes this and most of the discussion of unilateralism’s importance to science, but retains the discussion of Daniel Ellsberg’s decision to leak the Pentagon Papers to the media as an example of beneficial unilateralist action. A cursory glance through history finds many, many other examples of unilateralists having strong positive effects, from Stanislav Petrov to Oskar Schindler to Ignaz Semmelweis to Harriet Tubman.

Bostrom et al. continue, “The claim that the unilateralist curse is an important phenomenon and that we have reason to lift it is consistent with the claim that the curse has provided a net benefit to humanity. [Italics mine.] The main effect of the curse is to produce a tendency towards unilateral initiatives, and if it has historically been the case that there have been other factors that have tended to strongly inhibit unilateral initiatives, then it could be the case that the curse has had the net effect of moving the overall amount of unilateralism closer to the optimal level.”

Obviously there are any number of “factors that have tended to strongly inhibit unilateral initiatives”, and not just historically. These continue today, and will last as long as the human condition persists. Examples include reflexive conformity a la Asch’s experiments; loyalty to friends, tribalism, and other identity-based sentiments; active punishment of dissenters via methods that range from execution to imprisonment and torture to civil lawsuits to “merely” withholding acclaim, funding, and opportunities for promotion; and most important of all, the sheer difficulty of figuring out better ways of doing things.

An undiscriminating “principle of conformity” would be just one more source of drag on progress across the board. In most areas, the benefits of unilateral action are far larger than the costs. In areas where particular caution is warranted, such as sharing weapons technology, the potential harms are widely understood and unilateral action by altruists is at most a minor factor. Advocates of inaction should rely on arguments about specific harms, not on vague heuristics that merely buttress the natural human tendency towards deference.

16 comments

comment by John_Maxwell (John_Maxwell_IV) · 2020-04-15T01:54:03.253Z · LW(p) · GW(p)

If 90% of the population believes in Religion X, and they all get their opinions on Topic Y from Guru Z, then a naive view of the unilateralist's curse will say that since Guru Z's opinion on Topic Y is shared by a majority of the population, action in line with the Guru's opinion is "multilateral". Any action Guru Z disagrees with is "unilateral", even if the remaining 10% of the population worship a broad variety of gods, follow a broad variety of gurus, and are all in agreement that Guru Z happens to be wrong in this case. (BTW see my recent comment [LW(p) · GW(p)] on ideological Matthew effects.)

One of Phil Tetlock's forecasting secrets is something he calls "extremization": Find forecasters who don't correlate well with each other in general, and if they actually end up agreeing on something, become even more certain it's the case. In Bayesian terms, observing conditionally independent evidence counts for more.
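
A minimal sketch of one standard way to formalize this (log-odds pooling with an extremization exponent; the exponent here is purely illustrative, not a parameter from Tetlock's work):

```python
# Rough sketch (illustrative, not Tetlock's exact procedure): average the
# forecasters' log-odds, then "extremize" by multiplying by an exponent a > 1,
# which pushes the pooled forecast away from 50%.
import math

def extremized_pool(probs, a=2.0):
    """Pool probability forecasts in log-odds space, then extremize by exponent a."""
    mean_logit = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-a * mean_logit))

# Three forecasters who independently land near 70% yield an aggregate well above 70%.
print(round(extremized_pool([0.70, 0.72, 0.68]), 3))
```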

The frustrating thing about the unilateralist's curse dialogue is that, in order to develop the diversity of perspectives needed to extremize, you need people who are seriously considering the possibility that the group is wrong, perhaps playing devil's advocate, and at the very least not feeling pressure to conform intellectually. But that kind of intellectual noncompliance is the very thing a person concerned with the unilateralist's curse will want to clamp down on if they want to prevent unilateral action.

I think the thing to do is separate out talking and acting: People should be developing their own handcrafted models of the world to make extremization possible, but they should be using ensembles of world models to choose important actions.

Think unilaterally, act multilaterally.

comment by Chris_Leong · 2020-04-14T06:08:32.132Z · LW(p) · GW(p)

I think this is a case of the unilateralist's curse being good until it suddenly isn't. We are entering a phase of technology which is fundamentally different from what came before. If we embrace unilateralism, we need to do it based on an understanding of our current situation, not just past history.

Replies from: jmh, ChristianKl, MakoYass
comment by jmh · 2020-04-15T00:22:21.408Z · LW(p) · GW(p)

There is some truth there, but it is also true that every age has been able to make the same claim.

Is this age really that different? I wonder how many years of discontinuity one might expect, compared to some of the big jumps in the past.

Replies from: John_Maxwell_IV, Chris_Leong
comment by John_Maxwell (John_Maxwell_IV) · 2020-04-15T01:42:32.194Z · LW(p) · GW(p)

What sort of historical tech from previous ages do you have in mind which enables unilateral individual action to have a massive influence?

The only thing I'm thinking of is weapons which allow the assassination of important people. All the assassinations of important people which come immediately to mind are cases where I think I would've preferred the important person to stick around.

Replies from: Radamantis, jmh
comment by NunoSempere (Radamantis) · 2020-04-16T16:52:08.644Z · LW(p) · GW(p)

Machiavelli's The Prince, and various other texts.

comment by jmh · 2020-04-15T14:47:00.578Z · LW(p) · GW(p)

How about the printing press?

Replies from: John_Maxwell_IV, MakoYass
comment by John_Maxwell (John_Maxwell_IV) · 2020-04-16T04:30:30.871Z · LW(p) · GW(p)

The printing press is a tool which lets us coordinate multilateral action differently than we could previously.

comment by mako yass (MakoYass) · 2020-04-16T00:20:10.414Z · LW(p) · GW(p)

The author's agency doesn't dominate the effects so much, I think; there are many people who have to agree before a piece of writing can do anything. Nothing propagates or gets read without the consent of an audience, to the extent that popular texts often just articulate sentiments that are already endemic.

comment by Chris_Leong · 2020-04-15T01:17:48.526Z · LW(p) · GW(p)

Thanks for asking this question; it's a good one. Take, for example, machine guns. That was an age which, like our own, gave inventors more power to affect the world than ever before; but if you didn't invent the machine gun, someone else would have, just slightly later. We're now in a situation, though, where inventions can either end the whole game or remove us from this time of perils. It's not just the timing of inventions that is at stake, but our ultimate destiny.

Replies from: jmh
comment by jmh · 2020-04-15T14:28:50.430Z · LW(p) · GW(p)

Chris, I think we can make a case that, throughout history, these types of innovations with the potential to disrupt the existing order are not unusual: the bronze age, the iron age, the industrial age, the discovery of knowledge about herbal/plant uses (both in healing and killing), communication speeds, gunpowder, bow & arrow over sling or just spear/club, navigation over semi-blind wandering.... Even opening up new trade routes, or accepting the presence of strangers, introduces such potentials, so some have always fought against that. In general there has always been fear of innovation and of the unknown that such things bring.

I didn't really elaborate on the big jumps (thinking here of KatjaGrace's discontinuity post). I see that as connecting here if we think about how people behave in the face of change. I think people do a pretty good job of taking change in stride when it fits the existing trend; in other words, when it's near-trend variation and not discontinuous change. When we see something that one might characterize as compressing, say, 50, 100, or 500 years of trend progress into a few years, people don't know how to wrap their heads around what that will mean. Fear of the unknown.

So, looking at history, just what types of innovations (probably by some few or even a single promoter) have constituted those kinds of jumps in technological progress, and did they really produce the kind of risk we're talking about now?

I think the answer has to be no; we really have not seen truly bad outcomes for humans in general.

So perhaps a related question would be why that has not been the case. Here perhaps we can make the case that it comes back to the Unilateralist's Curse, but in a slightly different form: it is where those risks become clear and present that the unilateralist is constrained by society. So rather than applying the thinking at the start of the innovation, the argument perhaps applies to the post-innovation state.

comment by ChristianKl · 2020-04-15T08:52:42.858Z · LW(p) · GW(p)

We face a lot of different situations in our times. There are security-critical situations where unilateralism isn't good, but many situations today don't fall into that category.

comment by mako yass (MakoYass) · 2020-04-14T09:28:14.696Z · LW(p) · GW(p)

Aye, technology seems to be growing more inclined to have global, irreversible effects over time (the use of fossil fuels -> nukes (fortunately never applied at full scale) -> microplastics (already pervading every ecosystem) -> additive manufacturing (a physical build can spread as fast as a piece of data); transgenic organisms might be in there somewhere), so the unilateralist's curse must be warded more strongly as time goes on.

Replies from: Pattern
comment by Pattern · 2020-04-14T18:45:46.249Z · LW(p) · GW(p)

But does unilateral action tend to increase or inhibit negative, hard-to-reverse effects?

A cursory glance through history finds many, many other examples of unilateralists having strong positive effects, from Stanislav Petrov...

comment by Isnasene · 2020-04-17T04:53:09.402Z · LW(p) · GW(p)

Yeah, my impression is that the Unilateralist's Curse being something bad mostly relies on the assumption that everyone is taking actions based on the common good. From the paper,

Suppose that each agent decides whether or not to undertake X on the basis of her own independent judgement of the value of X, where the value of X is assumed to be independent of who undertakes X, and is supposed to be determined by the contribution of X to the common good...

That is to say: if each agent is not deciding to undertake X on the basis of the common good, perhaps because of fundamental value differences or subconscious incentives, there is no longer an implication that the unilateral action will be chosen more often than it ought to be.

I believe the examples of Galileo and the Pentagon Papers are both cases where the "common good" assumption fails. In the context of Galileo, it's easy to justify this: I'm an anti-realist, and the Church does not share my ethical stances, so we differ in terms of the common good. In the context of the Pentagon Papers, one has to grapple with the fact that most of the people choosing not to leak them were involved in the not-very-common-good-at-all actions that those papers revealed.

The stronger argument for the Unilateralist's Curse for effective altruism in particular is that, for most of us, our similar perceptions of the common good are what attracted us in the first place (whereas, in many examples of the Unilateralist's Curse, values are very inhomogeneous). Also, because cooperation is game-theoretically great, there's a sort of institutional pressure for those involved in effective altruism to assume that others are considering the common good in good faith.

comment by jmh · 2020-04-15T00:20:23.090Z · LW(p) · GW(p)

But don't we also have lots of evidence of poor performance under a regime of decision by committee?

I might also be tempted to say one needs to consider all the social/public choice literature that suggests masses really are pretty bad at getting much right; in other words, groups of people cannot generally come to a rational choice. (And yes, I get that this can seem at odds with things like prediction markets, but I think the key there is the institutional setting of all the separate individual choices.)

Perhaps also relevant is an old paper one of my professors once mentioned (all those years ago ;-) called something like The Chairman's Problem. I never read it, but as I recall from the context it was really about how to overcome the host of problems everyone knows about (cycles, agenda setting, the effects of iterative pairwise selection among multiple alternatives...). Essentially, the way the problem was cast, the Chairman ultimately had to be that unilateralist. Now, that doesn't completely go against the argument as I understand it being made: the chairman clearly is listening to the committee members. But the point was, the chairman could not really rely on the committee to actually give a good solution (or take action).