Totalitarian ethical systems

post by Benquo · 2019-05-03T19:35:28.800Z · LW · GW · 12 comments

This is a link post for http://benjaminrosshoffman.com/totalitarian-ethical-systems/

(Excerpt of another conversation with my friend Mack [LW · GW].)

Mack: Do you consider yourself an Effective Altruist (capital letters, aligned with at least some of the cause areas of the current movement, participating, etc)?

Ben: I consider myself strongly aligned with the things Effective Altruism says it's trying to do, but don't consider the movement and its methods a good way to achieve those ends, so I don't feel comfortable identifying as an EA anymore.

Consider the position of a communist who was never a Leninist, during the Brezhnev regime.

Mack: I am currently Quite Confused about suffering. Possibly my confusions have been addressed by EA or people who are also strongly aligned with the stated goals of EA and I just need to read more. I want people to thrive and this feels important, but I am pretty certain that "suffering," as I think the term is colloquially used, is a really hard thing to evaluate, so "end suffering" might be a dead end as a goal.

Ben: I think the frame in which it's important to evaluate global states using simple metrics is kind of sketchy and leads to people mistakenly thinking that they don't know what's good locally. You have a somewhat illegible but probably coherent sense that capacity and thriving are important, and that suffering matters in the context of the whole minds experiencing the suffering, not atomically.

There's not actually a central decisionmaker responsible for all the actions, who has to pick a metric to add up all the goods and bads to decide which actions to prioritize. There are a lot of different decisionmakers with different capacities, who can evaluate or generate specific plans to e.g. alleviate specific kinds of suffering, and counting the number of minds affected and weighting by impact is one thing you might do to better fulfill your values. And one meta-intervention might be centralizing or decentralizing decisions.

Since you wouldn't need to do this if the info were already processed, the best you can really do is try to see (a) how different levels of centralization have worked out in terms of benefiting from economies of scale vs costs due to value-distortion in the past, and (b) whether there's a particular class of problem you care about that requires one or the other.

So, for instance, you might notice that factory farming creates a lot of pointless (from the animal's perspective) suffering that doesn't enable growth and thriving, but results from constantly thwarted intentions. This is pretty awful, and you might come up with one of many plans to avert that problem. Then you might, trying to pool resources to enact such a plan, find that other people have other plans they think are better, and try to work out some way to decide which large-scale plans to use shared resources to enact. (Assuming everyone with a large-scale plan thinks it's better than smaller-scale plans, or they'd just do their own thing.)

So, one way to structure that might be hiring specialists like GiveWell / Open Phil - that's one extreme, where a specialized group of plan-comparers is entrusted with the prioritization. At the other extreme there are things like donor lotteries, where if you have X% of the funds to do something, the expected value of participating has to be at least X% of the value of funding the thing. And somewhere in the middle is some combination of direct persuasion and negotiation / trade.
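
(A minimal sketch of that expected-value point, assuming a simple lottery where your chance of directing the whole pot is proportional to your contribution, and that your valuation of directed funds scales at least linearly with the amount - the setup and numbers are purely illustrative, not from the original conversation:)

```python
import random

def donor_lottery_ev(my_contribution, pot_size, value_of_funding_pot, n_trials=100_000):
    """Monte Carlo estimate of the expected value of entering a donor lottery.

    Assumes I win the right to direct the whole pot with probability
    my_contribution / pot_size, and that I value directing the full pot at
    value_of_funding_pot. (Illustrative assumptions, not anyone's real numbers.)
    """
    win_prob = my_contribution / pot_size
    wins = sum(1 for _ in range(n_trials) if random.random() < win_prob)
    return (wins / n_trials) * value_of_funding_pot

# Contributing $1,000 to a $100,000 pot (1% of the funds): the expected value comes
# out around 1% of the value of funding the whole thing - and participating beats
# donating the $1,000 directly if larger grants are more than proportionally useful
# (fixed evaluation costs, minimum grant sizes, etc.).
print(donor_lottery_ev(my_contribution=1_000, pot_size=100_000, value_of_funding_pot=1.0))
```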

Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric, so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god. This should make us at least a little worried about the proposal if it seems like the proposers are likely to be part of the decisionmaking group in the new regime. E.g. Leninism.

Mack: I'm...very cognizant of my uncertainty around what's good for other people, in part because I am often uncertain about what's good for me.

Ben: Yeah, it's kind of funny in the way Book II (IIRC) of Plato's Republic is funny. "I don't know what I want, so maybe I should just add up what everyone in the world wants and do that instead..."

"I don't know what a single just soul looks like, so let's figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that."

Mack: Haven't read it, heard his Republic is a bit of a nightmare.

Ben: Well, it's a dialogue Socrates is having with some ambitious young Spartaphilic aristocrats. He points out that their desire to preserve class differences AND have good people in charge requires this totalitarian nightmare (since more narrow-minded people will ALSO want the positions of disproportionate power - to be captain of the Titanic, to use a metaphor from earlier; I actually stole the ship metaphor from Republic - and will be less distracted by questions of "how to steer the ship safely").

He describes how even a totalitarian nightmare like this will break down in stages of corruption, and then suggests that maybe they should just be happy with what they have and mostly leave other people alone.

Mack: That seems like...replacing a problem small enough for the nuance to intimidate you with one large enough that you can abstract away the nuance that would intimidate you if you acknowledged the nuance.

Ben: Yes, it's not always a bad idea to try. But, like, it's one possible trick for becoming unconfused, and deciding a priori to stick with the result even if it seems kind of awful isn't usually gonna be a good move. You still gotta check that it seems right and nonperverse when applied to particular cases, using the same metrics that motivated you to want to solve the problem in the first place.

12 comments


comment by Hazard · 2019-05-04T18:24:06.243Z · LW(p) · GW(p)

Highlighting the parts that felt important:

I think the frame in which it's important to evaluate global states using simple metrics is kind of sketchy and leads to people mistakenly thinking that they don't know what's good locally.
[...]
Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric, so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god.
[...]
"I don't know what a single just soul looks like, so let's figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that."

I can see ways in which my own thinking has fallen into the frame you mention in the first quote. It's an interesting and subtle transition, going from asking, "What is it best for me to do?" to "What is it best for a human to do?"/"What would it be best for everyone to be doing?". I notice that I feel very compelled to make this transition when thinking.

Replies from: Benquo
comment by Benquo · 2019-05-08T13:29:41.823Z · LW(p) · GW(p)

I think it would be extremely helpful if you shared some examples in more concrete detail.

Replies from: Hazard
comment by Hazard · 2019-05-08T16:25:55.590Z · LW(p) · GW(p)

"It feels good and right for me have a life where I'm producing more than I'm consuming. Wait, if it was actually a good thing to produce more than I consume, wouldn't that mean we should have a society every one is pumping out production that never get's used by anyone?"

The above is not something I'm very concerned with, but it did feel easy to jump to "this is now a question of the effects of this policy instantiated across all humans."

comment by DanielFilan · 2019-05-08T03:38:52.377Z · LW(p) · GW(p)

Re: the section on coming up with simple metrics to evaluate global states, which I couldn't quickly figure out how to nicely excerpt:

I tentatively disagree with the claim that "Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric", at least the way I think 'totalizing' is being applied. As a human in the world, I can see a few cool things I could potentially do: I could continue my PhD and try to do some important research in AI alignment, I could try to get involved with projects to build charter cities, I could try to advocate for my city to adopt policies that I think are good for local flourishing, I could try to give people info that makes it easier for them to eat a vegan diet, or I could make a nice garden. Since I can't do all of these, I need some way to pick between them. One important way is how productive I would be at each activity (as measured by the extent to which I can get the activity done), but I think that for many of these my productivity is about in the same ballpark. To compare between these different activities, it seems like it's really useful to have a single metric on the future history of the world that can trade off the different bits of the world that these activities affect. Similarly, if I'm running a business, it's hard to understand how I could make do without the single metric of profit to guide my decisions.

Replies from: Benquo, Benquo
comment by Benquo · 2019-05-08T13:21:26.389Z · LW(p) · GW(p)

Profit is a helpful unifying decision metric, but it's not actually good to literally just maximize profits; this leads in the long run to destructive rent-seeking, regulatory capture, and trying to maximize negative externalities. It also leads to short-termism - consider the case of Enron (and the case of the privatization of post-Soviet Russia), where naive microeconomic advice led not to gains in long-run efficiency, but to an environment that encouraged internally adversarial behavior. Effective managers very frequently use metrics other than profit, based on their judgment that some sort of less legible thing like information flow or clear self-signalling to communicate a coherent strategic intuition will matter more in the long run than things that show up directly as profits and losses. Consider the following anecdote:

Herb Kelleher [the longest-serving CEO of Southwest] once told someone, “I can teach you the secret to running this airline in thirty seconds. This is it: We are THE low-fare airline. Once you understand that fact, you can make any decision about this company’s future as well as I can.”
“Here’s an example,” he said. “Tracy from marketing comes into your office. She says her surveys indicate that the passengers might enjoy a light entrée on the Houston to Las Vegas flight. All we offer is peanuts, and she thinks a nice chicken Caesar salad would be popular. What do you say?”
The person stammered for a moment, so Kelleher responded: “You say, ‘Tracy, will adding that chicken Caesar salad make us THE low-fare airline from Houston to Las Vegas? Because if it doesn’t help us become the unchallenged low-fare airline, we’re not serving any damn chicken salad.’ ”

Replies from: DanielFilan
comment by DanielFilan · 2019-05-10T02:03:35.250Z · LW(p) · GW(p)

Profit is a helpful unifying decision metric, but it's not actually good to literally just maximize profits; this leads in the long run to destructive rent-seeking, regulatory capture, and trying to maximize negative externalities.

Agreed. That being said, it does seem like the frame in which it's important to evaluate global states of the business using the simple metric of profit is also right: like, maybe you also need strategic vision and ethics, but if you're not assessing expected future profits, it certainly seems to me that you're going to miss some things and go off the rails. [NB: I am more tied to the personal impact example than the business example, so I'd like to focus discussion in that thread, if it continues].

comment by Benquo · 2019-05-08T13:20:21.943Z · LW(p) · GW(p)

I think there's an unexplained leap here from the activity of trying to measure and compare things, to the assumption that you need a single metric.

It seems pretty reasonable, when assessing these options, to look at things like how many people (or other things you care about) would be directly affected by the project, what kind of effect it would have on them, and how big your piece of the project is. But given the very different nature of some of these projects, they'll end up in different units. You can then try to figure out which thing seems most appealing to you. But I would expect that you'll more often get a result you'd reflectively endorse, if you directly compared the different concrete outcomes and asked which seems better to you (e.g. "would I rather prevent twenty hogs from leading lives of pointless suffering, or help one person immigrate from a stable but somewhat poor place to a modern city?"), than if you converted them to some sort of abstract utility metric like QALYs and then compared the numbers. The concrete comparisons, by not compressing out the moral complexity of the tradeoffs, allow you extra opportunity to notice features of the decision you might otherwise miss, and get curious about details that might even help you generate improved options.

Another thing you miss if you try to collapse everything into a single metric, is the opportunity to simplify the decision in more reliable ways through causally modeling the relationship between different intermediate outcomes. For instance, some amount of gardening might affect your own well-being, such that spending a small amount of your time on a small garden might actually improve your ability to do other work - in that case, that choice is overdetermined. On the other hand, working on a charter city might affect how many other people get to make a nice garden, and depending on how much you care about the experience happening to you rather than other people, you might end up deciding that past some point of diminishing returns on productivity, your production of gardening-related experiences is more efficiently pursued by empowering others than by making a garden.

This kind of thinking can also help generate surprising opportunities you hadn't contemplated - the kind of work that goes into a garden-friendly charter city might help with other agendas like robustness to food supply disruption, or carbon capture. This is a bit of a toy example, but I hope you see the general point. Curiosity about the concrete details of different plans can be extremely valuable, comparing plans is a great opportunity to generate such curiosity, and totalizing metrics compress out all those opportunities.

Replies from: DanielFilan
comment by DanielFilan · 2019-05-14T17:59:21.723Z · LW(p) · GW(p)

I guess I'd first like to disagree with the implication that using a single metric implies collapsing everything into a single metric, without getting curious about details and causal chains. The latter seems bad, for the reasons that you've mentioned, but I think there are reasons to like the former. Those reasons:

  • Many comparisons have a large number of different features. Choosing a single metric that's a function of only some features can make the comparison simpler by stopping you from considering features that you consider irrelevant, and inducing you to focus on features that are important for your decision (e.g. "gardening looks strictly better than charter cities because it makes me more productive, and that's the important thing in my metric - can I check if that's actually true, or quantify that?").
  • Many comparisons have a large number of varying features. If you think that by default you have biases or, more generally, unendorsed subroutines that cause you to focus on features you shouldn't, it can be useful to think about them when constructing a metric, and then to use the metric in a way that 'crowds out' relevant biases (e.g. you might tie yourself to using QALYs if you're worried that by default you'll tend to favour interventions that help people of your own ethnicity more than you would consciously endorse). See Hanson's recent discussion of simple rules vs the use of discretion.
  • By having your metric be a function of a comparatively small number of features, you give yourself the ability to search the space of things you could possibly do by how those things stack up against those features, focussing the options you consider on things that you're more likely to endorse (e.g. "hmm, if I wanted to maximise QALYs, what jobs would I want to take that I'm not currently considering?" or "hmm, if I wanted to maximise QALYs, what systems in the world would I be interested in affecting, and what instrumental goals would I want to pursue?"). I don't see how to do this without, if not a single metric, then a small number of metrics.
  • Metrics can crystallise tradeoffs. If I'm regularly thinking about different interventions that affect the lives of different farmed animals, then after making several decisions, it's probably computationally easier for me to come up with a rule for how I tend to trade off cow effects vs sheep effects, and/or freedom effects vs pain reduction effects, than to make that tradeoff every time independently (a minimal sketch of such a rule follows this list).
  • Metrics help with legibility. This is less important in the case of an individual choosing career options to take, but suppose that I want to be GiveWell, and recommend charities I think are high-value, or I want to let other people who I don't know very well invest in my career. In that case, it's useful to have a legible metric that explains what decisions I'm making, so that other people can predict my future actions better, and so that they can clearly see reasons for why they should support me.
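
(A minimal sketch of what such a crystallised, legible rule might look like; the feature names and weights below are placeholders rather than anyone's endorsed exchange rates:)

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    """A hypothetical intervention, described only by the features the metric uses."""
    name: str
    cow_years_improved: float
    sheep_years_improved: float
    pain_reduction: float  # illustrative 0-1 scale

# Fixed weights crystallise the tradeoffs once, instead of re-deriving them for
# every pairwise comparison; publishing them also makes the rule legible to others.
# (Placeholder numbers.)
WEIGHTS = {"cow_years_improved": 1.0, "sheep_years_improved": 0.4, "pain_reduction": 3.0}

def score(x: Intervention) -> float:
    """Single metric: a fixed weighted sum over the chosen features."""
    return sum(weight * getattr(x, feature) for feature, weight in WEIGHTS.items())

options = [
    Intervention("cage-free campaign", 0.0, 0.0, 0.8),
    Intervention("pasture transition", 5.0, 2.0, 0.4),
]
print(max(options, key=score).name)  # the option the crystallised rule prefers
```
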
Replies from: Benquo
comment by Benquo · 2019-05-24T12:56:14.466Z · LW(p) · GW(p)

I don't mean to make a fully general argument against ever using metrics here. I don't think that using a metric is always a bad thing, especially in some of the more scope-limited examples you give. I do think that some of the examples you give make some unreasonable assumptions, though - let's go case by case.

Choosing a single metric that's a function of only some features can make the comparison simpler by stopping you from considering features that you consider irrelevant, and inducing you to focus on features that are important for your decision (e.g. "gardening looks strictly better than charter cities because it makes me more productive, and that's the important thing in my metric - can I check if that's actually true, or quantify that?").

I think most of the work here is figuring out which attributes you care about, but I agree that in many cases the best way to make comparisons on those attributes will be via explicitly quantified metrics.

If you think that by default you have biases or, more generally, unendorsed subroutines that cause you to focus on features you shouldn't, it can be useful to think about them when constructing a metric, and then to use the metric in a way that 'crowds out' relevant biases (e.g. you might tie yourself to using QALYs if you're worried that by default you'll tend to favour interventions that help people of your own ethnicity more than you would consciously endorse).

I think that a lot of worries about this kind of "bias" are only coherent in fiduciary situations; Hanson's examples are all about such situations, where we're trying to constrain the behavior of people to whom we've delegated trust. If you're deciding on your own account, then it can often be much better to resolve the conflict, or at least clarify what it's about, rather than trying to behave as though you're more accountable to others than you are.

By having your metric be a function of a comparatively small number of features, you give yourself the ability to search the space of things you could possibly do by how those things stack up against those features, focussing the options you consider on things that you're more likely to endorse (e.g. "hmm, if I wanted to maximise QALYs, what jobs would I want to take that I'm not currently considering?" or "hmm, if I wanted to maximise QALYs, what systems in the world would I be interested in affecting, and what instrumental goals would I want to pursue?"). I don't see how to do this without, if not a single metric, then a small number of metrics.

This seems completely unobjectionable - simple metrics (I'd use something much simpler than QALYs) can be fantastic for hypothesis generation, and it's a bad sign if someone claiming to care about scope doesn't seem to have done this.

Metrics can crystallise tradeoffs. If I'm regularly thinking about different interventions that affect the lives of different farmed animals, then after making several decisions, it's probably computationally easier for me to come up with a rule for how I tend to trade off cow effects vs sheep effects, and/or freedom effects vs pain reduction effects, than to make that tradeoff every time independently.
Metrics help with legibility. This is less important in the case of an individual choosing career options to take, but suppose that I want to be GiveWell, and recommend charities I think are high-value, or I want to let other people who I don't know very well invest in my career. In that case, it's useful to have a legible metric that explains what decisions I'm making, so that other people can predict my future actions better, and so that they can clearly see reasons for why they should support me.

This sounds great in principle, but my sense is that in practice almost no one actually does this based on a utilitarian metric because it would be ruinously computationally expensive. The only legible unified impact metrics GiveWell tracks are money moved and website traffic, for instance. Sometimes people pretend to use such metrics as targets to determine their actions, but the benefits of pretending to do a thing are quite different from the benefits of doing the thing.

comment by romeostevensit · 2019-05-04T00:11:29.759Z · LW(p) · GW(p)

This suggests to me that it's a good idea to power boost people who are in the upper echelons of competence in any given domain, but to be careful not to power boost them so much that they exit the domain they are currently in and try to play in a new, larger one where they are of more average competence. Sort of an anti-Peter principle. At least if the domain is important. For unimportant domains you probably do want to skim the competent people out and get them playing in a more important domain.

Replies from: Benquo
comment by Benquo · 2019-05-08T13:31:11.402Z · LW(p) · GW(p)

Sure, though people trying to do good work who aren't already trying to collapse everything into a single metric will sometimes just tell you how much power they think they can productively use (and how much power they can productively redistribute).

comment by habryka (habryka4) · 2019-05-04T02:08:09.388Z · LW(p) · GW(p)

Promoted to frontpage with caveats. The caveats are basically the same as for the other post you made, which I explained over there [LW(p) · GW(p)].