Legibility: Notes on "Man as a Rationalist Animal"

post by Hazard · 2020-06-08T02:18:01.935Z · LW · GW · 6 comments


Cross-posted on my roam-blog. This post is meant to be part of a series that explores the ideas in samzdat's The Uruk Series, and it also serves as a standalone introduction to the idea of legibility.

 

The first book that Lou brings in is Seeing Like a State by James C. Scott, and with it come the ideas of Legibility and High Modernism. High Modernism is a movement/aesthetic/frame-of-mind that peaked around the mid-1900s (we still live in its shadow), and legibility refers to the more abstract process that High Modernism represents.

Lou is mostly in the game of connecting the dots, and in introducing legibility he skips over a lot of the juicy details that appear in the book. I'd recommend Scott Alexander's book review of Seeing Like a State if you want to recover a lot of the concrete examples that ground out Lou's more abstract points. If you want the shortest possible intro to legibility and High Modernism, check out the ribbonfarm post on it. If all of those sound like too much reading, here are some highlights from all three sources.

Here is the recipe, from the ribbonfarm post:

  1. Look at a complex and confusing reality, such as the social dynamics of an old city
  2. Fail to understand all the subtleties of how the complex reality works
  3. Attribute that failure to the irrationality of what you are looking at, rather than your own limitations
  4. Come up with an idealized blank-slate vision of what that reality ought to look like
  5. Argue that the relative simplicity and platonic orderliness of the vision represents rationality
  6. Use authoritarian power to impose that vision, by demolishing the old reality if necessary
  7. Watch your rational Utopia fail horribly

From Scott Alexander:

The first part of the story is High Modernism, an aesthetic taste masquerading as a scientific philosophy. The High Modernists claimed to be about figuring out the most efficient and high-tech way of doing things, but most of them knew little relevant math or science and were basically just LARPing being rational by placing things in evenly-spaced rectangular grids.

James C. Scott himself describing High Modernism:

A strong, one might even say muscle-bound, version of the self-confidence about scientific and technical progress, the expansion of production, the growing satisfaction of human needs, the mastery of nature (including human nature), and above all, the rational design of social order commensurate with the scientific understanding of natural laws.

A definition that Scott Alexander responds to:

[insert James C. Scott's definition] …which is just a bit academic-ese for me. An extensional definition might work better: standardization, Henry Ford, the factory as metaphor for the best way to run everything, conquest of nature, New Soviet Man, people with college degrees knowing better than you, wiping away the foolish irrational traditions of the past, Brave New World, everyone living in dormitories and eating exactly 2000 calories of Standardized Food Product (TM) per day, anything that is For Your Own Good, gleaming modernist skyscrapers, The X Of The Future, complaints that the unenlightened masses are resisting The X Of The Future, demands that if the unenlightened masses reject The X Of The Future they must be re-educated For Their Own Good, and (of course) evenly-spaced rectangular grids.

Let's take a step back for a moment. It's easy to make the mistake of simplifying this to "Any government or top-down authority trying to change something is going to end up making things worse." That's why I'd really recommend reading Scott Alexander's post, so you can become familiar with the specific historical examples that ground this idea. James C. Scott and Lou both acknowledge that plenty of awesome things have come from some top-down plans (some of the urban planning that made it harder to be social in a city also made it less cramped and disease-ridden).

The thing that Lou seems to be worried about is not so much any given decision, but how decisions made through the lens of legibility cannot understand illegible counterarguments.

Some quotes from Lou:

At the same time, my concerns are not merely with the science we have (especially psychology, but w/e). I think a lot of the problem comes from “legibility” itself. Namely – legibility is a kind of *language*, and so any resistance that isn’t in it will look crazy. At best, that means we ignore it (now). But at worst, and especially with the assumption that we “know everything now” (implicitly or explicitly), that enables us to just do the exact same thing again.

another one:

The only way that I can see that one would is if there’s a genuine admission of ignorance on the part of the “scientists”. Even reconsidering one’s position requires some kind of counter-argument, and the problem here is that the counter-argument doesn’t make sense in the language of the scientists. Indeed – a solid counterargument requires some kind of power (you’re not going to listen to some random guy on the street screaming about [whatever]), but legibility inherently reduces that power.

and one last quote:

Legibility is a process of power, but it’s also a channel for it. People with epistemic knowledge can argue within civil society using it, but those outside cannot. The kinds of argument they make might be translatable, but they won’t know how to translate them (if we even have the rational knowledge to do so). Given that *when* power crushes someone, it *tends* to crush the powerless, we have a pretty serious problem: no one with power will even understand the powerless when they are being crushed.

This might confuse you if you equated legible with "something a very clever person could understand." As Scott Alexander points out, it's more about wanting everything to conform to a specific aesthetic format than about peak human intelligence/rationality.

The problem isn't that top down planners are going "Look, we've got this new urban design, and we know it will have some negative effects on how easy it is for people to socialize with their neighbors, but overall it should increase life expectancy by 10 years, so we think it's worth it." The problem is that it doesn't even look like a trade off in the eyes of legibility. "We made a great plan to increase life expectancy that has no downsides, and oh yeah, the plebs are complaining about 'friendship' or something, but they don't have any studies so why listen to them?"

Given all that, what happens if we don't have legible ways to quantify some of the aspects of life that are most important to people (love, happiness, meaning, friendship, belonging, etc.)? That's what Lou spends a lot of the rest of the series exploring.

6 comments


comment by Chris_Leong · 2020-06-08T08:36:19.266Z · LW(p) · GW(p)

I'm really keen to see the later posts in this series, since Lou's posts are often somewhat tricky to decrypt.

Replies from: Hazard
comment by Hazard · 2020-06-08T14:49:31.013Z · LW(p) · GW(p)

Cool! Yeah, I've gone over all of them a few times and started outlining this, but also lost steam and moved to other things. You noting interest is useful for me getting more out :)

comment by Kerry (ellardk@gmail.com) · 2020-06-09T01:52:00.152Z · LW(p) · GW(p)

Great points. Your last three paragraphs get at something especially important, and I agree with your characterization.

Replies from: Hazard
comment by Hazard · 2020-06-09T14:41:54.901Z · LW(p) · GW(p)

Thanks! I might want to make another post at some point that really digs into subtle differences between rationality and legibility. Because I think a lot of people's rationality is legibility. It's like the shadow side of rationality.

Replies from: ellardk@gmail.com
comment by Kerry (ellardk@gmail.com) · 2020-06-16T01:35:07.351Z · LW(p) · GW(p)

That sounds like a great post!

comment by romeostevensit · 2020-06-08T06:13:23.436Z · LW(p) · GW(p)

We internally bounce between over/under updating on top down priors/bottom up data, and the pendulum swing is not optimally dampened. As below, so above.