Simplicio and Sophisticus

post by Zvi · 2018-07-22T13:30:00.333Z · LW · GW · 1 comment


Previously (Slate Star Codex): The Whole City is Center

Epistemic Status: [image: the "two Spider-Men pointing at each other" meme]

Note: after writing most of this, I checked and found that Sniffnoy anticipated much of it in the comments, but I think both takes are necessary.

There are many useful points to Scott’s philosophical dialogue, The Whole City is Center, between Simplicio and Sophisticus. I want to point out an extra one I think is important.

Here’s a short summary of some key points of disagreement they have.

Simplicio claims that there are words people use to describe concepts, and we should use those words to describe those concepts, even if those words have unfortunate implications. Say true things about the world. Larry is lazy.

Sophisticus says no, if those words have unfortunate implications we shouldn’t use them. And in many cases, where the unfortunate implications are inevitable because people have those implications about the concept being described, we shouldn’t use any word at all to describe the concept. Larry can be counted on not to do things. But we shouldn’t treat lazy as a thing, because people think being lazy is bad and there’s no utility in thinking Larry is bad.

Simplicio says we should use whatever techniques work, regardless of whether they are negative reinforcement, positive reinforcement, before the act, after the act, too big, too small, you name it, if that’s the system that works. And if people’s natural instincts are to do things that work best as a system, but are sometimes ‘overkill’ or have unfortunate side effects in a particular case, you should accept that.

Sophisticus says no. Studies show negative reinforcement doesn’t work, so don’t do it. Studies show harsher prisons don’t deter people, so don’t use them. You should only use exactly what is needed to cause a direct effect in each situation. Or, if you need to use deterrence, what the evidence says will actually deter people.

Sophisticus says, we should look upon motivations like ‘I want this person to suffer’ with horror, and assume something has gone horribly wrong. (He makes no comment on feeling ‘I want this particular person to be happy’, which doesn’t come up.)

Simplicio says, if having seemingly unreasonable desires in some situations, including potential future situations, is the way persons and groups get better results, stop looking at it as some crazy or horrible thing. People’s motivations are messy; they have lots of weird side effects, like loving kittens (I would note, so much so that I am punished for not loving them, basically because not having the bad side effects of a thing is evidence of not having the thing itself). Going all ‘these instincts seem superficially nice so we’re going to approve, and these instincts seem superficially not nice so we’re going to disapprove’ seems wrong.

Sophisticus says that, by refusing to use concepts like lazy, he has a value disagreement with Simplicio and those who do use the lazy concept, because those people embrace the implications.

Simplicio says no, this isn’t about value disagreement.

But then, near the end, Sophisticus catches Simplicio by saying he’s refusing in context to use the term ‘value difference’ because he doesn’t like its implications, and insisting only upon some Platonic ideal version of value difference. Which, Sophisticus says, makes him a hypocrite! Rather than point out either that no, it doesn’t, or that maybe it does and you get non-zero points for noticing, but asking people to never be hypocrites is not a valid move, he instead gets so embarrassed he flees town, and only redeems himself ten years later by pointing out what happens if you reject the unfortunate implications of the term ‘city center’.

The most important lesson is, as Sniffnoy observes, the characters have the wrong names. Simplicio should be Sophisticus. Sophisticus should be Simplicio.

(I will continue to refer to them by Scott’s names here.)

Sophisticus wants to solve the world by getting rid of all the things he doesn’t like, and all the things he can’t properly quantify. He only accepts actions that are based on fully described and measured reasons. He will accept second or third order causes and consequences, but only and exactly those with well-described and quantified causal pathways.

Then he says that such actions are intelligent, sophisticated and advanced. They reject the irrational, the non-scientific. So they denigrate people who think otherwise with labels like Simplicio, and pretend that word doesn’t have unfortunate implications. Because it’s never all right to label people in ways that have unfortunate and false implications (e.g. that a person is simple or stupid), unless you catch someone labeling people.

Simplicio accepts that the world is complex, and that our systems for dealing with it are approximations and sets of rules and values that won’t always do the locally optimal thing, and that doesn’t mean they’re wrong. Simplicio is comfortable with the idea that correlations and associations exist even when we don’t like them.

Sophisticus is what Nassim Taleb calls the Intellectual, Yet Idiot (IYI). By doing things that are more abstract, and discarding most of the valid and useful information and relationships, they fool themselves and others into thinking that they are smarter and more sophisticated. Simplicio is advocating for Taleb’s typical grandmother, who has learned what actually works and survives, even if she doesn’t understand all the reasons or implications.

Sophisticus is vastly simplifying the world.

He simplifies the world by cutting out the parts he does not like, and the parts he does not understand.

This allows him to create a model of the world. That’s great! That’s super useful! I love me some models, and you can’t have models without throwing a lot of stuff out. Often the model gives much better answers despite this, and allows us to learn much and make better decisions. What makes a model great is that when you get rid of all the fuzziness, you get rid of a lot of noise, and you can manipulate and do math on what is left. Over time, you can add more stuff back into the model, and make it more sophisticated.

When you start thinking in models, or like a rationalist, or an economist, either in general or about a particular thing, that kind of thinking starts out deeply, deeply stupid. You must count on your other ways of thinking to contain the damage and point out the mistakes, to avoid taking these stupid conclusions too seriously, rather than as additional perspectives, as points of departure and future development, and places to learn. It goes way beyond Knowing About Biases Can Hurt People.

Drop stuff from your model, and you fail to understand or optimize for those things. If you then optimize based on your model, the things you left out of the model will be left out, and sacrificed, because they’re using optimization pressure and atoms that can be used for something else. The results might or might not be an improvement. As the optimizations get more extreme, we should expect bigger disruptions and sacrifices of key excluded elements, so that had better be worth it.
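As a toy sketch of that point (hypothetical numbers, purely illustrative, not from the post): if the model tracks only a proxy value and an important variable is excluded, optimizing hard on the model can pick exactly the option that sacrifices the excluded thing.

```python
# Hypothetical illustration: optimizing on a model that omits a variable.
# Each candidate action has a modeled value (what the model tracks)
# and an excluded value (what was left out of the model).
candidates = [
    # (name, modeled_value, excluded_value)
    ("balanced",   5,  5),
    ("proxy_max", 10, -8),
    ("cautious",   3,  4),
]

def true_value(candidate):
    """True outcome includes the part the model dropped."""
    _name, modeled, excluded = candidate
    return modeled + excluded

# Optimizing only on the modeled value picks the option that
# most heavily sacrifices the excluded element.
proxy_choice = max(candidates, key=lambda c: c[1])
best_choice = max(candidates, key=true_value)

print(proxy_choice[0], true_value(proxy_choice))  # the model's pick does worse
print(best_choice[0], true_value(best_choice))    # than the overall best option
```

The harder you optimize on the proxy, the more the gap between the model's pick and the true best option tends to grow, which is the "bigger disruptions and sacrifices" worry above.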

One danger is that many people who develop the models either do so because they are really bad at navigating without models, or because they realized how bad everyone is at navigating without models. This provides motivation to work on the models even if they aren’t yet any good, but it also increases temptation to forget that the model is a map and not the territory.

I think this is related to how those who found businesses are, as a group, completely delusional about their chances of success, but also how founding a business is generally a very good idea. Only that kind of delusion motivates the long-term investment and the endurance of high costs, even if many more people would be better off in the long run if they did it.

The struggle is: how does one combine these two approaches? Build up one’s models and toolboxes to allow systematic thinking, without losing the power of what you’re ignoring, while slowly incorporating that stuff into your systematic thinking. Otherwise, no matter how simplistic the average person might be, you risk being even more so.

1 comment

comment by a gently pricked vein (strangepoop) · 2018-07-22T21:53:52.909Z · LW(p) · GW(p)

"... natural science has shown a curious mixture of rationalism and irrationalism. Its prevalent tone of thought has been ardently rationalistic within its own borders, and dogmatically irrational beyond those borders. In practice such an attitude tends to become a dogmatic denial that there are any factors in the world not fully expressible in terms of its own primary notions devoid of further generalization. Such a denial is the self-denial of thought."

- A.N. Whitehead, Process and Reality

I can't really tell yet, but David Chapman's work seems to be trying to hint at this phenomenon all the time. See his How to Think Real Good, for example, even if you don't agree with his characterization of Bayesian rationality. There's also Fixation and Denial, where he goes into some failure modes when dealing with hard-to-fully-formalize things. Meta-rationality seems to be mostly about this, AFAICT.

I have to say, most of Chapman's stuff feels like pure lampshading, ie acknowledging that there is a problem and then simply moving on. I suppose he's building up to more practical advice.

If you're getting frustrated (I certainly am) that all everyone seems to be doing about this is offering loose and largely unhelpful tips, I think that's something Alan Perlis anticipated: "One can't proceed from the informal to the formal by formal means."

(Of course, that's just another restatement of the fact that there is a problem.)