Separation of Concerns

post by abramdemski · 2019-05-23T21:47:23.802Z · LW · GW · 30 comments

Contents

  Epistemic vs Instrumental
  Argument vs Premises and Conclusion
  Preferences vs Bids
30 comments

Separation of concerns is a principle in computer science which says that distinct concerns should be addressed by distinct subsystems, so that you can optimize for them separately. We can also apply the idea in many other places, including human rationality. This idea has been written about before; I'm not trying to make a comprehensive post about it, just to remark on some things I recently thought about.
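To make the computer-science sense of the phrase concrete, here is a minimal sketch of what separating an epistemic concern from an instrumental concern could look like in code. Everything below (the Estimator/Planner names, the toy payoffs) is invented for illustration and not taken from the post: one component is optimized purely for accurate probabilities, the other purely for expected payoff given whatever probabilities it is handed.

    from collections import Counter


    class Estimator:
        """Epistemic concern: track how likely each outcome is, and nothing else."""

        def __init__(self):
            self.counts = Counter()

        def observe(self, outcome):
            self.counts[outcome] += 1

        def probability(self, outcome):
            total = sum(self.counts.values())
            return self.counts[outcome] / total if total else 0.0


    class Planner:
        """Instrumental concern: pick the action with the best expected payoff,
        trusting the Estimator's probabilities rather than bending them."""

        def __init__(self, estimator, payoffs):
            self.estimator = estimator
            self.payoffs = payoffs  # payoffs[action][outcome] -> value of that outcome

        def best_action(self):
            def expected_payoff(action):
                return sum(
                    self.estimator.probability(outcome) * value
                    for outcome, value in self.payoffs[action].items()
                )
            return max(self.payoffs, key=expected_payoff)


    # Usage: the Estimator can be improved (better calibration) without touching
    # the Planner, and vice versa; that independence is the point of the split.
    est = Estimator()
    for outcome in ["rain", "rain", "sun"]:
        est.observe(outcome)

    planner = Planner(est, {"umbrella": {"rain": 1.0, "sun": -0.1},
                            "no_umbrella": {"rain": -1.0, "sun": 0.5}})
    print(planner.best_action())  # prints "umbrella"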

Epistemic vs Instrumental

The most obvious example is beliefs vs desires. Although the distinction may not be a perfect separation-of-concerns in practice (or even in principle), at least I can say this:

I'm particularly thinking about how the distinction is used in conversation. If an especially sharp distinction isn't being made, you might see things like:

Notice that this isn't an easy distinction to make. It isn't right at all to just ignore conversational implicature. You should not only make literal statements, nor should you just assume that everyone else is doing that. The skill is more like, raise the literal content of words as a hypothesis; make a distinction in your mind between what is said and anything else which may have been meant.

Side note -- as with many conversation norms, the distinctions I'm mentioning in this post cannot be imposed on a conversation unilaterally. Sometimes simply pointing out a distinction works; but generally, one has to meet a conversation where it's at [LW · GW], and only gently try to pull it to a better place. If you're in a discussion which is strongly failing to make a true-vs-useful distinction, simply pointing out examples of the problem will very likely be taken as an attack, making the problem worse.

Making a distinction between epistemics and instrumentality seems like a kind of "universal solvent" for cognitive separation of concerns -- the rest of the examples I'm going to mention feel like consequences of this one, to some extent. I think part of the reason for this is that "truth" is a concept which has a lot of separation-of-concerns built in: it's not just that you consider truth separately from usefulness; you also consider the truth of each individual statement separately, which creates a scaffolding to support a huge variety of separation-of-concerns (any time you're able to make an explicit distinction between different assertions).

But the distinction is also very broad. Actually, it's kind of a mess -- it feels a bit like "truth vs everything else". Earlier, I tried to characterize it as "what's true vs what you want to be true", but taken literally, this only captures a narrow case of what I'm pointing at. There are many different goals which statements can optimize besides truth.

Simply put, there are a wide variety of incentives on beliefs and claims. There wouldn't even be a concept of 'belief' or 'claim' if we didn't separate out the idea of truth from all the other reasons one might believe/claim something, and optimize for it separately. Yet, it is kind of fascinating that we do this even to the degree that we do -- how do we successfully identify the 'truth' concern in the first place, and sort it out from all the other incentives on our beliefs?

Argument vs Premises and Conclusion

Another important distinction is to separate the evaluation of hypothetical if-then statements from any concern with the truth of their premises or conclusions. A common complaint the more logic-minded make about the less logic-minded is that hardly anyone is capable of properly distinguishing the claim "If X, then Y" from the claim "X, and also Y".
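As a toy illustration of the gap (a sketch I'm adding here, with Python booleans and the material conditional standing in, roughly, for the conversational "if"), the two claims come apart exactly when X is false:

    # "If X, then Y" read as the material conditional, vs "X, and also Y".
    # The material conditional is only a rough formal stand-in for the
    # conversational claim, but it shows where the two come apart.

    def if_then(x: bool, y: bool) -> bool:
        return (not x) or y        # true whenever X is false

    def x_and_y(x: bool, y: bool) -> bool:
        return x and y             # requires X to actually hold

    for x in (True, False):
        for y in (True, False):
            print(f"X={x!s:5} Y={y!s:5}  if-then={if_then(x, y)!s:5}  and={x_and_y(x, y)!s:5}")

In particular, "If X, then Y" can be true while X is false, which is exactly the case people lose track of when they hear it as an assertion of both.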

It could be that a lack of a very sharp truth-vs-implicature distinction is what blocks people from making an if-vs-and distinction. Why would you be claiming "If X, then Y" if not to then say "by the way, X; so, Y"? (There are actually lots of reasons, but, they're all much less common than making an argument because you believe the premises and want to argue the conclusion -- so, that's the commonly understood implicature.)

However, it's also possible to successfully make the "truth" distinction but not the "hypothetical" distinction. Hypothetical reasoning is a tricky skill. Even if you successfully make the distinction when it is pointed out explicitly, I'd guess that there are times when you fail to make it in conversation or private thought.

Preferences vs Bids

The main reason I'm writing this post is actually because this distinction hit me recently. You can say that you want something, or say how you feel about something, without it being a bid for someone to do something about it. This is both close to the overall topic of In My Culture and a specific example (like, listed as an example in the post).

Actually, let's split this up into cases:

Preferences about social norms vs bids for those social norms to be in place. This is more or less the point of the In My Culture article: saying "in my culture" before something puts a little distance between the conversation and the preferred norm, so that the norm is put on the table as an invitation rather than being perceived as a requirement.

Proposals vs preferences vs bids. Imagine a conversation about what restaurant to go to. Often, people run into a problem: no one has any preferences; everyone is fine with whatever. No one is willing to make any proposals. One reason why this might happen is that proposals, and preferences, are perceived as bids. No one wants to take the blame for a bad plan; no one wants to be seen as selfish or negligent of others' preferences. So, there's a natural inclination to lose touch with your preferences; you really feel like you don't care, and like you can't think of any options.

If a strong distinction between preferences and bids is made, it gets easier to state what you prefer, trusting that the group will take it as only one data point of many to be taken together. If a distinction between proposals and bids is made, it will be easier to list whatever comes to mind, and to think of places you'd actually like to go.

Feelings vs bids. I think this one comes less naturally to people who make a strong truth distinction -- there's something about directing attention toward the literal truth of statements which directs attention away from how you feel about them, even though how you feel is something you can also try to have true beliefs about. So, in practice, people who make an especially strong truth distinction may nonetheless treat statements about feelings as if they were statements about the things the feelings are about, precisely because they're hypersensitive to other people failing to make that distinction. So: know that you can say how you feel about something without it being anything more. Feeling angry about someone's statement doesn't have to be a bid for them to take it back, or a claim that it is false. Feeling sad doesn't have to be a bid for attention. An emotion doesn't even have to reflect your more considered preferences.

(To make this a reality, you probably have to explicitly flag that your emotions are not bids.)

When a group of people is skilled at making a truth distinction, certain kinds of conversation, and certain kinds of thinking, become much easier: all sorts of beliefs can be put out into the open where they otherwise couldn't, allowing the collective knowledge to go much further. Similarly, when a group of people is skilled at the feelings distinction, I expect things can go places where they otherwise couldn't: you can mention in passing that something everyone else seems to like makes you sad, without it becoming a big deal; there is sufficient trust that you can say how you are feeling about things, in detail, without expecting it to make everything complicated.

The main reason I wrote this post is that someone was talking about this kind of interaction, and I initially didn't see it as very possible or necessarily desirable. After thinking about it more, the analogy to making a strong truth distinction hit me. Someone stuck in a culture without a strong truth distinction might similarly see such a distinction as 'not possible or desirable': the usefulness of an assertion is obviously more important than its truth; in reality, being overly obsessed with truth will both make you vulnerable (if you say true things naively) and ignorant (if you take statements at face value too much, ignoring connotation and implicature); even if it were possible to set aside those issues, what's the use of saying a bunch of true stuff? Does it get things done? Similarly: the truth of the matter is more important than how you feel about it; in reality, stating your true feelings all the time will make you vulnerable and perceived as needy or emotional; even if you could set those things aside, what's the point of talking about feelings all the time?

Now it seems both are possible and simply good, for roughly the same reason. Having the ability to make distinctions doesn't require you to explicitly point out those distinctions in every circumstance; rather, it opens up more possibilities.

I can't say a whole lot about the benefits of a feelings-fluent culture, because I haven't really experienced it. This kind of thing is part of what circling [LW · GW] seems to be about, in my mind. I think the rationalist community as I've experienced it goes somewhat in that direction, but definitely not all the way.

30 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2019-05-23T22:35:28.539Z · LW(p) · GW(p)

Separation of concerns is a principle in computer science which says that distinct concerns should be addressed by distinct subsystems, so that you can optimize for them separately.

Since separation of concerns is obviously only applicable to bounded agents, it seems like someone who is clearly optimizing for epistemic rationality as a separate concern is vulnerable to being perceived as lacking the ability to optimize for instrumental rationality in an end-to-end way, and perhaps vulnerable to marketing that promises to teach how to optimize for instrumental rationality in an end-to-end way. Also most people are already running evolved strategies that don't (implicitly) value epistemic rationality very highly as a separate concern, so it seems like you need to say something about why they should give up such strategies (which evolution has presumably poured huge amounts of compute into optimizing for), not just point to the principle in computer science (where humans are forced into separation of concerns due to lack of sufficient computing power).

It seems like there are two potential arguments one could make in this regard:

  1. Our environment has changed and the strategies that evolution came up with aren't very good anymore.
  2. Evolution only optimizes for individual rationality, and what we care about or need is group rationality, and separation of concerns is a good way to achieve that in the absence of superhuman amounts of compute.
Replies from: jessica.liu.taylor, jessica.liu.taylor, abramdemski
comment by jessicata (jessica.liu.taylor) · 2019-05-24T01:37:38.092Z · LW(p) · GW(p)

Evolution only optimizes for individual rationality, and what we care about or need is group rationality, and separation of concerns is a good way to achieve that in the absence of superhuman amounts of compute.

This seems very clearly true, such that it seems strange to use "evolution produces individuals who, in some societies, don't seem to value separate epistemic concerns" as an argument.

Well-functioning societies have separation of concerns, such as into different professions (as Plato described in his Republic). Law necessarily involves separation of concerns, as well (the court is determining whether someone broke a law, not just directly what consequences they will suffer). Such systems are created sometimes, and they often degrade over time (often through being exploited), but can sometimes be repaired.

If you notice that most people's strategies implicitly don't value epistemic rationality as a separate concern, you can infer that there isn't currently a functioning concern-separating social structure that people buy into, and in particular that they don't believe that desirable rule of law has been achieved.

Replies from: abramdemski
comment by abramdemski · 2019-05-24T19:00:16.707Z · LW(p) · GW(p)

In an environment with very low epistemic-instrumental separation of concerns, an especially intelligent individual actually has an incentive to insulate their own epistemics (become good at lying and acting), so that they can think. Optimizing for true beliefs then becomes highly instrumentally valuable (modulo the cost of keeping up the barrier / the downside risk if discovered).

Also, still thinking about the environment where the overarching social structure doesn't support much separation of concerns, there's still a value to be had in associating with individuals who often naively speak the truth (not the liar/actor type), because there's a lot of misaligned preferences floating around. Your incentives for bending the truth are different from another person's, so you prefer to associate with others who don't bend the truth much (especially if you internally separate concerns). So, separation of concerns seems like a convergent instrumental goal which will be bubbling under the surface even in a dysfunctional superstructure.

Of course, both effects are limited in their ability to encourage truth, and it's a little like arguing that cooperation in the prisoner's dilemma is a convergent instrumental subgoal bubbling under the surface of a defective equilibrium.

comment by jessicata (jessica.liu.taylor) · 2019-05-24T01:16:07.713Z · LW(p) · GW(p)

Since separation of concerns is obviously only applicable to bounded agents, it seems like someone who is clearly optimizing for epistemic rationality as a separate concern is vulnerable to being perceived as lacking the ability to optimize for instrumental rationality in an end-to-end way

No one believes anyone else to be an unbounded agent, so how is the concern with being perceived as a bounded agent relevant? A bounded agent can achieve greater epistemology and instrumental action by separating concerns, and there isn't a reachable upper bound.

comment by abramdemski · 2019-05-24T03:08:33.895Z · LW(p) · GW(p)

I think a lot of words have been spent on this debate elsewhere, but all I feel like citing is biases against overcoming bias. The point it mentions about costs accruing mostly to you is related to the point you made about group rationality. The point about not knowing how to evaluate whether epistemic rationality is useful without developing epistemic rationality -- while it is perhaps intended as little more than a cute retort, I take it fairly seriously; it seems to apply to specific examples I encounter.

My recent view on this is mostly: but if you actually look, doesn't it seem really useful to be able to separate these concerns? Overwhelmingly so?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-05-24T05:25:09.242Z · LW(p) · GW(p)

Both you and Jessica seem to have interpreted me as arguing against separation of concerns w.r.t. epistemic and instrumental rationality, which wasn't really my intention. I'm actually highly in favor of separation of concerns in this regard, and was just reporting a train of thought that was triggered by your "Separation of concerns is a principle in computer science" statement.

I didn't follow much of the earlier debates about instrumental vs epistemic rationality (in part because I think something like curiosity/truth/knowledge is part of my terminal values so I'd personally want epistemic rationality regardless) so apologies if I'm retreading familiar ground.

Replies from: abramdemski
comment by abramdemski · 2019-05-24T18:42:04.887Z · LW(p) · GW(p)

Yeah, I later realized that my comment was not really addressing what you were interested in.

I read you as questioning the argument "separation of concerns, therefore, separation of epistemic vs instrumental" -- not questioning the conclusion, which is what I initially responded to.

I think separation-of-concerns just shouldn't be viewed as an argument in itself (ie, identifying some concerns which you can make a distinction between does not mean you should separate them). That conclusion rests on many other considerations.

Part of my thinking in writing the post was that humans have a relatively high degree of separation between epistemic and instrumental even without special scientific/rationalist memes. So, you can observe the phenomenon, take it as an example of separation-of-concerns, and think about why that may happen without thinking about abandoning evolved strategies.

Sort of like the question "why would an evolved species invent mathematics?" -- why would an evolved species have a concept of truth? (But, I'm somewhat conflating 'having a concept of truth' and 'having beliefs at all, which an outside observer might meaningfully apply a concept of truth to'.)

comment by jessicata (jessica.liu.taylor) · 2019-05-24T01:11:36.275Z · LW(p) · GW(p)

I strongly agree that separation of concerns is critical, and especially the epistemic vs. instrumental separation of concerns.

There wouldn’t even be a concept of ‘belief’ or ‘claim’ if we didn’t separate out the idea of truth from all the other reasons one might believe/claim something, and optimize for it separately.

This doesn't seem quite right. Even if everyone's beliefs are trying to track reality, it's still important to distinguish what people believe from what is true (see: Sally-Anne test). Similarly for claims. (The connection to simulacra [LW · GW] is pretty clear; there's a level-1 notion of a belief (i.e. a property of someone's world model, the thing controlling their anticipations and which they use to evaluate different actions), and also higher-level simulacra of level-1 beliefs)

Moreover, there isn't an obvious decision-theoretic reason why someone might not want to think about possibilities they don't want to come true (wouldn't you want to think about such possibilities, in order to understand and steer away from them?). So, such perceived incentives are indicative of perverse anti-epistemic social pressures, e.g. a pressure to create a positive impression of how one's life is going regardless of how well it is actually going.

Replies from: abramdemski, shminux
comment by abramdemski · 2019-05-24T19:13:24.042Z · LW(p) · GW(p)

I agree, it's not quite right. Signalling equilibria in which mostly 'true' signals are sent can evolve in the complete absence of a concept of truth, or even in the absence of any model-based reasoning behind the signals at all. Similarly, beliefs can manage to be mostly true without any explicit modeling of what beliefs are or a concept of truth.

What's interesting to me is how the separation of concerns emerges at all.

Moreover, there isn't an obvious decision-theoretic reason why someone might not want to think about possibilities they don't want to come true (wouldn't you want to think about such possibilities, in order to understand and steer away from them?). So, such perceived incentives are indicative of perverse anti-epistemic social pressures, e.g. a pressure to create a positive impression of how one's life is going regardless of how well it is actually going.

It does seem like it's largely that, but I'm fairly uncertain. I think there's also a self-coordination issue (due to hyperbolic discounting and other temporal inconsistencies). You might need to believe that a plan will work with very high probability in order to go through with every step rather than giving in to short-term temptations. (Though, why evolution crafted organisms which do something-like-hyperbolic-discounting rather than something more decision-theoretically sound is another question.)
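For reference, one standard way to write the contrast (my notation, not from the thread): an exponential discounter values a reward R at delay t as

    V_{\text{exp}}(t) = R\,\delta^{t}

which implies time-consistent preferences, while a hyperbolic discounter values it roughly as

    V_{\text{hyp}}(t) = \frac{R}{1 + k t}

which falls off steeply at short delays and so produces the preference reversals ("giving in to short-term temptations") mentioned above.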

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-05-24T21:09:02.402Z · LW(p) · GW(p)

You might need to believe that a plan will work with very high probability in order to go through with every step rather than giving in to short-term temptations.

Why doesn't conservation of expected evidence apply? (How could you expect thinking about something to predictably shift your belief?)
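For reference, a minimal statement of the identity being invoked (conservation of expected evidence), with H the hypothesis and E the evidence one might observe:

    P(H) = P(E)\,P(H \mid E) + P(\neg E)\,P(H \mid \neg E)

That is, the prior already equals the expectation of the posterior, so no line of inquiry can be expected in advance to shift one's credence in a predictable direction.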

Replies from: abramdemski
comment by abramdemski · 2019-05-28T21:46:56.191Z · LW(p) · GW(p)

In the scenario I'm imagining, it doesn't apply because you don't fully realize/propagate the fact that you're filtering evidence for yourself. This is partly because the evidence-filtering strategy is smart enough to filter out evidence about its own activities [LW · GW], and partly just because agency is hard [LW · GW] and you don't fully propagate everything by default.

I'm intending this mostly as an 'internal' version of "perverse anti-epistemic social pressures". There's a question of why this would exist at all (since it doesn't seem adaptive). My current guess is, some mixture of perverse anti-epistemic social pressures acting on evolutionary timelines, and (again) "agency is hard" -- it's plausible that this kind of thing emerges accidentally from otherwise useful mental architectures, and doesn't have an easy and universally applicable fix.

comment by Shmi (shminux) · 2019-05-24T08:13:22.106Z · LW(p) · GW(p)

I don't understand the OP's point at all, but just wanted to remark on

there isn't an obvious decision-theoretic reason why someone might not want to think about possibilities they don't want to come true

There absolutely are reasons like that. Beliefs affect "reality", like in the folk theorem. If everyone believes that everyone else cooperates, then everyone would cooperate. (And defectors get severely punished.)


Replies from: abramdemski, jessica.liu.taylor
comment by abramdemski · 2019-05-24T19:30:28.193Z · LW(p) · GW(p)
I don't understand the OP's point at all, but

If I had to summarize: "Talking about feelings is often perceived as a failure of separation-of-concerns by people who are skilled at various other cognitive separations-of-concerns; but, it isn't necessarily. In fact, if you're really good at separation-of-concerns, you should be able to talk about feelings a lot more than otherwise. This is probably just a good thing to do, because people care about other people's feelings."

Replies from: shminux
comment by Shmi (shminux) · 2019-05-25T01:09:14.428Z · LW(p) · GW(p)

Ah, that makes sense. Talking about feelings, to a degree, is essential to being human and being relatable. If anything, people's minds are 90% or more about feelings.

comment by jessicata (jessica.liu.taylor) · 2019-05-24T08:15:31.838Z · LW(p) · GW(p)

Considering a possibility doesn't automatically make you believe it. Why not think about the different possible Nash equilibria in order to select the best one?

Replies from: shminux
comment by Shmi (shminux) · 2019-05-24T15:20:55.963Z · LW(p) · GW(p)

Yep, thinking about different possibilities changes reality. In this particular case, it makes it worse, since mutual cooperation (super-rationality, twin prisoner's dilemma, etc.) has by definition the highest payoff in symmetric games.

Replies from: Dagon
comment by Dagon · 2019-05-24T18:31:52.817Z · LW(p) · GW(p)

Wait. Some thoughts enable actions, which can change reality. Some thoughts may be directly detectable and thereby change reality (say, pausing before answering a question, or viewers watching an fMRI as you're thinking different things). But very few hypothetical and counterfactual thoughts in today's humans actually affect reality in either of these ways.

Are you claiming that someone who understands cooperation and superrationality can change reality by thinking more about it than usual, or just that knowledge increases the search space and selection power over potential actions?


Replies from: abramdemski
comment by abramdemski · 2019-05-24T19:25:09.891Z · LW(p) · GW(p)

In practice, a lot of things about one person's attitudes toward cooperation 'leak out' to others (as in, are moderately detectable). This includes reading things like pauses before making decisions, which means that merely thinking about an alternative can end up changing the outcome of a situation.

comment by Gordon Seidoh Worley (gworley) · 2019-05-24T22:30:11.867Z · LW(p) · GW(p)

The natural argument against this is of course that separation is an illusion. I don't say that to sound mysterious; I mean it just in the simple way that everything is tangled up together, each thing dependent on the others for its existence, and it's only in our models that clean separation can exist, and then only by ignoring some parts of reality in order to keep our models clean.

As a working programmer, I'm very familiar with the original context of the idea of separation of concerns, and I can also tell you that even there it never totally works. It's a tool we use to help us poor humans who can't fathom the total, complete, awesome complexity of the world to get along well enough anyway to collect a paycheck. Or something like that.

Relatedly, every abstraction is leaky, and if you think it isn't you just haven't looked hard enough.

None of that is to say we shouldn't respect the separation of concerns when it's useful, only that we shouldn't elevate it beyond what it deserves, because the separation is a construction of our minds, not a natural feature of the world.

Replies from: DoubleFelix, Slider
comment by DoubleFelix · 2019-07-11T17:42:48.949Z · LW(p) · GW(p)

Responding to this on the point of feelings/bids/etc:

One problem I run into a lot is that I want to just say X, but it's so freakin' difficult to do so without also accidentally saying Y. The default solution is to put a boatload of effort into crafting what I'm saying to evoke the correct response in the other person, but this is difficult and failure-prone. So if we don't go that route, in order to communicate what I mean to communicate and not also something else, it takes some cooperation between both parties — I promise that I just mean what I'm saying, and not other possibly-inferable implications, and the other party agrees to take what I say at face value. (And if any implications come up that seem important, assume they aren't attempts at manipulation, by default, and ask about them directly if you want to know what they think of that unsaid thing)

Actually doing that is difficult for both parties, but when both give it a good effort it enables some things that are super difficult to communicate otherwise, and I suspect doing this frequently makes it much easier with practice.

Replies from: ChristianKl
comment by ChristianKl · 2019-08-14T07:52:21.171Z · LW(p) · GW(p)

This approach seems to assume that you can create the message that you want to send from a blank slate. Plenty of times you can't communicate what you want to communicate because what you want to communicate isn't really true and it's easy for the other party to see that.

comment by Slider · 2019-05-25T03:48:18.997Z · LW(p) · GW(p)

If your code compiles into one program it's literally one system.

Replies from: ChristianKl
comment by ChristianKl · 2019-08-13T14:16:49.984Z · LW(p) · GW(p)

That seems to be an argument that if the program works on my machine it should also work elsewhere. In reality there's often some weird quirk that means that the abstractions that the program uses don't generalize everywhere the same way.

Replies from: Slider
comment by Slider · 2019-08-13T16:24:58.839Z · LW(p) · GW(p)

I would count that as compiling in to two things.

But the point I was after is that every kind of separation we make will in the end be undone. In that integration the leakiness will make itself apparent.

If you have a reference implementation then your "rule" can't leak, because the code just is what it is; what would be a bug or inconsistency can be redefined to be a feature. But any kind of specification that is not an instantiation doesn't contain enough information to construct an instantiation, yet programs are instances; thus instances contain more information than the abstract rules.

comment by Vladimir_Nesov · 2019-05-23T23:19:05.379Z · LW(p) · GW(p)

Ideas, beliefs, feelings, norms, and preferences can be elucidated/reformulated, ascertained/refuted, endorsed/eschewed, reinforced/depreciated, or specialized/coordinated, not just respectively but in many combinations. Typically all these distinctions are mixed together. Naming them with words whose meaning is already common knowledge might be necessary to cheaply draw attention to relevant distinctions. Otherwise it takes too long to frame the thought, at which point most people have already left the conversation.

comment by Ben Pace (Benito) · 2020-12-11T05:43:26.827Z · LW(p) · GW(p)

Simple and fundamental points, explained clearly.

comment by habryka (habryka4) · 2019-07-02T04:47:51.332Z · LW(p) · GW(p)

Promoted to curated: I've referred to this post a few times since it got posted, and generally think this is a clear explanation of an underlying intuition for many social norms that I would like to see in a good rationalist culture, as well as a bunch of ideas that also apply to individual rationality.

comment by Slider · 2019-05-24T10:37:49.039Z · LW(p) · GW(p)

I have found that caring about one kind of distinction makes you care about others. That is, when I primarily care about technical accuracy I can pinpoint in my thoughts what is and what is not technically accurate. When there are clusters of non-technicalities they often have similar symptoms of what needs to be done to them to acquire technical accuracy. That is, I start to model them as "the enemy" and "get inside their head" in order to effectively resist them. Often this leads to an insight where, when you give the other motivation its due, it more easily yields the primary motivation that I care about. If I am thinking about a politically charged topic and I find my thoughts are not technically accurate, I can start naming emotions, and usually the technical accuracy is easier to find. Undifferentiated they would interfere, but explicitly treated, their pull works only in their own "worksphere".

Replies from: abramdemski
comment by abramdemski · 2019-05-24T19:21:12.438Z · LW(p) · GW(p)

Significant if true & applicable to other people. I'm a bit skeptical -- I sort of think it works like that, but, sort of think it would be hard to notice places where this strategy failed.

Replies from: Slider
comment by Slider · 2019-05-24T20:24:11.839Z · LW(p) · GW(p)

Because this strategy relies on you having behaviours that you don't understand, it includes periods where you have low introspection, which could potentially be dangerous. In general, if you encounter a behaviour you exhibit but don't understand, you could let it grow so you can examine it, or you could cut/kill it because it doesn't fulfill your current alignment criteria (behaviours "too stubborn to be killed" are a common failure mode, though).