Simulacra and Subjectivity

post by Benquo · 2020-03-05T16:25:10.430Z · LW · GW · 33 comments

This is a link post for http://benjaminrosshoffman.com/simulacra-subjectivity/


In Excerpts from a larger discussion about simulacra, following Baudrillard, Jessica Taylor and I laid out a model of simulacrum levels with something of a fall-from-grace feel to the story:

  1. First, words were used to maintain shared accounting. We described reality intersubjectively in order to build shared maps, the better to navigate our environment. I say that the food source is over there, so that our band can move towards or away from it when situationally appropriate, or so people can make other inferences based on this knowledge.
  2. The breakdown of naive intersubjectivity - people start taking the shared map as an object to be manipulated, rather than as part of their own subjectivity. For instance, I might say there's a lion somewhere I know there's food, in order to hoard access to that resource for idiosyncratic advantage. Thus, the map drifts from reality, and we start dissociating from the maps we make.
  3. When maps drift far enough from reality, in some cases people aren't even parsing the map as though it had a literal specific objective meaning that grounds out in some verifiable external test outside of social reality. Instead, the map becomes a sort of command language for coordinating actions and feelings. "There's food over there" is construed as a bid to move in that direction, and evaluated as such. Any argument for or against the implied call to action is conflated with an argument for or against the proposition literally asserted. This is how arguments become soldiers [LW · GW]. Any attempt to simply investigate the literal truth of the proposition is considered at best naive and at worst politically irresponsible.
    But since this usage is parasitic on the old map structure that was meant to describe something outside the system of describers, language is still structured in terms of reification and objectivity, so it substantively resembles something with descriptive power, or "aboutness." For instance, while you cannot acquire a physician’s privileges and social role simply by providing clear evidence of your ability to heal others, those privileges are still justified in terms of pseudo-consequentialist arguments about expertise in healing.
  4. Finally, the pseudostructure itself becomes perceptible as an object that can be manipulated, the pseudocorrespondence breaks down, and all assertions are nothing but moves in an ever-shifting game where you're trying to think a bit ahead of the others (for positional advantage), but not too far ahead.

There is some merit to this linear treatment, but it obscures an important structural feature: the resemblance between levels 1 and 3, and between levels 2 and 4.

Another way to think about it is that in levels 1 and 3, speech patterns are authentically part of our subjectivity. Just as babies are confused if you show them something that violates their object permanence assumptions, and a good rationalist is more confused by falsehood than by truth [LW · GW], people operating at simulacrum level 3 are confused and disoriented if a load-bearing social identity or relationship is invalidated.

Likewise, levels 2 and 4 are similar in nature - they consist of nothing more than taking levels 1 and 3 respectively as object (i.e. something outside oneself to be manipulated) rather than as subject (part of one's own native machinery for understanding and navigating one's world). We might name the levels:

Simulacrum Level 1: Objectivity as Subject (objectivism, or epistemic consciousness)

Simulacrum Level 2: Objectivity as Object (lying)

Simulacrum Level 3: Relating as Subject (power relation, or ritual magic)

Simulacrum Level 4: Relating as Object (chaos magic, hedge magic, postmodernity) [1]

I'm not attached to these names and suspect we need better ones. But in any case this framework should make it clear that there are some domains where what we do with our communicative behavior is naturally "level 3" and not a degraded form of level 1, while in other domains level 3 behavior has to be a degenerate form of level 1.[2]

Much body language, for instance, doesn't have a plausibly objective interpretation, but is purely relational, even if evolutionary psychology can point to objective qualities we're sometimes thereby trying to signal. Sometimes we're just trying to stay in rhythm with each other, or project good vibes.


[1] Some chaos magicians have attempted to use the language of power relation (gods, rituals, etc) to reconstruct the rational relation between map and territory, e.g. Alan Moore's Promethea. The postmodern rationalist project, by contrast, involves constructing a model of relational and postrelational perspectives through rational epistemic means.

[2] A prepublication comment by Zack M. Davis that seemed pertinent enough to include:

Maps that reflect the territory are level 1. Coordination games are "pure" level 3 (there's no "right answer"; we just want to pick strategies that fit together). When there are multiple maps that fit different aspects of the territory (political map vs. geographic map vs. globe, or different definitions of the same word), but we want to all use the SAME map in order to work together, then we have a coordination game on which map to use. To those who don't believe in non-social reality, attempts to improve maps (Level 1) just look like lobbying for a different coordination equilibrium (Level 4): "God doesn't exist" isn't a nonexistence claim about deities; it's a bid to undermine the monotheism coalition and give their stuff to the atheism coalition.
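To make the quoted coordination-game framing concrete, here is a minimal sketch, assuming a toy two-player payoff where players gain only by agreeing on the same map; the map names and payoffs are illustrative, not part of the original discussion.

    # Minimal sketch of a "pure" coordination game over which map to use.
    # Map names and payoffs are illustrative, not from the original discussion.

    maps = ["political", "geographic", "globe"]

    def payoff(choice_a: str, choice_b: str) -> int:
        """Players gain only by coordinating on the same map; within the game
        itself, no map is intrinsically the 'right answer'."""
        return 1 if choice_a == choice_b else 0

    # Every matching pair is an equilibrium: given the other player's choice,
    # unilaterally switching maps can only lower your payoff.
    for a in maps:
        for b in maps:
            print(f"{a:>10} / {b:<10} -> payoff {payoff(a, b)}")

In this toy setup, arguing over which map to adopt is not a dispute about the territory at all, which is what distinguishes it from a level 1 disagreement.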

33 comments


comment by DirectedEvolution (AllAmericanBreakfast) · 2021-12-28T09:15:35.275Z · LW(p) · GW(p)

Frames that describe perception can become tools for controlling perception.

The idea of simulacra has been generative here on LessWrong, used by Elizabeth in her analysis of negative feedback [LW · GW], and by Zvi in his writings on Covid-19 [LW · GW]. It appears to originate in private conversations between Benjamin Hoffman and Jessica Taylor. The four simulacra levels or stages are a conception of Baudrillard’s, from Simulacra and Simulation. The Wikipedia summary quoted on the original blog post between Hoffman and Taylor has been reworded several times by various authors and commenters.

We should approach this writing with a gut-level understanding of what motivates these authors to so passionately defend “level one” speech, and why they find simulacra to be so threatening as to describe them in Biblical terms. For Hoffman and Taylor, simulacra are “corruption of discourse,” “destroying the language,” “wireheading,” comparable to a speculative bubble, a “fall from grace.” For Elizabeth, they cause “more conflict or less information in the world,” and she “wanted a button [she] could push to make everyone go to level one all the time.” The “game” is an “enemy,” a “climate” that “punishes” honesty. Zvi’s relentlessly cynical attitude about speech he figures as simulacra is well-known to readers of his Covid-19 analysis.

It is transparently obvious to me that there are some problems in which perceptions matter enormously, others in which perceptions matter not at all, and still others in which perceptions and underlying facts are equally important and dependent upon one another. Problems of perception operate in a complex but ultimately lawful manner. It is possible to develop a correct gears-level model of perceptions and use it to predict and control them to advantage. As Elizabeth points out, however, this perpetuates the necessity of doing so. It may be more advantageous to build relationships and institutions that at least resemble the sort that we would build if we placed a great deal of value on preserving our capacity for honesty.

Many people claim to have a bullshit detector, but nobody has one that I can borrow. Most analysis of simulacra is a description of untruths. While this can be important, it mostly motivates me to seek predictive models, identify honest and reliable experts, make factual and logical arguments, and advocate for perceptions that allow us to focus on solving practical problems. It’s never wise to overanalyze nonsense, and I hope that the rationalist community can continue to focus less on the thousands of things that are not and should not be, and more on what should be, what can be, and what is.

Replies from: Raemon
comment by Raemon · 2021-12-30T03:26:55.886Z · LW(p) · GW(p)

It’s never wise to overanalyze nonsense, and I hope that the rationalist community can continue to focus less on the thousands of things that are not and should not be, and more on what should be, what can be, and what is.

I'm not actually sure what it is you're prescribing here. Which things seem like nonsense to you? Which things did you mean by "overanalyzing nonsense", and which things would you mean by "focus on what should be / can be / what is"?

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-12-30T07:05:24.207Z · LW(p) · GW(p)

Simulacra levels 2-4, and especially 3-4, are ways of characterizing speech as not meaning what it literally appears to mean. Analysis of simulacra often seems to spend a lot of time trying to highlight these quotes, characterize exactly how they depart from literal truth, and assert an imprecise and unverifiable reason the speech appears as it is. The level 2+ speech such analysis is criticizing is what I am referring to here as “nonsense,” because that is how it is being taken by the critics. I rarely if ever find that my ability to predict the behavior of the speakers or institutions they belong to is enhanced by these criticisms, characterizations of simulacra levels, and vague speculations.

Analyzing nonsense can be a way to motivate discussion of what does make sense. “Here’s the nonsense the ‘experts’ or ‘leaders’ are saying, here’s a quick explanation of why it’s nonsense, and now here’s a more sensible interpretation of what’s going on.”

But going deep into characterizing just why this particular form of nonsense is the way it is, and what type of nonsense it is, rapidly becomes unconvincing to me.

A better maxim might be “only a fool analyzes blitz,” referring to chess games played out in just a few minutes. I think that a combination of pressure, time constraints, and ego leads to people saying things that seem to them intuitively like the most sensible thing to say at that time. Just as analyzing a blitz chess move that seemed sensible with a computer often reveals a fatal flaw, so analyzing blitz speech reveals all sorts of foolishness. The simulacra theory invites us to think really hard about why certain bad chess moves seemed superficially compelling to the player of a blitz game. I don’t think there’s much we can learn from that, though.

By contrast, attempts to describe how we can better measure, predict, and control the physical world in morally good ways, including the social world, seem fruitful.

Replies from: Raemon
comment by Raemon · 2021-12-30T07:57:11.471Z · LW(p) · GW(p)

Hmm. So on one hand, I think it's reasonable to argue that all the Simulacra stuff hasn't made much of a legible case for it actually being a model with explanatory power.

But, to say "there's nothing to explain" or "it's not worth trying" seems pretty wrong. If we're reliably running into particular kinds of nonsense (and we seem to be), knowing what's generating the nonsense seems important both for predicting/navigating the world, and for helping us not fall prey to it. (Maybe your point there is that "steering towards goodness" is better than "steering away from badness", which seems plausibly true, but a) I think we need at least some model of badness, b) there are places where, say, Simulacrum Level 3 might actually be an important coordination strategy [LW · GW])

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-12-30T17:09:59.310Z · LW(p) · GW(p)

I haven’t seen these analyses of definitions and causes done with rigor. It also seems very hard to achieve rigor in these analyses, given that the information about individual psychology and the sociology of specific institutions that we’d need to do so successfully is hard to come by.

As such, the tack these authors take is often not to attempt such a rigorous analysis, but instead to go straight from their current model, composed of guesswork, to activist claims about how to improve the world and about the level of destruction caused by that guesswork-based model.

The analysis, then, seems to be of a guesswork-based, ill-defined model with limited predictive power or falsifiability, involving a lot of arguing with organizations and people you perceive as propagandists for empirically and morally wrong views. It also seems to involve a tendency to discourage behaviors that could disconfirm the assumptions.

But I don’t want to tear into it too deeply. I recognize that simulacra levels point at something real. I also think that doing this too much would be hypocritical.

If I saw more attempts to falsify the model or use it to make predictions, I’d be happier with it.

comment by Jay Molstad (jay-molstad) · 2020-03-28T11:21:24.761Z · LW(p) · GW(p)

I'll add that this is a cycle; Stage 5 is Stage 1. People operating in Stage 4 are paying very little attention to objective reality. Accordingly, their objective situation is usually deteriorating; competitors operating at lower levels gradually eat their lunch without them really noticing. The cycle restarts when objective conditions deteriorate to the point that they can no longer be ignored and the complicated games of social signaling are abandoned. To extend Strawperson's comment:

Level 1: "There's a lion across the river." = There's a lion across the river.
Level 2: "There's a lion across the river." = I don't want to go (or have other people go) across the river.
Level 3: "There's a lion across the river." = I'm with the popular kids who are too cool to go across the river.
Level 4: "There's a lion across the river." = A firm stance against trans-river expansionism focus grouped well with undecided voters in my constituency.

Level 5/Level 1: "There's a lion right here" = There's a lion right here (We really should have been paying more attention to the actual lion and focus groups no longer seem important).

comment by habryka (habryka4) · 2020-03-21T00:53:06.166Z · LW(p) · GW(p)

Promoted to curated: It took me a really long time to wrap my head around the simulacra-level idea, but after a few weeks of engaging with it passively, it clicked, and it's now something that I've been using quite a bit in my modeling, and have brought up in conversations quite a few times. 

This post is still not amazing at explaining the different levels, but it seems better than the posts that came before it. And I do think the concept is important, and worth pointing more people at, so I am curating it.

comment by Jacob Falkovich (Jacobian) · 2020-05-20T14:16:39.467Z · LW(p) · GW(p)

Let me know if this matches — the way I understand it is that level 3 is often about signaling belonging to a group, and level 4 is about shaping how well different belonging signals work.

So:

Level 1: "Believe all women" = If a woman accuses someone of sexual assault, literally believe her.

Level 2: "Believe all women" = I want accusations of sexual assault to be taken more seriously.

Level 3: "Believe all women" = I'm part of the politically progressive tribe that takes sexual assault seriously.

Level 4: "Believe all women" = Taking sexual assault seriously should be a more important signal of political progressivism than other issues.

Level 5: "Believe all women" = But actually take sexual assault seriously even if it becomes opposed to political progressivism because Biden.

comment by FactorialCode · 2020-03-05T21:46:20.397Z · LW(p) · GW(p)

1-3 I can wrap my head around. But can you provide some examples of what level 4 looks like?

Replies from: Benquo, daniel-kokotajlo, mr-hire
comment by Benquo · 2020-03-11T14:50:06.972Z · LW(p) · GW(p)

Level 1 (referential, or epistemic): Generating a business plan and financial projections as an integral part of the process by which you decide whether your startup is worth trying.

Level 2 (lying): Publicizing a business plan you don't expect to carry out, to obscure your secret plan to pivot to something that would undercut existing power, so they don't crush you immediately.

Level 3 (relating): Coming up with a business plan so you feel like a Real Business, and feel less impostor anxiety, and then feeling a need to rationalize whatever future decisions you make as somehow Part of the Plan.

Level 4 (magical): Coming up with a business plan and financial projections because that's something venture capitalists want, and no one thinks that anyone else cares whether the plan is literally true or even possible, it's just one of the theatrical hoops you gotta jump through for the vibe to feel right.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-03-05T23:17:30.192Z · LW(p) · GW(p)

Yeah, I don't really get the difference between levels 3 and 4.

comment by Matt Goldenberg (mr-hire) · 2020-03-05T22:15:38.633Z · LW(p) · GW(p)

At the risk of being political because I couldn't think of a better example:

Consider people advocating for/against laws on late trimester abortions.

AFAICT, there's only a very small minority that believes that most abortion is OK but late trimester abortion is not. For most people making these arguments, the facts about pain etc. are just pieces in a game of trying to get more or fewer restrictions placed on abortion. Everyone is just trying to find the facts that move things closer to their particular position (allowing or not allowing abortion).


Edit: Actually, this is more like stage 3.

Replies from: Strawperson
comment by Strawperson · 2020-03-06T23:09:50.738Z · LW(p) · GW(p)

As far as I understand, at level 3 ostensibly factual statements are instrumentalized in the service of ideological concerns (ideology is the deciding agent), whereas at level 4 ideology itself becomes a malleable object that is instrumentalized in the service of the pursuit of power (in the limit case, Moloch is the deciding agent). At level 3, what matters is that your side is winning, at level 4, what matters is that you're on the winning side.

Level 1: "There's a lion across the river." = There's a lion across the river.
Level 2: "There's a lion across the river." = I don't want to go (or have other people go) across the river.
Level 3: "There's a lion across the river." = I'm with the popular kids who are too cool to go across the river.
Level 4: "There's a lion across the river." = A firm stance against trans-river expansionism focus grouped well with undecided voters in my constituency.

I too have trouble thinking of a non-political real-life example (professional politics, at least by reputation, very much seems to be a level 4 discipline), so feel free to disregard what follows, but a striking example would be some hypothetical ex-Soviet functionary whose career trajectory dictated seamlessly shifting between being a communist in the 80's, a liberal democrat in the 90's and early 2000's, and a conservative nationalist by the 2010's.

Replies from: Yoav Ravid, pktechgirl, mr-hire
comment by Yoav Ravid · 2020-08-07T17:48:25.417Z · LW(p) · GW(p)

At level 3, what matters is that your side is winning, at level 4, what matters is that you're on the winning side.


What a fantastic distinction, thank you.

comment by Elizabeth (pktechgirl) · 2020-04-12T02:19:49.049Z · LW(p) · GW(p)

I've talked about simulacra levels with Ben a ton and this comment is the single most helpful thing in understanding them or explaining to others.

Replies from: Strawperson
comment by Strawperson · 2020-04-30T11:31:16.553Z · LW(p) · GW(p)

Thanks! I appreciate the feedback, and I'm glad to hear my thoughts were in the right direction and helpful to others.

comment by Matt Goldenberg (mr-hire) · 2020-03-06T21:21:59.932Z · LW(p) · GW(p)

Yeah that helps a lot.

comment by Matt Goldenberg (mr-hire) · 2020-03-05T22:25:38.504Z · LW(p) · GW(p)

I really like these levels. In addition to viewing this as a progression of knowledge, I think it's also possible to treat people as existing at different levels in terms of both how they can see the board, and how they act within various local incentive structures. For instance, there are people who will take everything at face value, people who will always lie, and people who view everything as a game piece.

One of the reasons it's so important to create robust sociopath repellents is that without them, the default is that people playing at higher levels will locally outcompete people playing at lower levels, whereas groups that can play internally at lower levels will outcompete groups that play internally at higher levels. You need organizations that provide incentives, deterrents, and screening mechanisms such that people at Kegan 3, 4 and 5 are all incentivized to play at level 1 internally, even if they can see the board at level 4.

I think a lot of Zvi's recent sequence on Moral Mazes [? · GW] was asking if we can overcome the local incentives that cause individuals who play at higher levels to beat individuals who play at lower levels.

comment by Raemon · 2021-12-20T20:13:00.684Z · LW(p) · GW(p)

This post went in a similar direction as Daniel Kokotajlo's 2x2 Simulacrum grid [LW(p) · GW(p)]. It seems to have a "medium amount of embedded worldmodel", contrasted with some of Zvi's later simulacra writing (which I think bundles a bunch of Moral-Maze-ish considerations into Simulacrum 4) and Daniel's grid-version (which is basically unopinionated about where the levels came from).

I like that this post notes the distinction between domains where Simulacrum 3 is a degenerate form of level 1, vs domains where Simulacrum 3 is the "natural" form of expression.

comment by Dagon · 2020-03-05T18:15:44.867Z · LW(p) · GW(p)

A topic that came up last time and I didn't see a great answer for: where does truth fit in here? Does the idea of rent-paying beliefs have any intersection with this framework?

Replies from: Benquo, Dagon
comment by Benquo · 2020-03-05T22:44:03.374Z · LW(p) · GW(p)

Level 1, objectivity, is trying to describe the territory accurately.

Replies from: Dagon, mr-hire
comment by Dagon · 2020-03-06T01:53:54.093Z · LW(p) · GW(p)

Wait, really? That's totally not how I read it. I thought the simulacra levels were divergence between public and private beliefs. People start realizing that the 'shared map' is chosen for reasons other than correspondence with territory, and begin to explicitly model the map-sharing processes separate from map-validating ones.

Replies from: Raemon
comment by Raemon · 2020-03-06T02:11:05.306Z · LW(p) · GW(p)

That process is incremental and I think the part you just described (where people "realize the shared map is chosen for reasons other than correspondence") is what's going on at level 3-4. 

Replies from: Dagon
comment by Dagon · 2020-03-06T04:21:04.027Z · LW(p) · GW(p)

But really, how does this framework work when the level-1 beliefs are false? One example is a church-heavy township where everyone does actually believe their god is real (level 1, private and public beliefs match) and over time people start to question, but not publicly (level 2), then start to find reasons that religion was a useful cohesive belief, without actually believing it (level 3?).

Is there a framework for staying in level 1, but being less wrong, or including other's beliefs in your level-1 model without getting stuck in higher levels where you forget that there IS a truth?

Replies from: Raemon
comment by Raemon · 2020-03-06T22:10:21.373Z · LW(p) · GW(p)

The intent of level-1, as I understand it, is you just say "this seems false?" and they say "why?" and you say "because X", and that either works or doesn't because of object level beliefs about the world. (i.e. people at level 1 have an understanding of having been mistaken)

Replies from: Dagon
comment by Dagon · 2020-03-06T22:21:22.113Z · LW(p) · GW(p)

I think I'm still confused, or maybe stuck at a low (or maybe high! unsure how to use this...) level. I do my best for my private maps and models to be predictive of future experiences. I have no expectation that I can communicate these private beliefs very well to most of humanity. I am quite willing to understand other individuals' and groups' statements of belief as a mix of signaling, social cohesion, manipulation, and true beliefs. I participate in communication acts for all of these purposes as well.

Does this mean I'm simultaneously at different levels for different purposes?

Replies from: Benquo
comment by Benquo · 2020-03-11T15:00:17.503Z · LW(p) · GW(p)

Does this mean I’m simultaneously at different levels for different purposes?

There's an important difference between:

(1) Participating in fictions or pseudorepresentative communication (i.e. bullshit) while being explicitly aware of it (at least potentially, like if someone asked you whether it meant anything you'd give an unconfused answer). This is a sort of reflective, rational-postmodernist level 1.

(2) Adjusting your story for nonepistemic reasons but feeling compelled to rationalize them in a consistent way, which makes your nonepistemic narratives sticky, and contaminates your models of what's going on. This is what Rao calls clueless in The Gervais Principle.

(3) Acting from a fundamentally social metaphysics like a level 3/4 player, willing to generate sophisticated "logical" rationales where convenient, but not constraining your actions based on your story. This is what cluster thinking cashes out as, as far as I can tell.

Replies from: Dagon
comment by Dagon · 2020-03-11T16:23:11.388Z · LW(p) · GW(p)

Hmm. I still suspect I'm more fluid than is implied in these models. I think I'm mostly a mix of cluster thinking (I recognize multiple conflicting models, and shift my weights between them for private beliefs, while using a different set of weights for public beliefs (because shifting others' beliefs is relative to my model of their current position, not absolute prediction levels - Aumann doesn't apply to humans)), and I do recognize that I will experience only one future, which I call "objective", and that's pretty much rational-postmodernist level 1. I watch for #2, but I'm sure I'm sometimes susceptible (stupid biological computing substrate!).

comment by Matt Goldenberg (mr-hire) · 2020-03-05T23:49:39.407Z · LW(p) · GW(p)

But in another sense someone who takes level 1 as subject has whole swathes of the territory that they can't see, namely all the people who are operating at levels 2, 3, and 4.

Replies from: Benquo
comment by Benquo · 2020-03-11T15:03:45.568Z · LW(p) · GW(p)

Someone at level 1 is going to take longer to learn how to get along in a level 3 or 4 environment than a level 3 or 4 player, but is capable of knowing about them, while people who are level 3/4 players at core can't really know about anything. They can acquire know-how by doing, but not know-about, insofar as their language is nonepistemic.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-03-11T17:16:20.270Z · LW(p) · GW(p)

I don't think that squares with the subject/object interpretation you're offering here though. If I can take level one as object, I can use it, manipulate it, and know it in all the ways that someone who is subject to it can.

It seems to me that one can take each level as subject or object, without necessarily having taken the previous level as subject/object. That might mean that the "stages of subject/object shifts" you're pointing at here are less useful.

I know people who can't really grok other people playing in social realms, but are really good at sensemaking with other people who can take level one as object.

I also know people who can play social games a bunch, but are bad at object-level knowing.

I also know people who understand level 4 game playing through a level one lens, viewing it as another aspect of the territory.

And I know people who understand the level 1 lens, but basically use it to manipulate level 4 social reality to get their way, rather than seeing it as the thing that's important in its own right.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-03-11T17:19:58.285Z · LW(p) · GW(p)

So a stab at a model that can handle more complexity might be two factors of:

  1. What levels you can take as object.
  2. What levels do you most frequently use as your primary sensemaking apparatus.
comment by Dagon · 2020-03-09T20:19:45.462Z · LW(p) · GW(p)

Another way of exploring my confusion: can you give an example where this model was more predictive or made better recommendations than a simpler "public communication is an idiosyncratic mix of correct beliefs, incorrect beliefs, intentional signaling and unintentional signaling" explanation?