Scissors Statements for President?

post by AnnaSalamon · 2024-11-06T10:38:21.230Z · LW · GW · 31 comments

(Epistemic status: I spoke simply / without "appears to" hedges, but I'm not sure of this at all.)

I’m confused why we keep getting scissors statements as our Presidential candidates, but we do.  (That is: the candidates seem to break many minds/communities.)

A toy model:[1]

Take two capacities, A and B, ideally anti-correlated across voters: each voter can see one axis clearly and is blind to the other.

Craft two candidates:

  • Candidate X: a disaster on axis A (visible to anyone who can see A), fine on axis B.
  • Candidate Y: a disaster on axis B (visible to anyone who can see B), fine on axis A.

Now let voters talk.

“How can you possibly vote for X, given how it’ll make a disaster on axis A?”, asks Susan.  (She is B-blind, which is part of why she is so confused/irate/loud here.)  Susan inquires in detail.  She (accurately) determines the staunchest X-voters don't understand A, and (understandably, but incorrectly) concludes that this explains their X-voting, that they have nothing to teach her, and that she should despair of working well with anyone who voted for Candidate X.

“How can you possibly vote for Y, given how it’ll make a disaster on axis B?”, asks Robert.  He, too, inquires in detail.  And he (accurately) determines that the staunchest Y-voters have a key basic blind spot where he and his friends/neighbors have sense.  He feels a sense of closure ("okay, it's not that they know something I don't know"), and despairs of working well with anyone who voted for Y.

The thing that annoys me about this process is that, in its wake, it is harder for both sets of voters to heal their own blind spots.  “Being able to see A accurately” is now linked up socially and verbally [? · GW] with “being one of the people who refuse to acknowledge B” (and vice versa).  (This happens because the ontology has been seized by the scissors-statement crafters – there is a common, salient, short word that means both “A matters” and “B is fake,” and people end up using it in their own heads, and, while verifying a real truth they can see, locking in a blind spot they can’t see.)
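A minimal sketch of the toy model as code, in case it's clarifying (the electorate size, the 50/50 capacity split, and the names are illustrative assumptions of the sketch, not claims about real voters):

```python
import random

random.seed(0)  # reproducible toy run

# Candidate X is a disaster on axis A; candidate Y is a disaster on axis B.
DISASTER_AXIS = {"X": "A", "Y": "B"}

def make_voter():
    """Each voter can see exactly one axis, and is blind to the other."""
    return random.choice(["A", "B"])

def vote(capacity):
    """Vote against the candidate whose disaster you can actually see."""
    return "Y" if DISASTER_AXIS["X"] == capacity else "X"

voters = [make_voter() for _ in range(1000)]
ballots = [vote(capacity) for capacity in voters]

# For each candidate, measure how many of its supporters are blind to that
# candidate's disaster axis: in this toy setup it is always 100%.  Each side
# "inquires in detail", accurately finds the other side blind, and then
# (incorrectly) concludes the blindness is the whole story of their vote.
for candidate, axis in DISASTER_AXIS.items():
    supporters = [v for v, b in zip(voters, ballots) if b == candidate]
    blind_share = sum(v != axis for v in supporters) / len(supporters)
    print(f"{candidate}-voters blind to axis {axis}: {blind_share:.0%}")
```

Every vote here is locally sensible given what the voter can see; the mutual "they have nothing to teach me" verdicts fall out of the blindness structure, not out of anyone's values.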

  1. ^

    This is a toy model for how the "scissors-ness" works, not for why some process is crafting us candidates like that.  I don't have a guess [LW · GW] about that part.  Though I like these [LW · GW] articles [LW · GW].

31 comments

Comments sorted by top scores.

comment by Benquo · 2024-11-06T11:59:48.188Z · LW(p) · GW(p)

X and Y are cooperating to contain people who object-level care about A and B, and recruit them into the dialectic drama. X is getting A wrong on purpose, and Y is getting B wrong on purpose, as a loyalty test. Trying to join the big visible org doing something about A leads to accepting escalating conditioning to develop the blind spot around B, and vice versa.

X and Y use the conflict as a pretext to expropriate resources from the relatively uncommitted. For instance, one way to interpret political polarization in the US is as a scam for the benefit of people who profit from campaign spending. War can be an excuse to subsidize armies. Etc.

I wrote about this here: http://benjaminrosshoffman.com/discursive-warfare-and-faction-formation/

comment by AnnaSalamon · 2024-11-06T11:01:36.388Z · LW(p) · GW(p)

If we can get good enough models of however the scissors-statements actually work, we might be able to help more people be more in touch with the common humanity of both halves of the country, and more able to heal blind spots.

E.g., if the above model is right, maybe we could tell at least some people "try exploring the hypothesis that Y-voters are not so much in favor of Y, as against X -- and that you're right about the problems with Y, but they might be able to see something that you and almost everyone you talk to are systematically blinded to about X."

We can build a useful genre-savviness about common/destructive meme patterns and how to counter them, maybe.  LessWrong is sort of well-positioned to be a leader there: we have analytic strength, and aren't too politically mindkilled.

Replies from: Seth Herd
comment by Seth Herd · 2024-11-06T20:57:51.564Z · LW(p) · GW(p)

I think this idea is worth exploring. The first bit seems pretty easy to convey and get people to listen to:

"try exploring the hypothesis that Y-voters are not so much in favor of Y, as against X -- and that you're right about the problems with Y...

But the second bit

... but they might be able to see something that you and almost everyone you talk to are systematically blinded to about X."

sounds like a very bitter pill to swallow, and therefore hard to get people to listen to.

I think motivated reasoning [? · GW] effects turn our attention quickly away from ideas we think are "bad" on an emotional level. These might be thought of as low-level ugh fields [? · GW] around those concepts. Steve Byrnes' excellent work on valence in the brain and mind [LW · GW] can be read as an explanation for motivated reasoning and resulting polarization, and I highly recommend doing so. I had reached essentially identical conclusions after some years of studying cognitive biases from the perspective of brain mechanisms, but I haven't yet gotten around to the substantial task of writing it up well enough to be useful. I think it's by far the most important cognitive bias. Scott Alexander says in his review of The Scout Mindset:

Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias...

I think this is half right; motivated reasoning overlaps highly with confirmation bias, since most (all) of the things we believe currently are things we think are good to believe. But it's subtly different, particularly when we think about how to work around it, either in our own minds or when communicating with others. 

For instance, deep canvassing appears to sidestep motivated reasoning by focusing on a personal connection, and to actually work to change minds on political issues (at least acceptance of LGBTQ issues), and, according to the scant available data, it seems like the best known method for actually changing beliefs. It works on an emotional level, presenting no arguments, just a pleasant conversation with someone from the group in question. It lets people do the work of changing their own minds - as an honest, rational approach should. The specifics of deep canvassing might be limited to opinions about groups, but its success might be a guide to developing other approaches. Not directly asking someone to consider adopting a belief they dislike on an instinctive/unconscious level seems like a sensible starting point.

 

Applying that to your specific proposal: Perhaps something more like "Y-voters are not so much in favor of Y, as against X ...  You probably agree that X can be a problem; they're just estimating it as way worse than you are. Here are some things they're worried about. Maybe they're wrong that those things could easily happen, but you can see why they'd want to prevent those things from happening."

 This might work for some people, since they don't actually like the possible consequences, and don't have strong beliefs about the abstract or complex theories of how those very bad outcomes might come to pass.

That might still set off emotional/valence alarms if it brings up the concept of giving ground to one's opponents.

 

Anyway, I think it's possible to create useful political/cognitive discourse if it's done carefully and with an understanding of the psychological forces involved. I'd be interested in being involved if some LWers want to workshop ideas along these lines.

comment by TsviBT · 2024-11-06T11:06:59.967Z · LW(p) · GW(p)

IDK, but I'll note that IME, calling for empathy for "the other side" (in either direction) is received with incuriosity / indifference at best, often hostility.

One thing that stuck with me is one of those true-crime YouTube videos, where at some stage of the interrogation, the investigator stops being nice, and instead will immediately and harshly contradict anything that the suspect Bob is saying to paint a story where he's innocent. The commentator claimed that the reason the investigator does this is to avoid giving Bob confidence: if Bob's statements hung in the air unchallenged, Bob might think he's successfully creating a narrative and getting that narrative bought. Even if the investigator is not in danger of being fooled (e.g. because she already has video evidence contradicting some of Bob's statements), Bob might get more confident and spend more time lying instead of just confessing.

A conjecture is that for Susan, empathizing with Robert seems like giving room for him to gain more political steam; and the deeper the empathy, the more room you're giving Robert.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-06T18:01:00.702Z · LW(p) · GW(p)

I like your conjecture about Susan's concern about giving Robert steam.

I am hoping that if we decode the meme structure better, Susan could give herself and Robert steam re: "maybe I, Susan, am blind to some thing, B, that matters" without giving steam to "maybe A doesn't matter, maybe Robert doesn't have a blind spot there."  Like, maybe we can make a more specific "try having empathy right at this part" request that doesn't confuse things the same way.  Or maybe we can make a world where people who don't bother to try that look like schmucks who aren't memetically savvy, or something.  I think there might be room for something like this?

Replies from: TsviBT, nathan-helm-burger
comment by TsviBT · 2024-11-07T04:49:40.575Z · LW(p) · GW(p)

IIUC, I agree with your vision being desirable. (And, IDK, it's sort of plausible that you can basically do it with a good toolbox that could be developed straightforwardly-ish.)

But there might be a gnarly, fundamental-ish "levers problem" here:

  • It's often hard to do [the sort of empathy whereby you see into your blindspot that they can see]
  • without also doing [the sort of empathy that leads to you adopting some of their values, or even blindspots].

(A levers problem is analogous to a buckets problem, but with actions instead of beliefs. You have an available action VW which does both V and W, but you don't have V and W available as separate actions. V seems good to do and W seems bad to do, so you're conflicted, aahh.)

I would guess that what we call empathy isn't exactly well-described as "a mental motion whereby one tracks and/or mirrors the emotions and belief-perspective of another". The primordial thing--the thing that comes first evolutionarily and developmentally, and that is simpler--is more like "a mental motion whereby one adopts whatever aspects of another's mind are available for adoption". Think of all the mysterious bonding that happens when people hang out, and copying mannerisms, and getting a shoulder-person, and gaining loyalty. This is also far from exactly right. Obviously you don't just copy everything, it matters what you pay attention to and care about, and there's probably more prior structure, e.g. an emphasis on copying aspects that are important for coordinating / synching up values. IDK the real shape of primordial empathy.

But my point is just: Maybe, if you deeply empathize with someone, then by default, you'll also adopt value-laden mental stances from them. If you're in a conflict with someone, adopting value-laden mental stances from them feels and/or is dangerous.

To say it another way, you want to entertain propositions from another person. But your brain doesn't neatly separate propositions from values and plans. So entertaining a proposition is also sort of questioning your plans, which bleeds into changing your values. Empathy good enough to show you blindspots involves entertaining propositions that you care about and that you disagree with.

Or anyway, this was my experience of things, back when I tried stuff like this.

Replies from: AnnaSalamon, ExCeph
comment by AnnaSalamon · 2024-11-07T08:44:41.724Z · LW(p) · GW(p)

Thanks; I love this description of the primordial thing; I had not noticed it this clearly/articulately before, and it is helpful.

Re: why I'm hopeful about the available levers here: 

I'm hoping that, instead of Susan putting primary focal attention on Robert ("how can he vote this way, what is he thinking?"), Susan might be able to put primary focal attention on the process generating the scissors statements: "how is this thing trying to trick me and Robert, how does it work?"

A bit like how a person watching a commercial for sugary snacks, instead of putting primary focal attention on the smiling person on the screen who seems to desire the snacks, might instead put primary focal attention on "this is trying to trick me."  

(My hope is that this can become more feasible if we can provide accurate patterns for how the scissors-generating-process is trying to trick Susan(/Robert).  And that if Susan is trying to figure out how she and Robert were tricked, by modeling the tricking process, this can somehow help undo the trick, without needing to empathize at any point with "what if candidate X is great.")

Replies from: TsviBT
comment by TsviBT · 2024-11-08T06:04:17.559Z · LW(p) · GW(p)

My hope is that this can become more feasible if we can provide accurate patterns for how the scissors-generating-process is trying to trick Susan(/Robert). And that if Susan is trying to figure out how she and Robert were tricked, by modeling the tricking process, this can somehow help undo the trick, without needing to empathize at any point with "what if candidate X is great."

This is clarifying...

Does it actually have much to do with Robert? Maybe it would be more helpful to talk with Tusan and Vusan, who are also B-blind, A-seeing, candidate Y supporters. They're the ones who would punish non-punishers of supporting candidate X / talking about B. (Which Susan would become, if she were talking to a B-seer without pushing back, let alone if she could see into her B-blindspot.) You could talk to Robert about how he's embedded in threats of punishment for non-punishment of supporting candidate Y / talking about A, but that seems more confusing? IDK.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-11T06:39:18.667Z · LW(p) · GW(p)

You raise a good point that Susan’s relationship to Tusan and Vusan is part of what keeps her opinions stuck/stable.

But I’m hopeful that if Susan tries to “put primary focal attention on where the scissors comes from, and how it is working to trick Susan and Robert at once”, this’ll help with her stuckness re: Tusan and Vusan.  Like, it’ll still be hard, but it’ll be less hard than “what if Robert is right” would be.

Reasons I’m hopeful:

I’m partly working from a toy model in which (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all used to be members of a common moral community, before it got scissored.  And the norms and memories of that community haven’t faded all the way.

Also, in my model, Susan’s fear of Tusan’s and Vusan’s punishment isn’t mostly fear of e.g. losing her income or other material-world costs.  It is mostly fear of not having a moral community she can be part of.  Like, of there being nobody who upholds norms that make sense to her and sees her as a member-in-good-standing of that group of people-with-sensible-norms.

Contemplating the scissoring process… does risk her fellowship with Tusan and Vusan, and that is scary and costly for Susan.

But:

  • a) Tusan and Vusan are not *as* threatened by it as if Susan had e.g. been considering more directly whether Candidate X was good.  I think.
  • b) Susan is at least partially compensated for her partial-risk-of-losing-Tusan-and-Vusan by the hope/memory of the previous society that (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all shared, which she has some hope of reaccessing here.
  • b2) Tusan and Vusan are maybe also a bit tempted by this, which on their simpler models (since they’re engaging with Susan’s thoughts only very loosely / from a distance, as they complain about Susan) renders as “maybe she can change some of the candidate X supporters, since she’s discussing how they got tricked”
  • c) There are maybe some remnant-norms within the larger (pre-scissored) community that can appreciate/welcome Susan and her efforts.

I’m not sure I’m thinking about this well, or explicating it well.  But I feel there should be some unscissoring process?

Replies from: TsviBT
comment by TsviBT · 2024-11-12T13:39:30.906Z · LW(p) · GW(p)

I think you might have been responding to

Susan could try to put focal attention on the scissor origins; but one way that would be difficult is that she'd get pushback from her community.

which I did say in a parenthetical, but I was mainly instead saying

Susan's community is a key substrate for the scissor origins, maybe more than Susan's interaction with Robert. Therefore, to put focal attention on the scissor origins, a good first step might be looking at her community--how it plays the role of one half of a scissor statement.

Your reasons for hope make sense.

hope/memory of the previous society that (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all shared, which she has some hope of reaccessing here

Anecdata: In my case it would be mostly a hope, not a memory. E.g. I don't remember a time when "I understand what you're saying, but..." was a credible statement... Maybe it never was? E.g. I don't remember a time when I would expect people to be sufficiently committed to computing "what would work for everyone to live together" that they kept doing so in political contexts.

comment by ExCeph · 2024-11-08T01:52:38.572Z · LW(p) · GW(p)

In my experience, the first step in reconciling conflict is to understand one's own values, before listening to those of others.  There are multiple reasons for this step, but the one relevant to your point is that by reflecting on the tradeoffs that I accept or reject and why, I can feel secure in listening to someone else's point of view.  If their approach addresses my own concerns, then I can recognize it and that dissolves the disagreement.  If it doesn't, then I know enough about what I really want to suggest modifications to their approach that would address my concerns.  Either way, it keeps me safe from value-drift, especially on important principles like ethics.  

Just because someone else has valid concerns doesn't mean I have to give up any of my own, but it doesn't mean we're at an impasse either.  Humans have a habit of turning disagreements into false dichotomies.  When they listen to each other, the conversation becomes, "alright, I understand your concerns, but you understand why mine are more important, right?"  They are so quick to ask other people to sacrifice their values that they don't think of exploring alternative approaches, ones that can change the situation to fulfill the values of all the stakeholders.  That's what I'm working on changing.  

Does that all make sense?  

Replies from: TsviBT
comment by TsviBT · 2024-11-08T05:44:04.711Z · LW(p) · GW(p)

I think I agree, but

  • It's hard to get clear enough on your values. In practice (and maybe also in theory) it's an ongoing process.
  • Values aren't the only thing going on. There are stances that aren't even close to being either a value, a plan, or a belief. An example is a person who thinks/acts in terms of who they trust, and who seems good; if a lot of people that they know who seem good also think some other person seems good, then they'll adopt that stance.
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-07T06:47:28.798Z · LW(p) · GW(p)

I think Tsvi has a point that empathy towards a group that has some good-but-puzzling views and some clearly-bad views is a tricky thing to manage internally.

I think an easier step in this direction is to approach the problem more analytically. This is why I feel such affection for the Intellectual Turing Test. You can undertake the challenge of fully understanding someone else's viewpoint without needing to emotionally commit to it. It can be a purely intellectual challenge. Sometimes, as I try to write an ITT for a view I feel is overall incorrect, I sneer at the view a bit in my head. I don't endorse that; I think ideally one approaches the exercise in an emotionally neutral way. Nevertheless, it is a much easier step from being strongly set against a view to trying to tackle the challenge of fully understanding it. Going on to empathy for (parts of) the other view is a much harder step to take.

comment by Ben Pace (Benito) · 2024-11-06T19:01:23.072Z · LW(p) · GW(p)

Minor mod note: I left this post on personal blog, as we generally avoid frontpaging content related to and during the US election for sanity protection [LW · GW]. To be clear, this post is pretty abstracted from any election details, so I'd normally frontpage it, but I'm erring on the side of leaving on personal while the election is so close (literally today).

comment by Dentin · 2024-11-06T20:44:24.718Z · LW(p) · GW(p)

My belief is that it's primarily the voting system that causes this. (Not the electoral college; rather the whole 'first past the post' style of voting.) We see scissors presidents because that's the winning strategy.

I suspect that other more sophisticated voting systems (even just ranked choice!) would do better. No voting system is perfect, but 'first past the post' is particularly pathological.
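A toy illustration of that pathology (candidate names, vote shares, and the full rankings are made-up assumptions of this sketch, not data): two similar broad-appeal candidates split a 60% majority, so under first-past-the-post a divisive candidate with a committed 40% wins, while instant-runoff transfers the split votes.

```python
from collections import Counter

# Hypothetical full rankings, most-preferred first. L1 and L2 are two
# similar broad-appeal candidates splitting a 60% majority; R is a
# divisive candidate with a committed 40% plurality.
ballots = (
    [("L1", "L2", "R")] * 30
    + [("L2", "L1", "R")] * 30
    + [("R", "L1", "L2")] * 40
)

def plurality(ballots):
    """First-past-the-post: count only first choices."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first choices
    among those remaining, until someone holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        counts = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, top_votes = counts.most_common(1)[0]
        if top_votes * 2 > len(ballots):
            return top
        remaining.remove(min(counts, key=counts.get))

print(plurality(ballots))       # -> R: the divisive candidate wins
print(instant_runoff(ballots))  # -> a broad-appeal candidate wins
```

(This sketch shows only the vote-splitting failure; instant-runoff has its own pathologies, e.g. center squeeze, so it illustrates the direction of the claim rather than proving that ranked choice dissolves scissors candidates.)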

comment by ExCeph · 2024-11-07T04:38:30.718Z · LW(p) · GW(p)

This pattern is one I have observed as well, in various disagreements but especially in political ones.  For the past few years I've been working on methods for dissolving these scissors statements.  With some foundational concepts to peel away assumptions and take the conversation down to basics, a person can do it systematically with relative ease.  

Points of disagreement take the form of tradeoffs, which people either accept or reject.  These tradeoffs can be described in terms of costs, risks, habits, and trust.  People accept or reject a tradeoff based on what their experiences tell them about the tradeoff's drawbacks, as well as how well they are positioned to deal with those drawbacks.  

As you pointed out, people often downplay or ignore the drawbacks of their own tradeoff choices, while believing that people who make the opposite choice with different drawbacks are selfish, or even have that choice's drawbacks as a terminal value instead of an unfortunate side-effect.  

For example: 

"My environmental policy won't affect the economy that much.  Our opponents care more about consumerism than about the future of the planet."  

"My economic policy won't affect the environment that much.  Our opponents care more about feeling superior than about the people they'd be putting out of work."  

Listening to and acknowledging the real reasons people accept or reject a tradeoff makes them feel understood and respected, at which point they are more comfortable listening to reasons why other people might disagree.  Understanding the tradeoff concepts makes it much easier to understand and explain the reasoning behind tradeoffs.  

Once we understand the tradeoffs in play and the drawbacks people fear, we can resolve these points of disagreement using constructive principles.  (This part requires zooming out of the narrow focus on the situation and applying problem-solving mindsets.  Most humans get stuck arguing about zero-sum tradeoffs because humans don't yet teach constructive problem-solving as part of a standard educational curriculum.)  

Constructive principles improve the situation over time to put everyone in better positions with more options, lowering the stakes for disagreement about what to do in the short term.  Investment deals with costs.  Preparation deals with risks.  Challenge deals with habits.  Ethics deals with trust.  These principles are how we explore approaches for simultaneously addressing all the drawbacks that all the stakeholders are concerned about.  

How does that sound?  If you'd like to learn more about the process for dissolving scissors statements and reconciling conflict constructively, just let me know.  We can apply it to conflict situations that you want to resolve.  

(Edited for typo.)

comment by deepthoughtlife · 2024-11-06T21:05:22.611Z · LW(p) · GW(p)

While there are legitimate differences that matter quite a bit between the sides, I believe a lot of the reason why candidates are like 'scissors statements' is that the median voter theorem actually kind of works: the parties see the need to move their candidates pretty far toward the current center. But they also know they will lose the extremists to not voting, or to voting third party, if they don't give them something to focus on, so both sides are literally optimizing for the scissors effect to keep their extremists engaged.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-07T09:03:41.672Z · LW(p) · GW(p)

I don't follow this model yet.  I see why, under this model, a party would want the opponent's candidate to enrage people / have a big blind spot (and how this would keep the extremes on their side engaged), but I don't see why this model would predict that they would want their own candidate to enrage people / have a big blind spot.

Replies from: Nick_Tarleton, deepthoughtlife
comment by Nick_Tarleton · 2024-11-07T23:40:10.767Z · LW(p) · GW(p)

It sounds to me like the model is 'the candidate needs to have a (party-aligned) big blind spot in order to be acceptable to the extremists(/base)'. (Which is what you'd expect, if those voters are bucketing 'not-seeing A' with 'seeing B'.)

(Riffing off from that: I expect there's also something like, Motive Ambiguity-style, 'the candidate needs to have some, familiar/legible(?), big blind spot, in order to be acceptable/non-triggering to people who are used to the dialectical conflict'.)

Replies from: deepthoughtlife
comment by deepthoughtlife · 2024-11-08T05:23:47.846Z · LW(p) · GW(p)

It seems I was not clear enough, but this is not my model. (I explained it in my reply to the person who asked, if you want to see what I meant; I was talking about parties turning their opponents into scissors statements.)

That said, I do believe that it is a possible partial explanation that sometimes having an intentional blind spot can be seen as a sign of loyalty by the party structure.

comment by deepthoughtlife · 2024-11-08T05:21:09.806Z · LW(p) · GW(p)

So, my model isn't about them making their candidate that way, it is the much more obvious political move... make your opponent as controversial as possible. There is something weird / off / wrong about your opponent's candidate, so find out things that could plausibly make the electorate think that, and push as hard as possible. I think they're good enough at it. Or, in other words, try to find the best scissors statements about your opponent, where 'best' is determined both in terms of not losing your own supporters, and in terms of costing your opponent possible supporters.

This is often done as a psyop on your own side, to make them not understand why anyone could possibly support said person.

That said, against the simplified explanation I presented in my initial comment, there is also the obvious fact I didn't mention that the parties themselves have a certain culture, and that culture will have blind spots along which they don't select, but the other party does. Since the selection optimizes hard for what the party can see, the selected candidate ends up bad on the metrics the party can't see, and the people who can see the issue even get pushed out, making the party blinder still.

Replies from: AnnaSalamon, deepthoughtlife
comment by AnnaSalamon · 2024-11-13T00:29:40.241Z · LW(p) · GW(p)

I mean, I see why a party would want their members to perceive the other party's candidate as having a blind spot.  But I don't see why they'd be typically able to do this, given that the other party's candidate would rather not be perceived this way, the other party would rather their candidate not be perceived this way, and, naively, one might expect voters to wish not to be deluded.  It isn't enough to know there's an incentive in one direction; there's gotta be more like a net incentive across capacity-weighted players, or else an easier time creating appearance-of-blindspots vs creating visible-lack-of-blindspots, or something.  So, I'm somehow still not hearing a model that gives me this prediction.

Replies from: deepthoughtlife
comment by deepthoughtlife · 2024-11-13T01:22:30.144Z · LW(p) · GW(p)

To be pedantic, my model is pretty obvious, and clearly gives this prediction, so you can't really say that you don't see a model here; you just don't believe the model. Your model with extra assumptions doesn't give this prediction, but the one I gave clearly does.

You can't find a person this can't be done to, because there is something obviously wrong with everyone. Things can be twisted easily enough. (Offense is stronger than defense here.) If you didn't find it, you just didn't look hard/creatively enough. Our intuitions against people tricking us aren't really a suitable defense against sufficiently optimized searching. (Luckily, this is actually hard to do, so it is pretty confined most of the time to major things like politics.) Also, very clearly, you don't actually have to convince all that many people for this to work! If even 20% of people really bought it, those people would probably vote and give you an utter landslide if the other side didn't do the same thing (which we know they do, just look at how divisive candidates obviously are!)

comment by deepthoughtlife · 2024-11-08T05:27:44.553Z · LW(p) · GW(p)

I should perhaps have added something I thought of slightly later that isn't really part of my original model, but an intentional blindspot can be a sign of loyalty in certain cases.

comment by leerylizard (timtheenchanter) · 2024-11-06T17:08:32.871Z · LW(p) · GW(p)

How might someone figure out what their blind spot (A or B) is and overcome it?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-06T18:02:54.513Z · LW(p) · GW(p)

By parsing the other voter as "against X" rather than "for Y", and then inquiring into how they see X as worth being against, and why, while trying really hard to play taboo and avoid ontological buckets.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-06T18:38:53.758Z · LW(p) · GW(p)

Or: by seeing themselves, and a voter for the other side, as co-victims of an optical illusion, designed to trick each of them into being unable to find one another's areas of true seeing.  And by working together to figure out how the illusion works, while seeing it as a common enemy.

But my specific hypothesis here is that the illusion works by misconstruing the other voter's "Robert can see a problem with candidate Y" as "Robert can't see the problem with candidate X", and that if you focus on trying to decode the first, the illusion won't kick in as much.

comment by tailcalled · 2024-11-06T13:51:34.691Z · LW(p) · GW(p)

Electoral candidates can only be very bad because the country is very big and strong, which can only be the case because there's a lot of people, land, capital and institutions.

Noticing that two candidates for leading these resources are both bad is kind of useless without some other opinion on what form the resources should enter. A simple option would be that the form of the resources should lessen, e.g. that people should work less. The first step to this is to move away from Keynesianism. But if you take that to its logical conclusion, it implies e/acc replacement of humanity, VHEM, mass suicide, or whatever. It's not surprising that this is unpopular.

So that raises the question: What's some direction that the form of societal resources could be shifted towards that would be less confusing than a scissor statement candidate?

Because without an answer to this question, I'm not sure we even need elaborate theories on scissor statements.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2024-11-06T18:02:08.847Z · LW(p) · GW(p)

Huh.  Is your model that surpluses are all inevitably dissipated in some sort of waste/signaling cascade?  This seems wrong to me but also like it's onto something.

Replies from: tailcalled, tailcalled
comment by tailcalled · 2024-11-06T18:46:08.490Z · LW(p) · GW(p)

And I guess I should say, I have a more sun-oriented [LW · GW] and less competition-oriented [LW · GW] view. A surplus (e.g. in energy from the sun or negentropy from the night) has a natural "shape" (e.g. trees or solar panels) that the surplus dissipates into. There is some flexibility in this shape that leaves room for choice, but a lot less than rationalists usually assume.

comment by tailcalled · 2024-11-06T18:39:42.049Z · LW(p) · GW(p)

Kind of. First, the big exception: If you manage to enforce global authoritarianism, you can stockpile surplus indefinitely, basically tiling the world with charged-up batteries. But what's the point of that?

Secondly, "waste/signaling cascade" is kind of in the eye of the beholder. If a forest is standing in some region, is it wasting sunlight that could've been used on farming? Even in a very literal sense, you could say the answer is yes since the trees are competing in a zero-sum game for height. But without that competition, you wouldn't have "trees" at all, so calling it a waste is a value judgement that trees are worthless. (Which of course you are entitled to make, but this is clearly a disagreement with the people who like solarpunk.)

But yeah, ultimately I'm kind of thinking of life as entropy maximization [LW · GW]. The surplus has to be used for something, the question is what. If you've got nothing to use it for, then it makes sense for you to withdraw, but then it's not clear why to worry that other people are fighting over it.