Posts
Comments
To me this is exciting. I deduced that the mental architecture you're describing should be possible. It's extremely cool to hear someone just name it as a lived experience. Like, what would a mind that's actually systematically free of Newcomblike self-deception have to be like, assuming the hostile telepaths problem is real? This is one possible solution. Assuming I haven't misunderstood what you're describing!
Ah yeah, I think "gaining independence" is a better descriptor of (what I meant by) that solution type.
A few examples:
- Framing kids as "disruptive" or "inattentive" or otherwise having the wrong nature if they feel disengaged — after informing them of what they're going to study without consulting what's relevant or interesting to them, and then using social power to require them to study those things. But the problem is supposedly the student, not the system.
- Claiming that they'll need these math tools later in life, and that this justifies adults pressuring the kids to learn those skills now. (This is more bullshit-flavored than gaslight-flavored, but I think they're psychological neighbors.)
- Pretending that because a word problem touches on a topic kids care about, the math is relevant to what the kids like about that topic.
- Insisting that forcing kids to take math classes is for their own good, and if the kids don't see why or don't agree, then they should believe the adults over their own sense of things.
It makes me so angry. It's perfectly antithetical to the essence of math as I see it.
In broad strokes I agree with you. Here I was sharing my observation of four cases where a friend was involved this way. One case might have been miscommunication but it doesn't seem likely to me. The other three definitely weren't. In one of those I personally knew the guy; I liked him, but he was also emotionally very unstable and definitely not a safe father. I don't think the abuse was physical in any of those four cases.
! I'm genuinely impressed if you wrote this post without having a mental frame for the concepts drawn from LDT.
Thanks. :)
And thanks for explaining. I'm not sure what "quasi-Kantian" or "quasi-Rawlsian" mean, and I'm not sure which piece of Eliezer's material you're gesturing toward, so I think I'm missing some key steps of reasoning.
But on the whole, yeah, I mean defensive power rather than offensive. The offensive stuff is relevant only to the extent that it works for defense. At least that's how it seems to me! I haven't thought about it very carefully. But the whole point is, what could make me safe if a hostile telepath discovers a truth in me? The "build power" family of solutions is based on neutralizing the relevance of the "hostile" part.
I think you're saying something more sophisticated than this. I'm not entirely sure what it is. Like here you say:
Basically, you have to control things orthogonal to your position in the lineup, to robustly improve your algorithm for negotiating with others.
I'm not sure what "the lineup" refers to, so I don't know what it means for something to be orthogonal to my position in it.
I think I follow and agree with what you're saying if I just reason in terms of "setting up arms races is bad, all else being equal".
Or to be more precise, if I take the dangers of adaptive entropy seriously and I view "create adaptive entropy to get ahead" as a confused pseudo-solution. It might be that that's my LDT-like framework.
I like this way of expressing it. Thanks for sharing.
I think it's the same core thing I was pointing at in "We're already in AI takeoff", only it goes in the opposite direction for metaphors. I was arguing that it's right to view memes as alive for the same reason we view trees and cats as alive. Grey seems to be arguing to set aside the question and just look at the function. Same intent, opposite approaches.
I think David Deutsch's article "The Evolution of Culture" is masterful at describing this approach to memetics.
(Though maybe I should say that the therapist needs to either experience unconditional positive regard toward the client, or successfully deceive themselves and the client into thinking that they do. Heh.)
I mean, technically they don't even need to deceive themselves. They can be consciously judgy as f**k as long as they can mask it effectively. Psychopaths might make for amazing therapists in this one way!
I think the word "power" might be creating some confusion here.
I mean something pretty specific and very practical. I'm not sure how to precisely define it, but here are some examples:
- If someone threatens to freak out at you if you disagree with them, and you tend to get overwhelmed and panic when they freak out at you, then they have a kind of power over you. Building power here probably looks like learning to experience them freaking out without getting overwhelmed.
- If someone pays for your rent and food but might stop if they get any hint that you're gay, it might not be safe to even ask yourself honestly whether you are. You build power here by getting an income, or a source of rent and food, that doesn't depend on the hostile telepathic benefactor.
- If your lover gets turned on by you politically agreeing with them and turned off by disagreement, you might find your political views drifting toward theirs for "unrelated" reasons. One way to build power here is to get other access to sex. Another is to diminish your libido. Another is to break up with them. (Not saying any of these are a great idea. I'm just naming what the solution of "building power" might look like here.)
I'm not familiar with LDT. I can't comment on that part. Sorry if that means what I just said misses your point.
The fact that Bob has this policy in the first place is more likely when he's being self-deceptive.
I don't know if that's true. It might be. But some possible counterpoints:
- People can distrust systems that demand they check. "You have nothing to fear if you have nothing to hide" can get a response of "No" even from people who don't have anything to hide.
- If someone subconsciously thinks they can pull off the illusion of honestly looking while in fact finding nothing, they become more likely to choose to look because they're self-deceiving.
- Someone with a policy of not looking might be better at making their own self-deception unnecessary.
…more often it will be the result of Bob noticing that he's the sort of person who might have something to hide.
Sure, that way of deciding doesn't work.
Likewise, if you're inclined to decide you're going to dig into possible sources of self-deception because you think it's unlikely that you have any, then you can't do this trick.
The hypothetical respect for any self-deception that might be there needs to be unconditional on its existence. Otherwise, for the reason you say, it doesn't work as well.
(…with some caveats about how people are imperfect telepaths, so some fuzz in implementation here is in practice fine.)
That said, I think you're right in that if Omega-C is looking only at the choice of whether to look or not, then yes, Omega-C would be right to take the choice as evidence of a deception.
But the whole point is that Omega-C can read what conscious processes you're using, and can see that you're deciding for a glomarizing reason.
That's why the reason behind your choice matters so much here. Not just what you choose.
It's a general rule that if E is strong evidence for X, then ~E is at least weak evidence for ~X.
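That general rule is just conservation of expected evidence written out. A sketch of the arithmetic, with X the hypothesis and E the observation:

```latex
% The prior is a weighted average of the two possible posteriors:
P(X) = P(X \mid E)\,P(E) + P(X \mid \lnot E)\,P(\lnot E)
% So if observing E would raise your probability of X,
% the average can only balance out if observing ~E would lower it:
P(X \mid E) > P(X) \;\Longrightarrow\; P(X \mid \lnot E) < P(X)
```

The weights also explain the strong/weak asymmetry: if E is rare, then P(X | ¬E) has to sit close to P(X) for the average to work out, so ¬E only nudges you a little even when E would have moved you a lot.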
Conservation of expected evidence is what makes looking relevant. It's not what makes deciding to look relevant.
If I decide to appease Omega-C by looking, and then I find that I'm self-deceiving, the fact that I chose to look gets filtered. The fact that this is possible is why not finding evidence can matter at all. Otherwise it'd just be a charade.
Relatedly: I have a coin in my pocket. I don't feel like checking it for bias. Does that make it more likely that the coin is biased? Maybe. But if I could magically show you that I'm not looking because I honestly do not care one way or the other and don't want to waste the effort, and it doesn't affect me whether it's biased or not… then you can't use my disinterest in checking the coin for bias as evidence of some kind of subconscious deception about the coin's bias. I'm just refusing to do things that would inform you of the coin's possible bias.
If this kind of reasoning weren't possible, then it seems to me that glomarization wouldn't be possible.
It's not very hard to detect when someone's deceiving them self…
A few notes:
- Sometimes this is obviously true. I agree.
- It's a curious question why many folk turn their attention away from someone else's self-deception when it's obvious. Often they don't, but sometimes they do. Why they (we) do that is an interesting question worthy of some sincere curiosity.
- Confirmation bias. You don't notice the cases where you don't pick up on someone else's self-deception.
…people should notice more and disincentivise that
Boy oh boy do I disagree.
If someone's only option for dealing with a hostile telepath is self-deception, and then you come in and punish them for using it, thou art a dick.
Like, do you think it helps the abused mothers I named if you punish them somehow for not acknowledging their partners' abuse? Does it even help the social circle around them?
Even if the "hostile telepath" model is wrong or doesn't apply in some cases, people self-deceive for some reason. If you don't dialogue with that reason at all and just create pain and misery for people who use it, you're making some situation you don't understand worse.
I agree that getting self-deception out of a culture is a great idea. I want less of it in general.
But we don't get there by disincentivizing it.
…I went in the other direction: trying to self-deceive little, and instead be self-honest about my real motivations, even if they are "bad PR".
Yep. I'm not sure why you think this is a "very different" conclusion. I'd say the same thing about myself. The key question is how to handle the cases where becoming conscious of a "bad PR" motivation means it might get exposed.
And you answer that! In part at least. You divide people into three categories based on (a) whether you need occlumency with them at all and (b) whether you need to use occlumency on the fact that you're using occlumency.
I don't think of it in terms this explicit, but it's pretty close to what I do now. People get to see me to the extent that I trust them with what I show them. And that's conscious.
Am I misunderstanding you somehow?
Moreover, having an extremely difficult high-stakes problem is not just a strong reason to self-deceive less, it's also strong reason to become more truth-oriented as a community. This means that people with such a common cause should strive to put each other at least in category 2 above, tentatively moving towards 3 (with the caveat of watching out for bad actors trying to exploit that).
I both agree and partly disagree. I'll quote your comment to show where.
Totally, yes, having a real and meaningful shared problem means we want a truth-seeking community. Strong agreement.
But I think how we "strive" to be truth-seeking might be extremely important. If it's a virtue instead of an engineering consideration, and if people are shamed or punished for having non-truth-seeking behaviors, then the collective "striving" being talked about will encourage individual self-deception and collective untalkaboutability. It's an example of inducing adaptive entropy.
Relatedly: mathematicians don't have truth-seeking collaboration because they're trying hard to be truth-seeking. They're trying to solve problems, and they can verify whether their proposed solutions actually solve the problems they're working on. That means truth-seeking is more useful for what they're doing than any alternatives are. There's no need for focusing on the Virtue of Seeking Truth as a culture.
Likewise, there's no Virtue of Using a Hammer in carpentry.
What puts someone in category 2 or 3 for me isn't something I can strive for. It's more like, I can be open to the possibility and be willing to look for how they and I interact. Then I discover how my trust of them shifts. If I try to trust people more than I do, I end up in more adaptive entropic confusion. I'm pretty sure this is lawful on par with thermodynamics.
This might be what you meant. If so, sorry to set up and take a swing at a strawman of what you were saying.
I think I disagree. I'll add some precision to point out how. Happy to hear if I'm missing something.
E is Bayesian evidence of X if E is more likely to happen when X is true than when it's not.
If Bob says "As a policy, I'm not going to check whether I'm running an Omega-C deception", that's equally likely whether Bob is running a deception or not. (Hence the "as a policy" part.) It happens with probability 1 in both cases. So from Omega-C's point of view, it's not Bayesian evidence that distinguishes between the two versions of Bob.
It would be evidence if the choice were made from a stance of "Oh shoot, that might be self-deception! Well, I'm now going to adopt the no-looking policy so that I don't have to check it!" Then yeah, sure, that's clearly evidence — which is precisely why that method of deciding not to look isn't what can work.
The policy of always deeply investigating oneself can produce evidence for Omega-C, but the act of choosing that policy might not. Choosing the policy not to look just doesn't produce evidence.
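The distinction can be sketched with a toy Bayes update (the numbers here are invented purely for illustration): an act that both versions of Bob perform with probability 1 leaves Omega-C's posterior at the prior, while an ad-hoc refusal that's likelier under self-deception shifts it.

```python
def posterior(prior, p_e_given_x, p_e_given_not_x):
    """P(X | E) by Bayes' rule, where X = 'Bob is self-deceiving'."""
    p_e = p_e_given_x * prior + p_e_given_not_x * (1 - prior)
    return p_e_given_x * prior / p_e

prior = 0.25  # made-up prior that Bob is running a deception

# A blanket policy statement gets made by both versions of Bob,
# so its likelihood is 1 under each hypothesis: no update at all.
print(posterior(prior, 1.0, 1.0))   # -> 0.25, same as the prior

# An ad-hoc "oh no, better not check!" is likelier from a
# self-deceiving Bob, so it does shift the posterior upward.
print(posterior(prior, 0.9, 0.3))   # -> about 0.5
```

The point the numbers make: what carries information is the likelihood ratio of the observed act under the two hypotheses, not the act itself.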
Or at least that's how it seems to me.
I can secondhand lend some affirmation to the Newcomb case.
Oh yeah, that's a cool example.
Another solution is illegible-ization/orthogonalization of preferences to the hostile telepath so that you don't overlap in anything they might care about or overpower you with.
You mean something like, look boring to them? Like, I don't care how good Putin is at reading people, I just don't have anything he wants, so I'm safe as long as I keep (apparently) not having anything he wants?
Cool. I knew there at least used to be "antisocial personality disorder", which I thought was under cluster B along with narcissism and borderline. And I thought "psychopathy" was a different term for APD. Thanks for the correction.
The main thing I wanted to gesture at there is that I wasn't using "psychopath" as something derogatory. I didn't mean "bad guys". I meant something more like "people who are naturally unconstrained by social pressures and have no qualms breaking even profound taboos if they think it'll benefit them". (I just now made that up.) It seems to me that it's a pretty specifically different mental/emotional architecture.
Some cultures used to, and maybe still do, have a solution to the hostile telepaths problem you didn't list: perform rituals even if you don't mean them.
Ah, yep! True that!
Your point relates more directly to my main interest, memetics. I bet there are memes that encourage both (a) these rituals and (b) the telepathic attacks that make those rituals necessary.
Oh huh. Yeah. It's not a solution by itself since there are lots of other cues hostile telepaths can use. But rigidity might dampen what they can read for sure!
This is testable. It predicts that improved skill with occlumency and/or gaining power should sometimes cause a release of chronic tension.
I think breastfeeding is different because… public health people decided it should be, and we’ve internalized their messaging.
I haven't gone around and checked much, but my gut impression isn't that this is about public health people. I think it's more like a Chesterton's Fence backlash against previous generations' experts claiming that formula was obviously better. IIRC, mothers were warned against using breastmilk and told to go to formula instead, because it's Scientific™. So it took some cultural pushback to reclaim evolution's solution to feeding newborns.
I haven't read OP yet, just a quick translation note:
The Pali word "tanha" (Sanskrit "tṛṣṇā") literally means "thirst"; it comes from a PIE root meaning "dry", the same root that gives English "thirst".
I think most folk in my social circles who use "tanha" these days are referencing Romeo's "(mis)Translating the Buddha":
Tanha is usually translated as desire or craving but this is wrong and misleading. Tanha is more literally translated as 'fused to' or 'welded to'. It immediately follows the mental moment that you zoom in with the attentional aperture on something. It could be that a flower or an item on the shelf at the supermarket captures your attention, or you turn your head to catch more detail as you pass by an accident on the road. Many hundreds of thousands of such events take place in the course of a single day. With most of them attention then relaxes and makes space for the next thing. But with some small proportion you find the mind doesn't quite 'unclench' from the object or some aspect of the object. This tension aspect is why it is sometimes translated as ‘grasping’ which is closer. Imagine something you aren’t finished with being pulled out of your hand and you tensing your fingers to resist.
That seems maybe true. What's the problem you see with that?
I consider ultra-BS a primarily 'central route' argument, as the practitioner uses explicit reasoning to support explicit narrative arguments. […]
Putting someone off balance, on the other hand, is more 'peripheral route' persuasion. There's far more emphasis on the implicit messaging.
Ah! This distinction helped clarify a fair bit for me. Thank you!
…I think I might conclude that your implicit primers and vibes are very good at detecting implicit persuasion, which typically but not always has a correlation with dark artsy techniques.
I agree on all counts here. I think I dumped most of my DADA skill points into implicit detection. And yes, the vibes thing isn't a perfect correlation with Dark stuff, I totally agree.
Is this example satisfying?
It's definitely helpful! The category still isn't crisp in my mind, but it's a lot clearer. Thank you!
Thanks for the response in any case, I really enjoy these discussions! Would you like to do a dialogue sometime?
I've really enjoyed this exchange too. Thank you!
And sure, I'd be up for a dialogue sometime. I don't have a good intuition for what kind of thing goes well in dialogues yet, so maybe take the lead if & when you feel inspired to invite me into one?
Can you spell this out a little more? Did Brent and LaSota employ baloney-disclaimers and uncertainty-signaling in order to bypass people's defenses?
I think Brent did something different from what I'm describing — a bit more like judo plus DOS attacks.
I'm not as familiar with LaSota's methods. I talked with them several times, but mostly before I learned to detect the level of psychological impact I'm talking about with any detail. Thinking back to those interactions, I remember it feeling like LaSota was confidently asserting moral and existential things that threatened to make me feel inadequate and immoral if I didn't go along with what they were saying and seek out the brain hemisphere hacking stuff they were talking about. And maybe even then I'd turn out to be innately "non-good".
(Implied here is a type of Dark hack I find most folk don't have good defenses against other than refusing to reason and blankly shutting down. It works absurdly well on people who believe they should do what they intellectually conclude makes sense to do.)
The thing I was referring to is something I personally stumbled across. IME rationalists on the whole are generally more likely to take in something said in a low-status way. It's like the usual analyze-and-scrutinize machinery kind of turns off.
One of the weirder examples is, just ending sentences as though they're questions? I'm guessing it's because ending each thing with confidence as a statement is a kind of powerful assertion. But, I mean, if the person talking is less confident then maybe what they're saying is pretty safe to consider?
(I'm demoing back & forth in that paragraph, in case that wasn't clear.)
I think LaSota might have been doing something like this too, but I'm not sure.
(As a maybe weird example: Notice how that last sentence is in fact caveated, but it's still confident. I'm quite sure this is my supposition. I'm sure I'm not sure of the implied conclusion. I feel solid in all of this. My impression is, this kind of solidity is a little (sometimes a lot) disturbing to many rationalists (with some exceptions I don't understand very well — like how Zvi and Eliezer can mostly get away with brazen confidence without much pushback). By my models, the content of the above sentence would have been easier to receive if rewritten along the lines of, "I'm really not sure, but based on my really shaky memories, I kinda wonder if LaSota might have been doing something like this too — but don't believe me too much!")
Does that answer what you'd hoped?
Yep, I think you're basically right on all counts. Maybe a little off with the atheist fellow, but only because of context I didn't think to share until reading your analysis, and what you said is close enough!
It's funny, I'm pretty familiar with this level of analysis, but I still notice myself thinking a little differently about the bookstore guy in light of what you've said here. I know people do the unbalancing thing you're talking about. (Heck, I used to quite a lot! And probably still do in ways I haven't learned to notice. Charisma is a hell of a drug when you're chronically nervous!) But I didn't think to think of it in these terms. Now I'm reflecting on the incident and noticing "Oh, yeah, okay, I can pinpoint a bunch of tiny details when I think of it this way."
The fact that I couldn't tell whether any of these were "ultra-BS" is more the central point to me.
If I could trouble you to name it: Is there a more everyday kind of example of ultra-BS? Not in debate or politics?
I'm gonna err on the side of noting disagreements and giving brief descriptions of my perspective rather than writing something I think has a good chance of successfully persuading you of my perspective, primarily so as to actually write a reply in a timely fashion.
Acknowledged.
I don't see this as showing that in all domains one must maintain high offensive capabilities in order to have good defenses.
Oh, uh, I didn't mean to imply that. I meant to say that rejecting attention to military power is a bad strategy for defense. A much, much better defensive strategy is to study offense. But that doesn't need to mean getting good at offense!
(Although I do think it means interacting with offense. Most martial arts fail spectacularly on this point for instance. Pragmatically speaking, you have to have practice actually defending yourself in order to get skillful at defense. And in cases like MMA, that does translate to getting skilled at attack! But that's incidental. I think you could design good self-defense training systems that have most people never practicing offense.)
I think these problems aren't that hard once you have community spaces that are willing to enforce boundaries. Over the last few years I've run many events and spaces, and often gotten references for people who want to enter the spaces, and definitely chosen to not invite people due to concerns about ethics and responsible behavior. I don't believe I would've accepted these two people into the spaces more than once or twice at most.
Nice. And I agree, boundaries like this can be great for a large range of things.
I don't think this helps the Art much though.
And it's hard to know how much your approach doesn't work.
I also wonder how much this lesson about boundaries arose because of the earlier Dark exploits. In which case it's actually, ironically, an example of exactly the kind of thing I'm talking about! Only with lessons learned much more painfully than I think was necessary due to their not being sought out.
But also, maybe this is good enough for what you care about. Again, I don't mean to pressure that you should do anything differently.
I'm mostly pushing back against the implication I read that "Nah, our patches are fine, we've got the Dark Arts distanced enough that they're not an issue." You literally can't know that.
My position is that most thinking isn't really about reality and isn't truth-tracking, but that if you are doing that thinking then a lot of important questions are surprisingly easy to answer.
Totally agree. And this is a major defense against a lot of the stuff that bamboozles most folk.
I think there's a ton of adversarial stuff going on as well, but the primary reason that people haven't noticed that AI is an x-risk isn't because people are specifically trying to trick them about the domain, but because the people are not really asking themselves the question and checking.
I agree — and I'm not sure why you felt this was relevant to say? I think maybe you thought I was saying something I wasn't trying to.
(I think there's some argument to be made here that the primary reason people don't think for themselves is because civilization is trying to make them go crazy, which is interesting, though I still think the solution is primarily "just make a space where you can actually think about the object level".)
This might be a crux between us. I'm not sure. But I think you might be seriously underestimating what's involved in that "just" part ("just make a space…"). Attention on the object-level is key, I 100% agree there. But what defines the space? What protects its boundaries? If culture wants to grab you by the epistemic throat, but you don't know how it tries to do so, and you just try to "make a space"… you're going to end up way more confident of the clarity of your thinking than is true.
I acknowledge that there are people who are very manipulative and adversarial in illegible ways that are hard to pin down. […] …I think probably there are good ways to help that info rise up and get shared…. I don't think it requires you yourself being very skilled at engaging with manipulative people.
I think there's maybe something of a communication impasse happening here. I agree with what you're saying here. I think it's probably good enough for most cases you're likely to care about, for some reasonable definition of "most". It also strikes me as obvious that (a) it's unlikely to cover all the cases you're likely to care about, and (b) the Art would be deeply enriched by learning how one would skillfully engage with manipulative people. I don't think everyone who wants to benefit from that enrichment needs to do that engagement, just like not everyone who wants to train in martial arts needs to get good at realistic self-defense.
I've said this several times, and you seem to keep objecting to my implied claim of not-that. I'm not sure what's going on there. Maybe I'm missing your point?
I do sometimes look at people who think they're at war a lot more than me, and they seem very paranoid and to spend so many cognitive cycles modeling ghosts and attacks that aren't there. It seems so tiring!
I agree. I think it's dumb.
I suspect you and I disagree about the extent to which we are at war with people epistemically.
Another potentially relevant point here is that I tend to see large groups and institutions as the primary forces deceiving me and tricking me, and much less so individuals.
Oh! I'm really glad you said this. I didn't realize we were miscommunicating about this point.
I totally agree. This is what I mean when I'm talking about agents. I'm using adversarial individuals mostly as case studies & training data. The thing I actually care about is the multipolar war going on with already-present unaligned superintelligences. Those are the Dark forces I want to know how to be immune to.
I'm awfully suspicious of someone's ability to navigate hostile psychofauna if literally their only defense against (say) a frame controller is "Sus, let's exclude them." You can't exclude Google or wokism or collective anxiety the same way.
Having experienced frame control clawing at my face, and feeling myself become immune without having to brace… and noticing how that skill generalized to some of the tactics that the psychofauna use…
…it just seems super obvious to me that this is really core DADA. Non-cognitive, very deep, very key.
- Personally I would like to know two or three people who have successfully navigated being manipulated, and hopefully have them write up their accounts of that.
Ditto!
- I think aspiring rationalists should maneuver themselves into an environment where they can think clearly and be productive and live well, and maintain that, and not try to learn to survive being manipulated without a clear and present threat that they think they have active reason to move toward rather than away from.
Totally agree with the first part. I think the whole thing is a fine choice. I notice my stance of "Epistemic warriors would still be super useful" is totally unmoved thus far though. (And I'm reminded of your caveat at the very beginning!)
I'm reminded of the John Adams quote: "I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study mathematics and Philosophy, Geography, natural History, naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine."
I note that when I read your comment I'm not sure whether you're saying "this is an important area of improvement" or "this should be central to the art", which are very different epistemic states.
Oh, I don't know what should or shouldn't be central to the Art.
It just strikes me that rationality currently is in a similar state as aikido.
Aikido claims to be an effective form of self-defense. (Or at least it used to! Maybe it's been embarrassed out of saying that anymore?) It's a fine practice, it has immense value… it's just not what it says on the tin.
If it wanted to be what it claims, it needs to do things like add pressure testing. Realistic combat. Going into MMA tournaments and coming back with refinements to what it's doing.
And that could be done in a way that honors its spirit! It can add the constraints that are key to its philosophy, like "Protect everyone involved, including the attacker."
But maybe it doesn't care about that. Maybe it just wants to be a sport and discipline.
That's totally fine!
It does seem weird for it to continue claiming to be effective self-defense though. It's as if the art needs its practitioners to believe that false story about it.
I think rationality is in a similar state. It has some really good stuff in it. Really good. It's a great domain.
But I just don't see it mattering for the power plays. I think rationalists don't understand power, the same way aikido practitioners don't understand fighting. And they seem to be in a similar epistemic state about it: they think they basically do, but they don't pressure-test their understanding to check, best as I can tell.
So of your two options, it's more like "important area for improvement"… roughly like pressure-testing could be an important area of improvement for aikido. It'd probably become a kind of central if it were integrated! But I don't know.
And, I think the current state of rationality is fine.
Just weak in one axis it sometimes claims to care about.
Well, that particular comment had a lot of other stuff going on…
That's really not a central example of what I meant. I meant more like this one. Or this one.
But also, yeah, I do kinda feel like "downvoting people when they admit they did something bad" is a thing we sometimes do here and that's not great incentives. If someone wants to avoid that kind of downvote, "stop admitting to the bad thing" seems like an obvious strategy. Oops! And like, I remember times when I asked someone a question and they got downvoted for their answer, and I did think it was a bad answer that in a vacuum deserved downvotes, but I still upvoted as thanks for answering.
Yep. This is messy and unfortunate, I agree.
Someone might not have realized the thing they did was bad-according-to-LW, and the downvotes help signal that.
It's not possible to take the downvotes as a signal of this if downvotes get used for a wide range of things. If the same signal gets used for
"This was written in bad form, but if you'd written it differently it would have been welcome"
and
"Your attitude doesn't belong on this website, and you should change it or leave"
and
"I don't like your vibe, so I'm just gonna downvote"
then the feedback isn't precise enough to be helpful in shaping behavior.
If someone did a bad thing and doesn't care, maybe we just don't want them here.
True.
Although if the person disagrees with whether it was bad, and the answer to that disagreement is to try to silence them… then that seems to me like a pretty anti-epistemic norm. At least locally.
I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."
I think there's currently too many things that deserve downvotes for that to be realistic.
I have a hard time believing this claim. It's not what I see when I look around.
The dynamic would be pretty simple:
- After I downvote, I skim the replies to see if someone else already explained what had me do the downvote. If so, I upvote that explanation and agree-vote it too.
- If there's no such explanation, I write one.
Easy peasy. I seriously doubt the number of things needing downvotes on this site is so utterly overwhelming that this approach is untenable. The feedback would be very rich, the culture well-defined and transparent.
I don't know why LW stopped doing this. Once upon a time it cost karma to downvote, so people took downvotes more seriously. I assume there was some careful thought put into changing that system to the current one. I haven't put more than a sum total of maybe ten minutes of thinking into this. So I'm probably missing something.
But without knowing what that something is, and without a lot of reason for me to invest a ton more time into figuring it out… my tentative but clear impression is that what I'm describing would be way better for culture here by a long shot.
…I think another pretty good option is "a master rationalist would definitely avoid surrounding themselves with con artists and frauds and other adversarial actors".
I think that's a great option. I'd question a "master rationalist's" skills if they couldn't avoid such adversarial actors, or notice them if they slip through the cracks.
I do think there are real skills you are pointing to, but to some extent I prefer the world where I don't have those skills and in place of that my allies and I coordinate to identify and exclude people who are using the dark arts.
I like your preference. I'll say some things, but I want to start by emphasizing that I don't think you're making a wrong or bad choice.
I want to talk about what I think the Art could be, kind of for aesthetic reasons. This isn't to assert anything about what you or any given individual should or shouldn't be doing in any kind of moral sense.
So with that said, here are three points:
(1) I think there's a strong analogy here to studying combat and war. Yes, if you can be in a pacifist cluster and just exclude folk who are really into applied competitive strategy, then you have something kind of like a cooperate/cooperate equilibrium. But if that's the whole basis of your culture, it's extremely vulnerable, the way cooperate-bot is vulnerable in prisoners' dilemmas. You need military strength, the way a walled garden needs walls. Otherwise folk who have military strength can just come take your resources, even if you try to exclude them at first.
At the risk of using maybe an unfair example, I think what happened with FTX last year maybe illustrates the point.
Clearer examples in my mind are Ziz and Brent. The point not being "These people are bad!" But rather, these people were psychologically extremely potent and lots of folk in the community could neither (a) adequately navigate their impact (myself included!) nor (b) rally ejection/exclusion power until well after they'd already had their impact.
Maybe, you might hope, you can make the ejection/exclusion sensitivity refined enough to work earlier. But if you don't do that by studying the Dark Arts, and becoming intimately familiar with them, then what you get is a kind of naïve allergic response that Dark Artists can weaponize.
Again, I don't mean that you in particular or even rationalists in general need to address this. There's nothing wrong with a hobby. I'm saying that as an Art, it seems like rationality is seriously vulnerable if it doesn't include masterful familiarity with the Dark Arts. Kind of like, there's nothing wrong with practicing aikido as a sport, but you're not gonna get the results you hope for if you train in aikido for self-defense. That art is inadequate for that purpose and needs exposure to realistic combat to matter that way.
(2) …and I think that if the Art of Rationality were to include intimate familiarity with the Dark Arts, it would work way way better.
Things like the planning fallacy or confirmation bias are valuable to track. I could stand to improve my repertoire here for sure.
But the most potent forms of distorted thinking aren't about sorting out the logic. I think they look more like reaching deep down and finding ways to become immune to things like frame control.
Frame control is an amazing example in my mind precisely because of the hydra-like nature of the beast. How do you defend against frame control without breaking basic things about culture and communication and trust? How do you make it so your cultural and individual defenses don't themselves become the manual that frame controllers use to get their desired effects?
And this barely begins to touch on the kind of impact that I'd want to call "spiritual". By which I don't mean anything supernatural; I'm talking about the deep psychological stuff that (say) conversing with someone deep in a psilocybin trip can do to the tripper. That's not just frame control. That's something way deeper, like editing someone's basic personality operating system code. And sometimes it reaches deeper even than that. And it turns out, you don't need psychedelics to reach that deep; those chemical tools just open a door that you can open other ways, voluntarily or otherwise, sometimes just by having a conversation.
The standard rationalist defense I've noticed against this amounts to mental cramping. Demand everything go through cognition, and anything that seems to try to route around cognition gets a freakout/shutdown/"shame it into oblivion" kind of response. The stuff that disables this immune response is really epistemically strange — things like prefacing with "Here's a fake framework, it's all baloney, don't believe anything I'm saying." Or doing a bunch of embodied stuff to act low-status and unsure. A Dark Artist who wanted to deeply mess with this community wouldn't have to work very hard to do some serious damage before getting detected, best as I can tell (and as community history maybe illustrates).
If this community wanted to develop the Art to actually be skillful in these areas… well, it's hard to predict exactly what that'd create, but I'm pretty sure it'd be glorious. If I think of the Sequences as retooling skeptical materialism, I think we'd maybe see something like a retooling of the best of Buddhist psychotechnology. I think folk here might tend to underestimate how potent that could really be.
(…and I also think that it's maybe utterly critical for sorting out AI alignment. But while I think that's a very important point, it's not needed for my main message for this exchange.)
(3) It also seems relevant to me that "Dark Arts" is maybe something of a fake category. I'm not sure it even forms a coherent cluster.
Like, is being charismatic a Dark Art? It certainly can be! It can act as a temptation. It seems to be possible to cultivate charisma. But the issue isn't that charisma is a Dark Art. It's that charisma is mostly symmetric. So if someone has a few slightly anti-epistemic social strategies in them, and they're charismatic, this can have a net Dark effect that's even strategic. But this is a totally normal level of epistemic noise!
Or how about something simpler, like someone using confirmation bias in a way that benefits their beliefs? Astrology is mostly this. Is astrology a Dark Art? Is talking about astrology a Dark Art? It seems mostly just epistemically hazardous… but where's the line between that and Dark Arts?
How about more innocent things, like when someone is trying to understand systemic racism? Is confirmation bias a helpful pattern-recognizer, or a Dark Art? Maybe it's potentially in service to Dark Arts, but is a necessary risk to learn the patterns?
I think Vervaeke makes this point really well. The very things that allow us to notice relevance are precisely the things that allow us to be fooled. Rationality (and he explicitly cites this — even the Keith Stanovich stuff) is a literally incomputable practice of navigating both Type I and Type II errors in this balancing act between relevance realization and being fooled.
When I think of central examples of Dark Arts, I think mostly of agents who exploit this ambiguity in order to extract value from others.
…which brings me back to point (1), about this being more a matter of skill in war. The relevant issue isn't that there are "Dark Arts". It's that there are unaligned agents who are trying to strategically fool you. The skill isn't to detect a Dark toolset; it's to detect intelligent intent to deceive and extract value.
All of which is to say:
- I think a mature Art of Rationality would most definitely include something like skillful navigation of manipulation.
- I don't think every practitioner needs to master every aspect of a mature Art. Much like not all cooks need to know how to make a roux.
- But an Art that has detection, exclusion, & avoidance as its only defense against Dark Artists is a much poorer & more vulnerable Art. IMO.
The unspoken but implicit argument is that Russia doesn't need a reason to nuke us. If we give them the Arctic, there's no question: we will get nuked.
Ah, interesting, I didn't read that assumption into it. I read it as "The power balance will have changed, which will make Russia's international bargaining position way stronger because now it has a credible threat against mainland USA."
I see the thing you're pointing out as implicit though. Like an appeal to raw animal fear.
For a successful nuclear first strike to be performed, Russia must locate all of our military assets (plus likely those of our NATO allies as well) and take them all out at once, all while the CIA somehow never gets wind of the plan.
That makes a lot of sense. I didn't know about the distributed and secret nature of our nuclear capabilities… but it's kind of obvious that that's how it'd be set up, now that you say so. Thank you for spelling this out.
Reactions like yours are thus part of what I was counting on when making the argument. It works because in general I can count on people not having prior knowledge. (Don't worry, you're not alone.)
Makes sense!
And I wasn't worried. I'm actually not concerned about sounding like (or being!) an idiot. I'm just me, and I have the questions I do! But thank you for the kindness in your note here.
It also seems rather incongruous with most people's model of the world […]. Suppose Russia was prepared to nuke the US, and had a credible first strike capability. Why isn't Uncle Sam rushing to defend his security interests? Why haven't pundits and politicians sounded the alarm? Why has there been no diplomatic incidents? A second Cuban missile crisis? A Russian nuclear attack somewhere else?
I gotta admit, my faith in the whole system is pretty low on axes like this. The collective response to Covid was idiotic. I could imagine the system doing some stupid things simply because it's too gummed up and geriatric to do better.
That's not my main guess about what's happening here. I honestly just didn't think through this level of thing when I first read your arctic argument from your debate. But collective ineptitude is plausible enough to me that the things you're pointing out here just don't land as damning.
But they definitely are points against. Thank you for pointing them out!
I hope that answers your question! Is everything clear now?
For this instance, yes!
There's some kind of generalization that hasn't happened for me yet. I'm not sure what to ask exactly. I think this whole topic (RE what you're saying about Dark Arts) is bumping into a weak spot in my mind that I wasn't aware was weak. I'll need to watch it & observe other examples & let it settle in.
But for this case: yes, much clearer!
Thank you for taking the time to spell all this out!
Do you mind providing examples of what categories and indicators you use?
I can try to provide examples. The indicators might be too vague for the examples to help much with though!
A few weeks ago I met a fellow who seems to hail from old-guard atheism. Turn-of-the-century "Down with religion!" type of stuff. He was leading a philosophy discussion group I was checking out. At some point he said something (I don't remember what) that made me think he didn't understand what Vervaeke calls "the meaning crisis". So I brought it up. He started going into a kind of pressured debate mode that I intuitively recognized from back when I swam in activist atheism circles. I had a hard time pinning down the moves he was doing, but I could tell I felt a kind of pressure, like I was being socially & logically pulled into a boxing ring. I realized after a few beats that he must have interpreted what I was saying as an assertion that God (as he thought others thought of God) is real. I still don't know what rhetorical tricks he was doing, and I doubt any of them were conscious on his part, but I could tell that something screwy was going on because of the way interacting with him became tense and how others around us got uneasy and shifted how they were conversing. (Some wanted to engage & help the logic, some wanted to change the subject.)
Another example: Around a week ago I bumped into a strange character who runs a strange bookstore. A type of strange that I see as being common between Vassar and Ziz and Crowley, if that gives you a flavor. He was clearly on his way out the door, but as he headed out he directed some of his… attention-stuff… at me. I'm still not sure what exactly he was doing. On the surface it looked normal: he handed me a pamphlet with some of the info about their new brick-and-mortar store, along with their online store's details. But there was something he was doing that was obviously about… keeping me off-balance. I think it was a general social thing he does: I watched him do it with the young man who was clearly a friend to him and who was tending the store. A part of me was fascinated. But another part of me was throwing up alarm bells. It felt like some kind of unknown frame manipulation. I couldn't point at exactly how I was being affected, but I knew that I was, because my inner feet felt less firmly on inner ground in a way that was some kind of strategic.
More blatantly, the way that streetside preachers used to find a corner on college campuses and use a loudspeaker to spout off fundamentalist literalist Christianity memes. It's obvious to me now that the memetic strategy here isn't "You hear my ideas and then agree." It's somehow related to the way that it spurs debate. Back in my grad school days, I'd see clusters of undergrads surrounding these preachers and trying to argue with them, both sides engaging in predetermined patter. It was quite strange. I could feel the pull to argue with the preacher myself! But why? It has a snare trap feeling to it. I don't understand the exact mechanism. I might be able to come up with a just-so story. But looking back it's obvious that there's a being-sucked-in feeling that's somehow part of the memetic strategy. It's built into the rhetoric. So a first-line immune response is "Nope." Even though I have little idea what it is that I'm noping out of. Just its vibe.
I don't think all (any?) of these fall under what you're calling "ultra-BS". That's kind of my point: I think my rhetoric detector is tracking vibes more than techniques, and you're naming a technique category. Something like that.
I think this part stands alone, so I'll reply to the rest separately.
Thank you. I found this exchange very enriching.
In particular, it highlights a gap in my way of reasoning. I notice that even after you give examples, the category of "ultra-BS" doesn't really gel for me. I think I use a more vague indicator for this, like emotional tone plus general caution when someone is trying to persuade me of something.
In the spirit of crisping up my understanding, I have a question:
Now, I understand I sound obviously crazy already, but hear me out. Russia's Kinzhal hypersonic missiles, which have a range of roughly 1,000 miles, cannot hit the US from the Russian mainland. But they can hit us from the Arctic. I add that hypersonic missiles are very, very fast. [This essentially acts as a preemptive rebuttal to my opponent's counterargument (but what about MAD?).] If we're destroyed by a first strike, there is no MAD, and giving Russia the Arctic would immediately be an existential threat.
Of course, this is ridiculous…
I think I'm missing something obvious, or I'm missing some information. Why is this clearly ridiculous?
Isn’t this an ironic choice of metaphor? The situation rather more resembles you insisting that it’s your daughter’s arm, being certain of this despite many other people thinking that you’re not quite in touch with reality, being impervious to demonstrations or proofs that it’s your arm, etc.
Of course it's not ironic. What do you think the patient must think about the doctor's certainty?
…the current site culture, moderation policies, etc., actively discourage such explanations.
How so? What's the discouragement? I could see people feeling like they don't want to bother, but you make it sound like there's some kind of punishment for doing so…?
I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."
For instance, my review has been pretty heavily downvoted. Why? I can think of several reasons. But the net effect is to convey that LW would rather not have seen such a review.
Now why would that be?
I notice that there's also a -16 on the agree/disagree voting, with just three votes. So I'm guessing that what I said seriously irked a few people who probably heavy-downvoted the karma too.
But if it's really a distributed will, it's curious. Do you really want me not to have shared more context? Not to have reflected on where I'm at with the post? Or is it that you want me to feel differently about the post than I do?
I guess I don't get to know!
It's worth remembering that karma downvoting has a technical function. Serious negative karma makes a comment invisible by default. A user who gets a lot of negative karma in a short period of time can't post comments for a while (I think?). A user who has low karma overall can't post articles (unless that's changed?).
So a karma downvote amounts to saying "Shut up."
And a strong-downvote amounts to saying "Shut the fuck up."
If that's really the only communication the whole culture encourages for downvotes… that doesn't really foster clarity.
It seems dead obvious to me that this aspect of conversation culture here is quite bad.
But this isn't a hill I intend to die on.
…it looks like Valentine is never going to write the promised post…
It was Mythic Mode. I guess that went over everyone's heads.
I had a sequence in mind, on "ontology cracking". I gave up on that sequence when it became obvious that Less Wrong really wasn't interested in that direction at all. So I ended up never describing how I thought mythic mode worked on me, and how it might generalize.
But honestly, Mythic Mode has all the ingredients you need if you want to work it out.
It also seems worth noting, I've gotten way more PCK on the whole thing since then, and now I have approaches that are a fair bit more straightforward. More zen-like. Kinder. So the approach I advocate these days feels different and is more grounded & stable.
I might try to share some of that at some point.
This might be related to his statement in a followup discussion that he is unable to provide any cake. (It is an odd discussion, I think, and reading Valentine's attempts to comment there reminds me of Kennaway's comment.)
You seem to have quite missed the point of that exchange.
But honestly, I'm tired of arguing with logic machines about this. No, I cannot prove to you that it's not your daughter's arm. No, that fact does not cause me to question my certainty that it's not your daughter's arm. Yes, I understand you think I'm crazy or deluded. I am sorry I don't know how to help you; it is beyond my skill, and my human heart hurts for being so misunderstood so much here.
As an aside, looking over the way some of my comments were downvoted in the discussion section:
I think LW could stand to have a clearer culture around what karma downvotes are for.
Now that downvote is separable from disagreement vote, I read a downvote as "This comment shouldn't have been posted / doesn't belong on LW."
But it's clear that some of what I said was heavily downvoted because I took a stance people didn't like. Saying things like "Yep, I could have phrased this post in a more epistemically accurate way… but for this post in particular I really don't care."
Would you really rather I didn't share the fact that I didn't care?
I'm guessing the intention was to punish me for not caring.
…which is terrible collective rationality, by the way! It's an attempt to use social-emotional force to change how my mind works without dialoguing with the reasons I'm making the choices I am.
(Which is ironic given the nature of the complaints about this post in particular!)
I'd argue that the right and good function of downvoting is to signal an opinion that a post or comment does not belong here.
That's how I use it. And until I'm given good reason otherwise, that's how I plan to continue using it.
I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."
(But this really is an aside. It doesn't matter at all for the 2022 review. It's not really about this particular post either. It just has some very loud-to-me examples of the downvote behavior I think is unhealthy.)
It's kind of funny to me to see this one nominated. It's sort of peak "Val is weird on LW".
The point of this post wasn't to offer claims for people to examine. I still agree with the claims I see myself having tried to make! But the point wasn't to offer ideas for discussion. It was to light a path out of Hell.
Because of that purpose, the style of this post really doesn't fit LW culture. I think it's fair to call it a mind spell. I get the impression that LWers in particular find mind spells unnerving: they're a symmetric tech that can do an end-run around the parts of cognition that rationalists heavily rely on to feel safe. Hence tripping the "cult"/"guru" immune reaction.
(To me it's dead obvious that this highlights a gap in the LW rationality toolbox. The reaction of "Lock down, distrust, get cynical, burn it with fire" actually makes you more susceptible to skillful bad actors — like going rigid in response to a judo master grabbing a hold of you. IMO, a mature Art of Rationality would necessarily include learning to navigate cognition-jamming (or cognition-incompatible!) spaces with grace. But I get the sense LW collectively doesn't want to build that skillset. Which is fine, but I find it a bit disappointing.)
I picked up some of the language & framing of this post from Perri Chase. I now talk about this stuff a little differently. And more kindly, I think. I suspect I could write a version of this spell today that would be less of a problem for the LW memetic immune system. Partly because I'm better at slipping through immune systems! (I'm sure that's comforting!) But mostly because I've learned how to work with such systems instead of needing to step around them to have the "real" conversation.
That said, I don't regret writing this post. I got a lot of feedback (including in quite a few PMs across many different media) from people who found this relieving, validating, soothing, deeply helpful, kind, orienting. I'm okay with some people being upset with me if that's the price for enacting this kindness. I went in expecting that price, really.
I think there's a post possible that would be something like a LW-compatible rewrite of this one. It'd remove the "spell" nature and try to lay out some claims & implications for folk to consider. A bit like dissecting a once-living specimen and laying out its organs for examination.
I probably won't write that post. I don't see it doing much good beyond being kind of interesting.
I might write a related post sometime on the nature of Hell as a psychosocial attractor state. AFAICT it's utterly essential study for real Defense Against the Dark Arts. It's also very tricky to talk about in a way that's kind to the listeners or the speaker. But if LW were to learn to take it seriously without falling into it harder, I think that awareness would transform a lot of what "rationality" means here, and it would soften a lot of the sharp edges that can meaningfully hurt people here.
I don't plan on rewriting any of this post for the review. The spell worked great. I want to leave it here as is.
(Though if someone understands the spellcraft and wants to suggest some edits, I'm open to receiving those suggestions! I'm not putting up a wall here. I'm just sharing where I'm at with this post right now, for the sake of the 2022 review.)
I like the tone of this review. That might be because it scans as positive about something I wrote! :D But I think it's at least in part because it feels clear, even where it's gesturing at points of improvement or further work. I imagine I'd enjoy more reviews written in this style.
I would be interested to see research done to test the claim. Does increased sympathetic nervous system activation cause decreased efficacy [at AI research]?
If folk can find ways of isolating testable claims from this post and testing them, I'm totally for that project.
The claim you name isn't quite the right one though. I'm not saying that people being stressed will make them bad at AI research inherently. I'm saying that people being in delusion will make what they do at best irrelevant for solving the actual problem, on net. And that for structural reasons, one of the signs of delusion is having significant recurring sympathetic nervous system (SNS) activation in response to something that has nothing to do with immediate physical action.
The SNS part is easy to measure. Galvanic skin response, heart rate, blood pressure, pupil dilation… basically hooking them up to a lie detector. But you can just buy a GSR meter and mess with it.
I'm not at all sure how to address the questions of (a) identifying when something is unrelated to immediate physical action, especially given the daughter's arm phenomenon; or (b) whether someone's actions on net have a positive effect on solving the AI problem.
E.g., it now looks plausible that Eliezer's net effect was to accelerate AI timelines while scaring people. I'm not saying that is his net effect! But I'm noting that AFAIK we don't know it isn't.
I think it would be extremely valuable to have some way of measuring the overall direction of some AI effort, even in retrospect. Independent of this post!
But I've got nuthin'. Which is what I think everyone else has too.
I'd love for someone to prove me wrong here.
A sequence or book compiled from the wisdom of many LessWrongers discussing their mental health struggles and discoveries would be extremely valuable to the community (and to me, personally)…
This is a beautiful idea. At least to me.
That's the narrative for sure. I wonder if it's mostly just a stale holdover and doesn't really apply though.
Like, misandry is vastly more blatant and serious these days from what I can tell. Getting emotional or social support as a man is a joke. There's a whole genre of weirdly-acceptable jokes that basically goes "What are women better at than men? XYZ…. What are men better at than women? Stupid pointless stuff, being wrong, yada yada, hahaha!"
There's a ton of stuff like this, like with child custody & paternity, or suicide patterns… but all this gets shoved into an eyerolling box of "MRA" or whatever. So it's un-talk-about-able.
I wonder if men are actually way more restricted in what they can do these days than women are. I don't know. But it sure seems plausible to me!
So I question whether it's really an anti-women pressure. I suspect it's more like, there's gender warfare going on, and we seem to have figured out how to culturally attack one direction of it pretty well, but we haven't stopped the war.
And having the suggested solution be even more women's rights just… doesn't seem like it's looking at the real problem.
At least to me.
Especially if people like @Valentine are called upon to return from their cold sleep because the world needs them.
Double-click? I'm wondering what you mean by "cold sleep" here.
FWIW, I meant something less like "Pretend it doesn't matter to you personally, please don't feel emotional responses" and more like "There's zero intention of attacking something precious here, I hope you can feel that and can engage in a way that's not attack-and-defend; let's honor all the precious things together in our pursuit of truth."
...reads like a mistake a feminist would not have made.
I guess maybe I'm not whatever you mean by "a feminist" then…?
I read you as meaning something a little like "You should have known better. You would have if you'd been the right kind of person. So you're the wrong kind of person."
I mean… okay? Sure? I guess you can believe that if you want?
But also… doesn't that make the conversation harder?
(And sorry if I'm misreading you here. I don't mean to trap you in a meaning you didn't intend if I'm missing you here. It just seems worth naming explicitly in case I am roughly catching your emotional tone right.)
(Implicit assumption: "postmodernism" was coined in 1980 and "postmodern feminism" the mid-90s, and most people who talk about gender ideology date it to the last 10-20 years, so I'm assuming that's the time period you're referring to by "last many years".)
I didn't mean anything formal. I was mostly reflecting on how #metoo seemed to imply women feeling pretty unsafe in workplaces and everywhere else for quite a while. And in the wake of #metoo guys feeling like their own sexuality was like Russian roulette.
So I guess I was gesturing at roughly the last decade or so.
I didn't mean to talk about {postmodern feminism} by the way. I meant "postmodern" and "feminist" as two separate adjectives.
I think separating the sexes into distinct classes ("kitchen staff are one sex and serving staff are another") wouldn't output a separate-but-equal situation; it would instead output a society that subjugates women overtly (again).
I'm really not sure. I don't think there's a fundamental "subjugate women" drive. If we were to implement this kind of segregation today, it'd have the benefit of a very different context.
That said, I do agree that "separate but equal" is a crazy myth. That doesn't make much sense. If they're equal, why separate them? The whole point is that they're not equal. Not in all ways. E.g., if we had jobs explicitly separated by gender, then obviously things involving lifting heavy things by hand (e.g. certain kinds of construction) should be male.
Part of the problem has long been that things traditionally coded male have also been more economically valued. We don't economically value raising an emotionally healthy child the way we value creating a million-dollar company.
If we don't change those incentive structures, then yeah, economic separation by sex might end up with some old unkindness.
The bonobos apparently use sex to strengthen bonds, but your argument is about strengthening bonds through non-sex with your non-sexually-compatible friends, so idk how those are related
Ah yeah, oops, I noticed that possible confusion and forgot to say something about it.
The fact that the bonobos use sex to reassure each other is purely incidental to why it came to mind for me. The structure of interest was more "Our tribe just encountered a potentially rare resource, so let's focus on reaffirming our tribal bonds before we even orient to the resource."
Like for men, they could focus on just maximizing appeal to women… but that'd heat up competition between them. So maybe instead there's a draw to affirming male bonds. Being useful to other men. Working on being a more functional member of the male cluster.
Likewise for women. The main factor in picking a mate isn't getting a guy to want to have sex with her. It's in making sure she's well-supported while having children. If there's competition between the women for attracting a specific man, that can create rancor in their ranks, and that can weaken all their children's support. So there's maybe a natural draw to focus on bonding with other women first precisely because they're the competition. Slightly different dynamic than with the men, but roughly the same overall effect.
You can't say I'm defecting after I'm below zero.
Uh… that's not how "defection" works.
I feel like you have some implicit additional assumptions WRT what you mean by "stable", here.
True!
I think I meant mostly an intuition about how sexual stuff adds drama that isn't relevant to (say) baking.
I also have the intuition that a single-gender environment would be less stable in the sense of being somehow "more stale" and "less alive" than a mixed-gender one, and thus less stable in the long-term…
Huh. Well, I guess it depends a lot on the social scene!
Like, I don't think a football team would feel more alive if you mixed in girls. Even if you somehow navigate the thing about major physical differences between the sexes. There's something about the way the locker room culture there can be masculine that's actually part of the bonding. And I think having a girl or two mixed in there could add some rivalry that'd have to be sorted out to be a functional team!
But yeah, if it's a group of programmers, it might actually work better to have mixed sexes. Vague intuition here. Though I notice that I picked this example in part because physical sex can be made way, way less relevant in that context. (E.g., it's possible to have a team of programmers that don't even know one another's sex and interact purely remotely and over text. That's just not gonna work in a football team.)
All of what I'm saying here is spitballing and not very careful. Just playing with ideas. Thanks for pointing out the questions here!
FWIW, my experience on this was… mixed.
My easiest time having female friends was in an implicitly monogamous context, when I was married, and my wife and I were exclusive. It was super easy. Like a switch in my brain could just filter out the attraction question. It's like the question was addressed for all women the way it's always addressed for all men.
It became way messier when she & I opened up our marriage. Then the sexual dynamic between me and her felt to me like it depended on whether I could find other female partners. I don't know if she really felt this way! But for me there was a real concern: When we were exclusive, other women not being into me was just expected. But when we were open, I feared other women not being into me was a sign she should focus on mating with other guys.
So there was a sense, for me, of increased pressure that I needed to find more partners even if my wife was the only woman I was interested in!
This increased stress on my female relationships.
Now, in an implicitly poly context, this isn't a huge problem. "Might we fuck?" is a lot more okay a question to explore.
But it became a question we had to explore, basically every time, at least on my end, at least implicitly.
I find it's easier to have friendships with poly women now that I've set poly aside… because their being poly puts them out of the market for me.
And none of this is to dismiss your experience! I bet if I were more sexually confident, and happier being poly, I might feel the same way you do.
I'm just offering some counterpoint.
I think the point is that women are clearly optimizing way harder for female approval of their looks than for male approval.
This article is pretty wild to read in this context. I think it has some Hell Realm memetic code embedded in it, & LW is kind of awful at navigating Hell Realm memetics, so I kinda hesitate to point at it here… but with that caveat: it's just fascinating that here's an article spelling out how to maximally appeal to the male gaze, focused on some sincere attempts at data, and the apparent female reaction is disgust and eyerolling and attempts to censor?
(It's possible that the female reaction is actually to Hell Realm code, not so much to the optimize-for-male-gaze thing. I bet that's at least a factor. But it's still interesting that the rejection shows up this way!)
Oh yeah, I read this article some time ago! It probably affected my thinking here.
I also heard Louise Perry make comments pointing out something similar recently.
I don't really get to claim a lot of originality here. Maybe my Great Insight™ is how there's maybe an analogy between the way women focus on beauty and men focusing on getting big.
That's an interesting point. Thank you, I hadn't thought about it before. I'm not sure what it implies but it's nice to have noticed.
Ditto. Gears, I didn't downvote your comments until you deleted them. It's now hard to see why I wrote what I did. I think that's bad form.
That said, I read you (Gears) as being overwhelmed here. I'm guessing you wanted to delete your comments because you're both hurting and feeling unseen/unsupported. Pulling out, including deleting your comments, totally makes sense to me in that context I'm imagining you in.
In the future, if you have to do that, I think it would be kinder to make some kind of note about that in the deleted comment.
Even better would be to use strikethrough with a comment saying something like "This is beyond my capacity to keep engaging in, I'll leave this here so others can understand the exchange, but I'm checking out and ask not to be pulled in or expected to reply."
But I also recognize this is an intense and painful topic for you. I understand if all of those options are out of emotional range for you.
But in case they're not out of range in the future, that'd at least have avoided my and Said's downvotes!