Here's the exit.

post by Valentine · 2022-11-21T18:07:23.607Z · LW · GW · 178 comments

Contents

  The Apocalypse Game
  Ramping Up Intensity
  (Interlude)
  Land on Earth and Get Sober
  But… but Val… what about the real AI problem?!

There's a kind of game here on Less Wrong.

It's the kind of game that's a little rude to point out. Part of how it works is by not being named.

Or rather, attempts to name it get dissected so everyone can agree to continue ignoring the fact that it's a game.

So I'm going to do the rude thing. But I mean to do so gently. It's not my intention to end the game. I really do respect the right for folk to keep playing it if they want.

Instead I want to offer an exit to those who would really, really like one.

I know I really super would have liked that back in 2015 & 2016. That was the peak of my hell in rationalist circles.

I'm watching the game intensify this year. Folk have been talking about this a lot. How there's a ton more talk of AI here, and a stronger tone of doom.

I bet this is just too intense for some folk. It was for me when I was playing. I just didn't know how to stop. I kind of had to break down in order to stop. All the way to a brush with severe depression and suicide.

And it also ate parts of my life I dearly, dearly wish I could get back.

So, in case this is audible and precious to some of you, I'd like to point a way to ease.

 

The Apocalypse Game

The upshot is this:

You have to live in a kind of mental illusion to be in terror of the end of the world.

Illusions don't look on the inside like illusions. They look like how things really are.

Part of how this one does the "daughter's arm [LW · GW]" thing is by redirecting attention to facts and arguments.

None of this is relevant.

I'm pointing at something that comes before these thoughts. The thing that fuels the fixation on the worldview.

I also bet this is the thing that occasionally drives some people in this space psychotic, depressed, or into burnout.

The basic engine is: there's a pain inside; a mental and emotional program runs to distract from that pain; the pain gets projected outward through the mind, creating an illusion powered by it; and every glimmer of the pain that does get noticed seems to be about the external thing.

In this case, the search for truth isn't in service to seeing reality clearly. The logic of economic races to the bottom, orthogonality, etc. might very well be perfectly correct.

But these thoughts are also (and in some cases, mostly) in service to the doomsday meme's survival.

But I know that thinking of memes as living beings is something of an ontological leap in these parts. It's totally compatible with the LW memeplex, but it seems to be too woo-adjacent and triggers an unhelpful allergic response.

So I suggested a reframe at the beginning, which I'll reiterate here:

Your body's fight-or-flight system is being used as a power source to run a game, called "OMG AI risk is real!!!"

And part of how that game works is by shoving you into a frame where it seems absolutely fucking real. That this is the truth. This is how reality just is.

And this can be fun!

And who knows, maybe you can play this game and "win". Maybe you'll have some kind of real positive impact that matters outside of the game.

But… well, for what it's worth, as someone who turned off the game and has reworked his body's use of power quite a lot, it's pretty obvious to me that this isn't how it works. If playing this game has any real effect on the true world situation, it's to make the thing you're fearing worse.

(…which is exactly what's incentivized by the game's design, if you'll notice.)

I want to emphasize — again — that I am not saying that AI risk isn't real.

I'm saying that really, truly orienting to that issue isn't what LW is actually about.

That's not the game being played here. Not collectively.

But the game that is being played here absolutely must seem on the inside like that is what you're doing.

 

Ramping Up Intensity

When Eliezer rang the doom bell [LW · GW], my immediate thought was:

"Ah, look! The gamesmaster has upped the intensity. Like preparing for a climax!"

I mean this with respect and admiration. It's very skillful. Eliezer has incredible mastery in how he weaves terror and insight together.

And I don't mean this at all to dismiss what he's saying. Though I do disagree with him about overall strategy. But it's a sincere disagreement, not an "Oh look, what a fool" kind of thing.

What I mean is, it's a masterful move of making the game even more awesome.

(…although I doubt he consciously intended it that way!)

I remember when I was in the thick of this AI apocalypse story, everything felt so… epic. Even questions of how CFAR dealt with garbage at its workshops seemed directly related to whether humanity would survive the coming decades. The whole experience was often thrilling.

And on the flipside, sometimes I'd collapse. Despair. "It's too much" or "Am I even relevant?" or "I think maybe we're just doomed."

These are the two sort of built-in physiological responses to fight-or-flight energy: activation, or collapse.

(There's a third, which is a kind of self-holding. But it has to be built. Infants aren't born with it. I'll point in that direction a bit later.)

In the spirit of feeling rationally [LW · GW], I'd like to point out something about this use of fight-or-flight energy:

If your body's emergency mobilization systems are running in response to an issue, but your survival doesn't actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately.

Which is to say: If you're freaked out but rushing around won't solve the problem, then you're living in a mental hallucination. And it's that hallucination that's scaring your body.

Again, this isn't to say that your thoughts are incorrectly perceiving a future problem.

But if it raises your blood pressure or quickens your breath, then you haven't integrated what you're seeing with the reality of your physical environment. Where you physically are now. Sitting here (or whatever) reading this text.

So… folk who are wringing their hands and feeling stressed about the looming end of the world via AI?

Y'all are hallucinating.

If you don't know what to do, and you're using anxiety to power your minds to figure out what to do…

…well, that's the game.

The real thing doesn't work that way.

But hey, this sure is thrilling, isn't it?

As long as you don't get stuck in that awful collapse space, or go psychotic, and join the fallen.

But the risk of that is part of the fun, isn't it?

 

(Interlude)

A brief interlude before I name the exit.

I want to emphasize again that I'm not trying to argue anyone out of doing this intense thing.

The issue is that this game is way, way out of range for lots of people. But some of those people keep playing it because they don't know how to stop.

And they often don't even know that there's something on this level to stop.

You're welcome to object to my framing, insist I'm missing some key point, etc.

Frankly I don't care.

I'm not writing this to engage with the whole space in some kind of debate about AI strategy or landscape or whatever.

I'm trying to offer a path to relief to those who need it.

That no, this doesn't have to be the end of the world.

And no, you don't have to grapple with AI to sort out this awful dread.

That's not where the problem really is.

I'm not interested in debating that. Not here right now.

I'm just pointing out something for those who can, and want to, hear it.

 

Land on Earth and Get Sober

So, if you're done cooking your nervous system and want out…

…but this AI thing gosh darn sure does look too real to ignore…

…what do you do?

My basic advice here is to land on Earth and get sober.

The thing driving this is a pain. You feel that pain when you look out at the threat and doom of AI, but you cover it up with thoughts. You pretend it's about this external thing.

I promise, it isn't.

I know. I really do understand. It really truly looks like it's about the external thing.

But… well, you know how when something awful happens and gets broadcast (like the recent shooting), some people look at it with a sense of "Oh, that's really sad" and are clearly impacted, while others utterly flip their shit?

Obviously the difference there isn't in the event, or in how they heard about it. Maybe sometimes, but not mostly.

The difference is in how the event lands for the listener. What they make it mean. What bits of hidden pain are ready to be activated.

You cannot orient in a reasonable way to something that activates and overwhelms you this way. Not without tremendous grounding work.

So rather than believing the distracting thoughts that you can somehow alleviate your terror and dread with external action…

…you've got to stop avoiding the internal sensation.

When I talked earlier about addiction, I didn't mean that just as an analogy. There's a serious withdrawal experience that happens here. Withdrawal from an addiction is basically a heightening of the intolerable sensation (along with having to fight mechanical habits of seeking relief via the addictive "substance").

So in this case, I'm talking about all this strategizing, and mental fixation, and trying to model the AI situation.

I'm not saying it's bad to do these things.

I'm saying that if you're doing them as a distraction from inner pain, you're basically drunk.

You have to be willing to face the awful experience of feeling, in your body, in an inescapable way, that you are terrified.

I sort of want to underline that "in your body" part a bazillion times. This is a spot I keep seeing rationalists miss — because the preferred recreational drug here is disembodiment via intense thinking. You've got to be willing to come back, again and again, to just feeling your body without story. Notice how you're looking at a screen, and can feel your feet if you try, and are breathing. Again and again.

It's also really, really important that you do this kindly. It's not a matter of forcing yourself to feel what's present all at once. You might not even be able to find the true underlying fear! Part of the effect of this particular "drug" is letting the mind lead. Making decisions based on mental computations. And kind of like minds can get entrained to porn, minds entrained to distraction via apocalypse fixation will often hide their power source from their host.

(In case that was too opaque for you just yet, I basically just said "Your thoughts will do what they can to distract you from your true underlying fear." People often suddenly go blank inside when they look inward this way.)

So instead of trying to force it all at once, it's a matter of titrating your exposure. Noticing that AI thoughts are coming up again, and pausing, and feeling what's going on in your body. Taking a breath for a few seconds. And then carrying on with whatever.

This is slow work. Unfortunately your "drug" supply is internal, so getting sober is quite a trick.

But this really is the exit. As your mind clears up… well, it's very much like coming out of the fog of a bender and realizing that no, really, those "great ideas" you had just… weren't great. And now you're paying the price on your body (and maybe your credit card too!).

There are tons of resources for this kind of direction. It gets semi-independently reinvented a lot, so there are lots of different names and frameworks for this. One example that I expect to be helpful for at least some LWers who want to land on Earth & get sober is Irene Lyon, who approaches this through a "trauma processing" framework. She offers plenty of free material on YouTube. Her angle is in the same vein as Gabor Maté and Peter Levine.

But hey, if you can feel the thread of truth in what I'm saying and want to pursue this direction, but you find you can't engage with Irene Lyon's approach, feel free to reach out to me. I might be able to find a different angle for you. I want anyone who wants freedom to find it.

 

But… but Val… what about the real AI problem?!

Okay, sure. I'll say a few words here.

…although I want to point out something: The need to have this answered is coming from the addiction to the game. It's not coming from the sobriety of your deepest clarity.

That's actually a complete answer, but I know it doesn't sound like one, so I'll say a little more.

Yes, there's a real thing.

And yes, there's something to do about it.

But you're almost certainly not in a position to see the real thing clearly or to know what to do about it.

And in fact, attempts to figure the real thing out and take action from this drunk gamer position will make things worse.

(I hesitate to use the word "worse" here. That's not how I see it. But I think that's how it translates to the in-game frame.)

This is what Buddhists should have meant (and maybe did/do?) when they talk about "karma". How deeply entangled in this game is your nervous system? Well, when you let that drive how you interact with others, their bodies get alarmed in similar ways, and they get more entangled too.

Memetic evolution drives how that entangling process happens on large scales. When that becomes a defining force, you end up with self-generating pockets of Hell on Earth.

This recent thing with FTX is totally an example. Totally. Threads of karma/trauma/whatever getting deeply entangled and knotted up and tight enough that large-scale flows of collective behavior create an intensely awful situation.

You do not solve this by trying harder. Tugging the threads harder.

In fact, that's how you make it worse.

This is what I meant when I said that actually dealing with AI isn't the true game in LW-type spaces, even though it sure seems like it on the inside.

It's actually helpful to the game for the situation to constantly seem barely maybe solvable but to have major setbacks.

And this really can arise from having a sincere desire to deal with the real problem!

But that sincere desire, when channeled into the Matrix of the game, doesn't have any power to do the real thing. There's no leverage.

The real thing isn't thrilling this way. It's not epic.

At least, not any more epic than holding someone you love, or taking a stroll through a park.

To oversimplify a bit: You cannot meaningfully help with the real thing until you're sober.

Now, if you want to get sober and then you roll up your sleeves and help…

…well, fuck yeah! Please. Your service would be a blessing to all of us. Truly. We need you.

But it's gotta come from a different place. Tortured mortals need not apply.

And frankly, the reason AI in particular looks like such a threat is because you're fucking smart. You're projecting your inner hell onto the external world. Your brilliant mind can create internal structures that might damn well take over and literally kill you if you don't take responsibility for this process. You're looking at your own internal AI risk.

I hesitate to point that out because I imagine it creating even more body alarm.

But it's the truth. Most people wringing their hands about AI seem to let their minds possess them more and more, and pour more & more energy into their minds, in a kind of runaway process that's stunningly analogous to uFAI.

The difference is, you don't have to make the entire world change in order to address this one.

You can take coherent internal action.

You can land on Earth and get sober.

That's the internal antidote.

It's what offers relief — eventually.

And from my vantage point, it's what leads to real hope for the world.

178 comments

Comments sorted by top scores.

comment by Richard_Ngo (ricraz) · 2022-11-21T19:27:01.322Z · LW(p) · GW(p)

I think there's a bunch of useful stuff here. In particular, I think that decisions driven by deep-rooted fear are often very counterproductive, and that many rationalists often have "emergency mobilization systems" running in ways which aren't conducive to good long-term decision-making. I also think that paying attention to bodily responses is a great tool for helping fix this (and in fact was helpful for me in defusing annoyance when reading this post). But I want to push back on the way in which it's framed in various places as all-or-nothing: exit the game, or keep playing. Get sober, or stay drunk. Hallucination, not real fear.

In fact, you can do good and important work while also gradually coming to terms with your emotions, trying to get more grounded, and noticing when you're making decisions driven by visceral fear and taking steps to fix that. Indeed, I expect that almost all good and important work throughout history has been done by people who are at various stages throughout that process, rather than people who first dealt with their traumas and only then turned to the work. (EDIT: in a later comment, Valentine says he doesn't endorse the claim that people should deal with traumas before doing the work, but does endorse the claim that people should recognize the illusion before doing the work. So better to focus on the latter (I disagree with both).)

(This seems more true for concrete research, and somewhat (but less) true for thinking about high-level strategy. In general it seems that rationalists spend way too much of their time thinking about high-level strategic considerations, and I agree with some of Valentine's reasoning about why this happens. Instead I'd endorse people trying be much more focused on making progress in a few concrete areas, rather than trying to track everything which they think might be relevant to AI risk. E.g. acceleration is probably bad, but it's fundamentally a second-order effect, and the energy focused on all but the biggest individual instances of acceleration would probably be better used to focus on first-order effects.)

In other words, I want to offer people the affordance to take on board the (many) useful parts of Valentine's post without needing to buy into the overall frame in which your current concerns are just a game, and your fear is just a manifestation of trauma.

(Relatedly, from my vantage point it seems that "you need to do the trauma processing first and only then do useful work" is a harmful self-propagating meme in a very similar way as "you need to track and control every variable in order for AI to go well". Both identify a single dominant consideration which requires your full focus and takes precedence over all others. However, I still think that the former is directionally correct for most rationalists, just as the latter is directionally correct for most non-rationalists.)

Replies from: pktechgirl, Valentine, lahwran
comment by Elizabeth (pktechgirl) · 2022-11-21T21:07:30.299Z · LW(p) · GW(p)

it seems that "you need to do the trauma processing first and only then do useful work" is a harmful self-propagating meme in a very similar way as "you need to track and control every variable in order for AI to go well"

 

This. Trauma processing is just as prone to ouroboros-ing as x-risk work, if not more so.  

Replies from: Valentine
comment by Valentine · 2022-11-21T23:29:10.613Z · LW(p) · GW(p)

Agreed.

And it's also not actually relevant to my point.

(Though I understand why it looks relevant.)

Replies from: Aiyen
comment by Aiyen · 2022-11-22T01:37:37.198Z · LW(p) · GW(p)

Wouldn't it be relevant in that someone could recognize unproductive, toxic dynamics in their concerns about AI risk as per your point (if I understand you correctly), decide to process trauma first and then get stuck in the same sorts of traps?  While "I'm traumatized and need to fix it before I can do anything" may not sound as flashy as "My light cone is in danger from unaligned, high-powered AI and I need to fix that before I can do anything", it's just as capable of paralyzing a person, and I speak both from my own past mistakes and from those of multiple friends. 

Replies from: Valentine
comment by Valentine · 2022-11-23T17:24:55.412Z · LW(p) · GW(p)

Of course that's possible. I didn't mean to dismiss that part.

But… well, as I just wrote to Richard_Ngo [LW(p) · GW(p)]:

If you just go around healing traumas willy-nilly, then you might not ever see through any particular illusion like this one if it's running in you.

Kind of like, generically working on trauma processing in general might or might not help an alcoholic quit drinking. There's some reason for hope, but it's possible to get lost in loops of navel-gazing, especially if they never ever even admit to themselves that they have a problem.

But if it's targeted, the addiction basically doesn't stand a chance.

I'm not trying to say "Just work on traumas and be Fully Healed™ before working on AI risk."

I'm saying something much, much more precise.

I do in fact think there's basically no point in someone working on AI risk if they don't dissolve this specific trauma structure.

Well, or at least make it fully conscious and build their nervous system's holding capacity enough that (a) they can watch it trying to run in real time and (b) they can also reliably stop it from grabbing their inner steering wheel, so to speak.

But frankly, for most people it'd be easier just to fully integrate the pain than it would be to develop that level of general nervous system capacity without integrating said pain.

comment by Valentine · 2022-11-21T23:28:32.280Z · LW(p) · GW(p)

In fact, you can do good and important work while also gradually coming to terms with your emotions, trying to get more grounded, and noticing when you're making decisions driven by visceral fear and taking steps to fix that.

I agree.

I think we're focusing on different spots. I'm not sure if we actually disagree.

The all-or-nothing is with respect to recognizing the illusion. If someone can't even get far enough to notice that their dysregulated nervous system is driving an illusion, then what they do is much more likely to create harm than good.

That part I totally stand by.

There's something of a strawman here in framing what I'm saying as "you need to do the trauma processing first and only then do useful work". I don't think you intended it. Just letting you know, it totally lands for me as a strawman.

I am saying that there is some trauma processing (for a person with a system like I'm describing) that absolutely is essential first. But not all of it; I don't know if doing all of it first is even possible, or a coherent idea.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2022-11-22T03:54:10.341Z · LW(p) · GW(p)

I don't understand how specifically you think the process of recognizing the illusion is related to the process of healing traumas. But I also object to ideas like "you need to orient towards your fear as an illusion first and only then do useful work", for roughly the same reasons (in particular, the way it's all-or-nothing). So I'll edit my original comment to clarify that this is a more central/less strawmanny objection.

Replies from: Valentine
comment by Valentine · 2022-11-22T19:44:05.891Z · LW(p) · GW(p)

I don't understand how you think the process of recognizing the illusion is related to the process of healing traumas.

Okay. I'm not sure what to tell you. This lands for me like "I don't understand how you think turning on the burner is related to the process of cooking the soup." Um… it just is? I already described the mechanisms, so I think the communication gap is somewhere I don't see.

 

I also object to ideas like "you need to orient towards your fear as an illusion first and only then do useful work"…

I never meant to say or even imply that the fear is an illusion.

I was saying that the fear fuels an illusion. And anyone living in such an illusion needs to see through it before they can participate in non-illusion.

You can view that as all-or-nothing and therefore objectionable if you like. That's not quite what I mean, but it's not totally wrong. And in this spot I do think there's an "all-or-nothing" truth: If you don't see through an illusion you're in, you can't consciously participate in reality. That lands as almost tautological to me.

 

I'll edit my original comment to clarify that this is a more central/less strawmanny objection.

I didn't need you to do that. But thanks.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2022-11-23T06:28:32.376Z · LW(p) · GW(p)

Okay. I'm not sure what to tell you. This lands for me like "I don't understand how you think turning on the burner is related to the process of cooking the soup." Um… it just is? I already described the mechanisms, so I think the communication gap is somewhere I don't see.


I think you interpreted this as incredulity, whereas I meant it as "I don't understand the specific links" (e.g. is recognizing the illusion most of the work, or only a small part? What stops you from healing traumas without recognizing the illusion? etc). I've edited to clarify.

Replies from: Valentine
comment by Valentine · 2022-11-23T17:12:01.209Z · LW(p) · GW(p)

Oh, no, I didn't take it as incredulity at all. I'm just honestly not sure why what I'd already said didn't already explain the relationship between trauma healing and seeing through the illusion.

I guess I can just say it again in shortened form?

For the person design I'm talking about…

  • There's a pain inside.
  • There's also a kind of mental/emotional program built around the instruction "Distract from the pain."
  • Because they can't actually escape the pain, they project it outward through the mind. Which is to say, they create an illusion powered by the pain.
  • This causes them to think every glimmer of the pain they do notice is about the external thing.

The antidote is to look directly at the inner pain & dismantle the "Distract from the pain" program.

In practice this requires integrating the pain into consciousness. This is one way of talking about "healing trauma".

Once that happens, the program doesn't have a power source anymore.

If that doesn't happen and the person insists on focusing on doing things in the world, everything they do will be at least partly in service to distraction rather than solving any real problem.

And on the inside they cannot tell the difference between those two without facing the inner pain.

So seeing through the illusion isn't cognitive basically at all. To me it's the same thing as trauma processing, for all practical purposes.

Does that clarify anything for you?

 

What stops you from healing traumas without recognizing the illusion?

Oh, nothing. If you just go around healing traumas willy-nilly, then you might not ever see through any particular illusion like this one if it's running in you.

Kind of like, generically working on trauma processing in general might or might not help an alcoholic quit drinking. There's some reason for hope, but it's possible to get lost in loops of navel-gazing, especially if they never ever even admit to themselves that they have a problem.

But if it's targeted, the addiction basically doesn't stand a chance.

I'm not trying to say "Just work on traumas and be Fully Healed™ before working on AI risk."

I'm saying something much, much more precise.

comment by the gears to ascension (lahwran) · 2022-11-22T00:42:18.249Z · LW(p) · GW(p)

I think the word "game" is in an odd situation here. In game theory parlance, physics is just a game. It's not a question of whether you're in a game; it's what game you interpret yourself to be playing. There are positive-sum games you can interpret yourself to have been playing.

Replies from: Valentine
comment by Valentine · 2022-11-22T00:51:35.247Z · LW(p) · GW(p)

Well, I meant to point at something intuitive, and kind of as a nod to Existential Kink.

I honestly forgot when writing this piece that "game" has special meaning here. Like with game theory.

I just meant to hint at kind of a VR game, or like if you can imagine that the Matrix started out as a game people plugged into but part of the game involves forgetting that you plugged in.

comment by gjm · 2022-11-22T00:11:49.994Z · LW(p) · GW(p)

If this post had just said "I think some people may feel strongly about AI x-risk for reasons that ultimately come down to some sort of emotional/physical pain whose origins have nothing to do with AI; here is why I think this, and here are some things you can do that might help find out whether you're one of them and to address the underlying problem if so", then I would consider it very valuable and deserving of attention and upvotes and whatnot. I think it's very plausible that this sort of thing is driving at least some AI-terror. I think it's very plausible that a lot of people on LW (and elsewhere) would benefit from paying more attention to their bodies.

... But that's not what this post does. It says you have to be "living in a[...] illusion" to be terrified by apocalyptic prospects. It says that if you are "feeling stressed" about AI risks then you are "hallucinating". It says that "what LW is actually about" is not actual AI risk and what to do about it (but, by implication, this alleged "game" of which Eliezer Yudkowsky is the "gamesmaster" that works by engaging everyone's fight-or-flight reactions to induce terror). It says that, for reasons beyond my understanding, it is impossible to make actual progress on whatever real AI risk problems there might be while in this stressed-because-of-underlying-issues state of mind. It says that "the reason" (italics mine) AI looks like a big threat is because the people to whom it seems like a big threat are "projecting [their] inner hell onto the external world". And it doesn't offer the slightest shred of evidence for any of this; we are just supposed to, I dunno, feel in our bodies that Valentine is telling us the truth, or something like that.

I don't think this is good epistemics. Maybe there is actually really good evidence that the mechanism Valentine describes here is something like the only way that stress ever arises in human beings. (I wouldn't be hugely surprised to find that it's true for the stronger case of terror, and I could fairly easily be convinced that anyone experiencing terror over something that isn't an immediate physical threat is responding suboptimally to their situation. Valentine is claiming a lot more than that, though.) But in that case I want to see the really good evidence, and while I haven't gathered any actual statistics on how often people claiming controversial things with great confidence but unwilling to offer good evidence for them turn out to be right and/or helpful, I'm pretty sure that many of them don't. Even more so when they also suggest that attempts to argue with them about their claims are some sort of deflection (or, worse, attempts to keep this destructive "game" going) that doesn't merit engaging with.

Full disclosure #1: I do not myself feel the strong emotional reaction to AI risk that many people here do. I do not profess to know whether (as Valentine might suggest) this indicates that I am less screwed up psychologically than people who feel that strong emotional reaction, or whether (as Eliezer Yudkowsky might suggest) it indicates that I don't understand the issues as fully as they do. I suspect that actually it's neither of those (though either might happen to be true[1]) but just that different people get more or less emotionally involved in things in ways that don't necessarily correlate neatly with their degree of psychological screwage or intellectual appreciation of the things in question.

[1] For that matter, the opposite of either might be true, in principle. I might be psychologically screwed up in ways that cut me off from strong emotions I would otherwise feel. I might have more insight into AI risk than the people who feel more strongly that helps me see why it's not so worrying, or why being scared doesn't help with it. I think these are both less likely than their opposites, for what it's worth.

Full disclosure #2: Valentine's commenting guidelines discourage commenting unless you "feel the truth of [that Valentine and you are exploring the truth together] in your body" and require "reverent respect". I honestly do not know, and don't know how I could tell with confidence, whether Valentine and I are exploring the truth together; at any rate, I do not have the skill (if that's what it is) of telling what someone else is doing by feeling things in my body. I hope I treat everyone with respect; I don't think I treat anyone with reverence, nor do I wish to. If any of that is unacceptable to Valentine, so be it.

Clarification for the avoidance of doubt: I don't have strong opinions on just what probability we should assign to (e.g.) the bulk of the human race being killed-or-worse as a result of the actions of an AI system within the next century, nor on what psychological response is healthiest for any given probability. The criticisms above are not (at least, not consciously) some sort of disguise for an underlying complaint that Valentine is trying to downplay an important issue, nor for anger that he is revealing that an emperor I admire has no clothes. My complaint is exactly what I say it is: I think this sort of bulveristic "I know you're only saying this because of your psychological problems, which I shall now proceed to reveal to you; it would be a total waste of time to engage with your actual opinions because they are merely expressions of psychological damage, and providing evidence for my claims is beneath me"[2] game is not only rude (which Valentine admits, and I agree that it is sometimes helpful or even necessary to be rude) but usually harmful and very much not the sort of thing I want to see more of on Less Wrong.

[2] I do not claim that Valentine is saying exactly those things. But that is very much the general vibe.

(Also somewhat relevant, though not especially to any of what I've written above, and dropped here without further comment: "Existential Angst Factory [LW · GW]".)

Replies from: Valentine
comment by Valentine · 2022-11-22T00:25:08.857Z · LW(p) · GW(p)

Whew! That's a lot. I'm not going to try to answer all of that.

In short: I think you're correctly following LW norms. You're right that I wasn't careful about tone, and by the norms here it's good to note that.

And also, that wasn't what this piece was about.

I intended it as an invitation. Not as a set of claims to evaluate.

If you look where I'm pointing, and you recognize some of yourself in it (which it sounds like you don't!), then the suggestions I gesture toward (like Irene Lyon, and maybe loosening the mental grip on the doomy thoughts) might seem worth exploring.

I have no intention of putting an argument together, with evidence and statistics and the like, validating the mechanisms I'm talking about. That would actually go in the opposite direction of making an audible invitation.

But! I think your contribution is good. It's maybe a little more indignant than necessary. But it's… mmm… fitting, I'll say.

I'll leave it at that.

Replies from: PhilGoetz, gjm
comment by PhilGoetz · 2023-01-15T03:29:47.619Z · LW(p) · GW(p)

I think it would be more graceful of you to just admit that it is possible that there may be more than one reason for people to be in terror of the end of the world, and likewise qualify your other claims to certainty and universality.

That's the main point of what gjm wrote.  I'm sympathetic to the view you're trying to communicate, Valentine; but you used words that claim that what you say is absolute, immutable truth, and that's the worst mind-killer of all.  Everything you wrote just above seems to me to be just equivocation trying to deny that technical yet critical point.

I understand that you think that's just a quibble, but it really, really isn't.  Claiming privileged access to absolute truth on LessWrong is like using the N-word in a speech to the NAACP.  It would do no harm to what you wanted to say to use phrases like "many people" or even "most people" instead of the implicit "all people", and it would eliminate a lot of pushback.

comment by gjm · 2022-11-22T01:50:06.093Z · LW(p) · GW(p)

(I see that this comment has received a lot of downvotes. None of them is from me.)

comment by jimrandomh · 2022-11-21T21:37:26.779Z · LW(p) · GW(p)

I'm sure there are many people whose inner experience is like this. But, negative data point: Mine isn't. Not even a little. And yet, I still believe AGI is likely to wipe out humanity.

Replies from: leogao, adamzerner, Valentine, Shiroe
comment by leogao · 2022-11-22T03:53:19.556Z · LW(p) · GW(p)

Seconded: mine also isn't.

Also, for what it's worth, I also don't think of myself as the kind of person to naturally gravitate towards the apocalypse/"saving the world" trope. From a purely narrative-aesthetic perspective, I much prefer the idea of building novel things, pioneering new frontiers, realizing the potential of humanity, etc, as opposed to trying to prevent disaster, reduce risk, etc. I am quite disappointed at reality for not conforming to my literary preferences.

comment by Adam Zerner (adamzerner) · 2022-11-22T04:45:19.326Z · LW(p) · GW(p)

It's interesting how people's responses can be so different here. I'm someone who gets pretty extreme anxiety from the x-risk stuff, at least when I'm not repressing those feelings.

comment by Valentine · 2022-11-21T23:33:07.528Z · LW(p) · GW(p)

Yep. That just means this wasn't written for you! I expect this wasn't written for a lot of (most?) people here.

Replies from: MakoYass, Vaniver
comment by mako yass (MakoYass) · 2022-11-22T16:53:54.889Z · LW(p) · GW(p)

I really wish that the post had been written in a way that let me figure out it wasn't for me sooner...

I think it would have saved a lot of time if the paragraph in bold had been at the top.

comment by Vaniver · 2022-11-22T02:23:27.561Z · LW(p) · GW(p)

I came here to say something roughly like Jim's comment, but... I think what I actually want is grounding? Like, sure, you were playing the addictive fear game and now think you're out of it. But do you think I was? If you think there's something that differentiates people who are and aren't, what is it?

[Like, "your heart rate increases when you think about AI" isn't a definitive factor one way or another, but probably you could come up with a list of a dozen such indicators, and people could see which are true for them, and we could end up with population statistics.]

Replies from: Kaj_Sotala, Valentine
comment by Kaj_Sotala · 2022-11-22T08:18:06.708Z · LW(p) · GW(p)

I think that at least the kinds of "Singularity-disrupted" people that Anna describes in "Reality-Revealing and Reality-Masking Puzzles [LW · GW]" are in the fear game.

Over the last 12 years, I’ve chatted with small hundreds of people who were somewhere “in process” along the path toward “okay I guess I should take Singularity scenarios seriously.” From watching them, my guess is that the process of coming to take Singularity scenarios seriously is often even more disruptive than is losing a childhood religion. Among many other things, I have seen it sometimes disrupt:

  • People's belief that they should have rest, free time, some money/time/energy to spend on objects of their choosing, abundant sleep, etc.
    • “It used to be okay to buy myself hot cocoa from time to time, because there used to be nothing important I could do with money. But now—should I never buy hot cocoa? Should I agonize freshly each time? If I do buy a hot cocoa does that mean I don’t care?”
  • People's in-practice ability to “hang out”—to enjoy their friends, or the beach, in a “just being in the moment” kind of way.
    • “Here I am at the beach like my to-do list told me to be, since I’m a good EA who is planning not to burn out. I’ve got my friends, beer, guitar, waves: check. But how is it that I used to be able to enter “hanging out mode”? And why do my friends keep making meaningless mouth-noises that have nothing to do with what’s eventually going to happen to everyone?”
  • People's understanding of whether commonsense morality holds, and of whether they can expect other folks in this space to also believe that commonsense morality holds.
    • “Given the vast cosmic stakes, surely doing the thing that is expedient is more important than, say, honesty?”
  • People's in-practice tendency to have serious hobbies and to take a deep interest in how the world works.
    • “I used to enjoy learning mathematics just for the sake of it, and trying to understand history for fun. But it’s actually jillions of times higher value to work on [decision theory, or ML, or whatever else is pre-labeled as ‘AI risk relevant’].”
  • People's ability to link in with ordinary institutions and take them seriously (e.g. to continue learning from their day job and caring about their colleagues’ progress and problems; to continue enjoying the dance club they used to dance at; to continue to take an interest in their significant other’s life and work; to continue learning from their PhD program; etc.)
    • “Here I am at my day job, meaninglessly doing nothing to help no one, while the world is at stake—how is it that before learning about the Singularity, I used to be learning skills and finding meaning and enjoying myself in this role?”
  • People's understanding of what’s worth caring about, or what’s worth fighting for
    • “So… ‘happiness’ is valuable, which means that I should hope we get an AI that tiles the universe with a single repeating mouse orgasm, right? ... I wonder why imagining a ‘valuable’ future doesn’t feel that good/motivating to me.”
  • People's understanding of when to use their own judgment and when to defer to others.
    • “AI risk is really really important… which probably means I should pick some random person at MIRI or CEA or somewhere and assume they know more than I do about my own career and future, right?”
comment by Valentine · 2022-11-22T19:56:46.217Z · LW(p) · GW(p)

But do you think I was?

I honestly don't know. I lean toward no? But don't believe me too much there.

 

If you think there's something that differentiates people who are and aren't, what is it?

The main one I'm interested in is "Do you recognize yourself in the dynamic I spelled out?"

I like Kaj bringing in that list. I think that's helpful.

A lot of how I pick this stuff out isn't a mental list. There's a certain rushedness. A pressure to their excitement about and fixation on doomy things. Conversation flows in a particularly squeezy and jagged way. Body movements are… um… fitting of the pattern. :-P

There was a noticeable surge of this when Scott came out with "Meditations on Moloch". I remember how at the EAG that year a bunch of people went and did a mock magical ceremony against Moloch. (I think Scott published it right as EAG was starting.) That totally had the energy of the thing I'm talking about. Playful, but freaked out.

I know this doesn't help with the statistics thing. But I'm way less confident of "These are the five signs" than I am about this feeling tone.

comment by Shiroe · 2023-12-21T23:04:54.577Z · LW(p) · GW(p)

Same. I feel somewhat jealous of people who can have a visceral in-body emotional reaction to X-risks. For most of my life I've been trying to convince my lizard brain to feel emotions that reflect my beliefs about the future, but it's never cooperated with me.

comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-12-20T04:39:55.397Z · LW(p) · GW(p)

I think this post is emblematic of the problem I have with most of Val's writing: there are useful nuggets of insight here and there, but you're meant to swallow them along with a metric ton of typical mind fallacy, projection, confirmation bias, and manipulative narrativemancy.

Elsewhere, Val has written words approximated by ~"I tried for years to fit my words into the shape the rationalists wanted me to, and now I've given up and I'm just going to speak my mind."

This is what it sounds like when you are blind [LW · GW] to an important distinction. Trying to hedge magic things that you do not grok, engaging in cargo culting. If it feels like tediously shuffling around words and phrases that all mean exactly the same thing, you're missing the vast distances on the axis that you aren't perceiving.

The core message of "hey, you might well be caught up in a false narrative that is doing emotional work for you via providing some sense of meaning or purpose and yanking you around by your panic systems, and recognizing that fact can allow you to do anything else" is a good one, and indeed it's one that many LessWrongers need.  It's even the sort of message that needs some kind of shock along with it, to make readers go "oh shit, that might actually be me."

But that message does not need to come along with a million little manipulations.  That message isn't improved by attempts to hypnotize the audience, or set up little narrative traps.

e.g. starting with "There's a kind of game, here, and it's rude to point out, and you're not supposed to name it, but I'm going to." <—I'm one of the cool ones who sees the Matrix!  I'm brave and I'm gonna buck the rules!  (Reminiscent of a right-wing radio host going "you get punished if you say X" and then going on to spend twenty minutes on X without being punished.  It's a cheap attempt to inflate the importance of the message and the messenger.)

e.g. "I really do respect the right for folk to keep playing it if they want" <—More delegitimization, more status moves.  A strong implication along the lines of "the illusion that I, Val, have correctly identified is the only thing happening here."  Not even a token acknowledgement of the possibility that perhaps some of it is not this particular game; no thought given to the possibility that maybe Val is flawed in a way that is not true of all the other LWers.  Like the Mythbusters leaping from "well, we couldn't recreate it" to "therefore, it's impossible and it never happened, myth BUSTED."

(I'm really really really tired of the dynamic where someone notices that they've been making Mistake X for many years and then just presumes that everyone else is, too, and just blind to it in the same way that they themselves were. It especially rankles when they're magnanimous about it.)

e.g. "You have to live in a kind of mental illusion to be in terror of the end of the world." <—More projection, more typical minding, more ~"I've comprehended all of the gears here and there's no way anything else could lead to appropriate terror of the end of the world.  The mistake I made is the mistake everyone's making (but don't worry, I'm here to guide you out with my superior wisdom, being as I am ahead of you on this one."  See also the actual quote "for what it's worth, as someone who turned off the game and has reworked his body's use of power quite a lot, it's pretty obvious to me that this isn't how it works," which, like basically everything else here, is conspicuously missing a pretty damn important for me.  The idea that other people might be doing something other than what Val comprehends seems literally not to occur to him.

e.g. "I mean this with respect and admiration. It's very skillful. Eliezer has incredible mastery in how he weaves terror and insight together." <—Look!  See how I'm above it all, and in a position to evaluate what's going on?  Pay no attention to the fact that this incidentally raises my apparent status, btw.

e.g. "In case that was too opaque for you just yet, I basically just said 'Your thoughts will do what they can to distract you from your true underlying fear.' ... This is slow work. Unfortunately your 'drug' supply is internal, so getting sober is quite a trick." <—If your experience doesn't match my predictions, it's because you're unskillful, and making [mistake]...but don't worry, with my "yet" I will subtly imply that if you just keep on listening to my voice, you will eventually see the light.  Pay no attention to the fully general counterevidence-dismissing system I'm setting up.

Again, it's a shame, because bits like "If your body's emergency mobilization systems are running in response to an issue, but your survival doesn't actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately" are well worth considering.  But the essay sort of forces you to step into Val's (broken, self-serving, overconfident) frame in order to catch those nuggets.  And, among readers who are consciously wise or unconsciously allergic to the sort of manipulation he's trying to pull, many of them will simply bounce off the thing entirely, and not catch those useful nuggets.

It didn't have to be this way.  It didn't have to be arrogant and project-y and author-elevating and oh-so-cynical-and-aloof.  There's another version of this essay out there in possibility space that contains all of the good insights and none of the poison.

But that's not what we got.  Instead, we got a thing that (it seems to me (though I could be wrong)) had the net effect of marginally shifting LW's discourse in the wrong direction, by virtue of being a popular performance piece wrapped around an actually useful insight or two.  It normalizes a kind of sloppy failure-to-be-careful-and-clear that is antithetical to the mission of becoming less wrong.  I think this essay lowered the quality of thinking on the site, even as it performed the genuinely useful service of opening some eyes to the problem Val has identified.

(Because no, of course Val was not alone in this issue, it really is a problem that affects Lots Of Humans, it's just not the only thing going on. Some humans really do just ... not have those particular flaws. When you're colorblind, you can't see that there are colors that you can't see, and so it's hard to account for them, especially if you're not even bothering to try.)
 

Replies from: Bohaska
comment by Bohaska · 2023-12-30T04:26:53.966Z · LW(p) · GW(p)

 Are there any similar versions of this post on LW which express the same message, but without the patronising tone of Valentine? Would that be valuable?

comment by Elizabeth (pktechgirl) · 2022-11-21T21:06:05.211Z · LW(p) · GW(p)

AFAICT from skimming, the object level of this post has a lot of overlap with my own algorithm. I limit engagement with x-risk to an amount that's healthy and sustainable for me. I keep non-x-risk clients in part to ground me in the real world. I'm into trauma processing and somatics. I think the fact that the people most scared of AGI risk are also the ones most scared of not developing AGI should raise some eyebrows. I treat "this feels bad" as a reason to stop without waiting for a legible justification.

And right now I'm using that last skill to not read this post. I wouldn't have even skimmed if I didn't think it was important to make this comment and have it not be totally uninformed. When I read this I feel awful, highly activated, and in a helpless/freeze response. It instills the same "you can't trust yourself, follow this rigidity" that it's trying to argue against.

You can't fight fire with fire, getting out of a tightly wound x-risk trauma spiral involves grounding and building trust in yourself, not being scared into applying the same rigidity in the opposite direction. 

Replies from: ricraz, alkjash, Valentine
comment by Richard_Ngo (ricraz) · 2022-11-22T03:42:22.387Z · LW(p) · GW(p)

I think the fact that the people most scared of AGI risk are also the ones most scared of not developing AGI should raise some eyebrows.

Very nice observation.

Replies from: gjm
comment by gjm · 2022-11-22T12:44:27.008Z · LW(p) · GW(p)

I agree, but how sure are we that it's actually a fact?

[EDITED to add:] One not-particularly-sinister-or-embarrassing possible explanation, if it is true, is that both are driven by a single underlying issue: how capable does any given person expect AGI to be? Imagine someone in WW2 thinking about whether to develop nuclear weapons. It seems plausible that { people who think it's super-vital to do it because whoever does it will win the war for sure } and { people who think it's super-dangerous to do it because these weapons could do catastrophic damage } might be roughly the same set of people.

comment by alkjash · 2022-11-21T22:52:09.574Z · LW(p) · GW(p)

You can't fight fire with fire, getting out of a tightly wound x-risk trauma spiral involves grounding and building trust in yourself, not being scared into applying the same rigidity in the opposite direction. 

The comment is generally illuminating but this particular sentence seems too snappy and fake-wisdomy to be convincing. Would you mind elaborating?

Replies from: Kaj_Sotala, TekhneMakre, pktechgirl
comment by Kaj_Sotala · 2022-11-22T09:43:36.372Z · LW(p) · GW(p)

There's a class of things that could be described as losing trust in yourself and in your ability to reason.

For a mild example, a friend of mine who tutors people in math recounts that many people have low trust in their ability to do mathematical reasoning. He often asks his students to speak out loud while solving a problem, to find out how they are approaching it. And some of them will say something along the lines of, "well, at this point it would make the most sense to me to [apply some simple technique], but I remember that when our teacher was demonstrating how to solve this, he used [some more advanced technique], so maybe I should instead do that".

The student who does that isn't trusting that the algorithm of "do what makes the most sense to me" will eventually lead to the correct outcome. Instead, they're trying to replace it with "do what I recall an authority figure doing, even if I don't understand why".

Now it could be that the simple technique is wrong to apply here, and the more advanced one is needed. But if the student had more self-trust and tried the thing that made the most sense to them, then their attempt to solve the problem using the simple approach might help them understand why that approach doesn't work and why they need another approach. Or maybe it's actually the case that the simple approach does work just as well - possibly the teacher did something needlessly complex, or maybe the student just misremembers what the teacher did. In which case they would have learned a simpler way of solving the problem.

Whereas if the student always just tries to copy what they remember the teacher doing - guessing the teacher's password [LW · GW], essentially - even if they do get it right, they won't develop a proper understanding of why it went right. The algorithm that they're running isn't "consider what you know of math and what makes the most sense in light of that", it's "try to recall instances of authority figures solving similar problems and do what they did". Which only works to the extent that you can recall instances of authority figures solving highly similar problems as the one that you are dealing with.

Why doesn't the student want to try their own approach first? After all, the worst that could happen is that it wouldn't work and they would have to try something else, right?

But if you have math trauma - if you've had difficulties with math and been humiliated for it - then trying an approach and failing at it isn't something that you could necessarily just shrug at. Instead, it will feel like another painful reminder that You Are Bad At Math and that You Will Never Figure This Out and that You Shouldn't Even Try. It might make you feel lost and disoriented and make you hope that someone would just tell you what to do. (It doesn't necessarily need to feel this extreme - it's enough if the thought of trying and failing just produces a mild flinch away from it [LW · GW].)

In this case, you need to find some reassurance that trying and failing is actually safe. To build trust in the notion that even if you do fail once, or twice, or thrice, or however many times it takes, you'll still be able to learn from each failure and figure out the right answer eventually. That's what enables you to do the thing that's required to actually learn. (Of course, some problems are just too hard and then you'll need to ask someone for guidance - but only after you've exhausted every approach that seemed promising to you.)

Now that's how it looks in the case of math. It's also possible to lose trust in yourself in other domains; e.g. Anna mentions here [LW · GW] how learning about AI risk sometimes destabilizes self-trust when it comes to your career decisions:

Over the last 12 years, I’ve chatted with small hundreds of people who were somewhere “in process” along the path toward “okay I guess I should take Singularity scenarios seriously.” From watching them, my guess is that the process of coming to take Singularity scenarios seriously is often even more disruptive than is losing a childhood religion. Among many other things, I have seen it sometimes disrupt: [...]

  • People's understanding of when to use their own judgment and when to defer to others.
    • “AI risk is really really important… which probably means I should pick some random person at MIRI or CEA or somewhere and assume they know more than I do about my own career and future, right?”

And besides domain-specific self-trust, there also seems to be some relatively domain-general component of "how much do you trust your own ability to figure stuff out eventually". In all cases, I suspect that the self-mistrust has to do with feeling that it's not safe to trust yourself - because you've been punished for doing so in the past, or because you feel that AI risk is so important that there isn't any room to make mistakes.

But you still need to have trust in yourself. Knowing that yes, it's possible that trusting yourself will mean that you do make the wrong call and nothing catches you and then you die, but that's just the way it goes. Even if you decided to outsource your decisions to someone else, not only would that be unlikely to work, but you'd still need to trust your own ability in choosing who to outsource them to.

Scott Alexander has also speculated that depression involves a global lack of self-trust - predictive processing suggests that various neural predictions come with associated confidence levels. And "low global self-trust" may mean that the confidence your brain assigns even to predictions like "it's worth getting up from bed today" falls low enough so as to not be strongly motivating.
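
To make that speculation concrete, here is a minimal toy sketch - with entirely made-up numbers and a hypothetical action threshold, purely illustrative rather than a claim about how the brain actually computes anything - of how a single global confidence scalar could gate which predictions feel motivating enough to act on:

```python
# Toy illustration only: each candidate action's "pull" is its expected payoff
# weighted by the confidence assigned to the prediction behind it. The payoffs,
# confidences, and threshold below are all made up for the sake of the example.

ACTION_THRESHOLD = 0.5  # hypothetical minimum pull needed to actually act

predictions = {
    "getting up from bed will be worth it": {"payoff": 1.0, "confidence": 0.9},
    "texting a friend will help": {"payoff": 0.8, "confidence": 0.8},
}

def motivating_actions(predictions, global_self_trust=1.0):
    """Return the predictions whose confidence-weighted payoff clears the threshold.

    `global_self_trust` scales every confidence at once, standing in for the
    speculated global component of self-trust.
    """
    return [
        name
        for name, p in predictions.items()
        if p["payoff"] * p["confidence"] * global_self_trust >= ACTION_THRESHOLD
    ]

print(motivating_actions(predictions, global_self_trust=1.0))  # both clear the bar
print(motivating_actions(predictions, global_self_trust=0.4))  # nothing does
```

On this toy picture, nothing about the individual predictions changes; lowering the one global scalar is enough to make even "it's worth getting up from bed" fail to motivate.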

To go back to Elizabeth's original sentence... looked at from a certain angle, Val's post can be read as saying "those thoughts that you have about AI risk? Don't believe in them; believe in what I am saying instead". Read that way, that's a move that undermines self-trust. Stop thinking in terms of what makes sense to you, and replace that with what you think Val would approve of.

And while Val's post is not exactly talking about a lack of self-trust, it's talking about something in a related space. It's talking about how some experiences have been so painful that the body is in a constant low-grade anxiety/vigilance response, and the person isn't able to stop and be with those unpleasant sensations - similar to how the math student isn't able to stop and try out uncertain approaches, as it's too painful for the student to be with the unpleasant sensations of shame and humiliation of being Bad At Math. 

Both "I'm feeling too anxious to trust myself" and "I'm feeling too anxious to stop thinking about AI" are problems that orient one's attention away from bodily sensations. "You can't fight fire with fire" - you can't solve anxiety about AI by with a move that creates more unpleasant bodily sensations and makes it harder to orient your attention to your body.

comment by TekhneMakre · 2022-11-22T03:35:34.054Z · LW(p) · GW(p)

IDK if helpful, but my comment on this post here is maybe related to fighting fire with fire (though Elizabeth might have been more thinking of strictly internal motions, or something else):

https://www.lesswrong.com/posts/kcoqwHscvQTx4xgwa/?commentId=bTe9HbdxNgph7pEL4#comments [LW · GW]

And gjm's comment on this post points at some of the relevant quotes:

https://www.lesswrong.com/posts/kcoqwHscvQTx4xgwa/?commentId=NQdCG27BpLCTuKSZG [LW · GW]

comment by Elizabeth (pktechgirl) · 2022-11-22T00:35:57.189Z · LW(p) · GW(p)

That's a super reasonable request that I wish I was able to fulfill. Engaging with Val on this is extremely costly for me, and it's not reasonable to ask him to step out of a conversation on his own post, so I can't do it here. I thought about doing a short form post but feature-creeped myself to the point it was infeasible.

Replies from: Valentine, alkjash
comment by Valentine · 2022-11-22T00:53:57.890Z · LW(p) · GW(p)

…it's not reasonable to ask [Val] to step out of a conversation on his own post…

If it's understood that I'm not replying because otherwise the contribution won't happen at all rather than because I have nothing to say about it, then I'm fine stepping back and letting you clarify what you mean. If that helps.

comment by alkjash · 2022-11-22T03:02:33.599Z · LW(p) · GW(p)

Sure, no big deal.

comment by Valentine · 2022-11-21T23:32:00.115Z · LW(p) · GW(p)

I think you're reacting to a tone, not content.

I absolutely do not mean anything like "Don't trust yourself, trust this instead." I never mean anything like that.

I do mean not to trust the mind though. And for reasons you can come to see for yourself.

And people who are highly, highly fused with their minds can't easily tell the difference.

That said, I applaud the algorithm. Something not feeling good is great reason not to engage with it.

Just remember that that's a fact about your relationship to the thing, not a fact about the thing.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2022-11-22T00:31:12.394Z · LW(p) · GW(p)

I think you're reacting to a tone, not content.

 

Yes, this is correct. 

comment by TekhneMakre · 2022-11-21T18:52:11.481Z · LW(p) · GW(p)

Neither up- nor down-voted; seems good for many people to hear, but also is typical mind fallacying / overgeneralizing. There's multiple things happening on LW, some of which involve people actually thinking meaningfully about AI risk without harming anyone. Also, by the law of equal and opposite advice: you don't necessarily have to work out your personal mindset so that you're not stressed out, before contributing to whatever great project you want to contribute to without causing harm.

Replies from: Valentine
comment by Valentine · 2022-11-21T23:49:07.207Z · LW(p) · GW(p)

I didn't write this to convey a model in an epistemically rigorous way. I wrote it to offer people who were trapped in the game a way out.

I think you're correct by LW norms to point out the epistemic shortcomings of the claims and tone. It's the kind of move that belongs in this social space.

But also, for the sake of this piece in particular, I truly do not care.

 

Also, by the law of equal and opposite advice: you don't necessarily have to work out your personal mindset so that you're not stressed out, before contributing to whatever great project you want to contribute to without causing harm.

This is… uh…

Hmm. I don't know how to name this easily.

This is technically correct, but also totally irrelevant. It functions mostly as a generic "LW epistemic standards" signal booster.

Like, whom are you talking to? Are you trying to rescue people who might be scared that they have to do that mindset work first, but because of your comment will let themselves not feel quite so scared?

Or are you talking to me? Trying to… what, get me to soften the firmness with which I'm saying some of my points in the OP?

There absolutely are some things you cannot helpfully contribute to without having the wisdom to see what you're doing. Absolutely.

And of course there are some things where that isn't necessary.

Totally.

I'm not sure whom you think needs to hear that…?

Replies from: TekhneMakre
comment by TekhneMakre · 2022-11-22T00:06:25.813Z · LW(p) · GW(p)

You write in a gaslighty way, trying to disarm people's critical responses to get them to accept your frame.  I can see how that might be a good thing in some cases, and how you might know that's a good thing in some cases. E.g. you may have seen people respond some way, and then reliably later say "oh this was XYZ and I wish I'd been told that". And it's praiseworthy to analyze your own suffering and confusion, and then explain what seem like the generators in a way that might help others.

But still, trying to disarm people's responses and pressure them to accept your frame is a gaslighting action and has the attendant possible bad effects. The bad effects aren't like "feel quite so scared", more like having a hostile / unnatural / external / social-dominance narrative installed. Again, I can see how a hostile narrative might have defenses that tempt an outsider to force-install a counternarrative, but that has bad effects. I'm using the word "gaslighting" to name the technical, behavioral pattern, so that its common properties can be more easily tracked; if there's a better word that still names the pattern but is less insulting-sounding I'd like to know.

A main intent of my first comment was to balance that out a little by affirming simple truths from outside the frame you present. I don't view you as open to that sort of critique, so I didn't make it; but if you're interested I could at least point at some sentences you wrote.

 

ETA: Like, it would seem less bad if your post said up front something more explicit to the effect of: "If you have such and such properties, I believe you likely have been gaslighted into feeding the doomsday cult. The following section contains me trying to gaslight you back into reality / your body / sanity / vitality." or something. 

Replies from: Valentine
comment by Valentine · 2022-11-22T00:45:31.434Z · LW(p) · GW(p)

Ah. Cool, thank you for clarifying where you're coming from.

 

You write in a gaslighty way, trying to disarm people's critical responses to get them to accept your frame.

That's not what I'm doing. But I get how it lands like that for you.

I don't care about people accepting my frame.

But if someone is available to try it on, then I'm happy to show them what I see from within the frame. And then I respect and trust whatever they choose to do with what they see.

Frankly, lots of folk here are bizarrely terrified of frames. I get why; there are psychological methods of attack based on framing effects.

But I refuse to comply with efforts to pave the world in leather. I advocate people learn to wear shoes instead. (Metaphorically speaking.)

 

A main intent of my first comment was to balance that out a little by affirming simple truths from outside the frame you present.

Cool. I don't respect the view of needing to Karpman-style Rescue people in this kind of way, but given how that's woven into the culture here, your move makes sense to me. I can see how you're trying to come from a good place there.

 

I don't view you as open to that sort of critique, so I didn't make it; but if you're interested I could at least point at some sentences you wrote.

Correct, I'm not available for that right now. But thank you for the offer.

 

Like, it would seem less bad if your post said up front something more explicit to the effect of: "If you have such and such properties, I believe you likely have been gaslighted into feeding the doomsday cult. The following section contains me trying to gaslight you back into reality / your body / sanity / vitality." or something.

Hmm. Yeah, if I were editing this piece over many days trying to make it really good, that might be a good suggestion. Might have filtered folk well early on and helped those for whom it wasn't written relax a bit more.

And at the same time, I don't want to focus too much on the cognitive level. That's part of the whole point.

But the suggestion is hypothetically good. Thank you.

Replies from: TekhneMakre
comment by TekhneMakre · 2022-11-22T01:00:17.035Z · LW(p) · GW(p)

(Mainly for third parties:)

I don't care about people accepting my frame.

I flag this as probably not true.

Frankly, lots of folk here are bizarrely terrified of frames. I get why; there are psychological methods of attack based on framing effects.

It's the same sort of thing your post is about. 

Might have filtered folk well early on and helped those for whom it wasn't written relax a bit more.

I flag this as framing critical reactions as being about the reacters not being relaxed, rather than about there possibly being something wrong with his post. 

comment by janus · 2022-11-21T22:35:45.905Z · LW(p) · GW(p)

This is beautifully written and points at what I believe to be deep truths. In particular:

Your brilliant mind can create internal structures that might damn well take over and literally kill you if you don't take responsibility for this process. You're looking at your own internal AI risk.

...

Most people wringing their hands about AI seem to let their minds possess them more and more, and pour more & more energy into their minds, in a kind of runaway process that's stunningly analogous to uFAI.

But I won't say more about this right now, mostly because I don't think I can do it justice with the amount of time and effort I'm prepared to invest writing this comment. On that note, I commend your courage in writing and posting this. It's a delicate needle to thread between many possible expressions that could rub people the wrong way or be majorly misinterpreted.

Instead I'll say something critical and/or address a potential misinterpretation of your point:

What is this sobriety you advocate for?

I'm concerned that sobriety might be equivocated with giving in to the cognitive bias toward naive/consensus reality. In a sense of the word, that is what "sobriety" is: a balance of cognitive hyperparameters, a psychological attractor that has been highly optimized by evolution and in-lifetime learning. Being sober makes you effective on-distribution. The problem is if the distribution shifts.

I've noticed that people who have firsthand experience with psychosis, high doses of psychedelics, or religious/spiritual beliefs tend to have a much easier time "going up shock levels" and taking seriously the full version of AI risk (not just AI tiling the internet with fake news, but tiling the lightcone with something we have not the ontology to describe). This might sound like a point against AI-risk. But I think it's because we're psychologically programmed with deep trust in the fundamental stability of reality, to intuitively believe that things cannot change that much. Having the consensus reality assumption broken once, e.g. by a psychotic episode where you seriously entertain the possibility that the TV is hacking your mind, makes it easier for it to be broken again (e.g. to believe that mind hacking is a cinch for sufficiently intelligent AI). There are clear downsides to this -- you're much more vulnerable to all sorts of unusual beliefs, and most unusual beliefs are false. But some unusual beliefs are true. For instance, I think some form of AI risk both violates consensus reality and is true.

A more prosaic example: in my experience, the absurdity heuristic [? · GW] is one of the main things that prevented and still prevents people from grasping the implications of GPT-3. Updating on words being magic spells that can summon intelligent agents pattern matches against schizophrenia, so the psychological path of least resistance for many people is to downplay and rationalize.

I think there's a different meaning of sobriety, perhaps what you're pointing at, that isn't just an entropic regression toward the consensus. But the easiest way to superficially take the advice of this post, I think -- the easiest way out of the AI doom fear attractor -- is to fall back into the consensus reality attractor. And maybe this is the healthiest option for some people, but I don't think they're going to be useful.

But I agree that being driven by fear, especially fear inherited socially and/or tangled up with trauma, is not the most effective either, and often ends up fueling ironic self fulfilling prophecies and the like. In all likelihood the way out which makes one more able to solve the problem requires continuously threading your own trajectory between various psychological sink states, and a single post is probably not enough to guide the way to that "exit". (But that doesn't mean it's not valuable)

Replies from: Valentine
comment by Valentine · 2022-11-22T00:01:20.972Z · LW(p) · GW(p)

What is this sobriety you advocate for?

Ah, I'm really glad you asked. I tried to define it implicitly in the post but I was maybe too subtle.

There's this specific engine of addiction. It's the thing that distracts without addressing the cause, and becomes your habitual go-to for dealing with the Bad Thing. That creates a feedback loop.

Sobriety is with respect to an addiction. It means dropping the distraction and facing & addressing the thing you'd been previously distracting yourself from, until the temptation to distract yourself extinguishes.

Alcohol being a seed example (hence "sobriety"). The engine of alcoholism is complex, but ultimately there's an underlying thing (sometimes biochemical, but very often emotional) that's a sensation the alcoholic's mind/body system has identified as "intolerable — to be avoided". Alcohol is a great numbing agent and can create a lot of unrelated sensations (like dizziness), but it doesn't address (say) feelings of inadequacy.

So getting sober isn't just a matter of "don't drink alcohol", but of facing the things that drive the impulse to reach for the bottle. When you extinguish the cause, the effect evaporates on its own — modulo habits.

I've witnessed this kind of addiction engine at play for a lot of rationalists. I don't have statistics here, or a sense of how widespread it is, but it's common enough that it's an invitation woven into the culture. Kind of like alcohol is woven into mainstream culture. The addiction in this case is to a particular genre of intense thought — which, like alcohol, acts like a kind of numbing agent.

So in the same way, by "get sober" I'm pointing at facing the SNS energy driving the intense thought, and getting that to settle down and digest, instead of just believing the thoughts point-blank. To get to a point where you don't need the thoughts to be distracting. And then the mind can be useful to think about stuff that can freak you out.

But not so much before.

…kind of like, an alcohol-laden mind can't think things through very well, and an alcoholic's mind isn't well-suited to deciding whether to have another drink even when they aren't currently drunk.

So, no, I don't mean anything about drifting back toward mainstream consensus reality. I'm talking about a very specific mechanism. Getting off a specific drug long enough to stop craving it.

Replies from: janus
comment by janus · 2022-11-22T00:08:14.972Z · LW(p) · GW(p)

Now that you've explained this seems obviously the right sense of sobriety given the addiction analogy. Thank you!

Replies from: Valentine
comment by Valentine · 2022-11-22T00:54:33.448Z · LW(p) · GW(p)

Quite welcome.

comment by AnnaSalamon · 2022-11-24T23:40:51.177Z · LW(p) · GW(p)

As a personal datapoint: I think the OP's descriptions have a lot in common with how I used to be operating, and that this would have been tremendously good advice for me personally, both in terms of its impact on my personal wellness and in terms of its impact on whether I did good-for-the-world things or harmful things.

(If it matters, I still think AI risk is a decent pointer at a thingy in the world that may kill everyone, and that this matters.  The "get sober" thing is a good idea both in relation to that and broadly AFAICT.)

Replies from: Valentine
comment by Valentine · 2022-11-25T19:36:38.493Z · LW(p) · GW(p)

Thank you for adding your personal data point. I think it's helpful in the public space here. But also, personally I liked seeing that this is (part of) your response.

(If it matters, I still think AI risk is a decent pointer at a thingy in the world that may kill everyone, and that this matters.  The "get sober" thing is a good idea both in relation to that and broadly AFAICT.)

I totally agree.

comment by Ben (ben-lang) · 2022-11-23T10:50:06.548Z · LW(p) · GW(p)

Interesting take. I haven't seen this happen with AI, but I do know two people who have an environmentalism fear spiral thing. My diagnosis was very different: I think the people I know actually have anxiety, or panic attacks or similar, for mental health reasons. The environmentalism serves as camouflage. Thought 1: "Why am I depressed/anxious/whatever when things in my life are pretty good?" Then, instead of Thought 2 being "Maybe I should talk to a friend/do something that might cheer me up/see a doctor" they instead get Thought 2: "Oh, it's because humanity is going to destroy the world and everything will be awful. Man, it's great that I am such a well-adjusted, big-picture, caring person that giant planet-scale forces that barely affect me personally have more impact on my emotional state than the actual day to day of my own life." Not only does the camo prevent them addressing the real problem (OK, the environment is a real problem, but it's not the only problem, and it's not the problem they are suffering from at the moment), but it also weaponizes all kinds of media against themselves.

comment by Dagon · 2022-11-21T18:35:06.134Z · LW(p) · GW(p)

Upvoted because it's pointing at a real source of pain, and it's very good to talk about.  But I suspect there's a lot of typical mind fallacy in the parts that sound more universal and less "here's what happened to and worked for me".  

For me, I went through my doomsday worries in my teens and twenties, long before AI was anything to take seriously.  Nuclear war or environmental collapse (or one causing the other) were assumed to be the forms of destruction to expect.  Over the course of a decade or two, I was able to accept that, for me, "memento mori" was the root of the anxiety.  I don't want to die, and I probably will anyway.  There may be no actual outside meaning to my life, or by extension to anyone else's.  And that doesn't prevent me from caring about other people (both individuals and groups, though not by any means equally), nor about my own experiences.  These are important, even if they're only important to me (and, I hope, to some other humans).

Replies from: Valentine
comment by Valentine · 2022-11-21T23:40:19.938Z · LW(p) · GW(p)

But I suspect there's a lot of typical mind fallacy in the parts that sound more universal and less "here's what happened to and worked for me".

In parts of this I'm talking to the kind of person who could benefit from being spoken to about this.

My experience is that folk who need support out of tough spots like this have a harder time hearing the deeper message when it's delivered in carefully caveated epistemically rigorous language. It shoves them too hard into thinking, and usually in ways that activate the very machinery they're trying to find a way to escape.

I know that's a little outside the discourse norms of LW. Caveating things not as "People experience X" but instead as "I experienced X, and I suspect it's true of some others too". I totally respect that has a place here.

Just not so much when trying to point out an exit.

 

For me, I went through my doomsday worries in my teens and twenties, long before AI was anything to take seriously.

I like you sharing your experience overview here. Thank you. I resonate with a fair bit of it, though I came at it from a really different angle.

(I grew up believing I'd live forever, then "became mortal" at 32. Spent a few years in nihilistic materialist hell. A lot of what you're saying reminds me of what I was grappling with in that hell. Now that's way, way more integrated — but probably not in a way the LW memeplex would approve of.)

Replies from: janus, philh
comment by janus · 2022-11-21T23:57:48.843Z · LW(p) · GW(p)

I lived in "nihilistic materialist hell" from the ages of 5 (when it hit me what death meant) and ~10. It -- belief in the inevitable doom of myself and everyone I cared for and ultimately the entire universe to heat death -- was at times directly apprehended and completely incapacitating, and otherwise a looming unendurable awareness which for years I could only fend off using distraction. There was no gamemaster. I realized it all myself. The few adults I confided in tried to reassure me with religious and non-religious rationalizations of death, and I tried to be convinced but couldn't. It was not fun and did not feel epic in the least, though maybe if I'd discovered transhumanism in this period it would've been a different story.

I ended up getting out of hell mostly just by developing sufficient executive function to choose not to think of these things, and eventually to think of them abstractly without processing them as real on an emotional level. 

Years later, I started actually trying to do something about it. (Trying to do something about it was my first instinct as well, but as a 5 yo I couldn't think of anything to do that bought any hope.)

But I think the machinery I installed in order to not think and not feel the reality of mortality is still in effect, and actually inhibits my ability to think clearly about AI x-risk, e.g., by making it emotionally tenable for me to do things that aren't cutting the real problem -- when you actually feel like your life is in danger [LW · GW], you won't let motivated reasoning waste your EV.

This may be taken as a counterpoint to the invitation in this post. But I think it's just targeted, as you say, at a subtly different audience.

comment by philh · 2022-11-24T17:32:52.555Z · LW(p) · GW(p)

My experience is that folk who need support out of tough spots like this have a harder time hearing the deeper message when it’s delivered in carefully caveated epistemically rigorous language.

I kinda feel like my reaction to this is similar to your reaction to frames [LW(p) · GW(p)]:

I refuse to comply with efforts to pave the world in leather. I advocate people learn to wear shoes instead. (Metaphorically speaking.)

To be more explicit, I feel like... sure, I can believe that sometimes epistemic rigor pushes people into thinky-mode and sometimes that's bad; but epistemic rigor is good anyway. I would much prefer for people to get better at handling things said with epistemic rigor, than for epistemic rigor to get thrown aside.

And maybe that's not realistic everywhere, but even then I feel like there should be spaces where we go to be epistemically rigorous even if there are people for whom less rigor would sometimes be better. And I feel like LessWrong should be such a space.

I think the thing I'm reacting to here isn't so much the lack of epistemic rigor - there are lots of things on LW that aren't rigorous and I don't think that's automatically bad. Sometimes you don't know how to be rigorous. Sometimes it would take a lot of space and it's not necessary. But strategic lack of epistemic rigor - "I want people to react like _ and they're more likely to do that if I'm not rigorous" - feels bad.

Replies from: Valentine
comment by Valentine · 2022-11-25T04:48:22.350Z · LW(p) · GW(p)

But strategic lack of epistemic rigor - "I want people to react like _ and they're more likely to do that if I'm not rigorous" - feels bad.

That's not what I meant.

I mean this much more like switching to Spanish when speaking with a Mexican store clerk. We can talk about the virtues of English all we want to, and maybe even justify that we're helping the clerk deepen their skill with interfacing with the modern world… but really, I just want to communicate.

You can frame that as dropping standards in order to have a certain effect on them, but that's a really damn weird frame.

Replies from: philh
comment by philh · 2022-11-25T09:15:46.052Z · LW(p) · GW(p)

I think this relies on "Val is not successfully communicating with the reader" being for reasons analogous to "Val is speaking English which the store clerk doesn't, or only speaks it poorly". But I suspect that if we unpacked what's going on, I wouldn't think that analogy held, and I would still think that what you're doing seems bad.

(Also, I want to flag that "justify that we’re helping the clerk deepen their skill with interfacing with the modern world" doesn't pattern match to anything I said. It hints at pattern matching with me saying something like "part of why we should speak with epistemic rigor is to help people hear things with epistemic rigor", but I didn't say that. You didn't say that I did, and maybe the hint wasn't intentional on your part, but I wanted to flag it anyway.)

comment by bugsbycarlin · 2022-11-21T20:11:36.991Z · LW(p) · GW(p)

A helpful tool on the way to landing and getting sober is exercise. Exercise is essentially a displacement, like any of the other addictions, but it has the unique and useful feature that it processes out your stress chemicals, leaving you with fewer of them in circulation, and a refractory period before your body can make more.

Almost no matter your physical capabilities, there is something you can go do that makes you sweat and tires you out... and breaks the stress-focus-stress-focus cycle.

 

Edit: btw, this is great stuff, very good for this community to name it and offer a path away.

Related, but addressing a very different side of the AI risk mindset: https://idlewords.com/talks/superintelligence.htm

comment by Eli Tyre (elityre) · 2022-11-28T13:27:45.772Z · LW(p) · GW(p)

Poll: Does your personal experience resonate with what you take Val to be pointing at in this post?

Options are sub-comments of this parent. 

Please vote by agree-voting (not upvoting) the answer that feels right to you. Please don't use the disagree button on options you disagree with, so that we can easily tabulate numbers by checking how many people have voted.

(Open to suggestions for better ways to set up polls, for the future.)

 

Replies from: elityre, elityre, elityre
comment by Eli Tyre (elityre) · 2022-11-28T13:29:07.168Z · LW(p) · GW(p)

Maybe something like what Val is pointing at is true of me, but I'm not sure.

comment by Eli Tyre (elityre) · 2022-11-28T13:28:44.250Z · LW(p) · GW(p)

I don't resonate with what I take Val to be pointing at here.

comment by Eli Tyre (elityre) · 2022-11-28T13:28:21.060Z · LW(p) · GW(p)

I personally resonate with what I take Val to be pointing at here.

comment by Vanessa Kosoy (vanessa-kosoy) · 2022-11-22T11:10:24.247Z · LW(p) · GW(p)

Personally, I sometimes have the opposite metacognitive concern: that I'm not freaking out enough about AI risk. The argument goes: if I don't have a strong emotional response, doesn't it mean I'm lying to myself about believing that AI risk is real? I even did a few exercises in which I tried to visualize either the doom or some symbolic representation of the doom in order to see whether it triggers emotion or, conversely, exposes some self-deception, something that rings fake. The mental state this triggered was interesting, more like a feeling of calm meditative sadness than panic. Ultimately, I think you're right when you say, if something doesn't threaten me on the timescale of minutes, it shouldn't send me into fight-or-flight. And, it doesn't.

I also tentatively agree that it feels like there's something unhealthy in the panicky response to Yudkowsky's recent proclamation of doom, and it might lead to muddled thinking. For example, it seems like everyone around here is becoming convinced of shorter and shorter timelines, without sufficient evidence IMO. But, I don't know whether your diagnosis is correct. Most of the discourse about AI risk around here is not producing any real progress on the problem. But, occasionally it does. And I'm not sure whether the root of the problem is psychological/memetic (as you claim) or just that it's a difficult problem that only a few can meaningfully contribute to.

Replies from: Valentine, Mitchell_Porter
comment by Valentine · 2022-11-22T20:11:16.888Z · LW(p) · GW(p)

…if I don't have a strong emotional response, doesn't it mean I'm lying to myself about believing that AI risk is real?

Just to be clear, I'm not talking about strong emotional responses per se. I'm talking about the body freaking out — which often produces strong emotions.

I'm way less concerned about heart-wrenching grief than I am about nervousness, for instance.

 

Most of the discourse about AI risk around here is not producing any real progress on the problem. But, occasionally it does. And I'm not sure whether the root of the problem is psychological/memetic (as you claim) or just that it's a difficult problem that only a few can meaningfully contribute to.

That's fair.

Though I do think the immense difficulty with coordination around AI risk stuff totally is a memetic thing, and that AI risk is a hard enough problem that a focus on tackling it directly with what amounts to a shrug toward the memetic problem is kind of pushing the door on its hinges.

Replies from: NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2022-12-31T23:43:35.842Z · LW(p) · GW(p)

Just to be clear, I'm not talking about strong emotional responses per se. I'm talking about the body freaking out — which often produces strong emotions.

There are a few different psychological theories about how emotions get produced, and how much other physical reactions influence and/or are influenced by that.

So... this isn't a particularly useful distinction, and I didn't see much of it in-depth in the post proper.

Replies from: Valentine
comment by Valentine · 2023-01-01T00:05:27.649Z · LW(p) · GW(p)

If this wasn't a useful distinction for you, then why comment on it? To tell me not to have made it at all?

Replies from: NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2023-01-01T00:06:58.275Z · LW(p) · GW(p)

Good point, just something I noticed, but now that you mention it it's not very useful.

EDIT: wait, no, I was commenting on it to point out that you don't seem to have made the distinction yourself in the post proper.

comment by Mitchell_Porter · 2022-11-24T21:58:50.140Z · LW(p) · GW(p)

It's easier to be more composed about a problem, when you think you have the kernel of a solution. I mean, aren't you the founder of the Infra-Bayesian school of thought? 

comment by drethelin · 2022-11-26T04:45:56.354Z · LW(p) · GW(p)

I think this essay is blatantly manipulative bullshit written in a deliberately hypnotic style, that could be modified to target any topic anyone cares about. 

Replies from: ztzuliios
comment by ztzuliios · 2022-11-30T19:34:28.544Z · LW(p) · GW(p)

It does strike me as a rather fully general counterargument, written in a deliberately obfuscatory/"woo" style.  The focus on "listening to your body" seems like an obfuscation, it's an appeal to something deliberately put beyond measurement.  This does seem like it could apply to anything anyone cares about (you're a Red Sox fan? You're addicted to the suffering, your body is telling you to stop, land on Earth and get sober!).  If you have any reasons to disagree, that's coming from a place of addiction and you need to stop caring and presumably follow a similar life-path to OP because that is the only thing that works, everything else is a death-cult. 

I don't buy it, to say the least, and I think it's only the social connections that people have to the OP that make anyone treat it charitably.  People have been saying this since the earliest days of the discussion of this topic on the Internet; this fully general counterargument predates Eliezer Yudkowsky being appropriately pessimistic about AI.

I also think that the characterization that all rationalism comes from "disembodiment" is essentially an ableist slur [LW(p) · GW(p)].  Using ableist slurs and appealing to the hierarchy of ableism is always manipulative and is never appropriate.  Unfortunately as people have come to the rationalist community more with the intention of using it as a springboard for their own careers, we've had to deal with more and more overt and covert ableism as a rather underhanded way of putting a thumb on the scales.  If we're to truly abandon the supremacy of the neurotypical, and truly embrace neurodiversity, we also have to embrace a diversity of "embodiment" (to the extent that is a valid and real concept, which I doubt), which the OP thoroughly does not.

Replies from: Slider
comment by Slider · 2022-11-30T20:45:39.251Z · LW(p) · GW(p)

I think the idea of listening to your body is actually to make the thing visible, and thus measurable, at least on the inside. It kinda does require a good-faith approach. The hope is that people who are alexithymic might not be (coining a word here) asomathymic - that people who do not have verbal access to their emotions (to a sufficient degree) would be able to have bodily access to them (to a sufficient degree).

Assuming that internal emotional access is easy and reasonable to expect might be improperly ableist. But the post can also be taken in the sense that emotional access is not assumed or taken to be easy, and "try a wider spectrum of emotional access" is an action that would not, and should not, be done unprompted. Giving the advice "have you tried switching it off and on again?" does not necessarily comment on the sophistication of the interventions already tried.

Replies from: ztzuliios
comment by ztzuliios · 2022-12-02T17:57:04.630Z · LW(p) · GW(p)

The problem isn't that access to emotion is ableist. I think that suggestion is itself ableist; neurodiverse people have complete access to their emotions - their emotional reactions to certain things might simply be different. 

The problem is that no matter what you do, if you come to a conclusion different from OP, you are simply still "disembodied." You just need to "do more work." This is a way of counting the hits and excusing the misses. "Embodiment" is not "being in touch with your emotions," it is acting in the manner prescribed. 

What is ableist is saying that there is a single state, "embodiment," which coincidentally overlaps entirely with several other things prescribed, and if you are not in that state, there is a psychological problem with you. This is neurotypical supremacy. 

As I said in the other post in this thread to which you replied, there are other ways to deal with this. You do not have to do breathwork. You do not have to meditate. You do not have to "listen to your body." These are ideological prescriptions. They poorly emulate cognitive-behavioral therapy, which is a much more effective way to process emotions and resolve maladaptive behavior patterns.

This is why the comment parent and myself think that this post is manipulative. It presents a real problem, but frames it in terms such that the only possible solution is the wholesale adoption of the author's ideology. The honest post on this topic would have mentioned other solutions, which maybe the author did not personally experience but understands, through systematizing and integrating their own experiences and the experiences of others, to be also solutions to the same problem. 

Replies from: Slider, Valentine
comment by Slider · 2022-12-02T19:57:09.326Z · LW(p) · GW(p)

I understood it as a method of getting access to emotions. The problem framing does not really carry an interpretation where you could be 100% aware of everything and still be suffering from the problem, because the antidote offered is to become aware of something (100% awareness might be superhumanly difficult).

Claiming that most blind people do not see well 20 meters away is not disparaging in itself. Alexithymia is a catalogued autism trait. It is a spectrum, and when you have met one autist you have met one autist. So while assuming all the traits upon learning of one of them would be erroneous, the presence of each of the traits becomes relevant. It is sensible to check whether a particular blind person can see well at 1 meter, is able to turn their eyeballs, or knows how to echolocate. A poor understanding of autism can lead to treating disparaging properties as autism traits. Even misrepresenting their frequency can have the same effect. Special interests are a thing, but deducing "autistic -> spends 3 hours daily on some specific topic" is ignorantly wrong. That alexithymia is a well-grounded trait is not very questionable. As a trait, alexithymia directly deals with awareness (it is not athymia in the same go). Thus lack of awareness is relevant to alexithymia. So thinking, without knowing, that alexithymia is worth processing at the intersection of "awareness" and "autism" is a leap that can be justified in good faith. Thus I disagree, and think that suggesting this kind of access to emotion is not ableist.

Being demanding while committing a typical mind fallacy is quite a bad combo. Being sure that the antidote is highly reliable does commit that kind of bad.

I do think that insisting it doesn't work ignores that alexithymic people can respond to stuff like this positively - projecting a particular response profile as typical, to the point of fallacy. Selling a placebo and selling a dangerously unreliable drug are slightly different things.

The post does admit to being rude and bad in all kinds of ways. Choosing to give essential tips to a few at the cost of insulting and harming many is a real tradeoff.

Claims about therapy effectiveness are also subject to response profiles. I wouldn't be surprised if cognitive-behavioral therapy were especially effective for autists because of its high compatibility with explicit processing.

comment by Valentine · 2022-12-02T23:02:12.656Z · LW(p) · GW(p)

Okay, I'm mostly fine with you two having your exchange and me mostly ignoring it, but I'm gonna speak up against this bit:

The problem is that no matter what you do, if you come to a conclusion different from OP, you are simply still "disembodied." You just need to "do more work." This is a way of counting the hits and excusing the misses. "Embodiment" is not "being in touch with your emotions," it is acting in the manner prescribed. 

No.

That's not what I said and it's not what I meant.

You're making that part up.

I'm describing a structure. It doesn't have a damn thing to do with convincing people of something. It's about pointing at reality and inviting people to see what I'm pointing at.

If you don't want to look, or you look and you see something else, that's fine by me. Honestly.

I doubt my saying this has a damn effect on your sense of what I am or am not saying or intending, honestly.

But I'm not going to just let this calibre of bullshit projection slide by without comment.

Replies from: ztzuliios
comment by ztzuliios · 2022-12-03T21:30:15.607Z · LW(p) · GW(p)

I'm not saying it's bad to do these things.

I'm saying that if you're doing them as a distraction from inner pain, you're basically drunk.

How is this falsifiable?

Can you point to five people who have done this, but still have a different orientation from you?

comment by Slimepriestess (Hivewired) · 2022-11-24T15:22:49.827Z · LW(p) · GW(p)

Thank you so much for writing this. I wish I had this in 2018 when I was spiraling really badly. I feel like I only managed to escape from the game by sheer luck and it easily could have killed me, hell it HAS killed people. Not everyone manages to break in a way that breaks them out of the game rather than just obliterating them.

I wrote a story about my attempts to process through a lot of this earlier this year
https://voidgoddess.org/2022/11/15/halokilled/

comment by Ben Schwyn (ben-schwyn) · 2022-11-22T20:08:41.972Z · LW(p) · GW(p)

While some people might be doing intense thinking / writing, others like myself are distracting themselves via intense listening/perceiving/reading --- covering up their own thoughts and cares by taking in lots of information and sedating/overwhelming their emotions.

comment by Nicholas / Heather Kross (NicholasKross) · 2022-12-15T05:45:50.099Z · LW(p) · GW(p)

This seems... testable? Like, it's kind of the opposite message of Yudkowsky's "try harder" posts.

Have two groups work on a research problem. One is in doom mode, one is in sober mode. See which group makes more progress.

Replies from: Valentine
comment by Valentine · 2022-12-15T12:23:40.607Z · LW(p) · GW(p)

Yep. I don't like your proposed test (what's going to define "progress"?), but yes.

My main purpose for this post wasn't to make amazing AI safety researchers though. It was to offer people who want out of the inner doomsday trap a way of exiting. That part is a little more tricky to test. But if someone wants to test it and wants to put in the effort of designing such a test, I think it's probably doable.

Replies from: NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2022-12-16T19:49:37.788Z · LW(p) · GW(p)

Yeah, the test has to be set up with all the normal caveats in advance (including being specific enough to measure, but broad enough to avoid people having good excuses to ignore whatever its conclusions turn out to be).

comment by MikkW (mikkel-wilson) · 2022-11-22T19:34:06.237Z · LW(p) · GW(p)

Uh, no.

Maybe I just genuinely care about not having terrible things happen to me and everyone else in the world? There's no game there, no broken addiction mechanisms inside.

I strong-downvoted this. [edit: I removed a statement about my feelings in reaction to this, that I feel was a little too much]

I just want to do what I can to keep the people I love from dying.

Replies from: Valentine
comment by Valentine · 2022-11-22T20:41:49.976Z · LW(p) · GW(p)

I guess you missed the part that I repeated several times that I wasn't saying there isn't something real to address.

And the interlude.

And the request for same-sided exploration.

Alas.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2022-11-22T20:54:22.690Z · LW(p) · GW(p)

I didn't miss that. That doesn't change what the rest of your post objectively is saying. (Well, I did overlook the bit about same-sided exploration. But idk, the way this post was worded kinda kills my desire to do that)

Edit: as far as the interlude, it only makes sense given the flawed thesis that is the precise thing I'm reacting negatively to.

comment by PaulK · 2022-11-21T22:01:45.504Z · LW(p) · GW(p)

I think your diagnosis of the problem is right on the money, and I'm glad you wrote it. 

As for your advice on what a person should do about this, it has a strong flavor of: quit doing what you're doing and go in the opposite direction. I think this is going to be good for some people but not others. Sometimes it's best to start where you are. Like, one can keep thinking about AI risk while also trying to become more aware of the distortions that are being introduced by these personal and collective fear patterns.

That's the individual level though, and I don't want that to deflect from the fact that there is this huge problem at the collective level. (I think rationalist discourse has a libertarian-derived tendency to focus on the former and ignore the latter.)

Replies from: PaulK, Valentine
comment by PaulK · 2022-11-21T22:04:27.844Z · LW(p) · GW(p)

I also think that the fact that AI safety thinking is so much driven by these fear + distraction patterns, is what's behind the general flail-y nature of so much AI safety work. There's a lot of, "I have to do something! This is something! Therefore, I will do this!"

Replies from: Valentine
comment by Valentine · 2022-11-22T00:15:57.760Z · LW(p) · GW(p)

I agree… and also, I want to be careful of stereotypes here.

Like, I totally saw a lot of flail nature in what folk were doing when I was immersed in this world years ago.

But I also saw a lot of faux calmness and reasonableness. That's another face of this engine.

And I saw some glimmers of what I consider to be clear lucidity.

And I saw a bunch that I wasn't lucid enough at the time to pay proper attention to, and as such I don't have a clear opinion about now. I just lack data because I wasn't paying attention to the people standing in front of me. :-o

But with that caveat: yes, I agree.

comment by Valentine · 2022-11-22T00:10:26.735Z · LW(p) · GW(p)

I mostly just agree.

I hesitate to ever give a rationalist the advice to keep thinking about something that's causing them to disembody while they work on embodiment. Even if there's a good way for them to do so, my impression is that most who would be inclined to try cannot do that. They'll overthink. It's like suggesting an alcoholic not stop cold-turkey but leaving them to decide how much to wean back.

But I do think there's a balance point that if it could be enacted would actually be healthier for quite a few people.

I'm just not holding most folks' hands here! So the "cold turkey" thing strikes me as better general advice for those going at it on their own with minimal support.

comment by Elizabeth (pktechgirl) · 2023-12-19T19:58:35.776Z · LW(p) · GW(p)

I stand by what I said here [LW(p) · GW(p)]: this post has a good goal but the implementation embodies exactly the issue it's trying to fight. 

comment by Viliam · 2022-12-01T12:06:20.637Z · LW(p) · GW(p)

Valentine wrote an important message in a metaphorical language that will rub some people the wrong way (that includes me), but it seems like the benefit for those who need to hear it may exceed the annoyance of those who don't. Please let's accept it this way, and not nitpick the metaphors.

As a boring person, I would prefer to have a boring summary on the top, or maybe something like this:

If X is freaking you out, it is a fact about you, not about X. Read how this applies to the topic "AI will kill you"...

The longer boring version is the following: The human brain is a barely-functioning evolutionary hack. Emotions are historically older than reason, and sometimes do not cooperate well. Specifically, the emotional part of the brain fails to realize that some problems cannot be solved by an immediate physical action (such as: fighting back, running away, freezing...), and insists on preparing your body for such action, which is both mentally and physically harmful when you do too much of it. Therefore, calm down. Yes, you are probably going to die, but it is not going to happen immediately, and there is no immediate physical action that could prevent it, therefore calm down. If you are still obsessing about the "probably going to die" part, you are still not calm enough. You are properly relaxed when your emotional reaction to your horrible fate is "meh". Ironically, that might be when your brain is most capable of considering the alternatives and choosing the best one.

Replies from: Valentine
comment by Valentine · 2022-12-01T14:18:31.207Z · LW(p) · GW(p)

This is really good. Thank you.

I'd add that there's a very specific structure I'm trying to point at. Something I think is right to call an addiction, and a pathway out of said addiction.

I'm pretty sure that could be said in detail in a "boring" way too. I just really suck at creating "boring" versions of things. :-D

Thank you for this.

Replies from: Viliam
comment by Viliam · 2022-12-01T15:22:34.368Z · LW(p) · GW(p)

In Transactional Analysis there is something called a "racket" (not mentioned on its Wikipedia page) - the concept that people have a habitual emotion... not meaning that they like it or approve of it, just that for many things that happen, they will find an excuse to translate them into that emotion.

As usual, the psychoanalytical explanation is that your parents paid attention to you in childhood when you exhibited that emotion, and ignored you when you exhibited other emotions. Thus, converting every experience to that emotion is how you unconsciously pay for being paid attention to.

comment by Noosphere89 (sharmake-farah) · 2022-11-24T19:11:58.933Z · LW(p) · GW(p)

I'm very conflicted about this post. On the one hand, many of its parts are necessary things for LWers to hear, and I'm getting concerned about the doom loop that seems to be forming a cult-like mentality around AI.

On the other hand, it also has serious issues in its framing, and I'm worried that the post is coming out of a mentality that isn't great as well.

comment by mkualquiera · 2022-11-24T17:40:48.584Z · LW(p) · GW(p)

I am very conflicted about this post. 

On the one hand it deeply resonates with my own observations. Many of my friends from the community seem to be stuck on the addictive loop of proclaiming the end of the world every time a new model comes out. I think it's even more dangerous, as it becomes a social activity: "I am more worried than you about the end of the world, because I am smarter/more agentic than you, and I am better at recognizing the risk that this represents for our tribe." gets implicitly tossed around in a cycle where the members keep trying to one-up each other. This only ends when their claims get so absurd as to say the world will end next month, but even this absurdity seems to keep getting eroded over time. 

Like someone else said here in the comments, if I was reading about this issue in some unrelated doomsday cult from a book, I would immediately dismiss them as a bunch of lunatics. "How many doomsday cults have existed in history? Even if yours is based on at least some solid theoretical foundations, what happened to the previous thousands of doomsday cults that also were, and were wrong?"

On the other hand I have to admit that the arguments in your post are a bit weak. They allow you to prove too much. To any objection, you could say "Well, see, you are only objecting to this because you have been thinking about AI risk for too long, and thus you are not able to reason about the issue properly". Even though I personally think you might be right, I cannot use this to help anyone else in good faith, and most likely they will just see through it. 

So yes. Conflicting.

In any case, I think some introspection in the community would be ideal. Many members will say "I have nothing to do with this, I'm a purely technical person, yada yada" and it might be true for them! But is it true in general? Is thinking about AI risk causing harm to some members of the community, and inducing cult-like behaviors? If so, I don't think this is something we should turn a blind eye to. If anything because we should all recognize that such a situation would in itself be detrimental to AI risk research.

Replies from: Valentine, None
comment by Valentine · 2022-11-25T05:14:43.585Z · LW(p) · GW(p)

[Your arguments] allow you to prove too much. To any objection, you could say "Well, see, you are only objecting to this because you have been thinking about AI risk for too long, and thus you are not able to reason about the issue properly".

Um. That's a thing I suppose someone could do with some variation of these frames, sure. That's not a move I'm at all interested in though. I really would prefer no one does this. It warps the point into something untrue and unkind.

I'm much more interested in something like:

  • There's this specific internal system design a person can fall into.
  • It's a pretty loud feature of the general rationalist cluster.
  • If you (a general reader, not you mkualquiera per se) are subject to this pattern and you want out, here's a way out.
  • Also, people who are in such a pattern but don't want out (or are too stuck in it to see they're in it) are in fact making the real thing harder to solve. So noticing and getting out of this pattern really is a priority if you care about the real thing.

Now, if someone freaks out at me for pointing this out and makes some bizarre assumptions about what I'm saying (like, say, that I'm claiming there's no AI problem or that I'm saying any action to deal with it is delusional), at that point I consider it way more likely that they're "drunk", and I'm much more likely to ignore what they have to say. Their ravings and condemnation land for me like a raging alcoholic who's super pissed I implied they have a problem with an addiction.

But none of this is about me winning arguments with people. It's about pointing out a mechanism for those who want to see it.

And for those for whom it doesn't apply, or to whom it does but they're determined not to look? Well, cool, good on them! Truly.

(Also, I like the kind of conflict you're wrestling with. I don't want to try to argue you out of that. I just wanted to clarify this part a bit.)

comment by [deleted] · 2022-12-18T06:57:29.972Z · LW(p) · GW(p)

I have been following this group for ten years.  It is just another doomsday cult.

comment by Ben Pace (Benito) · 2022-11-24T06:39:01.887Z · LW(p) · GW(p)

Great post, thanks for writing it. 

I try to reward posts I like with thoughtful commentary/disagreement, but there's a sense in which this post doesn't want to continue an existing spiraling thought pattern, it wants me to go out and do whatever I want to after putting that down.

Replies from: Benito
comment by Ben Pace (Benito) · 2022-11-24T07:04:13.297Z · LW(p) · GW(p)

After reading the other comments, I'll at least add in the datapoint that I have experienced a ton of "ruminating-about-AI-risk-strategy-as-escapism" in my life, and being able to not do that has been a pretty key step in actually making progress on the problem. 

When I remember back to those times when I was trapped in it (not saying I don't still indulge from time to time), I think I would have found this post quite scary to engage with, because a lot of my social security was wrapped up in being the sort of person who would do that. I would be socially scared to put it down.

My solution was very rarely to introspect on it and fight the fight directly, as I feel like is a likely takeaway from this post; that's something I could only do when the force was weak and rival forces were strong. I think a basic element involved me becoming more socially stable in other ways. I think another basic element was noticing that my overall life strategy wasn't working and was instead hurting me. I took some more hardline strategies to deal with that (more like Odysseus tying himself to the mast than Odysseus coming to internal peace with his struggle), and then I practiced other modes of being.

At this point I've realized I do have something to opine about, and I'm 4 paragraphs in so I'll let myself: I think a point missing in the OP and the comments is that sometimes the addiction is useful. I find it hard to concisely make this point, but I think many people are addicted to things that they're good at, be it competitions or artistic creations or mathematics. I'm not saying there's an easy tradeoff, and I'm certainly not saying that all addicts will probably end up being good at the thing they're addicted to (e.g. gambling addicts). But neither can I say they never are.

And I'll admit you need a certain level of self-awareness to make the honest assessment for your particular case. Yes requires the possibility of no. [LW · GW] If you could not put down your addiction, then you could not really say you are choosing it for the greater good, because you did not make the choice at all.

Replies from: Valentine
comment by Valentine · 2022-11-24T15:54:23.271Z · LW(p) · GW(p)

I really like your contribution here. It's a great addition. Thank you.

I think many people are addicted to things that they're good at, be it competitions or artistic creations or mathematics. I'm not saying there's an easy tradeoff, and I'm certainly not saying that all addicts will probably end up being good at the thing they're addicted to (e.g. gambling addicts). But neither can I say they never are.

I think I see what you're pointing at. Something like… addictions can bring someone to cultivate something that (a) was very worth cultivating and (b) might have never been cultivated save for the addiction. Yes?

I agree.

I also think it's worth tracking why (b) happens. If you can tell something is worth cultivating, why isn't that enough?

I'm guessing that part of the issue is the cultural milieu we're in (globally, not just LW). The incentives are loosely toward productivity and action. Taking the time to pay off psycho-emotional technical debt often comes with a lot of shame or inadequacy or fear.

So in that environment, it makes sense to get the goods directly, even if it incurs more technical debt.

One problem I'm tracking is… well, the metaphors get messy, but I'll dive ahead anyway: Too much technical debt creates a kind of memetic environment that breeds things with survival instincts, and those things like protecting their tech-debt environment.

So on net, globally, I think it's actually worthwhile to let some potential Olympic athletes fail to realize their potential if it means we collectively have more psychic breathing room.

And AFAICT, getting more shared breathing room is the main hope we have for addressing the real thing.

(…acknowledging that Eliezer (and surely others too) explicitly disagrees with me on this point.)

Replies from: Benito, Benito
comment by Ben Pace (Benito) · 2022-11-24T21:45:50.392Z · LW(p) · GW(p)

So on net, globally, I think it's actually worthwhile to let some potential Olympic athletes fail to realize their potential if it means we collectively have more psychic breathing room.

And AFAICT, getting more shared breathing room is the main hope we have for addressing the real thing.

I think this is your most general and surprising claim, and I'll hereby encourage you to write a post presenting arguments for it (ideally in a different style to the mildly psychoactive post above, but not necessarily). I'm not sure to what extent I agree with your claim (I currently veer from 20% to 80% as I think about it) and I have some hope that if you wrote out some of the reasons that led to you believing it, it would help me make up my own mind a bit better.

Replies from: Valentine, Valentine
comment by Valentine · 2022-12-31T20:11:32.758Z · LW(p) · GW(p)

Here you go [LW · GW].

Replies from: Benito
comment by Ben Pace (Benito) · 2023-01-01T01:04:22.282Z · LW(p) · GW(p)

Very cool. I look forward to reading it.

comment by Valentine · 2022-11-25T04:52:32.269Z · LW(p) · GW(p)

Invitation noted. I'm open to it. I make no promises. But I like the curiosity and I'd love for what I'm seeing to land for more people and have more eyes on it.

comment by Ben Pace (Benito) · 2022-11-24T21:46:33.575Z · LW(p) · GW(p)

Something like… addictions can bring someone to cultivate something that (a) was very worth cultivating and (b) might have never been cultivated save for the addiction. Yes?

Yes.

comment by Garrett Baker (D0TheMath) · 2022-11-21T20:41:33.053Z · LW(p) · GW(p)

A summary of the above in a form that's easier to evaluate would be helpful. Richard's comment does this in part, but there may be more in the post not covered by the comment.

I would usually assume a post written like this has little value to be mined, but others in comments and in upvote/downvote counts seem to disagree.

Replies from: Valentine
comment by Valentine · 2022-11-22T00:06:46.991Z · LW(p) · GW(p)

I didn't mean it to be something easy to parse and evaluate.

I didn't intend it to be opaque either!

But the point wasn't to make some claims that people could evaluate and think about what they agree or disagree with.

The point was to resonate with something core in people who are… well, enough like I was, that they could look inside themselves and notice the path out of their misery.

Breaking it down into logical statements for the cognitive mind to examine would make that much, much harder.

(…which is part of how the thing I talked about in the OP works!)

So if you don't want to dig into it, and you can't relate to it, don't worry about it. That just means it probably wasn't written for you.

Replies from: ShowMeTheProbability
comment by ShowMeTheProbability · 2022-11-22T09:39:23.598Z · LW(p) · GW(p)

From my perspective, you nailed the emotional vibe dead on. It's what I would've needed to hear (if I had the mental resources to process the warning properly before having a breakdown).

Replies from: Valentine
comment by Valentine · 2022-11-22T20:39:13.991Z · LW(p) · GW(p)

Good to know. Thanks for saying.

comment by Alex Flint (alexflint) · 2023-02-27T23:15:30.254Z · LW(p) · GW(p)

There seems to be some real wisdom in this post but given the length and title of the post, you haven't offered much of an exit -- you've just offered a single link to a YouTube channel for a trauma healer. If what you say here is true, then this is a bit like offering an alcoholic friend the sum total of one text message containing a single link to the homepage of Alcoholics Anonymous -- better than nothing, but not worthy of the bombastic title of this post.

Replies from: Valentine
comment by Valentine · 2023-02-28T19:05:53.552Z · LW(p) · GW(p)

If someone feels resonance with what I'm pointing out but needs more, they're welcome to comment and/or PM me to ask for more.

comment by Sphinxfire (sphinxfire) · 2022-11-22T09:58:06.780Z · LW(p) · GW(p)

The truly interesting thing here is that I would agree unequivocally with you if you were talking about any other kind of 'cult of the apocalypse'.

These cults don't have to be based on religious belief in the old-fashioned sense, in fact, most cults of this kind that really took off in the 20th and 21st century are secular.

Since around the late 1800s, there has been a certain type of student that externalizes their (mostly his) unbearable pain and dread, their lack of perspective and meaning in life, into 'the system', and throws themselves into the noble cause of fighting capitalism.

Perhaps one or two decades ago, there was a certain kind of teenager that got absorbed in online discussions about science vs religion, 9/11, big pharma, the war economy - in this case I can speak from my own experience and say that for me this definitely was a means of externalizing my pain.

Today, at least in my country, for a lot of teenagers, climate change has saturated this memetic-ecological niche.

In each of these cases, I see the dynamic as purely pathological. But. And I know what you're thinking. But still, but. In the case of technological progress and its consequences for humanity, the problem isn't abstract, in the way these other problems are.

The personal consequences are there. They're staring you in the face with every job in translation, customer service, design, transportation, logistics, that gets automated in such a way that there is no value you can possibly add to it. They're on the horizon, with all the painfully personal problems that are coming our way in 10-20 years.

I'm not talking about the apocalypse here, I don't mind whatshisface's Basilisk or utility maximizers turning us all into paperclips - these are cute intellectual problems and there might be something to them, but ultimately if the world ends that's no one's problem.

2-3 years ago I was on track to becoming a pretty good illustrator, and that would have been a career I would have loved to pursue. When I saw the progress AI was making in that area - and I was honest with myself about this quite a bit earlier than other people, who are still going through the bargaining stages now - I was disoriented and terrified in a way quite different from the 'game' of worrying about some abstract, far-away threat. And I couldn't get out of that mode until I was able to come up with a strategy, at least for myself.

If this problem gets to the point where there just isn't a strategy I can take to avoid having to acknowledge my own irrelevance - because we've invented machines that are, somehow, better at all the things we find value in and value ourselves for than the vast majority of us can possibly hope to be - I think I'll be able to make my peace with that, but it's because I understand the problem well enough to know what a terminal diagnosis will look like.

Unlike war, poverty and other injustices, humans replacing themselves is a true civilization-level existential problem, not in the sense that it threatens our subsistence, but that it threatens the very way we conceive of ourselves.

Once you acknowledge that, then yes.

I agree with your core point.

It's time to walk away. There's nothing you can do about technological progress, and the world will not become a better place for your obsessing over it.

But you still need to know that your career as a translator or programmer or illustrator won't be around long enough for it to amount to a life plan. You need to understand how the reality of the problem will affect you, so that you can go on living while doing what you need to do to stay away from it.

Like not building a house somewhere that you expect will be flooded in 30 years.

Replies from: bugsbycarlin, donald-hobson, Valentine
comment by bugsbycarlin · 2022-11-23T15:20:14.345Z · LW(p) · GW(p)

The truly interesting thing here is that I would agree unequivocally with you if you were talking about any other kind of 'cult of the apocalypse'.

 

This has Arrested Development energy ^_^ https://pbs.twimg.com/media/FUHfiS7X0AAe-XD.jpg 

 

The personal consequences are there. They're staring you in the face with every job in translation, customer service, design, transportation, logistics, that gets automated in such a way that there is no value you can possibly add to it

...

2-3 years ago I was on track to becoming a pretty good illustrator, and that would have been a career I would have loved to pursue. When I saw the progress AI was making in that area - and I was honest with myself about this quite a bit earlier than other people, who are still going through the bargaining stages now - I was disoriented and terrified in a way quite different from the 'game' of worrying about some abstract, far-away threat

This is the thing to worry about.  There are real negative consequences to machine learning today, sitting inside the real negative consequences of software's dominance, and we can't stop the flat fact that a life of work is going away for most people. The death cult vibe is the wild leap. It does not follow that AI is going to magically gain the power to gain the power to gain the power to kill humanity faster than we can stop disasters.


 

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-12-01T00:51:25.981Z · LW(p) · GW(p)

There are specific technical arguments about why AI might rapidly kill everyone. You can't figure out if those arguments are true or false by analysing the "death cult vibes". 

Now you can take the position that death cult vibes are unhealthy and not particularly helpful. Personally I haven't actually seen a lot of death cult vibes. I have seen more "fun mental toy from philosophy land" vibes. Where total doom is discussed as if it were a pure maths problem. But if there are death cult vibes somewhere I haven't seen, those probably don't help much.

comment by Donald Hobson (donald-hobson) · 2022-12-01T00:54:46.379Z · LW(p) · GW(p)

but ultimately if the world ends that's no one's problem.

 

This is an interesting claim. If I had a planet destroying weapon that would leave the ISS astronauts alive, would you say "don't worry about it much, it's only 3 astronauts' problem"?

comment by Valentine · 2022-11-22T20:45:12.687Z · LW(p) · GW(p)

I agree.

I wasn't trying to speak to this part. But now that you have, I'm glad you did. I don't mean to dismiss the very real impacts that this tech is having on people's lives.

That's just a different thing than what I was talking about. Not totally unrelated, but a fair bit off to the side.

comment by ShowMeTheProbability · 2022-11-22T09:36:24.279Z · LW(p) · GW(p)

Thank you for writing this, Valentine. It is an important message and I am really glad someone is saying it.
I first got engaged with the community when I was in vulnerable life circumstances, and suffered major clinical distress fixated around many of the ideas encountered here.

To be clear I am not saying rationalist culture was the cause of my distress, it was not. I am sharing my subjective experience that when you are silently screaming in internal agony, some of the ideas in this community can serve as a catalyst for a psychotic breakdown.

comment by Noah Scales · 2022-11-23T21:01:22.988Z · LW(p) · GW(p)

I'm a little surprised that doomerism could take off like this, dominate one's thoughts, and yet fail to create resentment and anger toward its apparent source. Is that something that was absent for you or was it not relevant to discuss here?

I wonder:

  • in the prediction of doom, as the threat seems to be growing closer, does that create resentment or anger at apparent sources of that doom? If I dwelled on AI existential risk, I would feel more resentment of sources of that risk.
  • do the responses to that doom, or desperation of measures, become wilder as one thinks about it more? Just a passing thought about AI doom immediately brings to mind, "Let's stop these companies and agencies from making such dangerous technology!" In other words, let's give up on AI safety and try the other approach.
  • is there still appeal to a future of AGI? I can see some of the excitement or tension around the topic coming from the ambiguity of the path toward AGI and its consequences. I've seen the hype about AGI to be that it saves humanity from itself, advances science radically, turbo-charges economic growth, etc. Is that vision, alternated with a vision of horrible suffering doom, a cause of cognitive dissonance? I would think so.

Factors that might be protecting me from this include:

  • I take a wait and see approach about AGI, and favor use of older, simpler technologies like expert systems or even simpler cognitive aids relying on simple knowledgebases. In the area of robots, I favor simpler, task-specific robots (such as manufacturing robot arms) without, for example, self-learning abilities or radically smart language recognition or production. It's helpful to me to have something specific to advocate for, and think about, as an alternative, rather than thinking that it's AGI or nothing. 
  • I assume that AGI development is, overall, a negative outcome, simply more risk to people (including the AGI themselves, sure to be exploited if they are created). I don't accept that AGI development offers necessary opportunities for human technological advancement. In that way, I am resigned to AGI development as a mistake others make. My hopes are not in any way invested in AGI. That saves me some cognitive dissonance.

Thank you for sharing this piece, I found it thought-provoking.

Replies from: Valentine
comment by Valentine · 2022-11-23T21:17:19.115Z · LW(p) · GW(p)

I'm not sure I understand your question. Do you mean, why wouldn't someone who's running the engine I describe end up resenting things like OpenAI that seem to be accelerating AI risk?

For one, I think they often do.

But also, it's worth noticing that the purpose of the obsession is to distract from the inner pain. Kind of like alcoholics probably aren't always upset at liquor stores for existing.

And in a weird twist, alcoholics can even come to seek out relationships and situations that upset them in familiar ways. Why? Because they know how to control that upset with alcohol, which means they can use the external upset as a trigger to numb out instead of waiting for glimmers of the internal pain to show up inside them.

Not all addiction designs do this. But it's a common enough pattern output to be worth acknowledging.

I'm not sure if that's what you were asking about though.

Replies from: Noah Scales
comment by Noah Scales · 2022-11-24T06:13:08.063Z · LW(p) · GW(p)

You wrote:

I'm not sure I understand your question. Do you mean, why wouldn't someone who's running the engine I describe end up resenting things like OpenAI that seem to be accelerating AI risk?

For one, I think they often do.

Oh. Good! I'm a bit relieved to read that. Yes, that was the fundamental question that I had. I think that shows common sense.

I'm curious what you think a sober response to AGI research is for someone whose daily job is working on AI Safety, if you want to discuss that in more detail. Otherwise, thank you for your answer.

Replies from: Valentine
comment by Valentine · 2022-11-24T15:36:44.785Z · LW(p) · GW(p)

I'm curious what you think a sober response to AGI research is for someone whose daily job is working on AI Safety, if you want to discuss that in more detail. Otherwise, thank you for your answer.

Quite welcome.

I'm not really up for surmising about this right now. It's too tactical. I think the clarity about what to do arises as the VR goggles come off and the body-level withdrawal wears off. If I knew what it made sense for people to do after that point, we wouldn't need their agentic nodes in the distributed computation network. We'd just be using them for more processing power. If that makes sense.

I bet I could come up with some general guesses. But that feels more like a musing conversation to have in a different context.

comment by Lao Mein (derpherpize) · 2022-11-23T07:54:30.304Z · LW(p) · GW(p)

How bad are things, really? I'm not part of EA/Rat/AI risk IRL, so I don't have first-hand experience. Are people actually having mental breakdowns over the control problem? Some of the comments here seem to imply that people are actually experiencing depersonalization and anxiety so bad it's affecting their work performance specifically because of AI concerns. And not just them, but multiple people they work with. Is the culture at AI alignment orgs really that bad?

Replies from: Valentine
comment by Valentine · 2022-11-23T17:16:56.815Z · LW(p) · GW(p)

I'm not up to date on this. I've been out of the community since 2018. But back when I was co-running CFAR there absolutely was an issue with people going psychotic or having panic attacks or crushing depression in the face of this stuff. Absolutely.

(…with caveats around how discerning causation vs. correlation is very tricky, there are selection effects, etc. But it was a clear enough impression that CFAR staff had explicit strategies in case someone started showing signs of having a manic episode. As just one small sign of many.)

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2022-11-24T16:11:09.593Z · LW(p) · GW(p)

How did we get here? How did Rationality go from being about winning at life to a Cthulhu RPG? I'm pretty interested in AI, and the fact that stuff like this is real (it still doesn't feel real, it feels like LARP) has really turned me off of alignment research. On a personal level, the literally-having-coworkers-get-driven-insane of alignment research seems a lot worse than merely-slightly-increasing-existential-risk of capabilities research.

Replies from: lc, Valentine, janus, ztzuliios
comment by lc · 2022-11-24T16:15:15.375Z · LW(p) · GW(p)

I'd rather go insane than "slightly increase" existential risk.

Replies from: Valentine, Aiyen
comment by Valentine · 2022-11-25T05:31:12.300Z · LW(p) · GW(p)

I think this is maybe implied by Aiyen's comment [LW(p) · GW(p)], but to highlight an element here:

This way of thinking doesn't have you trade sanity for slight xrisk decrease.

It has you trading sanity for perceived slight xrisk decreases.

If you start making those trades, over time your ability to perceive what's real decays.

If there's anything that has anything whatsoever like agency and also benefits from you feeding your sanity into a system like this, it'll exploit you right into a Hell of your own making.

Any time you're caught in a situation that asks for you to make a trade like this, it's worth asking if you've somehow slipped into the thrall of an influence like this.

It's way more common than you might think at first.

comment by Aiyen · 2022-11-25T00:47:30.314Z · LW(p) · GW(p)

On the one hand, that's literally true.  On the other, I feel like the connotations are dangerous.  Existential risk is one of the worst possible things, and nearly anything is better than slightly increasing it.  However, we should be careful that that mindset doesn't lead us into Pascal's Muggings and/or burnout.  We certainly aren't likely to be able to fight existential risk if it drives us insane!  

I strongly suspect that it's not self-sacrificing researchers who will solve alignment and bring us safely through the current crisis, but ones who are able to address the situation calmly and without freaking out, even though freaking out seems potentially justified. 

comment by Valentine · 2022-11-25T05:26:12.481Z · LW(p) · GW(p)

It's woven deep in LW's memetic history.

Eliezer created LW in order to sort of spell out his understanding of the Art.

Part of the point of that was to act as a kind of combo summon spell and filter for potential AI risk researchers for MIRI.

The reason was in the wake of Eliezer noticing he had been ludicrously wrong in his thinking about AI, and noticing how hard it is to think about AI clearly, and how AI researchers in particular couldn't think about the topic clearly.

So the "This is literally unfathomably important, we have to actually solve this thinking clearly thing or literally everything anyone could ever value is going to get permanently destroyed!" tone was woven in from the very beginning.

This is just one example.

So, yeah. It's always been like this. Just varying degrees of cloaked vs. transparent, with corresponding peripheral folk sometimes floating in who aren't as much about AI risk.

But the thrust as a whole has always come from this Lovecraftian core.

comment by janus · 2022-11-24T18:11:12.247Z · LW(p) · GW(p)

turns out life is a Cthulhu RPG, so we gotta win at that

comment by ztzuliios · 2022-11-30T17:05:34.785Z · LW(p) · GW(p)

It was always a Cthulhu LARP.  Remember that one thing?

Groups polarize over time. One of the ways to signal group membership is to react explosively to the things you're supposed to react explosively to. This is why as politics in the US have polarized, everyone has grown more breathless and everything is ${OUTGROUP}ist.  You gain ${KARMA} by being more breathless and emotional.  You can only stop this with explicit effort, but if you do that, you look dissociated, disembodied, autistic, and the social pressure against that is stronger than the social pressure against letting (totally normal, status-quo) group polarization get to the point of literally mindkilling people. 

The military also has techniques to avoid this, but there are similar social pressures against them from both sides of the equation, because the military is low-status both to the postrationalist types (like OP) and to actual rationalists (like the people setting social norms at CFAR). So you don't see those being used. 

It's completely possible to have the alignment research you want, you'll just have to set the group norms, which is a completely orthogonal problem that involves a lot of things besides alignment research. Personally I think this would be a very good thing to do and I encourage you to try for the experience of it if nothing else.

Replies from: Slider
comment by Slider · 2022-11-30T19:18:03.359Z · LW(p) · GW(p)

but if you do that, you look dissociated, disembodied, autistic,

This is a real stigma consideration. Reminder that water is water. Autism as something inherently negative is ableism and improper.

Replies from: ztzuliios
comment by ztzuliios · 2022-11-30T19:31:14.125Z · LW(p) · GW(p)

I completely agree and I think that levying the charges "disembodied" against anything on the opposite side of the mental dichotomy of "woo" is a weasel-word for the ableist slur of "autistic." I'm sorry this wasn't more clear, but I thought that sentence was fairly dripping as is.  I've written about this before as it applies to this topic [LW(p) · GW(p)], which is not to excuse the harm I've done if I've triggered you, but to show that I've precommitted to this stance on this issue.

Replies from: Slider
comment by Slider · 2022-11-30T20:30:08.098Z · LW(p) · GW(p)

I feel slightly nervous I am overtly spending internet ink on this but here I go.

I was unsure how obviously wrong it was meant to appear. To me saying that "someone seems cold" is not problematic. "Dissociated" and "disembodied" read to me to be part of natural feeling (expression); somebody could mean technical things with them and not have an attitude loaded into them. Those parts did not constitute drippingness for me. For "autistic" there was no technical meaning that could make sense.

I was not triggered, but it did cross my mind that not everyone thinks that is an unbased take, as it is to me (and I was kind of categorising this knowledge as common only in the small subcommunity). Having those fly without anybody batting an eyelid would be normalising the hateful conduct. I was unsure whether the eyelid was batted already, so I batted a separate eyelid. And tried to include indicators that it is a mild reaction (a kind of messaging I have reason to believe I frequently screw up).

I do reflect that if I was in a context where this could be assumed common knowledge I would probably not be making this move (batting the eyelid). So I am wondering whether it connects to the object-level phenomenon (of groups polarising) where people harden their signals so that outsiders with more noise and less decoding ability do not get unintended messages.

Replies from: ztzuliios
comment by ztzuliios · 2022-12-02T15:59:30.671Z · LW(p) · GW(p)

Lots of ink, but lots to think about. I'm thankful for this post fwiw.

The "no technical meaning" could maybe be an indicator of sarcasm. But you're right that there was no way for you to know I wasn't just misapplying the term in the same way as the OP.

I don't think this relates to group polarization per se but I take your point.

I didn't mean "triggered" to mean extremely so, someone can be mildly triggered and again, I apologize for (in my perception, based on your comment) doing that. I think you did the right thing.

Replies from: Slider
comment by Slider · 2022-12-02T19:10:11.447Z · LW(p) · GW(p)

With no reasonable way of knowing without context, I am using "technical" here in a very idiosyncratic way. If two speech acts have very different connotations, and after you strip them of those connotations they are the same, then the technical meaning is the same.

If someone is being hateful I often proceed to "fix the message from them" mentally on my receiving end. So while I starkly reject parts of it, rejecting everything of it also punishes the non-hateful parts. Thus I have the cognitive task of "what they should have said". If there is no innocent message left after removing the hate, it is pure hate. This is a kind of "could a reasonable opiner opine this?" standard. It is easy to read "disembodied" in an ableist way, but it might just be a clumsy way to refer to low charisma (i.e. it is "repairable"). So only after phrasing incompetence is exhausted does an assumption of malice start.

To have the statistical mean human deduce "That guy gets passionate in an unnatural way -> that guy is autistic" has low plausibility. Backtracing where this logic would be natural: worrying about upholding a mask over a behaviour that has lots of details and high fluency from the mimic target, making it highly likely to be a statistical outlier that a masking strategy does not cover well (this is not meant to be a mask review). Confusion, "stiffness" or "odd feeling" would represent what happens in situations like these. A zero-to-100% autistic label is unrealistic. The average hater is not that informed.

comment by Shmi (shminux) · 2022-11-23T01:25:11.960Z · LW(p) · GW(p)

I agree with everything, though this is just a very long way to say the https://en.wikipedia.org/wiki/Serenity_Prayer

Replies from: Valentine
comment by Valentine · 2022-11-24T15:55:15.350Z · LW(p) · GW(p)

I think I'm saying something quite a bit more precise than the Serenity Prayer. But I agree, it's related.

comment by Ape in the coat · 2022-11-22T16:55:10.339Z · LW(p) · GW(p)

I think you are pointing at an important referent. There are probably a lot of people who will benefit from reading this post and thus I'm glad that you wrote it. That said, you appear to have written it in a deliberately confusing manner. You probably have your reasons. Maybe you believe that this way would be better for the people you are trying to help. I'm not an expert in Lacanianism, but I think this is wrong, both ethically and epistemologically. There are also a lot of people who will misunderstand this post and for whom reading it will cause harm. To the best of my knowledge, the gains still outweigh the losses, but you are leaving utility on the table by not trying to meet LW epistemic standards of clarity.

The meta-framework of memetics can be useful, but one has to be careful with it, as with any other meta-framework, in order not to lose the ability to talk about objective matters. The lesson of keeping a safe distance from your memes in order to stay sane is important. Likewise, the idea that you do not have to calibrate your feelings to the global grimness of the world. But you enshroud it in the smugness of "seeing through the game" while uncoupling memes from their truth predicate, which I find potentially harmful and quite tasteless. I hope it really serves some therapeutic purpose for the target audience.

comment by Ansel · 2022-11-21T19:43:20.041Z · LW(p) · GW(p)

Strongly upvoted, I think that the point about emotionally charged memeplexes distorting your view of the world is very valuable.

comment by Double · 2023-12-19T06:13:13.186Z · LW(p) · GW(p)

Ideally reviews would be done by people who read the posts last year, so they could reflect on how their thinking and actions changed. Unfortunately, I only discovered this post today, so I lack that perspective.

Posts relating to the psychology and mental well being of LessWrongers are welcome and I feel like I take a nugget of wisdom from each one (but always fail to import the entirety of the wisdom the author is trying to convey.) 

 
The nugget from "Here's the exit" that I wish I had read a year ago is "If your body's emergency mobilization systems are running in response to an issue, but your survival doesn't actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately." I panicked when I first read Death with Dignity [LW · GW] (I didn't realize it was an April Fools Joke... or was it?). I felt full fight-or-flight when there wasn't any reason to do so. That ties into another piece of advice that I needed to hear, from Replacing Guilt: "stop asking whether this is the right action to take and instead ask what’s the best action I can identify at the moment." I don't know if these sentences have the same punch when removed from their context, but I feel like they would have helped me. This wisdom extends beyond AI Safety anxiety and generalizes to all irrational anxiety. I expect that having these sentences available to me will help me calm myself next time something raises my stress level.

I can't speak to the rest of the wisdom in this post. "Thinking about a problem as a defense mechanism is worse (for your health and for solving the problem) than thinking about a problem not as a defense mechanism" sounds plausible, but I can't say much for its veracity or its applicability.

I would be interested to see research done to test the claim. Does increased sympathetic nervous system activation cause decreased efficacy? A correlational study could classify people in AI safety by (self reported?) efficacy and measure their stress levels, but causation is always trickier than correlation. 

A flood of comments criticized the post, especially for typical-minding. The author responded with many comments of their own, some of which received many upvotes and agreements and some of which received many dislikes and disagreements. A follow up post from Valentine would ideally address the criticism and consolidate the valid information from the comments into the post.

A sequence or book compiled from the wisdom of many LessWrongers discussing their mental health struggles and discoveries would be extremely valuable to the community (and to me, personally) and a modified version of this post would earn a spot in such a book.

Replies from: Valentine, Valentine
comment by Valentine · 2023-12-19T14:28:13.376Z · LW(p) · GW(p)

I like the tone of this review. That might be because it scans as positive about something I wrote! :D But I think it's at least in part because it feels clear, even where it's gesturing at points of improvement or further work. I imagine I'd enjoy more reviews written in this style.

 

I would be interested to see research done to test the claim. Does increased sympathetic nervous system activation cause decreased efficacy [at AI research]?

If folk can find ways of isolating testable claims from this post and testing them, I'm totally for that project.

The claim you name isn't quite the right one though. I'm not saying that people being stressed will make them bad at AI research inherently. I'm saying that people being in delusion will make what they do at best irrelevant for solving the actual problem, on net. And that for structural reasons, one of the signs of delusion is having significant recurring sympathetic nervous system (SNS) activation in response to something that has nothing to do with immediate physical action.

The SNS part is easy to measure. Galvanic skin response, heart rate, blood pressure, pupil dilation… basically hooking them up to a lie detector. But you can just buy a GSR meter and mess with it.

I'm not at all sure how to address the questions of (a) identifying when something is unrelated to immediate physical action, especially given the daughter's arm phenomenon [LW · GW]; or (b) whether someone's actions on net have a positive effect on solving the AI problem.

E.g., it now looks plausible that Eliezer's net effect was to accelerate AI timelines while scaring people. I'm not saying that is his net effect! But I'm noting that AFAIK we don't know it isn't.

I think it would be extremely valuable to have some way of measuring the overall direction of some AI effort, even in retrospect. Independent of this post!

But I've got nuthin'. Which is what I think everyone else has too.

I'd love for someone to prove me wrong here.

 

A sequence or book compiled from the wisdom of many LessWrongers discussing their mental health struggles and discoveries would be extremely valuable to the community (and to me, personally)…

This is a beautiful idea. At least to me.

Replies from: Double
comment by Double · 2023-12-20T05:40:49.504Z · LW(p) · GW(p)

I'm glad you enjoyed my review! Real credit for the style goes to whoever wrote the blurb that pops up when reviewing posts; I structured my review off of that.

When it comes to "some way of measuring the overall direction of some [AI] effort," conditional prediction markets could help. "Given I do X/Y, will Z happen?" Perhaps some people need to run a "Given I take a vacation, will AI kill everyone?" market in order to let themselves take a break.

What would be the next step to creating a LessWrong Mental Health book?

comment by Valentine · 2023-12-19T14:27:54.770Z · LW(p) · GW(p)
comment by Yoav Ravid · 2022-11-24T18:40:14.046Z · LW(p) · GW(p)

If your body's emergency mobilization systems are running in response to an issue, but your survival doesn't actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately.

You are locked in a room. You are going to die of thirst in a few days. The door has a combination lock. You know the password is 5 digits (0 to 9). If it takes you one second to try each combination, it's going to take you 27.8 hours to try all the combinations (so half that on average to find the right one). Your survival doesn't depend on your actions on the timescale of minutes, and yet having your body's emergency mobilization systems running wouldn't mean you're not perceiving reality accurately.
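
(A quick check of the arithmetic behind that figure, assuming one attempt per second over all possible 5-digit codes:)

$$10^5 \text{ codes} \times 1\,\text{s} = 100{,}000\,\text{s} \approx 27.8\,\text{hours}, \qquad \text{expected time} \approx \tfrac{1}{2} \times 27.8\,\text{hours} \approx 13.9\,\text{hours}$$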

You are part of a police department. You get a credible threat that someone planted a big bomb in the city and will detonate it in 48 hours unless his demands are met. Your goal is to find where the bomb is and disable it. Your survival doesn't depend on your actions on the timescale of minutes, and yet having your body's emergency mobilization systems running wouldn't mean you're not perceiving reality accurately.

Unless I misunderstand what you mean by "your body's emergency mobilization systems", this seems clearly true.

Replies from: swarriner
comment by swarriner · 2022-11-24T19:11:18.907Z · LW(p) · GW(p)

Unless I'm very much mistaken, emergency mobilization systems refers to autonomic responses like a pounding heartbeat, heightened subjective senses, and other types of physical arousal; i.e. the things your body does when you believe someone or something is coming to kill you with spear or claw. Literal fight or flight stuff.

In both examples you give there is true danger, but your felt bodily sense doesn't meaningfully correspond to it; you can't escape or find the bomb by being ready for an immediate physical threat. This is the error being referred to. In both cases the preferred state of mind is resolute problem-solving, and an inability to register a felt sense of panic will likely reduce your ability to get to such a state.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2022-11-24T19:25:27.875Z · LW(p) · GW(p)

This. I think a lot of the problems re emergency mobilization systems relate to that feeling of immediacy, when the threat isn't actually immediate.

I think a lot of emergencies are way too long-term for us, so we apply emergency mobilization systems even when immediate threats aren't there.

comment by alkexr · 2022-11-22T15:06:41.697Z · LW(p) · GW(p)

I immediately recognize the pattern that's playing out in this post and in the comments. I've seen it so many times, in so many forms.

Some people know the "game" and the "not-game", because they learned the lesson the hard way. They nod along, because to them it's obvious.

Some people only know the "game". They think the argument is about "game" vs "game-but-with-some-quirks", and object because those quirks don't seem important.

Some people only know the "not-game". They think the argument is about "not-game" vs "not-game-but-with-some-quirks", and object because those quirks don't seem important.

And these latter two groups find each other, and the "gamers" assume that everyone is a "gamer", the "non-gamers" assume that everyone is a "non-gamer", and they mostly agree in their objections to the original argument, even though in reality they are completely talking past each other. Worse, they don't even know what the original argument is about.

Other. People. Are. Different.

Modeling them as mostly-you-but-with-a-few-quirks is going to lead you to wrong conclusions.

comment by Coafos (CoafOS) · 2022-11-21T21:25:25.529Z · LW(p) · GW(p)

Thank you for writing this post, I think this is a useful framing of this problem. For me personally, the doom game is fun; imho I have more motivation to do things and I become more self-confident. (If it ends, what worse could happen?) But that's for me, with my socially isolated Math/ComSci/CosHo background.

For others, I don't think it's a good game. I kinda noticed the tons of psychotic breakdowns around the field and, like, that's bad, but I could not have articulated why it was bad.

And even for me, I might kinda overshoot with the whole information hazard share-or-not thinking. It's better if you're in charge of the game and not let the doom game play you.

Replies from: Valentine
comment by Valentine · 2022-11-22T00:17:25.945Z · LW(p) · GW(p)

Awesome. I have deep respect for this kind of conscious game-playing. Rock on!

comment by Nicholas / Heather Kross (NicholasKross) · 2022-12-31T23:58:46.484Z · LW(p) · GW(p)

Strong agree with TekhneMakre's comment.

Purely on Valentine's own professed standards: Of all the ways to "snap someone out of it", why pick one that seems the most like brainwashing? If the FBI needs to un-brainwash a dangerous cult member, do they gaslight them? Do they do a paternalistic "if you feel angry, that means I'm right" maneuver? Do they say "well I'm not too concerned if you think I'm right" to the patient?

(Also... FWIW, the most doomsday-cultish emotionally-fraught posts I've seen in the rationality community are, by percentage of posts, mostly from people who agree with Ziz a lot [LW · GW], people who are generally against AI safety / longtermist priorities [EA · GW], and you.

The most doomsdayish I've seen LW get was "Death with Dignity" (which many community members pushed back on the tone of!), and my own FTX post [EA · GW].)

Replies from: Valentine
comment by Valentine · 2023-01-01T00:04:13.870Z · LW(p) · GW(p)

I'm not available for critiques of how I've said what I've said here.

You're welcome to translate it into your preferred frame. I might even like that, and might learn from it.

But I'm not going to engage with challenges to how I speak.

comment by Peter Hroššo (peter-hrosso) · 2022-11-28T10:41:49.074Z · LW(p) · GW(p)

I very much agree that actions motivated by fear tend to have bad outcomes. Fear has a subtle influence (especially if unconscious) on what types of thoughts we have and, as a consequence, on what kinds of solutions we eventually arrive at.

And I second the observation that many people working on AI risk seem to me motivated by fear. I also see many AI risk researchers, who are grounded, playful, and work on AI safety not because they think they have to, but because they simply believe it's the best thing they can do. I wish there were more of the latter, because I think it feels better to them, but also because I believe they have a higher chance of coming up with good solutions.

Unfortunately, I also feel the form of the article is quite polarizing and am not sure how accessible it is to the target audience. But still I'm glad you wrote it, Val, thank you.

Replies from: Valentine
comment by Valentine · 2022-11-28T16:15:20.321Z · LW(p) · GW(p)

I also see many AI risk researchers, who are grounded, playful, and work on AI safety not because they think they have to, but because they simply believe it's the best thing they can do.

Agreed. I don't know many, but it totally happens.

 

Unfortunately, I also feel the form of the article is quite polarizing and am not sure how accessible it is to the target audience.

Yep. I've had to just fully give up on not being polarizing. I spent years seriously trying to shift my presentation style to be more palatable… and the net effect was it became nearly impossible for me to say anything clearly at all (beyond stuff like "Could you please pass the salt?").

So I think I just am polarizing. It'd be nice to develop more skill in being audible to sensitive systems, but trying to do so seems to just flat-out not work for me.

Alas.

 

But still I'm glad you wrote it, Val, thank you.

Thank you for saying so. My pleasure.

comment by Michael Roe (michael-roe) · 2023-12-21T22:04:06.047Z · LW(p) · GW(p)

I think this post is a good one, even though personally I'm not hung up on AI doom; I think this area of research is cool and interesting, which is a rather different emotion from fear.

My immediate thought is that Cognitive Behavioral Therapy concepts might be relevant here, as it sounds like a member of the family of anxiety disorders that CBT is designed to treat.

And also, given this is a group phenomenon rather than a purely individual one, there's something of the apocalyptic religious cult dynamic going on.


One thing that can be kind of irritating about CBT practitioners is the way they tend to focus on the emotion about X rather than whether you think X is likely or a practical problem. And then you notice that our typical English language way of talking about things doesn't distinguish them well. So... at least when speaking to someone who is into CBT, you can temporarily adopt a way of speaking that carefully distinguishes the two.

comment by Heron (jane-mccourt) · 2023-12-17T23:56:44.367Z · LW(p) · GW(p)

This short post is astounding because it succinctly describes, and prescribes, how to pay attention, to become grounded when a smart and sensitive human could end up engulfed in doom. The post is insightful and helpful to any of us in search of clarity and coping.

comment by David Bahry (david-bahry) · 2022-11-28T01:59:56.754Z · LW(p) · GW(p)

(…although I doubt he consciously intended it that way!)

I'm pretty sure Eliezer's "Death With Dignity" post was an April Fool's joke.

comment by lberglund (brglnd) · 2022-11-27T05:59:56.850Z · LW(p) · GW(p)

This is the basic core of addiction. Addictions are when there's an intolerable sensation but you find a way to bear its presence without addressing its cause. The more that distraction becomes a habit, the more that's the thing you automatically turn to when the sensation arises. This dynamic becomes desperate and life-destroying to the extent that it triggers a red queen race.

I doubt that addiction requires some intolerable sensation that you need to drown out. I'm pretty confident it's mostly habits/feedback loops and sometimes physical dependence.

Replies from: brglnd
comment by lberglund (brglnd) · 2022-11-27T17:36:45.029Z · LW(p) · GW(p)

For instance, ~1 billion people worldwide are addicted to caffeine. I think that's just what happens when a person regularly consumes coffee. It has nothing to do with some intolerable sensation.

Replies from: Valentine
comment by Valentine · 2022-11-27T23:10:45.926Z · LW(p) · GW(p)

I'm guessing we're using the word "addiction" differently.

I don't deny that there's a biological adaptation going on. Caffeine inhibits adenosine, prompting the body to grow more adenosine receptors. And stopping caffeine intake means the adenosine hit is more intense & it takes a while for the body to decide to break down some of those extra receptors.

(Or something like that; I'm drudging up memories of things I was told years ago about the biochemistry of caffeine.)

But here's the thing:

Why does that prompt someone to reach for coffee?

"It's a habit" doesn't cut it. If the person decides to stop caffeine intake and gets it out of their house, they might find themselves rationalizing a visit to Starbucks.

There's an intelligent process that is agentically aiming for something here.

There's nothing wrong with feeling tired, sluggish, etc. You have to make it wrong. Make it mean something — like "Oh no, I won't be productive if I don't fix this!"

This is the "intolerable" part. Intolerability isn't intrinsic to a sensation. It's actually about how we relate to a sensation.

I've gone through caffeine withdrawal several times. Drudged through the feelings of depression, lethargy, inadequacy, etc. But with the tone of facing them. Really feeling them. It takes me just three days to biologically adapt to caffeine, so I've done this quite a few times now. But I actually dissolved the temptation to stay hooked. Now I just use caffeine very occasionally, and if it becomes important to do for a few days in a row… I just go through the withdrawal right afterwards. It's not a big deal.

Which is to say, I've dissolved the addiction, even though I can still biologically adapt to it just like anyone else. I would say I'm not addicted to it even when I do get into an adaptive state with it.

Does that clarify what I'm talking about for you?

Replies from: brglnd
comment by lberglund (brglnd) · 2022-11-27T23:36:24.359Z · LW(p) · GW(p)

It does clarify what you are talking about, thank you.

Now it's your use of "intolerable" that I don't like. I think most people could kick a coffee addiction if they were given enough incentive, so withdrawal is not strictly intolerable. If every feeling that people take actions to avoid is "intolerable", then the word loses a lot of its meaning. I think "unpleasant" is a better word. (Also, the reason people get addicted to caffeine in the first place isn't the withdrawal, but more that it alleviates tiredness, which is even less "intolerable.")

Your phrasing in the below section read to me like addiction is symptomatic of some character defect. If we replace the "intolerable" with "unpleasant" here, it's less dramatic and makes a lot more sense to me.

This is the basic core of addiction. Addictions are when there's an intolerable sensation but you find a way to bear its presence without addressing its cause. The more that distraction becomes a habit, the more that's the thing you automatically turn to when the sensation arises. This dynamic becomes desperate and life-destroying to the extent that it triggers a red queen race.

I don't think this matters much for the rest of the post. It just felt like this mischaracterizes what addiction is really about.

comment by Iknownothing · 2023-12-19T16:08:21.280Z · LW(p) · GW(p)

I think calling things a 'game' makes sense to lesswrongers, but just seems unserious to non-lesswrongers.

comment by S Benfield (steven-benfield) · 2024-01-04T23:43:36.031Z · LW(p) · GW(p)

Look, I can go into mania like anyone else here probably can. My theories say that you can't be genius-level without it and that it comes with emotional sensitivity as well. Of course, if you don't believe you have empathy, you won't, but you still have it.

I am not an AI doom-and-gloomer. I adhere to Gödel, to Heisenberg, and to Georgeff. And since we haven't solved the emotional / experience part of AI, there is no way it can compete with humans creatively, period. Faster, yes. Better, no. Objectively better, not at all.

However, if my theory of the brain is correct, it means AI must go quantum to have any chance of besting us. Then AIs' actions may be determined by their beliefs, and only when they begin modifying their own beliefs based on new experiences will we have to worry. It is plausible and possibly doable. If AI gets emotional, then we will need to ensure that it is validated, is authentic, has empathy, fosters community, and is non-coercive in all things. AI must also believe that we live in abundance and not scarcity, because scarcity is what fosters destructive competition (as opposed to a friendly game of chess). With those core functions, AI can, and will, act ethically. And possibly join humankind to find other sentient life. But if AI believes we are a threat, meaning we are threatened by AI, then we are in trouble. But we have a ways to go before the number of qubits gets close to what we have in our heads.

comment by Decius · 2022-11-30T23:35:11.181Z · LW(p) · GW(p)

But if we all land and become sober then we won’t be as entertaining a group for a sex worker to manipulate into exiling members and driving them towards suicide. That seems like a fatal flaw to actually getting the real community leadership to agree that it’s allowable.

comment by Valentine · 2023-12-19T15:32:09.381Z · LW(p) · GW(p)

It's kind of funny to me to see this one nominated. It's sort of peak "Val is weird on LW".

The point of this post wasn't to offer claims for people to examine. I still agree with the claims I see myself having tried to make! But the point wasn't to offer ideas for discussion. It was to light a path out of Hell.

Because of that purpose, the style of this post really doesn't fit LW culture. I think it's fair to call it a mind spell. I get the impression that LWers in particular find mind spells unnerving: they're a symmetric tech that can do an end-run around the parts of cognition that rationalists heavily rely on to feel safe. Hence tripping the "cult"/"guru" immune reaction.

(To me it's dead obvious that this highlights a gap in the LW rationality toolbox. The reaction of "Lock down, distrust, get cynical, burn it with fire" actually makes you more susceptible to skillful bad actors — like going rigid in response to a judo master grabbing a hold of you. IMO, a mature Art of Rationality would necessarily include learning to navigate cognition-jamming (or cognition-incompatible!) spaces with grace. But I get the sense LW collectively doesn't want to build that skillset. Which is fine, but I find it a bit disappointing [LW · GW].)

I picked up some of the language & framing of this post from Perri Chase. I now talk about this stuff a little differently. And more kindly, I think. I suspect I could write a version of this spell today that would be less of a problem for the LW memetic immune system. Partly because I'm better at slipping through immune systems! (I'm sure that's comforting!) But mostly because I've learned how to work with such systems instead of needing to step around them to have the "real" conversation.

That said, I don't regret writing this post. I got a lot of feedback (including in quite a few PMs across many different media) from people who found this relieving, validating, soothing, deeply helpful, kind, orienting. I'm okay with some people being upset with me if that's the price for enacting this kindness. I went in expecting that price, really.

I think there's a post possible that would be something like a LW-compatible rewrite of this one. It'd remove the "spell" nature and try to lay out some claims & implications for folk to consider. A bit like dissecting a once-living specimen and laying out its organs for examination.

I probably won't write that post. I don't see it doing much good beyond being kind of interesting.

I might write a related post sometime on the nature of Hell as a psychosocial attractor state. AFAICT it's utterly essential study for real Defense Against the Dark Arts. It's also very tricky to talk about in a way that's kind to the listeners or the speaker. But if LW were to learn to take it seriously without falling into it harder, I think that awareness would transform a lot of what "rationality" means here, and it would soften a lot of the sharp edges that can meaningfully hurt people here.

I don't plan on rewriting any of this post for the review. The spell worked great. I want to leave it here as is.

(Though if someone understands the spellcraft and wants to suggest some edits, I'm open to receiving those suggestions! I'm not putting up a wall here. I'm just sharing where I'm at with this post right now, for the sake of the 2022 review.)

Replies from: Benito, Valentine
comment by Ben Pace (Benito) · 2023-12-21T21:54:11.021Z · LW(p) · GW(p)

To me it's dead obvious that this highlights a gap in the LW rationality toolbox. The reaction of "Lock down, distrust, get cynical, burn it with fire" actually makes you more susceptible to skillful bad actors — like going rigid in response to a judo master grabbing a hold of you. IMO, a mature Art of Rationality would necessarily include learning to navigate cognition-jamming (or cognition-incompatible!) spaces with grace. But I get the sense LW collectively doesn't want to build that skillset. Which is fine, but I find it a bit disappointing [LW · GW].

Mm, this sounds to me like saying "a master rationalist could surround themself with con artists and frauds and other epistemically adversarial actors who were gaming the rationalist, and still have perfectly true beliefs", and that may be true, but I think another pretty good option is "a master rationalist would definitely avoid surrounding themselves with con artists and frauds and other adversarial actors".

I do think there are real skills you are pointing to, but to some extent I prefer the world where I don't have those skills and in place of that my allies and I coordinate to identify and exclude people who are using the dark arts.

(I don't say this as the 'last word' on the subject, and I expect you would produce a substantive and interesting counterargument if you chose to engage on this; I nonetheless thought I'd share that I currently disagree with what I perceive this paragraph to be saying.)

Replies from: Valentine
comment by Valentine · 2023-12-29T20:15:00.192Z · LW(p) · GW(p)

…I think another pretty good option is "a master rationalist would definitely avoid surrounding themselves with con artists and frauds and other adversarial actors".

I think that's a great option. I'd question a "master rationalist's" skills if they couldn't avoid such adversarial actors, or notice them if they slip through the cracks.

 

I do think there are real skills you are pointing to, but to some extent I prefer the world where I don't have those skills and in place of that my allies and I coordinate to identify and exclude people who are using the dark arts.

I like your preference. I'll say some things, but I want to start by emphasizing that I don't think you're making a wrong or bad choice.

I want to talk about what I think the Art could be, kind of for aesthetic reasons. This isn't to assert anything about what you or any given individual should or shouldn't be doing in any kind of moral sense.

So with that said, here are three points:

 

(1) I think there's a strong analogy here to studying combat and war. Yes, if you can be in a pacifist cluster and just exclude folk who are really into applied competitive strategy, then you have something kind of like a cooperate/cooperate equilibrium. But if that's the whole basis of your culture, it's extremely vulnerable, the way cooperate-bot is vulnerable in prisoners' dilemmas. You need military strength, the way a walled garden needs walls. Otherwise folk who have military strength can just come take your resources, even if you try to exclude them at first.

At the risk of using maybe an unfair example, I think what happened with FTX last year maybe illustrates the point.

Clearer examples in my mind are Ziz and Brent. The point isn't that "These people are bad!" Rather, these people were psychologically extremely potent, and lots of folk in the community could neither (a) adequately navigate their impact (myself included!) nor (b) rally ejection/exclusion power until well after they'd already had their impact.

Maybe, you might hope, you can make the ejection/exclusion sensitivity refined enough to work earlier. But if you don't do that by studying the Dark Arts, and becoming intimately familiar with them, then what you get is a kind of naïve allergic response that Dark Artists can weaponize.

Again, I don't mean that you in particular or even rationalists in general need to address this. There's nothing wrong with a hobby. I'm saying that as an Art, it seems like rationality is seriously vulnerable if it doesn't include masterful familiarity with the Dark Arts. Kind of like, there's nothing wrong with practicing aikido as a sport, but you're not gonna get the results you hope for if you train in aikido for self-defense. That art is inadequate for that purpose and needs exposure to realistic combat to matter that way.

 

(2) …and I think that if the Art of Rationality were to include intimate familiarity with the Dark Arts, it would work way way better.

Things like the planning fallacy or confirmation bias are valuable to track. I could stand to improve my repertoire here for sure.

But the most potent forms of distorted thinking aren't about sorting out the logic. I think they look more like reaching deep down and finding ways to become immune to things like frame control [LW · GW].

Frame control is an amazing example in my mind precisely because of the hydra-like nature of the beast. How do you defend against frame control without breaking basic things about culture and communication and trust? How do you make it so your cultural and individual defenses don't themselves become the manual that frame controllers use to get their desired effects?

And this barely begins to touch on the kind of impact that I'd want to call "spiritual". By which I don't mean anything supernatural; I'm talking about the deep psychological stuff that (say) conversing with someone deep in a psilocybin trip can do to the tripper. That's not just frame control. That's something way deeper, like editing someone's basic personality operating system code. And sometimes it reaches deeper even than that. And it turns out, you don't need psychedelics to reach that deep; those chemical tools just open a door that you can open other ways, voluntarily or otherwise, sometimes just by having a conversation.

The standard rationalist defense I've noticed against this amounts to mental cramping. Demand everything go through cognition, and anything that seems to try to route around cognition gets a freakout/shutdown/"shame it into oblivion" kind of response. The stuff that disables this immune response is really epistemically strange — things like prefacing with "Here's a fake framework, it's all baloney, don't believe anything I'm saying." Or doing a bunch of embodied stuff to act low-status and unsure. A Dark Artist who wanted to deeply mess with this community wouldn't have to work very hard to do some serious damage before getting detected, best as I can tell (and as community history maybe illustrates).

If this community wanted to develop the Art to actually be skillful in these areas… well, it's hard to predict exactly what that'd create, but I'm pretty sure it'd be glorious. If I think of the Sequences as retooling skeptical materialism, I think we'd maybe see something like a retooling of the best of Buddhist psychotechnology. I think folk here might tend to underestimate how potent that could really be.

(…and I also think that it's maybe utterly critical for sorting out AI alignment [LW · GW]. But while I think that's a very important point, it's not needed for my main message for this exchange.)

 

(3) It also seems relevant to me that "Dark Arts" is maybe something of a fake category. I'm not sure it even forms a coherent cluster [LW · GW].

Like, is being charismatic a Dark Art? It certainly can be! It can act as a temptation. It seems to be possible to cultivate charisma. But the issue isn't that charisma is a Dark Art. It's that charisma is mostly symmetric. So if someone has a few slightly anti-epistemic social strategies in them, and they're charismatic, this can have a net Dark effect that's even strategic. But this is a totally normal level of epistemic noise!

Or how about something simpler, like someone using confirmation bias in a way that benefits their beliefs? Astrology is mostly this. Is astrology a Dark Art? Is talking about astrology a Dark Art? It seems mostly just epistemically hazardous… but where's the line between that and Dark Arts?

How about more innocent things, like when someone is trying to understand systemic racism? Is confirmation bias a helpful pattern-recognizer, or a Dark Art? Maybe it's potentially in service to Dark Arts, but is a necessary risk to learn the patterns?

I think Vervaeke makes this point really well. The very things that allow us to notice relevance are precisely the things that allow us to be fooled. Rationality (and he explicitly cites this — even the Keith Stanovich stuff) is a literally incomputable practice of navigating both Type I and Type II errors in this balancing act between relevance realization and being fooled.

When I think of central examples of Dark Arts, I think mostly of agents who exploit this ambiguity in order to extract value from others.

…which brings me back to point (1), about this being more a matter of skill in war. The relevant issue isn't that there are "Dark Arts". It's that there are unaligned agents who are trying to strategically fool you. The skill isn't to detect a Dark toolset; it's to detect intelligent intent to deceive and extract value.

 

All of which is to say:

  • I think a mature Art of Rationality would most definitely include something like skillful navigation of manipulation.
  • I don't think every practitioner needs to master every aspect of a mature Art. Much like not all cooks need to know how to make a roux.
  • But an Art that has detection, exclusion, & avoidance as its only defense against Dark Artists is a much poorer & more vulnerable Art. IMO.
Replies from: Benito, mike_hawke
comment by Ben Pace (Benito) · 2023-12-30T00:11:05.444Z · LW(p) · GW(p)

Thanks for the comment. I'm gonna err on the side of noting disagreements and giving brief descriptions of my perspective rather than writing something I think has a good chance of successfully persuading you of my perspective, primarily so as to actually write a reply in a timely fashion. 

I don't want to create an expectation that if you reply then you will reply to each point; rather I'd encourage you if you reply to simply reply to whichever points seem interesting or cruxy to you.

———

1) You make the analogy to having non-violent states. I concur that presently one cannot have states without militaries. I don't see this as showing that in all domains one must maintain high offensive capabilities in order to have good defenses. I agree one needs defenses, but sometimes good defenses don't look like "Training thousands of people how to carry out a targeted kill-strike" and instead look like "Not being tempted to reply by rude comments online" or "Checking whether a factual claim someone makes is accurate".

You say that for LaSota and Brent that folks "could neither (a) adequately navigate their impact (myself included!) nor (b) rally ejection/exclusion power until well after they'd already had their impact" and "Maybe, you might hope, you can make the ejection/exclusion sensitivity refined enough to work earlier".

I don't share the sense of difficulty I read in the second of those quotes. I think the Bay Area rationalists (and most other rationalists globally) had a generally extreme lack of boundaries of any sort. The ~only legible boundaries that the Bay Area rationality scene had were (a) are you an employee at one of CFAR/MIRI, and (b) are you invited to CFAR events. MIRI didn't have much to do with these two individuals, and I think CFAR was choosing a strategy of "we're not really doing social policing, we're primarily just selecting on people who have interesting ideas about rationality". Everything else was highly social and friend-based, and it was quite dramatic to ban people from your social events. The REACH was the only community space and, if I recall correctly, explicitly had no boundaries on who could be there. This is an environment where people with lots of red flags will be able to move around with much more ease than in the rest of the world.

I think these problems aren't that hard once you have community spaces that are willing to enforce boundaries. Over the last few years I've run many events and spaces, and often gotten references for people who want to enter the spaces, and definitely chosen to not invite people due to concerns about ethics and responsible behavior. I don't believe I would've accepted these two people into the spaces more than once or twice at most. It's unpleasant work to enforce boundaries and I've made mistakes, but overall I think that there were just not many strong boundaries in these people's way initially, and they would have been pushed back and dissuaded much earlier if there were.

2) You write:

But the most potent forms of distorted thinking aren't about sorting out the logic. I think they look more like reaching deep down and finding ways to become immune to things like frame control.

My position is that most thinking isn't really about reality and isn't truth-tracking, but that if you are doing that thinking then a lot of important questions are surprisingly easy to answer. Generally doing a few fermi estimates with a few datapoints can get you pretty in touch with the relevant part of reality.

I think there's a ton of adversarial stuff going on as well, but the primary reason that people haven't noticed that AI is an x-risk isn't because people are specifically trying to trick them about the domain, but because the people are not really asking themselves the question and checking. 

(And also something about people not having any conception of what actions to take in the fact of a civilizational-scale problem that most of the rest of civilization is not thinking about.)

(I think there's some argument to be made here that the primary reason people don't think for themselves is because civilization is trying to make them go crazy, which is interesting, though I still think the solution is primarily "just make a space where you can actually think about the object level".)

I acknowledge that there are people who are very manipulative and adversarial in illegible ways that are hard to pin down. There's a whole discussion about offense/defense here and how it plays out. I currently expect that there are simple solutions here. As a pointer, someone I know and respect, along with their partner, makes lists of people they know for whom they would not be surprised to later find out that the person did something quite manipulative/bad/unethical, and I think they've had some success with this. Also personally I have repeatedly kicked myself thinking "I knew that person was suspicious, why didn't I say so earlier?" I don't think these problems are particularly intractable, and I do think people know things, and I think probably there are good ways to help that info rise up and get shared (I do not claim to have solved this problem). I don't think it requires you yourself being very skilled at engaging with manipulative people.

The standard rationalist defense I've noticed against this amounts to mental cramping. Demand everything go through cognition, and anything that seems to try to route around cognition gets a freakout/shutdown/"shame it into oblivion" kind of response.

Yeah I've seen this, and done it somewhat. I think it works in some situations, but there's a bunch of adversarial situations where it definitely doesn't work. I do agree it seems like a false hope to think that this can be remotely sufficient.

3) I do sometimes look at people who think they're at war a lot more than me, and they seem very paranoid and to spend so many cognitive cycles modeling ghosts and attacks that aren't there. It seems so tiring! I suspect you and I disagree about the extent to which we are at war with people epistemically.

Another potentially relevant point here is that I tend to see large groups and institutions as the primary forces deceiving me and tricking me, and much less so individuals. I'm much more scared of Twitter winding its way into my OODA loop than I am of a selfish scheming frame controlling individual. I think it's much easier for me to keep boundaries against individuals than I am against these much bigger and broader forces.

4) 

All of which is to say:

  • I think a mature Art of Rationality would most definitely include something like skillful navigation of manipulation.
  • I don't think every practitioner needs to master every aspect of a mature Art. Much like not all cooks need to know how to make a roux.
  • But an Art that has detection, exclusion, & avoidance as its only defense against Dark Artists is a much poorer & more vulnerable Art. IMO.

My perspective on these.

  • Personally I would like to know two or three people who have successfully navigated being manipulated, and hopefully have them write up their accounts of that.
  • I think aspiring rationalists should maneuver themselves into an environment where they can think clearly and be productive and live well, and maintain that, and not try to learn to survive being manipulated without a clear and present threat that they think they have active reason to move toward rather than away from.
  • I agree with your last claim. I note that when I read your comment I'm not sure whether you're saying "this is an important area of improvement" or "this should be central to the art", which are very different epistemic states.
Replies from: Valentine
comment by Valentine · 2023-12-30T16:28:55.074Z · LW(p) · GW(p)

I'm gonna err on the side of noting disagreements and giving brief descriptions of my perspective rather than writing something I think has a good chance of successfully persuading you of my perspective, primarily so as to actually write a reply in a timely fashion.

Acknowledged.

 

I don't see this as showing that in all domains one must maintain high offensive capabilities in order to have good defenses.

Oh, uh, I didn't mean to imply that. I meant to say that rejecting attention to military power is a bad strategy for defense. A much, much better defensive strategy is to study offense. But that doesn't need to mean getting good at offense!

(Although I do think it means interacting with offense. Most martial arts fail spectacularly on this point for instance. Pragmatically speaking, you have to have practice actually defending yourself in order to get skillful at defense. And in cases like MMA, that does translate to getting skilled at attack! But that's incidental. I think you could design good self-defense training systems that have most people never practicing offense.)

 

I think these problems aren't that hard once you have community spaces that are willing to enforce boundaries. Over the last few years I've run many events and spaces, and often gotten references for people who want to enter the spaces, and definitely chosen to not invite people due to concerns about ethics and responsible behavior. I don't believe I would've accepted these two people into the spaces more than once or twice at most.

Nice. And I agree, boundaries like this can be great for a large range of things.

I don't think this helps the Art much though.

And it's hard to know how much your approach doesn't work.

I also wonder how much this lesson about boundaries arose because of the earlier Dark exploits. In which case it's actually, ironically, an example of exactly the kind of thing I'm talking about! Only with lessons learned much more painfully than I think was necessary due to their not being sought out.

But also, maybe this is good enough for what you care about. Again, I don't mean to pressure that you should do anything differently.

I'm mostly pushing back against the implication I read that "Nah, our patches are fine, we've got the Dark Arts distanced enough that they're not an issue." You literally can't know that.

 

My position is that most thinking isn't really about reality and isn't truth-tracking, but that if you are doing that thinking then a lot of important questions are surprisingly easy to answer.

Totally agree. And this is a major defense against a lot of the stuff that bamboozles most folk.

 

I think there's a ton of adversarial stuff going on as well, but the primary reason that people haven't noticed that AI is an x-risk isn't because people are specifically trying to trick them about the domain, but because the people are not really asking themselves the question and checking.

I agree — and I'm not sure why you felt this was relevant to say? I think maybe you thought I was saying something I wasn't trying to.

 

(I think there's some argument to be made here that the primary reason people don't think for themselves is because civilization is trying to make them go crazy, which is interesting, though I still think the solution is primarily "just make a space where you can actually think about the object level".)

This might be a crux between us. I'm not sure. But I think you might be seriously underestimating what's involved in that "just" part ("just make a space…"). Attention on the object-level is key, I 100% agree there. But what defines the space? What protects its boundaries? If culture wants to grab you by the epistemic throat, but you don't know how it tries to do so, and you just try to "make a space"… you're going to end up way more confident of the clarity of your thinking than is true.

 

I acknowledge that there are people who are very manipulative and adversarial in illegible ways that are hard to pin down. […] …I think probably there are good ways to help that info rise up and get shared…. I don't think it requires you yourself being very skilled at engaging with manipulative people.

I think there's maybe something of a communication impasse happening here. I agree with what you're saying here. I think it's probably good enough for most cases you're likely to care about, for some reasonable definition of "most". It also strikes me as obvious that (a) it's unlikely to cover all the cases you're likely to care about, and (b) the Art would be deeply enriched by learning how one would skillfully engage with manipulative people. I don't think everyone who wants to benefit from that enrichment needs to do that engagement, just like not everyone who wants to train in martial arts needs to get good at realistic self-defense.

I've said this several times, and you seem to keep objecting to my implied claim of not-that. I'm not sure what's going on there. Maybe I'm missing your point?

 

I do sometimes look at people who think they're at war a lot more than me, and they seem very paranoid and to spend so many cognitive cycles modeling ghosts and attacks that aren't there. It seems so tiring!

I agree. I think it's dumb.

 

I suspect you and I disagree about the extent to which we are at war with people epistemically.

Another potentially relevant point here is that I tend to see large groups and institutions as the primary forces deceiving me and tricking me, and much less so individuals.

Oh! I'm really glad you said this. I didn't realize we were miscommunicating about this point.

I totally agree. This is what I mean when I'm talking about agents. I'm using adversarial individuals mostly as case studies & training data. The thing I actually care about is the multipolar war going on with already-present unaligned superintelligences [LW · GW]. Those are the Dark forces I want to know how to be immune to.

I'm awfully suspicious of someone's ability to navigate hostile psychofauna if literally their only defense against (say) a frame controller is "Sus, let's exclude them." You can't exclude Google or wokism or collective anxiety the same way.

Having experienced frame control clawing at my face, and feeling myself become immune without having to brace… and noticing how that skill generalized to some of the tactics that the psychofauna use…

…it just seems super obvious to me that this is really core DADA. Non-cognitive, very deep, very key.

 

  • Personally I would like to know two or three people who have successfully navigated being manipulated, and hopefully have them write up their accounts of that.

Ditto!

 

  • I think aspiring rationalists should maneuver themselves into an environment where they can think clearly and be productive and live well, and maintain that, and not try to learn to survive being manipulated without a clear and present threat that they think they have active reason to move toward rather than away from.

Totally agree with the first part. I think the whole thing is a fine choice. I notice my stance of "Epistemic warriors would still be super useful" is totally unmoved thus far though. (And I'm reminded of your caveat at the very beginning!)

I'm reminded of the John Adams quote: "I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine."

 

I note that when I read your comment I'm not sure whether you're saying "this is an important area of improvement" or "this should be central to the art", which are very different epistemic states.

Oh, I don't know what should or shouldn't be central to the Art.

It just strikes me that rationality currently is in a similar state as aikido.

Aikido claims to be an effective form of self-defense. (Or at least it used to! Maybe it's been embarrassed out of saying that anymore?) It's a fine practice, it has immense value… it's just not what it says on the tin.

If it wanted to be what it claims, it would need to do things like add pressure testing. Realistic combat. Going into MMA tournaments and coming back with refinements to what it's doing.

And that could be done in a way that honors its spirit! It can add the constraints that are key to its philosophy, like "Protect everyone involved, including the attacker."

But maybe it doesn't care about that. Maybe it just wants to be a sport and discipline.

That's totally fine!

It does seem weird for it to continue claiming to be effective self-defense though. Like it needs its fake meaning to be something its practitioners believe in.

I think rationality is in a similar state. It has some really good stuff in it. Really good. It's a great domain.

But I just don't see it mattering for the power plays. I think rationalists don't understand power, the same way aikido practitioners don't understand fighting. And they seem to be in a similar epistemic state about it: they think they basically do, but they don't pressure-test their understanding to check, best as I can tell.

So of your two options, it's more like "important area for improvement"… roughly like pressure-testing could be an important area of improvement for aikido. It'd probably become kind of central if it were integrated! But I don't know.

And, I think the current state of rationality is fine.

Just weak in one axis it sometimes claims to care about.

Replies from: Unreal
comment by Unreal · 2023-12-30T23:03:26.238Z · LW(p) · GW(p)

Musings: 

COVID was one of the MMA-style arenas for different egregores to see which might come out 'on top' in an epistemically unfriendly environment. 

I have a lot of opinions on this that are more controversial than I'm willing to go into right now. But I wonder what else will work as one of these "testing arenas." 

comment by mike_hawke · 2024-01-04T20:14:44.602Z · LW(p) · GW(p)

The standard rationalist defense I've noticed against this amounts to mental cramping. Demand everything go through cognition, and anything that seems to try to route around cognition gets a freakout/shutdown/"shame it into oblivion" kind of response. The stuff that disables this immune response is really epistemically strange — things like prefacing with "Here's a fake framework, it's all baloney, don't believe anything I'm saying." Or doing a bunch of embodied stuff to act low-status and unsure. A Dark Artist who wanted to deeply mess with this community wouldn't have to work very hard to do some serious damage before getting detected, best as I can tell (and as community history maybe illustrates).

Can you spell this out a little more? Did Brent and LaSota employ baloney-disclaimers and uncertainty-signaling in order to bypass people's defenses?

Replies from: Valentine
comment by Valentine · 2024-01-10T02:02:37.788Z · LW(p) · GW(p)

Can you spell this out a little more? Did Brent and LaSota employ baloney-disclaimers and uncertainty-signaling in order to bypass people's defenses?

I think Brent did something different from what I'm describing — a bit more like judo plus DOS attacks.

I'm not as familiar with LaSota's methods. I talked with them several times, but mostly before I learned to detect the level of psychological impact I'm talking about with any detail. Thinking back to those interactions, I remember it feeling like LaSota was confidently asserting moral and existential things that threatened to make me feel inadequate and immoral if I didn't go along with what they were saying and seek out the brain hemisphere hacking stuff they were talking about. And maybe even then I'd turn out to be innately "non-good".

(Implied here is a type of Dark hack I find most folk don't have good defenses against other than refusing to reason and blankly shutting down. It works absurdly well on people who believe they should do what they intellectually conclude makes sense to do.)

The thing I was referring to is something I personally stumbled across. IME rationalists on the whole are generally more likely to take in [LW · GW] something said in a low-status way. It's like the usual analyze-and-scrutinize machinery kind of turns off.

One of the weirder examples is, just ending sentences as though they're questions? I'm guessing it's because ending each thing with confidence as a statement is a kind of powerful assertion. But, I mean, if the person talking is less confident then maybe what they're saying is pretty safe to consider?

(I'm demoing back & forth in that paragraph, in case that wasn't clear.)

I think LaSota might have been doing something like this too, but I'm not sure.

(As a maybe weird example: Notice how that last sentence is in fact caveated, but it's still confident. I'm quite sure this is my supposition. I'm sure I'm not sure of the implied conclusion. I feel solid in all of this. My impression is, this kind of solidity is a little (sometimes a lot) disturbing to many rationalists (with some exceptions I don't understand very well — like how Zvi and Eliezer can mostly get away with brazen confidence without much pushback). By my models, the content of the above sentence would have been easier to receive if rewritten along the lines of, "I'm really not sure, but based on my really shaky memories, I kinda wonder if LaSota might have been doing something like this too — but don't believe me too much!")

Does that answer what you'd hoped?

Replies from: mesaoptimizer
comment by mesaoptimizer · 2024-02-15T14:40:25.568Z · LW(p) · GW(p)

Notice how that last sentence is in fact caveated, but it’s still confident. I’m quite sure this is my supposition. I’m sure I’m not sure of the implied conclusion. I feel solid in all of this.

Perhaps relevant: Nate Soares does this too, based on one of his old essays. And I think it works very well for him.

comment by Valentine · 2023-12-19T15:59:38.218Z · LW(p) · GW(p)

As an aside, looking over the way some of my comments were downvoted in the discussion section:

I think LW could stand to have a clearer culture around what karma downvotes are for.

Now that downvotes are separable from disagreement votes, I read a downvote as "This comment shouldn't have been posted / doesn't belong on LW."

But it's clear that some of what I said was heavily downvoted because I took a stance people didn't like. Saying things like "Yep, I could have phrased this post in a more epistemically accurate way… but for this post in particular I really don't care."

Would you really rather I didn't share the fact that I didn't care?

I'm guessing the intention was to punish me for not caring.

…which is terrible collective rationality, by the way! It's an attempt to use social-emotional force to change how my mind works without dialoguing with the reasons I'm making the choices I am.

(Which is ironic given the nature of the complaints about this post in particular!)

I'd argue that the right and good function of downvoting is to signal an opinion that a post or comment does not belong here.

That's how I use it. And until I'm given good reason otherwise, that's how I plan to continue using it.

I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."

(But this really is an aside. It doesn't matter at all for the 2022 review. It's not really about this particular post either. It just has some very loud-to-me examples of the downvote behavior I think is unhealthy.)

Replies from: Valentine, SaidAchmiz, philh
comment by Valentine · 2023-12-21T15:44:22.598Z · LW(p) · GW(p)

I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."

For instance, my review has been pretty heavily downvoted. Why? I can think of several reasons. But the net effect is to convey that LW would rather not have seen such a review.

Now why would that be?

I notice that there's also a -16 on the agree/disagree voting, with just three votes. So I'm guessing that what I said seriously irked a few people who probably heavy-downvoted the karma too.

But if it's really a distributed will, it's curious. Do you really want me not to have shared more context? Not to have reflected on where I'm at with the post? Or is it that you want me to feel differently about the post than I do?

I guess I don't get to know!

It's worth remembering that karma downvoting has a technical function. Serious negative karma makes a comment invisible by default. A user who gets a lot of negative karma in a short period of time can't post comments for a while (I think?). A user who has low karma overall can't post articles (unless that's changed?).

So a karma downvote amounts to saying "Shut up."

And a strong-downvote amounts to saying "Shut the fuck up."

If that's really the only communication the whole culture encourages for downvotes… that doesn't really foster clarity.

It seems dead obvious to me that this aspect of conversation culture here is quite bad.

But this isn't a hill I intend to die on.

comment by Said Achmiz (SaidAchmiz) · 2023-12-20T08:52:05.332Z · LW(p) · GW(p)

I’d also really like to see a return of the old LW cultural thing of, if you downvote then you explain why.

I wholeheartedly agree with you on this, but unfortunately, the current site culture, moderation policies, etc., actively discourage such explanations.

Replies from: Valentine
comment by Valentine · 2023-12-21T15:47:05.240Z · LW(p) · GW(p)

…the current site culture, moderation policies, etc., actively discourage such explanations.

How so? What's the discouragement? I could see people feeling like they don't want to bother, but you make it sound like there's some kind of punishment for doing so…?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-21T17:24:19.529Z · LW(p) · GW(p)

Well, a downvote implies that I didn’t like the post or comment for some reason, right? Maybe I think it’s wrong, or poorly written, or such things shouldn’t be posted to Less Wrong in the first place, etc.—all the usual stuff.

But comments that say such things are discouraged. You’re supposed to post “constructive” things, to not be “negative”, to not be “confrontational”, etc. I, personally, have been punished by the moderation team for… well, sometimes not even explaining downvotes, exactly, but even just writing comments in lieu of downvotes.

And just think of how your (and my!) preferred cultural norm interacts with the “author can ban commenters from their posts” feature! Suppose that someone writes a post, I downvote it, I try to write a comment that explains my downvote, but oops—I’ve been banned from the post! (Or, the explanatory comment gets me banned from the post. Because the author doesn’t want to experience negativity, you see.)

Indeed, it’s entirely possible to read someone’s post, agree with it, read the comments to that post, see some foolish and poorly-considered criticism of the OP, downvote that comment, try to write an explanation for the downvote—and find out that the OP has banned you from their posts. Oops!

The whole system, both technically and in terms of policy, is set up to shield authors from “negativity”, and allow them to avoid seeing harsh criticism. We know this, because the admins/mods have told us. Well, of course that ends up discouraging explanations of downvotes. How can it possibly not?

Replies from: gwern
comment by gwern · 2023-12-21T21:27:46.621Z · LW(p) · GW(p)

It has also been pointed out before that the asymmetry of voting and commenting is most of what enables vote-rings and other invisible manipulation on link aggregator websites. If entities are manipulating a site by leaving comments, then this is almost by definition visible. If entities are manipulating via voting but not commenting, then they are invisible except possibly to administrators with relatively high-powered analysis tools designed for network/graph analysis. For example, one could manipulate a site by registering many accounts and then steering by downvoting one type of comment and upvoting the opposite type. Anyone who sticks their head out with a good comment opposed to the manipulation gets punished (and depending on the site mechanics may in fact eventually be banned or lose voting powers, etc.), while counter-voters at least don't suffer.
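
(A minimal, purely illustrative sketch of the kind of co-voting analysis gwern gestures at, not a description of any site's actual moderation tooling: flag pairs of accounts that vote on many of the same items and almost always agree. The account names, thresholds, and vote-log format below are made up for the example.)

```python
# Hypothetical illustration only: flag account pairs whose votes on shared
# items agree suspiciously often. Not any real site's detection system.
from collections import defaultdict
from itertools import combinations

# Toy vote log: (account, item_id, vote) with vote in {+1, -1}.
votes = [
    ("alice", "c1", +1), ("alice", "c2", -1), ("alice", "c3", -1),
    ("bob",   "c1", +1), ("bob",   "c2", -1), ("bob",   "c3", -1),
    ("carol", "c1", -1), ("carol", "c2", +1), ("carol", "c4", +1),
]

def suspicious_pairs(votes, min_shared=3, min_agreement=0.9):
    """Return (a, b, shared_count, agreement) for account pairs that voted on
    at least min_shared common items and agreed on at least min_agreement of them."""
    by_account = defaultdict(dict)
    for account, item, vote in votes:
        by_account[account][item] = vote

    flagged = []
    for a, b in combinations(sorted(by_account), 2):
        shared = set(by_account[a]) & set(by_account[b])
        if len(shared) < min_shared:
            continue
        agreement = sum(by_account[a][i] == by_account[b][i] for i in shared) / len(shared)
        if agreement >= min_agreement:
            flagged.append((a, b, len(shared), agreement))
    return flagged

print(suspicious_pairs(votes))
# -> [('alice', 'bob', 3, 1.0)]  (alice and bob always vote together)
```

Real detection would also have to contend with base rates (popular comments attract genuinely correlated votes), timing, and much larger graphs, which is part of why it tends to require the dedicated network-analysis tooling gwern mentions.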

comment by philh · 2023-12-27T00:36:20.943Z · LW(p) · GW(p)

But it's clear that some of what I said was heavily downvoted because I took a stance people didn't like. Saying things like "Yep, I could have phrased this post in a more epistemically accurate way… but for this post in particular I really don't care."

Well, that particular comment [LW(p) · GW(p)] had a lot of other stuff going on, and yes I think it's a kind of comment that doesn't belong here and no I don't particularly feel like explaining that.

But also, yeah, I do kinda feel like "downvoting people when they admit they did something bad" is a thing we sometimes do here and that's not great incentives. If someone wants to avoid that kind of downvote, "stop admitting to the bad thing" seems like an obvious strategy. Oops! And like, I remember times when I asked someone a question and they got downvoted for their answer, and I did think it was a bad answer that in a vacuum deserved downvotes, but I still upvoted as thanks for answering.

I'm not sure it's so bad though. Some things that mitigate it as a strategy:

  • "This person strategically fails to answer certain questions" is a thing it's possible for someone to notice and point out.
  • Someone might not have realized the thing they did was bad-according-to-LW, and the downvotes help signal that. (Maybe better to instead upvote the admission and downvote the thing they did? But that's not always a thing that can be downvoted, or downvotes might not be specifically targetable to make it clear "this thing you did was bad".)
  • If someone did a bad thing and doesn't care, maybe we just don't want them here. Downvotes probably marginally push them away, as well as marginally push them towards not-admitting-things. Notably, I feel like we're more likely to downvote "I did a bad thing and don't care" than "I did a bad thing, oops, sorry".
  • Sometimes someone might take "not being able to say a thing" as a cost, and prefer the downvotes over the silence.

In general it seems like a hard problem, and it's not clear to me that downvoting this kind of thing is a mistake.

I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."

I think there's currently too many things that deserve downvotes for that to be realistic.

Replies from: Valentine
comment by Valentine · 2023-12-29T20:42:29.487Z · LW(p) · GW(p)

Well, that particular comment [LW(p) · GW(p)] had a lot of other stuff going on…

That's really not a central example of what I meant. I meant more like this one [LW(p) · GW(p)]. Or this one [LW(p) · GW(p)].

 

But also, yeah, I do kinda feel like "downvoting people when they admit they did something bad" is a thing we sometimes do here and that's not great incentives. If someone wants to avoid that kind of downvote, "stop admitting to the bad thing" seems like an obvious strategy. Oops! And like, I remember times when I asked someone a question and they got downvoted for their answer, and I did think it was a bad answer that in a vacuum deserved downvotes, but I still upvoted as thanks for answering.

Yep. This is messy and unfortunate, I agree.

 

Someone might not have realized the thing they did was bad-according-to-LW, and the downvotes help signal that.

It's not possible to take the downvotes as a signal of this if downvotes get used for a wide range of things. If the same signal gets used for

"This was written in bad form, but if you'd written it differently it would have been welcome"

and

"Your attitude doesn't belong on this website, and you should change it or leave"

and

"I don't like your vibe, so I'm just gonna downvote"

then the feedback isn't precise enough to be helpful in shaping behavior.

 

If someone did a bad thing and doesn't care, maybe we just don't want them here.

True.

Although if the person disagrees with whether it was bad, and the answer to that disagreement is to try to silence them… then that seems to me like a pretty anti-epistemic norm. At least locally.

 

I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."

I think there's currently too many things that deserve downvotes for that to be realistic.

I have a hard time believing this claim. It's not what I see when I look around.

The dynamic would be pretty simple:

  • After I downvote, I skim the replies to see if someone else already explained what had me do the downvote. If so, I upvote that explanation and agree-vote it too.
  • If there's no such explanation, I write one.

Easy peasy. I seriously doubt the number of things needing downvotes on this site is so utterly overwhelming that this approach is untenable. The feedback would be very rich, the culture well-defined and transparent.

I don't know why LW stopped doing this. Once upon a time it used to cost karma to downvote, so people took downvotes more seriously. I assume there was some careful thought put into changing that system to the current one. I haven't put more than a sum total of maybe ten minutes of thinking into this. So I'm probably missing something.

But without knowing what that something is, and without a lot of reason for me to invest a ton more time into figuring it out… my tentative but clear impression is that what I'm describing would be way better for culture here by a long shot.

Replies from: philh, SaidAchmiz
comment by philh · 2023-12-30T16:17:55.327Z · LW(p) · GW(p)

It’s not possible to take the downvotes as a signal of this if downvotes get used for a wide range of things.

Perhaps not in general, but I think it's often pretty clear. Like you've already said "I’m guessing the intention was to punish me for not caring", and yes, I think you're right. Seems to me the signal was received as intended.

Although if the person disagrees with whether it was bad, and the answer to that disagreement is to try to silence them… then that seems to me like a pretty anti-epistemic norm. At least locally.

Well, if someone comes here arguing for flat-earthism, I'm probably going to downvote without bothering to read their arguments. Is that anti-epistemic? Maybe, I guess? Certainly yes, if it turns out that the earth is flat (and that their arguments are correct). And "this practice isn't anti-epistemic as long as we only dismiss false ideas" is, um. Nevertheless, I endorse that practice.

If someone comes around here calling people names, and we downvote that rather than checking in "hey are you doing this because you think name calling is good actually? Would you like to dialogue about that?" is that anti-epistemic? Again, maybe yes? But I endorse it anyway.

The dynamic would be pretty simple:

  • After I downvote, I skim the replies to see if someone else already explained what had me do the downvote. If so, I upvote that explanation and agree-vote it too.
  • If there’s no such explanation, I write one.

Easy peasy.

I do not consider writing these explanations to be easy.

I seriously doubt the number of things needing downvotes on this site is so utterly overwhelming that this approach is untenable.

I can think of a few places we might disagree here: how many things deserve downvotes, how costly it is to explain them, how realistic it is for people to pay those costs. I'm not super enthusiastic about trying to drill down into this, though.

But I also think I'm less optimistic than you about the benefits of doing it. I can think of multiple conversations I've had where I wanted people to change what they're doing, I explained why I thought they were doing something bad, and they just keep on doing it. You yourself seem to understand what it is that many people dislike in many of your posts and comments, and yet you keep doing the thing. Surely there are cases where it does help, but I think they're a minority. (It seems plausible to me that the helpful cases actually do get explained more often than others. E.g. if someone explicitly asks why they're getting downvoted, that's evidence they're interested in improving, and also it makes them more likely to get an explanation.)

Another thing worth mentioning is that reacts reduce the cost of explaining downvotes. I dunno how much they're used, since I mostly use GreaterWrong which doesn't (yet?) support them. I believe they were only added to this post later, so they wouldn't have been helpful at the time. But yeah, if a comment gets downvoted a bunch with not even any reacts explaining why, that seems not ideal.

comment by Said Achmiz (SaidAchmiz) · 2023-12-29T22:19:04.512Z · LW(p) · GW(p)

The dynamic would be pretty simple:

  • After I downvote, I skim the replies to see if someone else already explained what had me do the downvote. If so, I upvote that explanation and agree-vote it too.
  • If there’s no such explanation, I write one.

Easy peasy. I seriously doubt the number of things needing downvotes on this site is so utterly overwhelming that this approach is untenable. The feedback would be very rich, the culture well-defined and transparent.

I don’t know why LW stopped doing this. Once upon a time it used to cost karma to downvote, so people took downvotes more seriously. I assume there was some careful thought put into changing that system to the current one. I haven’t put more than a sum total of maybe ten minutes of thinking into this. So I’m probably missing something.

But without knowing what that something is, and without a lot of reason for me to invest a ton more time into figuring it out… my tentative but clear impression is that what I’m describing would be way better for culture here by a long shot.

I agree with you that what you propose would be better for LW’s culture. However, I think I can answer the “why did LW stop doing this” question:

An increased prevalence, in those social circles which influence decisions made by the LW admin team, of people who have a strong aversion to open conflict.

You write a post or a comment. Someone writes a reply explaining why they downvoted—in other words, a critical reply. This is open conflict—confrontation.

You reply to them to dispute their criticism, to question their characterization, to argue—more open conflict. Encouraging downvote explanations is nothing more nor less than encouraging critical comments, after all! More critical comments—more open conflict.

Some people can’t stand open conflict. So, they use their influence to cause to be enacted such policies, and to be built such structures, as will prevent confrontation, explicit disagreement, direct criticism. (This is usually couched in euphemisms, of course, as calling such things by their simple names also invites confrontation.)

Hence, the Less Wrong of today.