Signaling isn't about signaling, it's about Goodhart 2022-01-06T18:49:48.534Z
What are sane reasons that Covid data is treated as reliable? 2022-01-01T18:47:38.919Z
Where can one learn deep intuitions about information theory? 2021-12-16T15:47:01.076Z
Creating a truly formidable Art 2021-10-14T04:39:16.641Z
Of Two Minds 2018-05-17T04:34:51.892Z
Noticing the Taste of Lotus 2018-04-27T20:05:23.898Z
Mythic Mode 2018-02-23T22:45:06.709Z
The Intelligent Social Web 2018-02-22T18:55:36.414Z
Kenshō 2018-01-20T00:12:01.879Z
CFAR 2017 Retrospective 2017-12-19T19:38:35.516Z
In praise of fake frameworks 2017-07-11T02:12:32.017Z
Gears in understanding 2017-05-12T00:36:17.086Z
The art of grieving well 2015-12-15T19:55:44.893Z
Proper posture for mental arts 2015-08-31T02:29:01.312Z
Looking for a likely cause of a mental phenomenon 2012-12-01T19:43:32.916Z


Comment by Valentine on Signaling isn't about signaling, it's about Goodhart · 2022-01-14T14:23:55.995Z · LW · GW

So you seem to be focused in this post on ways to generate signals.

No. I'm focused on how attention to signals tends to create Goodhart drift.

Comment by Valentine on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T16:14:43.052Z · LW · GW

A different frame on what I see as the same puzzle:

If faced with the choice, would you rather self-deceive, or die?

It sure looks like the sane choice is self-deception. You might be able to unwind that over time, whereas death is hard to recover from.

Sadly, this means you can be manipulated and confused via the right kind of threat, and it'll be harder and harder for you over time to notice these confusions.

You can even get so confused you don't actually recognize what is and isn't death — which means that malicious (to you) forces can have some sway over the process of your own self-deception.

It's a bit like the logic of "Don't negotiate with terrorists":

The more scenarios in which you can precommit to choosing death over self-deception, the less incentive any force will have to try to present you with such a choice, and thus the more reliably clear your thinking will be (at least on this axis).

It just means you sincerely have to be willing to choose to die.

Comment by Valentine on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T16:03:24.758Z · LW · GW

Isn't the main point of acting on cultural differences to make others feel more comfortable? Or to show that you're interested in/you care about their culture?

As viewed from the outside, yes.

I think navigating this truthfully feels different from that analysis on the inside though.

If I think "I'm going to make these people feel comfortable by matching their cultural norms", this can often create the opposite effect. I described the dynamics of this in the OP.

The reason those norms help put people at ease is because of what they imply (signal) about a certain quality of attention and compatibility you're bringing. If you just are attentive then that'll emerge naturally. No reason to think explicitly about the norms.

This is a little like noticing how all things about love and romance are ultimately about sex, but how thinking about it that way can actually jam their ability to function properly. This isn't to deny the centrality of evolutionary forces. It's noticing how thinking about those forces while inside them can create loops that bring in influences you may not want. Hence the "Just be yourself" advice.


Probably most people should lean towards wearing what they feel like more, but having this as a general policy might be quite costly, because people judge a lot based on clothing.

Yep. And if you focus your attention on other people's judgments this way, you totally summon Goodhart's Demon.

So which do you want? The risk of paying a social cost for a while, or the risk of floating along in Goodhart drift?


[…] I think that a large proportion of signalling involves unconscious calculations or self-deception, and it takes a huge amount of work to make those explicit. So the category of "signalling" may, because of that, seem more pervasive and deeper-rooted to me than it does to you.

That's not what's going on here.

I'm guessing you think I'm talking about actually in fact dropping all signaling.

That's definitely not what I mean. That doesn't make sense to me. It'd be on par with "Stop being affected by physics."

When I say "Drop attempts to signal", I'm describing the subjective experience of enacting this shift as I currently understand it.

I mean the thing where, when sitting across from someone on a first date, I can track the thoughts that are about "making a good impression" and either lean into them or sort of drop them. The first one structurally creates problems. The second is less likely to.

On the inside it feels like going in the direction of just not caring about what impressions I do or don't give her. Which is to say, on the inside it feels like dropping all attempts to signal.

But of course my body language and word choice and dress and so on will signal all kinds of things to her. I haven't actually dropped all signaling, or even subconscious attempts to signal.

It's just that by pointing this optimization force away from those signals, I can encourage them to reflect reality instead of the (possibly false) image of myself a part of me wants her to see.

And by holding such a policy in myself, the signals I end up sending will always systematically (at least in the limit) align with the truth of this transparency. Signaling non-deception by not deceiving. Focus — even subconscious — on signals just can't beat this strategy for fidelity of transmission, as best I can tell.

Which is to say, the strategy of "Drop all attempts to signal" is a signaling strategy.

…at least in one analysis. Because thinking of it that way makes it harder to use, it helps to reframe it.

But my guess is that this resolves the difference in perspective here between you and me. Yes?

Comment by Valentine on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T14:18:41.243Z · LW · GW

That's a really good point. It's like stealth obsession with signaling, because there's a need to not signal.

This in turn reminds me of how beginning statistics students often confuse independence and anti-correlation. I'm trying to point at the analog of independence, but if folk who sense that I'm pointing at something real don't grok what I'm pointing at, they're likely to land on the analog of anti-correlation.
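To make the statistics analogy concrete, here's a minimal sketch (the variables are invented purely for illustration): a quantity can be completely dependent on another while having exactly zero correlation, which is a different thing again from being anti-correlated.

```python
# Zero correlation is not independence, and neither is anti-correlation.

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]   # symmetric around 0
ys = [x ** 2 for x in xs]          # fully determined by x, so NOT independent
zs = [-x for x in xs]              # perfectly anti-correlated with x

print(round(pearson(xs, ys), 10))  # 0.0  (uncorrelated, yet totally dependent)
print(round(pearson(xs, zs), 10))  # -1.0 (anti-correlated)
```

The first pair is the analog of what I'm pointing at: no systematic lean either way, even though the two are intimately related. The second pair is the "stealth obsession" failure mode, where the need to *not* signal still pins your behavior to the signal.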

Comment by Valentine on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T14:13:43.175Z · LW · GW

Yep, I'm pretty uncertain too.

I think that at least some politeness falls more under the category of language. Like, I'm in Mexico, and it's often helpful for me to switch to Spanish. I'm totally manipulating my signals there, but it seems… fine? Like I just don't see the Goodhart pressure appearing there at all. Saying "Gracias, ¡hasta luego!" instead of "Thank you, have a good day!" seems perfectly fine.

But some politeness very much does introduce Goodhart drift. "How dare you say that?! That's so rude!" This is a weird signal suppression system that introduces what some folks near Toronto coined as "untalkaboutability" (read as: "un-talk-about-ability"). 

Likewise with pretending to be friendly. Lots of shop owners here will call out to me as I pass saying something like "Hey! Hey there my friend! Tell me, where are you from?" The context makes it pretty obvious that they're being friendly to hook me into their shop. But the reason the hook works at all is because of the plausible deniability that that's their purpose. "Oh, don't be like that! I'm just being friendly!" This is weaponization of signals of friendliness, which is possible because of the Goodhart drift applied to those signals.

But yeah, I have a question around language here, and cultural standards. Like shaking hands in North America vs. bowing in Japan. This is actually a better edge case than is Spanish: It seems fine to recognize and act on the cultural difference… 

…unless I switch because I'm trying to make others feel more comfortable. At that point I'm focusing on the signal in order to manipulate the other, which starts to introduce Goodhart drift. The fact that my intentions are good or that this is common doesn't save the signal from Goodhart's Demon.

Whereas if I can focus on grokking the cultural difference, and then set that entirely aside and do what I feel like doing… I think something like that naturally results in the politeness that matters.

Comment by Valentine on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T13:36:57.900Z · LW · GW

My main takeaway from this post is that it's important to distinguish between sending signals and trying to send signals, because the latter often leads to goodharting.

That is a wonderful summary.


For instance, I make more of an effort now than I used to, to notice when I appreciate what people are doing, and tell them, so that they know I care. And I think this has basically been very good. This is very much not me dropping all effort to signal.

But I think what you're talking about is very applicable here, because if I were just trying to maximise that signal, I would probably just make up compliments, and this would probably be obviously insincere.


There's an area of fuzz for me here that matters. I don't intellectually know how to navigate it.

A much more blatant example is with choosing a language. Right now I'm in Mexico. Often I'll talk to the person behind the counter in Spanish. Why? Because they'll understand me better. If they don't speak English, it's sort of pointless to try to communicate in English.

This is totally shaping my behavior to impact the other person.

But it's… different. It's really different. I can tell the difference intuitively. I just don't know what the difference really is.

I notice that your example absolutely hits my sense of "Oh, no, this is invoking the Goodhart thing." It seems innocent enough… but where my eyes drift to is: Why do you have to "make more of an effort now than [you] used to"? If I feel care for someone, and I notice that my sharing it lets them feel it more readily, and that strikes me as good, then I don't have to put in effort. It just happens, kind of like drinking water from my cup in my hand when I'm thirsty just happens.

I would interpret that effort as maintaining behavior in the face of not having taken the truth all the way into your body. Something like… you understand that people need to hear your appreciation in order to feel your care, but you haven't grokked it yet. You can still manipulate your own behavior without grokking, but it really is self-manipulation based on a mental idea of how you need to behave in order to achieve some imagined goal.

(I want to acknowledge that I'm reading a lot into a short statement. If I've totally misread you here, please take this as a fictional example. I don't mean any of this as a critique of your behavior or choices.)

I'd like to extend your example a bit to point out what I can see going wrong here.

Suppose a fictional version of you in fact doesn't care about these others and is only interested in how he benefits from others' actions. And maybe he recognizes that his "appreciation", if nakedly seen, would cause these people to (correctly!) feel dehumanized. This fictional you would therefore need to control his signals and make his appreciation come across as genuine in order to get the results he wants.

If he could, he might even want to convince himself of his sincerity so that his signal hacking is even harder to detect.

(I think of that as "Newcomblike self-deception".)

The fact that fictional you could be operating like this means that hacking your own signal is itself a subtle meta-signal that you might be this fictional version of you. The default thing people seem to try to do to get around this is to distract people with the volume of the signal. ("Oh, wow! This is sooo amazing! Thank you so, so, SO much!") This is the "feeding psychopaths" thing I mentioned.

If you happen to never notice and fear this, and the people you're expressing appreciation for never pick up on this, then you accidentally end up in a happy equilibrium.

(…although I think people pick up on this stuff pretty automatically and just try to be numb to it. Most people seem to be manipulating their signals at one another all the time, which sometimes requires signaling that they're not noticing what the other is signaling.)

It's just very unstable. All it takes is one misstep somewhere. One flicker of worry. And if it happens to hit someone where they're emotionally sensitive… KABLOOEY! Signaling arms race.

Whereas if you put your attention on grokking the thing and then letting people have whatever impression of you they're going to have, you end up in an immensely stable equilibrium. Your appreciation becomes transparent because you are transparent and you in fact appreciate them.

(…with a caveat here around the analog of learning Spanish. Which, again, I can feel but don't understand yet.)


So I guess the big question is, which things do you stop trying to do?

I agree. That's the big question. I don't know. But I like you bringing it up explicitly.

Comment by Valentine on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T13:13:35.867Z · LW · GW

I don't think you're "dropping all effort" to signal, you're rather getting good at signaling, by actually being truthful and information-focused.

…which is much more likely to fail if I think of it like this while doing it.

I agree with what I think you're saying. I think there's been a definitional sliding here. When I say "Drop all effort to signal", I'm describing the experience on the inside. I think you're saying that from the outside, signaling is still happening, and the benefits of "dropping all effort to signal" can be understood in signaling terms.

I agree with that.

I'm just suggesting that in practice, the experience on the inside is of turning attention away from signals and entirely toward a plain and simple attention on what is.


I don't think we can go so far as to say they're equivalent, just that signaling is yet another domain subject to Goodhart's law.

I agree. I didn't mean to imply otherwise.

(I imagine this is a reaction to the title? That was tongue-in-cheek. I said so, though maybe you missed it. It was meant to artistically gesture at the thesis in an entertaining way rather than as a truth statement accurately summarizing the point.)

Comment by Valentine on What are sane reasons that Covid data is treated as reliable? · 2022-01-05T23:45:55.575Z · LW · GW

Has personal testimony from our own social groups become the best we can do?

Sadly yes, at least on my side.

I think your questions are very sane. Sadly I'm not the person to do this kind of data collection. The way some people have the opposite of a green thumb when it comes to plants, I have something like that for putting together numerically focused models. As soon as I move away from geometry or contact with physical reality, errors like 2+3=6 dominate and my models' output becomes gobbledegook. I was astoundingly good at geometry and utter garbage at algebra in math grad school.

I think most of the people I'm referring to were pointed at VAERS. This was from months ago, buried in old Facebook threads, so it'd take quite a bit of digging to find and I'm not sure I could. So this is based on a fuzzy impression of seeing that acronym in that context. But I do recall many of them were given a hotline number to call if they got side effects, and in calling the number they got the "Well, the vaccines are safe, so these must be from something else" line.

Without an explicit probability calculation, how exactly are we supposed to determine what the levels of side effects in reality are, vs what the medical data that has been collected and reported suggests, vs what the average person thinks is true?

Yep. This has been part of my problem. I'm living in a sea of vastly deeper uncertainty than the people around me seem to think they're in. I'm hoping to do slightly better than either of "No one knows anything and anyone who claims otherwise is deluded" or "My tribe is right." I've just been having a lot of trouble finding that alternative.

(…and this discussion is helping.)

Comment by Valentine on What are sane reasons that Covid data is treated as reliable? · 2022-01-05T23:33:55.041Z · LW · GW

How many people do you know? What rate are we talking, re: "many people"?

I don't think I can give very useful data here. I can give some rough numbers but they aren't going to be very informative. I stopped bothering to listen to or look for reports of people's vaccine side effects getting rejected after something like ten-ish because I was starting to notice something like overfitting going on in my head.

The important (to me) part was that there were multiple such cases, very distributed, which meant there's some kind of bureaucratic mechanism in place (as opposed to one grumpy bureaucrat somewhere). I knew I couldn't see it, and I observed that no one seemed to be talking about it (except the disgruntled vaccine-injured folk who were feeling swayed by the conspiracy theorists), which made the confidence folk were asserting about "The vaccines are safe & effective" look like mindless propaganda repetition to me even if it accidentally happened to be correct.

I was hoping for an update on that here. I've gotten quite a few others. Sadly on this one I'm not seeing much in the way of hope for clarification just yet.

Comment by Valentine on What are sane reasons that Covid data is treated as reliable? · 2022-01-05T23:25:48.165Z · LW · GW

Yep, that does seem reasonable.

Several of the people I talked to or indirectly listened in on said they'd been given a number to call if they got any side effects. Then when they got side effects and called, they were given the "The vaccine is safe so this must be something else" line.

Clearly that's not everyone's experience. But since I don't know the structure these people encountered in almost any detail, my net emotional update was "Fuck this 'data'."

Comment by Valentine on What are sane reasons that Covid data is treated as reliable? · 2022-01-05T23:21:59.720Z · LW · GW

Thank you. This is clear and points me in directions that let me explore more and see through the fog of war.

Comment by Valentine on What are sane reasons that Covid data is treated as reliable? · 2022-01-05T23:20:14.505Z · LW · GW

Are all those people you are talking about outside of the rationality community?

Yep. As far as I know, but I'd be pretty surprised if any of them were here.


We have people we pay to do contact tracing.

Would you be willing to point at more details about this? I recall seeing a lot about how we weren't doing adequate contact tracing, but not much on how we have been.


From a conversation I had with a doctor, it seems that our medical system generally does a lot fewer autopsies than we did 20 years ago.

Mmm. Good to know.

Although that basically means the problem with data collection I was describing is actually a step farther up the chain. That cremation isn't where the data are getting destroyed. If they're not even bothering to verify the causes of death via autopsies and there was (is?) financial incentive to conclude "Covid"… well, I believe in incentive landscapes.


The labs that do PCR testing retest some of the positive tests with variant-specific tests. Different countries have different policies about that. 

Ah. And some people go through the different countries' policies and numbers and do some data crunching to extrapolate something? Okay. Who are these data crunchers then? All this is an opaque screen from where I'm standing. I just see final numbers asserted in public.

(Thank you, by the way. Gratitude for the energy you've put into answering this.)

Comment by Valentine on What are sane reasons that Covid data is treated as reliable? · 2022-01-05T23:12:27.098Z · LW · GW


I'd have to check, but I think it was the VAERS system that these folk were told to report to, and who turned down the data based on the circular logic I described in the OP.

But this is based on my recollection of that acronym looking familiar in this context. Don't take that too seriously. Just a little seriously.

Comment by Valentine on The Map-Territory Distinction Creates Confusion · 2022-01-04T22:12:41.604Z · LW · GW

The sentence 'snow is white' is true because that sentence predicts (relation) experience (reality).

I'll give my interpretation, although I don't know whether Gordon would agree:

What you're saying here isn't my read. The sentence "Snow is white" is true to the extent that it guides your anticipations. The sentence doesn't predict anything on its own. I read it, I interpret it, it guides my attention in a particular way, and when I go look I find that my anticipations match my experience.

This is important for a handful of reasons. Here are a few:

  • In this theory of truth, things can't be true or false independent of an experiencer. Sentences can't be true or false. Equations can't be true or false. What's true or false is the interaction between a communication and a being who understands.
  • This also means that questions can be true or false (or some mix). The fallacy of privileging the hypothesis gestures in this direction.
  • Things that aren't clearly statements or even linguistic can be various degrees of true or false. An epistemic hazard can have factually accurate content but be false because of how it divorces my anticipations from reality. A piece of music can inspire an emotional shift that has me relating to my romantic partner differently in ways that just start working better. Etc.

So in some sense, this vision of truth aims less at "Do these symbols point in the correct direction given these formal rules?" and more at "Does this matter?"

I haven't done anything like a careful analysis, but at a guess, this shift has some promise for unifying the classical split between epistemic and instrumental rationality. Rationality becomes the art of seeking interaction with reality such that your anticipations keep syncing up more and more exactly over time.

Comment by Valentine on The Machine that Broke My Heart · 2022-01-01T12:32:51.928Z · LW · GW

I wondered that too. I miss the old LW social norm where downvotes were expensive and came with an explanation. Here I'm just left shrugging because I'm not sure what update to make.

(I mean this as a sharing of my experience, not a critique of how LW is designed. I'm sure Oli & Ben et al. put a ton of thought into details like this and landed on the current karma model for good reasons.)

Comment by Valentine on The Machine that Broke My Heart · 2021-12-30T15:27:58.458Z · LW · GW

I didn't mean to imply that per se. But yes, I do see that playing a strong role here, and that's why I thought to bring Forrest's article forward here.

Comment by Valentine on The Machine that Broke My Heart · 2021-12-30T13:18:16.157Z · LW · GW

This is a beautiful piece of writing. I can feel you clearly here. Your care, your hope, your desolated disappointment. This short post took me on a journey. Thank you.

As a probably annoying but potentially enlightening aside, you might get a lot out of reading Lynne Forrest's article on Karpman's Drama Triangle. My guess is that you haven't touched the true core of your heartbreak yet. If you want to, this might be a powerful direction for doing so.

Comment by Valentine on Where can one learn deep intuitions about information theory? · 2021-12-27T12:19:48.477Z · LW · GW

Thank you for this context & perspective. I found this quite helpful.

Comment by Valentine on Universal counterargument against “badness of death” is wrong · 2021-12-27T12:18:24.273Z · LW · GW

I don't see the 'intricacy' at all.

Yes, I agree. You don't.

I'm not available for arguing you into seeing it. If you can't see it from what I've already said, then I'm not the one to show you.

Comment by Valentine on Universal counterargument against “badness of death” is wrong · 2021-12-23T18:30:27.137Z · LW · GW

I don't intend to continue this exchange. Just so you know. I've walked this particular road plenty of times already and am just not interested anymore.

But I sincerely wish you well on your journey.

Comment by Valentine on Universal counterargument against “badness of death” is wrong · 2021-12-22T16:57:34.726Z · LW · GW

All these questions are not about boredom or overpopulation – they are something like a protection from a new idea. Or protection against fear of death. It's like Stockholm syndrome, where a victim takes the side of a terrorist.

FWIW: This is part of the standard immortalist memetic immune system response. It's stuff like this happening in my own head for decades that prevented me from really listening to people.

At the risk of being really annoying to you, here are a few related elements to point at what I mean:

  • I basically said that these counterarguments weren't really about boredom or overpopulation. See the analogy with the partner nominally talking about groceries. So why say that?
  • It's pretty important which new idea they're encountering. It's not a generic response. Most people don't respond to first hearing the idea of blockchain with "But what if you get bored?"
  • The thing about protection against fear of death is an assumption. It's a super duper common one in immortalist circles. Likewise with deathist arguments being about Stockholm Syndrome. It's a clear hypothesis. But that's not how it's used. It's not presented to stir curiosity and exploration of the deathist psyche. The default reason for saying things like this is to dismiss the deathist concerns as basically delusional. (Speaking from decades of personal experience of enacting this disrespect.)


If people were really afraid about overpopulation, they should ban sex first. 

Well, if the single thing they cared about was overpopulation, then sure. The math checks out.

I think it's a combination of (a) they don't really care about overpopulation per se, (b) they're bad at exponential reasoning, and (c) they care about a whole system of things that are all interconnected but typically don't notice the system as a whole.

But that's just my guess.


You are right: they feel that something is amiss. The idea of immortality without an image of paradise is really boring. Becoming immortal without becoming God and without living in a galactic-size paradise is wrong and they feel it.


Okay, so: If I could eliminate aging in my body as is, right now, I would.

I'm totally fine with not having an image of paradise that this leads me to. I'm fine with not clearly seeing how this makes me God. That's fine.

I'm happy to live as this human for a few centuries. Wandering the Earth.

Even if I'm the only one.

I don't intuit anything deeply wrong with that. Maybe I'm numb and stupid here. Maybe whatever that intuition others get was burned out of me by my immortalist family.

But my guess is, the deathist cringe isn't about a lack of vision of paradise or of becoming God.

Like, another deathist counterargument goes along the lines of "But people I love will die. I'd have to deal with that again and again, forever."

It's curious how often this shows up when talking about ending aging for everyone. It doesn't make logical sense given the thought experiment: those loved ones would have their aging cured too.

And yet.

So what's up with that? If I had to hazard a guess, it's that they're carrying collective (and maybe personal) trauma from losing loved ones. The burden of the mortality of those who came before us across the aeons. And they just don't know how to orient to that titanic burden.

I don't think offering them a vision of Heaven would address that. What of the scar that Yehuda left? There's a soul-rending agony and grief here to reconcile with. The engineering question is important too, but on its own it comes across like the stereotypical "man fixes women's feelings" scenario.

I think this whole scenario is way more intricate and nuanced than immortalist narratives tend to allow for.

Comment by Valentine on Universal counterargument against “badness of death” is wrong · 2021-12-22T15:51:11.060Z · LW · GW

FWIW, I spent quite a long time staring at things like this. I started as a die-hard immortalist (ha!). I was raised in a family that signed me up for cryonics when I was 5. They helped inoculate my mind against all the standard deathist stuff. It's in my memetic DNA at this point.

And yet, now I am something like… mmm, an integral immortalist? Which is to say, I think deathist arguments are often bad articulations of something true. And it seems to me that the usual immortalist counterarguments to deathism are attacking the "bad articulation" part instead of orienting to the "something true".

(But I'm still signed up for cryonics and would love to stick around for at least a few centuries.)

One of my favorite examples is "But won't you get bored?" Some standard immortalist rebuttals go along the lines of:

  • "What if you get bored now? Would you want to die? Or would you rather stick around and see if things get better?"
  • "I'm pretty sure I'll find something to do given infinite time."
  • "I'd like the option to find out whether I'll get bored, thank you very much."
  • "…versus being dead."

The thing is, none of these address the reason why people say things like "But won't you get bored living forever?"

One reason is that they've heard this concern and are just repeating it. Basic memetics. In which case these immortalist counterarguments are just acting as memetic immune responses for the immortalist. They're basically useless for opening up a viable incentivized path from the deathist mental frame to the immortalist one.

But that's a relatively boring case. I think there's one that's way, way more interesting:

I think this is often expressing a real concern people are experiencing in the agony of being alive.

How much more alive were you when you were five years old? How ready were you to find something interesting to explore, do, play with? As adults, we can reach so, so much farther with our understanding… and yet. Maybe you in particular are blessed with ever-increasing vitality and fascination. But most people in practice experience something like a gradual entombing.

A sign of this is the wake-up call that a medical death sentence for a loved one brings. If you knew your next conversation with a beloved were your last, there's a kind of lucidity that would be present in looking into their eyes one last time. Old arguments feel different — not irrelevant, but without their weight either.

Yes, we can analyze this in terms of iterated vs. finite prisoner's dilemmas, etc. But that just produces more thoughts and analysis. On the inside, in practice, death brings a freshness that's absolutely dear to the heart.

The horror of the deathist "But won't you get bored?" is often pointing at this. If you take humans as they currently are and you remove death… where does the freshness come from? It doesn't? Ever? Except for some vague "We'll think of something" that feels like it's missing the point?

It's as though by default we're slowly getting buried alive, but at least at some point you die. But if you remove death without orienting to the burial…


This stuff is really easy to miss if you boil down the expressions of deathist fears into a few core principles and then logically beat them down. It's great for memetics, but it doesn't speak to the heart of the matter.

A loud hint of this is how deathists will usually either disengage or switch arguments once you successfully start countering the argument they've put forward. It has a similar structure to when a partner comes up and says something like "I feel hurt and disrespected. The way you, um, brought in the groceries…." If you counter with "Well, here are the logical reasons why I brought in the groceries the way I did, so see it actually makes sense", you're presuming that the groceries are the real reason for them bringing up the conversation. What if it was the example that just happened to come to their mind for something that's hard to articulate and maybe isn't fully conscious? Well, now you've shut down the one avenue they've thought of to try to bring their felt sense into contact with you.

Similarly: "Well, okay, yeah, I guess I don't want to kill myself when I get bored… but what about overpopulation?"

There's something they're intuiting is wrong about the effort to live forever. They just don't know how to say it, and they probably don't know how to think it.

That doesn't mean it has no value.

Comment by Valentine on Where can one learn deep intuitions about information theory? · 2021-12-16T16:31:33.641Z · LW · GW

Oh, I would certainly love that. Statistical mechanics looks like it's magic, and it strikes me as absolutely worth grokking, and yeah I haven't found any entry point into it other than the Great Formal Slog.

I remember learning about "inner product spaces" as a graduate student, and memorizing structures and theorems about them, but it wasn't until I had already finished something like a year of grad school that I found out that the intuition behind inner products was "What kind of thing is a dot product in a vector space? What would 'dot product' mean in vector spaces other than the Euclidean ones?" Without that guiding intuition, the whole thing becomes a series of steps of "Yep, I agree, that's true and you've proven it. I don't know why we're proving that or where we're going, but okay. One more theorem to memorize."
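(A concrete illustration of that guiding intuition, my own sketch rather than anything from a textbook: the L² inner product treats functions as vectors, with the continuous integral playing the role of the familiar sum of coordinate products.)

```python
# A "dot product" outside Euclidean space: the L2 inner product on functions,
# approximated with a midpoint Riemann sum. Same shape as sum(x_i * y_i),
# except the discrete index i becomes a continuous variable t.
import math

def inner(f, g, a=0.0, b=1.0, n=10_000):
    """Approximate <f, g> = integral of f(t) * g(t) dt over [a, b]."""
    dt = (b - a) / n
    return sum(f(a + (i + 0.5) * dt) * g(a + (i + 0.5) * dt)
               for i in range(n)) * dt

# sin and cos are "orthogonal" over a full period, just like perpendicular
# vectors have dot product zero:
print(inner(math.sin, math.cos, 0, 2 * math.pi))  # ~0

# And <f, f> acts like a squared length: <sin, sin> over [0, 2*pi] is pi.
print(inner(math.sin, math.sin, 0, 2 * math.pi))  # ~3.14159
```

Once you have that picture, "orthogonal functions", Fourier coefficients as projections, and the Cauchy–Schwarz inequality all become the familiar Euclidean facts restated, rather than one more theorem to memorize.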

I wonder if most "teachers" of formal topics either assume the guiding intuitions are obvious or implicitly think they don't matter. And maybe for truly gifted researchers they don't? But at least for people like me, they're damn close to all that matters.

Comment by Valentine on Where can one learn deep intuitions about information theory? · 2021-12-16T16:17:17.233Z · LW · GW


Comment by Valentine on Where can one learn deep intuitions about information theory? · 2021-12-16T15:59:48.718Z · LW · GW

I feel like I know a fair amount of game theory already. Is there a good bridge you could point toward between game theory and information theory? I was able to debate details about emergent game-theoretic engines for years, and reasoning under uncertainty, without the slightest hint about what "bits of information" really were.

Comment by Valentine on Where can one learn deep intuitions about information theory? · 2021-12-16T15:56:35.610Z · LW · GW

Do you have suggestions for where to dive into that? That same gap between "Here's a fuzzy overview" and "Here's a textbook optimized for demonstrating your ability to regurgitate formalisms" appears in my skimming of that too. I have strong but fuzzy intuitions for how thermodynamics works, and I have a lot of formal skill, but I have basically zero connection between those two.

Comment by Valentine on Is "gears-level" just a synonym for "mechanistic"? · 2021-12-14T00:15:00.372Z · LW · GW

I for one don't plan on using "mechanistic" where I currently talk about "gears-like", simply because I know what intuition the latter is pointing at but I'm much less sure about the former. Maybe down the road they'll turn out to be equivalent. But I'll need to see that, and why, before it'll feel right for me to switch. Sort of like needing to see and grok a math proof that two things are equivalent before I feel comfortable using that fact.

Not that I determine how Less Wrong does or doesn't use this terminology. I'm just being honest about my intentions here.

A minor aside: To me, "gears-level" doesn't actually make sense. I think I used to use that phrasing, but it now strikes me as an incoherent metaphor. Level of what? Level of detail of the model? You can add a ton of detail to a model without affecting how gears-like it is. I think it's self-referential in roughly the style of "This quoted sentence talks about itself." I think it's intuitively pointing at how gears-like a model is, and on the scale of "not very many gears at all" to "absolutely transparently made of gears", it's on a level where we can talk about how the gears interact.

That said, there is a context in which I'd use a similar phrase and I think it makes perfect sense. "Can we discuss this model at the gears level?" That feels to me like we're talking about a very gears-like model already but we aren't yet examining the gears.

I interpret the opening question as being about whether the property of being visibly made of gears is the same as "mechanistic". I think that's quite plausible, given that "mechanistic" means "like a mechanism", which is a metaphor pointing at quite literally a clockwork machine made of literal physical gears. The same intuition seems to have inspired both of them.

But as I said, I await the proof.

Comment by Valentine on Is "gears-level" just a synonym for "mechanistic"? · 2021-12-14T00:01:02.960Z · LW · GW

Huh. That's a neat distinction. It doesn't feel quite right, and in particular I notice that in practice there absolutely super duper very much is a sliding scale of gears-ness. But the "no black box" thing does tie together some things nicely. I like it.

A simple counterpoint: There's a lot of black box in what a "gear" is when you talk about gears in a box. Are we talking about physical gears operating with quantum mechanics to create physical form? A software program such that these are basically data structures? A hypothetical universe in which things actually in fact magically operate according to classical mechanics and things like mass just inherently exist without a quantum infrastructure? And yet, we can and do black-box that level in order to have a completely gears-like model of the gears-in-a-box.

My guess is you have to fuse this black box thing with relevance. And as John Vervaeke points out, relevance is functionally incomputable, at least for humans.

Comment by Valentine on Kenshō · 2021-12-13T23:16:56.618Z · LW · GW


FWIW, I've come to think that wireheading is an anti-concept as applied to humans. It's one of those "presume the conclusion" type mental movements. In practice it seems to act like a back door for arguments based on belief residue like the Protestant work ethic / "pleasure is sinful" stuff.

(A little more concretely: It makes sense to talk about some system engaging in wireheading only when there's a goal imposed from outside the system. It's like glorified Goodharting. But if the goals come from within the system, it stops being clear what "wireheading" means. On the inside it might feel like "Oh, I just found a vastly easier way to get what I want — and what I want wasn't what I thought I wanted!" Without an external evaluation criterion, that actually just becomes correct.)

With that said, I think I intuit what you mean by calling music and Looking "wireheading". I don't mean to dismiss that. Stuff like, if you meditate enough to get Great Insights™ such that you don't bother to eat food anymore and you die, that seems like a pretty dramatic failure and kind of throws those "insights" into question.

Comment by Valentine on Grading scheme from Valentine Smith · 2021-10-24T16:29:31.938Z · LW · GW

Christian speaks truly. I don't think there's a write-up.

I can give a very quick version here. One minor correction to the query though: I didn't have the students grade each other. I had them grade themselves. Part of the whole point was to tighten the feedback loop the students were getting so that the delay between "I tried this math problem" and "Here's what you got right, and here's what you could do to do better going forward" was as short as I could imagine making it given the constraints. I also wanted to give them mental practice correcting their own work.

So with that, here's the method in outline:

  • I taught two 90-minute math classes a week, Tuesday & Thursday afternoons, with a group of about 30 students. (This means I could see them all.)
  • Every Thursday there was a quiz. I didn't grade homework, but I drew inspiration from the homework to create quiz questions.
  • I set up the quiz questions on a PowerPoint. Students were to answer one question at a time on their papers (which I'd printed out for them) using a black or blue pen.
  • After giving them some time with the question, I'd have them put down their black/blue pens and pick up their red pens. I'd then click the PowerPoint forward to reveal the answer. We usually had a brief discussion to answer questions or get clarification about whether a certain thing "counted".
  • We'd go back and forth like this until the end of the quiz, at which point I'd have them hand in their papers.
  • My weekly flight up to Berkeley was on Thursday nights, so I usually reviewed their papers in the airport or on the plane and entered grades as needed. Part of this was to look for obvious cheating, and part was to help me stay familiar with the work and style of each student.

Some bugs I noticed:

  • Sometimes I'd discover during the quiz that our discussion after answering one question would make the next question downright trivial. It's long enough ago that I don't remember clear examples. But there were a few cases where I wondered about just flat-out skipping a question I'd put up there.
  • Obviously, cheating is a potential issue. I consciously decided not to care. I figured it was rare enough to not worry about, and that if it happened and I missed it then so be it.
  • It takes a long time during class to do this. A 10- or 15-minute quiz will take well over an hour. I'm not sure this is really a bug though. This meant the students were spending class time practicing the right skills. If the goal of a class were to train the students to actually have the skills, this method seems pretty good and could probably become excellent with iteration. But at the time, it was quite tricky because in practice I had only about half the lecture time of a normal iteration of the course, and I was still expected to "cover" the same amount of material. So the curriculum sometimes felt rushed outside the quizzes, and I felt a kind of pressure to make the quizzes take less time in class somehow. (That said, this is an artifact of classes caring more about talking about ideas than about encouraging students to actually master a level of skill. This conundrum is secretly baked into most undergraduate math courses I've seen.)
  • Students end up practicing a different skill than the one they'll need for the final exam. Time- and question-management mostly don't appear in this style of doing quizzes. Most college students have plenty of practice with that anyway, so maybe this isn't actually a problem. I just flat-out don't remember whether my students had a problem with the final exam this way. (That would have been May 2012, which was also when CFAR was running its first workshop. I cared a lot more about the workshop than I did about those students' grades.)

I'm sure there are others. This is off the top of my head, inspired from memory.

Does this answer your question?

Comment by Valentine on Creating a truly formidable Art · 2021-10-22T02:42:03.093Z · LW · GW

Quite welcome. Glad that helped. :-)

Comment by Valentine on Creating a truly formidable Art · 2021-10-22T02:41:44.481Z · LW · GW

That seems pretty darn good to me!

Comment by Valentine on Creating a truly formidable Art · 2021-10-19T00:57:52.341Z · LW · GW

Very cool. Thank you for explaining all that.

Comment by Valentine on Creating a truly formidable Art · 2021-10-19T00:56:56.552Z · LW · GW

Thank you for this. I learned stuff from it!

I'm glad to hear it! :-)


I did this [preacher-hacking] some back in the day!

That was really neat to read. Thank you for sharing!


People like that often have good hearts, but just (as far as I can tell) have never really experienced intellectual rigor (or maybe never aimed it at themselves and what they love).

I agree.

Also, for what it's worth, my impression is that this is a quality of a mind that's mostly independent of a mind's ability to be jammed. If there were a knob for a given mind that would let its owner increase or decrease its logical rigor, I don't think having more rigor would prevent jamming. It just changes what kinds of things can and can't jam it.

One of my favorite recent examples was "You cannot know that this statement is true." It's like an interpersonal version of the "This statement is false" thing. I know the statement is true, and you can know that I know it's true, but you can't know it's true.

This seems to cause some minds (roughly medium to strong rigor minds) to hiccup and glitch. It causes much less rigorous minds to sort of eyeroll or glaze over as a deflection. Some minds (like those trained to think rigorously in terms of self-reference) can navigate it enough to note "Oh, that's a cool example" and smoothly move on.


Also, I stopped because I began to worry that I might break or non-trivially harm someone who was like... like "staying on the wagon through the power of Jesus"? It seems ethically acceptable to poke at my OWN foundations... but maybe not those of random strangers?

I like the deeper thing you're pointing at here. Something like, noticing that certain moves can be ontologically violent, and taking that into account when it comes to being kind to others.

That said, I feel uneasy with the "ethically acceptable" question. It seems to compress too much and dances dangerously close to Drama Triangle dynamics (namely, taking responsibility for someone else's choices).

But my heart agrees with the intuition I think you're pointing at here.


Basically, I think that a commitment to truth does not have to be the cause of every motion in every instant. When interacting with humans, I think other factors than the truth are properly relevant to the decision. Like promises, cherishing, duties, apologies, precedents, role modeling, caring, and so on.

I agree. I'm not suggesting abandoning those things. I'm suggesting an internal design in which they flow from devotion to truth.

In practice I find that at least at the stage I've reached, I'm not focused on truth in every moment, and often this is perfectly fine. It's just that when I'm not paying attention like that, it's easy for some program to boot up that claims to be important for thus-and-such reason but is actually there to distract me.

To the extent that I've cleaned up my autopilot so that certain proxies are pretty reliably connected to the process of leaning toward truth, then I can loosen the reins a bit and still get the desired outcome.

For instance: Sure, promises work okay as a proxy… as long as the promises aren't being made in order to be liked so as to fill a sense of inadequacy (for instance). If that's what's happening, then this idea that it's fine to drop focus on truth in order to keep promises can keep me confused and never addressing the sense of inadequacy directly. At some point on the devotional path, it's necessary to turn to cherished promises and say something like "This too. This too can die. Let others distrust and hate me for letting them down if they must. I put this on truth's altar and set it ablaze. I will keep only whatever remains."

Thereafter, the reason for caring about promises becomes the linking power between keeping them and the truth — in which case I keep them not because "I made a promise and that's the game theory" but because the act of keeping a given promise is part of my devotion to truth. It comes before reasons. The reasons are in service to the real thing underneath.

(I also want to add a periodic reminder I've said several times in these comments, but I imagine is easy to miss or forget: I'm not saying that this is what everyone should do. I'm saying that this kind of utter devotion seems to me both powerful and necessary for Beisutsukai.)


My first step aiming at a pure dojo would be to find co-founders and construct with them a collection of finite commitments and easy promises with failover clauses, that were unlikely to be regretted, to lead the dojo as a group for at least... 3 years? Then, in the first year, maybe you decide it is likely to be shut down.

Enough promises from enough adequate co-founders would make things very likely to work well, I think?

I have a hard time imagining how this would actually work. It feels too… outside to me. Like it's pretending we can program the social environment we're in as though we're looking at it from the outside. And like it's also trying to replace "Do the thing and not the non-thing" with semi-formal structures, which from what I can tell basically always decays via Goodhart.

But maybe you have a clever way of navigating that…? I'm happy to have my skepticism dispelled here. I'd just need to understand better what you're thinking first.


This links, in my mind, to the Drama Triangle stuff. Like: why not... just not do drama? Maybe the drama triangle is just a locally relevant pet theory of a therapist who sees a lot of people who need her kind of therapy?

You might find my reply to bglass clarifying. I wonder how much of this is just a matter of what I mean by Drama here being unclear due to a paucity of examples.

For my part, I really doubt this is just a therapeutic selection effect. I spent a few years trying to name a cultural engine and later found Lynne Forrest's application of Karpman's Triangle much, much better at communicating sight of the engine than I'd worked out how to on my own. I'd even encountered Karpman's Triangle a lot before, but it didn't land as useful until I combined Forrest's take with the core thing I was seeing in basically every human culture.

I still think the core thing of "I need you to do/be a certain thing for me to be okay" is better at capturing the essential logic I care about in the Drama Triangle. But in practice, splitting it up into these three strategy types and their interactions seems to make it a lot easier for people to learn how to notice it in themselves and others.

With all that said, I do think the essence really amounts to "why not… just not do drama?"

I think the answer is basically that most people — and basically all the loud or visible collectives — are highly addicted to the sensations of drama. It lands a little like "why not… just stop smoking?" Ultimately, yes, of course. But in practice I think it's trickier than you seem to think it is.

And if I'm mistaken about that, I'd very much love to learn about it. I'd be quite happy to learn of a shortcut.

(Aside: I think this language around addiction is a bit confusing and doesn't need to be. It's more like, addiction is avoidance of a sensation or inner experience, and the addictive substance or behavior is an effective and habitual distraction from that intolerable sensation in a way that doesn't actually address the cause of the sensation in question. This habit loop's default is to deal with this problem by increasing the addictive substance's intensity in a kind of arms race, but the actual way out is to turn around and build capacity to be with the "intolerable" experience.)


I enjoyed the "silence perspective" here, and I wonder if any of it was connected to an almost literally opposite system of mental practices, that was recently advocated to intentionally create banishable and invokable tulpas that would pop up and give helpful advice in people's inner working memory as auditory/visual/etc content?

In short, no. I still haven't read Duncan's post. (I felt like I probably got the basic idea from the title combined with having had many conversations with Duncan where he explicitly used shoulder advisors.) The system (as I understand it) is too shallow to cut to the core of confusion. I mean, what are you using to pick your advisors? If you focus on listening to them as your basis, doesn't this mean you end up with a level of clarity that's roughly the weighted average of your advisors?

I do think it's a really good technique. I just don't see it as on the same tier as cultivating silence.

(It also feels important to acknowledge: Maybe Duncan addresses these and related questions in his post. I might go read it and find out. But in terms of answering the question about how connected the Void stuff is with Duncan's post, the answer will lie in my impression of what Duncan probably talks about there, and my barely educated impression of Duncan's point falls far short of the Void.)


> In particular, it's unclear whether and exactly how a given person would create a rationality dojo as part of their own training.

The thing that seems like it would make sense here is if you were in a business, and you were bottlenecked on hiring talent, and the talent had to be able to engage in cooperative creative systematic problem solving under uncertainty in order to be worth hiring, and then if you can't hire enough of it already on the open market...

...maybe you hire people close to the target, give them on-the-job training in a "working dojo", and then promote them to the real job when they can handle the real job?

I wonder about editing that sentence in the OP to clarify. I didn't mean that it's unclear how someone could possibly create a dojo as part of their training. I meant that given a Bob, would Bob end up creating a dojo as part of his training, and if so then what would his dojo end up looking like? That seems hard for me to predict.

(I get glimmers of intuitions about this for some people, but I haven't had much of a chance to calibrate those intuitions.)

But setting that aside to look at your idea:

I like the part where it's a kind of grounded and real. I don't care for the part where their livelihood gets tied to training. That slips in perverse incentives. I don't know of a good way to overcome those here.

I'm reminded of teaching math in university years ago. I tried as hard as I could at the time to find a way to teach well. And I innovated some tricks I still use to this day. But in retrospect, the main hurdle I could never overcome was that the classes I was teaching were required for a wide swath of majors (precalc, business calc, business statistics, etc.). I know I deeply touched a very few students in my years there, and I'm grateful for that opportunity. But for the majority? I could never have hoped to overcome the fact that they were there only because they believed they had to be.

Today I'd be much, much more wicked and direct. While they still have a chance to switch which version of the class they're in (i.e., in the first week), I would tell them that they're in for a bizarre ride and that they should leave if they want to, and then demonstrate it ASAP. I'd give them core tools for sovereignty (like the Drama Triangle and somatic self-soothing), spell out the trauma structures associated with math and child-rearing, and focus on them clearing those in themselves first. Every step of the way thereafter, I'd hone in on every breath of bullshit and slay it, and as a class we'd collectively look at how (a) they can each take full conscious ownership of their lives, including whether and how they wanted to navigate my particular "math" class; and (b) how they might orient to passing the end-of-term math test given their resources, including the time remaining in the course. I might very well make the final exam worth 100% of the course grade to help capture the spirit of this.

(Fun fact: "mathematics" comes from the Greek for "one who knows". Mathema was literally the art of knowing. The above is, in my opinion, not just a prerequisite structure for teaching math but is the art of mathema applied to the bizarre situation of a required academic class on computation.)

Maybe there's a way to modify that approach for jobs…?

But I tried to figure out something very closely related for over a year and couldn't figure it out on my own. I got a solution, but it amounted to "Don't allow any perverse incentives at all. If any threaten, put them on your shoulders, not your students', and only if it's natural to make your growing immunity to it part of your practice."

So… shrug? I'd be curious if you actually have a solution here in the shape you're pointing at.


At points in your essay, I was reminded of the cultural aspects of the Toyota Production System.

There's a scene in Spear's classic book where someone has had ~3 months of training in "doing manufacturing optimization experiments slowly and correctly" (as part of training to be a manager with a Japanese boss), and then he moves to a new place and gets 3 days of intensive practice, sorta "speedrunning" the previous very slow practices...

Oh, there's something lovely and resonant here. Maybe this is what you meant by "working dojo"…?

This has gears turning in the back of my mind. Like it's fitting a piece together that makes the challenges/pressure-tests make more sense.

It's funny watching my mind trying to solve that. It's not something I'm consciously determined to do. But it's apparently a fun puzzle for me!


I like the groundedness of a tool space as a foundation for a rationality dojo. Like, producing real things and solving real problems. That's very resonant.

I think there's something slippery happening in terms of the Art being domain-general but cashing out in domain-specific ways.

I don't have succinct tidy thoughts at this point. I like the inspiration food.

Thank you for your thought storm. :-)

Comment by Valentine on Creating a truly formidable Art · 2021-10-18T17:12:53.019Z · LW · GW

I like this. Thank you for bringing it up here.

How does it work? I'm not finding an obvious instruction manual or introduction. The first one seems like the first puzzle, but I'm not quite sure how it works. Would someone who wants to jump in just… reply in the comments with what they try to do? Or is this a template for an RPG session someone could run with others? Or something else?

I'd shied away from RPG style simulated practice because of the difficulty with embodied integration. I find it far too easy to view my character from the outside and solve their situation like a puzzle, rather than experiencing myself as the character who's actually encountering the confusion and psychological states and trying to navigate them from the inside. From a skim, it looks like you're navigating this in roughly the same way Eliezer seemed to be trying to do in creating the genre of "rationalist fiction" (where you show rather than describe the experience of making the inner mental movements that produce clarity).

Comment by Valentine on Creating a truly formidable Art · 2021-10-18T16:27:50.024Z · LW · GW

I'm not sure why you're saying this. I wonder if you're seeking accolades for your cleverness…?

The whole point of the preacher puzzle isn't to have a solution, but to find one.

If this were actually a Beisutsu challenge and I were your sensei (with my current skill set, which is a paradox, but I'll ignore that for now), I would focus you on two points:

  • "This is a distraction from the point. What in you is pulling you in a different direction? Look there and address it."
  • "Now taboo this trick with the Song of Solomon and try again."

I want to acknowledge and emphasize that I'm not a Beisutsu sensei, and even if I were you have not asked to be my student.

I'm offering this anyway because I imagine the hypothetical correction will be clarifying for some people — possibly including you!

Comment by Valentine on Creating a truly formidable Art · 2021-10-18T16:13:47.187Z · LW · GW

Reply part 2:

Could you give a specific example of a terminal value failing to fit reality, and what abandoning it/changing it to fit reality would look like?

I can answer what I think is the spirit of this question. I've been playing along with the "terminal value" frame, but honestly I think it confuses things. Rather than trying to stick to the formal idea of a terminal value in humans, I'll just point at what I'm talking about.

One example: deconversion. If you believe in God and love Him and this brings you tremendous meaning and orientation in your life, dare you take seriously the arguments that He doesn't exist? Dare you even look? This isn't just a matter of flipping a mental "god_exists" Boolean variable from "true" to "false"; for many people this can be on the level of losing God's love and approval, and like the very force of gravity is no longer His will but is instead some kind of dead monstrosity. That's something you risk if you're more interested in truth than in being close to Him. What in you would need to shift so that your inner answer is "Yes, yes, a thousand times yes, let me see the truth"?

Another example: breaking up with a friend. Maybe you've known someone since childhood… but some of this Drama Triangle stuff starts to click and you see that actually everything about your connection is based on (say) them Rescuing you and you playing Victim. When you try to talk to them about this, they brush it off, maybe even playing the Victim card themselves ("I just care about you! Don't you appreciate all that I do for you?"). You could just keep playing along… or you could notice that you're actually a "no" for playing this dynamic with anyone anymore, even your old friend. But maybe there's nothing deeper than the Drama dynamic, and maybe they won't be available for building something more. So what do you do? What resource in you do you call upon in order to choose to prefer truth even to this long-standing friendship? Are you willing to grieve, and have your old friend feel hurt at you (the shift to Persecutor), and practice standing your ground (i.e., deepening your devotion to truth)? Or do you cherish things as they are more than you want to recognize the deeper truth?

This stuff shows up in a thousand different ways, and my experience is that the more refined my "truth sight" becomes the more micro-level these little opportunities appear. Like, as I write this, is each keystroke devotional? Or am I focused more on making sure I answer your question than I am on whether it's true for me to do so? What in me do I need to acknowledge and let go of in order to have each breath be married to reality?

Does this answer your question?

Comment by Valentine on Creating a truly formidable Art · 2021-10-18T16:13:29.398Z · LW · GW

Really enjoyed this article!

I'm glad to hear it. :-) 


How do you see motivation working once you start abandoning the concept of goals?

It's not really that one abandons the concept of goals. It's that doing serves being, so goals arise and fade within a larger context.

What's your motivation for continuing to live? If presented with two buttons, one of which will let you leave the button situation & continue your life while the other one has you die right on the spot, I imagine you have little difficulty choosing the first one. You might be able to justify your choice afterwards as "survival instinct" or "net positive expected global utility from your remaining life" or whatever… but I'm guessing the clear knowing of the choice comes before all that. Your choice probably wouldn't change whatsoever if you spent a while meditating and calming your reactions, for instance.

(Said differently: the clarity arises from the Void.)

The word "motivation" has a common linguistic root with "motor". It's that which causes movement. So the "motivation" of a stone rolling downhill is gravity. The motivation of a high school student attending college is (often) a whole social atmosphere that acts something like a gravitational field (what I've occasionally heard termed "an incentive landscape" in rationalist circles). There's something very mechanical about the whole thing.

But when we talk about "being motivated" or epic feats like "shut up and do the impossible", particularly when there's any hint of "should" attached to them (like "I should shut up & do the impossible"), there's usually an implication of free will. As though beyond all causes is some kind of power of choice. It's obviously a bit batty when said that way, but we mostly agree not to pay attention to that.

…with the result that we have bizarre statements like "We should end racism." What exactly is that as a choice? It's not at all of the same type as "We should turn off the stove." In practice it's an application of a social force meant to shift the incentive landscape (usually via Drama Triangle dynamics, I'll parenthetically add). But what's causing that force to be applied? If you start tabooing the concept of free will, most statements about social movements and public policy start looking patently insane. If you finish tabooing it, they appear as they are: manifestations of a kind of collective mental software glitch that keeps human minds distant from reality. Stones rolling downhill.

Same for statements like "I should lose weight." With what magical power? By the power of research and effort? If so, can you notice the element of magic being added wherein you somehow mysteriously can make yourself do the research and put in effort as though your choice is beyond all cause?

(The fact that the motivations often aren't beyond experienced causes is part of why shame and inadequacy enter the picture. "I failed, and that means I suck" doesn't make any more sense than "The stone rolled all the way to the bottom of the hill, and that means I suck." Of course, the judgment isn't causeless either.)

Intellectually solving the reductionist puzzle of free will is not at all the same as integrating the insight into your being and perception.

So, what does it feel like on the inside to end all distortions about free will?

I'm pretty sure this is part of what the Void stuff is getting you into contact with.

The place from which you choose to move your fingers is void of experience. It's a kind of empty. Once you make the choice, there's a cascade of experience and the result is basically predetermined by the mechanisms of reality. But the choice itself feels on the inside like it's causeless.

Goals are in the realm of causes. Within sensation. They're part of the machinery of the world.

When you see this clearly and stop pretending that getting somewhere is what existence is about, then your "motivation" emerges from the causeless realm of emptiness. You just do what you want.

Of course, within physics this is still mechanical. The reductionist lens sees that "causeless choice" is basically just how we experience a type of ignorance.

But at least the machinery stops being confused in practice about what the "free will" function is actually doing. And our narratives about ourselves and others stop trying to rely on these magical forces that don't actually exist.

…though that's still described from the outside.

On the inside, it feels silent.

I do because I want to.

"Where does the desire come from?" becomes a koan. The act of looking for the answer points back to the silence.

Which means that the carnival of sensation is much, much less able to control what I choose to do.

Does this answer your question?

(I'll answer your second question in a separate reply since the topics are different. I don't see this done often here… but I think it makes more sense given the nature of upvoting/downvoting, so I'll try it and see what happens.)

Comment by Valentine on Creating a truly formidable Art · 2021-10-17T23:56:12.047Z · LW · GW

Almost everything in this post sounds right to me.



I can see those [Drama Triangle] patterns in argumentation online--a lot--and in a few dysfunctional people I know, and indeed in my own past in some places. Regarding my real-life modern friends, family, and coworkers, it doesn't seem like anyone relates to each other through those roles (at least not often enough to describe it as 'utterly everywhere').


Perhaps I'm missing something. If it's just that few of the people in my life regularly have the victim mindset, I feel very fortunate.

Maybe you are blessed!

That said, my guess based on priors is that you're probably just not familiar with how to notice these patterns at subtle levels.

A few days ago I had just finished lunch with my parents. After we'd finished, there was a bit of time left before I needed to head out to make it to an online call. So I started using that time to help clean up. Dad turned to me and said something like "You have a call. You should get going and leave this to me." I know Dad plenty well to know that this isn't because of some love of kitchen cleaning on his part. At first blush it looks like caring, and it's how he has learned how to express caring, but it's actually a subtle invitation to Rescue me. At other times he'll do the Rescuer-turned-Victim thing about how there's always so very much work to do.

Nearly all plots from dramas and romcoms are variations on Drama Triangle themes. It's a big tangled mess of "I need you to do/be a certain thing in order for me to be okay." If Alice needs Ben to do X and Ben needs Carol to do Y and Carol needs Alice to do Z, but Alice doing Z makes it tricky for Ben to do X, then you have a very entertaining spiral as no one takes responsibility for their own wellbeing and everyone gets an emotional orgasm of offense/excitement/sorrow/etc. in their collective arms race of attempted emotional co-manipulation. What fun!

Once in the Czech Republic I was sitting at a bar finishing a glass of beer. The waitress came by and asked to take my glass. I wasn't done and I said as much. She put one fist on her hip, gestured at my glass with her other hand in a lot of irritation, and said "Come on. There's just a mouthful left." Her Persecutor inviting me to Victim. At the time I accepted the Drama bid and felt resentful for a while afterwards. I can only guess, but my guess is that had I refused and told her to come back later she would have fumed about me for a while afterwards. She might have done so anyway.

Nearly every graduate student I've interacted with has learned how to play Victim as part of their role. That's a huge chunk of what PhD Comics is a caricature of. "Oh man, this is such a huge workload, and I haven't done nearly enough, so my advisor is gonna be so disappointed in me tomorrow…." Often it's the system as a whole that's acting as the Persecutor, at least from the grad student's point of view. The pointlessness of fighting back helps to feed the Victim narrative of hopelessness.

A pretty good rule of thumb is: If you're stressed enough about something that it's activating your sympathetic nervous system (SNS) but you aren't in a situation where a burst of speed will solve the problem before you run out of emergency energy, then you're almost certainly confused about what's real, and it's very often because you've fallen into the Triangle somehow. The overwhelming majority of efforts to "save the world" or "fight for justice" are of this type, often from the Rescuer corner (although with cancel culture we've seen a blatant wave of Rescuer-turned-Persecutor patterns pop up). It's easy to sort of motte-and-bailey this point by focusing on how important the causes are ("We're talking about existential risk!") rather than really looking at the Drama pattern of how concern for the causes is being used, and in service to what. The complaints about "White Saviorism" are exactly objecting to the condescension of the Rescuer pattern — but the complaints themselves are often just retaliating with Persecutor. Round & round we go!

I'll pause there. I could go on for hours.

Hopefully that helps clarify what I'm talking about there.


[…] I don't agree that Truth is the only thing that matters, or the ultimate thing that matters.

Ah, I didn't mean to say it was. Sorry if I misspoke somewhere.

I meant to say that devoting to truth is coherent and very powerful, and the more deeply I do so the more obvious it is to me that nothing else makes sense for me.

But maybe looking deeply at the truth would wreck a given person's 20-year marriage, and he'd rather live the life he's built for himself than go on some grand spiritual journey. That's perfectly fine. That matters to him, which means it matters.

For myself, that's not an option anymore. I've already crossed too many points of no return. And I don't regret it one bit. If I end up married, it'll almost certainly be because my wife is devoted to truth too, and we learned how to build a life together within that context. If I have to wait a hundred years for that, or it never happens, then so be it.

I'm suggesting that this kind of devotion to truth is necessary for Beisutsukai. That's all.


It may be that every single time someone thinks what they want is at odds with the truth, they are wrong--is that what you meant?

That's not what I meant, though I think that's basically true too.


Or perhaps, did you simply mean that getting at the truth requires unwavering devotion, far stronger than what people normally apply toward anything they want?

Something more like this, yes.

Whatever you want more than truth leaves you with a question: Why do you want it more? What if looking at that question caused you to realize that your desire stems from an illusion? The very act of noticing this might cause you to cease pursuing this treasure greater than truth. So you'd best not look!

This isn't a fictitious reasoning pathway. It's the standard trick of the ego.

(In particular, it's close to what Anna Salamon at least used to call a "broccoli error": If someone who hates broccoli is given an opportunity to push a button and enjoy broccoli instead, they might respond "I'm not pushing that button! If I did, I'd eat more broccoli, and I hate broccoli!")

The only path I know of that relentlessly and unwaveringly moves toward clarity and freedom is total devotion to truth. Any deviance from that path leads to confusion.

…which is not the same as saying that deviance is wrong or that people who don't devote to truth are making a mistake.

It's just a fact. Preferring anything over truth creates room for confusion.

So anyone who wants to master any art of cutting through confusion would do extremely well to fully devote to truth.

But maybe that 20-year marriage sounds way sweeter.

That's actually, truly okay.

Comment by Valentine on Creating a truly formidable Art · 2021-10-14T22:45:55.623Z · LW · GW

If you want to emphasize that power in your writing, I'd recommend that when you talk about these emotional movements you use 'you' pronouns or speak about a community.

That's actually how it came out of me in the first draft. I noticed it and cleaned it up. Some part of me wanted to pressure people to feel compelled, and wants to teach people and present profound visions rather than admit I'm just experiencing things and projecting on others. And its desire isn't emanating from devotion to truth. It leads me toward confusion. So noticing this and cleaning it up was helpful for me because it highlights for me a piece of something to digest and let go of. And the result is that my writing is cleaner and more honest.

In terms of social dynamics, if someone needs that kind of power in my writing to hear my message, and I have to twist myself a little away from the truth in order to deliver that power to their liking, then I'm in service to them. Why? What if they don't like my writing anyway? Ah, now I have to start pressuring them in order for me to get validation… and I go insane.

It's hard for me to read what you are serving in offering this recommendation. I imagine the conscious thought is to be some kind of helpful. I like the spirit of connection there. I don't plan on shaping my communications that way though.


Secondly, to preserve the flow of writing, it's important to use words/phrases that don't need links to be understood. For example, the way you use 'grok' or 'information asymmetry' breaks the flow.

As Yoav indicates, this is just a Less Wrong cultural norm. I would agree with you in other contexts. Here, I hyperlink (a) terms that I imagine most people here already know but some might need help with, and (b) references to other ideas in LW rationality space (like when I link to Mandatory Secret Identities in referring to how running a rationality dojo isn't for everyone, because that Sequence post highlighted a similar idea). That's the standard I sort of absorbed by example years ago.

If the standard of writing and linking here has changed, though, I'd be happy to learn about it. I mean no confusion or disrespect.


The mixture of emotional states with values is very powerful, especially when you share a common identity and goal with your audience. Good writing.

Thank you.

Comment by Valentine on Creating a truly formidable Art · 2021-10-14T22:29:12.365Z · LW · GW

Exactly. And in particular, it would have me prioritizing impact on others over alignment with truth. That would be antithetical to both (a) my own devotion to truth and (b) skillfully communicating the point. The version of me that would be willing to use that manipulation tool would be less clear about the whole message overall.

Comment by Valentine on Creating a truly formidable Art · 2021-10-14T18:10:04.040Z · LW · GW

I imagine an algorithm gaining more and more subroutine calls (is_planning_fallacy()) and becoming noisier and noisier.

In practice these subroutines work by something like getting fed clock cycles. If you stop feeding them, they stop running. If you examine them while they're running and sort of take them apart with spacious attention, they stop booting up when they would otherwise get fed.

The reason people end up with runaway internal arms races like you're describing is that they're trying to pit one bit of noise against another while feeding both. If you think "I should drop this into the Void and stop feeding this", then that thought calls for more clock cycles to fight the is_planning_fallacy() thing, which calls for more clock cycles to fight back, etc.
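Extending the subroutine metaphor into an actual toy sketch (purely illustrative: the class, the "cycles" counter, and the function names are all my inventions, not claims about how minds literally work):

```python
# Toy model of the "feeding clock cycles" metaphor: subroutines persist
# only while attention keeps scheduling them.

class Subroutine:
    def __init__(self, name):
        self.name = name
        self.cycles = 0  # how "alive" this pattern currently is

    def feed(self, n=1):
        self.cycles += n

    def decay(self):
        # An unfed subroutine winds down on its own.
        self.cycles = max(0, self.cycles - 1)

def fight_the_noise(target, fighter, rounds):
    """Pitting one subroutine against another feeds both of them."""
    for _ in range(rounds):
        fighter.feed()  # "I should stop feeding this!" is itself a process...
        target.feed()   # ...and attacking the target keeps it scheduled too.

def just_stop_feeding(target, rounds):
    """Spacious attention: no counter-process gets spun up at all."""
    for _ in range(rounds):
        target.decay()

# The arms race: both patterns grow.
noise = Subroutine("is_planning_fallacy")
noise.feed(5)
critic = Subroutine("stop_is_planning_fallacy")
fight_the_noise(noise, critic, rounds=5)
print(noise.cycles, critic.cycles)  # 10 5 — fighting fed the thing being fought

# The alternative: the pattern simply runs out of cycles.
noise2 = Subroutine("is_planning_fallacy")
noise2.feed(5)
just_stop_feeding(noise2, rounds=5)
print(noise2.cycles)  # 0
```

The point the sketch is gesturing at: the "fight it" strategy is itself a call for clock cycles, so it leaves both counters higher than they started, while withdrawing attention lets the counter fall to zero.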

That's why you have to build familiarity with the actual Void first as a separate practice.

(Again acknowledging the info asymmetry here. I claim you can just see what I'm talking about if you look at it yourself, but steadying the inner eye takes a little while, so I imagine that right now this is landing as some kind of theoretical claim I'm simply asserting. I trust whatever you choose to do or not do with what I'm saying here. All in good time.)


You seem to argue that we need to instead be deliberate about the functions we implement in our own mind; to zoom out and start over with a clean slate; to strive towards a well-implemented, purposeful function which achieves one's goals.

No, but that's a pretty good initial description.

The concept of goals gets pretty slippery as you do this because being takes precedence over doing. Most minds I've had a chance to examine seem to be very, very highly tuned toward doing, often using the lack of permission to be as fuel to feed subroutines. It's sort of a parasitic relationship between the human and the thought structures. I'm suggesting that epic levels of clarity require dismantling that on the inside.

I think that at first emphasizing which functions one implements puts the focus backwards. Who or what is deciding what gets implemented? If that's where you start, the chances are it'll be a cluster of subroutines, often using internal words like "I" and "me" and "my".

The starting point is more like (a) noticing that functions are currently running and (b) developing the skill of turning them off.

I didn't really go into this in the article, but FWIW: After a while the mind will build new functions that more helpfully point to the Void. It'll also try to build ones that distract from the Void by talking about "the Void", but after a while it'll get that this ultimately doesn't work and mostly (but not entirely) give up.

Then you can start being deliberate about which functions you implement. In part because you're a lot less confused about who it is that's being deliberate.

And the whole way through you'll hone a better and better sense of how much inner space you actually want and need for various tasks. I don't know if "start over with a clean slate" is quite the thing… but knowing how to (a) close programs and (b) reboot your computer to clear programs that aren't responding is pretty helpful.


Truth is not my (sole) terminal value, but a complementary ingredient. I want my terminal values to be truly reflected in the state of reality—for the world to be better and happier—but I also terminally care about those other values. It seems dishonest and untrue for me to pretend otherwise, internally.

Cool. Then two notes:

  • Devotion to truth might not be for you. That's totally fine. It's not for everyone.
  • Your last sentence here highlights to me how devotion to truth would require you to see how your other "terminal values" play a role. The way this would work is: What if something you "terminally" desire in the world isn't a fit for reality? Would you rather discover that and grieve, or not look and keep trying? I don't mean this stupidly; many people honestly would choose the second, and that's fine. It's their lives. I'm just observing from the vantage point of someone who is a devotee of truth: To me the choice is clear, because any "terminal value" I have that cannot fit into reality is keeping me deluded without helping me with that delusional value. So why would I treasure it more than truth? That way lies pointless (to me) suffering. I'd rather let the version of me that clings to that "terminal value" die.


Perhaps one answer is to reflect and consider whether these terminal values would be better served by my becoming the kind of person who does put truth cardinally first. If so, perhaps I could decide to be that kind of person thereafter. This is not a decision to be made lightly.

Exactly. It's like choosing to get married to reality. I'm not using the word "devotion" flippantly here.

A minor point that's maybe obvious to you: You're describing the Gandhi murder pill from the "before pill" POV. From my vantage point, once you start putting truth first, eventually you'll get confronted with your motivation: "Ah, I want happiness for all beings / lots of sex / a fluffy dog / etc., and this is why I'm devoting to truth." What if you don't get those? What if you only think you want those because you're actually (say) seeking validation and trying to distract yourself from that truth via fervent activity? Ah, now the attempt to devote to truth gives you a choice: Sacrifice who you were on the altar, or go no further. If you proceed, maybe you still get lots of sex or whatever, but only if it survives the purification by Eternal Flame. You don't get to know ahead of time. That's the price.

Maybe that's what you meant when you note that "This is not a decision to be made lightly."

It's also not a decision most people can make without standing in the Void. It's like having wedding vows to stay with your beloved "in sickness and in health" without having a damn clue what sickness is. Maybe you keep the vow, but it'll be pretty much accidental.

Comment by Valentine on What is up with spirituality? · 2021-01-27T23:52:28.416Z · LW · GW

I think you want John Vervaeke's series "Awakening from the Meaning Crisis". Very grounded in the scientific materialist framework, and thoroughly answers your question while also giving a wonderful historical overview of Western meaning-making. You'll know if this is for you after watching the first two episodes, possibly after the first one.

Comment by Valentine on On clinging · 2021-01-25T15:08:08.833Z · LW · GW

I really like this distinction. Thank you for writing this up.

A few related thoughts/claims:

  • There's a reason "clinging" seems like a fitting metaphor. I'm guessing it's related to something very primal — e.g., you're hungry and holding a piece of food, but something or someone is trying to snatch it away from you.
  • This inner clinging is an attempt to hold onto a way of being in defiance of reality. It's an embodied distrust of truth.
  • Ergo it makes sense to do only when the being in question believes that letting in truth (i.e., generalized updating) is dangerous. This shows up for kids who are…
    • (a) …dealing with others (adults) who are doing this inner clinging and yet…
    • (b) …themselves lacking a more skillful alternative to navigating others' violence-backed demands.

Because lies are contagious in the mind, this tends to encourage the inner spread of clinging. Eventually the child learns to live in a hypo-psychotic delusion that's compatible with the adults'.

Hence transgenerational trauma.

From a mind design point of view, I think it makes tremendous sense to relinquish all clinging in tandem with learning skillful non-clinging-based ways of navigating others' violence-backed demands. My guess is, the primal psyche (very loosely speaking, "System 1") will actively fight relinquishing clinging unless and until it feels the novel safety of doing so.

I think it's reasonable to view the Sequences as having been an attempt to offer a cognitive alternative to clinging. Hence e.g. the Litany of Gendlin.

Comment by Valentine on The Intelligent Social Web · 2020-02-21T18:12:39.699Z · LW · GW

I'm glad to have helped. :)

I'll answer the rest by PM. Diving into Integral Theory here strikes me as a bit off topic (though I certainly don't mind the question).

Comment by Valentine on The Intelligent Social Web · 2020-02-19T23:09:50.260Z · LW · GW
I don't think everyone playing on the propositional level is unaware of its shortcomings…

I didn't mean to imply that everyone was unaware this way. I meant to point at the culture as a whole. Like, if the whole of LW were a single person, then that person strikes me as being unaware this way, even if many of that person's "organs" have a different perspective.

…propositional knowledge is the knowledge that scales…

That's actually really unclear to me. Christendom would have been better defined by a social order (and thus by individuals' knowing how to participate in that culture) than it would have been by a set of propositions. Likewise #metoo spread because it was a viable knowing-how: read a #metoo story with the hashtag, then feel moved to share your own with the hashtag such that others see yours.

Comment by Valentine on The Intelligent Social Web · 2020-02-19T22:34:05.993Z · LW · GW
I'm not sure "actionable" is the right lens but something nearby resonated.

Agreed. I mean actionability as an example type. A different sort of example would be Scott introducing the frame of Moloch. His essay didn't really offer new explicit models or explanations, and it didn't really make any action pathways viable for the individual reader. But it was still powerful in a way that I think importantly counts.

By way of contrast, way back in the day when CFAR was but a glimmer in Eliezer's & Anna's eye, there was an attempted debiasing technique vs. the sunk cost fallacy called "Pretend you're a teleporting alien". The idea was to imagine that you had just teleported into this body and mind, with memories and so on, but that your history was something other than what this human's memory claimed. Anna and Eliezer offered this to a few people, presumably because the thought experiment worked for them, but by my understanding it fell flat. It was too boring to use. It sure seems actionable, but in practice it neither lit up a meaningful new perspective (the way Meditations on Moloch did) nor afforded a viable action pathway (despite having specific steps that people could in theory follow).

What it means to know (in a way that matters) why that technique didn't work is that you can share a debiasing technique with others that they can and do use. Models and ideas might be helpful for getting there… but something goes really odd when the implicit goal is the propositional model. Too much room for conversational Goodharting.

But a step in the right direction (I think) is noticing that the "alien" frame doesn't in practice have the kind of "kick" that the Moloch idea does. Despite having in-theory actionable steps, it doesn't galvanize a mind with meaning. Turns out, that's actually really important for a viable art of rationality.

Not necessarily because it's the best or only way, as Romeo said, because it's a thing that can scale in a particular way and so is useful to build around.

I'm wanting to emphasize that I'm not trying to denigrate this. In case that wasn't clear. I think this is valuable and good.

…an environment that's explicitly oriented towards bridging gaps between explicit and tacit knowledge…

This resonates pretty well with where my intuition tends to point.

Some of this is just about tacit or experiential knowledge just being real-damn-hard-to-convey in writing.

That's something of an illusion. It's a habit we've learned in terms of how to relate to writing. (Although it's kind of true because we've all learned it… but it's possible to circumnavigate this by noticing what's going on, which a subcommunity like LW can potentially do.)

Contrast with e.g. Lectio Divina.

More generally, one can dialogue with the text rather than just scan it for information. You can read a sentence and let it sink in. How does it feel to read it? What is it like to wear the perspective that would say that sentence? What's the feel on the inside of the worldview being espoused? How can you choose to allow the very act of reading to transform you?

A lot of Buddhist texts seem to have been designed to be read this way. You read the teachings slowly, to let it really absorb, and in doing so it guides your mind to mimic the way of being that lets you slip into insight.

This is also part of the value of poetry. What makes poetry powerful and important is that it's writing designed specifically to create an impact beneath the propositional level. There's a reason Rumi focused on poetry after his enlightenment:

"Sit down, be still, and listen.
You are drunk
and this is
the edge of the roof."

Culture has quite a few tools like these for powerfully conveying deep ways of knowing. Along the same lines as I mentioned in my earlier comment above, I can imagine a potential Less Wrong that wants to devote energy and effort toward mastering this multimodal communication process in order to dynamically create a powerful community of deep practice of rationality. But it's not what I observe. I doubt three months from now that there'll be any relevant uptick in how much poetry appears on LW, for instance. It's just not what the culture seems to want — which, again, seems like a fine choice.

Comment by Valentine on The Intelligent Social Web · 2020-02-19T18:18:19.822Z · LW · GW

I can't point to a specific post without doing more digging than I care to do right now. I wouldn't be too shocked to find out I'm drastically wrong. It's just my impression from (a) years of interacting with Less Wrong before plus (b) popping in every now and again to see what social dynamics have and haven't changed.

With that caveat… here are a couple of frames to triangulate what I was referring to:

  • In Ken Wilber's version of Spiral Dynamics, Less Wrong is the best display of Orange I know of. Most efforts at Orange these days are weaksauce, like "I Fucking Love Science" (which is more like Amber with an Orange aesthetic) or Richard Dawkins' "Brights" campaign. I could imagine a Less Wrong that wants to work hard at holding Orange values as it transitions into 2nd Tier (i.e., Wilber's Teal and Turquoise Altitudes), but that's not what I see. What I see instead is a LW that wants to continue to embody Orange more fully and perfectly, importing and translating other frameworks into Orange terms. In other words, LW seems to me to have devoted to keep playing in 1st Tier, which seems like a fine choice. It's just not the one I make.
  • There's a mighty powerful pull on LW to orient toward propositional knowing. The focus is super-heavy on languaging and explicit models. Questions about deeper layers of knowing (e.g., John Vervaeke's breakdown in terms of procedural, perspectival, and participatory forms of knowing) undergo pressure to be framed in propositional terms and evaluated analytically to be held here. The whole thing with "fake frameworks" is an attempt to acknowledge perspectival knowing… but there's still a strong alignment I see here with such knowing being seen as preliminary or lacking in some sense unless and until there's a propositional analysis that shows what's "really" going on. I notice the reverse isn't really the case: there isn't a demand that a compelling model or idea be actionable, for instance. This overall picture is amazing for ensuring that propositional strengths (e.g., logic) get integrated into one's worldview. It's quite terrible at navigating metacognitive blindspots though.

From what I've seen, LW seems to want to say "yes" maximally to this direction. Which is a fine choice. There aren't other groups that can make this choice with this degree of skill and intelligence as far as I know.

There's just some friction with this view when I want to point at certain perspectival and participatory forms of knowing, e.g. about the nature of the self. You can't argue an ego into recognizing itself. The whole OP was an attempt to offer a perspective that would help transform what was seeable and actionable; it was never meant to be a logical argument, really. So when asked "What can I do with this knowledge?", it's very tricky to give a propositional model that is actually actionable in this context — but it's quite straightforward to give some instructions that someone can try so as to discover for themselves what they experience.

I was just noticing that bypassing theory to offer participatory forms of knowing was a mild violation of norms here as I understand them. But I was guessing it was a forgivable violation, and that the potential benefit justified the mild social bruising.

Comment by Valentine on The Intelligent Social Web · 2020-01-05T23:26:45.902Z · LW · GW
I think what I'd personally prefer (over the new version), is a quick: “Epistemic Status: Fake Framework”.

Like so? (See edit at top.) I'm familiar with the idea behind this convention. Just not sure how LW has started formatting it, or if there's desire to develop much precision on this formatting.

I think a lot of the earlier disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they were eventually going to ground out as “connected more clearly to the rest of our scientific understanding of the world”.

Mmm. That makes sense.

My impression looking back now is that the dynamic was something like:

  • [me]: Here's an epistemic puzzle that emerges from whether people have or haven't experience flibble.
  • [others]: I don't believe there's an epistemic puzzle until you show there's value in experiencing flibble.
  • [me]: Uh, I can't, because that's the epistemic puzzle.
  • [others]: Then I'm correct not to take the epistemic puzzle seriously given my epistemic state.
  • [me]: You realize you're assuming there's no puzzle to conclude there's no puzzle, right?
  • [others]: You realize you're assuming there is a puzzle to conclude there is, right? Since you're putting the claim forward, the onus is on you to break the symmetry to show there's something worth talking about here.
  • [me]: Uh, I can't, because that's the epistemic puzzle.

(Proceed with loop.)

What I wasn't acknowledging to myself (and thus not to anyone else either) at the time was that I was loving the frustration of being misunderstood. Which is why I got exasperated instead of just… being clearer given feedback about how I wasn't clear.

I'm now much better at just communicating. Mostly by caring a heck of a lot more about actually listening to others.

I think you're naming something I didn't hear back then. And if nothing else, it's something you value now, and I can see how it makes sense as a value to want to ground Less Wrong in. Thanks for speaking to that.

I don’t think things necessarily need to be ‘rigorously grounded’ to be in the 2018 Book, but I do think the book should include “taking stock of ‘what the epistemic status of each post is’ and checking for community consensus on whether the claims of the post hold up”, with some posts flagged as "this seems straightforwardly true" and others flagged as "this seems to point in an interesting and useful thing, but further work is needed."

That seems great. Kind of like what Duncan did with the CFAR handbook.

This is all to say: I have gotten value out of this post and think it’s pointing at a true thing, but it’s also a post that I’d be particularly interested in people reviewing, from a standpoint of “okay, what actual claims is the post implying? What are the limits of the fake framework here? How does this connect to the rest of our best understanding of what's going on in the brain?” (the previous round of commenters explored this somewhat but only in very vague terms).

Mmm. That's a noble wish. I like it.

I won't respond to that right now. I don't know enough to offer the full rigor I imagine you'd like, either. So I hope for your sake that others dive in on this.

Comment by Valentine on The Intelligent Social Web · 2020-01-01T13:01:44.073Z · LW · GW

I've made my edits. I think my most questionable call was to go ahead and expand the bit on how to Look in this case.

If I understand the review plan correctly, I think this means I'm past the point where I can get feedback on that edit before voting happens for this article. Alas. I'm juggling a tension between (a) what I think is actually most helpful vs. (b) what I imagine is most fitting to where Less Wrong culture seems to want to go.

If it somehow makes more sense to include the original and ignore this edit, I'm actually fine with that. I had originally planned on not making edits.

But I do hope this new version is clearer and more helpful. I think it has the same content as the original, just clarified a bit.