Of Two Minds 2018-05-17T04:34:51.892Z
Noticing the Taste of Lotus 2018-04-27T20:05:23.898Z
Mythic Mode 2018-02-23T22:45:06.709Z
The Intelligent Social Web 2018-02-22T18:55:36.414Z
Kenshō 2018-01-20T00:12:01.879Z
CFAR 2017 Retrospective 2017-12-19T19:38:35.516Z
In praise of fake frameworks 2017-07-11T02:12:32.017Z
Gears in understanding 2017-05-12T00:36:17.086Z
The art of grieving well 2015-12-15T19:55:44.893Z
Proper posture for mental arts 2015-08-31T02:29:01.312Z
Looking for a likely cause of a mental phenomenon 2012-12-01T19:43:32.916Z


Comment by Valentine on What is up with spirituality? · 2021-01-27T23:52:28.416Z · LW · GW

I think you want John Vervaeke's series "Awakening from the Meaning Crisis". Very grounded in the scientific materialist framework, and thoroughly answers your question while also giving a wonderful historical overview of Western meaning-making. You'll know if this is for you after watching the first two episodes, possibly after the first one.

Comment by Valentine on On clinging · 2021-01-25T15:08:08.833Z · LW · GW

I really like this distinction. Thank you for writing this up.

A few related thoughts/claims:

  • There's a reason "clinging" seems like a fitting metaphor. I'm guessing it's related to something very primal — e.g., you're hungry and holding a piece of food, but something or someone is trying to snatch it away from you.
  • This inner clinging is an attempt to hold onto a way of being in defiance of reality. It's an embodied distrust of truth.
  • Ergo it makes sense to do only when the being in question believes that letting in truth (i.e., generalized updating) is dangerous. This shows up for kids who are…
    • (a) …dealing with others (adults) who are doing this inner clinging and yet…
    • (b) …themselves lacking a more skillful alternative to navigating others' violence-backed demands.

Because lies are contagious in the mind, this tends to encourage the inner spread of clinging. Eventually the child learns to live in a hypo-psychotic delusion that's compatible with the adults'.

Hence transgenerational trauma.

From a mind design point of view, I think it makes tremendous sense to relinquish all clinging in tandem with learning skillful non-clinging-based ways of navigating others' violence-backed demands. My guess is, the primal psyche (very loosely speaking, "System 1") will actively fight relinquishing clinging unless and until it feels the novel safety of doing so.

I think it's reasonable to view the Sequences as having been an attempt to offer a cognitive alternative to clinging. Hence e.g. the Litany of Gendlin.

Comment by Valentine on The Intelligent Social Web · 2020-02-21T18:12:39.699Z · LW · GW

I'm glad to have helped. :)

I'll answer the rest by PM. Diving into Integral Theory here strikes me as a bit off topic (though I certainly don't mind the question).

Comment by Valentine on The Intelligent Social Web · 2020-02-19T23:09:50.260Z · LW · GW
I don't think everyone playing on the propositional level is unaware of its shortcomings…

I didn't mean to imply that everyone was unaware this way. I meant to point at the culture as a whole. Like, if the whole of LW were a single person, then that person strikes me as being unaware this way, even if many of that person's "organs" have a different perspective.

…propositional knowledge is the knowledge that scales…

That's actually really unclear to me. Christendom would have been better defined by a social order (and thus by individuals' knowing how to participate in that culture) than it would have by a set of propositions. Likewise #metoo spread because it was a viable knowing-how: read a #metoo story with the hashtag, then feel moved to share your own with the hashtag such that others see yours.

Comment by Valentine on The Intelligent Social Web · 2020-02-19T22:34:05.993Z · LW · GW
I'm not sure "actionable" is the right lens but something nearby resonated.

Agreed. I mean actionability as an example type. A different sort of example would be Scott introducing the frame of Moloch. His essay didn't really offer new explicit models or explanations, and it didn't really make any action pathways viable for the individual reader. But it was still powerful in a way that I think importantly counts.

By way of contrast, way back in the day when CFAR was but a glimmer in Eliezer's & Anna's eye, there was an attempted debiasing technique vs. the sunk cost fallacy called "Pretend you're a teleporting alien". The idea was to imagine that you had just teleported into this body and mind, with memories and so on, but that your history was something other than what this human's memory claimed. Anna and Eliezer offered this to a few people, presumably because the thought experiment worked for them, but by my understanding it fell flat. It was too boring to use. It sure seems actionable, but in practice it neither lit up a meaningful new perspective (the way Meditations on Moloch did) nor afforded a viable action pathway (despite having specific steps that people could in theory follow).

What it means to know (in a way that matters) why that technique didn't work is that you can share a debiasing technique with others that they can and do use. Models and ideas might be helpful for getting there… but something goes really odd when the implicit goal is the propositional model. Too much room for conversational Goodharting.

But a step in the right direction (I think) is noticing that the "alien" frame doesn't in practice have the kind of "kick" that the Moloch idea does. Despite having in-theory actionable steps, it doesn't galvanize a mind with meaning. Turns out, that's actually really important for a viable art of rationality.

Not necessarily because it's the best or only way, as Romeo said, because it's a thing that can scale in a particular way and so is useful to build around.

I'm wanting to emphasize that I'm not trying to denigrate this. In case that wasn't clear. I think this is valuable and good.

…an environment that's explicitly oriented towards bridging gaps between explicit and tacit knowledge…

This resonates pretty well with where my intuition tends to point.

Some of this is just about tacit or experiential knowledge just being real-damn-hard-to-convey in writing.

That's something of an illusion. It's a habit we've learned in how we relate to writing. (Although it's kind of true because we've all learned it… but it's possible to circumnavigate this by noticing what's going on, which a subcommunity like LW can potentially do.)

Contrast with e.g. Lectio Divina.

More generally, one can dialogue with the text rather than just scan it for information. You can read a sentence and let it sink in. How does it feel to read it? What is it like to wear the perspective that would say that sentence? What's the feel on the inside of the worldview being espoused? How can you choose to allow the very act of reading to transform you?

A lot of Buddhist texts seem to have been designed to be read this way. You read the teachings slowly, to let them really absorb, and in doing so they guide your mind to mimic the way of being that lets you slip into insight.

This is also part of the value of poetry. What makes poetry powerful and important is that it's writing designed specifically to create an impact beneath the propositional level. There's a reason Rumi focused on poetry after his enlightenment:

"Sit down, be still, and listen.
You are drunk
and this is
the edge of the roof."

Culture has quite a few tools like these for powerfully conveying deep ways of knowing. Along the same lines as I mentioned in my earlier comment above, I can imagine a potential Less Wrong that wants to devote energy and effort toward mastering this multimodal communication process in order to dynamically create a powerful community of deep practice of rationality. But it's not what I observe. I doubt three months from now that there'll be any relevant uptick in how much poetry appears on LW, for instance. It's just not what the culture seems to want — which, again, seems like a fine choice.

Comment by Valentine on The Intelligent Social Web · 2020-02-19T18:18:19.822Z · LW · GW

I can't point to a specific post without doing more digging than I care to do right now. I wouldn't be too shocked to find out I'm drastically wrong. It's just my impression from (a) years of interacting with Less Wrong before plus (b) popping in every now and again to see what social dynamics have and haven't changed.

With that caveat… here are a couple of frames to triangulate what I was referring to:

  • In Ken Wilber's version of Spiral Dynamics, Less Wrong is the best display of Orange I know of. Most efforts at Orange these days are weaksauce, like "I Fucking Love Science" (which is more like Amber with an Orange aesthetic) or Richard Dawkins' "Brights" campaign. I could imagine a Less Wrong that wants to work hard at holding Orange values as it transitions into 2nd Tier (i.e., Wilber's Teal and Turquoise Altitudes), but that's not what I see. What I see instead is a LW that wants to continue to embody Orange more fully and perfectly, importing and translating other frameworks into Orange terms. In other words, LW seems to me to have committed to keep playing in 1st Tier, which seems like a fine choice. It's just not the one I make.
  • There's a mighty powerful pull on LW to orient toward propositional knowing. The focus is super-heavy on languaging and explicit models. Questions about deeper layers of knowing (e.g., John Vervaeke's breakdown in terms of procedural, perspectival, and participatory forms of knowing) undergo pressure to be framed in propositional terms and evaluated analytically to be held here. The whole thing with "fake frameworks" is an attempt to acknowledge perspectival knowing… but there's still a strong alignment I see here with such knowing being seen as preliminary or lacking in some sense unless and until there's a propositional analysis that shows what's "really" going on. I notice the reverse isn't really the case: there isn't a demand that a compelling model or idea be actionable, for instance. This overall picture is amazing for ensuring that propositional strengths (e.g., logic) get integrated into one's worldview. It's quite terrible at navigating metacognitive blindspots though.

From what I've seen, LW seems to want to say "yes" maximally to this direction. Which is a fine choice. There aren't other groups that can make this choice with this degree of skill and intelligence as far as I know.

There's just some friction with this view when I want to point at certain perspectival and participatory forms of knowing, e.g. about the nature of the self. You can't argue an ego into recognizing itself. The whole OP was an attempt to offer a perspective that would help transform what was seeable and actionable; it was never meant to be a logical argument, really. So when asked "What can I do with this knowledge?", it's very tricky to give a propositional model that is actually actionable in this context — but it's quite straightforward to give some instructions that someone can try so as to discover for themselves what they experience.

I was just noticing that bypassing theory to offer participatory forms of knowing was a mild violation of norms here as I understand them. But I was guessing it was a forgivable violation, and that the potential benefit justified the mild social bruising.

Comment by Valentine on The Intelligent Social Web · 2020-01-05T23:26:45.902Z · LW · GW
I think what I'd personally prefer (over the new version), is a quick: “Epistemic Status: Fake Framework”.

Like so? (See edit at top.) I'm familiar with the idea behind this convention. Just not sure how LW has started formatting it, or if there's desire to develop much precision on this formatting.

I think a lot of the earlier disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they were eventually going to ground out as “connected more clearly to the rest of our scientific understanding of the world”.

Mmm. That makes sense.

My impression looking back now is that the dynamic was something like:

  • [me]: Here's an epistemic puzzle that emerges from whether people have or haven't experience flibble.
  • [others]: I don't believe there's an epistemic puzzle until you show there's value in experiencing flibble.
  • [me]: Uh, I can't, because that's the epistemic puzzle.
  • [others]: Then I'm correct not to take the epistemic puzzle seriously given my epistemic state.
  • [me]: You realize you're assuming there's no puzzle to conclude there's no puzzle, right?
  • [others]: You realize you're assuming there is a puzzle to conclude there is, right? Since you're putting the claim forward, the onus is on you to break the symmetry to show there's something worth talking about here.
  • [me]: Uh, I can't, because that's the epistemic puzzle.

(Proceed with loop.)

What I wasn't acknowledging to myself (and thus not to anyone else either) at the time was that I was loving the frustration of being misunderstood. Which is why I got exasperated instead of just… being clearer given feedback about how I wasn't clear.

I'm now much better at just communicating. Mostly by caring a heck of a lot more about actually listening to others.

I think you're naming something I didn't hear back then. And if nothing else, it's something you value now, and I can see how it makes sense as a value to want to ground Less Wrong in. Thanks for speaking to that.

I don’t think things necessarily need to be ‘rigorously grounded’ to be in the 2018 Book, but I do think the book should include “taking stock of ‘what the epistemic status of each post is’ and checking for community consensus on whether the claims of the post hold up”, with some posts flagged as "this seems straightforwardly true" and others flagged as "this seems to point in an interesting and useful thing, but further work is needed."

That seems great. Kind of like what Duncan did with the CFAR handbook.

This is all to say: I have gotten value out of this post and think it’s pointing at a true thing, but it’s also a post that I’d be particularly interested in people reviewing, from a standpoint of “okay, what actual claims is the post implying? What are the limits of the fake framework here? How does this connect to the rest of our best understanding of what's going on in the brain?” (the previous round of commenters explored this somewhat but only in very vague terms).

Mmm. That's a noble wish. I like it.

I won't respond to that right now. I don't know enough to offer the full rigor I imagine you'd like, either. So I hope for your sake that others dive in on this.

Comment by Valentine on The Intelligent Social Web · 2020-01-01T13:01:44.073Z · LW · GW

I've made my edits. I think my most questionable call was to go ahead and expand the bit on how to Look in this case.

If I understand the review plan correctly, I think this means I'm past the point where I can get feedback on that edit before voting happens for this article. Alas. I'm juggling a tension between (a) what I think is actually most helpful vs. (b) what I imagine is most fitting to where Less Wrong culture seems to want to go.

If it somehow makes more sense to include the original and ignore this edit, I'm actually fine with that. I had originally planned on not making edits.

But I do hope this new version is clearer and more helpful. I think it has the same content as the original, just clarified a bit.

Comment by Valentine on The Intelligent Social Web · 2019-12-21T20:18:44.565Z · LW · GW

I don't know if I'll ever get to a full editing of this. I'll jot notes here of how I would edit it as I reread this.

  • I'd ax the whole opening section.
    • That was me trying to (a) brute force motivation for the reader and (b) navigate some social tension I was feeling around what it means to be able to make a claim here. In particular I was annoyed with Oli and wanted to sidestep discussion of the lemons problem. My focus was actually on making something in culture salient by offering a fake framework. The thing speaks for itself once you look at it. After that point I don't care what anyone calls it.
    • This would, alas, leave out the emphasis that it's a fake framework. But I've changed my attitude about how much hand-holding to do for stuff like that. Part of the reason I put that in the beginning was to show the LW audience that I was taking it as fake, so as to sidestep arguments about how justified everything is or isn't. At this point I don't care anymore. People can project whatever they want on me because, uh, I can't really stop them anyway. So I'm not going to fret about it.
    • I had also intended the opening to have a kind of conversational tone, as part of a Sequence that I never finished (on "ontology-cracking"). I probably never will finish it at this point. So no point in making this stand-alone essay pretend to be part of an ongoing conversation.
  • A minor nitpick: I open the meat of the idea by telling some facts about improv theater. I suspect it'd be more engaging if I had written it as a story illustrating the experience. "Bob walked onto the stage, his heart pounding. 'God, what do I say?'" Etc. The whole thing would have felt less abstract if I had done that. But it clearly communicated well for this audience, so that's not a big concern.
  • One other reviewer mentioned how the strong examples end up obfuscating my overall point. That was actually a writing strategy: I didn't want the point stated early on and elucidated throughout. I wanted the reader to resonate with what I was describing, and then use that resonance to point out an implication of the reader's own life. That said, I bet I could do that with more punch and precision these days.
  • Reading over the "abuser"/"victim"/"rescuer" stuff, I'm now reminded of Karpman's Triangle. I didn't know about that at the time. Karpman was a grad student under Eric Berne, the father of Transactional Analysis. These days many folk know it as "the drama triangle". Were I writing this essay today I might reference this triangle.
  • I feel like most of the value of the improv analogy is actually in the contrast between player and character. When I hear about people being impacted by this article, most of what I hear has to do with the mechanics of how the social scene unfolds and how that creates constraints (anti-slack). Which is wonderful! But if I had to choose one illumination for people to experience from this whole thing, I'd rather they get a glimpse of who they are as the player, and how much that really really isn't the character that's usually talking and saying "I", "me", and "my". It's immensely freeing to see this clearly. But there's a lot of pleasure to be taken in playing genre-naïve characters, and I don't mean to dismiss that. That's just not the scene type I want to play in anymore. So on net, this wish of mine probably wouldn't meaningfully affect how I'd edit this piece.
  • The reason for referencing Omega was to foreshadow a later post on Newcomblike self-deception.
    • The short version is: If Omega is modeling your self-model instead of your actual source code to predict your actions, then you're highly incentivized to separate your self-model from your method of choosing your actions. Then you can two-box while convincing Omega you'll one-box by sincerely but falsely believing you're going to one-box. This paints a pretty vivid picture if you view the intelligent social web as the real-world version of Omega with "social role" playing the part of "self-model".
    • I'd now skip that whole reference. It made sense only in my mind. And even if I had finished the Sequence this was part of, the references to Omega would make sense only to those who had finished it and then went back to reread this essay.
  • There's something about how this essay uses the concept of slack that nags at me. I suspect it's fine for the purposes of the 2018 review, but I'd be remiss not to mention it. The intuition about slack is itself interpreted from within the social web. But slack affects only the character. So although slack is a genre-savvy concept, it's still a concept within the web itself. That introduces a dimension of self-reference that might be elegantly self-reinforcing, paradoxical, or something else. I honestly don't know.
    • This has me wonder about there being a type of construct, which is genre-savvy concepts. This whole model is an example, as is the concept of genre-savviness. I suspect that's a gateway to an insight type that's usually called "spiritual".
  • There's a bit where I refer to the possibility of using Looking to shift roles. I have a much more sophisticated view of this now. I think I was being truthful and reasonably accurate… and yet for the sake of the essay I would either expand on that reference to clarify it, or remove the reference entirely. It's not helpful to say "There's a magic consciousness thingie you can do that'll do things your character can't understand" if that's literally all I say about it.

So, with all that said, here are the edits I'd make:

  • Cut the opening section.
  • Add a hyperlink to Karpman's Triangle.
  • Erase references to Omega, maybe expanding a bit where needed instead.
  • Either delete references to changing one's fate by Looking, or spell it out in less mysterious terms.
Comment by Valentine on The Intelligent Social Web · 2019-12-21T19:16:24.717Z · LW · GW

Thank you. Thank you for sharing how you were impacted. That touched me. I'm delighted to have played a role in you enjoying your life more fully. :-)

The post’s focus on salient examples (family roles, the convert boyfriend, the white man’s role) also has a downside, in that it’s somewhat difficult to keep track of the main thrust of Valentine’s argument. The entire introductory section also does nothing to help the essay cohere; it makes claims about personal benefits Valentine has acquired by using this framework. These claims are neither substantiated nor explored further in the essay, and they are also unnecessary — the essay is compelling by the force of its insight and not by promising a laundry list of results.

I quite agree. Thank you for stating this so clearly.

At the time I was under the delusion that people would read and consider what I had to say because they consciously could expect a benefit from doing so. So I tried to state the value up front. I think I was also a little embarrassed to be talking in public in a way I wasn't aware of, so the "laundry list" was a way of assuaging my unrecognized shame.

All of which is to say, I agree. :-) And I'm glad this point got into the reviews for this.

Comment by Valentine on Noticing the Taste of Lotus · 2019-12-21T19:07:47.657Z · LW · GW

Ah, I didn't realize these posted as comments. That's fine, I'll leave this here.

I'm also amused by my poor modeling of intending "a few quick notes". I'm smiling bemusedly at myself, and also taking in that this has been a chronic years-long glitch in self-modeling. Oh, humans.

Comment by Valentine on Noticing the Taste of Lotus · 2019-12-21T19:01:35.611Z · LW · GW

I thought I'd add a few quick notes as the author.

As I reread this, a few things jump out for me:

  • I enjoy its writing style. Its clarity is probably part of why it was nominated.
  • I'd now say this post is making a couple of distinct claims:
    • External forces can shape what we want to do. (I.e., there are lotuses.)
    • It's possible to notice this in real time. (I.e., you can notice the taste of lotuses.)
    • It's good to do so. Otherwise we find our wanting aligned with others' goals regardless of how they relate to our own.
    • If you notice this, you'll find yourself wanting to spit out lotuses that you can tell pull you away from your goals.
  • I still basically agree with the content.
  • I think the emotional undertone is a little confused, says the version of me about 19 months later.

That last point is probably the most interesting to meta-reviewers, so I'll say a little about that here.

The basic emotional backdrop I brought in writing this was something like, "Look out, you could get hijacked! Better watch out!" And then luckily there's this thing you can be aware of, to defend yourself against one more form of psychic/emotional attack. Right?

I think this is kind of nuts. It's a popular form of nuts, but it's still nuts.

Looking at the Duolingo example I gave, it doesn't address the question of why those achievements counted as a lotus structure for me. There are tons of things others find have lotus nature that I don't (e.g., gambling). And vice versa: my mother (who's an avid Duolingo user) couldn't care less about those achievements.

So what gives?

I have a guess, but I think that's outside the purview of the purpose of these reviews. I'll just note that "We're in a worldwide memetic war zone where everyone is out to get us by hijacking our minds!" is (a) not the hypothesis to default to and (b) if true is itself a questionable meme that seems engineered to stimulate fight-or-flight type reactions that do, indeed, hijack clarity of mind.

With all that said, I still think there's a ton of value in "noticing the taste of lotus" as the title suggests. It's pointing out the places where, if we were to notice, we'd likely find our motivations being diverted from our goals.

It's just that, about a year and a half later, I now reflect on this being a very basic entry point to a much more interesting question.

In particular, this "hijacking" is basically how culture works from what I can tell. Is culture wicked? Or is it benevolent? Or is it a mix? How can we tell whether the reasoning faculties we're using to work out these puzzles are themselves "hijacked" by having been immersed in a culture of lotus-eaters?

From what I've been able to see for myself and reason about, I think you can't answer those questions from within the framework that's asking them. It's too fear-based. "Fear-based" isn't inherently bad, but when the fear isn't acknowledged as the base then you can basically guarantee that the thinking isn't clear. (As Carl Jung said: "Until you make the unconscious conscious, it will direct your life and you will call it 'fate'.")

A few relatively minor notes that I imagine y'all would find relevant:

  • I went back to Duolingo a few months ago. I'm even using the achievements a bit. I just worked out a way to have the "lotus nature" work toward my goals with French.
  • I made a minor edit to the article, changing a single letter to correct the grammar ("build" to "built").
Comment by Valentine on Of Two Minds · 2018-06-10T15:52:47.577Z · LW · GW

Yep, that seems like a correct nuance to add. I meant "predict" in a functional sense, rather than in a thought-based one, but that wasn't at all clear. I appreciate you adding this correction.

Comment by Valentine on Of Two Minds · 2018-06-10T15:51:11.398Z · LW · GW
You might have gone too far with speculation - your theory can be tested.

I think that's good, isn't it? :-D

If your model was true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems.

Maybe…? I think it's more complicated than I read this implying. But yes, I expect the abilities to learn to be somewhat correlated, even if the actualized skills aren't.

Part of the challenge is that math reasoning seems to co-opt parts of the mind that normally get used for other things. So instead of mentally rehearsing a physical movement in a way that's connected to how your body can actually move and feel, the mind mentally rehearses the behavior (!) of some abstract mathematical object in ways that don't necessarily map onto anything your physical body can do.

I suspect that closeness to physical doability is one of the main differences between "pure" mathematical thinking and engineering-style thinking, especially engineering that's involved with physical materials (e.g., mechanical, electrical — as opposed to software). And yes, this is testable, because it suggests that engineers will tend to have developed more physical coordination than mathematicians relative to their starting points. (This is still tricky to test, because people aren't randomly sorted into mathematicians vs. engineers, so their starting abilities with learning physical coordination might be different. But if we can figure out a way to test this claim, I'd be delighted to look at what the truth has to say about this!)

Comment by Valentine on Of Two Minds · 2018-05-17T20:44:34.185Z · LW · GW

I mostly agree. I had, like, four major topics like this that I was tempted to cram into this essay. I decided to keep it to one message and leave things like this for later.

But yes, totally, nearly everything we actually care about comes from the social mind doing its thing.

I disagree about curiosity though. I think that cuts across the two minds. "Oh, huh, I wonder what would happen if I connected this wire to that glowing thing…."

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-29T18:04:55.382Z · LW · GW
Yes, most pleasures grab your wanting. I'm suggesting that you actually enjoy collecting arbitrary achievements, there is no "hijacking" about it. And I don't understand why collecting arbitrary achievements needs to be meaningful, while delicious food is allowed to be meaningless.

Okay, seriously? You want to play this game?

Meta time:

I get that status here comes in part from good arguments. It's a fine metric for truth-seeking. But it isn't the same as truth-seeking, and it Goodharts into disagreement-hunting even where the disagreements don't matter.

I'm trying to point at a simple observation: some things grab your wanting directly and yank you off-course. Seems like a good idea to notice when that happens. That's all.

I'm not saying that one shouldn't ever let those want-grabbers do their thing. But maybe you can't tell I wasn't saying that; communication is hard. But if you think I am saying that… then can't you just notice that that's stupid, mention that, and highlight the point I should have made?

So… I mean, really, you seriously think you're meaningfully refuting my points by saying I enjoy achievements and therefore there's no hijacking? Seriously? Seriously?

I mean, I think your next norm-driven move is to say "Yes, seriously." And then do some kind of weird philosophical thing that, I don't know, makes it sound like I'm arguing that some wants are good and others are bad, and then knocking down that strawman. Or something.

But… come on! Really?

Can we just… not fence for status?


I don't understand why collecting arbitrary achievements needs to be meaningful, while delicious food is allowed to be meaningless.

I never said anything about food. Or about what needs to be meaningful. Just that there are want-grabbers that are meaningfulness-symmetric.

I don't usually think of good food as lotus-like. Like, here are some pleasurable non-lotuses (for me):

  • Walks in nature.
  • Kissing someone I'm dating.
  • Meditating.
  • Intense exercise.
  • Breaking a fast with good food.
  • Doing an acrobatic flip.

I basically never find these yanking me away from what I'm doing. I just like them. Sometimes I want to do some of them more and it's hard to make myself. Very not lotus-ish.

Sometimes I don't do these things because I'm busy, I don't know, getting sucked into getting achievements on some game that leaves Tetris effects in my brain.

I mean, if I want to do that, then that seems cool.

Seems bad not to even notice that's happening though. Then Facebook gets to program my wants however it chooses to.

I worry that the more important distinction between collecting achievements and eating food is that the former is a low-status activity.

I don't think of it as low-status. FWIW.

I don't think there is any objective measure to tell what desire is ok and what is a compulsion. I think, similarly to the word "disease", a desire is "compulsive" only if you think it causes problems for you.

Uh… then I'm not sure what your point is. You said:

"My point is that there is nothing inherently wrong with arbitrary pleasures that don't improve your life. The problem is when you develop compulsions. There seems to be a difference between simple desire and compulsive desire."

So… if I take you literally, I think you just said that the only problem is when you develop a desire that causes you a problem.

Like, I don't think that's actually what you mean. I'm strawmanning your words to point out that I think I haven't understood your real message.

Help me understand?

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-29T16:36:12.609Z · LW · GW
Valentine apparently enjoys collecting arbitrary achievements.

"Enjoy" is too simple to describe what's true here. I find myself motivated to collect them. When I get another one, I get an "I'm getting closer!" feeling. Getting them all gives me a few moments of satisfaction, sort of like having carefully organized a silverware drawer might.

And I agree, there's nothing wrong with that per se.

I just don't want that process to hijack my effort to learn French.

[…] it seems that [Valentine is] feeling some guilt about it.

Uh, no. I don't know where you got that impression. I don't feel guilty about eating lotus. I just want to notice when I am, because apparently I can be fed lotus without my asking for it. If I don't notice, then others can tell me what my goals are, even accidentally. I don't like that.

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-29T16:24:32.367Z · LW · GW

Yeah. I think giving up on things that are appealing doesn't work. That's why I titled this about noticing the taste of lotus, rather than noticing lotuses. We have to use proxy goals. The trick is noticing when we're getting Goodharted.

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-28T21:36:42.416Z · LW · GW

Oh. You're asking how noticing lotus flavor plays out in domains that make people addicted to insights?

I don't know. Seems like it'd work the same as in any other domain. Either you notice and have some choice about whether to get sucked in, or you don't.

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-28T21:21:26.612Z · LW · GW
Is delicious food also a lotus?

I think it sort of misses the point to worry about what is or isn't a lotus. The point is to notice what grabs your wanting, and how that affects you later.

Clearly, it doesn't make your life better after you've eaten it, and that seems to be the criterion you use.

Not what I meant to convey. A lotus is something that grabs your wanting directly. When it's designed by someone else, it usually doesn't quite fit what's meaningful to you. Then it's pretty common to find yourself doing whatever it is a lot, and not benefitting much from it, and not caring about that fact.

My point is that there is nothing inherently wrong with arbitrary pleasures that don't improve your life.


The problem is when you develop compulsions. There seems to be a difference between simple desire and compulsive desire.

I don't know what a "compulsion" is. I mean, I know the word. But I don't really know what it is.

The problem I care about here is that things can hijack what you care about, and the method they use for it doesn't correlate much with value delivered. Seems like something worth noticing when it's happening.

Maybe you mean the same thing. I just don't know what I'd use to sort out "simple desire" from "compulsive desire", so to me right now they're just words.

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-28T00:18:39.523Z · LW · GW

Um… what? Can you say more words?

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-28T00:17:30.849Z · LW · GW

Yep. It varies by lotus too. What counts as a lotus, and how strongly, seems to depend on whom we’re talking about.

And clearly there are trends. Otherwise Facebook wouldn’t have its business model.

Comment by Valentine on Noticing the Taste of Lotus · 2018-04-27T20:13:31.715Z · LW · GW

There's an awesome fictional metaphor of this that's really off-color. The online sex humor comic Oglaf has a two-page bit where the poor teased apprentice ends up so very much wanting a pinecone that he does some NSFW things he clearly would rather not have to do. I'll make you do a bit of work to find it though so you can only blame yourself if you don't like what you see there: oglaf dot com slash pinecone

Comment by Valentine on Mythic Mode · 2018-03-29T20:21:51.269Z · LW · GW

I'd be happy to.

…though after reflecting on it and starting a few drafts of a comment here, I'm starting to wonder if I should instead spell it out in more detail in its own post.

The gist of it is that every framework thinks every other framework is seriously missing the point in some way. If you can nail down X's critique of Y and Y's critique of X, and both critiques are made of Gears, you can use those critiques to emphasize a boundary between them and to intentionally switch between them.

In practice, we usually want to switch between a kind of science-based frame and a new hypothetical one we want to test out. When the science frame and the new to-be-sandboxed frame both have allergic reactions to each other, they're never going to mix, and there's no risk of the "Aha, consciousness collapses quantum probability waves!" type error. You can then leverage each frame's critique of the other to switch between them, or to verify which one you're in.

After that you can set up some TAPs to create mental warning bells whenever you enter one, or to remember to verify which one you're in if you want to double-check before doing a given kind of reasoning or making a given kind of decision.

In practice I find this makes each mode more clear and internally consistent, in part by exposing and removing internal inconsistencies. E.g., in the "consciousness collapses quantum probability waves" thing, you can actually find the logical point where "consciousness first" and quantum mechanics slam into one another, at which point you need to separate them more fully. Then it becomes more obvious that the "consciousness first" paradigm doesn't allow us to start with the frame of there being an objective reality that there is subjective experience of. This lets you keep your sanity in quantum mechanics even when sometimes trying on the "consciousness first" paradigm, because the two basically can't coexist in the same effort to explain a given phenomenon.

The only thing I know of that breaks these sandboxes is if you find a Gears-based link between the two. But if you actually find a Gears-based link between the science frame and a new frame, then what you have is a scientific hypothesis. At that point you can test it empirically.

Unless and until you find such a Gears-based link, though, the science frame will find it correct to view those other frames as possibly or definitely wrong or misguided in some way. Hence preemptive naming of such frameworks as "fake": it acts as a reminder to come back to your home ontology and to keep it from being corrupted by these other ones you're playing with.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-19T03:00:34.252Z · LW · GW
Alright, I think I now understand much better what you mean, thank you.

Great. :-)

[…]these immune responses are there for a reason.

Of course. As with all other systems.

Specifically in the case of Looking, what rings my alarm bells is not so much the "this-ness" etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).

The following has been said many times already, but I'll go ahead and reiterate it here once more: I was not trying to claim that Looking is beyond rational explanation.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-18T17:18:10.245Z · LW · GW
My impression from the "phone" allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don't want to acknowledge it because it would disrupt some self-deception.

People don't need to already know it in order for this dynamic to play out. All that's required is that the person have some kind of idea of what type of impact it'll have on their mental architecture — and that "some kind of idea" needn't be accurate.

This gets badly exacerbated if the concept is hard to understand. See e.g. "consciousness collapses quantum uncertainty" type beliefs. That sort of belief does a reasonably good job of immunizing a mind against more materialist orientations to quantum phenomena.

But to illustrate in a little more detail how this might make Looking more difficult to understand, here's a slightly fictionalized exchange I've had with many, many people:

  • Them: "Give me an example of Looking."
  • Me: "Okay. If you Look at your hand, you can separate the interpretation of 'hand' and 'blood flow' and all that, and just directly experience the this-ness of what's there…"
  • Them: "That sounds like woo."
  • Me: "I'm not sure what you mean by 'woo' here. I'm inviting you to pay attention to something that's already present in your experience."
  • Them: "Nope, I don't believe you. You're trying to sell me snake oil."

After a few months of exploring this, I gathered that the problem was that Looking didn't have a conceptual place to land in their framework that didn't set off "mystical woo" alarm bells. Suddenly I'm talking to their epistemic immunization maximizer, which has some sense that whatever "Looking" is might affect its epistemic methods and therefore is Bad™. Everything from that point forward in the conversation just plays out that subsystem's need to justify its predetermined rejection of attempts to understand what I'm saying.

Certainly not everyone does this particular one. I'm just offering one specific example of a type.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-17T02:35:45.757Z · LW · GW
Of course we can use reductionist materialism to reason about processes that happen in our brain when we are doing this very reasoning.

I'm not disagreeing with that. I'm saying that:

  • It's pretty normal to miss the confusion in this case.
  • Looking isn't reasoning.

The reason the paperclip maximizer won't listen is because it doesn't care, not because it doesn't understand what you're saying. So, this allegory would only make sense if, some parts of our mind don't care about the benefits of Looking while other parts do care. It still shouldn't be an impediment to understand what Looking is.

…unless it suspects that understanding what Looking is might make it less effective at maximizing paperclips.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-15T23:05:06.378Z · LW · GW
But my impression is that, while Valentine has expressed approval of your post and said that he feels understood and so forth, he thinks there are important aspects of Looking/enlightenment/kensho/... that it doesn't (and maybe can't) cover.

Doesn't: yes, for sure.

Can't: mmm, maybe? I expect that by the end of the sequence I'm writing, we'll return to Kaj's interpretation of Looking and basically just use it as a given — but it'll mean something slightly different. Right now, I expect that if we just assume Kaj's interpretation, we're going to encounter a logjam when we apply Looking to the favored LW ontology, and the social web will have a kind of allergic reaction to the logjam that prevents collective understanding of where it came from. Once we collectively understand the structure of that whole process, we can smash face-first into the logjam, notice the confusion that results, and then make some meaningful progress on bringing our epistemic methods up to tackling serious meta-ontological challenges. At that point I think it'll be just fine to say "Yep, we can think of Looking as compatible with the standard LW ontology." Just not before.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-15T23:01:43.958Z · LW · GW

Meta: Okay, I'm super confused what just happened. The webpage refreshed before I submitted my reply and from what I could tell just erased it. Then I wrote this one, submitted it, and the one I had thought was erased appeared as though I'd posted it.

(And also, I can't erase either one…?)

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-15T22:59:16.894Z · LW · GW
I have largely lost hope, though, that any of the Enlightened[1] will seriously attempt to explain how, rather than just continuing to tell us Unenlightened[2] folks that our ontology, or paperclip-maximizer-like brain subagents, or whatever, block us from understanding.

I really am trying. When I talk about paperclip-maximizer-like subagents or ontological self-reference, it's not my intent to say "You can't understand because of XYZ." I'm trying to say something more like, "I'd like you to notice the structure of XYZ and how it interferes with understanding, so that you notice and understand XYZ's influence while we talk about the thing."

Right now there's too large of an inferential gap for me to answer the "how" question directly, and I can see specific ways in which my trying will just generate confusion, because of XYZs. But I really am trying to get there. It's just going to take me a little while.

One specific possibility relevant to those footnotes is worth being explicit about: it could be that the Enlightened have genuine insights that they have gained through their Enlightenment -- but that some of the Unenlightened have some of the same insights too, but it's difficult to recognize that one insight is the same as the other.

Strong agreement.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-15T22:25:49.848Z · LW · GW
I have largely lost hope, though, that any of the Enlightened[1] will seriously attempt to explain how, rather than just continuing to tell us Unenlightened[2] folks that our ontology, or paperclip-maximizer-like brain subagents, or whatever, block us from understanding.

I really am sincerely trying. In this case there's a pretty epic inferential gap, and I'm working on bridging that gap… and it requires first talking about paperclip-maximizing-like mechanisms and illusions created by self-reference within ontologies that one is subject to. Then I can point at the Gödelian loophole, and we can watch our minds do somersaults, and we'll recognize the somersaults and can step back and talk coherently about what the existence of the ontological wormhole might mean for epistemology.

Or at least that's the plan.

And… I recognize it's frustrating in the middle. And if I were more clever and/or more knowledgeable, I might have seen a way to make it less frustrating. I'd rather not create that experience for y'all.

FWIW, I don't think the Unenlightened[2] can't understand where I'm going. I just need some conceptual structures, like the social web thing, to make where I'm going even possible to say — at least given my current skill with expressing this stuff.

Still, I continue to harbour some hope that Valentine's future articles may be, um, enlightening.

Ha! :-)

I hope so too.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-15T22:00:24.033Z · LW · GW
Or is it something more like, LW's approach appreciates "we create objects in order to think" on an intellectual level but not on a practical level?

That one.

Though to be clear, I'm not trying to talk specifically about the "there are no objects" thing exactly. I was using that as an example of something seen via Looking that I imagine sounds kind of crazy or nonsensical.

But I do mean that LW culture occurs to me as being subject to its ontology, and to the extent that there's discussion of this, that discussion is pretty reliably done within that ontology. This gives the illusion of it being justified (when that's actually just a consistency check) and makes the ontology's blindspots incredibly difficult to point out.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-15T17:36:46.166Z · LW · GW
It sounds like Looking is a skill that lets someone have more introspective access to their own neural network structures. If this is a correct understanding, it seems perfectly compatible with LW's current approach to ontology, or at least the approach laid out in Eliezer's Sequences (with one caveat being that I think we should be careful/skeptical about whether someone purporting to be Looking is really introspecting parts of their neural network structures, or merely doing some form of epistemic wireheading). Do you agree?

Hmm. I need to answer this in two pieces simultaneously:

  • The short and slightly deceptive answer is "Yes I agree." A more careful answer: From within LW's current approach to ontology, the restriction of Looking to that ontology works perfectly well, although there are some things (like what Eric S. Raymond refers to in Dancing With the Gods) that will at best make sense while remaining largely inaccessible.
  • Your very first sentence here presupposes the standard LW ontology: "It sounds like Looking is a skill that lets someone have more introspective access to their own neural network structures." The structure of your question then goes on to ask about Looking's compatibility with that ontology from within that ontology. The answer has to be "yes", because the question makes sense within the ontology. This generates a "Get out of the car" problem. This isn't a huge problem right here and now, but it will be a problem down the road when I start more explicitly pointing at some results of Looking at ontologies.
Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-14T22:42:01.108Z · LW · GW

I like this. I largely agree.

I'd like to pinpoint a few differences I notice. I hope the collective here takes this as me coming from a spirit of "Here's the delta I see" rather than "I disagree and here's why." By and large I really like the clarity Kaj has brought to this.

First, a meta thing:

While I liked Valentine’s recent post on kensho and its follow-ups a lot, one thing that I was annoyed by were the comments that the whole thing can’t be explained from a reductionist, third-person perspective.

I didn't mean to convey that it can't be explained this way. I now think I was combining a few different things in a way that accidentally made it hard to understand:

  • One key thing I now see is that Looking doesn't require self-reference — but most of the interesting applications of Looking that I'm aware of do require navigating self-reference. An example of this is the "get out of the car" problem. (I'll have more to say about Kaj's interpretation of that in a bit.)
  • The main thrust of what I'm poking at is a collection of results of Looking at ontology (whereas here Kaj focuses mostly on Looking at suffering). If we drag in an ontology to say "Okay, here's what Looking is, and now we've nailed it down", and then you use that definition of Looking instead of the phenomenological skill, then it's going to be enormously hard to Look at the ontology used to define Looking. And in this particular case, people seem to be prone to not noticing when they've made this error. (Again with the car/phone analogies.) I'm particularly concerned here because the culture around LW-style rationality seems to emphasize a very specific and almost mathematically precise ontology in a way that is often super useful but that I don't think is a necessary consequence of the epistemic orientation. That made me really hesitant to put a bunch of effort into spelling out what Looking might be within that favored ontology, since the whole point is to notice restrictions on epistemic strength imposed by ontological rigidity. I was (and am) concerned about early attempts to explain this stuff locking out a collective ability to understand.
  • With that said, my personal impression had been that it's actually quite easy to see what Looking is and how one might translate it into reductionist third-person perspectives. But my personal experience had been that whenever I tried to share that translation, I'd bounce off of weird walls of misunderstanding. After a while I noticed that the nature of the bounce had a structure to it, and that that structure has self-reference. (Once again, analogies along the lines of "get out of the car" and "look up from your phone" come to mind.) After watching this over several months and running some informal tests, and comparing it to things CFAR has been doing (successfully, strugglingly, or failing to do) over the six years I've been there, it became obvious to me that there are some mental structures people run by default that actively block the process of Looking. And for many people, those structures have a pretty strong hold on what they say and consciously think. I've learned to expect that explaining Looking to those structures simply will never work. (There are other structures in human mind design, though. And I claim there's a pretty reliable back door to such self-referential architectures. But explaining that explicitly seems to run into the same communication problem… which is why I'm writing this meta-ontology sequence.)

So… I'm happy to go with the main thrust of what I receive Kaj as expressing here. I just also want to add an asterisk saying something like "Beware, we've now entered a realm where the illusion of safety has become stronger, and I fear this will make what's coming that much more painful by comparison and thus harder to understand."

I believe that this kind of thing is what Valentine means when he talks about Looking: being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.


…with a caveat that I'm pretty sure Kaj gets and I think even said but that I don't know if the casual reader will reliably catch:

On the inside, before you Look, the thing you're about to Look at doesn't look on the inside like "high-level cognitive content". It looks like how things are. This ends up with me saying things that sound kind of crazy or nonsensical, but to me are obvious once I Look at them. (E.g., there are no objects. We create objects in order to think. Because language is suffused with object-ness, though, I don't know of any coherent way of talking about this.)

Understanding suffering is a special case of Looking, but a sufficiently important one that it deserves to be briefly discussed in some detail.

I want to highlight this. I quite agree, suffering is a really important special case, and I'm delighted with what Kaj did with it. And also, it's a special case. Nearly all discussion of enlightenment-flavored stuff I've encountered has been about the alleviation of suffering or the cultivation of happiness, and I think these are great and important things to emphasize… and I think there's something else in this domain that's more central to the rationality project here. (Although I do think that the path of alleviating suffering via e.g. Looking at the nature of the self does result in a bunch of the right kind of epistemic updates. I just suspect it's insufficient.)

So what’s all this “look up” and “get out of the car” stuff? […] You can’t defuse from the content of a belief, if your motivation for wanting to defuse from it is the belief itself.

This comes across to me as a great explanation of a special case. Kaj might mean the general thing, but I'm not sure.

I'm going to claim there are two kinds of problems that this "get out of the car" thing is pointing at:

  • Structural self-reference in defusion. In other words, if a belief you're fused with is being used somehow in the effort to defuse from it, then the defusion is likely to fail. Kaj gives one type of example of this. Another one is if the belief provides the framework by which you're orienting to the possibility of Looking at the belief in the first place. I have a post planned about two or three out that will go into more detail about this, but to (maybe dangerously?) gesture at the thing: Starting from reductionist materialism to frame the question of what our brains are doing when we Look at reductionist materialism yields a strange loop that often causes the process to glitch (and can reinforce an impression of real-ness that's strange the way believing in objectively existing objects is strange).
  • Orthogonality. If I try to argue with a paperclip maximizer about how maximizing paperclips isn't all there is to life, it will care to listen only to the extent that listening will help it maximize paperclips. I claim that by default, human mind design generates something analogous to a bunch of paperclip maximizers. If I'm stuck talking to one of someone's paperclip maximizers, then even if I see that there are other parts of their mind that would like to engage with what I'm saying, I'm stuck talking to a chunk of their mind that will never understand what I'm saying. (I'll have more to say about this in my next post (or the one after it if I need to split them again).)

The second case is pretty straightforward to bypass by Looking. The first is much trickier, but I think might be doable if you track a kind of phenomenological feedback loop that the self-reference generates and use that as a warning sign. (Unfortunately, I think there are structural reasons why the warning sign can't say anything much more specific than "Do something different." In short, the part of the mind that's trying to work out what to do is almost always using the belief in question, so no amount of instruction is going to help it get a meaningfully useful answer.)

All of this particularly applies for trying to overcome suffering. Because remember, suffering is caused by a belief that pain is intrinsically bad. That belief is what causes you to try to flinch away from pain in a way which, by itself, creates the suffering.

I like this. I hadn't quite thought of it this explicitly.

It also suggests some hope in approaching the domain from the angle of a desire for good epistemics instead, which is roughly where I've been coming from. I haven't yet noticed any self-referential glitch, instead finding things like the Litany of Tarski.

…but knowing the rhythm of this domain, I suspect this is just a description of my current ignorance.

But if you cared about things like saving the world, then you will still continue to work on saving the world, and you will be Looking at things which will help you save the world - including ones that increase your rationality.


I've come to learn that the communities that talk most about enlightenment-related things are very particular about the word "enlightenment" and seemed to bristle at how I used it. So I'll add an adjustment to language (but not to what I have been meaning to convey) and clarify that I don't mean to imply that I am fully enlightened. I still suffer, I still usually operate under the delusion that I have a self (though I've seen through that one twice so far), and I haven't Looked carefully at impermanence or unsatisfactoriness as yet.

And with that said, I strongly resonate with this sentiment.

I'm writing what I'm writing, and I continue to teach at CFAR, and do all the things I'm doing, because I care to do some world-saving things.

And I see some things, via Looking, that I think are very important to share.

(It just takes a while to build a scaffold that might work for sharing it. And much appreciation for people like Kaj who build better scaffolds than I've managed so far!)

Comment by Valentine on Mythic Mode · 2018-03-14T04:59:42.313Z · LW · GW

I generally agree. One nitpick:

Your s1 doesn't interface with reality, it interfaces with the real-world omega. Remove all signs of omega, and you're left with no handles for action at all.

My impression is that S1 solves roughly two kinds of problems: movement and other people. I think the latter tends to dominate, sometimes overwhelmingly. But with or without myths, you can still lift your arm.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-14T01:46:23.977Z · LW · GW

Yep. I feel understood.

Comment by Valentine on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-14T01:29:48.919Z · LW · GW
The "no-self" thing was still getting interpreted in terms of my existing ontology, rather than the ontology updating.


I'll finish reading the other comments and then, time permitting, I'll add my own.

I'll just note for now that there's a kind of "being clear" that I think is dangerous for rationality, in a way analogous to what you describe here about no-self. The sketch is something like: if an epistemology is built on top of an ontology, then that epistemology is going to have a hard time with a wide swath of ontological updates. Getting around this seems to require Looking at one's ontologies and somehow integrating Looking into one's epistemology. Being required to explain that in terms of a very specific ontology seems to give an illusion of understanding that often becomes sticky.

Comment by Valentine on The Intelligent Social Web · 2018-02-28T03:06:08.464Z · LW · GW

For what it’s worth, the mythic mode name I usually give the social web is “Fate”, and the mythic name I give scripts played out in the web is “fates”. As in, “It’s his fate to be poor, so Fate will see to it that his business does not succeed.”

Comment by Valentine on The Intelligent Social Web · 2018-02-26T00:48:34.145Z · LW · GW

Woohoo! I am pleased to be wrong here!

Comment by Valentine on Mythic Mode · 2018-02-26T00:41:34.633Z · LW · GW
I have just recently read Meditations on Moloch and I agree it is fascinating post, but also entirely misses the point. Competition does not make you sacrifice your values[…]

Scott wasn't suggesting that competition alone makes people sacrifice their values. He was suggesting (as I understand it) that the following configuration tends to suck for everyone pretty systematically:

  • You have a bunch of agents who are in competition for some resource.
  • Each agent is given an opportunity to sacrifice something important to them in order to gain competitive advantage over the other agents.
  • The agents can't coordinate about who will or won't take advantage of this opportunity.

The net effect is generally that agents who accept this trade tend to win out over those who don't. This incentivizes each agent to make the trade so that they can at least stay in competition.

In particular, this means that even if there's common knowledge of this whole setup, and there's common knowledge that it sucks, it's still the case that no one can do anything about it.

That, personified, is Moloch.

Comment by Valentine on Mythic Mode · 2018-02-26T00:32:24.231Z · LW · GW

I seriously doubt that'll ever happen. The closest I would expect is if the community schisms on an axis like "Is mythic mode okay to use?" and the mantle of "rationalist" is seen as moving with the "yes" camp. And I think that whole schism would be dumb and would make both groups dumber regardless of what happens to which labels.

Comment by Valentine on The Intelligent Social Web · 2018-02-26T00:22:40.496Z · LW · GW

Okay, persuaded. How's this?

(Unfortunately, this breaks links to this post…)

Comment by Valentine on Mythic Mode · 2018-02-25T01:34:41.145Z · LW · GW

That might well be. I haven't a clue which hash functions do what relative to one another. But yeah, the thing it encodes is English ASCII text.

Comment by Valentine on The Intelligent Social Web · 2018-02-24T23:49:26.781Z · LW · GW
Like, in the kensho post it was clear that you were afraid of falling into the "I am looking higher on my screen" trap, so it seemed like you had some kind of notion of what that would non-metaphorically look like, which is what I was trying to get at.

Oh! Oh jeez. That makes a lot of sense. I can give tons of examples of that! That's a very different thing in my mind.

Heh, although, I should warn that giving examples of this is prone to starting arguments. Just tag all of this as "Val's interpretations of the world" and we're good. :-)

So with that, here's a few:

  • For a few months before my kenshō, ialdabaoth kept telling me that I had a social strategy that was being really annoying to him, something something sexual competition something something. I kept listening to what he was saying and thinking carefully about it, and I tried to do focusing on it, but it felt weird and I kept thinking that he was probably wrong (but as a general policy I kept in mind that I might just be deluded). This contrasts with right after the kenshō: one of the first things I Looked at was my sexual strategy system. If I remember right, I laughed and said something like, "Oh, that poor Valentine creature! It's like a leg that twitches until it fucks!" I ended up apologizing to ialdabaoth because I could clearly See what he was talking about now. We've been great on that dimension ever since. But yeah, I think it'd be fair to say he was trying to get me to Look and I was doing something that seemed perfectly sensible to me in response to that, but it sure wasn't Looking.
  • Sometimes I try to convey something one could loosely tag as "sovereignty" but is really about Seeing one's own existence and what that implies. A downstream effect of it is that there's now a meaningful difference in my mind between a "decision" that's about navigating the social web, versus a dedication that will in fact not even flinch in the face of temptation. I totally used to conflate those two, and I now think that most folk around me do too most of the time. I end up saying "No, really, choose. It's okay." And what I get back is… someone trying to sound confident or assertive as they strongly say one option, but it's really obvious that they haven't done anything different internally and are going to keep doubting themselves.
  • There's a tendency in authentic relating practices, or in Circling, where folk will make eye contact and often end up holding it for long periods of time. Many, many times, I've seen people then try to don a "loving look". Sometimes this is sincere, but sometimes it's something that folk have picked up from the culture as "what ya do, ya know?" I and others who know how to See the relevant thing here sometimes try to point out to such folk that e.g. the point is to attend to their experience rather than to have an effect on the other person. Sometimes they adjust in a seemingly useful way… and sometimes they just switch the strategy they're using to come across well, seeming to think that they're following the instruction.

Hopefully that clarifies rather than confuses. It's just that… in my mind, things you're likely to see me doing that you'd mistake for Looking are a really different category from things that people are likely to mistake for Looking in themselves.

Comment by Valentine on Mythic Mode · 2018-02-24T21:26:38.584Z · LW · GW
If you want to give your argument some extra oomph beyond what the evidence suggests, why do you want that? You could be wrong, and make many people wrong. Better spend that extra time making your evidence-based argument better.
Even shorter: I don't want powerful weapons to argue for truth. I want asymmetric weapons that only the truth can use. Myth isn't such a weapon, so I'll leave it in the cave where it was found.

I deeply respect that, and your choice.

I think I want the same end result you do: I want truth and clarity to reign. This has led me to intentionally use mythic mode, because I see the influence of things like it all over the place, and I want to be able to notice and track that and get practice extracting the parts that are epistemically good. And I need a cultivated skill in countering uses of mythic language that turn out to have deceived (or were intentionally used to deceive).

But I think it's totally a defensible position to say "Nope, this is too fraught and too symmetric, I ain't touchin' that" and walk away.

Comment by Valentine on Mythic Mode · 2018-02-24T21:20:57.291Z · LW · GW
Isn't it the case that our reasoning about that is in itself part of the role we are currently playing?

Yep. That's why this is a weaker partial solution than Looking is.

Is there even such a thing as "the things our inner selves want, independently of the role"?

I claim yes, kind of.

There's secretly a type error embedded in here, but language is horrid for pointing this particular thing out, so I'll just gesture toward the wave of mystical stuff that keeps saying "there is no self" and claim that there's some implicit confusion in the ontology I read being used here.

But if we ignore that and round it to the nearest true thing as I understand it… then yes, your "inner self" can want things in a way that isn't derived from your position in the web. That's part of why Looking is even possible.

Comment by Valentine on Mythic Mode · 2018-02-24T21:13:31.590Z · LW · GW

A lot of what you say here is why I think it's maybe really important to learn how to sandbox mythic mode, even if you don't want to intentionally use it. Otherwise I think something like it seeps into your system anyway.

You seem to have re-derived Jungian Archetypes with the distributed network / Omega playing the role of the collective unconscious.

Yep! I debated framing it this way, but I eventually decided against it because I thought it would be distracting here. And as you say, I rederived the ideas, and then later noticed that they corresponded to my read of what Jung was talking about… and not having really read Jung in any depth, I didn't want to tie my ideas to other things he might have claimed.

I think the main difference is that you posit the distributed intelligence to be able to predict people's actions.

Mmm… not exactly. More like, I posit that it has scripts, and guides people to play them out. This often involves an element of predicting people's actions, but it's more a matter of predicting what kinds of actions someone is likely to take. "What kind of person is this?" rather than "What is this person going to do?"

How I interpret your advice here is that if we find ourselves unhappy with our state of affairs we should try to find locations where there are either forks in the paths or places where they are quite close together to make a quick jump...never straying outside of a path for very long.

I think that's close enough. I'd just add the caveat that by my model, people mostly can't intentionally stray from paths. There are exceptions, but they're relatively rare, and when done without finesse it can create some pretty ferocious responses. Like, I suspect that psychopathy is in part being unaffected by Omega's tugs, and people generally really really don't like others to be quite that free.

The main problem I see is that we need to be able to make predictions in the spaces between narratives, which according to this framework is difficult if not impossible.

Yep, I agree, that's important, and the framework says that it's extremely difficult for the most part (except where it doesn't matter to the "scene", or where it's about things that aren't subject to scripts the way physics isn't). This is another way of stating what I see as a core challenge for a mature art of rationality to gracefully navigate.

Comment by Valentine on Mythic Mode · 2018-02-24T21:03:52.156Z · LW · GW
things people love are dangerous

So… we should respond by removing the things people love?

I suspect I just disagree with your claim. But even if you were right, I don't think the right answer is to ban beloved things. I think it's to learn how to have beloved things and still be sane.

By my own personal judgment, rationalist culture developed a lot of epistemic viciousness by gripping hard onto the chant "Politics is the mind-killer!" and thereby banning all development of the Art in that domain. The Trump election in 2016 displayed that communal weakness in force, with rationalists getting sucked into the same internal signaling games as all the other primates, and then being shocked when he won.

I mean, think about that. A whole community that grew out of an attempt to practice an art of clear thinking, one that supposedly pays rent, largely made the same wrong prediction. Yes, I know there are exceptions. I live with one of them. But that just says that some people in that community managed not to get swept up.

This doesn't bode well for a Calvinist approach to epistemic integrity.

(…and that method is a lot less fun!)

Comment by Valentine on Mythic Mode · 2018-02-24T20:45:08.855Z · LW · GW

While my primate political side really likes the alignment and agreement, I want to encourage good epistemic norms here. So, I'll ask an impolitic question:

What gives you the impression that your ability for independent thought and action has "gone way up"? In particular, how do you know that you aren't kidding yourself? (Not meaning to claim you are! Just trying to nudge toward sharing the causes of your belief here.)

Comment by Valentine on Mythic Mode · 2018-02-24T20:38:38.164Z · LW · GW
I understand that you're trying to sandbox this reasoning to "mythic mode", but the way you write about it in this post (while presumably not in mythic mode) makes it seem like the sandbox might be a bit leaky.

I was not, in fact, staying consistently outside of mythic mode when writing this post. I didn't think the thing itself would convey well if I had been.

Instead, I tried to weave in and out of it while highlighting signposts. When I talk about coincidences lining up and how one gets used to things like that while in mythic mode, or when I talk about seeing the gods… that's operating mythically. When I then talk about how there's an easy way of seeing how this could come from cherry-picking which things are significant, that's outside of mythic mode.

I haven't checked carefully, but I'm pretty sure I could insert <mythic> and </mythic> pseudo-HTML tags throughout the OP.