>This cognitive phenomenon is usually lumped in with “confirmation bias.” However, it seems to me that the phenomenon of trying to test positive rather than negative examples, ought to be distinguished from the phenomenon of trying to preserve the belief you started with. “Positive bias” is sometimes used as a synonym for “confirmation bias,” and fits this particular flaw much better.
Subtle distinction I almost missed here. Worth expanding.
I think this page would be more useful if it linked to the individual sequences it lists.
As far as I've seen, there is no page that links to all sequences in order, which would be useful for working through them systematically.
This works on a number of levels, although perhaps the most obvious is the divide between styles of thought on the order of "visual thinker", "verbal thinker", etc. People who differ here have to constantly reinterpret everything they say to one another, moving from non-native mode to native mode and back with every bit of data exchanged.
Have you written more about those different styles somewhere?
And this is how talking is anchored in Costly Signaling.
(Note that "I dunno, probably around 9 pm." is still an assurance, though of a different kind: You're assuring that 9 pm is an honest estimate. If it turns out you make such statements up at random, it will cost you.)
And that's why talking can convey information at all.
TL;DR It often takes me a bit to grasp what you're pointing to.
Not because you're using concepts I don't know but because of some kind of translation friction cost. Writing/reading as an ontological handshake.
For example:
>How does task initiation happen at all, given the existence of multiple different possible acts you could take? What tips the mind in the direction of one over another?
The question maps obviously enough to my understandings, in one way or another*, but without contextual cues, decoding the words took me seconds of marginally-conscious searching.
* I basically took it as "How do decisions work?". Though, given the graphic, it looks like you're implying a kind of privileged passive state before a "decision"/initiation happens, but that part of the model is basically lost on me because its exact shape is within a meaning search space with too many remaining degrees of freedom.
>There are four things people confuse all the time, and use the same sort of language to express, despite them meaning very different things:
I think my brain felt a bit of "uncertainty about what to do with the rest of the sentence", in a "is there useful info in there" sense, after the first 9 words. I think the first 9 words sufficed for me; they (with context below) contained 85% of the meaning I took away.
>Whether you're journaling, Internal Double Cruxing, doing Narrative Therapy, or exploring Internal Family Systems, there's something uniquely powerful about letting your thoughts finish.
Strikes me as perhaps a plain lack of Minto (present your conclusion/summary first, explanations/examples/defenses/nuances second, for that's how brains parse info). For the first half of the sentence my brain is made to store blank data, waiting for connections that will turn them into info.
Also reminded of parts of this, which imo generalizes way beyond documentation.
Dunno if this is even useful, but it'd be cool if you had some easy-to-fix bottlenecks.
>But those are all just a few ways to unblock the initial spark/decision/compulsion to do something you deliberately plan to do. If you don’t focus too much on deliberate steps of an action, you might find yourself able to do them more easily by just following notions; “non-doing,” or wuwei, is a phrase often used for this state. Of course, you also might find yourself non-doing something else other than the thing you “intend” to (that’s rather the point).
>But that this “cheat” can work at all indicates again that there’s something about deliberate attention and focus that can evoke things which demotivate us, or paralyze us with indecision or fear. Acting before your conscious thoughts can get in the way is, in many ways, like putting yourself in a state of total freedom from consequences; consequences only impact our behavior when we know about and believe in them, after all. This is a great strategy when the risks or consequences aren’t “real.”
This just gave me a massive "click".
Meta-feedback: I find your content really good conceptually, but unfortunately harder to read than other top posters'.
~Don't aim for the correct solution, (first) aim for understanding the space of possible solutions
Seconded (after working with this concept-handle for a day). This here seems to be the exact key for (dis)solving the way my brain executes self-deception (clinging, attachment, addiction, …).
(I'm noticing that in writing this, my brain is fabricating an option that has all the self-work results I envision, without any work required)
I find that [letting go of the (im)possible worlds where I'm not trapped] helps reframe/dissolve the feeling of trappedness.
However, that kind of letting go often feels like paying a large price. E.g. in case of sensory overload it can feel like giving up on having any sense of control over reality/sensory-input whatsoever.
Does that maybe get at what you were asking?
It all does! Again, thanks for sharing.
Exciting stuff. This feels like a big puzzle piece I'd been missing. Have you written more about this, somewhere?
~vague gesturing at things I find interesting:
-How do different people (different neurotypes? different childhoods? personality types?) differ in the realities they want to share?
-How do shared realities relate to phenomena like extraversion, charisma, autism?
-What's the significance of creating shared realities by experiencing things together?
Besides, do you use other neglected people-models that are similarly high-yield? Vague gesturing appreciated.
Problem: Abyss-staring is aversive, for some (much) more than for others.
In my case, awareness hasn't removed that roadblock. Psychedelics have, to some degree, but I find it hard to aim them well. MDMA, maybe?
Example: Dividing the cake according to NEEDS versus CONTRIBUTION (progressive tax, capitalism/socialism, …)
Both, I'd think.
Also this entire post by Duncan Sabien
(@ Tech Executives, Policymakers & Researchers)
Back in February 2020, the vast majority of people didn't see the global event of Covid coming, even though all the signs were there. All it took was a fresh look at the evidence and some honest extrapolation.
Looking at recent AI progress, it seems very possible that we're in the "February of 2020" of AI.
(original argument by Duncan Sabien, rephrased)
(@ Tech Executives, Policymakers & Researchers)
If you genuinely believe that the world is ending in 20 years, but are not visibly affected by this or considering extreme actions, people may be less likely to believe that you believe what you say you do.
IMO, that's not the bottleneck. The bottleneck is people thinking you're insane, which composure mitigates.
"Every paper published is a shot fired in a war"
Epistemic virtue isn't a good strategy in that war, I suspect. Voicing your true best guesses is disincentivized unless you can prove them.
Fishbach & Dhar find that re-framing the achievement in terms of showing commitment to values, rather than progress toward goals, has a tendency to reinforce the behavior rather than the paradoxical self-licensing effect.
Hm. Does that mean "Rationality is about winning" is ultimately a bad mantra?
Good stuff.
Speculation time: Would this predict that shame-prone people have bigger, deeper identities? Identities seem like a good place for storing those justifications, and those justifications look like a candidate for the reason we have identities in the first place.
Shame appears to be a reaction to perceived norm violation, so shame-prone people would be those with strong and restrictive internalized social norms.
I don't mind self-help-books-level advice if it pointedly helps me improve my mental hygiene. This did.
Which is perhaps most efficiently achieved by killing the wisher and returning an arbitrary inanimate object.
Personal experience / opinion: For me, sleeping positions are an issue of expanded (back) or contracted (side) body language.
In an expanded state I seem to have a lower threshold for cognitive dissonance. I.e. my mind is less prone to indulging in pleasant-but-at-odds-with-reality thought trains. So I, for mental health reasons, try to fall asleep on my back when I can manage to tolerate the expanded state.
Powerful improv metaphor. Powerful post.
Ah, but if we’re immersed in a culture where status and belonging are tied to changing our minds, and we can signal that we’re open to updating our beliefs, then we’re good… as long as we know Goodhart’s Demon isn’t lurking in the shadows of our minds here. But surely it’s okay, right? After all, we’re smart and we know Bayesian math, and we care about truth! What could possibly go wrong?
The trickiness of roles that involve disidentifying from specific roles, or from the concept of roles in general, must not be underestimated. That's especially true for roles that seem opposed to the prevalent social structure.
I'm also reminded of Transactional Analysis. In particular, Games and Life Scripts.
People are just really bad at seeing the merits of things they aren't already in favour of.
I'd consider that an important factor in whether something ends up being an antimeme in a given culture.
In my understanding of the term, the most straightforward definition of antimemecy is "very low cultural infection rate".
(And implicit in the discussion so far seems to have been a certain expected usefulness of mentioned examples. Maybe we should focus the conversation on things with high expected value and low cultural infection rate / overall prevalence in western culture.)
My impression is that in-group status is always, inherently zero-sum.
While the influence/worth distinction may be a relevant one, I think it'd be relative worth that satisfies status-as-social-need.
Praise certainly meets other emotional needs, though, and it may well be rational to have more of it.
>It would tend to have the effect of making most people give up on the idea of antimeme
Yes, that effect on most people is kinda in the nature of antimemes.
In a LW context I wouldn't paint the picture too black though. The average poster's epistemic standards are high. High enough to warrant a mindful reader's second look at the antimemes they're proposing.
The corresponding discussions would certainly not be frictionless. That doesn't mean they couldn't provide some high-value insight to a few people, though.
To me this looks like the stuff LW is all about. I mean, aren't we looking at low-hanging fruit hidden from vantage points of naive epistemology?
>Unlike good confabulations, antimemetic confabulations will make you increasingly uncomfortable. You might even get angry. The distractions feel like being in the brain of a beginner meditator or distractible writer. They make you want to look away.
>You can recognize this pattern as an antimemetic signature. People love explaining things. If you feel uncomfortable showing off your knowledge, it's probably because you have something to hide.
That seems useful. Cognitive Dissonance as a cognitive Code Smell.
I'd love to read more on the topic.
A longer list of what LW folk consider to be antimemes would be pretty interesting, too. I like to think I gained some insight from the mention of Lisp and entrepreneurship.
>To prove it wrong it should be a meme that is complex and difficult to understand.
I'd propose as examples "most stuff taught at university". Even outside of teaching institutions, complex ideas commonly spread memetically if the incentives for acquiring them are sufficiently visible from the outset. Think Evolutionary Theory, Object-Oriented Programming, or Quantum Physics.