Posts

The Curse Of The Counterfactual 2019-11-01T18:34:41.186Z
Is there a definitive intro to punishing non-punishers? 2019-10-31T20:20:30.653Z
TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers) 2014-11-22T19:51:37.699Z
[Link, Humor] The Best Christmas Ever 2011-12-25T23:14:04.223Z
[Link] The Typical Mind Fallacy, Illustrated 2011-09-02T17:22:44.526Z
Necessary, But Not Sufficient 2010-03-23T17:11:03.256Z
Improving The Akrasia Hypothesis 2010-02-26T20:45:19.942Z
The Dirt on Depression 2009-07-15T17:58:44.128Z
The Physiology of Willpower 2009-06-18T04:11:52.445Z
Rationality Quotes - June 2009 2009-06-14T22:00:28.697Z
Spock's Dirty Little Secret 2009-03-25T19:07:21.908Z

Comments

Comment by pjeby on Notes on Forgiveness · 2021-01-26T03:48:30.559Z · LW · GW

My take: "forgiveness" is a big confused word that applies to lots of things and isn't terribly useful.

The thing that is useful that sometimes gets called "forgiveness" (or thought of as part of it) is the part where you stop thinking someone shouldn't have done something, or that they should have done something else.

Most concepts of forgiveness (and related concepts in this article) assume that ceasing to churn over a counterfactual also means that you don't take action against the "guilty" party. But this is not necessarily true.

If it's strategically wise to punish someone for defecting, then it will continue to be wise whether you experience the emotion of a grudge or not. But instead of feeling compelled to action, one can consider the decision with less bias in a particular direction.

Another assumption often made is that keeping a grudge has benefits. As summarized in this article, one view holds that keeping a grudge allows you to remember something, treat it as important, and be more aware of your values.

And all three of these ideas are complete rubbish.

First, removing a grudge does not change your ability to remember what happened, or act on pattern recognition. Quite the opposite in fact, since we can think more resourcefully and consider a broader range of options when not under the influence of a grudge.

Second, saying that grudges help you treat something as important is a circular argument, as it presupposes that treating the thing as important is important, no matter how unimportant it might actually be if you didn't have the grudge. As the story goes of the woman who didn't like peas: "I'm glad I don't like peas, because if I liked them, I might eat them, and I don't want to eat them, because I don't like them!"

In truth, the only thing grudges support the importance of is themselves... and they do so distinct from whatever actual grievance or problem might need addressing. A grudge is an insistence that reality should have been different than it was, while a grievance or problem represents a desire to change something in the present and future. Dropping the grudge merely acknowledges the truth about the current state of affairs, rather than continuing to "rehearse" the past. It doesn't magically make any existing problem disappear or become unimportant; it merely removes a perceptual bias from your thinking about the current state of things.

Third, and finally, grudges do not help you become more aware of your values or avoid doing bad things. They might affect which bad things you do, though: holding a grudge inclines you to moral license regarding the subject of your grudge, or to an increased sense of entitlement generally.

In short, all three ideas are confusion and rationalization -- and grudges are the king of rationalization generators. A grudge will do almost anything to sustain itself, and rationalizing reasons why grudges are good is only the beginning.

Map-Territory Confusion

Of course, these ideas also reflect confusion: people routinely equate their grudges (maps) with their grievances (territory). A grievance is "this thing happened, and I need to do something about it." A grudge is, "this thing never should have happened, and somebody must be punished". The two are actually mutually exclusive as mental experiences, but in the grudge state we tend to assume that giving up the grudge equals giving up on taking action: that if the grudge did not exist, someone would get away without being punished for their badness, and that would be bad.

This is why instructions on forgiveness are so convoluted and complicated. People think "forgive" means to forego corrective action, but this is not necessary in order to gain the emotional and health benefits. Instead, all that is required is to exit the "denial, anger, and bargaining" stage one is in surrounding the loss.

Our grievances are losses. They are things that actually happened and had an impact. But our grudges are actually a kind of angry, bargaining denial: we feel that if only we can punish somebody enough, then somehow our original loss will be canceled out, and balance restored to the universe.

In effect, a grudge is a stuck form of grief. We have not yet acknowledged the loss, and are trying to make it "not count". This is a significant distraction from actually moving forward with one's life (including addressing or redressing the loss), because it is focused on punishment instead of practicalities.

In the modern environment, more often than not there is almost no benefit to an individual in punishing people. Most of the entities that inspire our grudges are large corporations we have no real ability to punish, or else people being Wrong On The Internet. In neither case will our instinct to punish someone actually serve us well. Yelling at the rep or flaming the trolls might make us feel momentarily better, but it won't improve our actual circumstances, which would be better served by strategic action rather than instinctual action.

(And, better yet, when you let go of the instinct to punish, you more often than not find that it was not actually something very important in the grand scheme of things, or that at least you have better things you could be doing with your time.)

Better Ways To Forgive

Early on in my self-help research and experimentation, I discovered that forgiving myself for things that happened to me when I was younger often had a profound impact on my self-esteem and subsequent behavior. (I released some of those early results in a workshop dubbed "Instant Self-Esteem".)

After some experiments with other people, though, I came to realize that my definition of "forgiveness" was vague, and I often had to use descriptions like, "just let it go, like you're literally dropping the baggage".

Since then, I've found the Work of Byron Katie, which is a much more precisely targeted process with much higher repeatability than my vague instructions or those of the nine-step process mentioned in this article. It's fast, it's simple, and it's teachable. (It's also being studied by psychologists under the name MBSR: Mindfulness-Based Stress Reduction, even though IMO the mindfulness part seems more like fashionable branding than anything else; you do have to be mindful to do it, but you have to be mindful to do almost anything else that changes things, so it's not a very useful name.)

Beyond that, the Work is a generally useful Ritual For Actually Changing One's Mind. As its creator describes, it's not about letting go of your thoughts, but getting your thoughts to let go of you. For LWers, I suggest also reviewing my notes on doing the Work as they provide a more reductionistic view of certain steps in the process that may be more comprehensible than the sometimes vague or woo-ish sounding descriptions in other sources.

I would also encourage LWers to entirely taboo the concept of "forgiveness" and instead simply consider whether they are rehashing the same experiences over and over while experiencing anger, suffering, or the desire to see some kind of "justice" (i.e. punishment) done. If this is the case, you can probably benefit from a bit of mental surgery to remove the grudge, as it will restore a state where you can consider your options and weigh your values without the giant finger-on-the-scale that is the grudge monster screaming "Bad! Shouldn't happen! Must Punish!" in your ear 24/7.

(Especially since for many people, the #1 person the grudge monster wants to punish is themselves.)

Grudges As Moral Wireheading

In the years since my first experiments with forgiveness, I believe I've tabooed the idea for long enough that I can define the essential concept in a more reductionist way.

Specifically, a grudge is rooted in the idea of "things you believe someone deserves to be treated badly for".

Or, to reduce it further: the source of a grudge is a belief that an act grants moral righteousness to those who treat the actor badly.

It's not enough that the bad treatment might be useful as a deterrent, or balance the scales of fairness, or serve as an example to others.

Rather, the thing that makes a grudge is the sense of vindication and moral elevation attached to the idea of treating someone badly!

What the Work helps people do, is stop believing that a particular rule or idea they've learned about how people "should" behave, is actually a blessing of righteousness on the idea of treating people badly.

And that's why it generates self-justifying circular reasoning: the brain wants the "high" to continue, and correctly predicts that giving up the grudge will lead to a state with fewer righteousness-hedons in play... and since that seems to be a self-evidently worse state, it then searches for reasons to explain why it would be bad to give up the grudge. (While avoiding admitting that it has anything to do with the sweet, sweet virtue-signalling that's going on.)

So this is something that (IMO) every rationalist needs to understand. If you are operating on a grudge, you are in a state of moral wireheading. Your brain is high on being, not just right, but also in the right, and downright righteous.

And this will distort and twist your reasoning like nobody's business. Power corrupts, and this is one of the ways it does so: a grudge feels like it's granting you power and authority.

And in a way, it is.

Where This Instinct Comes From

In the ancestral environment, enforcing a tribal standard would be virtue signalling of the highest order: you're taking a risk, or forgoing the rewards you could get by not doing so (so it's a costly signal), and so you're showing that you're both fit enough to get away with it, and you're loyal to the tribe's values. Win win!

So our instincts have evolved to treat such situations as an opportunity: our brain makes us feel good, and powerful/status-ful at the same time.

(It's probably a big reason why people have more and more outrage these days over ever-smaller things: we have few other opportunities in modern life to feel righteous, vindicated, and powerful!)

But this feeling, like our desire for sugar, is not terribly helpful to follow in the modern era. As modern life becomes ever more complex with ever-more-stringent standards for behavior, it becomes ever easier to reach for the outrage drug, while the health side effects of being stressed all the time slowly add up.

More important for the rationalist, being high on righteousness is an absolutely lousy mental state for actually considering the possibility that, you know...

You might be wrong.

And that's a rationalist "sin" of the highest order.

(Just don't think that that means you "deserve" to be punished... or if you do, then forgive yourself, and move on.)

Comment by pjeby on Confirmation Bias in Action · 2021-01-25T04:52:39.343Z · LW · GW

On a personal level, this means that you will not be able to accept something as true until you have a basic idea of what you would do if that was true.

The fourth question of The Work is: "Who would you be without that thought?", intended to provoke a near-mode, concrete prediction of what your life would be like if you were not thinking/believing the thought in question.

Which is to say that it is also hard to accept something is not true until you have a basic idea of what you would do if it were false. ;-)

Also, the energy model presented in this article is, I think, a very good one. The idea that we reject new models unless they're comprehensively better, yet continue using old ones until they can be comprehensively disproven, is an apt description of the core difficulties in Actually Changing One's Mind at the level of emotional "knowledge" and assumptions.

I also like the hierarchical processing part -- it'll give me another tool to explain why changing general beliefs and patterns of behavior requires digging into details of experience, and provides a good intuition pump for seeing why you can't just "decide" to think differently and have it work, if the belief in question is a predictive or evaluative alief, rather than just a verbal profession to others.

Comment by pjeby on just_browsing's Shortform · 2021-01-22T23:26:59.100Z · LW · GW

Yeah, it seems that the desire to write is often tied to a desire to explain things, it's just that our past self is usually the first person we want to explain things to. ;-) We could think of it as being like a pressure differential of knowledge, where you need a lower-pressure area for your knowledge to overflow into. Having a mental model of a person who needs to know, but doesn't, then feels like an opportunity to relieve the sudden pressure differential. ;-)

In principle, I suppose imagining that person might also work if you can model such a person well enough in your mind.

Comment by pjeby on just_browsing's Shortform · 2021-01-22T18:01:53.983Z · LW · GW

This isn't a direct answer to your question, but what I've personally found is that if I want to get re-excited about a topic that has already passed that critical period, the best thing to do is find people either asking questions about it or Being Wrong On The Internet about it, so that then I want to explain or rant about it again. ;-)

Comment by pjeby on How do I improve at being strategic? · 2021-01-22T03:59:26.918Z · LW · GW

In my experience, the number one obstacle to strategic thinking is that people tend to confuse their virtue-signaling impulses with their actual goals. People tend to be very strategic about actual terminal goals: that is, things they genuinely desire, in the same way one might desire ice cream when hungry, or air while drowning.

So my go-to tactic for helping someone to be more strategic is to test their desire: can they actually experience the in-the-torso feelings that are strongly correlated with desire and pleasure, when thinking about the goal?

If not, they probably do not have a terminal outcome that is actually desired, and instead are being confused by their brain's attempt to signal virtue, solve a perceived problem, or reduce cognitive dissonance.

(By "signal virtue", btw, I don't just mean conspicuous displays of morality, but also things like following in parents' footsteps or trying to live up to the expectations of others, trying to justify one's existence or purpose, and rather a wide variety of other weird things brains do to promote or maintain perceived self-worth and/or social standing.)

Anyway, when humans appear (in my biased sample of experiences) not to be consciously strategic, it is generally because they are being unconsciously strategic about achieving an entirely different goal than the one they believe they're trying to achieve. And the goal they consciously believe they're seeking is in fact the result of their brain's strategic planning, rather than the input to another round of such planning.

The ultimate goal of such things is usually "to be a good, worthy, lovable person who visibly cares about the right things according to the value system(s) I have internalized".

Such goals, however, seem to run on different hardware than practical, desire-based goals. And if the non-desire-based goal is based on an idea that one "should" be a particular way, then it becomes virtually impossible to trigger the desire-based machinery at all.

(Because it is very hard to feel desire for something you believe you're already supposed to have done, had, or been.)

So... if you want to be practically strategic, the very first step is to make sure you know what you want and why you want it. If the real goal cannot be defined as a concrete, observable outcome in external reality that you can actually feel some physical pleasure at the idea of attaining (vs. merely feeling an anxious need to have), there is little point in going forward with any strategic planning, because strategic planning and social signalling tend to be mutually incompatible.

(Because our "desire to signal" wants to make our signalling-driven desires appear "honest", i.e. that they're not being done in order to signal.)

This makes it difficult to notice at first glance when we're doing so, so the desire test (aka the "mmmm" test, as in, "can you think about this in a way that makes you sound like you're enjoying yourself?") is a hack to work around this potential for self-deception. Our signaling desires seem to be injected by a different subsystem of the brain (maybe by rewarding certain directions of thought directly?) than the one that is used to pursue tangible desires like food or mates.

Food and mates make our mouths water or bellies rumble. Rest and safety make us go "ahhh" and relax. All of these pleasurable feelings arise from tangible goals, and motivate us to actually pursue them.

This one insight is, in my experience, worth a thousand abstract treatises on planning or decision-making. If you try to apply such ideas while actually pursuing a goal to virtue-signal, resolve cognitive dissonance, or fight something that seems "wrong", you're virtually guaranteed to use them in ways that will subtly sabotage any real action. (See, for example, all the vaccine distribution issues stemming from virtue signaling -- we are not immune to doing this sort of thing just because we label ourselves rationalists.)

Comment by pjeby on Deconditioning Aversion to Dislike · 2021-01-16T04:01:38.559Z · LW · GW

isn't personality not very mutable past the age of around 7? At least without psychedelics or brain damage or something?

Only if you define "personality" as "whatever doesn't change past the age of around 7". ;-)

The complexity comes from the fact that we can't really measure "personality", only observed behavior (from the outside) or feelings/perceptions/preferences (from the inside).

Given that lots of things can influence all of those on a day-to-day basis, it's hard to distinguish what might be considered "innate".

This is further complicated by the fact that human beings are innately disposed to think of people as having innate personality characteristics, when in fact there is little reason to assume that human beings' behavior is so simply defined, or so independent of social/environmental cues, incentives, whether someone's eaten recently, etc.

Comment by pjeby on The Curse Of The Counterfactual · 2021-01-11T01:33:58.529Z · LW · GW

I got an email from Jacob L. suggesting I review my own post, to add anything that might offer a more current perspective, so here goes...

One thing I've learned since writing this is that counterfactualizing, while it doesn't always cause akrasia, is definitely an important part of how we maintain akrasia: what some people have dubbed "meta-akrasia".

When we counterfactualize that we "should have done" something, we create moral license for our past behavior. But also, when we encounter a problem and think, "I should [future action]", we are often licensing ourselves to not do something now.

In both cases, the real purpose of the "should" in our thoughts is to avoid thinking about something unpleasant in the current moment. Whether we punish our past self or promote our future self, both moves will feel better than thinking about the actual problem... if the problem conflicts with our desired self-image.

But neither one actually results in any positive change, because our subconscious intent is to virtue-signal away the cognitive dissonance arising from an ego threat... not to actually do anything about the problem from which the ego threat arose.

In the year since I wrote this article, I've stopped viewing the odd things people have to be talked out of (in order to change) as weird, individual, one-off phenomena, and begun viewing them in terms of "flinch defenses"... which is to say, "how people keep themselves stuck by rationalizing away ego threats instead of addressing them directly."

There are other rationalizations besides counterfactual ones, of course, but the concepts in this article (and the subsequent discussion in comments) helped to point me in the right direction to refine the flinch-defense pattern as a specific pattern and category, rather than as an ad hoc collection of similar-but-different behavior patterns.

Comment by pjeby on Empiricism in NLP : Test Operate Text Exit (TOTE) · 2021-01-02T22:47:34.896Z · LW · GW

When I say "closed loop within the organism" I mean "having within the organism all the error detection and machinery for reducing the error", not that the subject of perception is also within the organism.

Note, too, that it's possible for people to display apparently-homeostatic processes where no such process is actually occurring.

For example, outside observation might create the impression that, say, a person is afraid of success and downregulating their ambitions or skill in order to maintain a lower level of success.

However, upon closer observation, it might instead be the case that the person is responding in a stimulus-response based way to something that is perceived as a threat related to success.

While you could reframe that in terms of homeostasis away from anxiety or threat perception, this framing doesn't give you anything new in terms of solving the problem -- especially if the required solution is to remove the conditioned threat perception. If anything, trying to view that problem as homeostatic in nature is a red herring, despite the fact that homeostasis is the result of the process.

This is a practical example of how using PCT as an explanatory theory -- rather than simply a modeling paradigm -- can interfere with actually solving problems.

In my early learning of PCT, I was overly excited by its apparent explanatory power, but later ended up dialing it back significantly as I realized it was mainly a useful tool for communicating certain ideas; the number of high-level psychological phenomena that actually involve homeostasis loops in the brain appear to be both quite few and relatively short-term in nature.

Indeed, to some extent, looking at things through the PCT lens was a step backwards, as it encouraged me to view things in terms of such higher-order homeostasis loops when those loops were merely emergent properties, rather than reified primitives. (And this especially applies when we're talking about unwanted behavior.)

To put it another way, some people may indeed regulate their perception of "success" in some abstract high-level fashion. But most of the things that one might try to model in such a way, for most people, most of the time, actually involve much tinier, half-open controls like "reduce my anxiety in response to thinking about this problem, in whatever way possible as soon as possible", and not some hypothetical long-term perception of success or status or whatnot.

Comment by pjeby on Empiricism in NLP : Test Operate Text Exit (TOTE) · 2021-01-02T20:13:11.115Z · LW · GW

PCT does that too. Except that sometimes, body and brain processes are open-ended, with an important part of the loop existing in the outside world.

The problem with a model that can explain anything, is that you can't notice when you're being confused by a fake explanation.

A useful explanatory model needs to be able to rule things out, as well as "in".

I think we are talking about different meanings of "modeling" here. There are plenty of uses for which PCT and TOTEs are apt. But if you're trying to discover something about the physical nature of things involved, being able to explain anything equally well is not actually a benefit. That is, it doesn't provide us with any information we don't already know, absent the model.

So e.g. in your thermostat example, the TOTE model doesn't provide you with any predictions you didn't have without it: a person who lacks understanding of how thermostats work internally can trivially make the prediction that something is wrong with it, since it's supposed to produce the requested temperature.

Conversely, if you know the thermostat contains a sensor, then the idea that "it might be broken if the room temperature is wrong" is trivially derivable from that mere fact, without a detailed control systems model.

IOW, the TOTE model adds nothing to your existing predictions; it doesn't constitute evidence of anything you didn't already know.

This doesn't take away from the many valuable uses of paradigms like PCT or TOTE: they're just one of those things that seems super-valuable because it's a more efficient mental data compressor than whatever you had before. But being a good compressor for the data you have is not the same as having any new data!

So paradigmatic models are more about being able to more efficiently think or reason about something, or focus your attention in useful ways, without necessarily changing much about how much one actually knows, from an evidentiary perspective.

Comment by pjeby on Asking For Help · 2021-01-02T19:38:13.208Z · LW · GW

As you've no doubt noticed, our anxieties don't often respond to mere reasoning.

Two of the most common sources for this type of anxiety related to behavior are:

  1. A judgment about the "kind of people who X" (e.g. ask for help, bother other people, etc.)
  2. A self-concept about what one does or doesn't deserve or have the right to

Narrowing it down begins with checking how you feel about other people doing whatever X is. For example, if you picture someone else bothering a person of high status by asking them for help, how do you feel?

If the response is a negative judgment, empathetic embarrassment, anxiety, etc., then it's very likely you have a learned "behavior X = bad person" type of rule in your brain.

Direct negative judgments are usually fairly straightforward to get rid of: in the simplest case, by just letting go of the rule if you no longer endorse it on any level. More complex methods for more stubborn cases include those of Crane ("releasing" technique) or Byron Katie (The Work).

If the judgment is more indirect or only applies to yourself, the techniques involved are more complex, and typically involve investigating the specific circumstances that created the anxiety. The good news, though, is that usually some information about that will surface when the Work or releasing fails, which is why I try to start there first.

Successful intervention would mean that you no longer feel that particular anxiety when imagining the need to ask someone for help.

This is a different approach than the traditional one, in which one is told to fake it until you make it, i.e. keep doing the thing and maybe the anxiety will go away... eventually. Given my experience of dealing with various sorts of anxieties for years or decades with no change, I am not particularly satisfied by that sort of advice. It is definitely possible to do better: to change our minds in at least some areas, instead of just having to live with them.

Comment by pjeby on Vaccination with the EMH · 2021-01-02T19:20:21.581Z · LW · GW

It's possible for a smart adult to pretty reliably make a good living in any job as long as they're willing to put in about 65 hours a week for about 5 years building their skills/credentials/business. Making more money in less time typically requires tradeoffs in terms of intrinsic talent/assuming risk/doing unpleasant work. The idea that the average person can do more with less is less plausible the more extreme the edge is supposed to be.

I think this paragraph is contradicting itself. It begins with "possible for a smart adult" and then concludes with "the average person". The average person is not a smart adult.

The point of 1000 True Fans is not about averages, but about comparative advantage: focusing on a niche audience that values what you produce so much that they'll spend 1,000 times or more what the average person would (i.e. $100/year vs. a few cents for the same music from a non-fan).

Catering to a niche is not a violation of the EMH (which for some reason I always initially read as "the Emergency Medical Hologram"). Effective marketing is also not a violation of the EMH. Both are ways to increase demand, or at least shift it to a different target, for a subset of the market. (They also scale up: you can have 100 superfans who spend $1000, or 10 super-super-fans who spend $10,000 -- though probably not if what you sell is music. But for other types of product this is certainly possible.)

If someone reads this article and interprets it to mean that they should just take whatever random job making roughly the same amount of money, it could be a disaster for their utility (and maybe that of the world), depending on what they value. It is not easy to make lots of money, but there are many ways and circumstances under which "make money on the internet doing something you love" is both more easy and more rewarding than working a job would be.

I quit my last corporate gig in part because I couldn't bear sitting in another meeting arguing the same things over and over and never getting any conclusion to them... when over the weekend prior I'd made five figures by talking for around ten minutes.

I am, of course, leaving out the years of work that previously went into being able to give that talk, what I spent to be in the context where I gave the talk, etc., but all of those things were still rewarding in ways that a corporate job wasn't. In particular, a sense of personal meaning in overcoming challenges to do things I wanted done, vs. the oppressive sense of having to shovel shit to get things done for one company or another.

Can the "average" person do that sort of thing? No. Absolutely not! I have attended internet marketing courses and observed how little the "average" person comprehends or is able to reproduce what they are taught with regard to marketing. (And I'm not claiming to be particularly good at it -- I've always considered myself awful at it, only to recently realize that the vast majority of people are even worse!)

Are LWers "average" in general? Maybe. But ISTM that a lot of very non-average people congregate here. I would guess that the main ingredient a typical LWer would lack at being able to take advantage of existing market inefficiencies and civilizational inadequacies is either the ability to think like a marketer, and/or the willingness to act like one.

Comment by pjeby on Small Habits Shape Identity: How I became someone who exercises · 2021-01-02T18:30:20.582Z · LW · GW

I'm pretty sure that trying not to have an identity just means that your brain will identify as the sort of superior person who is above having identities, in much the same way that claims to not care about status are basically claiming to have status higher than everyone who cares about status.

I imagine that there might be meditative or other practices to shut off or disconnect the brain bits that care about identity and status, but mere conscious avoidance of anything related to identity or status is just defining another kind of identity and status, with the same detrimental effects on reasoning... while also foregoing any practical benefits that otherwise could have been obtained. A bit like going into a restaurant, paying for the food, and then not eating it because it was too expensive. If you're going to foot the bill either way, you might as well get some nutrients out of it.

(Also, the Dark Lord comment kinda sounds like an appeal to identity, i.e. "don't be the kind of foolish person who bargains with dark lords", and an implied humblebrag that you can achieve your goals without needing to do this sort of thing, and thus are higher status than the post author or anyone who might find the post helpful. Even if those things weren't your intent, even if your brain didn't include them in the message on purpose behind your back, some, if not most, of the recipients of your message have brains that will take it that way, because that's just how ubiquitously insidious the status-and-identity hardware is.)

Comment by pjeby on Empiricism in NLP : Test Operate Text Exit (TOTE) · 2021-01-02T18:08:12.589Z · LW · GW

A framework that can predict anything is not really a predictive framework; it's just a modeling convention.

In the specific case of PCT, the model treats everything as closed-loop homeostasis occurring within the organism being modeled. However, there are plenty of situations where a significant part of the loop control occurs outside the organism, or where organism behavior is only homeostatic if certain EEA assumptions apply. (e.g. the body's tendencies to hoard certain nutrients and flush others, based on historic availability rather than actual availability)

While this doesn't harm PCT's use as a conceptual model of organism behavior, it limits its use as a predictive framework with regard to what 1) we will find happening in the hardware, and 2) we will find happening in actual behavior.

The extension of this problem to TOTE loops is straightforward, since a TOTE loop is just a description of one possible implementation strategy for a PCT control loop and linkage, and one that similarly doesn't always map to the hardware or the location where the tests and operations are taking place (i.e., in-organism or outside-organism).

Comment by pjeby on Empiricism in NLP : Test Operate Text Exit (TOTE) · 2021-01-01T18:53:12.612Z · LW · GW

Interesting perspective. I never learned TOTE loops as an intervention strategy but as a modeling one: i.e., the observation that people behave as if their internal operations are TOTE loops -- a parallel framework to the higher-order control systems in Perceptual Control Theory.

But in retrospect, looking at it from this point of view, my Collect-Connect-Correct framework for self-improvement is literally a TOTE loop (because part of Collect is identifying a test, and part of Correct is seeing if the test result has changed so you can loop back or exit). I mean, it's kind of a meta-TOTE loop because you're choosing a test first... I suppose there's the outer TOTE where you decide to use CCC in the first place, and then there's the TOTE where you identify the test to be used in the CCC -- because you may have to try a few different things to identify a suitable test for the Connect/Correct TOTE loop.
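For readers unfamiliar with the acronym, the bare Test-Operate-Test-Exit structure being described can be sketched as a simple loop. This is a hypothetical illustration only (the function and parameter names are invented, not taken from the NLP or PCT literature):

```python
def tote(test, operate, max_iterations=100):
    """Test-Operate-Test-Exit: keep operating until the test passes, then exit.

    `test` returns True when the goal condition is satisfied;
    `operate` performs one corrective action toward that condition.
    """
    for _ in range(max_iterations):
        if test():          # Test: is the goal condition satisfied?
            return True     # Exit: condition met
        operate()           # Operate: act to reduce the discrepancy
    return False            # budget exhausted without meeting the condition
```

The point of the sketch is that the loop is defined by its exit test, not by its operations -- which is also why a framework built from such loops can describe almost any behavior after the fact.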

On the other hand, TOTEs share the weakness of PCT that they can be used to basically model anything.

But on the other other hand, I can't count how many places where explicitly adding tests and gating around existing self-help processes has made them more repeatable, reliable, and teachable. I wasn't thinking specifically in terms of TOTEs when I did those things, but it makes good sense.

TBH, I looked at it more as an application of testing in general, plus outcome frames and well-formedness conditions... the latter two of which I learned from NLP.

The idea that things in the brain have a syntax (a sequence of steps required to unlock them when applying certain techniques) allows you to use TOTEs as part of a training process.

Concrete example: my "desk cleaning trick" video describes the "mmm" test, without which the trick will not do anything. Having an explicit test condition for exiting a "step" of a mental process makes it vastly more useful than merely having a list of mental steps.

IOW, explicit tests between the steps of a mental process, used as well-formedness conditions for proceeding to the next step, greatly enhance the communicability and reproducibility of a technique, which helps to bypass the Interdict of Merlin with regard to self-help.
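The "explicit tests between steps" idea amounts to chaining TOTE-style gates into a pipeline. A minimal sketch, with all names invented for illustration:

```python
def run_gated_process(steps):
    """Run (action, exit_test, retries) triples in order.

    Each step's action is retried until its exit test (the
    "well-formedness condition") passes; if a step can't pass
    within its retry budget, the whole process aborts rather
    than proceeding on an unmet precondition.
    """
    for action, exit_test, retries in steps:
        for _ in range(retries):
            action()
            if exit_test():   # gate: only proceed once the test passes
                break
        else:
            return False      # step never passed its test; abort
    return True               # every gate passed in sequence
```

The design choice mirrors the claim in the comment: a bare list of steps silently tolerates a failed step, while gating each step on an observable test makes failure visible at the exact point it occurs.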

Comment by pjeby on Anti-Aging: State of the Art · 2021-01-01T18:20:41.303Z · LW · GW

already known to be safe interventions, like vitamin D

Any pointers on what to search to find more info on this from actual research? I wasn't aware that vitamin D was considered to have anti-aging properties, or that there was much consensus on its safety.

Comment by pjeby on Give it a google · 2020-12-30T04:27:51.053Z · LW · GW

SEO has gotten to the point that it actually takes some skill to google things usefully.

Interesting. I thought it was because Google progressively dumbed down their search and optimized it for finding things that most people wanted to find, but not what I wanted to find. But I suppose there's no reason it can't be both causes.

Comment by pjeby on Death Positive Movement · 2020-12-26T22:09:47.369Z · LW · GW

I think you're missing the point of my question, which was that "Death is bad" is, at least on the surface, an instance of the Mind Projection Fallacy: projecting a label out into the world as if it could exist independently of the mind doing the labeling.

Specifically, "badness" requires a mind capable of experiencing the concept of badness... and a dead person lacks such a mind. So to say "Death is bad" is leaving out the whom. That is, it's bad as perceived by the living. Dead people lack any values by which to judge it, or an active mind with which to do the judging.

While a person is alive, they can look forward to a future in which they would be dead, and experience emotions regarding this imaginary predicted future... but that's not the same thing as that future actually being "bad" for them at that future time. It can only be bad for people still alive.

So, (for example) the analogy to torture fails here. A tortured person is alive and can perceive the experience to be bad, regardless of whether anyone else cares. But a dead person can only matter to the living.

Comment by pjeby on Gauging the conscious experience of LessWrong · 2020-12-20T22:43:01.943Z · LW · GW

It's the Typical Mind Fallacy. See e.g. Generalizing From One Example, which is actually about this very topic (diversity of internal sensory experiences).

Comment by pjeby on It turns out that group meetings are mostly a terrible way to make decisions · 2020-12-18T08:38:03.948Z · LW · GW

When I was a manager needing to build consensus -- especially with other managers outside my department -- I found it much more useful to get one-on-one meetings to feel out people's needs and negotiate buy-in well before any larger meetings. Trying to get consensus in a big meeting was a big waste of time, except maybe sometimes within my own department. The big meeting is really just an opportunity to show the higher-ups that all the other departments are already on-board with my plan. ;-)

Comment by pjeby on To listen well, get curious · 2020-12-18T07:43:29.431Z · LW · GW

Mirroring is actually a normal side-effect of empathic connection, in an interaction that's going well. When I was a teenager doing telephone technical support, I one day noticed that I was unconsciously changing my speech accent, pacing, and vocabulary to resemble that of the people I spoke with (who were from many different parts of the country). This happens to me all the time when I get involved in an interaction with someone, but doesn't have anything to do with attraction as such. I mean, if somebody was a jerk, then I certainly didn't find myself mirroring in this fashion, but if the interaction was at all positive, then it tended to happen.

I have seen lots of stuff talking about trying to deliberately generate rapport via mirroring, but in my experience any literal and direct mirroring has always been something that was the effect of the rapport, rather than being a way of causing it.

In my experience, if one must mirror deliberately, it's much better to simply mirror pacing and rhythm, not specific actions. For example, in BDSM, timing one's whip strokes or other actions based on your partner's breathing or writhing is a much better use of mirroring than just directly copying their movements or breathing. (If you think of an interaction between two people as being a series of disturbances in a medium sending out waves towards each other, it becomes easy to see why interaction on unsynchronized wavelengths creates a disruptive experience, but synchronized ones build in intensity as both parties' responses are cresting or peaking together.)

Outside of such specialized uses of mirroring, though, it's more useful as a tool to do two things:

  1. Observing someone well enough to follow their rhythm keeps you out of your own head.
  2. Noticing how your own rhythm is or isn't naturally following theirs helps you notice in real time how well the interaction is actually going.

Copying their every motion doesn't really improve on these things, at least in my opinion. In truth, copying gross movements might distract you from more useful information -- like breathing and postural tension -- that give you more detailed information about a person's emotional state.

So, if for example a topic becomes uncomfortable for them, you can notice this without them needing to say anything, and then either change the subject or step back and express openness to hearing their position, e.g. "sorry, does that subject bother you?"

Conversely, if someone becomes energized or engaged when talking about something, or in response to something you say, then that's also excellent information.

(A surprising amount of social competence boils down to valuing sensory and nonverbal information channels equally or higher than verbal/intellectual ones.)

Comment by pjeby on To listen well, get curious · 2020-12-18T07:02:54.948Z · LW · GW

This is exactly the opposite of curiosity, it's an attempt to gloss over your ignorance, which seems both lazy and mean to the person you're talking to.

Ironically, I see this as 100% the opposite. If you're paraphrasing, then that means you're basically guessing what the words mean, inserting your own ideas instead of holding open the possibility that you don't actually know what was said. It also means that you're not necessarily listening to what exact words somebody used. (A pet peeve of mine, TBH: people rounding off what I say to the nearest familiar thing, rather than listening with precision.)

So, demonstrating the ability to parrot-phrase is a much stronger signal to me that someone is paying close attention to what I actually said, and not just jumping to a round-off.

Parrot-phrasing comes across as kind of manipulative in this description

I don't see any problem with the first two points, as putting extra effort into something is not a measure of virtue.

For the third point, that's a bit out of context: that person's video describes how she used it as a new department head who didn't yet understand all the technical details of what they were doing, but needed to get to know her staff and their concerns. Parrot-phrasing allowed her to quickly become familiar with her staff, their terminology, and what things were important to them, without needing to stop conversations to learn all the terms first. (From context, I gather that she looked up the terms afterward, instead of making the staff explain everything to her up front -- thereby allowing her to focus her learning on the things the staff thought most important.)

In context, that sounds like an unequivocal good for everyone involved.

From a computer programming perspective, I look at this as simply being able to use "forward references" -- i.e., the ability to use a term as a placeholder that has not yet been defined. In truth, until the terms are defined, you don't really know what somebody is using their words to mean anyway. But you can learn quite a lot about a situation or person without yet knowing their precise definitions of the words. And your value as a listener doesn't often require complete understanding, anyway.

For example, I often help people work through problems where I don't know an exact definition of every word they're using, and sometimes if the subject matter is uncomfortable to discuss, I will have them use code words whose entire purpose is to ensure I don't know part of what they're talking about!

Similarly, computer programming professionals know that "rubber ducking" doesn't require a deep understanding anyway, as otherwise one could not use a rubber duck to do it. The fact that people poured out their deepest secrets to good ol' ELIZA should be an indication of how valuable simply providing a mental loopback interface to someone can be... not to mention how often it is that just providing the loopback is more valuable to the recipient than any actual interjection from another brain.

To put it another way, when people want a listener, the listener's actual understanding is far less important than it appears. Even if the listener is a professional helper of some kind, their value is usually more in the area of guiding the speaker through a reflective process of some kind... in which the speaker's understanding of their own thoughts is the actually important part.

Comment by pjeby on Death Positive Movement · 2020-12-17T21:30:12.914Z · LW · GW

The website's name, "Order of the Good Death" is oxymoronic. Death is bad. Even if people can die at age 90 in exactly the way they want, have their remains taken care of exactly how they want, and be assured that their decaying body won't negatively impact the environment, their death is still bad.

Bad for whom?

Comment by pjeby on Death Positive Movement · 2020-12-17T21:27:38.429Z · LW · GW

Could one be part of the Death Positive Movement and still have a personal desire to live forever? Is a cultural attitude of death positivity compatible with a cultural desire to disarm death?

Sure. Why not?

ISTM the term "Death Positive" is a weird misnomer, though I can sort of understand it as a spin on "Sex Positive", as in, sex positivity is about removing taboos from the discussion of sex. But it sounds more like "Positive" in these terms expands to "let's make it so we can talk about this without it being weird," not necessarily "there should be more of this thing".

Comment by pjeby on The Curse Of The Counterfactual · 2020-12-16T10:37:50.687Z · LW · GW

Personal bias alert — I would guess that my own moral brain is perhaps in the 5th percentile of judginess and desire to punish transgressors

Note that this is not evidence in favor of being able to unlearn judginess, unless you're claiming you were previously at the opposite end of the spectrum, and then unlearned it somehow. If so, then I would love to know what you did, because it would be 100% awesome and I could do with being a lot less judgy myself, and would love a way to not have to pick off judgmental beliefs one at a time.

If you have something better than such one-off alterations, and it can be taught and used by persons other than yourself, in a practical timeframe, then such a thing would be commercially quite valuable.

I am aware of many self-help approaches for eliminating specific judgments. However, apart from long-term meditation, or a sudden enlightenment/brain tumor/stroke, I am not aware of any methods for globally "unlearning" the capacity for judginess. If you know how to do such a thing, please publish! You will be revolutionizing the field.

I got a sense from your description that you see it as universal and immutable, not as something we learned from parents/peers and can unlearn.

Define "it". ;-)

the complex "moral brain" that you described

I think perhaps we're talking past each other here, since I don't see a "complex" moral brain, only several very simple things working together, in a possibly complex way. (Many of these things are also components shared by other functions, such as our purity-contamination system, or the "expected return calculation" system described by prospect theory and observed in various human and animal experiments.)

For example, we have emotions that bias us towards punishing things, but we can certainly learn when to feel that way. You can learn not to punish things, but this won't remove the hardware support for the ability to feel that emotion. Both you and the woman you mentioned are capable of feeling outrage, even though you've learned different things to be outraged about. That animals raised in captivity, and pre-verbal human children can both be observed expressing outrage over perceived unfair treatment or reduced rewards without first needing an example to learn from is highly suggestive here as well.

I think it's safe to say that these low-level elements -- such as the existence of an emotions like moral outrage and moral disgust -- are sufficiently universal as to imply hardware backing, despite the fact that the specific things that induce those emotions are culturally learned. AFAIK, they have universal facial expressions as found in even the most remote of tribes, which is strong evidence for hardware support for these emotions. (There are also established inbuilt biases for various types of moral learning, such as associations to purity, contamination, etc. -- see e.g. the writings of Haidt on this.)

Can you learn to route around these emotions or prevent them arising in the first place, to the point that it might seem you're "unlearning" them? Well, I imagine that if you meditated long enough, you might be able to, as some people who meditate a lot become pretty nonjudgmental. But I don't think that's "unlearning" judgmental emotions, so much as creating pathways to inhibit one's response to the emotion. The meditator still notices the emotion arising, but then refrains from responding to it.

That people can meditate for years and still not achieve such a state also seems to me like strong evidence for judgmental emotions as being the function of a piece of hardware that can't just be turned off, only starved of stimulation or routed around. The literature around meditation likewise suggests that people have been trying for thousands of years to turn off attachment and judgment, with only limited success. If it were purely a software problem, I rather expect humanity would have figured something out by now.

Comment by pjeby on The Curse Of The Counterfactual · 2020-12-14T20:26:33.775Z · LW · GW

I'm curious what the objection to the "moral brain" term is. As used in this article, it's mainly shorthand for a complex interaction of social learning, biases, specialized emotions, and prospect theory's notion of a baseline expectation of what one "ought" to have or be able to get in a specific circumstance or in exchange for a specific cost. (Or conversely what some specific thing "ought" to cost.)

Comment by pjeby on To listen well, get curious · 2020-12-13T00:48:00.077Z · LW · GW

Fascinating. A possible counterexample is that I have noticed in myself (and others) a tendency to sometimes be calmed down or reassured by even a mechanical repetition that is known to be mechanical by the recipient. This can happen in spite of consciously thinking "them saying that shouldn't work," and being annoyed at it, at the same time as feeling better because of it. In the instances where it has worked, I have found myself hearing back what I said and having an internal response like, "Yeah! You get me!" And my wife has done the same thing.

Granted, it doesn't happen every time; I think it's partly a question of what words are used. Indeed, on reflection I think it might have to do with parrot-phrasing rather than paraphrasing. At least the instances I remember of this working, were times where I used the same words as the other person, or vice versa.

This suggests to me that there exists some mechanism by which reflection is perceived as supportive, such that it can produce some effect even in the absence of curiosity.

Comment by pjeby on alkjash's Shortform · 2020-12-11T21:51:29.326Z · LW · GW

NLP stands for Neurolinguistic Programming -- a spur-of-the-moment name given by Richard Bandler after glancing at the titles of the books in his car when he was stopped by police for speeding, and was asked his occupation. Before that point, it was just a group of students and academics doing weird psychology experiments, after Bandler noticed some common language patterns between certain therapists whose books he was transcribing and editing (one a Gestalt therapist, the other a family therapist), and went to ask his linguistics professor about it.

Bandler later settled on a definition of NLP as, "an attitude which is an insatiable curiosity about human beings with a methodology that leaves behind it a trail of techniques." Which, one might argue, is just another way of saying "Science!"... but the more philosophically-oriented works of the NLP creators spend a lot of time talking about how so much of psychological science at the time (60's and 70's) was "how do we define how fucked-up somebody is", not "what can we do to help".

In contrast, the philosophy of NLP presupposes that people are not broken: whatever it is they're doing, they're doing perfectly according to their programming: a programming that can be understood in terms of internal processing steps (represented in sensory terms), and in terms of people's internal models, or maps of the territory. Behavior that may seem crazy or stupid can thus be understood as straightforward, even rational, when considering both a person's map and the processing steps they are using to think and respond to what they observe.

The Structure of Magic (the first book on NLP, which IIUC was also Bandler's master's thesis) was written to capture something that it appeared that more-effective therapists were doing to change people: specifically, noticing map-territory gaps and getting people to confront those gaps.

Bandler noticed the verbal patterns because he was typing the same kinds of questions and statements over and over, so he consulted the linguistics professor at his college to ask about them, and got help to describe the patterns in linguistic terms (like "lost performatives", "modal operators", and "complex equivalence").

Together, they concluded that the distortions of model-making -- that is, the distinctions between map and territory -- had specific, observable linguistic markers for the information that was being generalized, distorted, or deleted in the process of mental model-making. And that some effective therapists were people who had learned to pick up on these markers and respond to them with certain types of questions, if they perceived that the modeling distortions were relevant to the problem at hand. (Since we all distort things constantly, forcing everything to be specific just grinds all communication to a halt.)

Many of their original classifications and inventions from back in the 70's would be recognizable as LessWrong-style rationality moves. For example, one of Bandler's favorite techniques was to effectively Taboo people's problem descriptions, by telling them he was going to be hired to "have their problem" for them, so that they could have a day off from it, so he needed to know all the details. "How will I know when to start panicking?," he might ask, getting a person to literally coach him on the details of whatever the problem was, in the process eliciting all sorts of falsifiable information about what's going on in the person's model, cues, and behavior, rather than listening to a person's ideas about the problem.

This also illustrates the "methodology" of NLP: empirically observing behavior and taking people's statements literally and seriously, to an autism-like degree. (In fact, when I first read Animals In Translation by Temple Grandin, one of my first thoughts was that its author was basically applying the NLP philosophy to animal husbandry: seeing the details of what was happening from their perceptual and world-modeling point of view, instead of projecting expectations.)

Is there a book or resource that you would most recommend to learn NLP?

It sort of depends on what it is you want to learn. The more academically-oriented materials are things like The Structure of Magic (volume 1 is the more useful one of its two volumes) and Neurolinguistic Programming, Volume 1 (I don't know if a volume two was ever written.) Those are the books with the most formal structure and attempts at making falsifiable claims and coherent theories; most other books by the creators are essentially workshop transcripts of them teaching therapists to do interesting things.

(But then, while Using Your Brain For A Change is a workshop transcript and is mostly about techniques, there is also a fair amount of offhand commentary that describes the philosophy of NLP, with regard to things like empiricism, testing, "going first", doing different things until you find something that works, etc.)

If you have an interest in hypnosis, and the links between it and NLP, then Patterns of the Hypnotic Techniques of Milton H. Erickson, M.D. is an interesting contrast to The Structure of Magic, as it is all about the language of deliberate vagueness (which can be very useful in hypnosis and persuasion), while Magic is about the language of specificity. NLPers refer to the models presented in these books as the "meta model" (for specificity and "chunking down") and the "Milton model" (for vagueness and "chunking up").

The meta model should be of particular interest to LessWrongers, since it is a catalogue of patterns that help identify where a person's map (expressed in language) may differ from the territory, and a set of questions that can be used to increase specific understanding. (For example, the move described in The Power To Demolish Arguments is what NLP called "chunking down" (going from abstracts to instances or examples), as opposed to "chunking up" (going from examples to abstracts).)

Most other books tend to be catalogues of techniques developed using the theoretical models, philosophy, and methodology described in those books. A lot of these techniques are considered "NLP", in part because Bandler and Grinder did workshops to teach a bunch of the interesting things their experimentation club came up with, and called it an NLP practitioner certification. (Tony Robbins took an early course in NLP and then wrote his book Unlimited Power based on that and some other stuff.)

In general, though, popular works on NLP tend to have little connection or relevance to the core ideas, philosophies, or methods, and instead focus on specific ideas or techniques in a particular area of application. (Which is a bit like going around writing books on calculation tricks for how to target artillery shells effectively, and calling what you're writing about "Science". It might even be technically correct, but it's terribly misleading!)

In addition, they often promote concepts the NLP creators themselves discarded or updated decades ago, or discard essential information for applying the technique comprehensively. (The main missing ingredient usually being what the CFAR handbook describes as "Polaris", i.e. aiming at a goal and doing different things until you get it. For example, the NLP "swish" technique is often described in shorthand with a very specific set of visualization parameters, but for best results you actually need to both tune those parameters to the individual and test the results to make sure you didn't miss anything -- steps that are typically omitted in popular descriptions.)

In general, I view NLP as a kind of proto-rationality that was aimed at practical individual improvement, wherein many important tools we use today, were discovered, named, or experimented with. Its history also offers lessons regarding how difficult it is to take something that's rationality-oriented and bring it to a larger audience without losing the very thing that made it useful in the first place, and why it's important to focus more on what predictions a theory makes vs. how ludicrous the theory itself is. (Paraphrasing something the NLP developers said once, "Everything we're going to tell you is a lie. Some of these lies will be more useful than others.")

Comment by pjeby on alkjash's Shortform · 2020-12-10T23:25:02.972Z · LW · GW

The trust stuff you're talking about is generally referred to in NLP as creating rapport, though Robbins is certainly really good at it.

The purpose of creating rapport in a coaching or therapeutic context is to get someone to actually do something that will help them, and especially to do that thing sincerely (vs. "trying" to do it, pretending to do it half-heartedly or ironically, etc.). They need to trust that it's safe to do the thing, and know that you're not going to pull something on them or use it against them somehow. Otherwise, they may not commit to it.

This trust is extremely contextual, and to some extent conditional on the shared belief that the thing you're telling them to do will ultimately be good for them. I don't think "love" is a particularly good name for the state, though. If the intervention is successful, it's likely that you will retain respect or a certain fondness for the mentor that helped you with something important. There's a personal connection, yes, but it's the connection of a student to a good teacher, or someone who set you on a good path.

At most, one might compare it to the sensation that happens when trusting someone to belay you in a ropes exercise or catch you in a trust fall. Successfully completing the exercise doesn't magically make you trust and love the person forever.

I would love to hear which pieces of his written work you think of as "actually new or useful insights," the only thing that fits that description for me from his youtube videos is the Six Human Needs, which is a useful template for goal-factoring for me.

Ironically, that's one of the models that he didn't create; it's a refinement on an earlier model by Hyrum W. Smith. (The founder of what later became Franklin-Covey.)

The written work I had in mind was Awaken The Giant Within, especially his chapters on Questions and Rules. While much of the book isn't terribly original, and many pieces are just a popularization of similar concepts from NLP, his integration of some of those pieces into his "master system" was an advance of the field.

(Remember, though, that ATGW was written about 30 years ago (1992 publication), so many of the ideas that were brand new or barely heard of at the time of its writing, have since become part of the self-help zeitgeist... in large part because of said book.)

Comment by pjeby on alkjash's Shortform · 2020-12-09T05:04:06.027Z · LW · GW

To see what his power is, I think it's worth watching some of his relationship intervention videos. As far as I can tell, one of his core strategies is "solve an irreversibly damaged relationship in an hour by making both parties fall deeply in love with him (TR) and then transfer that love to each other."

I don't think I've watched anything of his on Youtube except his TED talk, but I have watched some of the Robbins/Madanes trainings and seen some episodes of Breakthrough, not to mention the Netflix special, "I Am Not Your Guru". In none of those places do I recall seeing him do anything that looks like making people fall in love with him and transferring it, though I do occasionally recall him directing couples to do things that, in the moment, I would expect to create a sense of intimacy or love for each other. I've also been to one of his live events (almost twenty years ago), and didn't see anything like that.

My impression from that live event, and from the recordings I've seen, is that his events are very attractive to extroverts (or people who need excuses to act in a more extroverted way), as they're rather like a rock concert slash religious revival slash sporting event. It's not that he doesn't also have very sensible and intelligent things embedded in that context, but without the extrovert juice I doubt they'd sell out so well. Despite the fact that he despises being known as a "motivational" speaker, a huge amount of his success is likely due to just him being The Most Enthusiastic Man In The World.

While his books read in a fairly overblown way, and some things are just repackaged versions of other things, I think there are a lot of actually new or useful insights in his work, taken as a whole. But his big-audience stuff is relatively dumbed-down because, when dealing with a big audience, you have about five words, and he clearly knows it. And for him, the five words to use in the context of relationship repair appear to be: "Get your partner's needs met."

Comment by pjeby on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-05T18:43:00.511Z · LW · GW

That sounds like something you inferred, rather than something I implied. (But even if I had, I still don't understand how that would be rude.)

Comment by pjeby on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-04T22:20:20.904Z · LW · GW

Why would it be rude to ask the question?

Comment by pjeby on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-04T00:53:11.161Z · LW · GW

Was this chapter coauthored with GPT-3? I ask because some of it has the same dreamlike quality seen in some previous GPT-generated HPMOR fanfiction.

Comment by pjeby on Straight-edge Warning Against Physical Intimacy · 2020-12-03T22:35:43.378Z · LW · GW

Speaking as someone who tends to be low-oxytocin by default, I've got to note that there are some serious drawbacks to quality of life without oxytocin-inducing activities. For me, my "true self" is more evident during the day or two of "afterglow" from high-oxytocin activities than in the dragging, "who cares about people and what's the use of living anyway" tendencies of lower oxytocin.

So I would be rather doubtful of the utility of avoiding "normal" amounts of oxytocin induction. Am I a different person while riding an oxytocin high? Yep: cuddling and zoning out in a satisfied way goes to the top of my preferences, followed by playfulness and silliness. But this phase doesn't last very long, and is quickly replaced by, "I feel/act like a normal human being again, with a greater sense of well-being", while oxytocin deprivation leads to a decreased sense of well-being.

Based on my experiences, I would expect that chronic deprivation of oxytocin-inducing activities would produce significant physical and mental stress and illness. I can't imagine how I could have survived the last two and a half decades without my spouse's affection... or that I would even have wanted to.

Comment by pjeby on In Addition to Ragebait and Doomscrolling · 2020-12-03T20:12:18.487Z · LW · GW

I don't think that outrage is different from contempt in terms of being a free hit of righteous moral superiority. Outrage may create more motivation to do something, but that "something" will be biased towards protest and/or punishment, not actual problem-solving... and in the case of online media, the protest and/or punishment is likely to take the form of more posting to the same media outlet. So the optimally addictive mix would need both outrage and contempt. Too little contempt, and pure outrage would be exhausting. Too little outrage, and not enough people post vs. read.

For me, the optimum solution to these problems is to avoid as much as possible any media streams that are consolidated by Big Social. For example, I never, ever, ever look at my Facebook account's main page, or look at my notifications. Instead, I browse things I want to browse in their own little information silos. (That is, specific groups or pages.) RSS feeds are helpful tools for this, which is why RSS is so largely dead.

The problem with Big Social isn't that you end up with filter bubbles, it's that Big Social tries to consolidate things in such a way as to control your information consumption priorities, while pushing "discovery" of things you didn't actually want or need to know... like Twitter randomly showing me stuff from people followed by people I follow, or stuff that people I follow liked or replied to.

Comment by pjeby on Notes on Endurance · 2020-11-26T23:39:17.435Z · LW · GW

Studies on ego depletion suggest that endurance is not a "virtue", but rather a matter of having the right expectations.

If you expect something to be difficult, you are more likely to persist in the face of difficulty, than if you expect it to be easy.

More precisely, if you expect it should be difficult: i.e., if your Prospect Theory baseline is calibrated to a level that is at -- or above -- the actual level of difficulty.

The reason "no pain, no gain" is a slogan among bodybuilders is because it's an exhortation to expect a high level of pain, as how things "should" be. That is, to treat it as a Prospect Theory baseline.

One book I've read ("The Tools" by Stutz and Michels) includes an exercise of saying things like "I love pain" or "Pain sets me free", as an attempt to engage this mindset. It is different from merely thinking or "knowing" that a thing is going to be difficult, because one can still be feeling (in effect) that it shouldn't be, or that being a smart or talented person means it should be easier for you.

Instead, the correct mindset is treating the pain as a signal that one is getting closer to the desired result... like the old joke about the optimistic child who, upon being placed in a room full of literal horse shit, immediately began digging to look for the pony.

I guess what I'm saying is, most of this article reads to me like random speculation without a gears-level model of endurance. My take on the gears-level model of endurance is just Prospect Theory: if the cost of something is greater than the level we take for granted it should be, we count it double and rapidly lose motivation to continue. Conversely, if the cost is less than what we take for granted, then we experience neutral or positive affect, and carry on.
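To make the asymmetry concrete, here's a rough sketch using Kahneman and Tversky's published parameter estimates for the value function. (The code is mine, not from the article; mapping "effort" or "pain" to a single number is purely an illustrative assumption.)

```python
# Sketch of the Prospect Theory value function: outcomes are evaluated
# relative to a reference point ("baseline"), with losses weighted about
# double (loss_aversion ~2.25, per Kahneman & Tversky's estimates).

def value(outcome, baseline, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Subjective value of `outcome` relative to `baseline`."""
    x = outcome - baseline
    if x >= 0:
        return x ** alpha                   # gains: diminishing returns
    return -loss_aversion * ((-x) ** beta)  # losses: counted roughly double

# The same 10 units of effort feel very different depending on how much
# difficulty you took for granted going in:
surprised = value(-10, baseline=0)    # expected it to be easy: a loss
prepared  = value(-10, baseline=-15)  # expected worse: reads as a gain

print(surprised)  # negative: cost above baseline, amplified
print(prepared)   # positive: less pain than expected
```

The sign flip in the second case is the whole point of "no pain, no gain" as a baseline-setting exhortation: with the expected difficulty set high enough, actual difficulty registers as better-than-expected rather than as a loss.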

Paradoxically, this leads to people making lots of exhortations to treat pain, grit, endurance, willpower, and other things as positive attributes, in an attempt to get others to update their baselines!

But these exhortations to virtue never worked for me, personally, compared to just understanding this principle and deliberately adjusting my expectations so that "horse shit" means "I'm getting closer to the pony!"

That's because, at least for me, most exhortations towards enduring pain sound like delusional virtue-signaling rather than inspiring advice. Understanding these exhortations as a crude attempt at teaching a mindset that reliably reduces the subjective experience of pain, frustration, and discouragement makes a big difference.

To put it another way, on the surface, "embrace pain" is a stupid statement, as pain is not the unit of effort. But with the added meta-level of "Embrace the mindset of embracing pain in order to experience less ego depletion, less subjective discomfort, and increased motivation", it makes a heck of a lot more sense, and matches an actual gears-level model.

Comment by pjeby on Impostor Syndrome as skill/dominance mismatch · 2020-11-07T19:37:33.532Z · LW · GW

So, to distinguish types of impostor syndrome, I'll refer to the type I typically work with as "unfulfilled ambition". I feel comfortable saying that its cause is a pre-existing self-definition of being unworthy as a person, with the ambitions being driven by a desire to fix or eliminate this unworthiness.

Why? Because altering the perception of unworthiness fixes the problem, as one is no longer seeking the validation that the goals cannot provide. Afterward, people either change goals or enjoy them for what they are, instead of seeking them to fix the hole inside themselves where self-worth was supposed to be.

This is not consistent with the predictions of your model, AFAICT. You hypothesize that impostor syndrome is about altering perceived competence, but people with unfulfilled ambition do not have an inaccurate assessment of their competence. The hypothetical author doesn't believe themselves incapable of writing a book; rather, they see themselves as a non-author who managed to get a book published. A "real" author would have accomplished more, they think.

While it's true that they are always comparing themselves unfavorably to those who have greater skill, they are not confused as to what their actual skill is. Your model, if I understand it correctly, claims that the author should not keep trying to status climb, since the purpose is to avoid the threat of status claims.

If you succeed to convince yourself that your problem is lack of skill, then working on improving your skill is a rational reaction (given your beliefs), isn't it?

Or maybe the causality is the other way round. People who habitually work on improving their competence are more likely to get into the Impostor situation (and then they continue doing what they always did).

IIUC, you're now contradicting the major premise of your article: that the function of impostor syndrome is to avoid apparent status competition by appearing too competent. So unless you explicitly exclude unfulfilled ambition-style impostor syndrome from your model, ISTM to be a direct refutation of the premise.

A second type of impostor syndrome that's been brought up by others in the thread is "people make a big deal of my skill that I consider modest or unworthy of such praise", which seems functionally more similar to unfulfilled ambition than to the model your article describes. Again, the person experiencing the syndrome does not misperceive their skill, but doesn't consider it to be terribly important.

Why doesn't this match your model? Let's look at a counter-example: using one's competence to claim higher status. Consider the stereotype of the arrogant engineer who believes they know everything, rebelling against the "suits" -- who are likely taller, physically stronger, and far more socially dominant.

If your model were predictive, this should not be possible, or at least not common enough to be a stereotype. The arrogant engineer probably has actually been personally beaten up by people like the people they are going against! Your model predicts that this person should be downplaying their competence, not using it as a weapon.

Next, let us contrast your model with a simpler one: "people can be differently-calibrated regarding what competence levels equate to a role/identity, or a particular level of status". This model elegantly predicts the existence of all the subtypes of impostor syndrome that have to date been reported in this thread, and includes the possibility of overestimation by the arrogant engineer.

Finally, the one category of behavior people have that seems to match your model -- downplaying one's skills so as not to appear arrogant -- is generally not referred to as impostor syndrome. People who are trying not to appear arrogant, out of modesty or lack of confidence, do not generally describe themselves as feeling like an impostor or fake. In order to feel like an impostor or fake from the inside, some kind of dissonance is required between an inside view and an outside view.

That is, I cannot view myself as a "fake" unless I have some concept of what would be "real", in order to experience a discrepancy between the two. In each subtype of impostor syndrome, the two things being compared are different (e.g. whether I "feel like" an author, or whether people's amount of praise feels "appropriate"), but the concept is the same: there is a dissonant comparison.

IOW, I contend that people who are actually experiencing a desire to downplay competence to avoid status competition are highly unlikely to call what they are experiencing "impostor syndrome" or "feeling like a fake". And conversely, that people who do describe their experiences as feeling like a fake or impostor, are highly unlikely to be downplaying their competence to avoid status competition. (vs. merely feeling their competence to be overrated or their status to be under-validated.)

By an experimental model, I mean that if you take those groups of people, and then determine what exactly is going on in their heads -- by fixing it -- you could experimentally show that the experience of "fakeness" requires contrasting status perceptions, while downplaying one's competence does not imply an experience of "fakeness".

That is, I don't think that the thing you have described in this article can be meaningfully labeled "impostor syndrome", and that actual people experiencing the thing you have described would be much more likely to self-label as "being modest" or "lacking confidence" or something similar, rather than impostor syndrome.

(This is further supported by the number of people in the comments who have experienced one form of impostor syndrome or another, being confused by your model.)

I would suggest renaming your model and adjusting its explanation accordingly.

Comment by pjeby on Impostor Syndrome as skill/dominance mismatch · 2020-11-07T03:59:20.355Z · LW · GW

Your second prediction isn't unique to this model. The first is more novel, but I'm not sure how you'd measure it in a consistent way.

Also, ISTM your model is saying that the function of Impostor Syndrome is to lower self-perception of competence in order to lower other-perception of competence... but this seems to be contradicted in practice by the amount of time people with impostor syndrome spend working on improving their competence.

Do people exist who downplay their competence to avoid over-claiming status? Of course! But we don't usually call this impostor syndrome. We might describe it as a lack of confidence or fear of speaking up. The thing that such people want to change (if they want to change) is to increase their confidence.

In contrast, the thing that I usually call "impostor syndrome" is when a person, despite evidence to the contrary, perceives themselves as being fake or inferior compared to the "real" authorities of the applicable field. Like an author who has written books, but doesn't feel they are a "real author" yet because they haven't met hurdles X, Y, or Z... only to discover that upon reaching those hurdles (e.g. a major publishing contract), they still don't "feel" like a "real" author, and then, say, decide they need a bestseller.

I don't see how to fit this type of impostor syndrome within your model, though. While you could argue that the goal of the behavior is to keep the person forever dangling the next carrot in front of themselves (i.e. always maintaining low perceived competence), modeling this in terms of a dominance hierarchy makes no sense to me. The author isn't worried that other authors are going to beat him or her up for daring to compete, otherwise it would make absolutely no sense for them to keep going after bigger and better accomplishments!

If anything, you could argue that it's somebody seeking a particular level of status, but then not being able to take in the feedback that tells them they've reached it, or are miscalibrated as to what feedback they should be getting for that accomplishment. So they conclude that the goal is insufficient and advance to the next level of status target(s). (So it's definitely not an attempt to avoid status competition!)

Of course, this is just one possible definition of "impostor syndrome", and I imagine that others can exist, but this particular type is the one I help people with the most, so it's what I tend to think of when somebody says something about impostor syndrome.

Comment by pjeby on The Curse Of The Counterfactual · 2020-08-27T20:06:46.715Z · LW · GW

I don't think the word is the critical part. The critical part is the underlying sense of "supposed to (have) happen(ed), and if anyone disagrees, then they're wrong". There are a lot of different words that can be used to describe that state, including should, have to, ought to, supposed to, must, required, etc.

This type of "should" is effectively a statement of moral rectitude or righteousness, coupled with condemnation. It's not the same as e.g.

  • I would be better off if I did this
  • It would be a good idea if I did this
  • I wish something else had happened

etc. These are also expansions of "should", but lack the implication "and it's a violation of propriety if not implemented" or "you're a less righteous person if you don't".

Comment by pjeby on The Book of HPMOR Fanfics · 2020-07-04T17:59:16.431Z · LW · GW

OMG.

"You see, children, your brain is an arrogant and lazy lump of gray matter. It thinks it can handle anything, so it doesn't bother telling you what's going on unless it thinks you might actually care. That is why, children, you must be careful to inform your brains of everything they should be listening for."

And:

“Do you know what this monster is?”

In the abrupt silence, Harry spoke. “You, teacher?”

“No,” said Professor Quirrell. His lips twisted. “The plot.”

There was a baffled silence.

Then, the Hufflepuff girl called out, “The story is making sense, I feel a great sense of foreboding.”

The girl sitting next to her, her hand raised, was Hermione Granger. “Um, that’s because there’s a story in this chapter.”

It's amazing how good these riffs are at getting Quirrell's lecturing voice right, not to mention occasionally making a scary amount of sense.

Comment by pjeby on When is it Wrong to Click on a Cow? · 2020-06-29T02:24:08.297Z · LW · GW

If I were to guess at the source of your intuition, I would say you were taught a value system that denigrates people for not putting enough effort into things, or being focused on one's own pleasure rather than doing more "important" things.

The musician and video game player have to at least work for their pleasure, and their reward is a place on a status ladder of some kind. The stim user isn't climbing any ladders or putting in any effort, and thus should be denigrated/disapproved of.

This isn't an acultural moral intuition, though: it's based on your specific schooling, family, or other cultural upbringing. A person brought up in an environment where personal ambition is denigrated would likely see the musician as a try-hard, the video game player as acceptable as long as they're not too serious about it, and the stim user as someone to get together and have a stim party with!

Further:

we must do our best to optimize what little time we have as effectively as possible

Healthy humans are usually more satisficing than optimizing. (Slack is healthy, Moloch not so much.)

In general, my observation has been that the more somebody talks up some form of utility maximization, not as a simple consequence of math or a useful tool, but as a moral imperative and a personal ideal, the more likely the interest arises from a compulsion to virtue-signal in opposition to something one has been taught should be denigrated. The virtue signaling impulse then happens whenever one is exposed to examples of the denigrated thing (e.g. thinking about somebody using a stim machine).

Cultural indoctrination like this can be altered or deleted fairly simply using memory reconsolidation techniques, after which the urge to denigrate or virtue-signal in response to a pattern-match is replaced with something like, "well, it depends" -- i.e., specific-case reasoning rather than a compulsive heuristic.

Comment by pjeby on Don't Make Your Problems Hide · 2020-06-28T23:00:24.115Z · LW · GW

One relevant factor here is that the conscious mind is largely driven by subconscious beliefs in the first place, so the direction in which the conscious mind attempts to edit beliefs is often dysfunctional; see e.g. the part of your previous article that said:

People know intuitively where leverage points are.... Everyone is trying very hard to push it IN THE WRONG DIRECTION!

The desires we have to edit our inner leverage points are no different: a person who is a perfectionist will rarely work on editing themselves to be less perfectionistic, vs. trying to edit themselves to be better at not making mistakes.

Even in the case of trying to edit one's self to be "less perfectionistic", one is likely to approach it as something like, "How can I stop being upset over these stupid mistakes (so that I can get closer to being perfect sooner)?", not "How can I stop thinking mistakes mean I'm a shitty person?"

Conscious editing without first looking for background assumptions (like "mistakes = shitty person") will just be rearranging the furniture instead of actually moving house. But we don't consciously notice these background assumptions by default, because our brain doesn't attribute the problems we experience as a result of them, as having anything to do with them. We see surface symptoms and try to fix those symptoms, not question the underpinnings of our model of the world!

Comment by pjeby on What is Ra? · 2020-06-07T02:06:39.380Z · LW · GW

ISTM that's a result of worshipping Ra, rather than Ra-worship itself. Perhaps I am biased by my mother's example, but she was not a part of any mysterious organizations or their status incentives. She merely believed that Church, State, Schools, Companies, or other such Capitalized Entities had mystical powers to which mere human individuals could not aspire, unless they were assimilated into those institutions and thereby earned the blessing of said mystical powers.

AFAICT, this did not come from the type of organizational evolution and incentives that you're talking about; rather, this was simply a widely-held belief of hers that was largely independent of what competencies or institutions were being discussed. In her mind, ordinary humans couldn't do jack squat; anything an ordinary human did without an appropriate institutional blessing was merely an exception that didn't count the same as doing the thing "for real" -- it was in her mind the same as an actor pretending to be a priest not being able to actually forgive your sins or perform a marriage ceremony... just extended to everything that institutions or some sort of orthodoxy existed for.

So ISTM that the primary dynamic is that deification of the abstract offers a superstimulus that can't be matched by real, concrete, imperfect individuals, leading to worship of the abstraction in place of critical thinking or analysis. In effect, my mother was just doing the organizational/societal equivalent of people preferring their anime waifus or surgically-altered pornstars over real-life people. (IOW, removing details that imply imperfection or excess complexity is already a standard route to superstimulus in humans.)

Comment by pjeby on What is Ra? · 2020-06-06T17:10:19.827Z · LW · GW

I thought the article provided a pretty clear definition: i.e., a preference for Mysterious, Prestigious, collective Authority over known, functional, individual capability.

Thank you for posting this, btw, because I hadn't actually heard of it before, and reading the article allowed me to finally make sense of a way that my mother treated me as a child, that I couldn't get my head around before. (Because it just seemed like she was demeaning me and my abilities personally, rather than simply having a semi-religious belief that no mere individual imperfect human could ever do something meaningful through their own actions, rather than through the divine authority of proper institutions.)

Oddly enough, I was actually trying to change a belief I'd picked up from her (that I can't do anything meaningful or important "for real") when I had the impulse to go look at LW and spotted your question, then read the article. It was just what I needed to wrap my head around the belief and remove it so that I don't get nervous when I get close to doing something meaningful "for real".

Indeed, what I found was that while I hadn't fully internalized her belief in Ra, I effectively picked up as a background assumption the idea that only certain blessed people are allowed to market themselves successfully or succeed in business in a big way, or write Proper Books... and that I'm not one of them.

So even though I am about as anti-Ra in philosophy as they get, I still had a Ra-like alief that made me feel inadequate compared to the Mysterious Authorities when I tried writing books or promoting my work too effectively. (Versus the very ineffective methods of doing both that I've been doing for the past 14 years.) I'm very much looking forward to see what I can do when I don't have Ra-induced feelings of inadequacy dogging my steps.

Comment by pjeby on From self to craving (three characteristics series) · 2020-05-31T00:43:57.583Z · LW · GW

Eh. Sorta? I've been busy with clients the last few days, not a lot of time for experimenting. I have occasionally found myself, or rather, found not-myself, several times, almost entirely accidentally or incidentally. A little like a perspective shift changing between two possible interpretations of an image; or more literally, like a shift between first-person, and third-person-over-the-shoulder in a video game.

In the third person perspective, I can observe limbs moving, feel the keys under my fingers as they type, and yet I am not the one who's doing it. (Which, I suppose, I never really was anyway.)

TBH, I'm not sure if it's that I haven't found any unpleasant experiences to try this on, or if it's more that because I've been spontaneously shifting to this state, I haven't found anything to be an unpleasant experience. :-)

Comment by pjeby on Speculations on the Future of Fiction Writing · 2020-05-28T22:06:36.588Z · LW · GW

Since you can only nitpick a movie after you've already paid to see it, where is the economic incentive for anyone to do this?

The actual problem is even more complicated than that, though. The movie business is run by producers -- the people who put up the money. They may have experience in the film business, but since there are no universally agreed criteria as to what constitutes a good investment, it is subject to bias, nepotism, superstition, etc. Favors are owed to people, the star or director has a pet issue that must be addressed to get them on board, etc.

This means that to a great extent the artistic direction of a film is determined by committee, many members of which have only the most superficial understanding of what is going on or what the film is about or anything else, and have no real desire to understand more deeply because such an understanding would not do anything useful for their own interests.

IOW, it's a values alignment problem of the same type that produces other forms of civilizational inadequacy, and the mere existence of better writing tools can't help it, for the same reason that the vast storehouse of existing wisdom and literature of telling effective stories on film doesn't help it that much either. The screenplays have to be effectively written to get bought, but once they're bought the connection between what was first written and what actually gets filmed can be quite tenuous indeed.

Comment by pjeby on From self to craving (three characteristics series) · 2020-05-27T17:04:37.497Z · LW · GW

No, that's a good point, as far as it goes. There does seem to be some sort of meta-process that you can use to decouple from craving regarding these things, though in my experience it seems to require continuous attention, like an actively inhibitory process. In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from, and I don't think that's predictively accurate.

Your points regarding what's possible with meditation also make some sense... it's just that I have trouble reconciling the obvious evolutionary model with "WTF is meditation doing?" in a way that doesn't produce things that shouldn't be there.

Consciously, I know it's possible to become willing to experience things that you previously were unwilling to experience, and that this can eliminate aversion. I model this largely under the second major motivational mechanic, that of risk/reward, effort/payoff.

That is, that system can decide that some negative thing is "worth it" and drop conflict about it. And meditation could theoretically reset the threshold for that, since to some extent meditation is just sitting there, despite the lack of payoff and the considerable payoffs offered to respond to current urges. If this recalibrates the payoff system, it would make sense within my own model, and resolve the part where I don't see how what you describe could be a truly conscious process, in the way that you made it sound.

IOW, I might more say that part of our motivational system is a module for determining what urges should be acted upon and which are not worth it, or perhaps that translates mind/body/external states into urges or lack thereof, and that you can retrain this system to have different baselines for what constitutes "urge"-ency. ;-) (And thus, a non-conscious version of "valence" in your model.)

That doesn't quite work either, because ISTM that meditation changes the threshold for all urges, not just the specific ones trained. Also, the part about identification isn't covered here either. It might be yet another system being trained, perhaps the elusive "executive function" system?

On the other hand, I find that the Investor (my name for the risk/reward, effort/payoff module) is easily tricked into dropping urges for reasons other than self-identification. For example, the Investor can be tricked into letting you get out of a warm bed into a cold night if you imagine you have already done so. By imagining that you are already cold, there is nothing to be gained by refraining from getting up, and this shifts the "valence", as you call it, in favor of getting up, because the Investor fundamentally works on comparing projections against an "expected status quo". So if you convince it that some other status quo is "expected", it can be made to go along with almost anything.
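The warm-bed trick can be sketched as a reference-point shift: the options and payoffs don't change, but which changes count as "losses" does. (A toy model with arbitrary numbers, just to illustrate the mechanism described above; it's not anything from the original comment.)

```python
# Toy model of reference-dependent choice: changes from the assumed
# status quo are evaluated separately, with losses counted double.

LOSS_WEIGHT = 2.0

def felt(change):
    """Subjective weight of a change relative to the expected status quo."""
    return change if change >= 0 else LOSS_WEIGHT * change

# Getting up out of a warm bed: lose 8 units of warmth, gain 10 units
# of whatever you got up to do.
warmth_change, payoff = -8, 10

# Reference point = "warm in bed": the warmth drop is a fresh loss,
# counted double, so it swamps the gain.
print(felt(warmth_change) + felt(payoff))  # -16 + 10 = -6 -> stay in bed

# Reference point = "I'm already cold": the warmth drop is no longer a
# change at all, so only the gain registers.
print(felt(0) + felt(payoff))              # 0 + 10 = 10 -> get up
```

Nothing about the external situation differs between the two cases; only the imagined baseline moves, which is why convincing the Investor that some other status quo is "expected" flips the evaluation.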

And so I suppose if you imagine that it is not you who is the one who is going to be cold, then that might work just as well. Or perhaps making it "not me" somehow convinces the Investor that the changes in state are not salient to its evaluations?

Hm. Now that my attention has been drawn to this, it's like an itch I need to scratch. :) I am wondering now, "Wait, why is the Investor so easily tricked?" And for that matter, given that it is so easily tricked, could the feats attributed to long-term meditation be accomplished in general using such tricks? Can I imagine my way to no-self and get the benefits without meditating, even if only temporarily?

Also, I wonder if I have been overlooking the possibility to use Investor mind-tricks to deal with task-switching inertia, which is very similar to having to get out of a warm bed. What if I imagine I have already changed tasks? Hm. Also, if I am imagining no-self, will starting unpleasant tasks be less aversive?

Okay, I'm off to experiment now. This is exciting!

Comment by pjeby on From self to craving (three characteristics series) · 2020-05-25T21:09:39.050Z · LW · GW

This is all very interesting, but I can't help but notice that this idea of valence doesn't seem to be paying rent in predictions that are different from what I'd predict without it. And to the extent it does make different predictions, I don't think they're accurate, as they predict suffering or unsatisfactoriness where I don't consciously experience it, and I don't see what benefit there is to having an invisible dragon in that context.

I mean, sure, you can say there is a conflict between "I want that food" and "I don't have it", but this conflict can only arise (in my experience) if there is a different thought behind "I want", like "I should". If "I want" but "don't have", this state is readily resolved by either a plan to get it, or a momentary sense of loss in letting go of it and moving on to a different food.

In contrast, if "I should" but "don't have", then this actually creates suffering, in the form of a mental loop arguing that it should be there, but it isn't, but it was there, but someone ate it, and they shouldn't have eaten it, and so on, and so forth, in an unending loop of hard-to-resolve suffering and "unsatisfactoriness".

In my model, I distinguish between these two kinds of conflict -- trivially resolved and virtually irreconcilable -- because only one of them is the type that people come to me for help with. ;-) More notably, only one can reasonably be called "suffering", and it's also the only one where meditation of some sort might be helpful, since the other will be over before you can start meditating on it. ;-)

If you want to try to reduce this idea further, one way of distinguishing these types of conflict is that "I want" means "I am thinking of myself with this thing in the future", whereas "I should" means "I am thinking of myself with this thing in the past/present".

Notice that only one of these thoughts is compatible with the reality of not having the thing in the present. I can not-have food now, and then have-food later. But I can't not-have food now, and also have-food now, nor can I have-food in the past if I didn't already. (No time travel allowed!)

Similarly, in clinging to positive things, we are imagining a future negative state, then rejecting it, insisting the positive thing should last forever. It's not quite as obvious a causality violation as time travel, but it's close. ;-)

I guess what I'm saying here is that ISTM we experience suffering when our "how things (morally or rightly) ought to be" model conflicts with our "how things actually are" model, by insisting that the past, present, or likely future are "wrong". This model seems to me to be a lot simpler than all these hypotheses about valence and projections and self-reference and whatnot.

You say that:

  • You witness someone being wrong on the internet
  • The moral judgment system creates an urge to argue with them
  • Your mind notices this urge and forms the prediction that resisting it would feel unpleasant, and even though giving into it isn't necessarily pleasant either, it's at least less unpleasant than trying to resist the urge
  • There's a craving to give in to the urge, consisting of the hypothesis that "I need to give in to this urge and prove the person on the internet wrong, or I will experience greater discomfort than otherwise"
  • The craving causes you to give in to the urge

But this seems like adding unnecessary epicycles. The idea of an "urge" does not require the extra steps of "predicting that resisting the urge would be unpleasant" or "having a craving to give in to the urge", etc., because that's what "having an urge" means. The other parts of this sequence are redundant; it suffices to say, "I have an urge to argue with that person", because the urge itself combines both the itch and the desire to scratch it.

Notably, hypothesizing the other parts doesn't seem to make sense from an evolutionary POV, as it is reasonable to assume that the ability to have "urges" must logically precede the ability to make predictions about the urges, vs. the urges themselves encoding predictions about the outside world. If we have evolved an urge to do something, it is because evolution already "thinks" it's probably a good idea to do the thing, and/or a bad idea not to, so another mechanism that merely recapitulates this logic would be kind of redundant.

(Not that redundancy can't happen! After all, our brain is full of it. But such redundancy as described here isn't necessary to a logical model of craving or suffering, AFAICT.)

Comment by pjeby on From self to craving (three characteristics series) · 2020-05-24T17:33:29.504Z · LW · GW

Interesting model. I'm not 100% certain that mere identification+valence is sufficient (or necessary) to create craving, though. In my experience, the kind of identification that seems to create craving and suffering and mental conflict is the kind that has to do with self-image, in the sense of "what kind of person this would make me", not merely "what kind of sensory experience would I be having".

For example, I can imagine delicious food and go "mmmm" and experience that "mmm" myself, without necessarily creating attachment (vs. merely desire, and the non-self-involved flow state of seeking out food).

But perhaps I'm misinterpreting your model; maybe what you're saying is that I would have to also think "it would make me happy to eat that, so I should do that in order to be happy."

I think what I'm trying to clarify is that it is not merely valence or sensation being located in the self, but that another level of indirection is required, as in your "walk to relax" example... Except that "walk to relax" is really an attempt to escape non-relaxedness, which is already a level of indirection. If I am stressed, and think of taking a walk, however, I could still feel attracted to the calming of walking, without it being an attempt to escape the stress per se.

Yeah. So ISTM that indirection and self-image are more the culprits for creating dysfunctional loops, than mere self-identified experience. Seeking escape from a negative state, or trying to manipulate how the "self" is seen, seem to me to be prerequisites for creating dysfunctional desire.

In contrast, ISTM that many things that induce suffering (e.g. wanting/not wanting to get up) are not about this indirection or self-image manipulation, but rather about wanting conflicting things.

IOW, reducing to just self-identified valence seems like a big oversimplification to me, with devils very much in the details, unless I'm misunderstanding something. Human motivation is rather complex, with a lot of different systems involved, that I roughly break down as:

  • pleasure->planning (the system that goes into flow to create a future state, whose state need not include a term for the self)
  • effort/reward (the system that makes us bored/frustrated on the one hand, or go sunk-cost on the other)
  • moral judgment and righteousness (which can include ideas about the "right" way to do things or the "right" way of being, ideals of perfection or happiness, etc.)
  • self-image/esteem (how we see ourselves, as a proxy for "what people will think")
  • simple behavioral conditioning and traumatic conditioning

...and those are just what I've found it useful to break it down into. I'm sure it's a lot more complicated than just that. My own observation is that only the first subsystem produces useful motivation for personal goals without creating drama, addiction, self-sabotage, or other side effects... and then only when run in "approach" mode, rather than "avoid" mode.

So for example, the self-image/esteem module is absolutely in line with your model, in the sense that a term for "self" is in the equation, and that using the module tends to produce craving/compulsion loops. But the moral judgment system can produce craving/compulsion loops around other people's behavior, without self-reference! You can go around thinking that other people are doing the wrong thing or should be doing something else, and this creates suffering despite there not being any "self" designated in the thought process. (e.g. "Someone is wrong on the internet!" is not a thought that includes a self whose state is to be manipulated, but rather a judgment that the state of the world is wrong and must be fixed.)

Comment by pjeby on The Curse Of The Counterfactual · 2020-02-11T18:18:22.970Z · LW · GW

What I feel is different is that the Way provides a means for systematically finding this underlying should and explaining it from the inside.

I notice that I am confused, because I'm not at all clear how Nate's conceptual model would have helped me find the body-memory of my mother screaming at me about some deadline as a child. In contrast, using the Work to surface my objection to not doing something led me to that memory in a few minutes without me needing to do any particular analysis, consequentialist or otherwise.

This isn't to say that his approach is wrong, just that it's incomplete. Notably, it doesn't provide any guards against confabulating your "explanations" of what your thought process is. When you use analytical reasoning to understand yourself, the answers are often wrong, because the thing that is actually causing your response is rarely based on any sort of analysis, but rather on simple pattern matching. (In the specific example above, my brain was pattern matching "important thing I'm supposed to do -> stress about it, don't allow yourself to do anything else, and call it taking things seriously, or else you're a bad person".)

Finding patterns like this requires observation of what your body and mind are doing, while disengaging from attempts to logically "explain" things, since patterns like these trivially hijack your analytical reasoning (e.g. by tricking you into defining what you're doing as "taking things seriously" rather than "freaking out").

Comment by pjeby on The Curse Of The Counterfactual · 2020-02-10T21:22:16.284Z · LW · GW

On the other hand, I see this post as more about examining the exact mechanism underlying this error we make.

Yes, though an important part of it is also tackling the means by which the algorithm can be swiftly undone from the inside. Nate's tools are oriented more to the object level of a specific "should", whereas I focus more on exposing the assumptions and social imprints that cause us to develop shoulds in the first place.

For example, with Nate's tools I could have deconstructed the idea that "I should be doing something right now", but they would likely not have led me to discovering the underlying idea of "I should be taking things seriously", and the underlying imprinted-by-example meaning of "taking things seriously = freaking the fork out about them".

To be fair, I'm sure there's context to Nate's tools I'm leaving out, and I occasionally do use things somewhat like them with clients, not as an ongoing approach but more as a preparatory stage in learning the Work, to show them the illogicality of a "should" they might be clinging to. (e.g. to demonstrate why "I should have done X yesterday" is based strictly on imaginary hypotheticals)

But in the long run, I consider logical disputation to mostly be useful as a tool for identifying experiential counterpoints to the emotion-backed aliefs that drive the process. You can't (directly) reason yourself out of what you were never (directly) reasoned into.