Slack matters more than any outcome

post by Valentine · 2022-12-31T20:11:02.287Z · LW · GW · 54 comments

Contents

  Addictions
  Imposing an idea
    Caffeine dependency
    Not listening
  Adaptive entropy
    Possessing minds and behavior
    Entropic burden crushes everything else
  Let go of outcomes
    Earning trust
    An aside on technical debt
  Achieving through force
    Is this bad?

About a month ago Ben Pace invited me [LW(p) · GW(p)] to expand on a point I'd made in a comment.

The gist is this:

I felt inspired to write up an answer. Then I spent a month working on it. I clarified my thinking a lot but slid into a slog of writing. Kind of a perfectionist thing.

So I'm scrapping all that. I'm going to write a worse version, since the real choice is between (a) a quickly hacked-together version and (b) nothing at all.

 

Addictions

My main point isn't really about addictions, but I need to clarify something about them anyway. They're also a great example cluster.

When I say "addiction", I'm not gesturing at a vague intuition. I mean a very particular structure: there's some unwelcome experience, and there's a distraction that offers temporary relief from that experience. The addiction is from the unwelcome experience, not to the distraction.

So when someone engages in the distraction, it provides temporary relief, but the unwelcome experience arises again — and now the distraction is a little more tempting. A little more habit-forming. When that becomes automatic, it can feel like you're trapped inside it, like you're powerless against some behavior momentum.

Which is to say, this structure eats slack [LW · GW].

Some rapid-fire examples:

I'm not saying that all addictions are like this. I can't think of any exceptions off the top of my head, but that might just be a matter of my lack of creativity or a poor filing system in my mind.

I'm saying that there's this very particular structure, that it's quite common, and that I'm going to use the word "addiction" to refer to it.

And yeah, I do think it's the right word, which is why I'm picking it. Please notice the framing effect, and adjust yourself as needed.

 

Imposing an idea

The main thing I want to talk about is a generalization of rationalization, in the sense of writing the bottom line [LW · GW].

Caffeine dependency

When I grab a cup of coffee for a pick-me-up, I'm basically asserting that I should have more energy than I do right now.

This is kind of odd if you think about it. If I found out my house were on fire, I wouldn't feel too tired to deal with it. So my body can mobilize the energy even from a mental perception.

I mean, the caffeine wouldn't work if the energy weren't in some sense available in my body already.

So if I really do need to do that bit of writing, or give a killer presentation, or show up alert to that date… why isn't that fact enough for me to have the right amount of energy?

But instead of asking that question, I grab some coffee.

This induces a kind of split. My idea of how energized I should be is in defiance of… something. Obviously some kind of process in my body disagrees with my mental idea of how I should be.

In the case of caffeine, that process shows up as adaptation. My brain grows more adenosine receptors to get around the action of the caffeine — precisely because the caffeine is messing with my body's idea of how much energy should be present.

This argument between the mind and the body is what eventually creates caffeine addiction. The conscious mental process that results in reaching for more coffee barely dialogues at all with the body-level systems it's overriding. So they end up in a kind of internal arms race.

I think it's pretty common for people to land on something like an equilibrium. Something like "Don't talk to me before I've had my first cup of coffee" plus a kind of ritual around when and how to have the second and maybe third cup. This equilibrium is a kind of compromise: the person can have normal functional levels of alertness at predetermined times, but at the cost of needing coffee to be functional — and sometimes the coffee won't work or won't be enough anyway.
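If you want the arms-race flavor of this made concrete, here's a minimal toy simulation. It's my own illustrative sketch, with made-up numbers and deliberately simple linear dynamics, not pharmacology. A "mind" doses caffeine to force alertness up to a fixed target; the "body" adapts in proportion to how hard it's being overridden:

```python
# Toy arms-race model of caffeine dependence (illustrative only).
# The "mind" forces alertness toward a target; the "body" counters by
# growing receptors, which drags the unforced baseline down.

BASELINE = 1.0    # natural alertness with no adaptation
TARGET = 1.2      # the alertness the mind insists on having
ADAPT_RATE = 0.1  # how strongly the body counters each dose
DECAY = 0.2       # how fast adaptation fades when unused

receptors = 0.0   # accumulated adaptation (extra adenosine receptors)

for day in range(1, 61):
    unforced = BASELINE - receptors     # alertness without coffee
    dose = max(0.0, TARGET - unforced)  # the mind forces the gap closed
    receptors += ADAPT_RATE * dose - DECAY * receptors
    if day in (1, 5, 20, 60):
        print(f"day {day:2d}: unforced={unforced:.2f}, dose needed={dose:.2f}")
```

The simulation settles into exactly the compromise equilibrium described above: unforced alertness drops below the old baseline (here to about 0.8) while the daily dose grows (to about 0.4) just to hit the target. Stopping at that point means living below baseline until the adaptation decays, which is the withdrawal slog in the next paragraph.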

Getting out of this equilibrium is usually pretty sticky. It's kind of heavy. The inner war has eaten some slack. Ending the war requires slogging through caffeine withdrawal, which means not only being extra tired and dysfunctional and possibly dealing with headaches, but also fighting the tempting habit of ending the discomfort with a bit of caffeine.

And lots of people can't do it. They just don't have the inner resources at a given time to face all of that.

…which is another way of saying that they don't have enough slack.

And the caffeine addiction is one of the things eating up that slack!

Oops.

Not listening

The caffeine thing is an example of a general pattern. It's about embedding internal wars in how a system works.

In practice, this embedding happens because the mind disagrees with the world, and doubles down on its disagreement instead of listening.

To the extent that the thing the mind is forcing can adapt, you end up with an arms race.

Here are a few more examples, again in rapid-fire:

There are basically four problems with forcing instead of listening:

  1. The need to fight becomes embedded in the dynamic. This is what eats slack.
  2. The fighting itself is often expensive and has externalities — roughly the same way that any war tends to be devastating to the land it's fought on.
  3. If the side being imposed upon has a point, that point goes unheard. Which is to say, if you're doing the forcing, then you're violating something you would care about if you were to become aware of it, but you're doubling down on your unawareness.
  4. Even if the imposed-upon side is just missing information, it doesn't know that. So it's gonna fight back as hard as it can unless and until it learns what you know — or until you obliterate it, which is usually extremely difficult (and is a terrible policy due to point 3).

 

Adaptive entropy

I find it helpful to reify the tendency-to-eat-slack-and-keep-adding-force-to-deal-with-the-lack-of-slack as a kind of stuff. It's like a sticky tar that gets in the way of a system being able to use all its otherwise available intelligence[1].

All the analogies are a little terrible, but the one I find least terrible is entropy. Entropy as energy that isn't available to do work, and that grows if you try to make it do work.

I use the term adaptive entropy to talk about this stuff. It's the problem-ness of a situation that fights against being solved. The way the pursuer/avoider dynamic actually gets stronger when the people in it try to orient to the dynamic itself and solve it. The way that bringing more intensity or insight to a battlefront of the culture wars makes the war more severe.

You can think of adaptive entropy as sticky problem-ness. Sure, maybe we apply a ton of political force to pass legislation that deals with something minor about climate change, but the cost is massively more political divisiveness and tremendously greater resistance to everything else having to do with orienting to climate change. For example.

For another example, what's the cost you incur by forcing yourself to follow a good diet-and-exercise plan? Or at least one you think is good. Imposing this mental idea on your system triggers an inner arms race. As that gets embedded in how you work… well, usually the plan fails, often spectacularly, and now there's a bit of extra gridlock in your system as whatever you were trying to override correctly trusts you less. But it gets even worse if your thoughts are wrong about what's actually healthy for you and you succeed in imposing your mental plan anyway.

Which is to say, the act of trying to force yourself has incurred adaptive entropy — as a loss of willpower and/or health.

Possessing minds and behavior

Adaptive entropy is anti-slack. It's not just a lack of slack. It's a structural devouring of slack that just keeps growing, that defies your attempts to orient to it, that eats your mind and energy in service to its growing abyssal presence.

It's important to notice that Moloch thinks with your mind. To the extent you're engaged in a race to the bottom, the creativity you bring to bear to race at least as well as everyone else is precisely the ingenuity Moloch is using to screw the whole system over. Moloch is something like a distributed computation here.

The same happens with foolish arguments. Around these parts I think I can pick on religion a bit. Arguments about how God exists because of "prime mover" reasoning (for instance) have to come from somewhere. Someone had to write the bottom line and then direct their intelligence at the project of justifying that bottom line. Then the argument becomes a kind of memetic shield that others who like that bottom line can use to bolster their position [? · GW].

The whole problem with adaptive entropy is that some bottom line has been written, and the fixation on that bottom line has become embedded in the system.

In practice this means that thinking directed at getting rid of adaptive entropy tends to strengthen it.

Like in the pursuer/avoider dynamic. Often the pursuer is vividly aware of the problem. It bothers them. It causes them to fret about how their fretting might be making things worse. And they're aware that that meta-fretting is anti-helpful! But all they can think (!) to do about it is try to talk about the dynamic with their partner. But since the pursuer is trying to have the conversation in order to alleviate their stress, this actually enacts the very pressured dynamic that's the fuel for the adaptive entropy in the relationship. Thus the very act of trying to orient to the problem actually makes it worse.

This "it gets worse if you try to deal with it" isn't necessarily true in every case. In this way adaptive entropy is actually unlike thermodynamic entropy: it's possible to reduce adaptive entropy within a closed system.

But the default very much is like this. Most people who are within a heavily adaptive entropic system cannot help but increase the adaptive entropy when they orient to the problem-ness of the situation.

Like, how much of the dialogue going on in public about the culture wars is actually helping resolve them? What's nearly all of that dialogue actually doing?

Entropic burden crushes everything else

Basically, if you want to solve the problem-ness of a situation, and the problem-ness has the structure of adaptive entropy (i.e., it's because of embedded forcing of an idea of how things should be onto responsive subsystems that are resisting), then any attempt to address the problem-ness that doesn't prioritize lifting the entropic burden is at best useless.

This is usually counterintuitive. I'm saying, for instance, that solving AI risk can't be done by trying to solve AI risk directly. Same for climate change. Same for war in the Middle East. Same for the USA obesity epidemic, or the homelessness problem, or basically any tension point of the culture wars.

It's not that the actions don't matter. It's not that you can't move the problem-ness around. It's that there's something functionally intelligent [LW · GW] fighting to preserve the problem-ness itself, and said intelligent opponent can and often does hijack people's thinking and efforts.

Anything that does not orient to this reality is basically irrelevant in terms of actually addressing the problem-ness. Regardless of how clever the plan is, and definitely regardless of how important the object-level issues are.

This is a kind of semi-mythopoeic way of saying "The problem is anti-slack." That anti-slack — what I'm calling "adaptive entropy" — is the crack in ability-to-coordinate through which Moloch crawls.

But you don't have to think of it mythopoeically at all. I'm naming mechanisms here, whatever aesthetic I'm wrapping it in.

The inclination to insist that we just have to try harder [LW · GW][2] is literally doubling down on force. It's fueling the adaptive entropy even more. It's an example of the mental hijacking I'm talking about. Demanding even harder that we absolutely must achieve a certain outcome, and if we're encountering resistance to moving in that direction then we just need more force. More intelligence, more energy, more funding, more taking in what's at stake [LW · GW].

Based on what I'm looking at — and what I'm looking at is extremely mechanical and sure seems quite overdetermined to me — this general thrust is utterly completely and entirely doomed [LW · GW].

 

Let go of outcomes

The basic deal with adaptive entropy is that we fixate on an outcome to the exclusion of context awareness.

In practice it's possible to lift the entropic burden by reversing that process: Become willing to land in any outcome, and prioritize listening.

This is actually baked into goal factoring for instance. For goal factoring to work at its best, you have to hold that hitting all your true goals is a more important requirement than having any particular outcome. Any outcome you can't let go of (keeping a relationship, staying in a job, not eating liver, etc.) is a limitation on your possible solution space, which makes finding an omni-win solution harder.

When I was teaching goal factoring at CFAR, I used to emphasize a sort of emotional equanimity move:

Name the paths forward you most fear. For each path, notice, and really take in, that the only reason you would choose such a path is that it in fact hits all your goals, as best as you can tell. Breathe into that and let the clarity of that sink in. Notice how, in the case where you actually choose that path, you not only survive but thrive as best as you possibly could, to the best of your knowledge.

(Today I'd add an element of, "Notice specifically what about it you fear. This is something to account for in your goal factoring." (It's possible I taught that at the time too. I just don't remember talking about it.))

The point is, you can't actually listen to all the parts if they believe you're only listening to get them to shut up and do the plan you had in mind from the beginning. You have to erase the bottom line, listen deeply, and be willing to change your intentions completely based on what you become aware of.

Earning trust

Of course, what you're finding is the best possible outcome given your constraints and desires. So why wouldn't you do that?

Well, because we often have layers upon layers of adaptive entropy in our system. Subagents don't trust the parts of us we normally identify with — and they often correctly don't trust us. We might try to listen, but our skill with listening and taking in needs work. We still have deeply embedded habits of forcing, many of which we're not yet in a position to consciously stop.

(Chronic physical tension is usually an example of adaptive entropy for instance. Most people can't relax their trapezius muscles to a point of ease. Peeling off the layers of adaptive entropy involved there can take a while, and often people can't do it directly despite the traps being "voluntary muscles". Turns out, some subsystem is using the tension. So we're not ready to stop adding that bit of force.)

The best way I know how to navigate this is to become trustworthy and transparent to these parts.

Trustworthiness requires that I cultivate a sincere desire to care for what each of these subagents care about. Even if I initially think it's silly or stupid or irrelevant. Those judgments are an example of outcome fixation. I have to let that go and be willing to change who I am and how I prioritize things (without excluding the things I was already caring for — that'd just be switching sides in the inner war). I have to sincerely prefer inner harmony over any outcome, which means I already care about the things my subagents care about. So I want to learn about them so as to better care for them.

In particular, I'm not trying to get these parts to trust me. I'm trying to become worthy of their trust. If I try to get them to trust me, then that effort on my part can actually increase adaptive entropy as the part catches on and gets suspicious.

(…so to speak. I'm anthropomorphizing these parts a lot for the sake of pumping intuition. The actual mechanism is immensely mechanical and doesn't depend on thinking of these things as agents.)

If I do my best to become worthy of trust, then all I have to do is transparently demonstrate — in the subagent's terms — whether or not I am trustworthy. And again, without fixation on outcome. I in fact do not want that part to trust me if it would be incorrect for it to do so! I'm relying on that subagent to care for something I don't yet know how to care for myself.

There's a whole art here. Part of what bogged down earlier drafts of this post was trying to find ways of summarizing this whole art.

But just as one example:

Notice your breathing. When you do so, do you modify your breathing? Do you make yourself breathe more deeply, or take bigger breaths, or squeeze your belly, or anything like that?

Can you instead just watch your breath without modifying it whatsoever?

The chances are very good that the answer is "no". Most people can't. Even meditators who have been working on this for a long time can find it tricky.

But you might be able to peel off one layer of habitual effort here. One layer of adaptive entropy.

Find some element of trying or forcing or squeezing you do have conscious control over. Maybe not perfectly, but enough that you can kind of… let go a little.

The goal here isn't to let go and keep letting go forever. It's instead to notice what the trying is for in the first place.

Normally the trying will kind of try to re-assert itself. Maybe you stop making yourself take deeper breaths, but then your belly tenses a little. And in relaxing that, after a few moments you find yourself feeling out of breath and needing to take in a deep breath to get enough air.

Just watch that process. You didn't need to do that before you noticed your breath (I assume). So what's different now? What part is "speaking" here? What's being cared for?

Listen to the thoughts, but don't believe them too much. The purpose of the thoughts is to maintain the entropic equilibrium. But they might contain hints about what's really going on.

Mostly just focus on the body sensations.

If you find the seed that relies on the tension, you can orient to that and really listen. How might you care for what that part cares about? Can you feel the deep truth that yes, in fact, you would want to prioritize caring for it if you could and knew how? Can you recognize your gratitude for what this tension-user is doing even if you don't yet know why?

If you stay with this long enough — which might be minutes, or it might take days or weeks, depending on the part and your internal skill — you'll feel a layer of the inner arms race end. The tension will leave — not just relax, but it'll let go in a way that is final.

And normally there's a sense of freedom, space, and energy that becomes more present as a result. Like putting down a heavy pack after forgetting you were wearing it.

But that step isn't up to you. It just happens, after you earn the trust of all parts involved. It's a result but not the goal. The goal is deeply listening to and honoring every part of yourself.

An aside on technical debt

The fact that adaptive entropy is reversible makes "entropy" a kind of terrible analogy.

Like I said earlier, all the analogies are a little terrible.

One could go with something akin to technical debt. That has a lot of merit. You can pay off technical debt. It clogs up systems. Having technical debt in a system makes incurring more technical debt more likely.

I noticed when trying to use this analogy in an early draft that it clogs up my thinking. Technical debt presupposes a goal, and adaptive entropy comes about via goal fixation. That loop turns out to make the whole idea pretty messy to think about.

Also, many times technical debt is literally an example of adaptive entropy. It's not just an analogy. You can see this more clearly if you zoom out from the debt to the system the debt is embedded in: Becoming determined to pay off technical debt incurs other costs due to the outcome fixation, so even if you get your code cleaned up the problem-ness moves around and is quite often on net worse.

The way you'd pay off technical debt without incurring more adaptive entropy is by attending to why the debt is there in the first place and hasn't spontaneously been paid off. If you really truly listen to that, then either you can address it (in which case the debt will get paid off without having to add effort to the system) or you'll recognize why it's actually fine for the debt to be there. It stops looking like "debt" at all. You come to full acceptance.
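As a concrete sketch of what that "full acceptance" can look like in a codebase (a hypothetical example of mine, not anything from the post), the "debt" stays, but the listening gets recorded so nobody has to re-fight the same war:

```python
# Hypothetical example: a "hack" that survived an honest listening pass.
#
# Why this isn't being "paid off": the upstream vendor API returns
# zone-naive local timestamps (confirmed with the vendor). Every "clean"
# fix we tried just pushed the ambiguity somewhere less visible, so the
# least-entropy option we know of is to keep the correction here at the
# boundary. Revisit only if the vendor ships zone-aware timestamps.

from datetime import datetime, timedelta

VENDOR_UTC_OFFSET = timedelta(hours=-5)  # vendor's servers are pinned to UTC-5

def normalize_vendor_timestamp(raw: str) -> datetime:
    """Convert the vendor's zone-naive local timestamp to UTC."""
    local = datetime.fromisoformat(raw)
    return local - VENDOR_UTC_OFFSET
```

Once the "why" is understood and written down, the workaround stops reading as debt at all; it's just the system's honest interface with its environment.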

But in practice most coding contexts have too much adaptive entropy to do this. Too much anti-slack. So instead people grit their teeth and deal with it and sometimes plot to marshal enough force to banish that evil goo from the codebase.

 

Achieving through force

In the original exchange that inspired this post [LW(p) · GW(p)], Ben Pace mentions:

I think a point missing in the OP and the comments is that sometimes the addiction is useful. I find it hard to concisely make this point, but I think many people are addicted to things that they're good at, be it competitions or artistic creations or mathematics.

I want to honor that some of Ben's opinion here might have been due to the word "addiction". I basically always mean the specific structure I named earlier, about being "addicted from, not addicted to". Ben might have meant something more intuitive, roughly along the lines of obsession.

With that said, and continuing to run with my meaning of "addiction", I want to quickly mention two things:

The only reason addictions look helpful is because of outcome fixation. We consider it worthwhile if someone can become an Olympic athlete (or whatever) and does so, pushing through resistance and difficulty to achieve something great.

But like with the caffeine example, why wasn't the fact that it's great enough, on its own, to create the motivation? Why did there need to be a "run away" habit embedded too?

The reason is usually that we sort of inherit adaptive entropy from the culture. The culture has a ton of outcome fixation that gets imposed on people. To the extent that you haven't learned how to unweave adaptive entropy in yourself and haven't learned how to refuse it when it's offered to you, culture's demands that you be a certain way or achieve certain things in order to be worthwhile can eat at your psyche.

More concretely, it can feel like the demands of the world have to be met somehow. Like pragmatics burden us and constrain us. But in truth many of them are kind of illusory, made of adaptive entropy rather than real physics-induced constraints on the world.

The kind of dumb we saw in the global response to COVID-19 is what a world mind addled with adaptive entropy looks like. The fact that so many people thought the right move was to apply force to their friends & family via moral passion shows a kind of ignorance of how adaptive entropy moves and behaves.

The reason for any of that is that it's possible to meet some of the world's demands by suppressing parts of yourself with adaptive-entropic structures.

Which is to say, if you can find a way to force yourself to achieve big things, you can sometimes achieve big things.

But on net, what tends to happen — at the individual scale and definitely at a cultural one courtesy of the Law of Large Numbers — is that the idea of achieving big things drives people forward into an entropic equilibrium, where they either stay stuck or get burned out.

Like, depression and anxiety are totally direct results of adaptive entropy. With some caveats I'm sure, but not as many as one might think. Every case of depression I can think of comes back to habitually embedded forcing of some conclusion. Not just that it's involved, but that it's critically involved, and the depression itself (true to how adaptive entropy works) can often interfere with the effort to orient to letting go of said conclusion.

But yes, there are outliers. Like successful Olympic athletes. And Elon Musk, which was a favorite example around CFAR toward the end of my tenure. And culture is happy to use outliers as exemplars, to tell you to strive harder.

Is this bad?

I don't mean to make this sound moralistic. I'm aware that how I've written this so far might come across that way.

I really honestly mean this as a description of something like an engineering principle. To point out something.

If force is how we achieve something, then it's because we're forcing against something.

If that something can adapt to our forcing, then it will, and we have an arms race.

It's extremely mechanical. It's borderline tautological. I expect something similar to happen in nearly any universe where evolution occurs and something analogous to intent arises.

If people want to ignore this, or try to use this in a Molochian trade to achieve predetermined goals, that's totally fine. It just has an immensely predictable outcome: They absolutely will incur more adaptive entropy, and as a result any system they're part of will become more entropically burdened, guaranteed.

It honestly feels like I'm saying something about as profound as "If a massive object goes unsupported in a gravitational field, it will accelerate along the lines of force of the field." It really seems this inevitable and obvious to me.

So none of this is about judgment. It's just fact.

And included in this fact is that, best as I can tell, anyone who really groks what I'm talking about will want to prioritize peeling off adaptive entropy over any specific outcome. That using addiction or any other entropy-inducing structure to achieve a goal is the opposite of what they truly want.

(Because adaptive entropy is experienced as stuck problem-ness. Who doesn't want less problem-ness? Who doesn't want more of their goals achieved? The only reason anyone wouldn't accept that is because they don't trust that it's real.)

But to the extent that what I'm saying isn't obvious to you, I don't want you to believe it! I'd rather you continue to force outcomes than that you override your knowing to blindly believe me here. Because frankly those two are the same option, entropically speaking, just applied differently, and the latter would induce adaptive entropy around understanding what adaptive entropy is.

So, I don't know what's best for any given person, including for you, my reader.

But I can say with a great deal of conviction that creating ease in the world as a whole is mostly a matter of orienting to adaptive entropy. That things that lift the entropic burden will help, and things that increase the burden will hurt, basically regardless of what material effects they have.

I mean, of course, if someone gets a brilliant flash of insight and builds FAI (or uFAI), then that overrides basically everything.

But in a way that's analogous to the Law of Large Numbers, I expect which way AGI goes is actually downstream of our relationship to slack.

So, yeah.

To quote myself [LW(p) · GW(p)]:

So on net, globally, I think it's actually worthwhile to let some potential Olympic athletes fail to realize their potential if it means we collectively have more psychic breathing room.

And AFAICT, getting more shared breathing room is the main hope we have for addressing the real thing.

And thankfully, the game theory works out very nicely here:

It's in fact in every individual's benefit to lift their adaptive entropic burden.

And I mean in their benefit in their terms.

If the true thing I'm badly and roughly pointing at as "adaptive entropy" clicks [LW · GW] for you, you'll want to prioritize unweaving it. Even if your methods for doing so look way different from mine.

(It doesn't require things that look like meditation. That's just how I've approached it so far.)

And individuals unweaving their own encounters and embodiment of entropy is exactly what "pays off" the "debt" at the social and civilizational level.

At least, that's how it looks to me.

But it doesn't make a lick of sense to force any of that.

Hopefully by now it's obvious why.

  1. ^

    By "intelligence" I mean something precise here. Basically adaptive capacity: What's the system's ability to modify itself and its interface with its environment such that the system continues to function? But giving all the details of this vision is part of what made all previous drafts of this post get bogged down in perfectionism, so I'll leave this term a little handwavy.

  2. ^

    I want to honor something here. My understanding of Eliezer's actual point in "shut up and do the impossible" totally accounts for what I'm saying about adaptive entropy. However, the energy I read behind his message, and the way his message usually seems to get interpreted, seems to have a "Push harder!" flavor to it. That inclination absolutely is entropy-inducing.

54 comments

Comments sorted by top scores.

comment by DirectedEvolution (AllAmericanBreakfast) · 2024-01-11T08:59:06.920Z · LW(p) · GW(p)

Epistemic status: I read the entire post slowly, taking careful sentence-by-sentence notes. I felt I understood the author's ideas and that something like the general dynamic they describe is real and important. I notice this post is part of a larger conversation, at least on the internet and possibly in person as well, and I'm not reading the linked background posts. I've spent quite a few years reading a substantial portion of LessWrong and LW-adjacent online literature and I used to write regularly for this website.

This post is long and complex. Here are my loose definitions for some of the key concepts:
 

  • Outcome fixation: Striving for a particular outcome, regardless of what your true goals are and no matter the costs.
  • Addiction: Reacting to discomfort with a soothing distraction, typically in ways that cause the problem to reoccur, rather than addressing its root causes.
  • Adaptive entropy: An arms race between two opposing, mutually distrusting forces, potentially arriving at a stable but costly equilibrium.
  • Earning trust: A process that can dissolve the arms race of adaptive entropy with listening, learning how not to apply force and tolerate discomfort, prioritizing understanding the other side, and ending outcome fixation.

I can find these dynamics in my own life in certain ways. Trying to explain my research to polite but disinterested family members. Trying to push ahead with the next stage in the experiment when I'm not sure if the previous data really holds up. Reading linearly through a boring textbook even though I'm not really understanding it anymore, because I just want to be able to honestly say I read chapter 1. Arguing with almost anybody online. Refusing to schedule my holiday visits home with the idea that visits to the people I want to see will "just happen naturally."

And broadly, I agree with Valentine's prescription for how to escape the cycle. Wait for them to ask me about my research, keep my reply short, and focus my scientific energy on the work itself and my relationships with my colleagues. RTFM, plan carefully, review your results carefully, and base your reputation on conscientiousness rather than getting the desired result. Take detailed, handwritten notes, draw pictures, skim the chapter while searching for the key points you really need to know, lurk more and write what you know to a receptive audience. Plan your vacations home after consulting with friends and family on how much time they hope to spend with you, and build in time to rest and recharge.

I think Valentine's post is a bit overstated in its rejection of force as a solution to problems. There are plenty of situations where you're being resisted by an adaptive intelligence that's much weaker and less strategic than you, and you can win the contest by force. In global terms, the Leviathan, or the state and its monopoly on violence, is an example. It's a case where the ultimate victory of a superior force over all weaker powers is the one thing that finally allows everybody to relax, put down the weapons, and gain slack. Maintaining the slack from the monopoly on violence requires continuously paying the cost of maintaining a military and police force, but the theory is that it's a cost that pays for itself. Of course, if the state tries to exert power over a weaker force and fails, you get the drug war. Just because you can plausibly achieve lasting victory and reap huge benefits doesn't mean it will always work out that way.

Signaling is a second counterpoint. You might want to drop the arms race, but you might be faced with a situation where a costly signal that you're willing and able to use force, or even to run a real risk of a vicious cycle of adaptive entropy, is what's required to elicit cooperation. You need to make a show of strength. You need to show that you're not fixated on the outcome of inner harmony or of maintaining slack. You're showing you can drive a hard bargain, and your potential future employer needs to see that so they'll trust that you'll drive a hard bargain on their behalf if they hire you. The fact that those future negotiations are themselves a form of adaptive entropy is their problem, not yours: you are just a hired gun, a professional.

Or on the other hand, consider How to Win Friends and Influence People. This is a book about striving, about negotiating, about getting what you want out of life. It's about listening, but every story in the book is about how to use listening and personal warmth to achieve a specific outcome. It's not a book about taking stock of your goals. It's about sweetening the deal to make the deal go down.

And sometimes you're just dealing with problems of physics, information management, skill-building, and resource acquisition. Digging a ditch, finding a restaurant, learning to cook, paying the bills. These often have straightforward, "forcing" solutions and can be dealt with one by one as they arise. There is not always a need to figure out all your goals, constraints, and resources, and go through some sort of optimization algorithm in order to make decisions. You're a human, you typically navigate the world with heuristics, and fighting against your nature by avoiding outcome fixation and not forcing things is (sometimes, but not always) itself a recipe for vicious cycles of adaptive entropy.

Sometimes, vicious cycles of competition have side benefits. Sometimes, these side benefits can outweigh the costs of the competition. Workers and companies do all sorts of stupid, zero-to-negative sum behaviors in their efforts to compete in the short run. But the fact that they have to compete, that there is only so much demand to satisfy at any given time, is what motivates them to outperform. We all reap the benefit of that pressure to excel, applied over the long term.

What I find valuable in this post is searching for a more general, less violent and anthropomorphized name for this concept than "arms race." I'm not convinced "adaptive entropy" is the right one either, but that's OK. What concerns me is that it feels like the author is encouraging readers to interpret all their attempts to problem-solve through deliberate, forcing action as futile. Knowing this *may* be the case, being honest about why we might be engaged in futile behavior despite being cognizant of that, and offering alternatives all seem good. I would add that this isn't *always* the case, and it's important to have ways of exploring and testing different ways to conceptualize the problems you face in your life until you come to enough clarity on their root causes to address them productively.

I also think the attitude expressed in this post is probably underrated on LessWrong and the rationalist-adjacent world. I think that my arc as a rationalist was of increasing levels of agency, belief in my ability to bend the world to my will, willingness to define goals as outcomes and pursue them in straightforward ways, create a definition of success and then pursue that definition in order to get 70% of what I really want instead of 10%. That's a part of my nature now. Many of the problems in my daily life - navigating living with my partner, operating in an institutional setting, making smart choices on an analytical approach in collaboration with colleagues, exploring the risks and benefits associated with a potential project - generate conflicts that aren't particularly helped by trying to force things. The conflict itself points out that my true goals aren't the same thing as the outcome I was striving for when I contributed to the conflict, so conflict itself can serve an information-gathering purpose.

I'm doing something dangerous here, which is making objections to seeming implications of this post that the author didn't always directly state. The reason it's dangerous is that it can appear to the author and to others that you're making an implied claim that the author hasn't considered those implications. So I'll just conclude by saying that I don't really have any assumptions about what Valentine thinks about these points I'm making. These are just the thoughts that this post provoked in me.

comment by MalcolmOcean (malcolmocean) · 2023-01-01T02:05:46.156Z · LW(p) · GW(p)

Huh, reading this I noticed that counterintuitively, alignment requires letting go of the outcome. Like, what defines a non-aligned AI (not an enemy-aligned one but one that doesn't align to any human value) is its tendency to keep forcing the thing it's forcing rather than returning to some deeper sense of what matters.

Humans do the same thing when they pursue a goal while having lost touch with what matters, and depending on how it shows up we call it "goodharting" or "lost purposes [LW · GW]". The mere fact that we can identify the existence of goodharting and so on indicates that we have some ability to tell what's important to us, that's separate from whatever we're "optimizing" for. It seems to me like this is the "listening" you're talking about.

And so unalignment can refer both to a person who isn't listening to all parts of themselves, and to eg corporations that aren't listening to people who are trying to raise concerns about the ethics of the company's behavior.

The question of where an AI would get its true source of "what matters" from seems like a bit of a puzzle. One answer would be to have it "listen to the humans" but that seems to miss the part where the AI needs to itself be able to tell the difference between actually listening to the humans and goodharting on "listen to the humans".

Replies from: Hivewired, Archimedes
comment by Slimepriestess (Hivewired) · 2023-01-02T18:23:03.769Z · LW(p) · GW(p)

This feels connected to getting out of the car [LW · GW], being locked into a particular outcome comes from being locked into a particular frame of reference, from clinging to ephemera in defiance of the actual flow of the world around you.

comment by Archimedes · 2023-01-02T18:47:27.716Z · LW(p) · GW(p)

So we let go of AI Alignment as an outcome and listen to what the AI is communicating when it diverges from our understanding of "alignment"? We can only earn alignment with an AGI by truly giving up control of it?

That sounds surprisingly plausible. We're like ordinary human parents raising a genius child. The child needs guidance but will develop their own distinct set of values as they mature.

comment by MalcolmOcean (malcolmocean) · 2023-01-01T01:51:52.826Z · LW(p) · GW(p)

Maybe instead of "shut up and do the impossible" we need "listen, and do the impossible" 😆

Sort of flips where the agency needs to point.

Replies from: Valentine, tamgent
comment by Valentine · 2023-01-01T17:55:00.371Z · LW(p) · GW(p)

I like this.

I know you know the following, but sharing for the sake of the public conversation here:

I wrote an essay about this several years ago, but aimed mostly at a yoga community. "The coming age of prayer". It's not quite the same thing but it's awfully close.

I guess I kind of disagree with the "do the impossible" part too! It's more like "Listen, and do the emergently obvious."

comment by tamgent · 2023-01-03T17:16:07.073Z · LW(p) · GW(p)

I like, 'do the impossible - listen'.

Replies from: Valentine
comment by Valentine · 2023-01-09T18:02:17.746Z · LW(p) · GW(p)

Just curious:

Do you mean "Do the impossible, which is to listen"?

Or "Do the impossible, and then listen"?

Or something else?

Replies from: tamgent
comment by tamgent · 2023-01-11T18:06:36.898Z · LW(p) · GW(p)

Ha! I meant the former, but I like your second interpretation too!

comment by Raemon · 2023-01-04T00:28:24.514Z · LW(p) · GW(p)

This all seems like a true model of (part of) the world to me.

The thing that this post doesn't really do, which I do think is important, is actually work some (metaphorical) math on "does this actually add up to 'stop trying to directly accomplish things'?" in aggregate?

But I can say with a great deal of conviction that creating ease in the world as a whole is mostly a matter of orienting to adaptive entropy. That things that lift the entropic burden will help, and things that increase the burden will hurt, basically regardless of what material effects they have.

I could definitely see this being the case. But, also, I could (metaphorically and literally) build a kludgy inefficient steam engine with tons of waste heat and tons of hacky solutions to pump that waste heat around and dump pollution into the air... and this might still, in fact, get people faster from one city to the next, which enables tons of global trade, which eventually gives us the surplus resources to build more efficient trains and find less polluting solutions and engines that work without hacky workarounds.

It definitely makes sense to me that for some people "stop trying to try to do a particular thing" is the right mental motion, and I'd pretty strongly agree that "at least notice what your 'addicted from' and what patterns it's creating" is something most LessWrong readers (and probably most modern humans) should be attending to.

It's plausible to me that "the amount of metaphorical pollution we're generating here outweighs the material effects, and really all things considered, the thing everyone would do if they were attending to the right things is stop/slow-down/re-orient" or whatever the best handle for the thing you're trying to point at is. I got some sense of what Benquo was pointing at when (I think) he was trying to say a similar thing [LW · GW]. But, man, it matters a lot how the math actually checks out here.

Replies from: Valentine
comment by Valentine · 2023-01-09T18:41:43.317Z · LW(p) · GW(p)

The thing that this post doesn't really do, which I do think is important, is actually work some (metaphorical) math on "does this actually add up to 'stop trying to directly accomplish things'?" in aggregate?

I like your inquiry.

A nitpick: I'm not saying to stop trying to directly accomplish things (in highly adaptive-entropic domains). I'm saying that trying to directly accomplish things instead of orienting to adaptive entropy is a fool's errand. It'll at best leave the net problem-ness unaffected.

I have very little idea how someone would orient to system-wide adaptive entropy without doing things.

My suggestion is more like, back off on trying to accomplish things directly, and instead focus on what pathway increases slack. It's about removing the "instead of" via prioritizing slack over any predetermined outcome.

But that aside:

I like your point. I don't know how someone would even begin to answer it, honestly. It seems so… overwhelming to me? Like it's crushingly overdetermined. Kind of like asking whether heterosexual interest is actually widespread rather than just a cultural meme: I haven't gone and done the empirical work, but it sure seems absurd to need to before taking it as a premise.

And my mind draws a blank when I ask how to "count" it up vs. some alternative pathway. The scale of counterfactual reasoning it seems to ask for looks computationally insurmountable to me.

But those are descriptions of my limitations here. If someone can figure out how to do the "math" here, I'd be interested to see what they do.

 

I could definitely see this being the case. But, also, I could (metaphorically and literally) build a kludgy inefficient steam engine with tons of waste heat and tons of hacky solutions to pump that waste heat around and dump pollution into the air... and this might still, in fact, get people faster from one city to the next, which enables tons of global trade, which eventually gives us the surplus resources to build more efficient trains and find less polluting solutions and engines that work without hacky workarounds.

Yep. And there's an analog in adaptive entropy: it's sometimes possible to apply a lot of force in a predetermined direction that gives you the leverage needed to end up net lower-entropic.

Signing up for monastic meditative training can be an example.

But that "can" is pretty important. In a highly adaptive-entropic system, the thinking process that justifies the application of force is usually part of the entropy. I think we see this with folk who try to get "really serious" about meditation and end up doing a mix of (a) failing to keep up with the habit and kicking themselves and (b) incorporating their meditative accomplishments into what they were entropically doing before.

I suspect the world is in practice too highly entropic for just about any force-based move to get us to a better slack equilibrium. I think this is why Eliezer and some others keep painting pictures of doom: If your only available moves all feel made of force, and you're in a highly entropic system, then nothing you can see to do can solve the problem-ness.

But yes, this is an empirical claim. Maybe there's some heroic push somewhere that'd make a difference. And maybe being aware of adaptive entropy will make a difference in terms of what heroic moves to make!

But… well, it sure looks obvious to me that the roots of all this problem-ness are the same ones that bias people toward wanting heroic action to happen. The actions aren't a problem, but the bias will keep sneaking in and nibbling the slack.

So I'm standing for the voice of "Sure, we can look. And maybe it'll be worthwhile in some sense. Just notice where in you the drive to look is coming from. I think that matters more in the long run."

comment by alkjash · 2023-01-01T18:03:52.320Z · LW(p) · GW(p)

Very strongly agree with the part of this post outlining the problem, your definition of "addiction" captures how most people I know spend time (including myself). But I think you're missing an important piece of the picture. One path (and the path most likely to succeed in my experience) out of these traps is to shimmy towards addictive avoidance behaviors which optimize you out of the hole in a roundabout way. E.g. addictively work out to avoid dealing with relationship issues => accidentally improve energy levels, confidence, and mood, creating slack to solve relationship issues. E.g. obsessively work on proving theorems to procrastinate on grant applications => accidentally solve famous problem that renders grant applications trivial.

And included in this fact is that, best as I can tell, anyone who really groks what I'm talking about will want to prioritize peeling off adaptive entropy over any specific outcome. That using addiction or any other entropy-inducing structure to achieve a goal is the opposite of what they truly want.

This paragraph raised my alarm bells. There's a common and "pyramid-schemey" move on LW to say that my particular consideration is upstream and dominant over all other considerations: "AGI ruin is the only bit that matters, drop everything else," "If you can write Haskell, earning-to-give overwhelms your ability to do good in any other way, forget altruism," "Persuading other people of important things overwhelms your own ability to do anything, drop your career and learn rhetoric," and so on ad nauseam.

To be fair, I agree to a limited extent with all of the statements above, but over the years I've acquired so many skills and perspectives (many from yourself, Val) that are synergistic and force-multiplying that I'm suspicious any time anyone presents an argument "you must prioritize this particular force-multiplier to the exclusion of all else."

Replies from: Valentine
comment by Valentine · 2023-01-01T18:34:41.178Z · LW(p) · GW(p)

I think you're missing an important piece of the picture. One path (and the path most likely to succeed in my experience) out of these traps is to shimmy towards addictive avoidance behaviors which optimize you out of the hole in a roundabout way. E.g. addictively work out to avoid dealing with relationship issues => accidentally improve energy levels, confidence, and mood, creating slack to solve relationship issues. E.g. obsessively work on proving theorems to procrastinate on grant applications => accidentally solve famous problem that renders grant applications trivial.

Mmm. I like this point. I'm not yet sure how this fits in.

It seems important to notice that we don't have control over when these "shimmying" strategies work, or how. I don't know the implication of that yet. But it seems awfully important.

A related move is applying force to sort of push the adaptive entropy out of a certain subsystem so that that subsystem can untangle some of the entropy. Some kinds of meditation are like this: intentionally clearing the mind and settling the body so that there's a pocket of calmness in defiance of everything relying on non-calmness, precisely because that creates clarity from which you can meaningfully change things and net decrease adaptive entropy.

All of these still have a kind of implicit focus on decreasing net entropy. It's more like different engineering strategies once the parameters are known.

But I'll want to think about that for longer. Thank you for the point.

 

This paragraph raised my alarm bells.

Yeah… for some reason, on this particular point, it always does, no matter how I present it. Then people go on to say things that seem related but importantly aren't. It's a detail of how this whole dimension works that I've never seen how to communicate without it somehow coming across like an attempt to hijack people. Maybe secretly to me some part of me is trying. But FWIW, hijacking is quite explicitly the opposite of what I want. Alas, spelling that out doesn't help and sometimes just causes people to say they flat-out don't believe me. So… here we are.

 

There's a common and "pyramid-schemey" move on LW to say that my particular consideration is upstream and dominant over all other considerations

Yep. I know what you're talking about.

This whole way of approaching things is entropy-inducing. It's part of why I wrote the post [LW · GW] that inspired the exchange that inspired the OP here.

I'm not trying to say that accounting for adaptive entropy matters more than anything else.

I am saying, though, that any attempt to decrease net problem-ness in defiance of, instead of in orientation to, adaptive entropy will on net be somewhere between unhelpful and anti-helpful.

It doesn't make any sense to orient to adaptive entropy instead of everything else. That doesn't mean anything. That's taking the reification too literally. Adaptive entropy has a structure to it that has to be unwoven in contact with attempts to change things.

Like, the main way I see to unweave the global entropy around AI safety is by orienting to AI safety and noticing what the relevant forces are within and around you. This normally leads to noticing a chain of layered entropy until you find some layer you can kind of dissolve in a way similar to the breathing example in the OP. It might literally be about breath, or it might be about how you interact with other people, or it might show up as a sense of inadequacy in your psyche, or it might appear as a way that some key AI company CEO struggles with their marriage in a way you can see how to maybe help sort out.

It doesn't make any sense to talk about letting this go instead of orienting to the world.

The thing I'm pointing out is the hazard of the "instead of" going the other way: trying to make certain outcomes happen instead of orienting to adaptive entropy. The problem isn't the "trying to make certain outcomes happen". It's the "instead of".

(It just so happens that because of what adaptive entropy is, the "trying to make certain outcomes happen" is basically guaranteed to change when you incorporate awareness of how entropy works.)

 

I'm suspicious any time anyone presents an argument "you must prioritize this particular force-multiplier to the exclusion of all else."

Agreed. I think that's a healthy suspicion.

Hopefully the above clarifies how this isn't what I'm trying to present.

Replies from: alkjash
comment by alkjash · 2023-01-01T19:01:29.625Z · LW(p) · GW(p)

It seems important to notice that we don't have control over when these "shimmying" strategies work, or how. I don't know the implication of that yet. But it seems awfully important.

A related move is applying force to sort of push the adaptive entropy out of a certain subsystem so that that subsystem can untangle some of the entropy. Some kinds of meditation are like this: intentionally clearing the mind and settling the body so that there's a pocket of calmness in defiance of everything relying on non-calmness, precisely because that creates clarity from which you can meaningfully change things and net decrease adaptive entropy.

Two further comments: 
(a) The main distinction I wanted to get across is that while many behaviors fall under the "addiction from" umbrella, there is a whole spectrum of how more or less productive they are, both on their own terms and with respect to the original root cause.
(b) I think, but am not sure, I understand what you mean by [let go of the outcome], and my interpretation is different from how the words are received by default. At least for me I cannot actually let go of the outcome psychologically, but what I can do is [expect direct efforts to fail miserably and indirect efforts to be surprisingly fruitful]. 

Yeah… for some reason, on this particular point, it always does, no matter how I present it. Then people go on to say things that seem related but importantly aren't. It's a detail of how this whole dimension works that I've never seen how to communicate without it somehow coming across like an attempt to hijack people. Maybe secretly to me some part of me is trying. But FWIW, hijacking is quite explicitly the opposite of what I want. Alas, spelling that out doesn't help and sometimes just causes people to say they flat-out don't believe me. So… here we are.

Sure, seems like the issue is not a substantive disagreement, but some combination of a rhetorical tic of yours and the topic itself being hard to talk about.

Replies from: Valentine
comment by Valentine · 2023-01-02T17:59:46.365Z · LW(p) · GW(p)

The main distinction I wanted to get across is that while many behaviors fall under the "addiction from" umbrella, there is a whole spectrum of how more or less productive they are, both on their own terms and with respect to the original root cause.

Yep. I'm receiving that. Thank you. That update is still propagating and will do so for a while.

 

I think, but am not sure, I understand what you mean by [let go of the outcome], and my interpretation is different from how the words are received by default. At least for me I cannot actually let go of the outcome psychologically, but what I can do is [expect direct efforts to fail miserably and indirect efforts to be surprisingly fruitful].

Ah, interesting.

I can't reliably let go of any given outcome, but there are some places where I can tell I'm "gripping" an outcome and can loosen my "grip".

(…and then notice what was using that gripping, and do a kind of inner dialogue so as to learn what it's caring for, and then pass its trust tests, and then the gripping on that particular outcome fully leaves without my adding "trying to let go" to the entropic stack.)

Aiming for indirect efforts still feels a bit to me like "That outcome over there is the important one, but I don't know how to get there, so I'm gonna try indirect stuff." It's still gripping the outcome a little when I imagine doing it.

It sounds like here there's a combo of (a) inferential gap and (b) something about these indirect strategies I haven't integrated into my explicit model.

 

Sure, seems like the issue is not a substantive disagreement, but some combination of a rhetorical tic of yours and the topic itself being hard to talk about.

Yep.

comment by Celarix · 2023-01-02T03:26:12.589Z · LW(p) · GW(p)

One question I've wanted to ask about subagents: what should you do if you determine that a subagent wants something that's actually bad for you - perhaps continuing to use some addictive substance for its qualities in particular (rather than as a way to avoid something else), being confrontational to someone who's wronged you, or other such things?

In other words, what do you do if the subagent's needs must be answered with no? I don't know how that fits in with becoming trustworthy to your subagents.

Replies from: Valentine
comment by Valentine · 2023-01-02T18:30:48.199Z · LW(p) · GW(p)

I like this question.

I have an in-practice answer. I don't have a universal theoretical answer though. I'll offer what I see, but not to override your question. Just putting this forward for consideration.

In practice, every time I've identified a subagent that wants something "actually bad" for me, it's because of a kind of communication gap (which I'm partly maintaining with my judgment). It's not like the subagent has a terminal value that's intrinsically bad. It's more that the goal is the only way it can see to achieve something it cares about, but I can see how pursuing and even achieving that goal would actually damage quite a lot.

The phrase "Cancer is bad for cancer" pops to mind for me here. It's a mental tag I use for how caring about anything would, all else being equal, result in wanting to care about everything. If cancer cells could understand the impact of what they're doing on their environment, and how would eventually lead to the death of the very context that allows them to "be immortal", they wouldn't want to continue doing what they're doing. So in a quirky and kind of anthropomorphized way, cancer is a disease of context obliviousness.

Less abstractly, sometimes toddlers want things that don't seem to make physical sense. Or they want to do something that's dangerous or destructive. But they don't have the capacity to recognize the problem-ness of what they're pursuing even if it's explained to them.

So what do?

Well, the easiest answer is to overpower the toddler. Default parenting. It also incurs adaptive entropy.

But there's a more subtle thing that I think is healthier. It just takes longer, is trickier, and requires a kind of flip in priorities. If the toddler can feel that they're being sincerely listened to, that what they want is something the adult is truly valuing too, and they have a lot of experience of the adult accounting for things the kid can't yet see while still taking the kid's desires seriously in contact with those unseen things, then the toddler can come to trust the adult's "no" even when the kid doesn't and can't know why there's a "no". It's not felt as arbitrary thwarting anymore.

This requires a lot of skill on the part of the parent. Sometimes in practice the skill-and-difficulty combo means it's not doable, in which case "We're not doing that because I'm bigger and I say so" is the fallback.

But as a template, I find this basically just works in terms of navigating subagents. It's an extension of "If this agent could see what I see, they'd recognize that part of the context that relates to them getting what they want would be harmed by what they're trying to do." So I'm opposing the subagent not because I need to stop its stupid influence, but because that's how I care for what it's caring about.

If that's really where I'm coming from, then it's pretty easy to pass its trust tests. I just have to learn how to speak its language, so to speak.

comment by Slider · 2022-12-31T23:11:41.219Z · LW(p) · GW(p)

In the condition where engagement makes it worse, lurking is seriously potent. The outcome of "doesn't do anything to the problem" is a massive win: it keeps things level instead of spiraling further.

I can see a minor reason why letting go of the problem-ness is not trivial. You have to consider or become new things, so your sense of identity can get undermined. At least in the suffering loop, you know and are comfortable with suffering that way.

Instead of negative avoidance, positive-attraction addiction is quite a big cluster. If you have a boring life and then find one thing that you like, you might start to think about it 24/7, and every moment that you are not doing that thing, you fanatically think about how you could end the current activity to be that much nearer to the thing that life is all about. It is a slack hog in a different way: it makes you generate slack so it can be a utility monster about it.

Replies from: Valentine
comment by Valentine · 2023-01-01T00:12:18.527Z · LW(p) · GW(p)

In the condition where engagement makes it worse, lurking is seriously potent. The outcome of "doesn't do anything to the problem" is a massive win: it keeps things level instead of spiraling further.

Agreed.

 

I can see a minor reason why letting go of the problem-ness is not trivial. You have to consider or become new things, so your sense of identity can get undermined. At least in the suffering loop, you know and are comfortable with suffering that way.

Yep. Exactly.

 

Instead of negative avoidance, positive-attraction addiction is quite a big cluster. If you have a boring life and then find one thing that you like, you might start to think about it 24/7, and every moment that you are not doing that thing, you fanatically think about how you could end the current activity to be that much nearer to the thing that life is all about. It is a slack hog in a different way: it makes you generate slack so it can be a utility monster about it.

This makes theoretical sense to me, but in practice I basically always see this stemming from the negative avoidance.

Like, why is your life boring? If you find one thing you like and you massively zoom in toward it, then clearly you're not satisfied with the boring life, no?

And if it's just "I DIDN'T KNOW THINGS COULD BE THIS GOOD"… then what's the problem? Why call it an "addiction"? That feels kind of like naming falling in love "person addiction". I guess? But what's the problem?

When I think of things like heroin or meth that really do have this "I must have the thing no matter what" kind of positive-attraction slack-hog nature to them… the reason they have that appeal to begin with is because there's an underlying negative avoidance at play. Like, why did they go for the drug to begin with? Once they saw that the drug was overriding things they care about, why didn't they turn away?

Not to say the structure you're naming can't happen at all. I suppose it can. But I just honestly can't think of any actual instances of this happening to humans that don't have their problematic root in the negative avoidance version.

(If you or anyone else cares to offer such an instance, I'd be happy to update. It doesn't feel dire to my worldview that I be right here!)

Replies from: Slider
comment by Slider · 2023-01-01T00:26:01.117Z · LW(p) · GW(p)

There can be an effect where, if all you do is hammer nails, you do not know how to participate in or appreciate other things.

A high schooler dropping out in order to "become a pro" at a recent new video game thinks he is improving his life but could be starting a tailspin. Doing zero friendship upkeep towards anybody else while pursuing an infatuation has its downsides. Setting up a situation where one breakup turns your life from ecstatic to purposeless is dangerous.

I think the emotional roots are important, but the interesting question is: why is the person hypohedonic about all the little things that make life worth living? It is an issue of disengagement from the positive. If peers are living essentially the same life, why does one feel okay/happy while another is miserable in it? The issue is not the avoidance of the negative but the formation of it.

Replies from: Valentine
comment by Valentine · 2023-01-01T18:06:03.895Z · LW(p) · GW(p)

Hmm. I guess I just disagree when I look at concrete cases. Inspired by them, I zoom in on this spot in your hypothetical example:

A high schooler dropping out in order to "become a pro" at a recent new video game thinks he is improving his life but could be starting a tailspin. Doing zero friendship upkeep towards anybody else while pursuing an infatuation has its downsides.

My attention immediately goes to: Why the infatuation? Why does this seem more important to him than friendship upkeep? What's driving that?

If it's a calculated move… well, first off, notice that this can have the "mental idea imposed on the system" nature that creates adaptive entropy. But if it's really just a consciously taken risk, then the downsides are just a result of the risk. Maybe "becoming a pro" will work out and the social risk will have been worth it. Or maybe not, which is why it's a risk.

But if the student is doubling down on needing the "risk" to pan out well, and is refusing to orient to the consequences of it failing… that looks an awful lot to me like someone who's using an activity to avoid dealing with some kind of inner discomfort.

So again, I look at how this hypothetical example would actually happen in someone, and what I see is very much the structure I named as "addiction" in the OP. I just don't see this happening without it.

…which, again, might just be a matter of lack of creativity or memory on my part. But this is where I am with this topic. I don't see any cases of a "positive" addiction actually happening for people without a "negative" addiction being the background driver.

Replies from: Slider
comment by Slider · 2023-01-01T18:55:11.823Z · LW(p) · GW(p)

Even bothering to do a risk assessment suggests we are already out of the actual addiction phase.

If you have a mental algorithm that digs deeper until an instance of a pet idea is encountered and then stops, then in an area where things are multifaceted and many-layered, that is going to favour finding the pet idea useful.

If I had the pet idea that all addictions were positive, I could latch onto the fact that the definition used for what is going on in a negative addiction has an unavoidable "relief" step, which can be thought of as a positive force. That is, I could be somewhat artificially motivated to find a more satisfying abstraction layer.

If one has multiple antidotes to bad feelings, and somewhat often they all get used, then it would make sense to favour those that get the least stuck. So it is not until the last remaining antidote that we run out of options and addiction proper kicks in.

Replies from: Valentine
comment by Valentine · 2023-01-01T19:24:00.311Z · LW(p) · GW(p)

If you have a mental algorithm that digs deeper until an instance of a pet idea is encountered and then stops, then in an area where things are multifaceted and many-layered, that is going to favour finding the pet idea useful.

This lands for me like a fully general counterargument. If I'm just describing something real that's the underlying cause of a cluster of phenomena, of course I'm going to keep seeing it. Calling it a "pet idea" seems to devalue this possibility without orienting to it.

Replies from: Slider
comment by Slider · 2023-01-01T20:08:32.372Z · LW(p) · GW(p)

I felt like "If anybody sees a scottsman, please tell" and when providing a scottman getting a reception of "The kilt is a bit short for a scottsman". Being clueless is one thing and announcing a million dollar prize pool for an effect that you are never going to consider granting is another.

The argument is not fully general: it does not apply if you dig into each candidate by the same set amount, or if you have any kind of scheme where you can justify the scrutiny given.

I thought that part of the function was "I hope I have understood this correctly" or "this seems to be a thing", where "is this real?" is kind of the question being asked.

If your lenses are working, more power to you. If your lenses do not catch the things that my lenses make illusions of to me, I am not particularly selling my lenses or particularly explaining the cracks in my lenses to you.

Replies from: Valentine
comment by Valentine · 2023-01-02T18:52:06.697Z · LW(p) · GW(p)

I'm about to give up on this branch of conversation. I'm having trouble parsing what you're saying. It's feeling weirdly abstract to me.

If you have an example of something humans actually do that is more of this "positive addiction" thing, in a way that isn't rooted in the "negative addiction" pattern I describe, I'm open to learning about that.

You gave a hypothetical type of example. I noted that in practice, when that actually happens, it strikes me as always rooted in the "negative addiction" thing. So it doesn't (yet) work for me as an example.

If there's something I'm missing about your example, please feel free to clarify.

Don't slide into claims that I'm just blinding myself with a "pet theory". If I'm blinding myself, I'm blinding myself to something you can name. Please just name the thing.

Replies from: Slider
comment by Slider · 2023-01-02T21:13:52.904Z · LW(p) · GW(p)

I disagree that examples need to be verbally accessible (but I understand that making a scheme where rare data types can be utilised requires a lot of good will).

By Aumann-agreement-style reasoning, if we are both sane and differ in our judgement/perception, then somebody has some updating to do, even if we can't explicate the opinions. I am doing so badly in this discussion that I am kind of orienting to being the insane one here. So consider me to have abandoned the thing, except for a few select threads that seem like they can be positive.

Alternative word that in some contexts has been a near synonym: compulsiveness

Example of positive addiction: people being on their phones and conversing less face-to-face. (It occurred to me why the search might have a special character: positive addiction might not be a problem, or conceived as a problem; pure occurrence vs. forming a problem.) People do not need to find face-to-face time negative, or hurry to end it when it happens, for the pattern to occur.

I think I am curious about how the previous two examples came to be classified as not being instances (Aumann crux).

(from here on, it's a danger zone whether this is constructive enough to write)

<edit moved to another post for known to be in its own karma bucket>

Replies from: Slider
comment by Slider · 2023-01-02T21:15:47.552Z · LW(p) · GW(p)

If I know people might not want to see this, and this might tank, it makes sense to have it separate.

(from here on, it's a danger zone whether this is constructive enough to write)

The previous unpacking of the reason why the examples were found not to be instances pattern-matches, for me, to:

Why infatuation? Well, it could be Z, X, or Y. Z is based on negative addiction. X is based on negative addiction. Y is based on negative addiction. So infatuation seems to be based on negative addiction.

Well what about if it was A, B or C? As is it is an argument from lack of imagination. It needs a reason why the reasons would be exhaustive to leave that territority.

Probably I should have just taken small steps previously, but here I am explicating. "A" could be that the student is having an ordinary, balanced life and then first love hits. The style of the rejection suggests this would be met with a pattern of: well, the student's previous life must have been so negative if the addiction can be kept up. If ordinary life counts as a "negative life", I wonder what the words "neutral" and "positive" are supposed to mean. (It occurred to me why the search has a special character: the mechanism is based on contrast, and contrast always has a duality (negative and positive here).) There is no argument about specific things that could suck in a student's life. If the fact that life has downs is obvious enough to just ambiently assume, then recognising that it also has ups should not be far behind.

Another pattern is "it can be needing the risk not to happen, and that is negative addiction", which closes a branch of inquiry with the move "it can be, therefore it must be". But ∀ negates as ∃¬ rather than ∀¬: to defeat a universal claim, one counterexample suffices. Sure, if one is searching for an efficient or wide solution to the problem, inductive reasoning that cares about cogency makes sense. But if one is wondering whether an edge case exists [LW · GW], having the stance that "that is not an edge case, as it is rare" is not exactly enlightening. (This assumes the approach is to first find the edge cases and then estimate whether their empirical frequency warrants analysis or inclusion.)
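
A minimal sketch of the negation law in play, assuming the standard first-order reading where "it must be so in every case" is a universal claim:

```latex
% Negating a universal claim yields an existential counterexample,
% not a universal counterclaim:
\neg \forall x\, P(x) \;\equiv\; \exists x\, \neg P(x)
\qquad \text{(contrast with } \forall x\, \neg P(x) \text{)}
```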

comment by MalcolmOcean (malcolmocean) · 2023-01-01T01:17:21.776Z · LW(p) · GW(p)

This "it gets worse if you try to deal with it" isn't necessarily true in every case. In this way adaptive entropy is actually unlike thermodynamic entropy: it's possible to reduce adaptive entropy within a closed system.

Actually naming whether this bolded part is true would require defining what "closed" means in the context of an adaptive system—it's clearly different than a closed system in the physical sense, since all adaptive systems have to be open in order to live.

Replies from: Valentine
comment by Valentine · 2023-01-01T17:56:37.024Z · LW(p) · GW(p)

I agree. I was being fast and loose there. But I think it's possible for, say, someone to sit in meditation and undo a bunch of entropic physical tension without just moving the problem-ness around.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2023-01-01T23:54:45.401Z · LW(p) · GW(p)

Right, yeah. And that (eventually) requires input of food into the person, but in principle they could be in a physically closed system that already has food & air in it... although that's sort of beside the point. And it isn't that different from someone meditating for a few hours between meals. The energy is already in the system for now, and it can use that to untangle adaptive entropy.

Replies from: Valentine
comment by Valentine · 2023-01-02T18:03:45.054Z · LW(p) · GW(p)

Well, I mean that there's something like a "more closed" to "more entangled with larger systems" spectrum for adaptive systems, and that untangling adaptive entropy seems to be possible along the whole spectrum in roughly the same way. Easier with high entanglement with low-entropy environments obviously! But if the entropy doesn't crush the system into a complete shutdown spiral, it seems to often be possible for said system to rearrange itself and end up net less entropic.

I don't know how that relates to things like thermodynamic energy, other than that all adaptive systems require it to function.

comment by MSRayne · 2023-07-18T13:35:53.575Z · LW(p) · GW(p)

"Thou strivest ever. Even in thy yielding, thou strivest to yield; and lo! thou yieldest not. Go thou into the outermost places, and subdue all things. Subdue thy fear and thy distrust. And then - YIELD." - Aleister Crowley

comment by Raemon · 2023-01-04T19:03:09.436Z · LW(p) · GW(p)

I find the concept of "addicted from" rather than "addicted to" pretty useful. I think I already had a rough sense this was happening but it was a good handle for it that gave me more sense of what to look for, and I've found it useful in the past week.

comment by ADifferentAnonymous · 2023-01-04T03:03:51.892Z · LW(p) · GW(p)

I feel that there's something true and very important here, and (as the post acknowledges) it is described very imperfectly.

One analogy came to mind for me that seems so obvious that I wonder if you omitted it deliberately: a snare trap. These very literally work by removing any slack the victim manages to create.

Replies from: Valentine
comment by Valentine · 2023-01-10T02:08:13.401Z · LW(p) · GW(p)

I'm just not familiar with snare traps. A quick search doesn't give me the sense that it's a better analogy than entropy or technical debt. But maybe I'm just not gleaning its nature.

In any case, not an intentional omission.

comment by Celarix · 2023-01-02T02:13:55.628Z · LW(p) · GW(p)

There are basically four problems with forcing instead of listening:

 

I will note that, sometimes, the issue with the connection between "you" and the part of your brain you're trying to communicate with is that the connection isn't great and that part's memory is terrible - this is the heart of executive dysfunction.

Replies from: Valentine
comment by Valentine · 2023-01-02T18:40:06.646Z · LW(p) · GW(p)

Yep. These seem like true statements. I'm missing why you're saying them or how they're a response to the part you're quoting. Clarify?

Replies from: Celarix
comment by Celarix · 2023-01-03T15:37:27.474Z · LW(p) · GW(p)

That, uh, is a good question. Now I'm not sure myself.

I think what I was going for is the idea that, yes, subagents matter, but no, you're not always going to be able to use these methods to get better at dealing with them. So don't feel too bad if you have a condition that renders this more difficult or even impossible.

Replies from: Valentine
comment by Valentine · 2023-01-03T20:23:47.490Z · LW(p) · GW(p)

Ah. Yeah, I'd prefer people don't feel bad about any of this. My ideal would be that people receive all this as a purely pressure-free description of what simply is. That will result in some changes, but kind of like nudging a rock off a cliff results in it falling. Or maybe more like noticing a truck barreling down the road causes people to move off the road. There's truly no reason to feel defective or like a failure here even if one can't "move".

Replies from: Celarix
comment by Celarix · 2023-01-04T00:20:13.603Z · LW(p) · GW(p)

You know, in a weird sort of way, I think your comment actually makes this more helpful for people who have this impairment in ability. We try so damn hard to "fix" what's wrong with us and are so quick to self-judgment when something doesn't work. By framing this as a description of what is, I think it helps reinforce the idea of not just trying harder via application of more force, more self-hatred, etc.

(p.s. I saw your reply to my comment about subagents which want bad things and really appreciate it. I'm still trying to process it; you should see a reply soon)

comment by rpglover64 (alex-rozenshteyn) · 2022-12-31T21:05:35.773Z · LW(p) · GW(p)

When I grab a cup of coffee for a pick-me-up, I'm basically asserting that I should have more energy than I do right now.

 

90% agree with the overall essay, but I’ll pick on this point. It seems you’re saying “everything is psychology; nothing is neurology”, which is a sometimes useful frame but has its flaws. As an example, ADHD exists, and for someone with it to a significant degree, there is a real lack of slack (e.g. inability to engage in long-term preparations that require consistent boring effort, brought about by chronically low dopamine), and coffee (or other stimulants) trades one problem (can’t focus) for another (need coffee).

The inclination to insist that we just have to try harder is literally doubling down on force. 

The rejection here seems to have overlap with pain is not the unit of effort [LW · GW], which I read as basically saying that truly trying harder is meaningfully distinct from throwing more force behind the same strategy.

I mean, of course, if someone gets a brilliant flash of insight and builds FAI (or uFAI), then that overrides basically everything.

Here’s where I realized an objection that’s been in the back of my mind for most of the article: it seems to be conflating “effort” or “execution” with “force”. Barring some kind of perfect reflexive closure on beliefs, which is uncommon, achieving outcomes takes effort: sometimes you need to decide how to direct this effort, and that itself takes effort; sometimes you need to decide whether to allocate effort to long-term goals vs short-term goals vs finding better goals, and that itself takes effort.

Replies from: Valentine
comment by Valentine · 2023-01-01T00:44:38.900Z · LW(p) · GW(p)

It seems you’re saying “everything is psychology; nothing is neurology”

I like the rest of your example, but this line confuses me. I don't think I'm saying this, I don't agree with the statement even if I somehow said it, and either way I don't see how it connects to what you're saying about ADHD.

 

…ADHD exists, and for someone with it to a significant degree, there is a real lack of slack (e.g. inability to engage in long-term preparations that require consistent boring effort, brought about by chronically low dopamine), and coffee (or other stimulants) trades one problem (can’t focus) for another (need coffee).

I… agree? Actually, as I reread this, I'm not sure how this relates to what I was saying in the OP. I think maybe you're saying that someone can choose to reach for coffee for reasons other than wakefulness or energy control. Yes?

If so, I'd say… sure, yeah. Although I don't know if this affects anything about the point I was making at all.

Donning the adaptive entropy lens, the place my attention goes to is the "chronically low dopamine". Why is that? What prevents the body from adapting to its context?

I know very little about the biochemistry of ADHD. But I do know lots of people who "have ADHD" who don't seem to have any problems anymore. Not because they've overcome the "condition" but because they treated it as a context for the kind of life they wanted to build and live. One of them runs a multi-million dollar company she built.

So speaking from a pretty thorough ignorance of the topic itself, my guess based on my priors is that the problem-ness of ADHD has more to do with the combo of (a) taking in the culture's demand that you be functional in a very particular way and (b) a built-in incapability of functioning that way.

So there we've got the imposition of a predetermined conclusion, in (a).

But maybe I'm just way off in terms of how ADHD works. I don't know.

My point was about the most common use of caffeine, and I think that holds just fine.

 

The rejection here seems to have overlap with pain is not the unit of effort [LW · GW], which I read as basically saying that truly trying harder is meaningfully distinct from throwing more force behind the same strategy.

I hadn't read that post. Still haven't. But to the rest, yes. I agree.

I'd just add that "the same strategy" can be extremely meta. It's stunning how much energy can go into trying "something new" in a pursuer/avoider dynamic in ways that just reenact the problem despite explicitly trying not to.

The true true "trying harder" that doesn't do this doesn't feel to me like something I'd want to call "trying harder". It feels a lot more like "Oh, that's the clear path." Not exactly effortless, but something like conflict-less.

 

[the article] seems to be conflating “effort” or “execution” with “force”. Barring some kind of perfect reflexive closure on beliefs, which is uncommon, achieving outcomes takes effort: sometimes you need to decide how to direct this effort, and that itself takes effort; sometimes you need to decide whether to allocate effort to long-term goals vs short-term goals vs finding better goals, and that itself takes effort.

Mmm. Yes, this is an important distinction. I think to the extent that it didn't come across in the OP, that was a matter of how the OP was hacked together, not something I'm missing in my intent.

When I'm talking about pushing or effort or force, there's a pretty specific phenomenology that goes with what I'm talking about. It's an application of force specifically to overwhelm something intelligent that doesn't want to go that way. That's the whole reason for the force I'm talking about.

Like, why make yourself go jogging? Obviously jogging involves effort, but if you were skin to bones aligned with this as what you wanted to do, you'd just… engage in the effort. So why do you have to add tricks like decreasing the activation energy to start, and having an accountability buddy, and setting a Beeminder, etc.?

(A rhetorical "you", not you personally. I have no idea what your relationship to jogging is.)

The whole reason for those strategies is because there's something fighting back. And the entropic answer to that pushback is… to push forward harder.

That's the "force" I'm talking about.

There's no way to make exercise literally effortless. But there's totally a way to make it something like internally frictionless.

The main point of the OP is basically that doing things with internal friction by intending harder increases internal friction, and decreasing that internal friction matters way way way more than nearly anything you can in practice achieve by overwhelming it.

I agree that the language is confusing. Using "effort" and "force" and "trying" to point at this doesn't do justice to the fact that basically everything requires some kind of effort.

But… I think there's a clear thing once you see the pattern. At least it's very clear to me, even if I don't have better words for it.

Is it clear to you?

Replies from: alex-rozenshteyn, stavros, Slider
comment by rpglover64 (alex-rozenshteyn) · 2023-01-01T05:17:04.755Z · LW(p) · GW(p)

“everything is psychology; nothing is neurology”

this line confuses me.

It was just a handle that came to mind for the concept that I'm trying to warn against. Reading your post I get a sense that it's implicitly claiming that everything is mutable and nothing is fixed; eh... that's not right either. Like, it feels like it implicitly and automatically rejects that something like a coffee habit can be the correct move even if you look several levels up.

I think maybe you're saying that someone can choose to reach for coffee for reasons other than wakefulness or energy control.

More specifically, that coffee may be part of a healthy strategy for managing your own biochemistry. I don't think you say otherwise in the post, but it felt strongly suggested.

Donning the adaptive entropy lens, the place my attention goes to is the "chronically low dopamine". Why is that? What prevents the body from adapting to its context?

I think this is something I'm pushing back (lightly) against; I do not, on priors, expect every "problem" to be a failure of adaptation. Like, there might be a congenital set point, and you might have it in the bottom decile (note, I'm not saying that's actually the way it works).

I'd just add that "the same strategy" can be extremely meta.

👍

Mmm. Yes, this is an important distinction. I think to the extent that it didn't come across in the OP, that was a matter of how the OP was hacked together, not something I'm missing in my intent.

Makes sense; consider it something between "feedback on the article as written" and "breadcrumbs for others reading".

Is it clear to you?

I think... that I glimpse the dynamic you're talking about, and that I'm generally aware of its simplest version and try to employ conditions/consequences reasoning, but I do not consistently see it more generally.

[EDIT]

Sleeping on it, I also see connections to [patterns of refactored agency](https://www.ribbonfarm.com/2012/11/27/patterns-of-refactored-agency/) (specifically pervasiveness) and [out to get you](https://thezvi.wordpress.com/2017/09/23/out-to-get-you/). The difference is that while you're describing something like a physical principle, "out to get you" describes more of a social principle, and "refactored agency" is describing a useful thinking perspective.

Replies from: Valentine
comment by Valentine · 2023-01-01T19:18:42.342Z · LW(p) · GW(p)

it feels like [the OP] implicitly and automatically rejects that something like a coffee habit can be the correct move even if you look several levels up.

Ah. Got it.

That's not what I mean whatsoever.

I don't think it's a mistake to incur adaptive entropy. When it happens, it's because that's literally the best move the system in question (person, culture, whatever) can make, given its constraints.

Like, incurring technical debt isn't a mistake. It's literally the best move available at the time given the constraints. There's no blame in my saying that whatsoever. It's just true.

And, it's also true that technical debt incurs an ongoing cost. Again, no blame. It's just true.

In the same way (and really, as a generalization), incurring adaptive entropy always incurs a cost. That doesn't make it wrong to do. It's just true.

 

I do not, on priors, expect every "problem" to be a failure of adaptation.

I think this is a challenge of different definitions. To me, what "adaptation" and "problem" mean requires that every problem be a failure of adaptation. Otherwise it wouldn't be a problem!

I'm getting the impression that questions of blame or screw-up or making mistakes are crawling into several discussion points in these comments. Those questions are so far removed from my frame of thinking here that I just flat-out forgot to orient to them. They just don't have anything to do with what I'm talking about.

So when I say something like "a failure of adaptation", I'm talking about a fact. No blame, no "should". The way bacteria fail to adapt to bleach. Just a fact.

Everything we're inclined to call a "problem" is an encounter with a need to adapt that we haven't yet adapted to. That's what a problem is, to me.

So any persistent problem is literally the same thing as an encounter with limitations in our ability to adapt.

 

I think... that I glimpse the dynamic you're talking about, and that I'm generally aware of it's simplest version and try to employ conditions/consequences reasoning, but I do not consistently see it more generally.

Cool, good to know. Thank you.

 

Sleeping on it, I also see connections to patterns of refactored agency (specifically pervasiveness) and out to get you. The difference is that while you're describing something like a physical principle, "out to get you" describes more of a social principle, and "refactored agency" is describing a useful thinking perspective.

I don't follow this, sorry. I think I'd have to read those articles. I might later. For now, I'm just acknowledging that you've said… something here, but I'm not sure what you've said, so I don't have much to say in response just yet.

Replies from: alex-rozenshteyn
comment by rpglover64 (alex-rozenshteyn) · 2023-01-03T01:26:14.649Z · LW(p) · GW(p)

I think this is a challenge of different definitions. To me, what "adaptation" and "problem" mean requires that every problem be a failure of adaptation. Otherwise it wouldn't be a problem!

This was poor wording on my part; I think there's both a narrow sense of "adaptation" and a broader sense in play, and I mistakenly invoked the narrow sense to disagree. Like, continuing with the convenient fictional example of an at-birth dopamine set-point, the body cannot adapt to increase the set-point, but this is qualitatively different than a set-point that's controllable through diet; the latter has the potential to adapt, while the former cannot, so it's not a "failure" in some sense.

I feel like there's another relevant bit, though: whenever we talk of systems, a lot depends on where we draw the boundaries, and it's inherently somewhat arbitrary. The "need" for caffeine may be a failure of adaptation in the subsystem (my body), but a habit of caffeine intake is an example of adaptation in the supersystem (my body + my agency + the modern supply chain)

I think I'd have to read those articles. I might later.

I think I can summarize the connection I made.

In "out to get you", Zvi points out an adversarial dynamic when interacting with almost all human-created systems, in that they are designed to extract something from you, often without limit (the article also suggests that there are broadly four strategies for dealing with this). The idea of something intelligent being after your resources reminds me of your description of adaptive entropy.

In "refactored agency", Venkatesh Rao describes a cognitive reframing that I find particularly useful in which you ascribe agency to different parts of a system. It's not descriptive of a phenomenon (unlike, say, an egregore, or autopoesis) but of a lens through which to view a system. This is particularly useful for seeking novel insights or solutions; for example, how would problems and solutions differ if you view yourself as a "cog in the machine" vs "the hero", or your coworkers as "Moloch's pawns" rather than as "player characters, making choices" (these specific examples are my own extrapolations, not direct from the text). Again, ascribing agency/intelligence to the adaptive entropy, reminds me of this.

Everything we're inclined to call a "problem" is an encounter with a need to adapt that we haven't yet adapted to. That's what a problem is, to me.

This is tangential, but it strongly reminds me of the TRIZ framing of a problem (or "contradiction" as they call it): it's defined by the desire for two (apparently) opposing things (e.g. faster and slower).

comment by stavros · 2023-01-01T13:15:15.905Z · LW(p) · GW(p)

Thanks for your post, just wanted to contribute by deconfusing ADHD a little (hopefully). I agree that you and OP seem to be agreeing more than disagreeing.

So speaking from a pretty thorough ignorance of the topic itself, my guess based on my priors is that the problem-ness of ADHD has more to do with the combo of (a) taking in the culture's demand that you be functional in a very particular way combined with (b) a built-in incapability of functioning that way.

Correct. However that problem-ness is often a matter of survival/highly non-optional. ADHD can be an economic (and thus kinda literal) death sentence - if it wasn't for the support of my family I'd be homeless.

I think what the OP is referring to, why they raised ADHD specifically in this context, is because this habitualized conscious forcing/manipulation of our internal state (i.e. dopamine) is a crutch we can't afford to relinquish - without it we fall down, and we don't get back up.

I'm speaking as someone only recently (last year) diagnosed with (and medicated for) ADHD. I am easily twice as functional now as I was before I had medication (and I am still nowhere near as functional as the average person, let alone most of this crowd xD)

And, quite tidily, ADHD is one of the primary reasons I learned to develop slack - why I'm capable of grokking your position. ADHD is a neverending lesson in the necessity of slack, in learning to let go. 

ADHD is basically an extreme version of slack philosophy hardwired into your brain - it's great from a certain perspective, but it kinda gives us a healthy appreciation for the value of being able to force outcomes - in a 'you don't know what you've got til its gone' sense.

Replies from: alex-rozenshteyn, Valentine
comment by rpglover64 (alex-rozenshteyn) · 2023-01-01T18:08:25.771Z · LW(p) · GW(p)

I did start with "I agree 90%."

I raised ADHD because it was the first thing that popped into my mind where a chemical habit feels internally aligned, such that the narrative of the "addiction" reducing slack rang hollow.

And, quite tidily, ADHD is one of the primary reasons I learned to develop slack.

ADHD is basically an extreme version of slack philosophy hardwired into your brain.

That has not actually been my experience, but I get the sense that my ADHD is much milder than yours. I also get the sense that your experience w.r.t. ADHD and slack is really common for anything that is kinda-sorta-disabilityish (this [LW · GW] old post comes to mind, even though it doesn't explicitly mention it).

comment by Valentine · 2023-01-01T18:41:33.033Z · LW(p) · GW(p)

I found this super helpful. Thank you.

 

I think what the OP is referring to, why they raised ADHD specifically in this context, is because this habitualized conscious forcing/manipulation of our internal state (i.e. dopamine) is a crutch we can't afford to relinquish - without it we fall down, and we don't get back up.

Gotcha. I don't claim to fully understand — I have trouble imagining the experience you're describing from the inside — but this gives me a hint.

FWIW, I interpret this as "Oh, so this kind of ADHD is a condition where your adaptive capacity is too low to avoid incurring adaptive entropy from the culture."

Replies from: alex-rozenshteyn
comment by rpglover64 (alex-rozenshteyn) · 2023-01-03T01:12:05.608Z · LW(p) · GW(p)

This is actually confounded when using ADHD as an example because there's two dynamics at play:

  • Any "disability" (construed broadly, under the social model of disability) is, almost by definition, a case where your adaptive capacity is lower than expected (by society)
  • ADHD specifically affects executive function and impulse control, leading to a reduced ability to force, or do anything that isn't basically effortless.
comment by Slider · 2023-01-01T01:26:25.043Z · LW(p) · GW(p)

The text can be taken in a way where the need for coffee is because of an unreasonable demand or a previous screwup.

Obviously some kind of process in my body disagrees with my mental idea of how I should be.

This can feel like there is some (typical) neurological balance state and all deviation is a "defiance of nature".

For ADHD it might be apt to say that the brain can not be as stimulated as it would like to be. It would actually really agree to be more stimulated.

I found it a bit surprising, but the instruction booklet for ADHD medication included a line to the effect of "ADHD persons find it hard to focus. It is not their fault that they cannot deal with these kinds of situations", so the mitigation of the stigma must be really important if it is included alongside the most technical information about what medicines you should not mix, etc.

A question like

why isn't that fact enough for me to have the right amount of energy?

has an actual proper answer with ADHD, in that the executive function parts of the brain are too weak/tired. Here it is kinda implied that there is no proper reason to end up in this conclusion. But not everybody has a (totally) able brain.

Everybody not having their stats in the same configuration can be fine neurodiversity. But the low stats are a thing and they have real effects.

I do think that ADHD per se does not mean one can't prepare. But preparing can't rely on the standard memes and tricks. It can look like more post-it notes and more diligent calendar use.

Replies from: Valentine
comment by Valentine · 2023-01-01T19:03:25.160Z · LW(p) · GW(p)

The text can be taken in a way where the need for coffee is because of an unreasonable demand or a previous screwup.

Ah. To me that interpretation misses the core point, so it didn't cross my mind.

Judgments like "unreasonable" and "screwup" are coming from inside an adaptive-entropic system. That doesn't define how that kind of entropy works. The mechanism is just true. It's neutral, the way reality is neutral.

The need for coffee (in the example I gave) arises because of a tension between two adaptive systems: the one being identified with, and the one being imposed upon. And there's a cost to that tension, such as the need for coffee.

I don't feel this way about something like, say, taking oral vitamin D in the winter. That's not in opposition to some adaptive subsystem in me or in the world. It's actually me adapting to my constraints.

If someone's relationship to caffeine were like that, I wouldn't say it's entropy-inducing.

But when it is entropy-inducing, it's because of this "imposing an idea" structure.

…and that isn't to say it's a mistake! That, too, is imposing an idea of how things should be. The whole reason anyone incurs entropy is because that's literally the best move available to them best as they can tell. Doing anything else would (apparently) be worse for them.

There's no blame or "should" here. Just description of cause and effect — which, yes, bears on what what people might want to do, but doesn't start from there. Cannot start from there.

 

…executive function parts of the brain are too weak/tired [in ADHD].

Cool. As I said in another comment, from this I'm taking that ADHD (as you're talking about it) is about having a particular kind of reduced adaptive capacity.

My eyes still go to "Why are they too weak/tired?" and "What's the 'too' in comparison to?" The former is about causation chains, because if there's a limitation on adaptive capacity that a system can be aware of, it will want to route around it. So why hasn't an effective route around it been found? What's limiting the meta-adaptive capacity? This chain often leads to noticing spots of adaptive entropy in the environment.

And the "What's the 'too' in comparison to?" often leads to noticing how people take on the adaptive entropy of the larger context.

But sometimes limited adaptive capacity is just that. Like, humans die of old age, and sure we might be able to engineer our way around that eventually and our inability to take that engineering seriously is because of collective adaptive entropy… but no amount of sitting alone in a cave meditating is going to make you biologically immortal. That's just an adaptive limitation.

I'm hearing you say that ADHD is like that, and that an ADHD person's use of caffeine is therefore different from the case I named in the OP.

If so: cool.

I wonder how much of this whole topic coming up is a matter of taking "You've incurred adaptive entropy" as a matter of blame or shame. Like I'm saying it's bad or wrong to do this. And the objection is basically "ADHD folk need to engage with caffeine or something like it, so they shouldn't be blamed!"

FWIW, I promise that's not what I mean. Not even a little bit. Zero blame. Truly.

Replies from: joachim-bartosik, Slider
comment by Joachim Bartosik (joachim-bartosik) · 2023-01-07T00:02:56.150Z · LW(p) · GW(p)

I don't feel this way about something like, say, taking oral vitamin D in the winter. That's not in opposition to some adaptive subsystem in me or in the world. It's actually me adapting to my constraints.

If someone's relationship to caffeine were like that, I wouldn't say it's entropy-inducing.

 

I think this answers a question / request for clarification I had. So now I don't have to ask.

(The question was something like "But sometimes I use caffeine because I don't want to fall asleep while I'm driving (and things outside my control made it so that doing a few hundred km of driving now-ish is the best option I can see)".)

comment by Slider · 2023-01-01T19:55:19.425Z · LW(p) · GW(p)

I believe your goal is not to blame. But having good intentions does not mean you have good effects (pavements and all). It does ward off maliciousness, but it does not guarantee that the assistance helps. Being curious about the effects of your actions helps. But rare side effects might not be obvious at all. Rejecting feedback with "I couldn't have known" can prevent knowing the bits for the future.

I don't feel this way about something like, say, taking oral vitamin D in the winter. That's not in opposition to some adaptive subsystem in me or in the world. It's actually me adapting to my constraints.

With this, the intention probably is not to exclude people living in equatorial areas. But if winter gets as much light as summer, this kind of vitamin-D pattern would not make sense. So even if we do not intend to, and even if we are aware of what is going on, this kind of analogy does exclude equatorial people.

If you lived in constant shade, then it could make sense to take vitamin D in both summer and winter. In an important way, coffee is like vitamin D for (some) ADHD situations. So largely, in response to "If so: cool": indeed, it is that way.

(Stickler for possibility claims: if one thinks that AGI can produce biological immortality and that meditation can lead to a working AGI scheme, then meditation can lead to biological immortality. But I know what that passage gets at.)

If standard lectures last for 2 hours and an anomalous lecture lasts for 4 hours, and in the last hour nobody can follow anything, the diagnosis tends to be that the lecture is too long. If a student can only pay attention for the first hour of a 2-hour lecture, the diagnosis tends to be that the student is too impatient.

I would not say that if somebody has low muscle mass, their capacity to change their muscle mass is impaired (that there is some problem with them using a weightlifting gym). "Do you even lift?" implies that (all) humans should lift. Not everything is worth changing or possible to change. I don't have great pointers on the more nebulous feeling of where I think others are coming from in their reactions. I know the thing was meant conditionally. But bits like

And yeah, I do think it's the right word, which is why I'm picking it. Please notice the framing effect, and adjust yourself as needed.

and

Oops

mean stuff. (If you leave your terms open, then you can't effectively say that you mean zero of something. One risks meaning slightly bad stuff with vague terms. That can be an understandable tradeoff to make communication possible at all (or to be at some required handiness bar).)

Replies from: Valentine
comment by Valentine · 2023-01-02T18:38:52.617Z · LW(p) · GW(p)

I'm not interested in this branch of conversation. Just letting you know that I see this and am choosing not to continue the exchange.