Eli's shortform feed

post by elityre · 2019-06-02T09:21:32.245Z · score: 31 (6 votes) · LW · GW · 76 comments

I'm mostly going to use this to crosspost links to my blog for less polished thoughts, Musings and Rough Drafts.

Comments sorted by top scores.

comment by elityre · 2019-09-27T22:08:05.174Z · score: 49 (12 votes) · LW · GW

New post: Some things I think about Double Crux and related topics

I've spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements and transferring tacit knowledge. I'm also probably the person who has spent the most time explicitly thinking about and working with CFAR's Double Crux framework. It seems good for at least some of my high-level thoughts to be written up someplace, even if I'm not going to go into detail about, defend, or substantiate most of them.

The following are my own beliefs and do not necessarily represent CFAR, or anyone else.

I, of course, reserve the right to change my mind.

[Throughout I use "Double Crux" to refer to the Double Crux technique, the Double Crux class, or a Double Crux conversation, and I use "double crux" to refer to a proposition that is a shared crux for two people in a conversation.]

Here are some things I currently believe:

(General)

  1. Double Crux is one (highly important) tool/framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular. The Double Crux framework is maybe the most important tool (that I know of) for resolving disagreements, but it is only one tool/framework in an ensemble.
    1. Some other tools/frameworks that are not strictly part of Double Crux (but which are sometimes crucial to bridging disagreements) include NVC, methods for managing people's intentions and goals, various forms of co-articulation (helping to draw out an inchoate model from one's conversational partner), etc.
    2. In some contexts other tools are substitutes for Double Crux (i.e., another framework is more useful) and in some cases other tools are helpful or necessary complements (i.e., they solve problems or smooth the process within the Double Crux frame).
    3. In particular, my personal conversational facilitation repertoire is about 60% Double Crux-related techniques, and 40% other frameworks that are not strictly within the frame of Double Crux.
  2. Just to say it clearly: I don't think Double Crux is the only way to resolve disagreements, or the best way in all contexts. (Though I think it may be the best way, that I know of, in a plurality of common contexts?)
  3. The ideal use case for Double Crux is when...
    1. There are two people...
    2. ...who have a real, action-relevant, decision...
    3. ...that they need to make together (they can't just do their own different things)...
    4. ...in which both people have strong, visceral intuitions.
  4. Double Cruxes are almost always conversations between two people's system 1's.
  5. You can Double Crux between two people's unendorsed intuitions. (For instance, Alice and Bob are discussing a question about open borders. They both agree that neither of them is an economist, that neither of them trusts their intuitions here, and that if they had to actually make this decision, it would be crucial to spend a lot of time doing research, examining the evidence, and consulting experts. But nevertheless, Alice's current intuition leans in favor of open borders, and Bob's current intuition leans against. This is a great starting point for a Double Crux.)
  6. Double cruxes (as in a crux that is shared by both parties in a disagreement) are common and useful. Most disagreements have implicit double cruxes, though identifying them can sometimes be tricky.
  7. Conjunctive cruxes (I would change my mind about X, if I changed my mind about Y and about Z, but not if I only changed my mind about Y or about Z) are common.
  8. Folks sometimes object that Double Crux won't work, because their belief depends on a large number of considerations, each of which has only a small impact on their overall belief, and so no one consideration is a crux. In practice, I find that there are double cruxes to be found even in cases where people expect their beliefs to have this structure.
    1. Theoretically, it makes sense that we would find double cruxes in these scenarios: if a person has a strong disagreement (including a disagreement of intuition) with someone else, we should expect that there are a small number of considerations doing most of the work of causing one person to think one thing and the other to think something else. It is improbable that each person's belief depends on 50 factors, with most of those 50 factors pointing in one direction for Alice and in the other direction for Bob, unless those factors are not independent (see the short sketch after this list). If the considerations are correlated, you can abstract out the fact or belief that generates the differing predictions in all of those separate considerations. That "generating belief" is the crux.
    2. That said, there is a different conversational approach that I sometimes use, which involves delineating all of the key considerations (then doing Goal Factoring-style relevance and completeness checks), and then dealing with each consideration one at a time (often via a fractal tree structure: listing the key considerations of each of the higher-level considerations).
      1. This approach absolutely requires paper, and skillful (firm, gentle) facilitation, because people will almost universally try to hop around between considerations, and they need to be viscerally assured that their other concerns are recorded and will be dealt with in due course, in order to engage deeply with any given consideration one at a time.
  9. About 60% of the power of Double Crux comes from operationalizing or being specific.
    1. I quite like Liron's [LW · GW] recent sequence [LW · GW] on being specific. It re-reminded me of some basic things that have been helpful in several recent conversations. In particular, I like the move [LW · GW] of having a conversational partner paint a specific, best case scenario, as a starting point for discussion.
      1. (However, I'm concerned about Less Wrong readers trying this with a spirit of trying to "catch out" one's conversational partner in inconsistency, instead of trying to understand what their partner wants to say, and thereby shooting themselves in the foot. I think the attitude of looking to "catch out" is usually counterproductive to both understanding and to persuasion. People rarely change their mind when they feel like you have trapped them in some inconsistency, but they often do change their mind if they feel like you've actually heard and understood their belief / what they are trying to say / what they are trying to defend, and then provide relevant evidence and argument. In general (but not universally) it is more productive to adopt a collaborative attitude of sincerely trying to help a person articulate, clarify, and substantiate the point they are trying to make, even if you suspect that their point is ultimately wrong and confused.)
    2. As an aside, specificity and operationalization are also the engine that makes Nonviolent Communication work. Being specific is really super powerful.
  10. Many (~50%) disagreements evaporate upon operationalization, though this happens less frequently than people think. And if you seem to agree about all of the facts, and agree about all specific operationalizations, but nevertheless seem to have differing attitudes about a question, that should be a flag. [I have a post that I'll publish soon about this problem.]
  11. You should be using paper when Double Cruxing. Keep track of the chain of Double Cruxes, and keep them in view.
  12. People talk past each other all the time, and often don't notice it. Frequently paraphrasing your current understanding of what your conversational partner is saying helps with this. [There is a lot more to say about this problem, and details about how to solve it effectively].
  13. I don't endorse the Double Crux "algorithm [LW · GW]" described in the canonical post. That is, I don't think that the best way to steer a Double Crux conversation is to hew to those 5 steps in that order. Actually finding double cruxes is, in practice, much more complicated, and there are a large number of heuristics and TAPs that make the process work. I regard that algorithm as an early (and self-conscious [LW · GW]) attempt to delineate moves that would help move a conversation towards double cruxes.
  14. This is my current best attempt at distilling the core moves that make Double Crux work, though this leaves out a lot.
  15. In practice, I think that double cruxes most frequently emerge not from people independently generating their own lists of cruxes (though this is useful). Rather, double cruxes usually emerge from the move of "checking if the point that your partner made is a crux for you."
  16. I strongly endorse facilitation of basically all tricky conversations, Double Crux oriented or not. It is much easier to have a third party track the meta and help steer, instead of the participants, whose working memory is (and should be) full of the object level.
  17. So-called "Triple Crux" is not a feasible operation. If you have more than two stakeholders, have two of them Double Crux, and then have one of those two Double Crux with the third person. Things get exponentially trickier as you add more people. I don't think that Double Crux is a feasible method for coordinating more than ~6 people. We'll need other methods for that.
  18. Double Crux is much easier when both parties are interested in truth-seeking and in changing their mind, and are assuming good faith about the other. But, these are not strict prerequisites, and unilateral Double Crux is totally a thing.
  19. People being defensive, emotional, or ego-filled does not preclude a productive Double Crux. Some particular auxiliary skills are required for navigating those situations, however.
    1. This is a good start for the relevant skills.
  20. If a person wants to get better at Double Crux skills, I recommend they cross-train with IDC. Any move that works in IDC you should try in Double Crux. Any move that works in Double Crux you should try in IDC. This will seem silly sometimes, but I am pretty serious about it, even in the silly-seeming cases. I've learned a lot this way.
  21. I don't think Double Crux necessarily runs into a problem of "black box beliefs", wherein one can no longer make progress because one or both parties come down to a fundamental disagreement about System 1 heuristics/models that they learned from some training data, but into which they can't introspect. Almost always, there are ways to draw out those models.
    1. The simplest way to do this (which is not the only or best way, depending on the circumstances) involves generating many examples and testing the "black box" against them. Vary the hypothetical situation to triangulate the exact circumstances in which the "black box" outputs which suggestions.
    2. I am not making the universal claim that one never runs into black box beliefs that can't be dealt with.
  22. Disagreements rarely come down to "fundamental value disagreements". If you think that you have gotten to a disagreement about fundamental values, I suspect there was another conversational tack that would have been more productive.
  23. Also, you can totally Double Crux about values. In practice, you can often treat values like beliefs: often there is some evidence that a person could observe, at least in principle, that would convince them to hold or not hold some "fundamental" value.
    1. I am not making the claim that there are no such thing as fundamental values, or that all values are Double Crux-able.
  24. A semi-esoteric point: cruxes are (or can be) contiguous with operationalizations. For instance, if I'm having a disagreement about whether advertising produces value on net, I might operationalize to "beer commercials, in particular, produce value on net", which (if I think that operationalization actually captures the original question) is isomorphic to "The value of beer commercials is a crux for the value of advertising. I would change my mind about advertising in general, if I changed my mind about beer commercials." (This is an evidential crux, as opposed to the more common causal crux. (More on this distinction in future posts.))
  25. People's beliefs are strongly informed by their incentives. This makes me somewhat less optimistic about tools in this space than I would otherwise be, but I still think there's hope.
  26. There are a number of gaps in the repertoire of conversational tools that I'm currently aware of. One of the most important holes is the lack of a method for dealing with psychological blindspots. These days, I often run out of ability to make a conversation go well when we bump into a blindspot in one person or the other (sometimes, there seem to be psychological blindspots on both sides). Tools wanted, in this domain.
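
As a rough illustration of the probability point in 8.1 above, here is a minimal Python sketch. The specific numbers (50 independent considerations, a 70% threshold) are just illustrative assumptions, not anything from the original argument:

```python
import math

# 50 independent considerations, each equally likely to favor either side.
n, k = 50, 35  # illustrative: how often do 70% or more land on one given side?

p_one_sided = sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k} of {n} independent factors favor a given side) ~ {p_one_sided:.4f}")

# This comes out well under 1%, and the chance that Alice's factors mostly point
# one way *while* Bob's mostly point the other way is smaller still. So a strong,
# stable disagreement suggests the considerations are correlated -- i.e. generated
# by a small number of upstream beliefs, which is where the double crux lives.
```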

(The Double Crux class)

  1. Knowing how to identify double cruxes can be kind of tricky, and I don't think that most participants learn the knack from the 55-to-70-minute Double Crux class at a CFAR workshop.
  2. Currently, I think I can teach the basic knack (not including all the other heuristics and skills) to a person in about 3 hours, but I'm still playing around with how to do this most efficiently. (The "Basic Double Crux pattern" post is the distillation of my current approach.)
    1. This is one development avenue that would particularly benefit from parallel search: If you feel like you "get" Double Crux, and can identify Double Cruxes fairly reliably and quickly, it might be helpful if you explicated your process.
  3. That said, there are a lot of relevant complements and sub-skills to Double Crux, and to bridging disagreements more generally.
  4. The most important function of the Double Crux class at CFAR workshops is teaching and propagating the concept of a "crux", and to a lesser extent, the concept of a "double crux". These are very useful shorthands for one's personal thinking and for discourse, which are great to have in the collective lexicon.

(Some other things)

  1. Personally, I am mostly focused on developing deep methods (perhaps for training high-expertise specialists) that increase the range of disagreement problems that the x-risk ecosystem can solve at all. I care more about this goal than about developing shallow tools that are useful "out of the box" for smart non-specialists, or about trying to change the conversational norms of various relevant communities (though both of those are secondary goals).
  2. I am highly skeptical of teaching many-to-most of the important skills for bridging deep disagreement, via anything other than ~one-on-one, in-person interaction.
  3. In large part due to being prodded by a large number of people, I am polishing all my existing drafts of Double Crux stuff (and writing some new posts), and posting them here over the next few weeks. (There are already some drafts, still being edited, available on my blog.)

I have a standing offer to facilitate conversations and disagreements (Double Crux or not) for rationalists and EAs. Email me at eli [at] rationality [dot] org if that's something you're interested in.

comment by Zack_M_Davis · 2019-09-28T16:36:06.449Z · score: 13 (4 votes) · LW · GW

People rarely change their mind when they feel like you have trapped them in some inconsistency [...] In general (but not universally) it is more productive to adopt a collaborative attitude of sincerely trying to help a person articulate, clarify, and substantiate [bolding mine—ZMD]

"People" in general rarely change their mind when they feel like you have trapped them in some inconsistency, but people using the double-crux method in the first place are going to be aspiring rationalists, right? Trapping someone in an inconsistency (if it's a real inconsistency and not a false perception of one) is collaborative: the thing they were thinking was flawed, and you helped them see the flaw! That's a good thing! (As it is written of the fifth virtue, "Do not believe you do others a favor if you accept their arguments; the favor is to you.")

Obviously, I agree that people should try to understand their interlocutors. (If you performatively try to find fault in something you don't understand, then apparent "faults" you find are likely to be your own misunderstandings rather than actual faults.) But if someone spots an actual inconsistency in my ideas, I want them to tell me right away. Performing the behavior of trying to substantiate something that cannot, in fact, be substantiated (because it contains an inconsistency) is a waste of everyone's time!

In general (but not universally) it is more productive to adopt a collaborative attitude

Can you say more about what you think the exceptions to the general-but-not-universal rule are? (Um, specifically [LW · GW].)

comment by Slider · 2019-09-28T20:08:54.890Z · score: 1 (1 votes) · LW · GW

I would think that inconsistencies are easier to appreciate when they are in the central machinery. A rationalist might have more load-bearing beliefs, so most beliefs are central to at least something, but I think a centrality/point-of-communication check is more upside than downside to keep. Also, cognitive time spent looking for inconsistencies could be better spent on more constructive activities. Then there is the whole class of heuristics which don't even claim to be consistent. So the ability to pass by an inconsistency without hanging onto it will see use.

comment by DanielFilan · 2019-09-27T23:18:52.368Z · score: 2 (1 votes) · LW · GW

FYI the numbering in the (General) section is pretty off.

comment by elityre · 2019-09-28T07:01:41.277Z · score: 4 (2 votes) · LW · GW

What do you mean? All the numbers are in order. Are you objecting to the nested numbers?

comment by DanielFilan · 2019-09-28T21:01:42.455Z · score: 2 (1 votes) · LW · GW

To me, it looks like the numbers in the General section go 1, 4, 5, 5, 6, 7, 8, 9, 3, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 2, 3, 3, 4, 2, 3, 4 (ignoring the nested numbers).

comment by DanielFilan · 2019-09-28T21:10:01.837Z · score: 2 (1 votes) · LW · GW

(this appears to be a problem where it displays differently on different browser/OS pairs)

comment by elityre · 2019-08-24T02:44:31.028Z · score: 46 (14 votes) · LW · GW

Old post: RAND needed the "say oops" skill

[Epistemic status: a middling argument]

A few months ago, I wrote about how RAND and the “Defense Intellectuals” of the Cold War represent another precious datapoint of “very smart people, trying to prevent the destruction of the world, in a civilization that they acknowledge to be inadequate to dealing sanely with x-risk.”

Since then I have spent some time doing additional research into what cognitive errors and mistakes those consultants, military officials, and politicians made that endangered the world. The idea was that if we could diagnose which specific irrationalities they were subject to, this would suggest errors that might also be relevant to contemporary x-risk mitigators, and might point out some specific areas where development of rationality training is needed.

However, this proved somewhat less fruitful than I was hoping, and I’ve put it aside for the time being. I might come back to it in the coming months.

It does seem worth sharing at least one relevant anecdote, and some analysis, from Daniel Ellsberg’s excellent book, The Doomsday Machine, given that I’ve already written it up.

The missile gap

In the late nineteen-fifties it was widely understood that there was a “missile gap”: that the Soviets had many more ICBMs (intercontinental ballistic missiles armed with nuclear warheads) than the US.

Estimates varied widely on how many missiles the Soviets had. The Army and the Navy gave estimates of about 40 missiles, which was about at parity with the US’s strategic nuclear force. The Air Force and the Strategic Air Command, in contrast, gave estimates of as many as 1000 Soviet missiles, 20 times more than the US’s count.

(The Air Force and SAC were incentivized to inflate their estimates of the Russian nuclear arsenal, because a large missile gap strongly necessitated the creation of more nuclear weapons, which would be under SAC control and entail increases in the Air Force budget. Similarly, the Army and Navy were incentivized to lowball their estimates, because a comparatively weaker Soviet nuclear force made conventional military forces more relevant and implied allocating budget resources to the Army and Navy.)

So there was some dispute about the size of the missile gap, including an unlikely possibility of nuclear parity with the Soviet Union. Nevertheless, the Soviets’ nuclear superiority was the basis for all planning and diplomacy at the time.

Kennedy campaigned on the basis of correcting the missile gap. Perhaps more critically, all of RAND’s planning and analysis was concerned with the possibility of the Russians launching a nearly-or-actually debilitating first or second strike.

The revelation

In 1961 it came to light, on the basis of new satellite photos, that all of these estimates were dead wrong. It turned out that the Soviets had only 4 nuclear ICBMs, one tenth as many as the US controlled.

The importance of this development should be emphasized. It meant that several of the fundamental assumptions of US nuclear planners were in error.

First of all, it meant that the Soviets were not bent on world domination (as had been assumed). Ellsberg says…

Since it seemed clear that the Soviets could have produced and deployed many, many more missiles in the three years since their first ICBM test, it put in question—it virtually demolished—the fundamental premise that the Soviets were pursuing a program of world conquest like Hitler’s.

That pursuit of world domination would have given them an enormous incentive to acquire at the earliest possible moment the capability to disarm their chief obstacle to this aim, the United States and its SAC. [That] assumption of Soviet aims was shared, as far as I knew, by all my RAND colleagues and with everyone I’d encountered in the Pentagon:
The Assistant Chief of Staff, Intelligence, USAF, believes that Soviet determination to achieve world domination has fostered recognition of the fact that the ultimate elimination of the US, as the chief obstacle to the achievement of their objective, cannot be accomplished without a clear preponderance of military capability.
If that was their intention, they really would have had to seek this capability before 1963. The 1959–62 period was their only opportunity to have such a disarming capability with missiles, either for blackmail purposes or an actual attack. After that, we were programmed to have increasing numbers of Atlas and Minuteman missiles in hard silos and Polaris sub-launched missiles. Even moderate confidence of disarming us so thoroughly as to escape catastrophic damage from our response would elude them indefinitely.
Four missiles in 1960–61 was strategically equivalent to zero, in terms of such an aim.

This revelation about Soviet goals was not only of obvious strategic importance, it also took the wind out of the ideological motivation for this sort of nuclear planning. As Ellsberg relays early in his book, many, if not most, RAND employees were explicitly attempting to defend the US and the world from what was presumed to be an aggressive communist state, bent on conquest. This just wasn’t true.

But it had even more practical consequences: this revelation meant that the Russians had no first strike (or for that matter, second strike) capability. They could launch their ICBMs at American cities or military bases, but such an attack had no chance of debilitating US second strike capacity. It would unquestionably trigger a nuclear counterattack from the US, which, with its 40 missiles, would be able to utterly annihilate the Soviet Union. The only effect of a Russian nuclear attack would be to doom their own country.

[Eli’s research note: What about all the Russian planes and bombs? ICBMs aren’t the only way of attacking the US, right?]

This means that the primary consideration in US nuclear war planning at RAND and elsewhere was fallacious. The Soviets could not meaningfully destroy the US.

…the estimate contradicted and essentially invalidated the key RAND studies on SAC vulnerability since 1956. Those studies had explicitly assumed a range of uncertainty about the size of the Soviet ICBM force that might play a crucial role in combination with bomber attacks. Ever since the term “missile gap” had come into widespread use after 1957, Albert Wohlstetter had deprecated that description of his key findings. He emphasized that those were premised on the possibility of clever Soviet bomber and sub-launched attacks in combination with missiles or, earlier, even without them. He preferred the term “deterrent gap.” But there was no deterrent gap either. Never had been, never would be.
To recognize that was to face the conclusion that RAND had, in all good faith, been working obsessively and with a sense of frantic urgency on a wrong set of problems, an irrelevant pursuit in respect to national security.

This realization invalidated virtually all of RAND’s work to date. Virtually every analysis, study, and strategy had been useless, at best.

The reaction to the revelation

How did RAND employees respond to this revelation that their work had been completely off base?

That is not a recognition that most humans in an institution are quick to accept. It was to take months, if not years, for RAND to accept it, if it ever did in those terms. To some degree, it’s my impression that it never recovered its former prestige or sense of mission, though both its building and its budget eventually became much larger. For some time most of my former colleagues continued their focus on the vulnerability of SAC, much the same as before, while questioning the reliability of the new estimate and its relevance to the years ahead. [Emphasis mine]

For years the specter of a “missile gap” had been haunting my colleagues at RAND and in the Defense Department. The revelation that this had been illusory cast a new perspective on everything. It might have occasioned a complete reassessment of our own plans for a massive buildup of strategic weapons, thus averting an otherwise inevitable and disastrous arms race. It did not; no one known to me considered that for a moment. [Emphasis mine]

According to Ellsberg, many at RAND were unable to adapt to the new reality and (fruitlessly) continued with what they were doing, as if by inertia, when the thing that they needed to do (to use Eliezer’s turn of phrase) was to “halt, melt, and catch fire.”

This suggests that one failure of this ecosystem, which was working in the domain of existential risk, was a failure to “say oops [LW · GW]“: to notice a mistaken belief, concretely acknowledge that it was mistaken, and to reconstruct one’s plans and worldviews.

Relevance to people working on AI safety

This seems to be at least some evidence (though, only weak evidence, I think), that we should be cautious of this particular cognitive failure ourselves.

It may be worth rehearsing the motion in advance: how will you respond, when you discover that a foundational crux of your planning is actually a mirage, and the world is actually different than it seems?

What if you discovered that your overall approach to making the world better was badly mistaken?

What if you received a strong argument against the orthogonality thesis?

What about a strong argument for negative utilitarianism?

I think that many of the people around me have effectively absorbed the impact of a major update at least once in their life, on a variety of issues (religion, x-risk, average vs. total utilitarianism, etc), so I’m not that worried about us. But it seems worth pointing out the importance of this error mode.

A note: Ellsberg relays later in the book that, during the Cuban missile crisis, he perceived Kennedy as offering baffling terms to the Soviets: terms that didn’t make sense in light of the actual strategic situation, but might have been sensible under the premise of a Soviet missile gap. Ellsberg wondered, at the time, if Kennedy had also failed to propagate the update regarding the actual strategic situation.

I believed it very unlikely that the Soviets would risk hitting our missiles in Turkey even if we attacked theirs in Cuba. We couldn’t understand why Kennedy thought otherwise. Why did he seem sure that the Soviets would respond to an attack on their missiles in Cuba by armed moves against Turkey or Berlin? We wondered if—after his campaigning in 1960 against a supposed “missile gap”—Kennedy had never really absorbed what the strategic balance actually was, or its implications.

I mention this because additional research suggests that this is implausible: that Kennedy and his staff were aware of the true strategic situation, and that their planning was based on that premise.

comment by habryka (habryka4) · 2019-08-24T03:50:55.284Z · score: 18 (6 votes) · LW · GW

This was quite valuable to me, and I think I would be excited about seeing it as a top-level post.

comment by elityre · 2019-08-24T07:22:18.897Z · score: 3 (3 votes) · LW · GW

Can you say more about what you got from it?

comment by billzito · 2019-08-26T21:28:13.198Z · score: 4 (3 votes) · LW · GW

I can't speak for habryka, but I think your post did a great job of laying out the need for "say oops" in detail. I read the Doomsday Machine and felt this point very strongly while reading it, but this was a great reminder to me of its importance. I think "say oops" is one of the most important skills for actually working on the right thing, and that in my opinion, very few people have this skill even within the rationality community.

comment by Adam Scholl (adam_scholl) · 2019-08-26T06:34:36.666Z · score: 4 (3 votes) · LW · GW

There feel to me like two relevant questions here, which seem conflated in this analysis:

1) At what point did the USSR gain the ability to launch a comprehensively-destructive, undetectable-in-advance nuclear strike on the US? That is, at what point would a first strike have been achievable and effective?

2) At what point did the USSR gain the ability to launch such a first strike using ICBMs in particular?

By 1960 the USSR had 1,605 nuclear warheads; there may have been few ICBMs among them, but there are other ways to deliver warheads than shooting them across continents. Planes fail the "undetectable" criterion, but ocean-adjacent cities can be blown up by small boats, and by 1960 the USSR had submarines equipped with six "short"-range (650 km and 1,300 km) ballistic missiles. By 1967 they were producing subs like this, each of which was armed with 16 missiles with ranges of 2,800-4,600 km.

All of which is to say that from what I understand, RAND's fears were only a few years premature.

comment by elityre · 2019-08-13T18:02:40.236Z · score: 38 (16 votes) · LW · GW

New post: What is mental energy?

[Note: I’ve started a research side project on this question, and it is already obvious to me that this ontology is importantly wrong.]

There’s a common phenomenology of “mental energy”. For instance, if I spend a couple of hours thinking hard (maybe doing math), I find it harder to do more mental work afterwards. My thinking may be slower and less productive. And I feel tired, or drained (mentally, rather than physically).

Mental energy is one of the primary resources that one has to allocate, in doing productive work. In almost all cases, humans have less mental energy than they have time, and therefore effective productivity is a matter of energy management, more than time management. If we want to maximize personal effectiveness, mental energy seems like an extremely important domain to understand. So what is it?

The naive story is that mental energy is an actual energy resource that one expends and then needs to recoup. That is, when one is doing cognitive work, they are burning calories, depleting their body’s energy stores. As they use energy, they have less fuel to burn.

My current understanding is that this story is not physiologically realistic. Thinking hard does consume more of the body’s energy than baseline, but not that much more. And we experience mental fatigue long before we even get close to depleting our calorie stores. It isn’t literal energy that is being consumed. [The Psychology of Fatigue pg.27]

So if not that, what is going on here?

A few hypotheses:

(The first few are all of a cluster, so I labeled them 1a, 1b, 1c, etc.)

Hypothesis 1a: Mental fatigue is a natural control system that redirects our attention to our other goals.

The explanation that I’ve heard most frequently in recent years (since it became obvious that much of the literature on ego-depletion was off the mark), is the following:

A human mind is composed of a bunch of subsystems that are all pushing for different goals. For a period of time, one of these goal threads might be dominant. For instance, if I spend a few hours doing math, this means that my other goals are temporarily suppressed or on hold: I’m not spending that time seeking a mate, or practicing the piano, or hanging out with friends.

In order to prevent those goals from being neglected entirely, your mind has a natural control system that prevents you from focusing your attention on any one thing indefinitely: the longer you put your attention on something, the greater the buildup of mental fatigue, pushing you to do something else.

Comments and model-predictions: This hypothesis, as stated, seems implausible to me. For one thing, it seems to suggest that all activities would be equally mentally taxing, which is empirically false: spending several hours doing math is mentally fatiguing, but spending the same amount of time watching TV is not.

This might still be salvaged if we offer some currency other than energy that is being preserved: something like “forceful computations”. But again, it doesn’t seem obvious why the computations of doing math would be more costly than those for watching TV.

Similarly, this model suggests that “a change is as good as a break”: if you switch to a new task, you should be back to full mental energy, until you become fatigued for that task as well.

Hypothesis 1b: Mental fatigue is the phenomenological representation of the loss of support for the winning coalition.

A variation on this hypothesis would be to model the mind as a collection of subsystems. At any given time, there is only one action sequence active, but that action sequence is determined by continuous “voting” by various subsystems.

Over time, these subsystems get fed up with their goals not being met, and “withdraw support” for the current activity. This manifests as increasing mental fatigue. (Perhaps your thoughts get progressively less effective, because they are interrupted, on the scale of microseconds, by bids to think something else.)

Comments and model-predictions: This seems like it might suggest that if all of the subsystems have high trust that their goals will be met, that math (or any other cognitively demanding task) would cease to be mentally taxing. Is that the case? (Does doing math mentally exhaust Critch?)

This does have the nice virtue of explaining burnout: when some subset of needs are not satisfied for a long period, the relevant subsystems pull their support for all actions, until those needs are met.

[Is burnout a good paradigm case for studying mental energy in general?]

Hypothesis 1c: The same as 1a or 1b, but some mental operations are painful for some reason.

To answer my question above, one reason why math might be more mentally taxing than watching TV, is that doing math is painful.

If the process of doing math is painful on the micro-level, then even if all of the other needs are met, there is still a fundamental conflict between the subsystem that is aiming to acquire math knowledge and the subsystem that is trying to avoid micro-pain.

As you keep doing math, the micro-pain part votes more and more strongly against doing math, or the overall system biases away from the current activity, and you run out of mental energy.

Comments and model-predictions: This seems plausible for the activity of doing math, which involves many moments of frustration, which might be meaningfully micro-painful. But it seems less consistent with activities like writing, which phenomenologically feel non-painful. This leads to hypothesis 1d…

Hypothesis 1d: The same as 1c, but the key micro-pain is that of processing ambiguity second to second.

Maybe the pain comes from many moments of processing ambiguity, which is definitely a thing that is happening in the context of writing. (I’ll sometimes notice myself trying to flinch toward something easier when I’m not sure which sentence to write.) It seems plausible that mentally taxing activities are taxing to the extent that they involve processing ambiguity, and doing a search for the best template to apply.

Hypothesis 1e: Mental fatigue is the penalty incurred for top down direction of attention.

Maybe consciously deciding to do things is importantly different from the “natural” allocation of cognitive resources. That is, your mind is set up such that the conscious, System 2, long term planning, metacognitive system, doesn’t have free rein. It has a limited budget of “mental energy”, which measures how long it is allowed to call the shots before the visceral, system 1, immediate gratification systems take over again.

Maybe this is an evolutionary adaptation? For the monkeys that had “really good” plans for how to achieve their goals, those plans never panned out, while the monkeys that were impulsive some of the time actually did better at the reproduction game?

(If this is the case, can the rest of the mind learn to trust S2 more, and thereby offer it a bigger mental energy budget?)

This hypothesis does seem consistent with my observation that rest days are rejuvenating, even when I spend my rest day working on cognitively demanding side projects.

Hypothesis 2: Mental fatigue is the result of the brain temporarily reaching knowledge saturation.

When learning a motor task, there are several phases in which skill improvement occurs. The first, unsurprisingly, is during practice sessions. However, one also sees automatic improvements in skill in the hours after practice [actually this part is disputed] and following a sleep period (academic link 1, 2, 3). That is, there is a period of consolidation following a practice session. This period of consolidation probably involves the literal strengthening of neural connections, and encoding other brain patterns that take more than a few seconds to set.

I speculate that your brain may reach a saturation point: more practice, more information input, becomes increasingly less effective, because you need to dedicate cognitive resources to consolidation. [Note that this is supposing that there is some tradeoff between consolidation activity and input activity, as opposed to a setup where both can occur simultaneously (does anyone have evidence for such a tradeoff?)].

If so, maybe cognitive fatigue is the phenomenology of needing to extract one’s self from a practice / execution regime, so that your brain can do post-processing and consolidation on what you’ve already done and learned.

Comments and model-predictions: This seems to suggest that all cognitively taxing tasks are learning tasks, or at least tasks in which one is encoding new neural patterns. This seems plausible, at least.

It also seems to naively imply that an activity will become less mentally taxing as you gain expertise with it, and progress along the learning curve. There is (presumably) much more information to process and consolidate in your first hour of doing math than in your 500th.

Hypothesis 3: Mental fatigue is a control system that prevents some kind of damage to the mind or body.

One reason why physical fatigue is useful is that it prevents damage to your body. Getting tired after running for a bit stops you from running all out for 30 hours at a time and eroding your fascia.

By simple analogy to physical fatigue, we might guess that mental fatigue is a response to vigorous mental activity that is adaptive in that it prevents us from hurting ourselves.

I have no idea what kind of damage might be caused by thinking too hard.

I note that mania and hypomania involve apparently limitless mental energy reserves, and I think that these states are bad for your brain.

Hypothesis 4: Mental fatigue is a buffer overflow of peripheral awareness.

Another speculative hypothesis: Human minds have a working memory: a limit of ~4 concepts, or chunks, that can be “activated”, or operated upon in focal attention, at one time. But meditators, at least, also talk about a peripheral awareness: a sort of halo of concepts and sense impressions that are “loaded up”, or “nearby”, or cognitively available, or “on the fringes of awareness”. These are all the ideas that are “at hand” to your thinking. [Note: is peripheral awareness, as the meditators talk about it, the same thing as “short-term memory”?]

Perhaps if there is a functional limit to the amount of content that can be held in working memory, there is a similar, if larger, limit to how much content can be held in peripheral awareness. As you engage with a task, more and more mental content is loaded up, or added to peripheral awareness, where it both influences your focal thought process, and/or is available to be operated on directly in working memory. As you continue the task, and more and more content gets added to peripheral awareness, you begin to overflow its capacity. It gets harder and harder to think, because peripheral awareness is overflowing. Your mind needs space to re-ontologize: to chunk pieces together, so that it can all fit in the same mental space. Perhaps this is what mental fatigue is.

Comments and model-predictions: This does give a nice clear account of why sleep replenishes mental energy (it both causes re-ontologizing, and clears the cache), though perhaps this does not provide evidence over most of the other hypotheses listed here.

Other notes about mental energy:

  • In this post, I’m mostly talking about mental energy on the scale of hours. But there is also a similar phenomenon on the scale of days (the rejuvenation one feels after rest days) and on the scale of months (burnout and such). Are these the same basic phenomenon on different timescales?
  • On the scale of days, I find that my subjective rest-o-meter is charged up if I take a rest day, even if I spend that rest day working on fairly cognitively intensive side projects.
    • This might be because there’s a kind of new project energy, or new project optimism?
  • Mania and hypomania entail limitless mental energy.
  • People seem to be able to play video games for hours and hours without depleting mental energy. Does this include problem solving games, or puzzle games?
    • Also, just because they can play indefinitely does not mean that their performance doesn’t drop. Does performance drop, across hours of playing, say, snakebird?
  • For that matter, does performance decline on a task correlate with the phenomenological “running out of energy”? Maybe those are separate systems.
comment by gilch · 2019-09-04T05:53:12.904Z · score: 6 (2 votes) · LW · GW

On Hypothesis 3, the brain may build up waste as a byproduct of its metabolism when it's working harder than normal, just as muscles do. Cleaning up this buildup seems to be one of the functions of sleep. Even brainless animals like jellyfish sleep. They do have neurons though.

comment by G Gordon Worley III (gworley) · 2019-08-13T19:57:32.655Z · score: 6 (3 votes) · LW · GW

I also think it's reasonable to think that multiple things may be going on that result in a theory of mental energy. For example, hypotheses 1 and 2 could both be true and result in different causes of similar behavior. I bring this up because I think of those as two different things in my experience: being "full up" and needing to allow time for memory consolidation, where I can still force my attention, it just doesn't take in new information, vs. being unable to force the direction of attention generally.

comment by elityre · 2019-09-01T04:57:12.787Z · score: 3 (2 votes) · LW · GW

Yeah. I think you're on to something here. My current read is that "mental energy" is at least 3 things.

Can you elaborate on what "knowledge saturation" feels like for you?

comment by G Gordon Worley III (gworley) · 2019-09-02T16:40:31.733Z · score: 2 (1 votes) · LW · GW

Sure. It feels like my head is "full", although the felt sense is more like my head has gone from being porous and sponge-like to hard and concrete-like. When I try to read or listen to something I can feel it "bounce off" in that I can't hold the thought in memory beyond forcing it to stay in short term memory.

comment by mr-hire · 2019-09-02T02:50:04.934Z · score: 3 (2 votes) · LW · GW

Isn't it possible that there's some other biological sink that is time delayed from caloric energy? Like say, a very specific part of your brain needs a very specific protein, and only holds enough of that protein for 4 hours? And it can take hours to build that protein back up. This seems to me to be at least somewhat likely.

comment by Ruby · 2019-09-02T16:44:04.425Z · score: 2 (1 votes) · LW · GW

Someone smart once made a case like this to me in support of a specific substance (can't remember which) as a nootropic, though I'm a bit skeptical.

comment by eigen · 2019-09-01T17:13:13.623Z · score: 2 (2 votes) · LW · GW

I think about this a lot. I'm currently dangling with the fourth Hypothesis, which seems more correct to me and one where I can actually do something to ameliorate the trade-off implied by it.

In this comment [LW · GW], I talk about what it means to me and how I can do something about it, which, in summary, is to use Anki a lot and change subjects when working memory gets overloaded. It's important to note that mathematics is sort of different from other subjects, since concepts build on each other and you need to keep up with what all of them mean and entail, so we may be bound to reach an overload faster in that sense.

A few notes about your other hypothesis:

Hypothesis 1c:

it doesn’t seem obvious why the computations of doing math would be more costly than those for watching TV.

It's because we're not used to it. Some things come easier than others; some things are more closely similar to what we have been doing for 60,000 years (math is not one of them). So we flinch from that which we are not used to. Although, adaptation is easy and the major hurdle is only at the beginning.


This seems plausible for the activity of doing math, which involves many moments of frustration, which might be meaningfully micro-painful.

It may also mean that the reward system is different. It's difficult to see, as we explore a piece of mathematics, how fulfilling it is, when we know that we may not be getting anywhere. So the inherent reward is missing or has to be more artificially created.

Hypothesis 1d:


It seems plausible that mentally taxing activities are taxing to the extent that they involve processing ambiguity, and doing a search for the best template to apply.

This seems correct to me. Consider the following: “This statement is false”.

Thinking about it (or iterating on that statement) for more than a few seconds is quickly bound to make us flinch away. How many other things take this form? I bet there are many.


For the monkeys that had “really good” plans for how to achieve their goals, those plans never panned out, while the monkeys that were impulsive some of the time actually did better at the reproduction game?

Instead of working to trust System 2, is there a way to train System 1? It seems more apt to me, like training tactics in chess or to make rapid calculations.

Thank you for the good post, I'd really like to further know more about your findings.

comment by Viliam · 2019-08-14T22:31:22.698Z · score: 2 (1 votes) · LW · GW

Seems to me that mental energy is lost by frustration. If what you are doing is fun, you can do it for a long time; if it frustrates you at every moment, you will get "tired" soon.

The exact mechanism... I guess is that some part of the brain takes frustration as evidence that this is not the right thing to do, and suggests doing something else. (Would correspond to "1b" in your model?)

comment by AprilSR · 2019-08-14T00:11:18.010Z · score: 2 (2 votes) · LW · GW

I’ve definitely experienced mental exhaustion from video games before - particularly when trying to do an especially difficult task.

comment by elityre · 2019-10-26T14:48:03.638Z · score: 37 (12 votes) · LW · GW

New post: Some notes on Von Neumann, as a human being

I recently read Prisoner’s Dilemma, which is half an introduction to very elementary game theory, and half a biography of John Von Neumann, and watched this old PBS documentary about the man.

I’m glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I’ve written the words “Von Neumann Level Intelligence” in an AI strategy document, or speculated about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn’t seem to have the entrepreneurial or managerial dispositions.)

Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits.

Watching this first clip, I noticed that I was surprised by a number of things.

  1. That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent.
  2. That he was of middling height (somewhat shorter than the presenter he’s talking to).
  3. The thing he is saying is the sort of thing that I would expect to hear from any scientist in the public eye, “science education is important.” There is something revealing about Von Neumann, despite being the smartest person in the world, saying basically what I would expect Neil DeGrasse Tyson to say in an interview. A lot of the time he was wearing his “scientist / public intellectual” hat, not the “smartest person ever to live” hat.

Some other notes of interest:

He was not a skilled poker player, which punctured my assumption that Von Neumann was omnicompetent. (pg. 5) Nevertheless, poker was among the first inspirations for game theory. (When I told this to Steph, she quipped “Oh. He wasn’t any good at it, so he developed a theory from first principles, describing optimal play?” For all I know, that might be spot on.)

Perhaps relatedly, he claimed he had low sales resistance, and so would have his wife come clothes shopping with him. (pg. 21)

He was sexually crude, and perhaps a bit misogynistic. Eugene Wigner stated that “Johny believed in having sex, in pleasure, but not in emotional attachment. He was interested in immediate pleasure and little comprehension of emotions in relationships and mostly saw women in terms of their bodies.” The journalist Steve Heimes wrote “upon entering an office where a pretty secretary was working, von Neumann habitually would bend way over, more or less trying to look up her dress.” (pg. 28) Not surprisingly, his relationship with his wife, Klara, was tumultuous, to say the least.

He did, however, maintain a strong, lifelong relationship with his mother (who died the same year that he did).

Overall, he gives the impression of being a genius overgrown child.

Unlike many of his colleagues, he seemed not to share the pangs of conscience that afflicted many of the bomb creators. Rather than going back to academia following the war, he continued doing work for the government, including the development of the hydrogen bomb.

Von Neumann advocated preventative war: giving the Soviet Union an ultimatum to join a world government, backed by the threat of (and probable enactment of) nuclear attack, while the US still had a nuclear monopoly. He famously said of the matter, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock.”

This attitude was certainly influenced by his work on game theory, but it should also be noted that Von Neumann hated communism.

Richard Feynman reports that Von Neumann, in their walks through the Los Alamos desert, convinced him to adopt an attitude of “social irresponsibility”, that one “didn’t have to be responsible for the world he was in.”

Prisoner’s Dilemma says that he and his collaborators “pursued patents less aggressively than they could have”. Edward Teller commented, “probably the IBM company owes half its money to John Von Neumann.” (pg. 76)

So he was not very entrepreneurial, which is a bit of a shame, because if he had the disposition he probably could have made sooooo much money / really taken substantial steps towards taking over the world. (He certainly had the energy to be an entrepreneur: he only slept for a few hours a night, and was working for basically all of his waking hours.)

He famously always wore a grey oxford 3-piece suit, including when playing tennis with Stanislaw Ulam, or when riding a donkey down the Grand Canyon. But I am not clear why. Was that more comfortable? Did he think it made him look good? Did he just not want to have to ever think about clothing, and so preferred to be over-hot in the middle of the Los Alamos desert, rather than need to think about whether today was “shirtsleeves weather”?

Von Neumann himself once commented on the strange fact of so many Hungarian geniuses growing up in such a small area, in his generation:

Stanislaw Ulam recalled that when Von Neumann was asked about this “statistically unlikely” Hungarian phenomenon, Von Neumann “would say that it was a coincidence of some cultural factors which he could not make precise: an external pressure on the whole society of this part of Central Europe, a subconscious feeling of extreme insecurity in individual, and the necessity of producing the unusual or facing extinction.” (pg. 66)

One of the things that surprised me most was that it seems that, despite being possibly the smartest person in modernity, he would have benefited from attending a CFAR workshop.

For one thing, at the end of his life, he was terrified of dying. But throughout the course of his life he made many reckless choices with his health.

He ate gluttonously and became fatter and fatter over the course of his life. (One friend remarked that he “could count anything but calories.”)

Furthermore, he seemed to regularly risk his life when driving.

Von Neumann was an aggressive and apparently reckless driver. He supposedly totaled his car every year or so. An intersection in Princeton was nicknamed “Von Neumann corner” for all the auto accidents he had there. Records of accidents and speeding arrests are preserved in his papers. [The book goes on to list a number of such accidents.] (pg. 25)

(Amusingly, Von Neumann’s reckless driving seems due, not to drinking and driving, but to singing and driving. “He would sway back and forth, turning the steering wheel in time with the music.”)

I think I would call this a bug.

On another thread, one of his friends (the documentary didn’t identify which) expressed that he was over-impressed by powerful people, and didn’t make effective tradeoffs.

I wish he’d been more economical with his time in that respect. For example, if people called him to Washington or elsewhere, he would very readily go and so on, instead of having these people come to him. It was much more important, I think, he should have saved his time and effort.
He felt, when the government called, [that] one had to go, it was a patriotic duty, and as I said before he was a very devoted citizen of the country. And I think one of the things that particularly pleased him was any recognition that came sort-of from the government. In fact, in that sense I felt that he was sometimes somewhat peculiar that he would be impressed by government officials or generals and so on. If a big uniform appeared that made more of an impression than it should have. It was odd.
But it shows that he was a person of many different and sometimes self contradictory facets, I think.

Stanislaw Ulam speculated, “I think he had a hidden admiration for people and organizations that could be tough and ruthless.” (pg. 179)

From these statements, it seems like Von Neumann leapt at chances to seem useful or important to the government, somewhat unreflectively.

These anecdotes suggest that Von Neumann would have gotten value out of Goal Factoring, or Units of Exchange, or IDC (possibly there was something deeper going on, regarding blindspots around death, or status, but I think the point still stands, and he would have benefited from IDC).

Despite being the discoverer/inventor of VNM utility theory, and founding the field of Game Theory (concerned with rational choice), it seems to me that Von Neumann did far less to import the insights of the math into his actual life than, say, Critch.

(I wonder aloud if this is because Von Neumann was born and came of age before the development of cognitive science. I speculate that the importance of actually applying theories of rationality in practice only becomes obvious after Tversky and Kahneman demonstrated that humans are not rational by default. (In evidence against this view: Eliezer seems to have been very concerned with thinking clearly, and being sane, before encountering Heuristics and Biases in his (I believe) mid 20s. He was exposed to Evo Psych, though.))

Also, he converted to Catholicism at the end of his life, based on Pascal’s Wager. He commented “So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end”, and “There probably has to be a God. Many things are easier to explain if there is than if there isn’t.”

(According to wikipedia, this deathbed conversion did not give him much comfort.)

This suggests that he would have gotten value out of reading the sequences, in addition to attending a CFAR workshop.

comment by Viliam · 2019-10-27T10:14:46.773Z · score: 4 (2 votes) · LW · GW

Thank you, this is very interesting!

Seems to me the most important lesson here is "even if you are John von Neumann, you can't take over the world alone."

First, because no matter how smart you are, you will have blind spots.

Second, because your time is still limited to 24 hours a day; even if you'd decide to focus on things you have been neglecting until now, you would have to start neglecting the things you have been focusing on until now. Being better at poker (converting your smartness to money more directly), living healthier and therefore on average longer, developing social skills, and being strategic in gaining power... would perhaps come at a cost of not having invented half of the stuff. When you are John von Neumann, your time has insane opportunity costs.

comment by Liam Donovan (liam-donovan) · 2019-10-27T13:45:53.458Z · score: 1 (1 votes) · LW · GW

Is there any information on how Von Neumann came to believe Catholicism was the correct religion for Pascal's Wager purposes? "My wife is Catholic" doesn't seem like very strong evidence...

comment by elityre · 2019-10-28T15:09:26.860Z · score: 3 (2 votes) · LW · GW

I don't know why Catholicism.

I note that it does seem to be the religion of choice for former atheists, or at least for rationalists. I know of several rationalists who converted to Catholicism, but none who have converted to any other religion.

comment by elityre · 2019-09-28T07:16:20.129Z · score: 21 (6 votes) · LW · GW

New post: The Basic Double Crux Pattern

[This is a draft, to be posted on LessWrong soon.]

I’ve spent a lot of time developing tools and frameworks for bridging "intractable" disagreements. I’m also the person affiliated with CFAR who has taught Double Crux the most, and done the most work on it.

People often express to me something to the effect, “The important thing about Double Crux is all the low level habits of mind: being curious, being open to changing your mind, paraphrasing to check that you’ve understood, operationalizing, etc. The ‘Double Crux’ framework, itself is not very important.”

I half agree with that sentiment. I do think that those low level cognitive and conversational patterns are the most important thing, and at Double Crux trainings that I have run, most of the time is spent focusing on specific exercises to instill those low level TAPs.

However, I don’t think that the only value of the Double Crux schema is in training those low level habits. Double cruxes are extremely powerful machines that allow one to identify, if not the most efficient conversational path, a very high efficiency conversational path. Effectively navigating down a chain of Double Cruxes is like magic. So I’m sad when people write it off as useless.

In this post, I’m going to try and outline the basic Double Crux pattern, the series of 4 moves that makes Double Crux work, and give a (simple, silly) example of that pattern in action.

These four moves are not (always) sufficient for making a Double Crux conversation work; that also depends on a number of other mental habits and TAPs. But this pattern is, according to me, at the core of the Double Crux formalism.

The pattern:

The core Double Crux pattern is as follows. For simplicity, I have described this in the form of a 3-person Double Crux conversation, with two participants and a facilitator. Of course, one can execute these same moves in a 2 person conversation, as one of the participants. But that additional complexity is hard to manage for beginners.

The pattern has two parts (finding a crux, and finding a double crux), and each part is composed of 2 main facilitation moves.

Those four moves are...

  1. Clarifying that you understood the first person's point.
  2. Checking if that point is a crux.
  3. Checking the second person's belief about the truth value of the first person's crux.
  4. Checking if the first person's crux is also a crux for the second person.

In practice: 

[The version of this section on my blog has color coding and special formatting.]

The conversational flow of these moves looks something like this:

Finding a crux of participant 1:

P1: I think [x] because of [y]

Facilitator: (paraphrasing, and checking for understanding) It sounds like you think [x] because of [y]?

P1: Yep!

Facilitator: (checking for cruxyness) If you didn’t think [y], would you change your mind about [x]?

P1: Yes.

Facilitator: (signposting) It sounds like [y] is a crux for [x] for you.

Checking if it is also a crux for participant 2: 

Facilitator: Do you think [y]?

P2: No.

Facilitator: (checking for a Double Crux) If you did think [y], would that change your mind about [x]?

P2: Yes.

Facilitator: It sounds like [y] is a Double Crux.

[Recurse, running the same pattern on [Y] ]
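
(For readers who think in code, here is a toy sketch of this best-case flow, in Python. This is my own illustration, not CFAR material; the `ask` helper and the tea-related claim strings are made-up stand-ins. The point is just the order of the four checks.)

```python
# A toy sketch of the best-case Double Crux flow above.
# `ask` just reads y/n answers from the console; in real life it stands for
# the facilitator actually talking with a participant.

def ask(question: str) -> bool:
    """Stand-in for asking a participant a yes/no question."""
    return input(f"{question} [y/n] ").strip().lower().startswith("y")

def find_double_crux(x: str, y: str) -> bool:
    """Run the four basic moves on claim [x] and candidate crux [y]."""
    # Move 1: clarify that you understood participant 1's point.
    if not ask(f"P1: it sounds like you think [{x}] because of [{y}]?"):
        return False  # misunderstood; go back and re-paraphrase

    # Move 2: check whether [y] is a crux for participant 1.
    if not ask(f"P1: if you didn't think [{y}], would you change your mind about [{x}]?"):
        return False  # not a crux; ask for another reason

    # Move 3: check participant 2's belief about the truth value of [y].
    if ask(f"P2: do you think [{y}]?"):
        return False  # no disagreement about [y]; the disagreement lives elsewhere

    # Move 4: check whether [y] is also a crux for participant 2.
    if not ask(f"P2: if you did think [{y}], would that change your mind about [{x}]?"):
        return False  # a crux for P1 only

    print(f"It sounds like [{y}] is a double crux! Recurse on [{y}].")
    return True

if __name__ == "__main__":
    find_double_crux("people shouldn't drink tea", "tea causes cancer")
```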

Obviously, in actual conversation, there is a lot more complexity, and a lot of other things that are going on.

For one thing, I’ve only outlined the best case pattern, where the participants give exactly the most convenient answer for moving the conversation forward (yes, yes, no, yes). In actual practice, it is quite likely that one of those answers will be reversed, and you’ll have to compensate.

For another thing, real conversations are rarely so simple. You might have to do a lot of conversational work to clarify the claims enough that you can ask if B is a crux for A (for instance when B is nonsensical to one of the participants). Getting through each of these steps might take fifteen minutes, in which case rather than four basic moves, this pattern describes four phases of conversation. (I claim that one of the core skills of a savvy facilitator is tracking which stage the conversation is at, which goals you have successfully hit, and what the current proximal subgoal is.)

There is also a judgment call about which person to treat as “participant 1” (the person who generates the point that is tested for cruxyness). As a first order heuristic, the person who is closer to making a positive claim over and above the default should usually be the “p1”. But this is only one heuristic.

Example:

This is an intentionally silly, over-the-top example, for demonstrating the pattern without any unnecessary complexity. I'll publish a somewhat more realistic example in the next few days.

Two people, Alex and Barbra, disagree about tea: Alex thinks that tea is great, and drinks it all the time, and thinks that more people should drink tea, and Barbra thinks that tea is bad, and no one should drink tea.

Facilitator: So, Barbra, why do you think tea is bad?
Barbra: Well it's really quite simple. You see, tea causes cancer.
Facilitator: Let me check if I've got that: you think that tea causes cancer?
Barbra: That's right.
Facilitator: Wow. Ok. Well if you found out that tea actually didn't cause cancer, would you be fine with people drinking tea?
Barbra: Yeah. Really the main thing that I'm concerned with is the cancer-causing. If tea didn't cause cancer, then it seems like tea would be fine.
Facilitator: Cool. Well it sounds like this is a crux for you Barb. Alex, do you currently think that tea causes cancer?
Alex: No. That sounds like crazy-talk to me.
Facilitator: Ok. But aside from how realistic it seems right now, if you found out that tea actually does cause cancer, would you change your mind about people drinking tea?
Alex: Well, to be honest, I've always been opposed to cancer, so yeah, if I found out that tea causes cancer, then I would think that people shouldn't drink tea.
Facilitator: Well, it sounds like we have a double crux!

In a real conversation, it often doesn't go this smoothly. But this is the rhythm of Double Crux, at least as I apply it.

That's the basic Double Crux pattern. As noted there are a number of other methods and sub-skills that are (often) necessary to make a Double Crux conversation work, but this is my current best attempt at a minimum compression of the basic engine of finding double cruxes.

I made up a more realistic example here, and I might make more or better examples.

comment by elityre · 2019-08-19T22:36:06.928Z · score: 16 (6 votes) · LW · GW

Old post: A mechanistic description of status

[This is an essay that I’ve had bopping around in my head for a long time. I’m not sure if this says anything usefully new, but it might click with some folks. If you haven’t read Social Status: Down the Rabbit Hole on Kevin Simler’s excellent blog, Melting Asphalt, read that first. I think this is pretty bad and needs to be rewritten and maybe expanded substantially, but this blog is called “musings and rough drafts.”]

In this post, I’m going to outline how I think about status. In particular, I want to give a mechanistic account of how status necessarily arises, given some set of axioms, in much the same way one can show that evolution by natural selection must necessarily occur given the axioms of 1) inheritance of traits 2) variance in reproductive success based on variance in traits and 3) mutation.

(I am not claiming any particular skill at navigating status relationships, any more than a student of sports-biology is necessarily a skilled basketball player.)

By “status” I mean prestige-status.

Axiom 1: People have goals.

That is, for any given human, there are some things that they want. This can include just about anything. You might want more money, more sex, a ninja-turtles lunchbox, a new car, to have interesting conversations, to become an expert tennis player, to move to New York etc.

Axiom 2: There are people who control resources relevant to other people achieving their goals.

The kinds of resources are as varied as the goals one can have.

Thinking about status dynamics and the like, people often focus on the particularly convergent resources, like money. But resources that are only relevant to a specific goal are just as much a part of the dynamics I’m about to describe.

Knowing a bunch about late 16th century Swedish architecture is controlling a goal-relevant resource, if someone has the goal of learning more about 16th century Swedish architecture.

Just being a fun person to spend time with (due to being particularly attractive, or funny, or interesting to talk to, or whatever) is a resource relevant to other people’s goals.

Axiom 3: People are more willing to help (offer favors to) a person who can help them achieve their goals.

Simply stated, you’re apt to offer to help a person with their goals if it seems like they can help you with yours, because you hope they’ll reciprocate. You’re willing to make a trade with, or ally with such people, because it seems likely to be beneficial to you. At minimum, you don’t want to get on their bad side.

(Notably, there are two factors that go into one’s assessment of another person’s usefulness: if they control a resource relevant to one of your goals, and if you expect them to reciprocate.

This produces a dynamic whereby A’s willingness to ally with B is determined by something like the product of

  • A’s assessment of B’s power (as relevant to A’s goals), and
  • A’s assessment of B’s probability of helping (which might translate into integrity, niceness, etc.)

If a person is a jerk, they need to be very powerful-relative-to-your-goals to make allying with them worthwhile.)

All of this seems good so far, but notice that we have up to this point only described individual pair-wise transactions and pair-wise relationships. People speak about “status” as an attribute that someone can possess or lack. How does the dynamic of a person being “high status” arise from the flux of individual transactions?

Lemma 1: One of the resources that a person can control is other people’s willingness to offer them favors.

With this lemma, the system folds in on itself, and the individual transactions cohere into a mostly-stable status hierarchy.

Given lemma 1, a person doesn’t need to personally control resources relevant to your goals, they just need to be in a position such that someone who is relevant to your goals will privilege them.

As an example, suppose that you’re introduced to someone who is very well respected in your local social group: person-W. Your assessment might be that W, directly, doesn’t have anything that you need. But because person-W is well-respected, others in your social group are likely to offer favors to him/her. Therefore, it’s useful for person-W to like you, because then they are more apt to call on other people’s favors on your behalf.

(All the usual caveats apply about how this is subconscious, and humans are adaptation-executors and don’t do explicit, verbal assessments of how useful a person will be to them, but rely on emotional heuristics that approximate explicit assessment.)

This causes the mess of status transactions to reinforce and stabilize into a mostly-static hierarchy. The mass of individual A-privileges-B-on-the-basis-of-A’s-goals flattens out, into each person having a single “score” which determines to what degree each other person privileges them.

(It’s a little more complicated than that, because people who have access to their own resources have less need of help from others. So a person’s effective status (the status-level at which you treat them) is closer to their status minus your status. But this is complicated again because people are motivated not to be dicks (that’s bad for business), and respecting other people’s status is important to not being a dick.)
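
(A toy numerical sketch of how I imagine this folding-in working. All the numbers and the update rule are made up; the point is only that pairwise power-times-reciprocity assessments, once other people's willingness is itself counted as a resource, settle into a single rough score per person.)

```python
# Toy model: everything here is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5  # five people

# Axiom 2: each person directly controls some goal-relevant resources.
direct_resources = rng.uniform(0.0, 1.0, size=n)
# Axiom 3 (second factor): how likely each person is judged to reciprocate.
reciprocity = rng.uniform(0.3, 1.0, size=n)

# willingness[a, b] = a's willingness to offer favors to b,
# initially based only on b's direct resources and reciprocity.
willingness = np.tile(direct_resources * reciprocity, (n, 1))

# Lemma 1: others' willingness to favor b is itself a resource b controls,
# so fold the system in on itself and iterate until it settles.
for _ in range(50):
    favor_pull = willingness.mean(axis=0)              # favors b can call on
    effective_resources = direct_resources + favor_pull
    willingness = np.tile(effective_resources * reciprocity, (n, 1))
    willingness /= willingness.max()                   # keep the numbers bounded

status = willingness.mean(axis=0)  # each person's single rough "score"
print(np.round(status, 2))
```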

[more stuff here.]

comment by Kaj_Sotala · 2019-08-20T06:37:34.898Z · score: 5 (2 votes) · LW · GW

Related: The red paperclip theory of status [LW · GW] describes status as a form of optimization power, specifically one that can be used to influence a group.

The name of the game is to convert the temporary power gained from (say) a dominance behaviour into something further, bringing you closer to something you desire: reproduction, money, a particular social position...

comment by Raemon · 2019-08-20T06:13:11.968Z · score: 5 (2 votes) · LW · GW

(it says "more stuff here" but links to your overall blog, not sure if that meant to be a link to a specific post)

comment by elityre · 2019-09-04T07:21:07.051Z · score: 11 (7 votes) · LW · GW

[Real short post. Random. Complete speculation.]

Childhood lead exposure reduces one’s IQ, and also causes one to be more impulsive and aggressive.

I always assumed that the impulsiveness was due, basically, to your executive function machinery working less well. So you have less self control.

But maybe the reason for the IQ-impulsiveness connection is that if you have a lower IQ, all of your subagents/subprocesses are less smart. Because they’re worse at planning and modeling the world, the only ways they know how to get their needs met are very direct, very simple action-plans/strategies. It’s not so much that [with a higher IQ] you’re better at controlling your anger, as that the part of you that would be angry is less so, because it has other ways of getting its needs met.

comment by jimrandomh · 2019-09-10T00:05:13.511Z · score: 7 (2 votes) · LW · GW

A slightly different spin on this model: it's not about the types of strategies people generate, but the number. If you think about something and only come up with one strategy, you'll do it without hesitation; if you generate three strategies, you'll pause to think about which is the right one. So people who can't come up with as many strategies are impulsive.

comment by elityre · 2019-09-10T14:33:35.013Z · score: 1 (1 votes) · LW · GW

It seems like this might be testable. If you force impulsive folk to wait and think, do they generate more ideas for how to proceed?

comment by capybaralet · 2019-09-07T05:44:23.330Z · score: 1 (1 votes) · LW · GW

This reminded me of the argument that superintelligent agents will be very good at coordinating and will just divvy up the multiverse and be done with it.

It would be interesting to do an experimental study of how the intelligence profile of a population influences the level of cooperation between them.

comment by elityre · 2019-09-09T04:30:20.271Z · score: 2 (2 votes) · LW · GW

I think that's what the book referenced here is about.

comment by elityre · 2019-06-02T22:16:00.239Z · score: 10 (5 votes) · LW · GW

New post: some musings on deliberate practice

comment by Raemon · 2019-06-02T23:13:33.849Z · score: 7 (3 votes) · LW · GW

Thanks! I just read through a few of your most recent posts and found them all real useful.

comment by elityre · 2019-06-04T02:41:34.522Z · score: 5 (3 votes) · LW · GW

Cool! I'd be glad to hear more. I don't have much of a sense of which things I write are useful, or how.

comment by Hazard · 2019-11-05T00:15:07.140Z · score: 2 (1 votes) · LW · GW

Relating to the "Perception of Progress" bit at the end. I can confirm that for a handful of physical skills I practice, there can be a big disconnect between Perception of Progress and Progress from a given session. Sometimes this looks like working on a piece of sleight of hand, it feeling weird and awkward, and then the next day suddenly I'm a lot better at it, much more than I was at any point in the previous day's practice.

I've got a hazy memory of a breakdancer blogging about how a particular shade of "no progress fumbling" can be a signal that a certain amount of "unlearning" is happening, though I can't find the source to vet it.

comment by elityre · 2019-11-12T18:00:26.584Z · score: 9 (4 votes) · LW · GW

New (short) post: Desires vs. Reflexes

[Epistemic status: a quick thought that I had a minute ago.]

There are goals / desires (I want to have sex, I want to stop working, I want to eat ice cream) and there are reflexes (anger, “wasted motions”, complaining about a problem, etc.).

If you try and squash goals / desires, they will often (not always?) resurface around the side, or find some way to get met. (Why not always? What is the difference between those that do and those that don't?) You need to bargain with them, or design outlet policies for them.

Reflexes on the other hand are strategies / motions that are more or less habitual to you. These you train or untrain.

comment by elityre · 2019-08-05T20:33:04.751Z · score: 9 (5 votes) · LW · GW

new post: Intro to and outline of a sequence on a productivity system

comment by eigen · 2019-08-06T23:04:15.756Z · score: 3 (2 votes) · LW · GW

I'm interested about knowing more about the meditation aspect and how it relates to productivity!

comment by mr-hire · 2019-08-06T16:49:13.019Z · score: 2 (1 votes) · LW · GW

I'm currently running a pilot program that takes a very similar psychological slant on productivity and procrastination, and planning to write a sequence starting in the next week or so. It covers a lot of the same subjects, including habits, ambiguity or overwhelm aversion, coercion aversion, and creating good relationships with parts. Maybe we should chat!

comment by elityre · 2019-06-24T04:34:05.117Z · score: 9 (2 votes) · LW · GW

new (boring) post on controlled actions.

comment by elityre · 2019-06-04T08:33:53.379Z · score: 9 (2 votes) · LW · GW

New post: Why does outlining my day in advance help so much?

comment by rk · 2019-06-04T14:49:07.731Z · score: 1 (1 votes) · LW · GW

This link (and the one for "Why do we fear the twinge of starting?") is broken (I think it's an admin view?).

(Correct link)

comment by elityre · 2019-06-04T16:05:44.620Z · score: 1 (1 votes) · LW · GW

They should both be fixed now.

Thanks!

comment by elityre · 2019-11-09T05:01:41.111Z · score: 8 (4 votes) · LW · GW

Totally an experiment, I'm trying out posting my raw notes from a personal review / theorizing session, in my short form. I'd be glad to hear people's thoughts.

This is written for me, straight out of my personal Roam repository. The formatting is a little messed up because LessWrong's bullets don't support indefinite levels of nesting.

This one is about Urge-y-ness / reactivity / compulsiveness

  • I don't know if I'm naming this right. I think I might be lumping categories together.
  • Let's start with what I know:
    • There are three different experiences, which might turn out to have a common cause, or which might turn out to be insufficiently differentiated
      1. I sometimes experience a compulsive need to do something or finish something.
        1. examples:
          1. That time when I was trying to make an audiobook of Focusing: Learn from the Masters
          2. That time when I was flying to Princeton to give a talk, and I was frustratedly trying to add photos to some dating app.
      2. Sometimes I am anxious or agitated (often with a feeling in my belly), and I find myself reaching for distraction, often youtube or webcomics or porn.
      3. Sometimes, I don't seem to be anxious, but I still default to immediate gratification behaviors, instead of doing satisfying focused work ("my attention like a plow, heavy with inertia, deep in the earth, and cutting forward"). I might think about working, and then deflect to youtube or webcomics or porn.
        1. I think this has to do with having a thought or urge, and then acting on it unreflectively.
        2. examples:
          1. I think I've been like that for much of the past two days. [2019-11-8]
    • These might be different states, each of which is high on some axis: something like reactivity (as opposed to responsiveness) or impulsiveness or compulsiveness.
    • If so, the third case feels most pure. I think I'll focus on that one first, and then see if anxiety needs a separate analysis.
    • Theorizing about non-anxious immediate gratification
      • What is it?
      • What is the cause / structure?
        • Hypotheses:
          1. It might be that I have some unmet need, and the reactivity is trying to meet that need or cover up the pain of the unmet need.
          2. This suggests that the main goal should be trying to uncover the need.
          3. Note that my current urgeyness really doesn't feel like it has an unmet need underlying it. It feels more like I just have a bad habit, locally. But maybe I'm not aware of the neglected need?
          4. If it is an unmet need or a fear, I bet it is the feeling of overwhelm. That actually matches a lot. I do feel like I have a huge number of things on my plate and even though I'm not feeling anxiety per se, I find myself bouncing off them.
          5. In particular, I have a lot to write, but have also been feeling resistance to starting on my writing projects, because there are so many of them and once I start I'll have loose threads out and open. Right now, things are a little bit tucked away (in that I have outlines of almost everything), but very far from completed, in that I have hundreds of pages to write, and I'm a little afraid of losing the content that feels kind of precariously balanced in my mind, and if I start writing I might lose some of it somehow.
          6. This also fits with the data that makes me feel like a positive feedback attractor: when I can get moving in the right way, my overwhelm becomes actionable, and I fall towards effective work. When I can't get enough momentum such that my affective system believes that I can deal with the overwhelm, I'll continue to bounce off.
          7. Ok. So under this hypothesis, this kind of thing is caused by an aversion, just like everything else.
          8. This predicts that just meditating might or might not alleviate the urgeyness: it doesn't solve the problem of the aversion, but it might buy me enough [[metacognitive space]] to not be flinching away.
          9. It might be a matter of "short term habit". My actions have an influence on my later actions: acting on urges causes me to be more likely to act on urges (and vice versa), so there can be positive feedback in both directions.
          10. Rather than a positive thing, it might be better to think of it as the absence of a loaded up goal-chain.
          11. Maybe this is the inverse of [[Productivity Momentum]]?
        • My takeaway from the above hypotheses is that the urgey-ness, in this case, is either the result of an aversion (overwhelm aversion in particular), or it is an attractor state, due to my actions training a short term habit or action-propensity towards immediate reaction to my urges.
        • Some evidence and posits
          • I have some belief that this is more common when I have eaten a lot of sugar, but that might be wrong.
          • I had thought that exercise pushes against reactivity, but I strength trained pretty hard yesterday, and that didn't seem to make much of a difference today.
          • I think maybe meditation helps on this axis.
          • I have the sense that self-control trains the right short term habits.
          • Things like meditation, or fasting, or abstaining from porn/ sex.
          • Waking up and starting work immediately
          • I notice that my leg is jumping right now, as if I'm hyped up or over-energized, like with a caffeine high.
      • How should I intervene on it?
        • background maintenance
          • Some ideas:
          1. It helps to just block the distracting sites.
          2. Waking up early and scheduling my day (I already know this).
          3. Exercising?
          4. Meditating?
          • It would be good if I could do statistical analysis on these. (A rough sketch of what that might look like is below, after these notes.)
          • Maybe I can use my toggl data and compare it to my tracking data?
          • What metric?
          • How often I read webcomics or watch youtube?
          • I might try both intentional, and unintentional?
          • How much deep work I'm getting done?
        • point interventions
          • some ideas
          1. When I am feeling urgey, I should meditate?
          2. When I'm feeling urgey, I should sit quietly with a notebook (no screens), for 20 minutes, to get some metacognition about what I care about?
          3. When I'm feeling urgey, I should do focusing and try to uncover the unmet need?
          4. When I'm feeling urgey, I should do 90 seconds of intense cardio?
          • Those first two feel the most in the right vein: the thing that needs to happen is that I need to "calm down" my urgent grabbiness, and take a little space for my deeper goals to become visible.
          • I want to solicit more ideas from people.
          • I want to be able to test these.
          • The hard part about that is the transition function: how do I make the TAP work?
          • I should see if someone can help me debug this.
          • One thought that I have is to do a daily review every day, and to ask on the daily review if I missed any places where I was urgey: opportunities to try an intervention
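
(Referenced above: a rough sketch of the sort of statistical analysis I have in mind. The file and column names are hypothetical placeholders; the idea is just to join a daily toggl export against my habit tracking and eyeball the correlations.)

```python
# Rough sketch; file names and columns are hypothetical placeholders.
import pandas as pd

# One row per day from toggl, e.g.: date, deep_work_hours, youtube_hours
toggl = pd.read_csv("toggl_daily.csv", parse_dates=["date"])
# One row per day from my habit tracking, e.g.: date, exercised, meditated, woke_early
habits = pd.read_csv("tracking.csv", parse_dates=["date"])

df = toggl.merge(habits, on="date")

# Crude first pass: how do the candidate maintenance factors correlate with
# deep work and with distraction time?
factors = ["exercised", "meditated", "woke_early"]
outcomes = ["deep_work_hours", "youtube_hours"]
print(df[factors + outcomes].corr().loc[factors, outcomes])
```
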
comment by elityre · 2019-09-07T06:34:39.335Z · score: 8 (4 votes) · LW · GW

New post: Capability testing as a pseudo fire alarm

[epistemic status: a thought I had]

It seems like it would be useful to have very fine-grained measures of how smart / capable a general reasoner is, because this would allow an AGI project to carefully avoid creating a system smart enough to pose an existential risk.

I’m imagining slowly feeding a system more training data (or, alternatively, iteratively training a system with slightly more compute), and regularly checking its capability. When the system reaches “chimpanzee level” (whatever that means), you stop training it (or giving it more compute resources).

This might even be a kind of fire-alarm. If you have a known predetermined battery of tests, then when some lab develops a system that scores “at the chimp level” on that battery, that might be a signal to everyone that it’s time to pool our resources and figure out safety. (Of course, this event might alternatively precipitate a race, as everyone tries to get to human-level first.)

Probably the best way to do this would be to use both training data and compute / architecture. Start with a given architecture, then train it, slowly increasing the amount or quality of the training data, with regular tests (done on “spurs”; the agent should never have episodic memory of the tests). When increasing training data plateaus, iteratively improve the architecture in some way, either by giving the system more compute resources, or maybe making small adjustments. Again train the new version of the system, with regular tests. If you ever start to get very steep improvement, slow down and run tests more frequently.

Naively, it seems like a setup like this would prevent an AI team from overshooting and making a system that is much more capable than they think (which gives rise to all kinds of problems, like treacherous turns), regardless of how close “chimp” is to “human” on some absolute intelligence scale.
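
(Here is a minimal toy sketch of the control loop I'm imagining. Everything in it is made up: the "model" is just a number that creeps upward, and the thresholds are arbitrary. The point is only the shape of the loop: train a little, test on a memory-less spur, stop well below the danger level, and test more often if improvement gets steep.)

```python
# Toy sketch: the "model", battery, and thresholds are all made-up placeholders.
import random

CHIMP_LEVEL = 0.7   # made-up threshold on a made-up 0-to-1 capability battery
STEEP_GAIN = 0.05   # if one increment buys more than this, slow down

def capability_battery(model: float) -> float:
    """Stand-in for the predetermined battery of tests, run on a memory-less spur."""
    return model

def train_increment(model: float, amount: float) -> float:
    """Stand-in for feeding the system a bit more training data / compute."""
    return model + amount * random.uniform(0.5, 1.5)

def careful_training_run() -> float:
    model, step = 0.0, 0.05
    last_score = capability_battery(model)
    while True:
        model = train_increment(model, step)
        score = capability_battery(model)
        if score >= CHIMP_LEVEL:
            print("Hit the 'chimp level' score: stop, pool resources, figure out safety.")
            return model
        if score - last_score > STEEP_GAIN:
            step /= 2  # improvement is getting steep: take smaller steps, test more often
        last_score = score

if __name__ == "__main__":
    careful_training_run()
```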

Some considerations:

  • This, of course, depends on having metrics that work. It seems very likely that early general AIs will be minds with a very unusual “shape”, such that they have sophisticated reasoning abilities along dimensions that are not natural categories to humans, while being weak on many of the dimensions we’re testing for.
    • Although, this seems a bit surprising, since presumably we would be training it to hit those metrics? Somehow it would get better, faster, at a kind of reasoning we were not intending to train, than at the kind of reasoning that we were optimizing for?
    • Also, can we just design metrics that get around this problem, by measuring exactly what we care about, i.e. danger? Something like a metric for consequentialist reasoning?
  • This idea depends on the system not deceiving you, or intentionally underperforming on your metrics.
    • Naively, this doesn’t seem like that much of an issue, if you are doing tests regularly enough. I’m presuming (perhaps wrongly) that a chimpanzee is not smart enough to infer that it is an agent in a simulation that is overseen by human agents, and that it should try to deceive those humans. That is, we stop our progression before the agent is at the point where deception is a consideration.
      • Which means we really need to be confident about how smart something needs to be before it tries to deceive us?
        • Really? Its first attempts at tricking humans will be pretty transparent. Just as the attempts of animals / babies to trick us are pretty transparent.
  • At least one “danger point”, is when the system is capable enough to realize the instrumental value of self improving by seizing more resources.
    • How smart is this?
      • My guess is: really smart. Animals come pre-loaded with all kinds of instincts that cause them to seek out food, water, etc. These AI systems would not have an instinct to seek more training data / computation. Most humans don’t reason their way into finding ways to improve their own reasoning. If there was a chimp, even loose on the internet (whatever that means), would it figure out how to make itself smarter?
      • If the agent has experienced (and has memories of) rounds of getting smarter, as the humans give it more resources, and can identify that these improvements allow it to get more of what it wants, it might instrumentally reason that it should figure out how to get more compute / training data. But it seems easy to have a setup such that no system has episodic memories of previous improvement rounds.
        • [Note: This makes a lot less sense for an agent of the active inference paradigm]
          • Could I salvage it somehow? Maybe by making some kind of principled distinction between learning in the sense of “getting better at reasoning” (procedural), and learning in the sense of “acquiring information about the environment” (episodic).
comment by jimrandomh · 2019-09-10T01:34:00.749Z · score: 14 (6 votes) · LW · GW

In There’s No Fire Alarm for Artificial General Intelligence Eliezer argues:

A fire alarm creates common knowledge, in the you-know-I-know sense, that there is a fire; after which it is socially safe to react. When the fire alarm goes off, you know that everyone else knows there is a fire, you know you won’t lose face if you proceed to exit the building.

If I have a predetermined set of tests, this could serve as a fire alarm, but only if you've successfully built a consensus that it is one. This is hard, and the consensus would need to be quite strong. To avoid ambiguity, the test itself would need to be demonstrably resistant to being clever Hans'ed. Otherwise it would be just another milestone.

comment by elityre · 2019-09-10T14:40:37.110Z · score: 3 (2 votes) · LW · GW

I very much agree.

comment by elityre · 2019-11-08T04:06:48.231Z · score: 7 (4 votes) · LW · GW

New post: Some musings about exercise and time discount rates

[Epistemic status: a half-thought, which I started on earlier today, and which might or might not be a full thought by the time I finish writing this post.]

I’ve long counted exercise as an important component of my overall productivity and functionality. But over the past months my exercise habit has slipped some, without apparent detriment to my focus or productivity. This week, though, after coming back from a workshop, my focus and productivity haven’t really booted up.

Here’s a possible story:

Exercise (and maybe meditation) expands the effective time-horizon of my motivation system. By default, I will fall towards attractors of immediate gratification and impulsive action, but after I exercise, I tend to be tracking, and to be motivated by, progress on my longer term goals. [1]

When I am already in the midst of work (my goals are loaded up and the goal threads are primed in short term memory), this sort of short term compulsiveness causes me to fall towards task completion: I feel slightly obsessed about finishing what I’m working on.

But if I’m not already in the stream of work, seeking immediate gratification instead drives me to youtube and web comics and whatever. (Although it is important to note that I did switch my non self tracking web usage to Firefox this week, and I don’t have my usual blockers for youtube and for SMBC set up yet. That might totally account for the effect that I’m describing here.)

In short, when I’m not exercising enough, I have less metacognitive space for directing my attention and choosing what is best to do. But if I’m in the stream of work already, I need that metacognitive space less, because I’ll default to doing more of what I’m working on. (Though I think that I do end up getting obsessed with overall less important things, compared to when I am maintaining metacognitive space.) Exercise is most important for booting up and setting myself up to direct my energies.

[1] This might be due to a number of mechanisms:

  • Maybe the physical endorphin effect of exercise has me feeling good, and so my desire for immediate pleasure is sated, freeing up resources for longer term goals.
  • Or maybe exercise involves engaging in immediate discomfort for the sake of future payoff, and this shifts my “time horizon set point” or something. (Or maybe it’s that exercise is downstream of that change in set point.)
    • If meditation also has this time-horizon shifting effect, that would be evidence for this hypothesis.
    • Also if fasting has this effect.
  • Or maybe it’s the combination of both of the above: engaging in delayed gratification, with a viscerally experienced payoff, temporarily retrains my motivation system for that kind of thing.
  • Or something else.
comment by Viliam · 2019-11-08T20:46:20.365Z · score: 2 (1 votes) · LW · GW

Alternative hypothesis: maybe what expands your time horizon is not exercise and meditation per se, but the fact that you are doing several different things (work, meditation, exercise), instead of doing the same thing over and over again (work). It probably also helps that the different activities use different muscles, so that they feel completely different.

This hypothesis predicts that a combination of e.g. work, walking, and painting, could provide similar benefits compared to work only.

comment by elityre · 2019-11-08T23:44:58.870Z · score: 2 (1 votes) · LW · GW

Well, my working is often pretty varied, while my "being distracted" is pretty monotonous (watching youtube clips), so I don't think it is this one.

comment by elityre · 2019-10-28T22:40:04.584Z · score: 7 (3 votes) · LW · GW

Can someone affiliated with a university, etc., get me a PDF of this paper?

https://psycnet.apa.org/buy/1929-00104-001

It is on Scihub, but that version is missing a few pages in which they describe the methodology.

[I hope this isn't an abuse of LessWrong.]

comment by elityre · 2019-08-07T18:41:06.925Z · score: 7 (4 votes) · LW · GW

New post: Napping Protocol

comment by Raemon · 2019-08-20T06:16:47.730Z · score: 5 (2 votes) · LW · GW

Some of these seem likely to generalize and some seem likely to be more specific.

Curious about your thoughts on the "best experimental approaches to figuring out your own napping protocol."

comment by elityre · 2019-09-12T08:44:03.142Z · score: 6 (4 votes) · LW · GW

New (image) post: My strategic picture of the work that needs to be done

comment by Raemon · 2019-09-13T22:20:32.404Z · score: 1 (2 votes) · LW · GW

I edited the image into the comment box, predicting that the reason you didn't was because you didn't know you could (using markdown). Apologies if you prefer it not to be here (and can edit it back if so)

comment by elityre · 2019-09-14T08:01:22.943Z · score: 13 (3 votes) · LW · GW

In this case it seems fine to add the image, but I feel disconcerted that mods have the ability to edit my posts.

I guess it makes sense that the LessWrong team would have the technical ability to do that. But editing a user's post, without their specifically asking, feels like a pretty big breach of... not exactly trust, but something like that. It means I don't have fundamental control over what is written under my name.

That is to say, I personally request that you never edit my posts without asking (which you did, in this case) and waiting for my response. Furthermore, I think that should be a universal policy on LessWrong, though maybe this is just an idiosyncratic neurosis of mine.

comment by Raemon · 2019-09-14T09:07:09.020Z · score: 5 (2 votes) · LW · GW

Understood, and apologies.

A fairly common mod practice has been to fix typos and stuff in a sort of "move first and then ask if it was okay" thing. (I'm not confident this is the best policy, but it saves time/friction, and meanwhile I don't think anyone has had an issue with it). But, your preference definitely makes sense and if others felt the same I'd reconsider the overall policy.

(It's also the case that adding an image is a bit of a larger change than the usual typo fixing, and may have been more of an overstep of bounds)

In any case I definitely won't edit your stuff again without express permission.

comment by elityre · 2019-09-14T10:02:26.387Z · score: 1 (1 votes) · LW · GW

Cool.

: )

comment by Wei_Dai · 2019-09-14T08:23:41.752Z · score: 5 (2 votes) · LW · GW

Furthermore, I think that should be a universal policy on LessWrong, though maybe this is just an idiosyncratic neurosis of mine.

If it's not just you, it's at least pretty rare. I've seen the mods "helpfully" edit posts several times (without asking first) and this is the first time I've seen anyone complain about it.

comment by elityre · 2019-09-14T07:50:16.204Z · score: 1 (1 votes) · LW · GW

I knew that I could, and didn’t, because it didn’t seem worth it. (Thinking that I still have to upload it to a third party photo repository and link to it. It’s easier than that now?)

comment by Raemon · 2019-09-14T09:07:46.118Z · score: 3 (1 votes) · LW · GW

In this case your blog already counted as a third party repository.

comment by elityre · 2019-07-14T21:52:48.749Z · score: 5 (3 votes) · LW · GW

New (unedited) post: The bootstrapping attitude

comment by elityre · 2019-07-14T21:51:54.416Z · score: 4 (3 votes) · LW · GW

New (unedited) post: Exercise and nap, then mope, if I still want to

comment by elityre · 2019-06-04T08:16:19.621Z · score: 4 (3 votes) · LW · GW

New post: _Why_ do we fear the twinge of starting?


comment by elityre · 2019-11-13T21:08:12.093Z · score: 2 (1 votes) · LW · GW

new post: Metacognitive space


[Part of my Psychological Principles of Personal Productivity, which I am writing mostly in my Roam, now.]

Metacognitive space is a term of art that refers to a particular first person state / experience. In particular it refers to my propensity to be reflective about my urges and deliberate about the use of my resources.

I think it might literally be having the broader context of my life, including my goals and values, and my personal resource constraints loaded up in peripheral awareness.

Metacognitive space allows me to notice aversions and flinches, and take them as object, so that I can respond to them with Focusing or dialogue, instead of being swept around by them. Similarly, it seems, in practice, to reduce my propensity to act on immediate urges and temptations.

[Having MCS is the opposite of being [[{Urge-y-ness | reactivity | compulsiveness}]]?]

It allows me to “absorb” and respond to happenings in my environment, including problems and opportunities, taking considered action instead of the semi-automatic first response that occurs to me. [That sentence there feels a little fake, or maybe about something else, or maybe is just playing into a stereotype?]

When I “run out” of meta cognitive space, I will tend to become ensnared in immediate urges or short term goals. Often this will entail spinning off into distractions, or becoming obsessed with some task (of high or low importance), for up to 10 hours at a time.

Some activities that (I think) contribute to metacognitive space:

  • Rest days
  • Having a few free hours between the end of work for the day and going to bed
  • Weekly [[Scheduling]]. (In particular, weekly scheduling clarifies for me the resource constraints on my life.)
  • Daily [[Scheduling]]
  • [[meditation]], including short meditation.
    • Notably, I’m not sure if meditation is much more efficient than just taking the same time to go for a walk. I think it might be or might not be.
  • [[Exercise]]?
  • Waking up early?
  • Starting work as soon as I wake up?
    • [I’m not sure that the thing that this is contributing to is metacogntive space per se.]

[I would like to do a causal analysis on which factors contribute to metacognitive space. Could I identify it in my toggl data with good enough reliability that I can use my toggl data? I guess that’s one of the things I should test? Maybe with a survey asking me to rate my level of metacognitive space for the day every evening?]

Erosion

Usually, I find that I can maintain metacognitive space for about 3 days [test this?] without my upkeep pillars.

Often, this happens with a sense of pressure: I have a number of days of would-be-overwhelm which is translated into pressure for action. This is often good: it adds force and velocity to activity. But it also runs down the resource of my metacognitive space (and probably other resources). If I lose that higher level awareness, that pressure-as-a-forewind tends to decay into either 1) a harried, scattered, rushed feeling, 2) a myopic focus on one particular thing that I’m obsessively trying to do (it feels like an itch that I compulsively need to scratch), or 3) flinching away from it all into distraction.

[Metacognitive space is the attribute that makes the difference between absorbing, and then acting gracefully and sensibly to deal with the problems, and harried, flinching, fearful, non-productive overwhelm, in general?]

I make a point, when I am overwhelmed or would be overwhelmed, to allocate time to maintain my metacognitive space. It is especially important when I feel so busy that I don’t have time for it.

When metacognition is opposed to satisfying your needs, your needs will be opposed to metacognition

One dynamic that I think is in play, is that I have a number of needs, like the need for rest, and maybe the need for sexual release or entertainment/ stimulation. If those needs aren’t being met, there’s a sort of build up of pressure. If choosing consciously and deliberately prohibits those needs getting met, eventually they will sabotage the choosing consciously and deliberately.

From the inside, this feels like “knowing that you ‘shouldn’t’ do something (and sometimes even knowing that you’ll regret it later), but doing it anyway” or “throwing yourself away with abandon”. Often, there’s a sense of doing the dis-endorsed thing quickly, or while carefully not thinking much about it or deliberating about it: you need to do the thing before you convince yourself that you shouldn’t.

[[Research Questions]]

What is the relationship between [[metacognitive space]] and [[Rest]]?

What is the relationship between [[metacognitive space]] and [[Mental Energy]]?

comment by elityre · 2019-07-17T17:05:48.547Z · score: 1 (1 votes) · LW · GW

New post: my personal wellbeing support pillars

comment by Raemon · 2019-07-17T18:31:31.029Z · score: 10 (4 votes) · LW · GW

I'm interested in knowing your napping tools

comment by elityre · 2019-08-07T19:19:27.478Z · score: 1 (1 votes) · LW · GW

Here you go.

New post: Napping Protocol

comment by Raemon · 2019-08-07T19:21:27.460Z · score: 3 (1 votes) · LW · GW

Thanks!

comment by elityre · 2019-06-04T16:04:22.197Z · score: 1 (1 votes) · LW · GW

New post: The seed of a theory of triggeredness