Posts

How do you assess the quality / reliability of a scientific study? 2019-10-29T14:52:57.904Z · score: 63 (14 votes)
Request for stories of when quantitative reasoning was practically useful for you. 2019-09-13T07:21:43.686Z · score: 10 (4 votes)
What are the merits of signing up for cryonics with Alcor vs. with the Cryonics Institute? 2019-09-11T19:06:53.802Z · score: 20 (7 votes)
Does anyone know of a good overview of what humans know about depression? 2019-08-30T23:22:05.405Z · score: 14 (6 votes)
What is the state of the ego depletion field? 2019-08-09T20:30:44.798Z · score: 28 (11 votes)
Does it become easier, or harder, for the world to coordinate around not building AGI as time goes on? 2019-07-29T22:59:33.170Z · score: 85 (27 votes)
Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea? 2019-07-09T21:57:28.537Z · score: 21 (9 votes)
Does scientific productivity correlate with IQ? 2019-06-16T19:42:29.980Z · score: 28 (9 votes)
Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? 2019-06-16T19:12:48.358Z · score: 32 (11 votes)
Eli's shortform feed 2019-06-02T09:21:32.245Z · score: 31 (6 votes)
Historical mathematicians exhibit a birth order effect too 2018-08-21T01:52:33.807Z · score: 112 (36 votes)

Comments

Comment by elityre on Fake Morality · 2019-11-20T07:01:54.482Z · score: 6 (1 votes) · LW · GW
The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not.  If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.

Well, not necessarily.

They may not have a revulsion to murdering, so much as a fear of being murdered. A religious person might (semi-correctly? incorrectly?) be modeling that if other people didn't believe that God would punish murder, then they (that religious person) would be more likely to be killed.

But most people don't make appropriate map / territory distinctions, and so "it would be bad if other people didn't believe that God punishes murder" gets collapsed to "it would be bad if God didn't punish murder."


Comment by elityre on Are wireheads happy? · 2019-11-15T04:11:38.197Z · score: 4 (2 votes) · LW · GW

Man, this post is horrifying to me. It seems to imply (or at least suggest) a world where everyone, individually and collectively, is in the grip of mind-controlling parasites, shifting people's choices and redirecting society's resources against our best interests. We're all trapped in these mental snares where we can't even choose something different.

I've never really taken akrasia, per se, seriously, because I basically believed the claim about revealed preferences: if you're engaging in some behavior, that's because some part of you wants (and implicitly likes) the resulting consequences.

This new view is dystopic.

Comment by elityre on Are wireheads happy? · 2019-11-15T04:01:57.823Z · score: 2 (1 votes) · LW · GW
I think a crucial difference between the two cases is that non-pollution makes it even more profitable for others to pollute, which would make collective non-pollution (in the absence of a collective agreement) an unstable node. (For example, using less oil bids down the price and extends the scope of profitable uses.)

Wow. This is a really important point. I'd never realized that.

Comment by elityre on Instant stone (just add water!) · 2019-11-15T03:55:12.711Z · score: 9 (5 votes) · LW · GW

This was awesome. Thanks for sharing.

Comment by elityre on Books/Literature on resolving technical disagreements? · 2019-11-15T03:24:42.064Z · score: 8 (2 votes) · LW · GW

I've looked into this question a little, but not very far. The following are some trailheads that I have on the list to investigate, when I get around to it. My current estimation is that all of these are, at best, tangential to the problem that I (and it sounds like you) are interested in: getting to the truth of epistemic disagreements. My impression is that there are lots of things in the world that are about resolving disputes, but not many people are interested in resolving disputes to get the answer. But I haven't looked very hard.

Nevertheless...

  • The philosopher Robert Stalnaker has a theory of conversations that involves building up a series of premises that both parties agree with. If either party makes a claim that the other doesn't buy, you back up and substantiate that claim. Or something like that. I can't currently find a link to the essay in which he outlines this method (anyone have it?), but this seems the most interesting to me, of all the things on this list.
    • H/T to Nick Beckstead, who shared this with Anna, who shared it with me.
  • There's a book called How to Have Impossible Conversations. I haven't read it yet, but it seems mostly to be about having reasonable conversations about heated political / culture war style topics.
  • Erisology is the study of disagreement, a term coined by John Nerst.
  • Argument mapping is a thing that some people claim is useful for disagreement resolution. I'm not very impressed, though.
  • Bay NVC teaches something called "convergent facilitation", which is about making decisions accommodating everyone's needs, and executing meetings rapidly.
  • There's Circling, which a number of rationalists have gotten value from, including for resolving disagreement.

Most of the things that I know about, and that seem like they're in the vein of what you want, have come from our community. As you say, there's CFAR's Double Crux. Paul wrote this piece as a precursor to an AI alignment idea. Anna Salamon has been thinking about some things in this space lately. I use a variety of homegrown methods. Arbital was a large-scale attempt to solve this problem. I think the basic idea of AI safety via debate is relevant, if only for theoretical reasons (Double Crux makes use of the same principle of isolating the single most relevant branch in a huge tree of possible conversations, but Double Crux and AI safety via debate use different functions for evaluating which branch is "most relevant").

I happen to have written about another framework for disagreement resolution today; this one in particular is very much in the same family as Double Crux.


Comment by elityre on Eli's shortform feed · 2019-11-13T21:08:12.093Z · score: 10 (3 votes) · LW · GW

New post: Metacognitive space


[Part of my Psychological Principles of Personal Productivity, which I am writing mostly in my Roam, now.]

Metacognitive space is a term of art that refers to a particular first-person state / experience. In particular, it refers to my propensity to be reflective about my urges and deliberate about the use of my resources.

I think it might literally be having the broader context of my life, including my goals and values, and my personal resource constraints loaded up in peripheral awareness.

Metacognitive space allows me to notice aversions and flinches, and take them as object, so that I can respond to them with Focusing or dialogue, instead of being swept around by them. Similarly, it seems, in practice, to reduce my propensity to act on immediate urges and temptations.

[Having MCS is the opposite of being [[{Urge-y-ness | reactivity | compulsiveness}]]?]

It allows me to “absorb” and respond to happenings in my environment, including problems and opportunities, taking considered action instead of the semi-automatic first response that occurs to me. [That sentence there feels a little fake, or maybe about something else, or maybe is just playing into a stereotype?]

When I “run out” of metacognitive space, I will tend to become ensnared in immediate urges or short-term goals. Often this will entail spinning off into distractions, or becoming obsessed with some task (of high or low importance), for up to 10 hours at a time.

Some activities that (I think) contribute to metacognitive space:

  • Rest days
  • Having a few free hours between the end of work for the day and going to bed
  • Weekly [[Scheduling]]. (In particular, weekly scheduling clarifies for me the resource constraints on my life.)
  • Daily [[Scheduling]]
  • [[meditation]], including short meditation.
    • Notably, I’m not sure if meditation is much more efficient than just taking the same time to go for a walk. I think it might be or might not be.
  • [[Exercise]]?
  • Waking up early?
  • Starting work as soon as I wake up?
    • [I’m not sure that the thing that this is contributing to is metacognitive space per se.]

[I would like to do a causal analysis of which factors contribute to metacognitive space. Could I identify it in my toggl data with good enough reliability that I can use my toggl data? I guess that’s one of the things I should test? Maybe with a survey asking me to rate my level of metacognitive space for the day, every evening?]
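
[A minimal sketch of what that analysis might look like, assuming a hypothetical CSV export with one row per day, an evening self-rating of metacognitive space, and 0/1 columns for each candidate factor. All column names here are made up for illustration, and a regression like this would only give correlations, not causation.]

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical export: one row per day, an evening self-rating of
# metacognitive space (say 0-10), and 0/1 indicators for each factor.
df = pd.read_csv("daily_log.csv")

factors = ["rest_day", "free_evening", "weekly_scheduling",
           "daily_scheduling", "meditated", "exercised", "woke_early"]

X = sm.add_constant(df[factors])   # add an intercept term
y = df["metacognitive_space"]

# Ordinary least squares: each coefficient estimates how a factor
# covaries with the rating, holding the other factors fixed.
print(sm.OLS(y, X).fit().summary())
```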

Erosion

Usually, I find that I can maintain metacognitive space for about 3 days [test this?] without my upkeep pillars.

Often, this happens with a sense of pressure: I have a number of days of would-be-overwhelm, which is translated into pressure for action. This is often good: it adds force and velocity to activity. But it also runs down the resource of my metacognitive space (and probably other resources). If I lose that higher-level awareness, that pressure-as-a-tailwind tends to decay into either 1) a harried, scattered, rushed feeling, 2) a myopic focus on one particular thing that I’m obsessively trying to do (it feels like an itch that I compulsively need to scratch), or 3) flinching away from it all into distraction.

[Metacognitive space is the attribute that makes the difference between absorbing problems and then acting gracefully and sensibly to deal with them, versus harried, flinching, fearful, non-productive overwhelm, in general?]

I make a point, when I am overwhelmed or would be overwhelmed, of allocating time to maintain my metacognitive space. It is especially important when I feel so busy that I don’t have time for it.

When metacognition is opposed to satisfying your needs, your needs will be opposed to metacognition

One dynamic that I think is in play is that I have a number of needs, like the need for rest, and maybe the need for sexual release or entertainment / stimulation. If those needs aren’t being met, there’s a sort of build-up of pressure. If choosing consciously and deliberately prohibits those needs from getting met, eventually they will sabotage the choosing consciously and deliberately.

From the inside, this feels like “knowing that you ‘shouldn’t’ do something (and sometimes even knowing that you’ll regret it later), but doing it anyway” or “throwing yourself away with abandon”. Often, there’s a sense of doing the dis-endorsed thing quickly, or while carefully not thinking much about it or deliberating about it: you need to do the thing before you convince yourself that you shouldn’t.

[[Research Questions]]

What is the relationship between [[metacognitive space]] and [[Rest]]?

What is the relationship between [[metacognitive space]] and [[Mental Energy]]?

Comment by elityre on Eli's shortform feed · 2019-11-12T18:00:26.584Z · score: 9 (4 votes) · LW · GW

New (short) post: Desires vs. Reflexes

[Epistemic status: a quick thought that I had a minute ago.]

There are goals / desires (I want to have sex, I want to stop working, I want to eat ice cream) and there are reflexes (anger, “wasted motions”, complaining about a problem, etc.).

If you try and squash goals / desires, they will often (not always?) resurface around the side, or find some way to get met. (Why not always? What is the difference between those that do and those that don’t?) You need to bargain with them, or design outlet policies for them.

Reflexes, on the other hand, are strategies / motions that are more or less habitual to you. These you train or untrain.

Comment by elityre on How do you assess the quality / reliability of a scientific study? · 2019-11-10T05:07:27.358Z · score: 6 (3 votes) · LW · GW

Thanks.

This point in particular sticks with me:

I consider "pretending to have sources and reasons" a worse sin than "not giving a source or reason"

I notice that one of the things that tips me off that a scientist is good is whether her/his work demonstrates curiosity. Do they seem like they're actually trying to figure out the answer? Do they think through and address counterarguments, or just try to obscure those counterarguments?

This seems related: a person who puts no source might still be sharing their actual belief, but a person who puts a fake source seems like they're trying to sound legitimate.

Comment by elityre on Eli's shortform feed · 2019-11-09T05:01:41.111Z · score: 8 (4 votes) · LW · GW

As an experiment, I'm trying out posting my raw notes from a personal review / theorizing session in my short form. I'd be glad to hear people's thoughts.

This is written for me, straight out of my personal Roam repository. The formatting is a little messed up because LessWrong's bullets don't support indefinite levels of nesting.

This one is about Urge-y-ness / reactivity / compulsiveness

  • I don't know if I'm naming this right. I think I might be lumping categories together.
  • Let's start with what I know:
    • There are three different experiences, which might turn out to have a common cause, or which might turn out to be insufficiently differentiated
      1. I sometimes experience a compulsive need to do something or finish something.
        1. examples:
          1. That time when I was trying to make an audiobook of Focusing: Learn from the Masters
          2. That time when I was flying to Princeton to give a talk, and I was frustratedly trying to add photos to some dating app.
      2. Sometimes I am anxious or agitated (often with a feeling in my belly), and I find myself reaching for distraction, often youtube or webcomics or porn.
      3. Sometimes, I don't seem to be anxious, but I still default to immediate gratification behaviors, instead of doing satisfying focused work ("my attention like a plow, heavy with inertia, deep in the earth, and cutting forward"). I might think about working, and then deflect to youtube or webcomics or porn.
        1. I think this has to do with having a thought or urge, and then acting on it unreflectively.
        2. examples:
          1. I think I've been like that for much of the past two days. [2019-11-8]
    • These might be different states, each of which is high on some axis: something like reactivity (as opposed to responsiveness) or impulsiveness or compulsiveness.
    • If so, the third case feels most pure. I think I'll focus on that one first, and then see if anxiety needs a separate analysis.
    • Theorizing about non-anxious immediate gratification
      • What is it?
      • What is the cause / structure?
        • Hypotheses:
          1. It might be that I have some unmet need, and the reactivity is trying to meet that need or cover up the pain of the unmet need.
          2. This suggests that the main goal should be trying to uncover the need.
          3. Note that my current urgeyness really doesn't feel like it has an unmet need underlying it. It feels more like I just have a bad habit, locally. But maybe I'm not aware of the neglected need?
          4. If it is an unmet need or a fear, I bet it is the feeling of overwhelm. That actually matches a lot. I do feel like I have a huge number of things on my plate, and even though I'm not feeling anxiety per se, I find myself bouncing off them.
          5. In particular, I have a lot to write, but have also been feeling resistance to starting on my writing projects, because there are so many of them, and once I start I'll have loose threads out and open. Right now, things are a little bit tucked away (in that I have outlines of almost everything), but very far from completed, in that I have hundreds of pages to write, and I'm a little afraid of losing the content that feels kind of precariously balanced in my mind; if I start writing I might lose some of it somehow.
          6. This also fits with the data that makes me feel like a positive feedback attractor: when I can get moving in the right way, my overwhelm becomes actionable, and I fall towards effective work. When I can't get enough momentum that my affective system believes I can deal with the overwhelm, I'll continue to bounce off.
          7. Ok. So under this hypothesis, this kind of thing is caused by an aversion, just like everything else.
          8. This predicts that just meditating might or might not alleviate the urgeyness: it doesn't solve the problem of the aversion, but it might buy me enough [[metacognitive space]] to not be flinching away.
          9. It might be a matter of "short term habit". My actions have an influence on my later actions: acting on urges causes me to be more likely to act on urges (and vice versa), so there can be positive feedback in both directions.
          10. Rather than a positive thing, it might be better to think of it as the absence of a loaded up goal-chain.
          11. Maybe this is the inverse of [[Productivity Momentum]]?
        • My takeaway from the above hypotheses is that the urgeyness in this case is either the result of an aversion (overwhelm aversion in particular), or an attractor state, due to my actions training a short term habit or action-propensity towards immediate reaction to my urges.
        • Some evidence and posits
          • I have some belief that this is more common when I have eaten a lot of sugar, but that might be wrong.
          • I had thought that exercise pushes against reactivity, but I strength trained pretty hard yesterday, and that didn't seem to make much of a difference today.
          • I think maybe meditation helps on this axis.
          • I have the sense that self-control trains the right short term habits.
          • Things like meditation, or fasting, or abstaining from porn/ sex.
          • Waking up and starting work immediately
          • I notice that my leg is jumping right now, as if I'm hyped up or over-energized, like with a caffeine high.
      • How should I intervene on it?
        • background maintenance
          • Some ideas:
          1. It helps to just block the distracting sites.
          2. Waking up early and scheduling my day (I already know this).
          3. Exercising?
          4. Meditating?
          • It would be good if I could do statistical analysis on these.
          • Maybe I can use my toggl data and compare it to my tracking data?
          • What metric?
          • How often I read webcomics or watch youtube?
          • I might try both intentional and unintentional?
          • How much deep work I'm getting done?
        • point interventions
          • some ideas
          1. When I am feeling urgey, I should meditate?
          2. When I'm feeling urgey, I should sit quietly with a notebook (no screens), for 20 minutes, to get some metacognition about what I care about?
          3. When I'm feeling urgey, I should do focusing and try to uncover the unmet need?
          4. When I'm feeling urgey, I should do 90 seconds of intense cardio?
          • Those first two feel the most in the right vein: the thing that needs to happen is that I need to "calm down" my urgent grabbiness, and take a little space for my deeper goals to become visible.
          • I want to solicit more ideas from people.
          • I want to be able to test these.
          • The hard part about that is the transition function: how do I make the TAP work?
          • I should see if someone can help me debug this.
          • One thought that I have is to do a daily review every day, and to ask on the daily review if I missed any places where I was urgey: opportunities to try an intervention.

Comment by elityre on Eli's shortform feed · 2019-11-08T23:44:58.870Z · score: 2 (1 votes) · LW · GW

Well, my "working" is often pretty varied, while my "being distracted" is pretty monotonous (watching youtube clips), so I don't think it is this one.

Comment by elityre on Eli's shortform feed · 2019-11-08T04:06:48.231Z · score: 7 (4 votes) · LW · GW

New post: Some musings about exercise and time discount rates

[Epistemic status: a half-thought, which I started on earlier today, and which might or might not be a full thought by the time I finish writing this post.]

I’ve long counted exercise as an important component of my overall productivity and functionality. But over the past months my exercise habit has slipped some, without apparent detriment to my focus or productivity. Yet this week, after coming back from a workshop, my focus and productivity haven’t really booted up.

Here’s a possible story:

Exercise (and maybe meditation) expands the effective time-horizon of my motivation system. By default, I will fall towards attractors of immediate gratification and impulsive action, but after I exercise, I tend to be tracking, and to be motivated by, progress on my longer term goals. [1]

When I am already in the midst of work (my goals are loaded up and the goal threads are primed in short-term memory), this sort of short-term compulsiveness causes me to fall towards task completion: I feel slightly obsessed about finishing what I’m working on.

But if I’m not already in the stream of work, seeking immediate gratification instead drives me to youtube and webcomics and whatever. (Although it is important to note that I did switch my non-self-tracking web usage to Firefox this week, and I don’t have my usual blockers for youtube and for SMBC set up yet. That might totally account for the effect that I’m describing here.)

In short, when I’m not exercising enough, I have less metacognitive space for directing my attention and choosing what is best to do. But if I’m in the stream of work already, I need that metacognitive space less, because I’ll default to doing more of what I’m working on. (Though I think that I do end up getting obsessed with overall less important things, compared to when I am maintaining metacognitive space.) Exercise is most important for booting up and setting myself up to direct my energies.

[1] This might be due to a number of mechanisms:

  • Maybe the physical endorphin effect of exercise has me feeling good, and so my desire for immediate pleasure is sated, freeing up resources for longer term goals.
  • Or maybe exercise involves engaging in immediate discomfort for the sake of future payoff, and this shifts my “time horizon set point” or something. (Or maybe it’s that exercise is downstream of that change in set point.)
    • If meditation also has this time-horizon shifting effect, that would be evidence for this hypothesis.
    • Also if fasting has this effect.
  • Or maybe it’s the combination of both of the above: engaging in delayed gratification, with a viscerally experienced payoff, temporarily retrains my motivation system for that kind of thing.
  • Or something else.

Comment by elityre on How do you assess the quality / reliability of a scientific study? · 2019-11-01T22:10:29.326Z · score: 4 (2 votes) · LW · GW
Assuming Eli is okay with this

This sounds cool to me!

Comment by elityre on Talking Snakes: A Cautionary Tale · 2019-10-29T15:11:05.944Z · score: 2 (1 votes) · LW · GW
a parrot named Alex even accidentally learned how to spell out a word for emphasis when the listener didn't seem to be paying attention.

That sounds awesome. Citation?

Comment by elityre on George's Shortform · 2019-10-28T22:46:01.508Z · score: 2 (1 votes) · LW · GW
for example, custom-cat production is very expensive not because the materials are expensive, but because a custom yacht requires loads of specialized artisan work.

Presumably a typo? Though I bet there is something like designer cats.

Comment by elityre on Eli's shortform feed · 2019-10-28T22:40:04.584Z · score: 7 (3 votes) · LW · GW

Can someone affiliated with a university, etc. get me a PDF of this paper?

https://psycnet.apa.org/buy/1929-00104-001

It is on Sci-Hub, but that version is missing a few pages in which they describe the methodology.

[I hope this isn't an abuse of LessWrong.]

Comment by elityre on Eli's shortform feed · 2019-10-28T15:09:26.860Z · score: 3 (2 votes) · LW · GW

I don't know why Catholicism.

I note that it does seem to be the religion of choice for former atheists, or at least for rationalists. I know of several rationalists who converted to Catholicism, but none who have converted to any other religion.

Comment by elityre on Eli's shortform feed · 2019-10-26T14:48:03.638Z · score: 37 (12 votes) · LW · GW

New post: Some notes on Von Neumann, as a human being

I recently read Prisoner’s Dilemma, which is half an introduction to very elementary game theory, and half a biography of John Von Neumann, and watched this old PBS documentary about the man.

I’m glad I did. Von Neumann has legendary status in my circles, as the smartest person ever to live. [1] Many times I’ve written the words “Von Neumann Level Intelligence” in an AI strategy document, or speculated about how many coordinated Von Neumanns it would take to take over the world. (For reference, I now think that 10 is far too low, mostly because he didn’t seem to have the entrepreneurial or managerial dispositions.)

Learning a little bit more about him was humanizing. Yes, he was the smartest person ever to live, but he was also an actual human being, with actual human traits.

Watching this first clip, I noticed that I was surprised by a number of things.

  1. That VN had an accent. I had known that he was Hungarian, but somehow it had never quite propagated that he would speak with a Hungarian accent.
  2. That he was of middling height (somewhat shorter than the presenter he’s talking to).
  3. The thing he is saying is the sort of thing that I would expect to hear from any scientist in the public eye, “science education is important.” There is something revealing about Von Neumann, despite being the smartest person in the world, saying basically what I would expect Neil DeGrasse Tyson to say in an interview. A lot of the time he was wearing his “scientist / public intellectual” hat, not the “smartest person ever to live” hat.

Some other notes of interest:

He was not a skilled poker player, which punctured my assumption that Von Neumann was omnicompetent. (pg. 5) Nevertheless, poker was among the first inspirations for game theory. (When I told this to Steph, she quipped “Oh. He wasn’t any good at it, so he developed a theory from first principles, describing optimal play?” For all I know, that might be spot on.)

Perhaps relatedly, he claimed he had low sales resistance, and so would have his wife come clothes shopping with him. (pg. 21)

He was sexually crude, and perhaps a bit misogynistic. Eugene Wigner stated that “Johny believed in having sex, in pleasure, but not in emotional attachment. He was interested in immediate pleasure, had little comprehension of emotions in relationships, and mostly saw women in terms of their bodies.” The journalist Steve Heimes wrote “upon entering an office where a pretty secretary was working, von Neumann habitually would bend way over, more or less trying to look up her dress.” (pg. 28) Not surprisingly, his relationship with his wife, Klara, was tumultuous, to say the least.

He did, however, maintain a strong, lifelong relationship with his mother (who died the same year that he did).

Overall, he gives the impression of being a genius overgrown child.

Unlike many of his colleagues, he seemed not to share the pangs of conscience that afflicted many of the bomb's creators. Rather than going back to academia following the war, he continued doing work for the government, including the development of the hydrogen bomb.

Von Neumann advocated preventative war: giving the Soviet Union an ultimatum to join a world government, backed by the threat of (and probable enactment of) nuclear attack, while the US still had a nuclear monopoly. He famously said of the matter, “If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not 1 o’clock.”

This attitude was certainly influenced by his work on game theory, but it should also be noted that Von Neumann hated communism.

Richard Feynman reports that Von Neumann, in their walks through the Los Alamos desert, convinced him to adopt an attitude of “social irresponsibility”: that one “didn’t have to be responsible for the world he was in.”

Prisoner’s Dilemma says that he and his collaborators “pursued patents less aggressively than they could have”. Edward Teller commented, “probably the IBM company owes half its money to John Von Neumann.” (pg. 76)

So he was not very entrepreneurial, which is a bit of a shame, because if he had had the disposition he probably could have made sooooo much money / really taken substantial steps towards taking over the world. (He certainly had the energy to be an entrepreneur: he only slept a few hours a night, and was working for basically all his waking hours.)

He famously always wore a grey oxford three-piece suit, including when playing tennis with Stanislaw Ulam, or when riding a donkey down the Grand Canyon. But I am not clear on why. Was it more comfortable? Did he think it made him look good? Did he just not want to have to ever think about clothing, and so preferred to be over-hot in the middle of the Los Alamos desert, rather than need to think about whether today was “shirt-sleeves weather”?

Von Neumann himself once commented on the strange fact of so many Hungarian geniuses growing up in such a small area, in his generation:

Stanislaw Ulam recalled that when Von Neumann was asked about this “statistically unlikely” Hungarian phenomenon, Von Neumann “would say that it was a coincidence of some cultural factors which he could not make precise: an external pressure on the whole society of this part of Central Europe, a subconscious feeling of extreme insecurity in individuals, and the necessity of producing the unusual or facing extinction.” (pg. 66)

One of the things that surprised me most was that, despite being possibly the smartest person in modernity, he would have benefited from attending a CFAR workshop.

For one thing, at the end of his life, he was terrified of dying. But throughout the course of his life he made many reckless choices with his health.

He ate gluttonously and became fatter and fatter over the course of his life. (One friend remarked that he “could count anything but calories.”)

Furthermore, he seemed to regularly risk his life when driving.

Von Neumann was an aggressive and apparently reckless driver. He supposedly totaled his car every year or so. An intersection in Princeton was nicknamed “Von Neumann corner” for all the auto accidents he had there. Records of accidents and speeding arrests are preserved in his papers. [The book goes on to list a number of such accidents.] (pg. 25)

(Amusingly, Von Neumann’s reckless driving seems due, not to drinking and driving, but to singing and driving. “He would sway back and forth, turning the steering wheel in time with the music.”)

I think I would call this a bug.

On another thread, one of his friends (the documentary didn’t identify which) expressed that he was over-impressed by powerful people, and didn’t make effective tradeoffs.

I wish he’d been more economical with his time in that respect. For example, if people called him to Washington or elsewhere, he would very readily go and so on, instead of having these people come to him. It was much more important, I think, he should have saved his time and effort.
He felt, when the government called, [that] one had to go, it was a patriotic duty, and as I said before he was a very devoted citizen of the country. And I think one of the things that particularly pleased him was any recognition that came sort-of from the government. In fact, in that sense I felt that he was sometimes somewhat peculiar that he would be impressed by government officials or generals and so on. If a big uniform appeared that made more of an impression than it should have. It was odd.
But it shows that he was a person of many different and sometimes self contradictory facets, I think.

Stanislaw Ulam speculated, “I think he had a hidden admiration for people and organizations that could be tough and ruthless.” (pg. 179)

From these statements, it seems like Von Neumann leapt at chances to seem useful or important to the government, somewhat unreflectively.

These anecdotes suggest that Von Neumann would have gotten value out of Goal Factoring, or Units of Exchange, or IDC (possibly there was something deeper going on, regarding a blindspot around death, or status, but I think the point still stands, and he would have benefited from IDC).

Despite being the discoverer / inventor of VNM utility theory, and founding the field of game theory (concerned with rational choice), it seems to me that Von Neumann did far less to import the insights of the math into his actual life than, say, Critch.

(I wonder aloud if this is because Von Neumann was born and came of age before the development of cognitive science. I speculate that the importance of actually applying theories of rationality in practice only becomes obvious after Tversky and Kahneman demonstrate that humans are not rational by default. (In evidence against this view: Eliezer seems to have been very concerned with thinking clearly, and being sane, before encountering Heuristics and Biases in his (I believe) mid 20s. He was exposed to Evo Psych, though.))

Also, he converted to Catholicism at the end of his life, based on Pascal’s Wager. He commented “So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end”, and “There probably has to be a God. Many things are easier to explain if there is than if there isn’t.”

(According to Wikipedia, this deathbed conversion did not give him much comfort.)

This suggests that he would have gotten value out of reading the sequences, in addition to attending a CFAR workshop.

Comment by elityre on Building up to an Internal Family Systems model · 2019-10-19T00:05:12.766Z · score: 2 (1 votes) · LW · GW
And the people who come to me talking about how wonderful IFS is, frequently seem to be the ones with the worst denial issues

Huh. This does not resonate with my experience, but I will henceforth be on the lookout for this.

Comment by elityre on Building up to an Internal Family Systems model · 2019-10-18T23:48:04.377Z · score: 7 (3 votes) · LW · GW
What I'm arguing against is a model where a patterns of behavior (verbs) are nominalized as nouns.

Cool. That makes sense.

It's bad enough to think that one has say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.

Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of "akrasia" and they'll conceptualize it, more or less, as "my system 1 is stupid and doesn't understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing."

And then I might suggest that they try on the frame where "the akrasia part" is actually an intelligent "agent" trying to optimize for their own goals (instead of a foreign, stupid entity that they have to subdue). If the akrasia was actually right, why would that be?

And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.

[I'm obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]

That is, in practice, the part or subagent framing helps at least some people to own their desires more, not less.

[I do want to note that you explicitly said, "What I am saying, and have been saying, is that nominalizing behavior patterns as "parts" or "agents" is bad reductionism, independent of its value as a therapeutic metaphor."]

---

To put it another way, if there are "agents" (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life.

This doesn't seem right in my personal experience, because the "agents" are all me. I'm conceptualizing the parts of myself as separate from each other, because it's easier to think about that way, but I'm not disowning or disassociating from any of them. It's all me.


Comment by elityre on Building up to an Internal Family Systems model · 2019-10-18T23:21:15.120Z · score: 4 (2 votes) · LW · GW
This distinction alone is huge when you look at IFS' Exiles. If you have an "exile" that is struggling to be capable of doing something, but only knows how to be in distress, it's helpful to realize that it's just the built-in mental muscle of "seeking care via distress", and that it will never be capable of doing anything else. It's not the distressed "part"'s job to do things or be capable of things, and never was. That's the job of the "everyday self" -- the set of mental muscles for actual autonomy and action. But as long as someone's reinforced pattern is to activate the "distress" muscle, then they will feel horrible and helpless and not able to do anything about it.

I wonder how much of this discussion comes down to a different extensional referent of the word "part".

According to my view, I would call "the reinforced pattern to activate the 'distress' muscle [in some specific set of circumstances]" a part. That's the thing that I would want to dialogue with.

In contrast, I would not call the "distress muscle" itself a part, because (as you say) the distress muscle doesn't have anything like "beliefs" that could update.

In that frame, do you still have an objection?

Comment by elityre on Building up to an Internal Family Systems model · 2019-10-18T23:06:47.118Z · score: 4 (2 votes) · LW · GW
presupposing that all my desires are mine and that I have good reasons even for doing apparently self-destructive things

I've always disliked the term "subagent", but this sentence seems to capture what I mean when I'm talking about psychological "parts".

So I think I agree with you about the ontological status of parts, but I can't tell if you're making some bolder claim.

What are you imagining would be the case if IFS was literally true, and subagents were real, instead of "just a metaphor"?

. . .

In fact, I dislike the word "subagent", because it imports implications that might not hold. A part might be agent-like, but it also might be closer to an urge or a desire or an impulse.

To my understanding, the key idea of the "parts" framing is that I should assume, by default, that each part is acting from a model, a set of beliefs about the world or my goals. That is, my desire / urge / reflex is not "mindless": it can update.

Overall this makes your comment read to me as "these things are not really [subagents], they're just reactions that have [these specific properties of subagents]."





Comment by elityre on Building up to an Internal Family Systems model · 2019-10-18T23:06:26.802Z · score: 2 (1 votes) · LW · GW
it feels like the "internal compassion" frame seems to help with a lot of things such as just wanting to rush into solutions

+1.

Comment by elityre on Building up to an Internal Family Systems model · 2019-10-18T23:05:33.659Z · score: 6 (3 votes) · LW · GW

This is a great comment, and I'm glad you wrote it. I'm rereading it several times over to try and get a handle on everything that you're saying here.

In particular, I really like the "muscle" vs. "part" distinction. I've been pondering lately when I should just squash an urge or desire and when I should dialogue with it, and this distinction brings some things into focus.

I have some clarifying questions though:

For example, when you try to do "self-leadership", what you're doing is trying to model that behavior through practice while counter-reinforcement is still in place. It's far more efficient to delete the rules that trigger conflicting behavior before you try to learn self-leadership, so that you aren't fighting your reinforced behaviors to do so.

I don't know what you mean by this at all. Can you give (or maybe point to) an example?

---

But if I had tried to model the above pattern as parts... I'm not sure how that would have gone. Probably would've made little progress trying to persuade a "manager" based on my mother to act differently if I couldn't surface the assumptions involved, because any solution that didn't involve me being stressed would mean I was a bad person.
Sure, in the case of IFS, we can assume that it's the therapist's job to be aware of these things and surface the assumptions. But that makes the process dependent on the experiences (and assumptions!) of the therapist... and presumably, a sufficiently-good therapist could use any modality and still get the result they're after, eventually. So what is IFS adding in that case?

This is fascinating. When I read your stressing out example, my thought was basically "wow. It seems crazy-difficult to surface the core underlying assumptions".

But you think that this is harder, in the IFS framework. That is amazing, and I want to know more.

In practice, how do you go about eliciting the rules and then emotionally significant instances?

Maybe in the context of this example, how do you get from "I seem to be overly stressed about stuff" to the memory of your mother yelling at you?

---

You're trying to get the brain to learn a new implicit pattern alongside a broken one, hoping the new example(s) won't simply be filtered into meaninglessness or non-existence when processed through the existing schemas. In contrast, direct reconsolidation goes directly to the source of the issue, and replaces the old implicit pattern with a new one, rather than just giving examples and hoping the brain picks up on the pattern.

I'm trying to visualize someone doing IFS or IDC, and connect it to what you're saying here, but so far, I don't get it.

What are the "examples"? Instances that are counter to the rule / schema of some part? (e.g. some part of me believes that if I ever change my mind about something important, then no one will love me, so I come up with an example of when this isn't or wasn't true?)

---

but state-dependent memory and context-specific conditioning show that reinforcement learning doesn't have any notion of global coherence.

Given that, doesn't it make sense to break an RL policy down into parts? If different parts of a policy are acting at cross purposes, it seems like it is useful to say "part 1 is doing X-action, and part 2 is doing Y-action."

...But you would say that it is even better to say "this system, as a whole, is doing both X-action and Y-action"?
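
(A toy sketch of the point, with made-up names: a tabular learner whose "policy" is just a set of separately reinforced, context-keyed habits. Nothing in the update rule enforces coherence across contexts, which is what makes the "parts" description feel natural.)

```python
from collections import defaultdict

# (context, action) -> reinforcement estimate
q = defaultdict(float)

def act(context, actions):
    # Each context picks its own best-reinforced action (a "part"):
    # "at_work" may pick "focus" while "tired_evening" picks "browse",
    # even if those work at cross purposes for the agent as a whole.
    return max(actions, key=lambda a: q[(context, a)])

def reinforce(context, action, reward, lr=0.1):
    # The update only touches the (context, action) pair that fired;
    # there is no term comparing or reconciling habits across contexts.
    q[(context, action)] += lr * (reward - q[(context, action)])
```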

Comment by elityre on Misconceptions about continuous takeoff · 2019-10-15T18:03:51.728Z · score: 2 (1 votes) · LW · GW

The following is mostly a nitpick / my own thinking through of a scenario:

If this hypothesis is true, I don't find it compelling that AlphaGo is evidence for a discontinuity for AGI, since such funding gaps are likely to be much smaller for economically useful systems.

If there is no fire alarm for general intelligence, it's not implausible that there will be a similar funding gap for useful systems. Currently, there are very few groups explicitly aiming at AGI, and of those groups DeepMind is by far the best funded.

If we are much nearer to AGI than most of us suspect, we might see the kind of funding differential exhibited in the Go example for AGI, because the landscape of people developing AGI will look a lot closer to that of AlphaGo (only one group trying seriously) than to the one for GANs (many groups making small iterative improvements on each other's work).

Overall, I find this story to be pretty implausible, though. It would mean that there is a capability cliff very nearby in ML design space, somehow, and that the cliff is so sharp as to be basically undetectable right until someone's gotten to the top of it.

Comment by elityre on Misconceptions about continuous takeoff · 2019-10-15T01:25:02.064Z · score: 6 (1 votes) · LW · GW
Yet, at no point during this development did any project leap forward by a huge margin. Instead, every paper built upon the last one by making minor improvements and increasing the compute involved. Since these minor improvements nonetheless happened rapidly, the result is that the GANs followed a fast development relative to the lifetimes of humans.

Does anyone have time series data on the effectiveness of Go-playing AI? Does that similarly follow a gradual trend?

AlphaGo seems much closer to "one project leaps forward by a huge margin." But maybe I'm mistaken about how big an improvement AlphaGo was over previous Go AIs.

Comment by elityre on AI alignment landscape · 2019-10-14T23:51:21.932Z · score: 4 (2 votes) · LW · GW

Link to a video of the talk.

Comment by elityre on Instrumental vs. Epistemic -- A Bardic Perspective · 2019-10-14T20:57:54.829Z · score: 4 (2 votes) · LW · GW

I'm aware of this book: Models: Attract Women Through Honesty.


Comment by elityre on Instrumental vs. Epistemic -- A Bardic Perspective · 2019-10-14T20:52:05.439Z · score: 2 (1 votes) · LW · GW

> (I was particularly struck by one PUA who spent at least 2000 words discussing how to differentiate women who might sleep with him that night from 'princesses' who would require many dates and gifts before even considering sex.)

Do you still have a link?

Comment by elityre on No License To Be Human · 2019-10-13T12:27:06.797Z · score: 2 (1 votes) · LW · GW

This is a very clear articulation. Thank you.

Comment by elityre on No License To Be Human · 2019-10-10T16:47:57.604Z · score: 3 (2 votes) · LW · GW

Saying that "right = human" is to deny the idea of moral progress. If what the human thing to do can change over time, and what is right doesn't change, then they can't be the same thing.

Slavery was once very human. I think many of us (though not the relativists) would reject the claim that because it was human, it was also right. It was always wrong, regardless of how common.

Comment by elityre on No License To Be Human · 2019-10-10T16:42:08.397Z · score: 2 (1 votes) · LW · GW

This is now in the running for my favorite post of the sequences.

Comment by elityre on Sorting Pebbles Into Correct Heaps · 2019-10-10T16:25:32.935Z · score: 9 (4 votes) · LW · GW
What would happen if a Pebble Sorter came to understand primes? I'm guessing that a lot of them would feel as though the bottom was falling out of their civilization and there was no point to life.

Really? I think they would consider it an amazing revelation. They don't need to fight about heap-correctness anymore; they can just calculate heap-correctness.

Remember, the meaning of the pebblesorting way of life is to construct correct heaps, not to figure out which heaps are correct.


Comment by elityre on Awww, a Zebra · 2019-10-09T22:24:27.755Z · score: 2 (1 votes) · LW · GW

Hahahahah.

Comment by elityre on Morality as Fixed Computation · 2019-10-09T14:50:28.120Z · score: 0 (2 votes) · LW · GW
where the "..." is around a thousand other things.

Do you mean literally a thousand? That's tiny!

Comment by elityre on Replace judges with Keynesian beauty contests? · 2019-10-08T12:43:17.778Z · score: 2 (1 votes) · LW · GW

This is great. Thanks for writing.


Comment by elityre on Evan Rysdam's Shortform · 2019-09-28T20:55:04.939Z · score: 2 (1 votes) · LW · GW

Thanks!

Comment by elityre on Running the Stack · 2019-09-28T12:44:47.568Z · score: 2 (1 votes) · LW · GW

Cool. This clarified what you're pointing at, for me.

Comment by elityre on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-28T12:42:35.762Z · score: 10 (4 votes) · LW · GW

That this particular case would destroy a lot of trust.

This seemed to me like a fun game with stakes of social disapproval on one side, and basically no stakes on the other. This doesn't seem like it has much bearing on the trustworthiness of members of the rationality community in situations with real stakes, where there is a stronger temptation to defect, or where defecting would have more of a cost to the community.

I guess implicit in what I'm saying is that the front page being down for 24 hours doesn't seem that bad to me. I don't come to Less Wrong most days anyway.

Comment by elityre on Rationality and Levels of Intervention · 2019-09-28T11:53:52.614Z · score: 5 (3 votes) · LW · GW
It might be that the right allocation of one’s error identification resources is 90% to identifying biases and fixing System 2 and 10% to overcoming deep psychological distortions in System 1. Or it might be 10% and 90%.

This seems like an important question, but it seems mostly orthogonal to the 5 levels you outline, which seem to be mostly a matter of the timescale on which one intervenes (or how long you wait to see the results of a process before you judge it as good or bad, epistemic or non-epistemic.)

Maybe I'm missing something, but it seems like you could try to correct conscious S2 processes with feedback on the scale of years, or on the scale of seconds. Likewise, you could try and correct unconscious S1 processes with feedback on the scale of years, or on the scale of seconds.

Comment by elityre on Rationality and Levels of Intervention · 2019-09-28T11:48:24.723Z · score: 15 (3 votes) · LW · GW
I suspect this is one of the larger historical disagreements that I have had with various members of the rationality community. Right now when it comes to intellectual practice, I am most in favor of Levels 1-3 for beginners who are building functional scaffolds, and Levels 4-5 for intermediate level practitioners and beyond. The rationalist community and corpus seems to me to prefer Levels 1-3 much more for all practitioners.

Can you give a specific example of the way a person would act or think in some situation if they were prioritizing levels 1-3 vs. how they would act or think if they were prioritizing levels 4-5?

. . .


One thing I could imagine you to be saying is, "it is really useful to have a 'brainstorming / generation' mode, in which you try to come up with as many possible hypotheses as you can, and don't worry if they're false (or even incoherent)."

Or maybe you're saying "It is good and fine to adopt 'crazy hypotheses' for years at a time, because you'll get a lot of information that way, which ultimately helps you figure out what's true."

Or maybe, "It is a good idea to have some false beliefs, so long as they are the sort of false beliefs that great scientists typically have. This actually helps you get more relevant truths in the long run."

Or maybe (as a sort of extension of my second guess) you're saying "Individuals should 'specialize' in specific broad hypotheses. Instead of everyone having multiple models and frames, and trying to balance them, different people should 'hedgehog' on different models, each one adopting it really hard, and letting the epistemic process happen between the people, instead of within the people."

Does any of that match what you would recommend?



Comment by elityre on A simple environment for showing mesa misalignment · 2019-09-28T09:39:38.280Z · score: 3 (2 votes) · LW · GW

I am really excited about this line of exploration. Thanks for writing this post. I wish I could upvote it harder!

Comment by elityre on Evan Rysdam's Shortform · 2019-09-28T09:30:11.932Z · score: 2 (1 votes) · LW · GW

Can you share it?

Comment by elityre on romeostevensit's Shortform · 2019-09-28T08:48:37.547Z · score: 4 (2 votes) · LW · GW

Hahahahahaha.


Comment by elityre on lionhearted's Shortform · 2019-09-28T08:20:01.960Z · score: 14 (5 votes) · LW · GW
According to some theorists (e.g. Anderson 2001), information processing speed forms the basis of individual differences in IQ.

My understanding is that information processing time (as measured by reaction time) is pretty standard across humans, and pretty close to the physical limits of neurons.

It is true that reaction time correlates with IQ, but that is a bit misleading. Average reaction time correlates with IQ, but every person's best-case reaction time is about the same. It seems that the correlation between average RT and IQ is mediated by better vigilance. That is, people with higher IQs are able to score better on RT tasks because they are able to maintain their attention on the task (and therefore maintain an RT closer to their best) for longer periods.

My citation is chapter 3 (?) of this textbook.


Comment by elityre on Eli's shortform feed · 2019-09-28T07:16:20.129Z · score: 21 (6 votes) · LW · GW

New post: The Basic Double Crux Pattern

[This is a draft, to be posted on LessWrong soon.]

I’ve spent a lot of time developing tools and frameworks for bridging "intractable" disagreements. I’m also the person affiliated with CFAR who has taught Double Crux the most, and done the most work on it.

People often express to me something to the effect of, “The important thing about Double Crux is all the low-level habits of mind: being curious, being open to changing your mind, paraphrasing to check that you’ve understood, operationalizing, etc. The ‘Double Crux’ framework itself is not very important.”

I half agree with that sentiment. I do think that those low-level cognitive and conversational patterns are the most important thing, and at Double Crux trainings that I have run, most of the time is spent focusing on specific exercises to instill those low-level TAPs.

However, I don’t think that the only value of the Double Crux schema is in training those low-level habits. Double cruxes are extremely powerful machines that allow one to identify, if not the most efficient conversational path, a very high-efficiency conversational path. Effectively navigating down a chain of Double Cruxes is like magic. So I’m sad when people write the framework off as useless.

In this post, I’m going to try and outline the basic Double Crux pattern, the series of 4 moves that makes Double Crux work, and give a (simple, silly) example of that pattern in action.

These four moves are not (always) sufficient for making a Double Crux conversation work (that depends on a number of other mental habits and TAPs), but this pattern is, according to me, at the core of the Double Crux formalism.

The pattern:

The core Double Crux pattern is as follows. For simplicity, I have described this in the form of a 3-person Double Crux conversation, with two participants and a facilitator. Of course, one can execute these same moves in a 2-person conversation, as one of the participants. But that additional complexity is hard to manage for beginners.

The pattern has two parts (finding a crux, and finding a double crux), and each part is composed of 2 main facilitation moves.

Those four moves are...

  1. Clarifying that you understood the first person's point.
  2. Checking if that point is a crux.
  3. Checking the second person's belief about the truth value of the first person's crux.
  4. Checking if the first person's crux is also a crux for the second person.

In practice: 

[The version of this section on my blog has color coding and special formatting.]

The conversational flow of these moves looks something like this:

Finding a crux of participant 1:

P1: I think [x] because of [y]

Facilitator: (paraphrasing, and checking for understanding) It sounds like you think [x] because of [y]?

P1: Yep!

Facilitator: (checking for cruxyness) If you didn’t think [y], would you change your mind about [x]?

P1: Yes.

Facilitator: (signposting) It sounds like [y] is a crux for [x] for you.

Checking if it is also a crux for participant 2: 

Facilitator: Do you think [y]?

P2: No.

Facilitator: (checking for a Double Crux) if you did think [y] would that change your mind about [x]?

P2: Yes.

Facilitator: It sounds like [y] is a Double Crux

[Recurse, running the same pattern on [y]]
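
(For readers who think procedurally, here is a minimal sketch of that control flow in Python. The participant objects and their `confirm` / `is_crux` / `believes` methods are hypothetical stand-ins for asking someone a question; this models the pattern, it is not a claim that facilitation reduces to code.)

```python
def find_double_crux(p1, p2, claim, reason):
    """One pass of the four facilitation moves for a claim/reason pair.
    Returns the double crux if all four checks pass, else None."""
    # Move 1: paraphrase and check understanding.
    if not p1.confirm(claim, reason):
        return None  # misunderstood; clarify before proceeding
    # Move 2: check whether the reason is a crux for participant 1.
    if not p1.is_crux(reason, claim):
        return None  # look for a different, cruxier point
    # Move 3: check participant 2's belief about the crux itself.
    if p2.believes(reason):
        return None  # no disagreement on this branch
    # Move 4: check whether the same point is a crux for participant 2.
    if p2.is_crux(reason, claim):
        return reason  # double crux found: recurse, treating it as the new claim
    return None
```

In the best case (yes, yes, no, yes) each pass hands you the crux, which becomes the claim for the next round.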

Obviously, in actual conversation, there is a lot more complexity, and a lot of other things that are going on.

For one thing, I’ve only outlined the best case pattern, where the participants give exactly the most convenient answer for moving the conversation forward (yes, yes, no, yes). In actual practice, it is quite likely that one of those answers will be reversed, and you’ll have to compensate.

For another thing, real conversations are rarely so simple. You might have to do a lot of conversational work to clarify the claims enough that you can even ask whether [y] is a crux for [x] (for instance, when [y] is nonsensical to one of the participants). Getting through each of these steps might take fifteen minutes, in which case, rather than four basic moves, this pattern describes four phases of conversation. (I claim that one of the core skills of a savvy facilitator is tracking which stage the conversation is at, which goals you have successfully hit, and what the current proximal subgoal is.)

There is also a judgment call about which person to treat as “participant 1” (the person who generates the point that is tested for cruxyness). As a first order heuristic, the person who is closer to making a positive claim over and above the default should usually be “P1”. But this is only one heuristic.

Example:

This is an intentionally silly, over-the-top example, for demonstrating the pattern without any unnecessary complexity. I'll publish a somewhat more realistic example in the next few days.

Two people, Alex and Barbra, disagree about tea: Alex thinks that tea is great, and drinks it all the time, and thinks that more people should drink tea, and Barbra thinks that tea is bad, and no one should drink tea.

Facilitator: So, Barbra, why do you think tea is bad?
Barbra: Well it's really quite simple. You see, tea causes cancer.
Facilitator: Let me check if I've got that: you think that tea causes cancer?
Barbra: That's right.
Facilitator: Wow. Ok. Well, if you found out that tea actually didn't cause cancer, would you be fine with people drinking tea?
Barbra: Yeah. Really the main thing that I'm concerned with is the cancer-causing. If tea didn't cause cancer, then it seems like tea would be fine.
Facilitator: Cool. Well, it sounds like this is a crux for you, Barb. Alex, do you currently think that tea causes cancer?
Alex: No. That sounds like crazy-talk to me.
Facilitator: Ok. But aside from how realistic it seems right now, if you found out that tea actually does cause cancer, would you change your mind about people drinking tea?
Alex: Well, to be honest, I've always been opposed to cancer, so yeah, if I found out that tea causes cancer, then I would think that people shouldn't drink tea.
Facilitator: Well, it sounds like we have a double crux!

In a real conversation, it often doesn't go this smoothly. But this is the rhythm of Double Crux, at least as I apply it.

That's the basic Double Crux pattern. As noted, there are a number of other methods and sub-skills that are (often) necessary to make a Double Crux conversation work, but this is my current best attempt at a minimal compression of the basic engine of finding double cruxes.

I made up a more realistic example here, and I might make more or better examples.

Comment by elityre on Eli's shortform feed · 2019-09-28T07:01:41.277Z · score: 4 (2 votes) · LW · GW

What do you mean? All the numbers are in order. Are you objecting to the nested numbers?

Comment by elityre on Eli's shortform feed · 2019-09-27T22:08:05.174Z · score: 49 (12 votes) · LW · GW

New post: Some things I think about Double Crux and related topics

I've spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements and transferring tacit knowledge. I'm also probably the person who has spent the most time explicitly thinking about and working with CFAR's Double Crux framework. It seems good for at least some of my high level thoughts to be written up someplace, even if I'm not going to go into detail about, defend, or substantiate most of them.

The following are my own beliefs and do not necessarily represent CFAR, or anyone else.

I, of course, reserve the right to change my mind.

[Throughout I use "Double Crux" to refer to the Double Crux technique, the Double Crux class, or a Double Crux conversation, and I use "double crux" to refer to a proposition that is a shared crux for two people in a conversation.]

Here are some things I currently believe:

(General)

  1. Double Crux is one (highly important) tool/framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular. The Double Crux framework is maybe the most important tool (that I know of) for resolving disagreements, but it is only one tool/framework in an ensemble.
    1. Some other tools/frameworks that are not strictly part of Double Crux (but which are sometimes crucial to bridging disagreements) include NVC (Nonviolent Communication), methods for managing people's intentions and goals, various forms of co-articulation (helping to draw out an inchoate model from one's conversational partner), etc.
    2. In some contexts other tools are substitutes for Double Crux (i.e., another framework is more useful) and in some cases other tools are helpful or necessary complements (i.e., they solve problems or smooth the process within the Double Crux frame).
    3. In particular, my personal conversational facilitation repertoire is about 60% Double Crux-related techniques, and 40% other frameworks that are not strictly within the frame of Double Crux.
  2. Just to say it clearly: I don't think Double Crux is the only way to resolve disagreements, or the best way in all contexts. (Though I think it may be the best way, that I know of, in a plurality of common contexts?)
  3. The ideal use case for Double Crux is when...
    1. There are two people...
    2. ...who have a real, action-relevant, decision...
    3. ...that they need to make together (they can't just do their own different things)...
    4. ...in which both people have strong, visceral intuitions.
  4. Double Cruxes are almost always conversations between two people's system 1's.
  5. You can Double Crux between two people's unendorsed intuitions. (For instance, Alice and Bob are discussing a question about open borders. They both agree that neither of them is an economist, that neither of them trusts their intuitions here, and that if they had to actually make this decision, it would be crucial to spend a lot of time doing research, examining the evidence, and consulting experts. But nevertheless, Alice's current intuition leans in favor of open borders, and Bob's current intuition leans against. This is a great starting point for a Double Crux.)
  6. Double cruxes (as in a crux that is shared by both parties in a disagreement) are common, and useful. Most disagreements have implicit double cruxes, though identifying them can sometimes be tricky.
  7. Conjunctive cruxes (I would change my mind about X, if I changed my mind about Y and about Z, but not if I only changed my mind about Y or about Z) are common.
  8. Folks sometimes object that Double Crux won't work because their belief depends on a large number of considerations, each one of which has only a small impact on their overall belief, so that no one consideration is a crux. In practice, I find that there are double cruxes to be found even in cases where people expect their beliefs to have this structure.
    1. Theoretically, it makes sense that we would find double cruxes in these scenarios: if a person has a strong disagreement (including a disagreement of intuition) with someone else, we should expect that there are a small number of considerations doing most of the work of causing one person to think one thing and the other to think something else. It is improbable that each person's beliefs depend on 50 factors, with most of those 50 factors pointing in one direction for Alice and in the other direction for Bob, unless those factors are correlated with one another. (A back-of-the-envelope calculation after this list illustrates just how improbable.) If considerations are correlated, you can abstract out the fact or belief that generates the differing predictions in all of those separate considerations. That "generating belief" is the crux.
    2. That said, there is a different conversational approach that I sometimes use, which involves delineating all of the key considerations (then doing Goal-factoring style relevance and completeness checks), and then dealing with each consideration one at a time (often via a fractal tree structure: listing the key considerations of each of the higher level considerations).
      1. This approach absolutely requires paper, and skillful (firm, gentle) facilitation, because people will almost universally try to hop around between considerations. They need to be viscerally assured that their other concerns are recorded and will be dealt with in due course, in order to engage deeply with any given consideration, one at a time.
  9. About 60% of the power of Double Crux comes from operationalizing or being specific.
    1. I quite like Liron's recent sequence on being specific. It re-reminded me of some basic things that have been helpful in several recent conversations. In particular, I like the move of having a conversational partner paint a specific, best case scenario, as a starting point for discussion.
      1. (However, I'm concerned about Less Wrong readers trying this with a spirit of trying to "catch out" one's conversational partner in inconsistency, instead of trying to understand what their partner wants to say, and thereby shooting themselves in the foot. I think the attitude of looking to "catch out" is usually counterproductive to both understanding and persuasion. People rarely change their mind when they feel like you have trapped them in some inconsistency, but they often do change their mind if they feel like you've actually heard and understood their belief / what they are trying to say / what they are trying to defend, and then provide relevant evidence and argument. In general (but not universally), it is more productive to adopt a collaborative attitude of sincerely trying to help your conversational partner articulate, clarify, and substantiate the point they are trying to make, even if you suspect that their point is ultimately wrong and confused.)
    2. As an aside, specificity and operationalization are also the engine that makes Nonviolent Communication work. Being specific is really super powerful.
  10. Many (~50% of) disagreements evaporate upon operationalization, though this happens less frequently than people think. If you seem to agree about all of the facts, and agree about all specific operationalizations, but nevertheless seem to have differing attitudes about a question, that should be a flag. [I have a post that I'll publish soon about this problem.]
  11. You should be using paper when Double Cruxing. Keep track of the chain of double cruxes, and keep them in view.
  12. People talk past each other all the time, and often don't notice it. Frequently paraphrasing your current understanding of what your conversational partner is saying helps with this. [There is a lot more to say about this problem, and details about how to solve it effectively.]
  13. I don't endorse the Double Crux "algorithm" described in the canonical post. That is, I don't think that the best way to steer a Double Crux conversation is to hew to those 5 steps in that order. Actually finding double cruxes is, in practice, much more complicated, and there are a large number of heuristics and TAPs that make the process work. I regard that algorithm as an early (and self-conscious) attempt to delineate moves that would help move a conversation towards double cruxes.
  14. This is my current best attempt at distilling the core moves that make Double Crux work, though this leaves out a lot.
  15. In practice, I think that double cruxes most frequently emerge not from people independently generating their own lists of cruxes (though this is useful). Rather, double cruxes usually emerge from the move of checking whether the point that your partner made is a crux for you.
  16. I strongly endorse facilitation of basically all tricky conversations, Double Crux oriented or not. It is much easier for a third party to track the meta level and help steer than for the participants, whose working memory is (and should be) full of the object level.
  17. So-called "Triple Crux" is not a feasible operation. If you have more than two stakeholders, have two of them Double Crux, and then have one of those two Double Crux with the third person. Things get exponentially trickier as you add more people. I don't think that Double Crux is a feasible method for coordinating more than ~6 people. We'll need other methods for that.
  18. Double Crux is much easier when both parties are interested in truth-seeking and in changing their mind, and are assuming good faith about the other. But, these are not strict prerequisites, and unilateral Double Crux is totally a thing.
  19. People being defensive, emotional, or ego-filled does not preclude a productive Double Crux. Some particular auxiliary skills are required for navigating those situations, however.
    1. This is a good start for the relevant skills.
  20. If a person wants to get better at Double Crux skills, I recommend they cross-train with IDC (Internal Double Crux). Any move that works in IDC, you should try in Double Crux. Any move that works in Double Crux, you should try in IDC. This will seem silly sometimes, but I am pretty serious about it, even in the silly-seeming cases. I've learned a lot this way.
  21. I don't think Double Crux necessarily runs into a problem of "black box beliefs", wherein one can no longer make progress because the disagreement comes down to System 1 heuristics/models that one or both parties learned from some training data, but into which they can't introspect. Almost always, there are ways to draw out those models.
    1. The simplest way to do this (which is not the only or best way, depending on the circumstances) involves generating many examples and testing the "black box" against them. Vary the hypothetical situation to triangulate the exact circumstances in which the "black box" outputs each suggestion.
    2. I am not making the universal claim that one never runs into black box beliefs that can't be dealt with.
  22. Disagreements rarely come down to "fundamental value disagreements". If you think that you have gotten to a disagreement about fundamental values, I suspect there was another conversational tack that would have been more productive.
  23. Also, you can totally Double Crux about values. In practice, you can often treat values like beliefs: often there is some evidence that a person could observe, at least in principle, that would convince them to hold or not hold some "fundamental" value.
    1. I am not making the claim that there are no such thing as fundamental values, or that all values are Double Crux-able.
  24. A semi-esoteric point: cruxes are (or can be) contiguous with operationalizations. For instance, if I'm having a disagreement about whether advertising produces value on net, I might operationalize to "beer commercials, in particular, produce value on net", which (if I think that operationalization actually captures the original question) is isomorphic to "The value of beer commercials is a crux for the value of advertising. I would change my mind about advertising in general, if I changed my mind about beer commercials." (This is an evidential crux, as opposed to the more common causal crux. More on this distinction in future posts.)
  25. People's beliefs are strongly informed by their incentives. This makes me somewhat less optimistic about tools in this space than I would otherwise be, but I still think there's hope.
  26. There are a number of gaps in the repertoire of conversational tools that I'm currently aware of. One of the most important holes is the lack of a method for dealing with psychological blindspots. These days, I often run out of ability to make a conversation go well when we bump into a blindspot in one person or the other (sometimes, there seem to be psychological blindspots on both sides). Tools wanted, in this domain.
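As promised in point 8.1, here is a back-of-the-envelope calculation of how improbable a strong two-sided disagreement is when the considerations are genuinely independent. The assumptions are deliberately toy and entirely mine: each of 50 considerations is an independent fair coin for each person, and "most factors point one way" means at least 80% of them do.

```python
from math import comb

def tail(n: int, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, 1/2): chance that at least k of n
    independent coin-flip 'considerations' point in the same direction."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

n = 50                        # number of independent considerations
p_lopsided = tail(n, 40)      # one person: >= 80% of factors point one way
p_opposed = p_lopsided ** 2   # both people lopsided, in opposite directions

print(f"{p_lopsided:.2e}")    # ~1.2e-05
print(f"{p_opposed:.2e}")     # ~1.4e-10
```

Even under these crude assumptions, both people ending up lopsided in opposite directions is roughly a one-in-ten-billion event. So when a strong disagreement does happen, the considerations were almost certainly correlated, which is exactly where the "generating belief" crux comes from.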

(The Double Crux class)

  1. Knowing how to identify double cruxes can be kind of tricky, and I don't think that most participants learn the knack from the 55-to-70-minute Double Crux class at a CFAR workshop.
  2. Currently, I think I can teach the basic knack (not including all the other heuristics and skills) to a person in about 3 hours, but I'm still playing around with how to do this most efficiently. (The "Basic Double Crux pattern" post is the distillation of my current approach.)
    1. This is one development avenue that would particularly benefit from parallel search: If you feel like you "get" Double Crux, and can identify Double Cruxes fairly reliably and quickly, it might be helpful if you explicated your process.
  3. That said, there are a lot of relevant complements and sub-skills to Double Crux, and to bridging disagreements more generally.
  4. The most important function of the Double Crux class at CFAR workshops is teaching and propagating the concept of a "crux", and to a lesser extent, the concept of a "double crux". These are very useful shorthands for one's personal thinking and for discourse, which are great to have in the collective lexicon.

(Some other things)

  1. Personally, I am mostly focused on developing deep methods (perhaps for training high-expertise specialists) that increase the range of disagreements that the x-risk ecosystem can solve at all. I care more about this goal than about developing shallow tools that are useful "out of the box" for smart non-specialists, or about trying to change the conversational norms of various relevant communities (though both of those are secondary goals).
  2. I am highly skeptical that many (if not most) of the important skills for bridging deep disagreement can be taught via anything other than ~one-on-one, in-person interaction.
  3. In large part due to being prodded by a large number of people, I am polishing all my existing drafts of Double Crux stuff (and writing some new posts), and posting them here over the next few weeks. (There are already some drafts, still being edited, available on my blog.)

I have a standing offer to facilitate conversations and disagreements (Double Crux or not) for rationalists and EAs. Email me at eli [at] rationality [dot] org if that's something you're interested in.

Comment by elityre on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T19:03:34.942Z · score: 2 (1 votes) · LW · GW
If you wanted to convince me, you could make a case that destroying trust is really bad, and that in this particular case pressing the button would destroy a lot of trust, but that case hasn't really been made.

This basically seems right to me.

Comment by elityre on Running the Stack · 2019-09-27T18:41:14.300Z · score: 14 (4 votes) · LW · GW

Thanks for writing this.

I believe that consistently running down the stack after you got distracted makes you less distractible going forwards, because there's less payoff to doing so.

I'm actually not sure what you mean by "running down the stack." Do you mean "when I get distracted, I mentally review my whole stack, from the most recently added item to the most ancient item"? Or do you mean "when I get distracted, I 'pop' the next item/intention in the stack (the one that was added most recently), and execute that one next (as opposed to some random one)"?

I originally read you as saying the second thing, which seems like it would entail running one's life as a series of nested open loops (sort of like Lisp).
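To illustrate what I mean by that second reading, here's a minimal sketch (my own toy example, not from your post, with made-up task names): intentions form a LIFO stack, and "running down the stack" means finishing the most recently opened loop before resuming the one beneath it.

```python
# Toy model of the second interpretation: tasks as a LIFO stack of
# nested open loops.
stack = []

def interrupt(task: str) -> None:
    """A distraction opens a new loop on top of the current one."""
    stack.append(task)

interrupt("write report")
interrupt("answer the email that interrupted the report")
interrupt("look up the word that the email made me curious about")

# "Running down the stack": pop and finish the most recent loop first,
# rather than jumping to whichever task feels most salient.
while stack:
    print("resume:", stack.pop())
# resume: look up the word that the email made me curious about
# resume: answer the email that interrupted the report
# resume: write report
```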

In any case, I immediately implemented the second thing, on a trial basis. I'll see how it goes.

I believe that consistently running down the stack after you got distracted makes you less distractible going forwards, because there's less payoff to doing so.

Less payoff to getting distracted? To being distractible?

Why is that? Because if you get distracted you have to complete the distraction?

Comment by elityre on Running the Stack · 2019-09-27T18:28:02.398Z · score: 2 (1 votes) · LW · GW
unless new information compelling obsoletes it

Typo?