Circling as Cousin to Rationality 2020-01-01T01:16:42.727Z · score: 72 (35 votes)
Self and No-Self 2019-12-29T06:15:50.192Z · score: 39 (16 votes)
T-Shaped Organizations 2019-12-16T23:48:13.101Z · score: 51 (14 votes)
ialdabaoth is banned 2019-12-13T06:34:41.756Z · score: 30 (17 votes)
The Bus Ticket Theory of Genius 2019-11-23T22:12:17.966Z · score: 64 (18 votes)
Vaniver's Shortform 2019-10-06T19:34:49.931Z · score: 10 (1 votes)
Vaniver's View on Factored Cognition 2019-08-23T02:54:00.915Z · score: 41 (9 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z · score: 41 (10 votes)
Commentary On "The Abolition of Man" 2019-07-15T18:56:27.295Z · score: 65 (15 votes)
Is there a guide to 'Problems that are too fast to Google'? 2019-06-17T05:04:39.613Z · score: 49 (15 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 103 (57 votes)
Steelmanning Divination 2019-06-05T22:53:54.615Z · score: 145 (59 votes)
Public Positions and Private Guts 2018-10-11T19:38:25.567Z · score: 95 (30 votes)
Maps of Meaning: Abridged and Translated 2018-10-11T00:27:20.974Z · score: 54 (22 votes)
Compact vs. Wide Models 2018-07-16T04:09:10.075Z · score: 32 (13 votes)
Thoughts on AI Safety via Debate 2018-05-09T19:46:00.417Z · score: 88 (21 votes)
Turning 30 2018-05-08T05:37:45.001Z · score: 75 (24 votes)
My confusions with Paul's Agenda 2018-04-20T17:24:13.466Z · score: 90 (22 votes)
LW Migration Announcement 2018-03-22T02:18:19.892Z · score: 139 (37 votes)
LW Migration Announcement 2018-03-22T02:17:13.927Z · score: 2 (2 votes)
Leaving beta: Voting on moving to 2018-03-11T23:40:26.663Z · score: 6 (6 votes)
Leaving beta: Voting on moving to 2018-03-11T22:53:17.721Z · score: 139 (42 votes)
LW 2.0 Open Beta Live 2017-09-21T01:15:53.341Z · score: 23 (23 votes)
LW 2.0 Open Beta starts 9/20 2017-09-15T02:57:10.729Z · score: 24 (24 votes)
Pair Debug to Understand, not Fix 2017-06-21T23:25:40.480Z · score: 8 (8 votes)
Don't Shoot the Messenger 2017-04-19T22:14:45.585Z · score: 11 (11 votes)
The Quaker and the Parselmouth 2017-01-20T21:24:12.010Z · score: 6 (7 votes)
Announcement: Intelligence in Literature Prize 2017-01-04T20:07:50.745Z · score: 9 (9 votes)
Community needs, individual needs, and a model of adult development 2016-12-17T00:18:17.718Z · score: 12 (13 votes)
Contra Robinson on Schooling 2016-12-02T19:05:13.922Z · score: 4 (5 votes)
Downvotes temporarily disabled 2016-12-01T17:31:41.763Z · score: 17 (18 votes)
Articles in Main 2016-11-29T21:35:17.618Z · score: 3 (4 votes)
Linkposts now live! 2016-09-28T15:13:19.542Z · score: 27 (30 votes)
Yudkowsky's Guide to Writing Intelligent Characters 2016-09-28T14:36:48.583Z · score: 4 (5 votes)
Meetup : Welcome Scott Aaronson to Texas 2016-07-25T01:27:43.908Z · score: 1 (2 votes)
Happy Notice Your Surprise Day! 2016-04-01T13:02:33.530Z · score: 14 (15 votes)
Posting to Main currently disabled 2016-02-19T03:55:08.370Z · score: 22 (25 votes)
Upcoming LW Changes 2016-02-03T05:34:34.472Z · score: 46 (47 votes)
LessWrong 2.0 2015-12-09T18:59:37.232Z · score: 92 (96 votes)
Meetup : Austin, TX - Petrov Day Celebration 2015-09-15T00:36:13.593Z · score: 1 (2 votes)
Conceptual Specialization of Labor Enables Precision 2015-06-08T02:11:20.991Z · score: 10 (11 votes)
Rationality Quotes Thread May 2015 2015-05-01T14:31:04.391Z · score: 9 (10 votes)
Meetup : Austin, TX - Schelling Day 2015-04-13T14:19:21.680Z · score: 1 (2 votes)
Sapiens 2015-04-08T02:56:25.114Z · score: 42 (36 votes)
Thinking well 2015-04-01T22:03:41.634Z · score: 28 (29 votes)
Rationality Quotes Thread April 2015 2015-04-01T13:35:48.660Z · score: 7 (9 votes)
Meetup : Austin, TX - Quack's 2015-03-20T15:12:31.376Z · score: 1 (2 votes)
Rationality Quotes Thread March 2015 2015-03-02T23:38:48.068Z · score: 8 (8 votes)
Rationality Quotes Thread February 2015 2015-02-01T15:53:28.049Z · score: 6 (6 votes)
Control Theory Commentary 2015-01-22T05:31:03.698Z · score: 18 (18 votes)


Comment by vaniver on Steelmanning Divination · 2020-01-18T22:36:00.854Z · score: 14 (3 votes) · LW · GW

Thanks for the detailed commentary, and welcome to LessWrong!

I think we have two main disagreements, and two minor ones.

First is what I set out to do, which is perhaps tied up with what I should have set out to do. I'm not trying to explain Xunzi, or even all of the I Ching; I'm trying to open a door that was rightfully closed on the likes of fortune cookies and astrology to rescue things like the I Ching, that seem superficially similar but have a real depth to them.

And, as Xunzi points out when describing the Way, "No one corner is sufficient to exhibit it fully." If someone is going to get something real out of the I Ching, they're going to do it through practice, not through a summary, or reading it cover to cover, and the best I can do is point at why that would be good in a way that they can see from the other side of the door.

From the shape of your disappointment, I'm guessing you wanted me to explain Xunzi more fully, instead of just making an indirect pitch, or more fully grapple with philosophy or classicism. If I were out to do the former, my preferred strategy would be to see if Hutton would let me post the entirety of his translation of Undoing Fixation (which I think is way more readable than the in-public-domain one I found). For the latter, I'll readily admit I'm an amateur instead of a scholar, following trails as they appear and catch my interest instead of hoping for completion. I didn't reference Wang Bi because this is the first I've heard of him, but I'm not surprised to hear this is an old viewpoint. [Indeed, I suspect if this sort of thing hadn't been appreciated in Xunzi's time by his audience, he would have written an essay about it instead of just mentioning it and moving on.]

This bears on our second disagreement, which might be illusory. I agree that ritual, in Xunzi's conception, is primarily social instead of individual. But this isn't exclusively the case, and my impression was that divination as performed by individuals was primarily about cultivating reflection and broadmindedness; if there's a social dimension to it that I'm missing, I'm very interested in hearing more.

The minor disagreements:

To the extent that Xunzi’s work is richer than theirs, it is richer because Xunzi can build on their work. Xunzi can argue against Mozi and Mencius; Mozi and Mencius can’t argue against Xunzi.

Surely it is the case that later thinkers have an advantage over earlier thinkers. I, for example, relied on the mathematicians who worked through set theory to understand the point that "white horses are not horses," whereas Xunzi dismisses it as unproductive sophistry.

Nevertheless, I think people do vary in both the speed and the shape of their thoughts, and this can sometimes be picked up on from the texts they write, even in the presence of other advantages. I hesitate to judge the latter across the gulfs of time and translation; it may have just been bad luck that other thinkers failed to translate as well as Xunzi. Nevertheless, the translations are what we have, and my sense is what it is, and given the agreement of someone who knew more than I it seemed worth sharing.

[And, note the corollary; given that ancient thinkers are disadvantaged in this way compared to modern thinkers, a busy reader needs special inducement to look at the past instead of a modern textbook. Sometimes it's a desire to see the foundations; sometimes it's a belief that times have changed and bad ideas then might have become good ideas now; sometimes it's a glimpse of a genius making sense of another time and another place. Someone who shared my desire to read the original works from the classical period wouldn't need my recommendation to do so, whereas someone looking for a glimpse of genius benefits from the recommendation.]

All in all, it’s an excellent way to miss out on reading the past honestly on its own terms and to not realise that there were rationalists in the past as well.

I thought it was clear in my post that I thought Xunzi had enough of rationality figured out to count as a rationalist, in a way both evident in the text and in the historical record. (Sadly, he didn't approve of Qin, and so maybe he regrets teaching what he knew to others.)

I have more complicated thoughts about meeting the past on its own terms; it definitely has its uses, but 'steelmanning' is normally a different move.

Both the SEP and the IEP are freely available

As it happens, while I agree with the current (as of 2018) SEP article on Xunzi, when I first researched this I thought that SEP article badly misunderstood Xunzi's disagreement with Mencius, in a way that made me pessimistic about reading more SEP articles about it.

Nothing, except that it stunts learning.

There are two different claims here; the first is about my (already professed) ignorance, which I agree with, and the second is about whether or not it is contagious, which I think I disagree with. That is, suppose I read all of Xunzi, and tell others to read a single chapter. This implies that the other chapters are lower value, and so some in the audience will be more willing to move on after reading that chapter; but also presumably it increases the number of people who have read that chapter, and if you picked a good one that's worth it. And the most interested will be excited by that chapter, and then read more.


I so far have not had the time to write up what I like about Xunzi as a whole, and his relevance to modern institutions and individuals, and I'm not sure that I will. I'd be excited to read anything you wanted to write here on the subject, or related ones; another thing on my "someday maybe" list is the relevance of Mohism to modern utilitarianism and effective altruism.

Comment by vaniver on In Defense of the Arms Races… that End Arms Races · 2020-01-16T00:54:00.483Z · score: 7 (3 votes) · LW · GW

The principle: it may make sense to start an arms race if you think you are going to win if you start now, provided that a nastier arms race is inevitable later. 

My impression is that while there were some people who thought the Soviet Union would turn out to be troublesome, most people believed (either genuinely or as the result of wishful thinking) that capitalism and communism could coexist, and thus the nastier arms race later was not inevitable.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-14T19:48:47.373Z · score: 6 (2 votes) · LW · GW

I think the "increase in sensory acuity is an increase in introspective access to signal already present" point is central to understanding meditation, Circling, and their similarity, and think your post makes that point well.

If you just mean the version of "empiricism" where "knowledge comes only or primarily from sensory experience", then I think it does a good job of pointing at how it's trying to do that. I think there are important connotations of empiricism that are missing, tho; like, what is the sensory experience of? This is what I've been vaguely gesturing at with the "nourishing properties of the universe" thing; if I look at a thermometer, I'm trying to get my sense data to connect to 'objective reality'; if I look at my own thoughts, the connection to 'objective reality' is more tenuous. It's not absent; and I think having some sort of reflective practice is a good idea, but I feel like Circling can make a stronger case than "a corollary of meditation."

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-12T18:09:21.958Z · score: 11 (4 votes) · LW · GW

So, wait. Have you ever used Circling to resolve conflicts? Or, seen it used this way? Or, know anyone (whose word you trust) who has used it this way?

I have seen... maybe a dozen attempts to use it this way that I can remember (at least vaguely). Some of them were successful, some weren't; many had the flavor of "well, we haven't resolved anything yet but we know a lot more now". (Also I'm not counting conflicts about where the group attention should be going, which happen pretty frequently.)

Some of the conflicts were quite serious / high-stakes; described somewhat vaguely, I remember one where a wife was trying to 'save her marriage' (the husband was also in the Circle), and over the course of an hour or so we got to a label for her felt sense of what was happening, figured out an "if X, then Y" belief that she held so deeply she hadn't ever looked at it, and then when she asked the question "is that true?" it dissolved and she was able to look at the situation with fresh eyes.

I don't remember being one of the primary parties for any of those conflicts; the closest was when I organized a Circle focused on me to work through my stance towards someone in my life that I was having a conflict with who wasn't present. (I thought that was helpful, but it's only sort of related.)

Also, I noticed a day or two ago that maybe I should back up a bit: when I'm talking about "resolving conflicts," I mean something closer to "do work towards a resolution" than "conflict goes in, result comes out." Like, if we think about democracy, there's a way in which candidate debates help resolve an election, but they aren't the election itself.

There's not an arbitration thing going on, where you take a conflict to the Circle, talk about it for a while, and then the facilitator or the group as a whole or whatever says "well, this is what I think" and then that's the ruling. Instead it's much closer to Alice and Bob relating to each other in a way that conflicts, and that getting explored, and then sometimes Alice and Bob end up relating to each other in a way they agree on, and sometimes they don't.

There's also a clear way in which Circles are a conflict-generating mechanism, in that Alice and Bob can be unaware that they disagree on a topic until it comes up, and now they can see their disagreement clearly.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-12T17:45:58.627Z · score: 3 (1 votes) · LW · GW

I wouldn't want that. 

If I'm inferring correctly, the thing that's going on here is your frustration is at both how the thing went down and that the person who did it is superior to you. If he 'lets you' scream, it's not a fight or a remonstration, it's him humoring you, which isn't the real thing.

To start off: What effects can and should circling have on the social reality while not circling?

Yeah, this is a really tricky question. I think the answer to both is "lots of effects."

Sometimes there are confidentiality agreements (where people get into a high-trust state and share info and then by default that info isn't widely propagated, so that you don't have to think as much about "I trust Alice, but do I trust Alice's trust?") but there aren't any sort of "forgetting agreements" (where I share something shocking about me and you don't want to be friends anymore and then I can say "well, can you just forget the shocking thing?").

Given that it can have lots of effects on the social reality outside of Circling, the question of "are those expected effects good or bad?" is quite important, as is the question of "what standard should you use to measure goodness or badness of those effects?".

A section of my draft for this post that I decided to move to a comment, and then later decided should be its own post, is about the "will Circling with people I know be good for my social goals?" question, which I answered with "quite probably not on the meta-level I think you're thinking on, but I think it will on a different meta-level, and I think you might want to hop to the other meta-level."

To the extent it's possible, I think it's good for people to have the option of Circling with strangers, in order to minimize worries in this vein; I think this is one of the other things that makes the possibility of Circling online neat.

Comment by vaniver on Voting Phase of 2018 LW Review (Deadline: Sun 19th Jan) · 2020-01-09T23:34:28.240Z · score: 5 (2 votes) · LW · GW

I also note that I think there's signal in your decision to only skim a post, as opposed to reading it, but as noted in habryka's response, it's probably a weak signal.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-09T23:14:47.484Z · score: 3 (1 votes) · LW · GW

So my question is: can Circling tell you “actually, what you need is not Circling but something else [like a (metaphorical) ‘court’]”? Or, to put it another way: when should you not use Circling, but instead use some ‘court-like’ approach?

My first reaction is to pick apart the question, which suggests to me we have some sort of conceptual mismatch. But before I try to pick it apart, I'll try to answer it.

I think Circling won't "tell you" anything about that, except in the most metaphorical of senses. That is, suppose you're not bought into using Circling for resolving issue X; Circling will likely bring that to conscious attention, and then you might realize "ah, what I really want to do instead is settle this another way." But the judgment is yours, not Circling's, because Circling isn't trying to generate judgments. (I should note that it could be the case that the other participants either notice their own resistance to using the Circle in that way, or might notice your resistance before you do and bring that up, so I mean "yours" in the 'final judgment' sense as opposed to the 'original thinking' sense; you can end up agreeing to things you wouldn't imagine.)

As mentioned before, if you're not an experienced Circler, I wouldn't use it as a conflict-resolution mechanism, and I would be suspicious of someone who was an experienced Circler trying to immediately jump to conflict-resolution with someone new to Circling. If you have a conflict where everyone thinks everyone understands the issue, and yet there's still a conflict, I don't think Circling will point towards a resolution.

in that case I have further questions, concerning the meaning of the comments that gave me said impression

I would be interested in seeing the things that gave you this impression.

I confess I still do not know whether you think (and/or claim) that Circling is supposed to be used for object-level conflict resolution, or not. I think that this is important; in fact, I don’t know how much more progress can be made without getting clear on this point.

I agree that settling that seems useful. I think your question attempts to be "yes xor no" but the answer to the question as written is "yes and no," and so I responded with a question-substitution to try to identify the thing that I think divides the cases more cleanly.

That is, I claim that Circling can help people understand each other (and their way of interacting) better. Separately, I observe that many conflicts have, at their root, a misunderstanding. This generates the hypothesis that Circling would resolve many conflicts by knocking out the root misunderstanding generating them, or by transforming them from "two people trying to solve two problems" to "two people trying to solve one problem," which may do most of the work of resolution.

Of course, not all conflicts have a misunderstanding at the root; sometimes only one of us gets to win the chess game, or decide what restaurant we go to, or whatever. For such conflicts, there's no strong reason to think Circling would help. (There are weak reasons, like an outside-view guess that "if you think there are no misunderstandings, this is nevertheless sometimes a thing you think where there are misunderstandings," but I wouldn't want to make a strong case on weak reasons.)

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-09T20:08:54.749Z · score: 10 (4 votes) · LW · GW

It could mean that the problem is gone, but it probably means you're setting the cut later.

I asked because it is considered appropriate in Circling to bring emotions in the forms they want to be expressed in, including things like screams. Also the sorts of emotions people express in Circles run basically the whole emotional range, from pleasant to challenging.

I had the hypothesis that you were imagining a version where emotions had to pass through some external filter, like "politeness," and so rather than ending up with an accurate picture of where people are at, Circlers would end up with a systematically biased or censored picture. I don't think that happens via an external filter based on valence. That is, I think there are internal filters and people self-censor a lot (as part of being authentic to the complicated thing that they are), and I think there might be some external procedural filters.

I am somewhat worried about those procedural filters. Like, if I have a desire to be understood on a narrow technical point, the more Circling move is to go into what it's like to want to convey the point, but the thing the emotion wants is to just explain the thing already; if it could pick its expression it would pick a lecture.

[Worried because of the "can't allow for intimacy" point, and what to make of that is pretty complicated because it touches on lots of stuff that I haven't written about yet.]

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-09T19:48:52.601Z · score: 3 (1 votes) · LW · GW

your first thought is "let's analyze why the person said that", rather than "wait, is my thing bad?"

It definitely makes sense to be worried about Bulverism, where my attention becomes solely about how it lands for the other person (and figuring out what mistake of theirs prevents it from landing the way I want).

I think you often want to figure out all of 1) what the causal history of their statement is, 2) whether your thing was bad according to you, 3) whether your sense of goodness/badness is bad according to you, in contact with their statement and its causal history. What order you do those in will depend on what the situation is.

Like, suppose I write a post and someone comments with a claim that I made a typo. Presumably my attention jumps to the second point, of "oh, did I type incorrectly?", and only later (if ever) do I ask the questions of "why do they care about this?" and "am I caring the right amount about spelling errors?"

If instead I make a claim and someone says "that claim misses my experience," presumably my attention should jump to the first point, of what their experience was so that I can then determine whether or not I was missing their experience when I said it, or expressed myself poorly, or was misheard, or whatever.


I note that I am personally only minimally worried about specific Circlers that I know falling into Bulverism, and I feel like if I knew the theory / practice of it more I would be able to point to the policies and principles they're using that mean that error is unlikely for them. Like, for me personally, one of the protective forces is something like "selfish growth," where there's a drive to interpret information in a way that leads to me getting better at what I want to get better at, and so it would be surprising to see me 'write off' criticism after analyzing it, because the thing I want is the growth, not the defense-from-attack.


I think there are definitely developmental stages that people can pass through that make them more annoying when they advance a step. Like, I can imagine someone who mostly cares about defending themselves from attacks, and basically doesn't have a theory of mind, and you introduce them to the idea that they can figure out why other people say things, and so then they go around projecting at everyone else. I think so long as they're still accepting input / doing empiricism / able to self-reflect, this will be a temporary phase as their initial random model gradient-descents through feedback to something that more accurately reflects reality. If they aren't, well, knowing about biases can hurt people, and they might project why other people dislike their projections in a way that's self-reinforcing and get stuck in a trap.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-09T03:12:07.816Z · score: 6 (2 votes) · LW · GW

is Circling supposed to be for resolving conflicts and other object-level situations, or is Circling supposed to be for investigating this meta-level “how does the machinery operate” stuff? I’ve seen pro-Circling folks, you included, appear to vacillate between these two perspectives.

I think 'better Circling' involves leaning towards investigating the meta-level. I wouldn't recommend that anyone's first Circle be about exploring a dispute they're involved in; that seems like it would be likely to go poorly. In situations that seem high-stakes, it's better to understand the norms you're operating under than not understand them!

Perhaps it can be used for both? This would be surprising, and would increase the improbability of the pro-Circling position, but certainly cannot be ruled out a priori.

I think it helps you understand conflicts, and that sometimes resolves them, and sometimes doesn't. If Alice thinks meat should be served at an event, and Bob thinks the event should be vegan, a Circle that includes Alice and Bob and is about that issue might end up with them understanding more why they think and feel the way they do, and how their dynamic of coming to a decision together works. But they're still going to come to the decision using whatever dynamic they use.

To the extent people think Circling is useful for mediation or other sorts of resolution, I think that's mostly informed by a belief that a very large fraction of conflicts have misunderstandings at their root, or that investigating the generators is more fruitful than dealing with a particular instance. 

Perhaps so, but it seems to me that this is all the more reason why Circling is an inappropriate tool with which to determine whether what you need is meditation, or a court.

I'm confused by this, because it seems to me to imply that I thought or argued that Circling was the tool you would use to determine how to resolve an issue. What gave you that impression?

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-09T01:24:27.330Z · score: 3 (1 votes) · LW · GW

Thanks for the detailed reply!

The whole thing just reeks of valium. I'm sure you'd say there's a lot of emotionality in circling and that you felt some sort of deep connection or something. This is quite possibly true, but it seems there's an important part of it that's missing.

Would this feel different if people screamed when they wanted to scream, during Circling?

I intended something more like control. My anger is mine, its form is mine, and its destruction is mine. Restricting my expression of it is prima facie bad, if sometimes necessary. Restricting its form in my head, under the guise of intimacy no less, is the work of the devil.

What I'm hearing here (and am repeating back to see if I got it right) is that the suggestion is heard as being about how you should organize your internal experience, in a way that doesn't allow for the way that you are organized, and so can't possibly allow for intimacy with the you that actually exists.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-08T02:24:17.813Z · score: 10 (3 votes) · LW · GW

I'd be interested in that, but don't think I believe it enough to write it myself.

A brief sketch of why: there's the "external universe", and the "conscious mind", and normal scientific empiricism is a way for the conscious mind to expose itself to the universe, letting it be reshaped to better match the universe that it's in.

When you look at meditation, then you're replacing "external universe" with something like the "mental universe." And so you still have this way for the conscious mind to expose itself to the mental universe that it's in, and be reshaped to better match it. But it's less obvious that 'the mental universe as revealed by meditation' is worth reshaping towards, or has the 'nourishing properties of the universe' or whatever. 

Like, with regular science we have materialism, and a pretty strong belief that there's one underlying reality, and that it's explainable through math. With Circling, we have other people to get around our blind spots. With meditation... there's some reason to be optimistic, but it seems weaker.


Comment by vaniver on Circling as Cousin to Rationality · 2020-01-08T01:06:59.809Z · score: 5 (2 votes) · LW · GW

This seems pretty accurate to me. 

I think Circlers are more optimistic about Circling's ability to handle conflicts that arise in a Circle, or to use Circling as a method for mediation. I think this comes from an implicit (explicit?) belief that a lot of conflicts are the result of either simple or complex misunderstandings, and so by pressing the "understand more" button you can unravel many of them, or make them much simpler to resolve. 

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-08T00:59:56.345Z · score: 3 (1 votes) · LW · GW

I actually objected to (and was somewhat surprised Vaniver didn't object to) your description upthread of "either Alice betrayed Bob, or she didn't". Betrayal is very much not an atomic object (and importantly so, not just in the generic "everything is complicated" sense)

I understood Said to mean something like "either Bob would think he had a convincing case that Alice betrayed him, or Bob would change his mind, and assuming Bob follows some standards of reasonableness, a Reasonable Observer would agree with Bob."

So early on in this thread I said:

Like, in my view this one is more of a "patch that prevents a predictable failure mode" than a claim that, like, justice or principles don't exist and only emotions do. [I am not sure how widespread my view is.]

and later I said:

He might discover that Alice is contrite and wants to do better, or that Alice thinks his expectations were unclear, or Alice thought he was in violation of some of her expectations, and so thought she was matching Bob's level of reliability. Or he might discover that Alice is uninterested in his wellbeing, or in collaboratively seeking solutions, or in discussing the possibility that she might have done anything wrong.

I thought the second does an adequate job of pointing out "betrayal is complicated," in that a discussion of it could go many ways and I do not believe "betrayal is a malformed concept," as pointed out in the first. Like, for any particular case, I think you could in principle reach a "fact of the matter" that either Alice betrayed Bob, didn't, or that Alice and Bob have irreconcilable standards (which you might lump into the 'betrayal' case, or might want to keep separate). 

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-08T00:58:24.669Z · score: 9 (4 votes) · LW · GW

the quoted part (which is a sentiment I have seen pro-Circling and pro-NVC and pro-similar-things folks express quite a few times) is something which seems to me to be taking a view of relationships, and people, which is deeply mistaken, insofar as it fails to correctly describe how many (perhaps, even most) people operate.

I have seen this misunderstanding happen and result in a significant amount of misery. (That is, Bob viewed himself as being treated unjustly by Alice, who cared about Bob's suffering and was interested in understanding it, but a big part of Bob's suffering was that Bob and Alice didn't share a notion of 'justice,' and so they couldn't agree on 'what happened' or 'what mattered' because they had different type signatures for them.) I was not able to bridge it that time, despite seeing both sides (I think).

what we are interested in discussing aren’t the sense-impressions—we care about the things themselves.

Where my attention is going at the moment is not the sense-impressions, or the things themselves, but the machinery that turns the sense-impressions into models of the things, and the machinery that refines that modeling machinery. 

I think it's difficult to keep one's attention on that part of the process; seeing the lens instead of just seeing the object through the lens. I view "owning experience" as, among other things, an attempt to direct attention towards the lens using a rule that's understandable even before you see the lens.

[I hope it's clear, but it's worth saying, Circling is a lot like meditation, and very little like courts. That is, I expect it to help you deepen your understanding of how you perceive the world and how others perceive the world, and for it to make difficult topics easier to navigate, but I expect it to sometimes do those things at the expense of figuring out object-level issues. As in this set of paragraphs, where I followed my attention from the object level case to the more abstract question of how we settle such cases.]

By all means, we can say “Bob, you think that Alice betrayed you, but consider that perhaps actually she didn’t?”. But any account of the situation, or any attempt to resolve the matter, that fails to refer primarily to the fact (or non-fact) of Alice’s betrayal will quite miss the point.

I do object here to some of the implications of saying "the point" instead of "Bob's point." (While thinking that it's bad to miss Bob's point.)

Like, given that Bob made the point, calling it 'the' point is probably legitimate, but it is interesting that in this situation Bob cares about this when Carl, put into the same situation, might not. The implication that I'm troubled by is the one where Bob assumes a shared level of understanding of, or buy-in to, his conception of where the importance lies, while not seeing that conception as one choice out of many possible choices.

Like, in my mind it's the difference between the judge, who orients around determining what The Law says about the case in front of them, and the legislator, who orients around determining which of many possible laws should be enacted. Or it's mistaking the intersubjective and the objective, thinking that the rules of chess are inherent in mathematics instead of agreed on.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-07T20:01:13.383Z · score: 11 (4 votes) · LW · GW

It seems you're thinking of Bob as someone who's already pretty assertive and just needs tactical advice, and in that case I agree it can be good advice. But for someone who's less assertive, they might interpret the advice as basically "be more meek", especially if there's pressure to follow it.

I think this goes through for a less assertive Bob as well, but perhaps it depends on why Bob is less assertive.

That is, suppose Bob is not happy with how things are going, but also thinks it's very costly to have arguments with Alice. So Bob stores up resentments until they exceed the threshold of the cost of having an argument, and then they have the argument, and then the resentments are depleted. But afterwards Bob probably also feels a desire to walk back his position, since, according to himself, he was out of line; he had to force himself into the argument, and it's not a place where he's comfortable.

One of the things we might say from the outside is "ah, Bob, you should resent things more, then the explosions will happen more frequently," which Bob might think is not obviously making him better off. Or we might say something like "ah, Bob, you should imagine the costs of an argument with Alice are lower than they actually are," which Bob might think is misrepresenting his experience or ability to assess costs.

The thing "owning experience" is suggesting is closer to "there's a way to bring your actual experience to the relationship that is less likely to lead to those sorts of arguments." That is, you can lower the cost of sharing using technique, and so if someone is sharing too little because the costs are too high, it's useful for them as well.

And if Bob discovers that Alice is indifferent to his suffering, well, that's a thing that he should think seriously about.

But also it might be the case that Bob is less assertive because Bob doesn't think his suffering matters, and so the only way he protects himself is by relying on abstract rules about concepts like "betrayal." Then saying "don't talk about the abstract rules, talk about the impact to you" makes Bob not say anything, because he's ruled out caring about the impact to him and now he thinks the context has ruled out caring about the abstract rules.

For such people, I don't think the first exercise should involve lowering of boundaries. Instead it'd be something like practicing saying "no" and laughing in someone's face, until it no longer feels uncomfortable. Doing these kinds of things certainly helped me.

So according to me, Circling is about understanding psychology / relationships, and boundaries come up because they're both an important part of the source material and they're related to how you look at the source material. The primary mechanism is 'understanding' boundaries instead of 'lowering' them, tho; like, often you end up in situations where you look at your boundaries and go "yep, that's definitely helpful and where it should be" or you notice the way that you had been forcing yourself to behave a particular way and that was self-harming because you were ignoring one of your own boundaries.

But also I think I run into this alternative impression a lot, and so something is off about how I or others are communicating about it. I'd be interested in hearing why it seems like Circling would push towards 'letting betrayal slide' or 'lowering boundaries' or similar things.

[I have some hypotheses, which are mostly of the form "oh, I was assuming prereq X." For example, I think there's a thing that happens sometimes where people don't feel licensed to have their own boundaries or preferences, or aren't practiced at doing so, and so when you say something like "operate based on your boundaries or preferences, rather than rules of type X" the person goes "but... I can't? And you're taking away my only means of protecting myself?". The hope is it's like pushing a kid into a pool with a lifeguard right there, and so it generally works out fine, but presumably there's some way to make the process more clear, or to figure out in what cases you need a totally different approach.]

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-07T05:02:35.583Z · score: 8 (3 votes) · LW · GW

Your analysis seems fine, and it also seems worth noting that while Circling might teach you broadly applicable lessons, Circles themselves are time-boxed containers where everyone involved has chosen to be there. That is...

These are not easy questions. They must be answered with attention to the particulars of a person’s situation—their personality, their social circle, etc.—and with the fact firmly in mind that such choices, if made repeatedly, compound, and form incentives for the actions of others, and signal various things to various sorts of people.

It seems to me like some large part of the usefulness of Circling comes from "owning experience" compounding and forming incentives and signalling things. That's separate from the claim that you should own your experience everywhere.

I think that this perspective focuses entirely too much on people’s feelings about things, and not nearly enough on the facts of the matter. 

I think that, at least with relationships, people's feelings are often the primary facts of the matter. Like, obviously when you're interacting with your barista, what you ordered and what drink they prepared are the primary facts of relevance, and how the two of you feel about it is secondary. But if Alice and Bob are choosing to build a relationship together, how they think and feel about their interactions is much more important than basic facts about those interactions.

not nearly enough on the facts of the matter. 

Actually, a different take: "owning experience" is about teaching people the map-territory distinction in emotionally charged situations. Like, it will feel as tho "the territory is that you betrayed me," and the principle forces a swap to "my map is that I'm alone." This lets you look at how the map is constructed, which is potentially more fruitful ground for exploration than whether it passes or fails a particular experimental test this time.

And the change in type is important; if you just let people say the words "my map" instead of "the territory" they will change their language but not their thinking, and this will impede their ability to go deeper. 

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-07T04:44:43.747Z · score: 3 (1 votes) · LW · GW

how would you, for instance, differentiate between “I feel the absence of companionship” and “I feel lonely, and I think this is due to absence of companionship”

For me personally, the first one is like seeing the words "absence of companionship" in my mind's eye, and the second one is like feeling a tugging at my navel, trying to label it with "absence of companionship", and getting only partial resonance. Like, I'm not confident yet, and so it seems like there's still more info there that I should search for; maybe it's romantic companionship, maybe it's having a regular D&D group again, maybe it's something else.

in other words, what you conceptualize as an affective state, could also be conceptualized as the combination of an affective state with a cognitive one, yes?

Yes, altho I don't think I'd categorize 'states' that way. (Like, all mental states are 'cognitive' in some sense, and the standard definition of 'affective' seems very broad; like, I see a cat on the street and I feel valence and motivational intensity.)

it nonetheless seems like quite a poor idea to refer to them using the same word that we also use to refer to an entirely external fact.

I mean, it sure is nice to use two syllables instead of more than a dozen! When typing you really don't have a good option besides using more words to achieve more precision, but when physically embodied, subtext can be quite rich. (Like, compare describing a 'spiral staircase' with text, or with your voice and hands.)

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-06T22:48:35.693Z · score: 6 (2 votes) · LW · GW

I think I've felt distinct things that corresponded to:

  • "I feel less companionship than I did a moment ago"
  • "I feel the absence of companionship"
  • "I think I would be happier if I had more companionship." 

Now, which one of those is "I feel alone" and which one is "I feel lonely"? Probably not obvious, and maybe I'd even refer to them using the same short phrase each time. But it seems useful to try to feel and convey those sorts of distinctions using word choice, as well as more words.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-06T18:03:50.280Z · score: 6 (2 votes) · LW · GW

I agree with that, so let me see if I can point more clearly at where I think the difference is.

If Bob leads with impact to Bob, he sets up a conversational context of collaboratively determining what situation they're in. He might discover that Alice is contrite and wants to do better, or that Alice thinks his expectations were unclear, or Alice thought he was in violation of some of her expectations, and so thought she was matching Bob's level of reliability. Or he might discover that Alice is uninterested in his wellbeing, or in collaboratively seeking solutions, or in discussing the possibility that she might have done anything wrong. In all of those cases, Bob has opened up to more information about the world, and has a better vantage point to move forward from (even in cases where he decides to no longer associate with Alice!).

[Of course, it helps to be clear about what sort of bids and frames he's suggesting if this is new to Alice; cultural communication tech works better when both parties have it.]

If Bob leads with Bob's frame, he sets up a conversational context of arguing who gets to decide what situation they're in, with the opening bid being "Bob" with the relevance that Bob thinks "Alice misbehaved." Even if Alice would believe that Alice misbehaved looking at it from the outside, Alice might have serious objections to different layers of the procedure, which are now mixed in to the object level issue, and it's quite possible that Alice wouldn't believe that she misbehaved if looking at it from the outside.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-06T04:54:03.131Z · score: 3 (1 votes) · LW · GW

It seems more grammatical to say "lonely," but I notice the two words have different feels to me, and it could be the case that "alone" fits more than "lonely" does, tho the difference between them is subtle.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-06T04:52:51.928Z · score: 5 (2 votes) · LW · GW

Perhaps this is a tangent to the discussion

This seems like a promising starting point to explore what's going on, from my perspective.

the thing that follows those first two words is not an emotion, but a claim about the world: "(I am) alone."

As it happens, I'm currently typing this comment in a room that I'm in by myself. But there's a specific bodily / emotional sensation that I'm not feeling at present, which I was feeling the last time I said "I feel alone" to someone, despite being in a room with multiple people then. 

It's also the case that I can feel my chair pressing into my body, and the top of the desktop pressing into my leg where I'm awkwardly resting it, and some tension in my arms because they had to stretch to my distant keyboard. (Don't worry, I've since moved closer to it.)

One thing that's true of my experience (which I expect to be true of the experience of, like, somewhere around 80% of people?) is that I will sometimes get sensations that are connected to 'beliefs' as part of my sensorium. That is, they're more like the haptic sensations corresponding to sitting than they are like my internal monologue or other things that I traditionally think of as "beliefs". Sometimes this is an embodied sensation, like "it would be inappropriate for me to say something here" might manifest as a tightness of the throat, but sometimes it isn't.

[Staying with the level of sensation helps build this mapping and keep things 'accurate'; if I feel a tightness in the throat and I don't know what belief about the world it corresponds to yet, it's probably better for me to share the sensation than it is to share my guess of what I'm reacting to about the world.]

Speculation time: sometimes I think embodied emotions are straightforwardly physiological; like I get angry and feel it in my arms because my SNS is actually making my arms behave differently. Other times I think what's happening is something like the proprioceptive sense, but for 'important concepts', like relationships / what other people are thinking / how particular fields of math or science work.

Like, imagine we're drifting on rafts on a body of water; I could see us moving away from each other and call that out to you, and you could presumably also see the same thing. Or there could be the two of us having a conversation, and I could have a sensation that seems basically the same, except it's metaphorical; "I'm feeling us moving apart" in the weird part of my world-model that's using a spatial analogy for stances we're taking towards each other or beliefs we have about each other or whatever. Sharing that seems potentially more useful here, because we might be tracking movement through different 'metaphorical oceans'.

Acknowledging it as a belief makes it possible to consider "Is this belief true? Why do I believe it is true?" Miscalling it a feeling protects it from testing against reality: "How can you question my FEELINGS?"

One 'fun game' you can play with friends is to have person A turn away from person B, who then lightly touches the back of person A with a randomly chosen number of fingers, and then person A has to guess how many fingers they're being touched with. (Generally, people do 'okay' at this, which is much worse than they expect to do.) Or you can do the cutaneous rabbit effect.

Much less fun to do a demonstration of, and so I recommend just reading about it, are edge cases of pain sensation, like when a man felt intense pain due to a nail passing through his boot, despite it missing his foot.

That is, if you view feelings as sense data like any other, it makes sense to apply the same sorts of consistency checks that you would to normal sense data. Like, if you live in a world where your eyes can be fooled, and your feeling of how many points are touching your back can be imprecise, presumably you should have similar sorts of suspicion towards your feeling that your housemate isn't doing their fair share of the chores.

According to me, the way you fix things like optical illusions is not by closing your eyes, but instead by developing a more precise model of how exactly your vision works.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-05T21:41:33.611Z · score: 3 (1 votes) · LW · GW

It seems clear to me that one should not, under any circumstances engage in a group therapy exercise designed to lower your emotional barriers and create vulnerability in the presence of anyone you trust less than 100%

I agree with this almost completely. Two quibbles: first, styles of Circling vary in how much they are a "group therapy exercise" (vs. something more like a shared exploration or meditation); second, I think "100%" trust of people is an unreasonable bar; like, I don't think you should extend that level of trust to anyone, even yourself. So there's actual meat in the question of "what's it like to Circle with someone that you 90% trust? Should you do that?".

Also I think the ideal of Circling agrees with this underlying sentiment, in that the goal is not to lower emotional barriers but to understand them. It may be that as part of understanding them, they get lowered, or they might be maintained or raised. I've been in many Circles where the content of the Circle was "huh, it seems like we don't trust each other enough to be open / handle deep topic X. What's that like?".

One of the things that I worry about some with Circling and rationalists is something like... the uncanny valley of noticing emotional responses while not trusting them, or something? Like I'm reminded of this comment by jimmy:

You have to be careful when dismissing subconscious fears as irrational. They were put there for a reason, and they may still be relevant. If I was staying in a "haunted house" in a city where it was not isolated or abandoned or anything, I don't think it'd scare me one bit. A secluded/abandoned haunted house might be scary, and for good reasons. It would be unwise to assume that your fear is entirely irrational.

I went to a local park with some friends one night to hang out. Both I and another friend were uneasy about it, but dismissed our fears as irrational (and didn't mention it). We both figured that we didn't have any reason to think that something bad was gonna happen in the sense that you can't predict the future through "ESP", but it didn't occur to us that "you're scared because that isn't a safe place to be at night you dolt!"

Turns out some guys showed up and tried to stab us, nearly succeeding. I learned the "almost hard" way not to disregard fears right off the bat.

If I thought Circling would on average make people more exploitable or worse at defending themselves / avoiding bad outcomes, I wouldn't recommend it. I'm less clear about what to do when there's a valley you need to cross, which I think is true for theory and rationality as well, and my rough guess is "the only way out is through."

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-05T21:16:30.269Z · score: 6 (2 votes) · LW · GW

I thought I responded to this a few days ago, but apparently never hit submit.

"Alice betrayed Bob" contains some information that "Bob feels alone" doesn't contain, though. I don't think we should always discard such information.

Specifically, the sorts of additional bits of information that I think are important are 1) Bob's expectations and 2) the appropriateness of Alice's or Bob's emotions. (If Bob's expectation of Alice was reasonable, then it is appropriate for Bob to feel hurt and appropriate for Alice to feel remorseful; if Bob's expectation of Alice wasn't reasonable, then it might be inappropriate for Bob to feel hurt.)

I don't see the Circling suggestion here as a moral claim, of the form "this sort of information is bad / you shouldn't reason using it"; I view it as a practical claim, of the form "Bob will probably be more satisfied with how the interaction goes if he opens it with 'I feel alone' than with 'you betrayed me'." Like, in my view this one is more of a "patch that prevents a predictable failure mode" than a claim that, like, justice or principles don't exist and only emotions do. [I am not sure how widespread my view is.]


Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T22:38:39.713Z · score: 24 (8 votes) · LW · GW

I think it would be a mistake for us to use that term here; I think as well as describing a pattern of behavior it comes with an implied interpretation of blameworthiness that we really don't want to import.

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T01:12:41.480Z · score: 24 (6 votes) · LW · GW

my guess is that they would not share his assessment that the explanation that has been provided by T3t above is completely inadequate

This worries me because of double illusion of transparency concerns. That is, one frame we could have here is that Said is virtuously refusing to pretend to understand anything he doesn't understand. Suppose the version of "authentic" that is necessary to make this post work is actually quite detailed and nuanced, in ways that T3t's guess doesn't quite get at; then it seems like T3t and I might mistakenly believe that communication has taken place when it actually hasn't, whereas Said and I will have no such illusions.

If there are problems with this situation, I think they come from different people having different expectations of how bad it is to not have communicated something to Said, and I think we fix that by aligning those expectations.

The usual pattern of Said's comments as I experience them has been (and I think this would be reasonably straightforward to verify)

This lines up with a model where Said is being especially rigorous when it comes to dependencies, and the audience isn't, and the audience has some random scattering of dependencies where each further reply is only useful to a smaller fraction of the population. It also is explained by people becoming more and more pessimistic that communication will happen, and so not tuning in to the tree to follow things.

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T17:43:11.857Z · score: 22 (7 votes) · LW · GW

No, it does not clearly demonstrate that. What it demonstrates is that you, and specifically you, do not understand the concept

Also quanticle, also nshepperd, and presumably lurkers who upvoted their comments.

that explaining it to you specifically is difficult.

I think it'd be fair to read the first paragraph of my post as implicitly setting my hopes for this post as "explaining it to Said." (In the second paragraph I say I'm not going to fully explain Circling, but if the core analogy that I'm trying to make is missing a crucial detail, that seems quite relevant.)

In a different recent post, I explicitly set my bar as "I ~80% expect this to seem like nonsense." I don't know how much of that post seemed like nonsense to Said, but I'd guess 'a lot', and nevertheless he left a detailed comment that struck me as a solid example of "yes, and" or "this fuzzy thing seems like it rhymes with the fuzzy thing you said."

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T17:28:17.955Z · score: 3 (1 votes) · LW · GW

I think your broader position in this tree is an incorrect hypothesis that's pointing towards correct observations, in approximately the same vein as how, according to Google, people start complaining about the strictness of code reviews when reviewers take too long to respond.

I have a hypothesis of my own, but I'm not very certain of it yet, and it relies on some Circling jargon, so I want to privately check some things (with both you and Said and maybe other observers) before going to the trouble of explaining the generator of the hypothesis.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-02T00:29:30.993Z · score: 3 (1 votes) · LW · GW

Either that or Google Docs. But anyway, I currently don't expect to get started on such a draft until Friday, probably, and so I think you should comment here if you want to get such comments out sooner than then.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-02T00:12:30.249Z · score: 16 (6 votes) · LW · GW

(This, by the way, is why I prefer Eliezer’s method of starting from the dependencies…)

I wanted to note that if dependencies are randomly already present in some fraction of the population, the 'reverse order' lets you convey your point to growing fractions of the population (as you go back and fill in more and more dependencies), whereas the 'linear order' doesn't let you convey your point until the end (when everyone is able to get it at once).
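
As a toy illustration of this claim (a hedged sketch with made-up numbers, not anything from the comment itself): suppose there are five prerequisite ideas, each reader independently already has each one with probability 0.5, and a reader "gets the point" once every prerequisite is either already known or has been filled in by a post. A small simulation shows the 'reverse order' (point first, then dependencies filled in backwards) reaching a growing fraction of readers with each post, while the 'linear order' reaches nobody until the final post states the point:

```python
import random

random.seed(0)
DEPS = 5         # prerequisite ideas, each eventually filled in by its own post
AUDIENCE = 1000  # readers; each already has each prerequisite with prob. 0.5

readers = [{d for d in range(DEPS) if random.random() < 0.5}
           for _ in range(AUDIENCE)]
ALL_DEPS = set(range(DEPS))

def fraction_reached_reverse(step):
    """'Reverse order': post 0 states the point; the next `step` posts
    fill in the last `step` dependencies, working backwards."""
    presented = set(range(DEPS - step, DEPS))
    return sum((known | presented) >= ALL_DEPS for known in readers) / AUDIENCE

def fraction_reached_linear(step):
    """'Linear order': dependencies first, point last; nobody can get the
    point until the final post (post DEPS) states it."""
    return 1.0 if step >= DEPS else 0.0

for step in range(DEPS + 1):
    print(step,
          fraction_reached_linear(step),
          round(fraction_reached_reverse(step), 3))
```

Under these assumptions the reverse scheme conveys the point to a small fraction of readers immediately (those who happen to have all the prerequisites) and to everyone by the end, while the linear scheme conveys nothing until the last post, at which point everyone gets it at once.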

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T00:02:59.308Z · score: 48 (12 votes) · LW · GW

My guess would also be that Vaniver perceived the comment as at least somewhat of an attack, though I am not super confident, though he could chime in and give clarification on that.

The history was as follows:

  1. Look at the earliest reply in my inbox, agree with it (and Raemon's comment), and edit the post.
  2. Scroll up and see a large comment tree.
  3. In finding the top of the large comment tree, see another comment; decide "I'll handle that one first."

So my view of Said's comment was in the context of nshepperd's comment, at which point I already saw the hole in the post and its shape.

This splits out two different dimensions: the 'attack / benign' dimension and the 'vague / specific' dimension. Of them, I think the latter is more relevant; Said's comment is a request of the form "say more?" and nshepperd's is a criticism of the form "your argument has structure X, but this means it puts all its weight on Y, which can't hold that much weight." The latter is more specific than the former, and I correspondingly found it more useful. [Like, I'm not sure I would have noticed that I also don't define truth from just reading Said's comment; noticing that was quite helpful in figuring out what parts of 'authenticity' were relevant to describe.]

However, this is because nshepperd made a bet that paid off, in that they were able to precisely identify the issue with the post in a way that could be easily communicated to me. If nshepperd had made a similarly precise but incorrect guess, it easily could have been worse than a vague "say more?". That is, there's not just the question of where the 'interpretive labor' burden falls, but also a question of what overall schemes minimize interpretive labor (measured using your cost function of choice).

I interpreted both of them as benign; if anything nshepperd's is more of an attack because it directly calls "authenticity" an applause light.

Also, related to a thread elsewhere, on 'obligations' to respond to comments: I mostly don't worry about outstanding 'attacks' on me of this type, because of something like socially recognized gnosis. That is:

  • In worlds where "everybody knows" what authenticity is, and Said is the lone ignoramus, I lose very few points by not responding to Said saying "but what is authenticity?", because most of the audience views the question as a tiresome distraction.
  • In worlds where I want to believe or want to enforce that "everybody knows" what authenticity is, then I lose many points by not responding to Said saying "but what is authenticity?", because the audience views the question as a pertinent point, or at least evidence that others don't know also.
  • In worlds where some people know lots about authenticity, and others know little, then when Said says "but what is authenticity?", I can respond with "this post is for people who know what I mean by that, and I'm not holding it to the standards of people who don't know what I mean by that" and both groups can continue satisfied (the former, discussing among a group that shares vocabulary, the latter, knowing that the post is openly not up to their standards). Which should generally be a thing that I'm willing to be open about, altho it sometimes generates some social awkwardness.
  • And in worlds where I just forgot that not everybody has 'authenticity' as a shared label, then the question "but what is authenticity?" is a welcome pointer towards more that has to be written.

So some things that I think would be nice:

  • It is permissible to respond to clarifying questions with "sorry, that's a prerequisite that I won't explain here," which is taxed according to how ludicrous it is to impose that as a prerequisite.
  • Authors have well-placed trust in the audience's ability to assess what observations are germane, and how seriously to take various 'criticisms,' so the tax from the previous point seems accurate / ignoring comments that seem bad to them is cheap instead of expensive.
  • "The Emperor Has No Clothes" objections have a place, tho it might not be every post.
  • Everyone gets better at interpretive labor in a way that makes communication flow more easily.

Comment by vaniver on Don't Double-Crux With Suicide Rock · 2020-01-01T23:06:53.945Z · score: 36 (8 votes) · LW · GW

Presumably double crux with Suicide Rock would reveal that the Rock doesn't have any cruxes, and double crux with someone suffering from algorithmic bad faith would also reveal that, tho perhaps more subtly?

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-01T22:53:47.752Z · score: 3 (1 votes) · LW · GW

I meant "the draft of the future post," which doesn't exist yet. 

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-01T21:58:56.004Z · score: 4 (2 votes) · LW · GW

I do have much to say about this concept as you’ve described it. I wonder if you would prefer such comments here, or saved for the fuller description/explanation posts which you intend (if I understand your comments correctly) to write in the future?

Hmm. Rather than saving such comments for the future post, I'd rather see them on the draft of it, so that it's polished by the time it gets published, instead of going through many revisions in the open or the fuller meaning being hidden in deep comment trees. But if it takes a while to write the other post, then that imposes the cost of missing out on the comments on this post. My guess is you should write comments here, tho I will be more likely than normal to say "ah, I'll respond to that later."

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T17:34:53.843Z · score: 6 (3 votes) · LW · GW

As a first approximation, "authenticity" means communicating one's thoughts and feelings as one feels them, without adding thoughts and feelings made up for strategic purposes.

To elaborate on this, a common move in Circling is to notice the thoughts and feelings made up for strategic purposes and explicitly name the strategic purposes. That is, when I notice "I want to be closer to you," I might directly say "I notice I want to be closer to you" instead of saying a sentence designed to have the effect of making us closer.

That is, just like I don't think authenticity involves "saying everything you think," I don't think authenticity means "giving up on goals and strategies." 

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T17:29:57.987Z · score: 5 (2 votes) · LW · GW

Indeed, I hear that "Circling" may itself become legally protected to refer to a specific style (probably the Circling Institute's), rather than serving as the umbrella term for related styles, which would further complicate things. 

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-01T17:27:24.186Z · score: 14 (7 votes) · LW · GW

It seems to me like a stretch to take Sean's commitment to authenticity as being just like what a scientist does who's committed to the truth.

I mean, all analogies are stretches; the question is in what way and how far. There's a reason the post has 'cousin' in the title instead of 'sibling' or 'distant relation.'

You could similarly describe the commitment to God of a Catholic as Catholicism being like rationalism.

Specifically, the way I would do that is as follows:

Suppose for these paragraphs we use "Faith" to refer to 'privileging model A over model B' when we're making decisions and those two models disagree with each other. This can be used to protectively shield beliefs from criticism ("Well, I get that you have all these detailed arguments for the historical record not being the way I think it is, but God Said So, and I have faith in God."), and it can be used to integrate considerations that are too remote to be positively identified in a model but which can be easily labelled ("Well, I get that I am extremely confident that the experiment would go a particular way, but Empiricism Requires We Run It, and I have faith in Empiricism.").

In my youth I got to see an example of this up close, where the church I was a member of was considering undergoing a major construction project; one of the members was a financial analyst and looked at the numbers and thought "this really doesn't add up," but put that against Bible verses that "God would provide" and reluctantly supported the project.

The difference between the Catholic and the Rationalist is not whether they have 'multiple models' (both do) and whether or not they have different weights for their models (both do), but what they think the weights should be and how they justify those weights. Importantly, it's also not that the Rationalist has 'tested' the thing they have Faith in and the Catholic hasn't; both Faith procedures described here are self-reinforcing ("Turns out, God says I should trust God!" and "Turns out, running experiments suggests that I should run experiments!"). It's that empiricism has other coherence properties that seem pretty solid, and that trusting God doesn't have those properties, and what other coherence properties it has seem much shakier.

Thus I think rationalists are doing the right thing, and Catholics are doing the wrong thing, because the rationalists are using this mechanism in order to make themselves predictably better off (according to me) and the Catholics are making themselves predictably worse off (according to me). When I turn my attention towards Circlers, I notice "huh, there's an empiricism thing going on here, and a reflection thing; both of those seem like they have solid coherence properties."

Eliezer argued in Beyond the Reach of God that rationalists shouldn't believe in sacred principles that could be fundamentally more valuable.

I interpret that differently; I saw in it "the universe runs on system dynamics, not morality" and more weakly "there is no policy that you can follow that will guarantee good consequences."

To be clear, "empiricism" is not a policy that guarantees good consequences. It's a virtue, and virtues act by both only giving probabilistic guarantees and by shifting the standards of what 'success' even means. 

I have a sense that you don't have a good idea of what radical honesty is. I think there's a good chance that you would be pleasantly surprised if you would do a workshop with someone like Taber Shadburne.

I think it's worth separating "radical honesty" as understood by its originators and "radical honesty" as interpreted-by-default; I am not surprised to learn of people under that banner who are successfully doing something healthy and authentic.

The problem here is closer to "if you want to add an additional 'should' to an equilibrium, you should anticipate resistance in the form of reductios," and I do not think that "authenticity is better than inauthenticity" means "always being completely honest," and instead means a more nuanced and subtle thing.

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T16:39:55.399Z · score: 9 (4 votes) · LW · GW

Finally (as noted by someone I discussed this post with elsewhere), Vaniver, in the OP, analogizes ‘authenticity’ to truth. Indeed, as far as I can tell, the entirety of the post’s rhetorical force comes from this analogy. Yet recall how much effort Eliezer dedicated, in the Sequences and later, to explaining just what in the world he meant by ‘truth’! However much effort it takes to explain ‘truth’—Eliezer applied that effort, because it was necessary.

Does ‘truth’ deserve extensive, laborious explanation, but ‘authenticity’—only a breezy dismissal?

I wanted to note here that I think this is right; that the analogy between truth and authenticity is what gives this post rhetorical force (and is a huge chunk of why I think rationality and Circling are cousins), that it was good to give truth an extensive, laborious explanation, and that it would also be good to give 'authenticity' an extensive, laborious explanation.

Furthermore, I think one of the ways in which Eliezer is an exceptional writer is that he notices dependencies and serializes them; "ah, in order to explain C, I must first explain B, and for B I must first explain A." I often find myself taking the opposite approach: explain C, figure out what was missing, then explain B, figure out what was missing, then explain A. (Tho I think this happens to Eliezer too.) Pushback of the form "but what do you mean by B?" is an integral part of this process.


That said, sometimes there's a post intended to explain C to people who already have B, or B grounds out in experience; we talk about color without feeling a need to explain color to the blind. I think that's not the case here; I am hoping to make the thing I like about Circling legible to the highly skeptical, systematic thinkers who want to compile the thing themselves and so want me to provide the dependency chain.

But also I'm not convinced that I can succeed, as parts of it may end up depending on experience, but at least we can figure out which parts and what experience. 

Comment by vaniver on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T16:18:43.871Z · score: 3 (1 votes) · LW · GW

I saw nsheppard's comment first, and gave a partial response there.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-01T16:17:46.354Z · score: 60 (15 votes) · LW · GW

I think you're right that the functional role of "authentic" in the above post is as an applause light. But... I think the same goes for "truth," in the way that you point out in your 2nd point. [In the post as a whole, I think "deep" also doesn't justify its directionality, but I think that's perhaps more understandable.]

That is, a description of what 'truth' is looks like The Simple Truth, which is about 20 pages long. I'm editing in that link to the relevant paragraph, as well as an IOU for 'authenticity,' which I think will be a Project to actually pay down.

But for this comment, let me see if I can write a short version that does enough of the work.

"Truth" is a label we use to distinguish the products of a coherence process, where a 'statement' corresponds to 'reality.' Untruth is when that coherence process fails, where the statement either corresponds to a different reality than the one we're in or fails to correspond to any possible reality. There are also interesting edge cases that point out the importance of the process that generates coherence, rather than it merely happening to be true that the two correspond with one another in this instance.

In Public Positions and Private Guts, I identify two sorts of things you might call beliefs, where 'private guts' roughly correspond to the actual causal mechanisms leading to a conclusion (which may or may not be well-understood, and generally are difficult to articulate), and 'public positions' roughly correspond to the sort of conclusions / justifications you can legibly articulate.

Authenticity is similarly a label we use to distinguish the products of a coherence process, generally between something like 'outward appearance' and 'inward feeling.' Inauthenticity is when that coherence process fails, where the outward appearance corresponds to a different inward feeling than the one actually felt, or fails to correspond to any possible inward feeling.

Here's where one of the disclaimers comes in about openness: if I feel that vanilla is a better flavor than strawberry, and also feel that flavor preferences should be private, then it seems more authentic to keep my flavor preferences private than share them.

I think there are a bunch of arguments in favor of authenticity, and a bunch of arguments in favor of inauthenticity. For some example arguments for inauthenticity, note that "Thank someone who gave you a gift even if you don't like the gift" has an authentic version and an inauthentic version, and many cultures think you get to the authentic version by practicing the inauthentic version; "fake it til you make it" is the heuristic that inauthenticity helps develop authenticity.

A simple argument in favor of authenticity is that knowing more about your preferences, and communicating them more honestly to others, is a useful tool in making your corner of the world look more like you want it to. (See the old OkTrends blog post on how variance in ratings is useful.) Decision theory suggests you should attempt to develop true beliefs; it just as clearly suggests you should attempt to develop a true utility function!

Circlers care a lot about differentiating the subtleties of internal experience. But as Paul puts it, If we can't lie to others, we will lie to ourselves. That might look like a reversal, so let me elaborate: if I have to carefully police my outward appearance for acceptability, then in order to minimize the amount of explicit lying or hiding I have to do I will also have to police my inward feelings for acceptability, and this will get in the way of figuring out what I actually am feeling at the moment, which will get in the way of me understanding myself or moving in the direction that I would reflectively want to move in.

Of course, you can probably imagine how the argument for inauthenticity responds. Suppose I'm annoyed by how another person behaves, but also don't want to get into an extended conflict; I might prefer to swallow my annoyance instead of trying to fix their behavior, and much of the 'technology for avoiding civil war' is about determining what sorts of inward feelings are and aren't appropriate to express. It might say "because we can't tell the truth to others, we must lie to ourselves."

But I have a sense that more is possible, and that it is possible to have difficult conversations in ways that end well, and that doing so requires careful, empirical development of knowledge and skill. When we choose to swallow our annoyances, we can do so authentically, in a way that actually digests them; when we choose to bring our annoyances, we can do so in a way that makes the world better.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-01T14:55:50.198Z · score: 27 (10 votes) · LW · GW

I think Circling, or related practices, are an important part of the great common Neo-Enlightenment project of human progress. It’s a mechanism to understand more about ourselves and each other, and it involves some deliberate attempts to not steer towards ‘candy’ and instead stay focused on deepening. It’s a genuine practice with a body of knowledge, and Circling Europe in particular seems to have had a technological edge (in that their online Circling platform has allowed them to get many more people spending many more hours Circling).

I also think there are massive cultural differences between ‘rationalists’ as a people group and ‘relationalists’ (what I sometimes hear them called) as a people group. As an analogy, I think athleticism is an important part of being a human with a body, and yet have difficulty finding fitness approaches or products that don’t want to scream ‘jock masculinity!’ or ‘yoga femininity!’ at me. Consider Convict Conditioning, a solid series of graduated bodyweight exercises that build small skills in the right order, pitched as the sort of thing you can do in prison and which will make you “a TRUE man.”

Most people who come to Circling events do it because they think it’s fun, and because they’re genuinely interested in other people and connecting to them. Many of them are hippies and respond positively to woo. The Circler who clearly stated his commitment to openness in a way that crystallized this post (and made it better in the process) was Sean Wilkinson, whose bio I only felt comfortable linking to after that preamble.

And so just like I think it’s a mistake to let ‘dancing’ and ‘sports’ be forever out of reach because they’re “not for nerds,” I think it’s a mistake to let ‘human connection’ and ‘Circling’ be forever out of reach because they’re “not for nerds.” Progress on this front looks like a combination of ‘tolerating cultural differences’ and ‘creating additional products / marketing angles.’ For example, I imagine a Circling Immersion weekend with mostly rationalists to be more interesting for rationalists than a Circling Immersion weekend with mostly non-rationalists, but others who have more experience with both will have more informed views on the subject. (Almost all of my Circling experience is with rationalists and highly skilled facilitators, instead of median relationalists.)

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-01T14:49:32.169Z · score: 7 (3 votes) · LW · GW

To be clear, are you saying that your interpretation of the latter quote’s meaning (the one about being “open to the universe”) comes from the speaker’s explanation of what he meant by it? Or, is the quote all there was, and the explanation is your own gloss?

He definitely said a longer sentence, but I think most of the explanatory power came from what he was responding to, which I no longer remember the details of but which I remember as having the emotional content of "I am afraid to do X because I don't know how it will turn out." 

Comment by vaniver on 2020's Prediction Thread · 2020-01-01T14:38:06.568Z · score: 3 (1 votes) · LW · GW

Bryan bets on the percentage of 18-24 year olds enrolled in 4-year degree-granting institutions (here's 2000-2017 numbers). I'm sort of astounded that anyone would take the other side of the bet as he proposed it (a decline from 30% to 20% over the course of 10 years); in my mind a decline from 30% to 25% would be 'substantial'.

For the more specific version that I have in mind (a 'coming apart' of "bachelor's degrees" and "valuable bachelor's degrees"), I think it has to show up in a change of enrollment statistics split out by major, which might be too hard to operationalize ahead of time.

Comment by vaniver on Circling as Cousin to Rationality · 2020-01-01T03:14:44.406Z · score: 5 (2 votes) · LW · GW

Thanks; fixed. 

Comment by vaniver on 2020's Prediction Thread · 2020-01-01T01:52:11.776Z · score: 3 (1 votes) · LW · GW

Also, I think the law school bubble burst in the wake of the 2008 financial crisis and the contraction in law firms, which you can see in student enrollment statistics but not in inflation-adjusted tuition.

Comment by vaniver on 2020's Prediction Thread · 2020-01-01T01:34:52.858Z · score: 5 (2 votes) · LW · GW

Okay then, how about higher education as a fraction of GDP?

When I tried to calculate the equivalent thing for real estate and GDP for the 2008 financial crisis, as far as I can tell the fraction of GDP provided by real estate went up instead of down. The bubble bursting is clearly visible in the home price index, tho. So if someone creates a 'degree value index', that's where I'd expect to see it; the closest approximations that I'm aware of are the wage premium and underemployment rate (this can point to a few things; I mean the "person with a degree working a job you don't need a degree for" one instead of the "unemployed plus part-time seeking full-time work" one). 

[Also I'm going to ping Bryan Caplan and see if he has a good operationalization.]

Comment by vaniver on 2020's Prediction Thread · 2019-12-31T22:15:31.566Z · score: 3 (1 votes) · LW · GW

Re: higher education bubble, do you also predict that tuition increases will not outpace inflation?

My model doesn't give a detailed answer; I think I expect the number and type of people participating in higher education to change, and then it's unclear what that will do to average tuition. For example, in worlds where all undergraduate education becomes free-to-the-end-user but med school and law school still exist, the tuition statistics become apples to oranges.

Comment by vaniver on human psycholinguists: a critical appraisal · 2019-12-31T18:16:43.607Z · score: 28 (12 votes) · LW · GW

Writing styles don't seem to reveal anything deep about cognition to me; it's a question of word/punctuation choice, length of sentences, and other quirks that people probably learn associatively as well.

But isn't it interesting that the way human linguists thought word/punctuation choice worked in humans failed to produce human-like speech, and yet GPT-2 successfully produces human-like speech? Yes, obviously, it's the babbler instead of the full brain. But that definitely lines up with my internal experience, where I have some 'conceptual realm' that hands concepts off to a babbler, which then generates sentences, in a way that lines up with how GPT-2 seems to operate (where I can confidently start a sentence and not know how I'll finish it, and then it's sensible by the time I get there).

Comment by vaniver on AI Alignment, Constraints, Control, Incentives or Partnership? · 2019-12-31T18:05:20.405Z · score: 9 (4 votes) · LW · GW

"Partnership" is a weird way to describe things when you're writing the source code. That is, it makes sense to think of humans 'partnering' with their children because the children bring their own hopes and dreams and experiences to the table; but if parents could choose from detailed information about a billion children to select their favorite instead of getting blind luck of the draw, the situation would seem quite different. Similarly with humans partnering with each other, or people partnering with governments or corporations, and so on.

However, I do think something like "partnership" ends up being a core part of AI alignment research; that is, if presented with the choice between 'writing out lots of policy-level constraints' and 'getting an AI that wants you to succeed at aligning it' / 'getting an AI that shares your goals', the latter is vastly preferable. See the Non-adversarial principle and Niceness is the first line of defense.

Some approaches focus on incentives, or embedding constraints through prices, but the primary concerns here are 1) setting the prices incorrectly and 2) nearest unblocked strategies. You don't want the AI to "not stuff the ballot boxes", since it will just find the malfeasance you didn't think of; you want the AI to "respect the integrity of elections."

Another way to think about this, instead of those two buckets, is Eliezer's three buckets of directing, limiting, and opposing.

Comment by vaniver on 2020's Prediction Thread · 2019-12-31T00:59:19.902Z · score: 10 (6 votes) · LW · GW

Sorted approximately by strength:

The UK will leave the European Union. (95%)

Industrial / financial consolidation will continue instead of reversing, and the 'superstar cities' phenomenon will be stronger in 2030 than 2020. (90%)

The 'higher education bubble' will burst. (80%) This feels mostly like a "you'll know it when you see it" thing, but clear evidence would be a substantial decrease in the fraction of Americans going to college, or a significant decline in the wage premium for "any college diploma" over a high school diploma (while perhaps some diplomas will retain significant wage premiums). [Edit 2020-1-1: on further reflection I think this is more like 70%.]

No armed conflict between Japan and any of its neighbors. (70%)

My favorite movie released between 2020 and 2029 will be animated instead of live-action. (60%)

Queen Elizabeth II will still be alive. (30%) [This requires her to make it to 103; life expectancy for a 93-year-old British woman is only 3.64 years, her mother made it to 101, her father to 56, and her sister to 71. It seems unlikely that new medical technology will make a significant difference during that time; basically the only medication I expect to be available in time to do anything useful will be metformin.] 

Emperor Emeritus Akihito will still be alive. (30%) [This requires him to make it to 96; while life expectancy for a 86-year-old Japanese man is 6.3 years, he's already abdicated due to poor health.]