Narrative Syncing
post by AnnaSalamon · 2022-05-01T01:48:45.889Z · LW · GW · 48 comments
Partial attempt to make sense of: On Doing the Improbable [LW · GW]
Epistemic status: On my inside view, this seems like a clear and useful distinction that could use a short, shared term. Hasn’t been checked much by others. Some edge cases I’m still confused about. Also I may have a suboptimal term; help generating the right term greatly appreciated.
I’d like a short term for such sentences as:
- “The sand is lava; if you touch it you die” (when introducing the rules to a game, not making a false prediction)
- “Colloquium is at 3pm on Wednesdays” (when creating a new colloquium, or adding weight to a common-knowledge agreement / Schelling point)
- “We don’t shout around here” (stated normatively)
I’m going to suggest the term “narrative syncing.” I’d like to distinguish “narrative syncing” from “sharing information” of a sort whose accuracy is not intended to be created/affected by the act of thinking/stating/believing the given sentence, i.e. to distinguish the sentences above from such sentences as:
- “It will probably rain tomorrow.”
- “3 + 5 = 8.”
- “The people over there in that organization seem to avoid shouting, and to somewhat shun those who shout – though I have no stance on whether that’s a good norm, I’m just describing/predicting it.”
That is, I’d like to use the term “narrative syncing” (or some better such term that y’all suggest; suggestions appreciated) to distinguish sentences whose primary purpose is to sync up with some set of other people as to how we all think or talk about a given thing around here (as opposed to sentences whose primary purpose is to describe a piece of outside reality).
A situation where the concept of “narrative syncing” seems to me to add clarity
One reason I’d like a short term for this, is IMO having such a term would’ve helped me understand some past social situations I was in.
Example: Narrative syncing about “helpful” careers
Sometimes participants at CFAR’s/MIRI’s “AI risks for computer scientists” program would ask me something like “what careers should I go into, if I want to help with AI risk?” I would sort-of interpret this as a request for information, and would nevertheless feel a predictable frustration when I heard it, as I braced for the subterranean, confusing-to-me conflict that I anticipated would follow my words, without really understanding why. (I anticipated this based partly on the particular person’s nonverbals as they asked the question -- not all participants were like this.) I’ll describe how this interaction tended to go, using a stereotyped conglomerate participant of this sort named Alec:
Scenario 1: “I try to tell the truth”
Alec: “What careers should I go into, if I want to help with AI risk?”
Me: “Well, unfortunately, I’m not really sure. My own models are messy and confused, as to what paths there are to an existential win, what actions of yours might end up helping how much with each, etc. I’d mostly recommend you try to develop inside views on how to get to victory; although I happen also to like MIRI, who is [at that time] hiring engineers and scientists; Paul is also looking for ML engineers, and seems smart and like he’s really trying; and ML seems likely to be a skill that more and more groups that say they are aimed at safety are trying to hire people with… But there aren’t ready-made careers I know of that produce some reasonable guarantee of helping.”
Alec: [nonverbals I interpreted as frustration/anger/desire to get me to act “correctly” again/turmoil]: “Why did I even sign up for this program?”
Me, to myself: [... I guess people just really want me to simplify, so they can believe the world is easier than it is? I notice I’m confused, though.]
Scenario 2: “I go along with what he seems to want”
Alec: “What careers should I go into, if I want to help with AI risk?”
Me: “For most people it seems probably best to study ML, or computer engineering broadly, and then apply to AI safety orgs. Math/physics is probably also a decent bet. You can also try to develop an inside view of the problem, by thinking about how the heck you’d code an aligned AI, and e.g. making the puzzle easier to get started on by imagining you get to start with a hypercomputer, or the super-duper-deepnets of the future, or something.”
Alec: “Okay.”
Me, to myself: [huh, I feel vaguely dirty/contaminated after that exchange; not sure why; I guess it’s because I was simplifying? My relationship to Alec makes me feel weird, now.]
My current models of what was happening here
The way the above interaction seems to me now is that Alec was actually trying to request narrative syncing rather than information exchange. He wanted to know: “which careers should I go into, if I want you / MIRI / the EA community to, in exchange, regard me as ‘committed’ and as ‘a real member of the group, who should be supported and included and viewed as legitimately taking part in the shared effort’? What are the agreed-on local rules for which careers count as ‘AI Safety’?” And in response to this question, in scenario 1, I was sort-of saying “sorry, I don’t play that game,” but I was doing it non-explicitly, in a way that was hard for Alec to process. This was especially so since I was sending other signals, elsewhere in AIRCS, that I was playing games of that sort – that I did play a conscious role in the community’s decisions about who was/wasn’t inside the community, or about which ideas counted as legitimate. Thus Alec felt frustrated; and I sensed a difference in wavelengths but failed to quite understand why [LW · GW].
In scenario 2, I was answering the question in a way Alec could take as narrative syncing: “These are the careers such that, if you pursue them, I and such others as I can predict/influence/sync with will regard you as being part of the real AI safety effort; I will take on some of the responsibility for your choice; if you choose these things, your views and actions can be part of us.” But I wasn’t actually okay with doing that, and so I felt weird/bad/as though my new relationship with Alec was unacceptable to me in some way.
If I had that scene to do over again:
Scenario 3: Using my new understanding of “narrative syncing”
Alec: “What careers should I go into, if I want to help with AI risk?”
Me: “We try to have a culture around here where there is no vetted-by-the-group answer to this; we instead try to encourage forming your own inside-view model of how AI risk might work, what paths through to a good future might be possible, etc. One nice thing is that money is pretty abundant lately: if you end up with an inside-view saying a particular research direction is worth trying, you’re almost certain to be able to get funding to pursue it, at least after you develop it a bit (given you’re starting as a smart PhD student), and I’d be glad to help you seek funding in such a situation if you want. I’m also happy to share my own inside-view models, but you should know that I might change them, and that others in AI safety may disagree.”
(Note that in this example, the “me” character does do narrative syncing – but in a way that does not disguise itself as information sharing about AI. Instead, I bid for the norm that “we” try to have a culture “around here” in which we each form an inside view, rather than syncing about which inside view of AI is the “us view” or the “good view” or something. This may still involve sharing misleading info about how "we" do things around here, which may still lead to problems.)
My take-aways from the above
It seems to me that narrative syncing is not always bad, and in fact, has real uses. It is nice to be able to create “house rules” for a board game, to decide that colloquium is at 3pm on Wednesdays, and to create norms for what “we” do and do not “do around here.”
However, narrative syncing is sometimes mistaken for (or deliberately disguised as) information sharing, and this is bad for epistemics in at least two ways:
- Direct misunderstanding: People often think other people are giving them information/predictions when they are actually making social moves, which leads to updates they wouldn't make if they understood what was happening.
- Indirect “inquiry-dampening”: If there is an attempt to cause social coordination around some “what we say around here” matter that sure sounds like a matter of prediction or information exchange (such as whether it's harmful to advance ML research, or more local matters like whether so-and-so chose good groceries for the workshop), this is apt to create social pressure for others not to express their own views freely, which is bad for epistemics on this point, and which risks setting up “watch what you say” conversation-dampening dynamics more broadly.
So, it’s valuable to be aware that “narrative syncing disguised as information requests / information sharing” is a thing, and to have some notion what it looks like, what kind of places you’re likely to find it, etc. Much as it’s valuable to be aware that people lie sometimes, and to have some notion what kinds of things people are likely to lie about.
(Though, unlike the situation with more textbook examples of "lying," I'd like to emphasize that most people doing this "disguised narrative syncing" stuff do not have an explicit model under which this is harmful, and so the thing to do is more like to look into things curiously and collaboratively, or to just sidestep the dynamic, and less to tell people "cut it out please" as though it has been established that that's the thing to do. Also my concepts need vetting.)
Less importantly: understanding the positive uses of narrative syncing also seems at least a bit useful for understanding what social requests people sometimes have, and for being able to meet those requests more straightforwardly. Though this part is more speculative, and is not a crux for me for most of the above.
My remaining confusions and open questions
My biggest remaining confusion is this: what’s up with self-fulfilling (or partially self-fulfilling) predictions playing a role in internal coordination? For example, I tell myself that I’d like to write this before I sleep, and now I have more of a desire and plan to write this than I did before; or, parents tell a kid they like the color blue and it catches; etc.? What’s a good ontology around this? “Truth-telling” vs “lying” vs “bullshitting” doesn’t seem like the most natural set of distinctions for this sort of loopy sentence. And while this is arguably a weird edge-case if your goal is to make Bayesian predictions about the outside world, it seems basic and not at all an edge-case if your goal is to understand what humans are or how humans or groups of humans act in the world or where agency comes from.
A few other questions that’re still open for me, mostly as recap:
- Is “narrative syncing” an okay enough term that we can use it for this thingy I’m describing? Anyone have an idea for a term that better cleaves nature at its joints?
- Does “narrative syncing disguised as information sharing” have important positive uses?
- Is it [edit: narrative syncing disguised as information sharing] something we should simply try to minimize if we want good epistemics? (My current hypothesis is "yes," but this hasn't been checked enough.)
- What’re some places where this happens a lot? Where it happens almost not at all? How can a person notice it? How can a person shift things toward less confusing patterns?
Comments sorted by top scores.
comment by AnnaSalamon · 2022-05-02T05:54:33.669Z · LW(p) · GW(p)
I agree with some commenters (e.g. Taran [LW(p) · GW(p)]) that the one example I gave isn’t persuasive on its own, and that I can imagine different characters in Alec’s shoes who want and mean different things. But IMO there is a thing like this that totally happens pretty often in various contexts. I’m going to try to give more examples, and a description of why I think they are examples, to show why I think this.
Example: I think most startups have a “plan” for success, and a set of “beliefs” about how things are going to go, that the CEO “believes” basically for the sake of anchoring the group, and that the group members feel pressure to avow, or to go along with in their speech and sort-of in their actions, as a mechanism of group coordination and of group (morale? something like this). It’s not intended as a neutral prediction individuals would individually be glad to accept/reject bets based on. And the (admittedly weird and rationalist-y) CEOs I’ve talked to about this have often been like “yes, I felt pressure to have beliefs or to publicly state beliefs that would give the group a positive, predictable vision, and I found this quite internally difficult somehow.”
When spotting “narrative syncing” in the wild, IMO a key distinguisher of “narrative syncing” (vs sharing of information) is whether there is pressure not to differ from the sentences in question, and whether that pressure is (implicitly) backchained from “don’t spoil the game / don’t spoil our coordination / don’t mess up an apparent social consensus.” So, if a bunch of kids are playing “the sand is lava” and you say “wait, I’m not sure the sand is lava” or “it doesn’t look like lava to me,” you’re messing up the game. This is socially discouraged. Also, if a bunch of people are coordinating their work on a start-up and are claiming to one another that it’s gonna work out, and you are working there too and say you think it isn’t, this… risks messing up the group’s coordination, somehow, and is socially discouraged.
OTOH, if you say “I like pineapples on my pizza” or “I sometimes pick my nose and eat it” and a bunch of people are like “eww, gross”… this is social pressure, but the pressure mostly isn’t backchained from anything like “don’t spoil our game / don’t mess up our apparent social consensus”, and so it is probably not narrative syncing.
Or to take an intermediate/messy example: if you’re working at the same start-up as in our previous example, and you say “our company’s logo is ugly” to the others at that start-up, this… might be socially insulting, and might draw frowns or other bits of social pressure, but on my model the pressure will be weaker than the sort you’d get if you were trying to disagree with the core narrative the start-up is using to coordinate (“we will do A, then B, and thereby succeed as a company”), and what push-back you do get will have less of the “don’t spoil our game!” “what if we stop being able to coordinate together!” nature (though still some) and more other natures such as “don’t hurt the feelings of so-and-so who made the logo” or “I honestly disagree with you.” It’s… still somewhat narrative syncing-y to my mind, in that folks in the particular fictional start-up I’m imagining are e.g. socially synchronizing some around the narrative “everything at our company is awesome,” but … less.
It seems to me that when a person or set of people “take offense” at particular speech, this is usually (always?) an attempt to enforce narrative syncing.
Another example: sometimes a bunch of people are neck-deep in a discussion, and a new person wanders in and disagrees with part X of the assumed model, and people really don’t want to have to stop their detailed discussion to talk again about whether X is true. So sometimes, in such cases, people reply with social pressure to believe/say X — for example, a person will “explain” to the newcomer “why X is true” in a tone that invites capitulation rather than inquiry, and the newcomer will feel as though they are being a bit rude if they don’t buy the argument. I’d class this under “don’t mess up our game”-type social pressure, and under “narrative syncing disguised as information exchange.” IMO, a better thing to do in this situation is to make the bid to not mess up the game explicit, by e.g. saying “I hear you, it makes sense that you aren’t sure about X, but would you be up for assuming X for now for the sake of the argument, so we can continue the discussion-bit we’re in the middle of?”
↑ comment by Ben (ben-lang) · 2022-05-03T15:28:41.184Z · LW(p) · GW(p)
If you are looking for more examples of narrative syncing:
- "I have read, and accept, the terms and conditions [tick box]". I have not read the terms and conditions. They know I haven't. This is not an information exchange.
- I was shopping with my Grandma once. I knew bananas were on the list and put them in the trolley. She asked "why didn't you take these bananas?" and indicated a different brand. I thought she was asking for information, so I provided it, saying "they are smaller, cost more, and are wrapped up in plastic." I got body language that indicated I had misstepped. The next day my mum told me grandma was upset that I had "forced" her to buy bananas that were too big. (Her doctor had told her to eat a target number of pieces of fruit, so she was buying the smallest fruit she could find.) She hadn't really been asking why I had spurned the small ones; she was asking to swap. Had I understood that, I would have immediately complied.
- It is often not considered "fair" for a referee of a scientific paper to challenge what I would call "field mythology". At some point someone said in a paper conclusion "A is possibly, kind of useful for X". Then someone else said "good for X" in an introduction, citing the first person. 10 years later it is a commonly stated myth that this specific science topic is "good for X". People working on the topic don't really know or care if it is useful for that; they are working on it because they think it's good science (the usefulness of science A for problem X is not a load-bearing part of the argument for why they, or anyone else, is studying A). If a referee challenges such a claim in a paper they may be factually correct, but they are failing to play by "the rules of the game". Also, it is not really "fair" to berate the paper for parroting the myths of its genre; the referee should be attacking what the paper adds to human knowledge (its marginal contribution).
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2022-05-02T17:31:27.953Z · LW(p) · GW(p)
I think there’s an alternative explanation for why social pressure may be used to make the dissenter capitulate. This is when an uninformed non-expert is dissenting from the informed judgments of an expert on their core subject.
This is done for a few reasons. One is to use mild humiliation to motivate the non-expert to get past their pretensions and ego, do their research, and become more useful. Another is when it would be a bad use of time and resources to educate the non-expert on the spot. A third is when the non-expert is pointing out a real but surmountable challenge to the strategy, which is best dealt with at a later time. And a fourth is when the root of the issue is a values mismatch that is being framed as a disagreement about facts.
This maybe is why I’m more inclined to view this as “leadership language” than as “narrative syncing.” Narrative syncing implies that such actions are undertaken mainly for empty propagandistic reasons, to keep the game going. I think that much of the time, the motivations of the people pressuring dissenters to capitulate are as I’m describing here. It’s just that sometimes, the leaders are wrong.
↑ comment by ambigram · 2022-05-03T04:36:34.562Z · LW(p) · GW(p)
Potentially related examples:
Group identity statements (pressure not to disagree)
- A team believes themselves to be the best at what they do and that their training methods etc. are all the best/correct approach. If you suggest a new training method that seems to be yielding good results for other teams, they wouldn't treat it seriously because it's a threat to their identity. However, if the team also takes pride in their ability to continuously refine their training methods, they would be happy to discuss the new method.
- If a group considers themselves to be "anti-pineapple" people, then saying "I like pineapples on my pizza" would signal that you're not really part of the group. Or maybe they think X is harmful and everyone knows pineapples contain X, then proudly declaring "I like pineapples on pizza" would mark you as an outsider.
Self-fulfilling prophecies (coordination + pressure not to disagree publicly)
- It's the first week of school and the different student clubs and societies have set up booths to invite students to join. The president of club X tells you that they are the second largest club in the school. This makes club X seem like an established group and is one of the reasons you register your interest and eventually decide to join the group. Later on, you find out that club X actually had very few members initially. The president was basing his claim on the number of people who had registered their interest, not the actual members. However, since he managed to project the image of club X as a large and established group, many people join and it indeed becomes one of the largest student groups.
- A captain tells the team before a game that they are going to win. The team is motivated and gives their best, therefore winning the game. (Some people may know the statement is false, which they may reveal to others in private conversations. They won't state it in public because they know that the statement is intended to coordinate the team (i.e. it's not meant to be literally true) and that they are more likely to succeed if everyone believes it to be true. It is important only for those who think this is a factual statement and are likely to give up if it's false to believe the statement is true. People who think this is just a pep talk would assume that the captain will say the same thing regardless of what's true.)
- The Designated Driver Campaign successfully introduced the practice of having a designated driver when out for drinks by portraying it as a norm in entertainment shows. I'm not sure what it was like, whether people adopted the idea because they thought it was a good one even when they knew it was artificial or because it seemed like everyone was doing it, but here we have a social norm that existed only on TV that became an actual social norm because enough people decided to go along with it.
Group norms (coordination + disagreement is rejected)
- We use 2 whitespaces instead of 4 whitespaces for indentation as a convention.
- We act as one team. If we have come to a decision as a team, everyone follows the decision whole-heartedly even if they disagree. If you don't want to, please quit.
↑ comment by Vaniver · 2022-05-02T21:16:56.824Z · LW(p) · GW(p)
OTOH, if you say “I like pineapples on my pizza” or “I sometimes pick my nose and eat it” and a bunch of people are like “eww, gross”… this is social pressure, but the pressure mostly isn’t backchained from anything like “don’t spoil our game / don’t mess up our apparent social consensus”, and so it is probably not narrative syncing.
Huh, I would have thought that counted as narrative syncing as well?
↑ comment by AnnaSalamon · 2022-05-02T21:20:52.455Z · LW(p) · GW(p)
Can you say a bit more about why?
Do you agree that the social pressure in the pineapple and nose-picking examples isn't backchained from something like "don't spoil our game, we need everyone in this space to think/speak a certain way about this or our game will break"?
↑ comment by Vaniver · 2022-05-03T00:59:26.237Z · LW(p) · GW(p)
Can you say a bit more about why?
I think it's mostly from thinking about it in terms of means instead of ends. Like, my read of "narrative syncing" is that it is information transfer, but about 'social reality' instead of 'physical reality'; it's intersubjective instead of objective.
There's also something going on here where I think most in-the-moment examples of this aren't backchained, and are instead something like 'people mimicking familiar patterns'? That can make it ambiguous whether or not a pattern is backchained, if it's sometimes done for explicitly backchainy reasons and is sometimes others copying that behavior.
Do you agree that the social pressure in the pineapple and nose-picking examples isn't backchained from something like "don't spoil our game, we need everyone in this space to think/speak a certain way about this or our game will break"?
Suppose some people are allergic to peanuts, and so a space decides to not allow peanuts; I think the "[normatively]: peanuts aren't allowed here" is an example of narrative syncing. Is this backchained from "allowing peanuts will spoil our game"? Ehhh... maybe? Maybe the anti-peanut faction ended up wanting the norm more than the pro-peanut faction, and so it's backchained but not in the way you're pointing at. Maybe being a space that was anti-peanut ended up advantaging this space over adjacent competitive spaces, such that it is connected with "or our game will break" but it's indirect.
Also, I predict a lot of this is the result of 'play' or 'cognitive immaturity', or something? Like, there might be high-stakes issues that you need to enforce conformity on or the game breaks, and low-stakes issues that you don't need to enforce conformity on, but which are useful for training how to do conformity-enforcement (and perhaps evade enemy conformity-enforcement). Or it may be that someone is not doing map-territory distinctions or subjective-objective distinctions when it comes to preferences; Alice saying "I like pineapple on pizza" is heard by Bob in a way that doesn't cash it out to "Alice prefers pizza with pineapple" but instead something like "pineapple on pizza is good" or "Bob prefers pizza with pineapple", both of which should be fought.
comment by jimrandomh · 2022-05-01T05:56:14.401Z · LW(p) · GW(p)
This feels like a TDT-style generalization of the linguistic concept of performative utterances. If there were no colloquium scheduled and I said "the colloquium is at 3pm on Wednesday", that would be a performative utterance. But if it were already scheduled for that time, then under the usual definition, the same sentence would not be a performative utterance. This seems weird.
In the generalized version, which you're calling narrative syncing, there is still a self-fulfilling aspect, but instead of being localized to a specific utterance, it's more like a TDT decision node, possibly shared between multiple people.
comment by ambigram · 2022-05-01T03:29:14.460Z · LW(p) · GW(p)
I don't really understand... Suppose I am a computer scientist who has just learned about AI risks and am eager to contribute, but I'm not sure where to start. Then the natural step would be to ask someone who does have experience in this area for their advice (e.g. what careers should I go into), so I can take advantage of what's already known rather than starting from scratch.
My surface question is about careers, but my true/implicit question is actually asking for help on getting started contributing to AI risk. It's not about wanting to know how to be considered a real member of the group? I would be annoyed by the first answer because it answers my superficial question while rebuffing my true question (I still don't know what to do next!). I am asking you for help and advice, and yet your response is that you have no answer. I mean it is technically true and it would be a good answer for a peer, but in this context it sounds like you're refusing to help me (you're not playing your role of advisor/expert?).
Answer 3 is good because you are sharing your experience and offering help. There's no need to make reference to culture. I would be just as happy with an answer that says "no one really knows" (instead of "I don't know"), because then you are telling me the current state of knowledge in the industry.
A similar example:
When I asked someone who was teaching me how to cook how much salt I should add to the dish, they answered "I don't know". I was annoyed because it sounded like they were refusing to help me (because they're not answering my implicit question of how to decide how much salt to put when I cook next time) when they were the expert. It would have been better if they'd said that the saltiness varies based on the type of salt and the other ingredients and your own preferences, etc. so it's not possible to have a known, fixed amount of salt. What cooks do is that they try adding a bit first (e.g. half a teaspoon) then taste and make adjustments. Nowadays I would know how to rephrase my question to get the answer I want, but I didn't use to be able to.
↑ comment by tamgent · 2022-05-01T21:22:40.409Z · LW(p) · GW(p)
I am curious about how you felt when writing this bit:
There's no need to make reference to culture.
↑ comment by ambigram · 2022-05-02T17:03:56.922Z · LW(p) · GW(p)
I guess I'd say frustrated, worried, confused. I was somewhat surprised/alarmed by the conclusion that Alec was actually trying to request information on how to be considered part of the group.
It seems to me like a rather uncharitable interpretation of Alec's response, to assume that he just wants to figure out how to belong, rather than genuinely desiring to find out how best to contribute.
We try to have a culture around here where there is no vetted-by-the-group answer to this; we instead try to encourage forming your own inside-view model of how AI risk might work, what paths through to a good future might be possible, etc.
I would be rather insulted by this response, because it implies that I am looking for a vetted-by-the-group answer, and also seems to be criticising me for asking Anna about optimal careers. Firstly, that was never my intent. Secondly, asking an expert for their opinion sounds like a perfectly reasonable course of action to me. However, this does assume that Alec shares my motivations and assumptions.
I'm not sure of my assumptions/beliefs/conclusions though. I might be missing context (e.g. I don't know what the Bay Area is like, or the cultural norms), and I didn't really understand the essay (I found the example too distracting for me to focus on the concept of narrative syncing - I like the new examples much more).
↑ comment by TekhneMakre · 2022-05-01T05:12:27.420Z · LW(p) · GW(p)
>my true question (I still don't know what to do next!)
(The following might be a strawman; sorry if so; but it is a genuine question about a phenomenon I've seen.) This part sounds like you're saying, you're asking someone what to do next. If so, what if it's a mistake for you to be mentally executing the procedure 1. hear what someone says to do next, 2. do that thing? How would the person you're asking communicate to you that it's a mistake for you to execute that procedure, without you taking their attempted statements of fact as cryptic instructions for you to follow? (Assuming that's a mistake you think you could plausibly be making, and assuming you'd want the mistake communicated to you if you were making it.)
>in this context it sounds like you're refusing to help me (you're not playing your role of advisor/expert?).
Someone's role is a combination of their own behavior and others' behavior and expectations. Suppose that Berry wants to invite Alec to a workshop with an offer to help, but *not* with any implied intent to play a role of advisor/expert. How could Berry communicate that to Alec?
>What cooks do is that they try adding a bit first (e.g. half a teaspoon) then taste and make adjustments.
In the case of alignment stuff, no one knows how to cook, so the version of this that's available is "try to figure the whole thing out for yourself with no assumptions", which was part of Answer 1.
↑ comment by ambigram · 2022-05-01T12:07:34.810Z · LW(p) · GW(p)
Hmm if CFAR organizes a workshop, then I would think it is reasonable to assume that the CFAR staff (assuming they are introduced as such) are there as experienced members of the AI risk community who are there to offer guidance to people who are new to the area.
Thus, if I ask them about career paths, I'm not asking for their opinions as individuals, I'm asking for their opinions as people with more expertise than I have.
Two possible motivations I can think of for consulting someone with expertise would be:
- In school, I ask the teacher when I have a question so they tell me the answer. Likewise, if I'm not sure of the appropriate career move, I should ask an expert in AI risk and they can tell me what to do. If I follow their answer and it fails, then it's their fault. I would feel betrayed and blame them for wasting my life.
- Someone who has been working on AI risks would have more knowledge about the area. Thus, I should consult their opinions before deciding what to do. I would feel betrayed if I later find out that they gave me a misleading answer (e.g. telling me to study ML when that's not what they actually think).
In both cases I'm trying to figure out what to do next, but in the first case I want to be told what to do next, and in the second case I just want more information so I can decide what to do next. (Expert doesn't mean you bear responsibility for my life decisions, just responsibility for providing accurate information.)
If I'm not wrong, you're asking about the first case? (My comment was trying to describe the second scenario.)
If it's just a matter of ambiguity (e.g. I will accompany you just for moral support vs accompany you to provide advice), I would just state it explicitly (e.g. tell you "I won't be helping you. If someone asks you something and you look to me for help, I'll just look at you and smile.") and then do precisely what I said I'll do.
Otherwise, if it's a mindset issue (thinking I should do what others tell me to), it's a deeper problem. If it's someone I'm close to, I would address the issue directly e.g. saying "If the GPS tells you to drive into a lake and you listen and drown, whose problem is it? I can tell you what to do, but if I'm wrong, you're the one who will suffer, not me. Even if some very wise person tells you what to do, you still have to check if it makes sense and decide if you will do it, because you have the most to lose. If other people tell you the wrong thing, they can just go 'Oops sorry! I didn't realise.' You, on the other hand, are stuck picking up the pieces. Make sure you think it is worth it before implementing anyone's advice." And if that fails, just... let them make their mistakes and eventually they'll learn from their experiences?
In the case of alignment stuff, no one knows how to cook, so the version of this that's available is "try to figure the whole thing out for yourself with no assumptions", which was part of Answer 1.
Not really; saying "I don't know" is very different from saying "after years of research, we still don't know".
"I'm not sure" sounds like the kind of answer a newbie would give, so I'm not really learning anything new from the conversation. Even worse, "don't know" sounds like you don't really want to help me - surely an expert knows more than me, if they say they don't know, then that must mean they just don't want to share their knowledge.
In contrast, if you said that no one knows because the field is still too new and rapidly changing, then you are giving me an informed opinion. I now know that this is a problem even the experts haven't solved and can then make decisions based on this information.
↑ comment by TekhneMakre · 2022-05-01T22:40:38.248Z · LW(p) · GW(p)
Anyone can say "I'm an expert". There has to be some other way that you're distinguishing a newbie who's just clueless from an expert whose ignorance is evidence of a general difficulty. From my perspective, Berry is helping cooperation with Alec by just making straightforward statements from Berry's own perspective; then Alec can compare what Berry says with what other people say and with what seems to Alec to match reality and be logically coherent, and then Alec can distinguish who does and doesn't have informed, logically coherent opinions refined through reason. E.g., at a gloss, Yudkowsky says "no one has a plan that could possibly work", and Christiano says "no, I have a plan which, with some optimistic but not crazy assumptions, looks like it'll probably work", and Yudkowsky says something that people don't understand, and Christiano responds in a way that may or may not address Yudkowsky's critique, and who knows who's an expert? Just going by who says "I have a plan" is helpful if someone has a plan that will work (although it also opens up social niches for con artists).
↑ comment by ambigram · 2022-05-02T15:05:42.407Z · LW(p) · GW(p)
From my perspective, Berry is helping cooperation with Alec by just making straightforward statements from Berry's own perspective; then Alec can compare what Berry says with what other people say and with what seems to Alec to match reality and be logically coherent, and then Alec can distinguish who does and doesn't have informed, logically coherent opinions refined through reason.
Ah yes agreed. Alec doesn't know that this is what's happening though (judging from his response to answer 1). Personally I'd default to assuming that Berry will play the role of expert since he's part of CFAR while I'm just a random computer scientist (from Berry's perspective). I would switch to a more equal dynamic only if there's a clear indicator that I should do so.
For example, if a student asks a professor a question, the professor may ask the student for their thoughts instead and then respond thoughtfully to their answer, like they would respond to a fellow professor. Or if a boss asks a new subordinate for their opinion but the subordinate thinks this is a fake question and tries to guess the cryptic instructions instead (because sometimes people ask you questions to hint that you should answer a certain way), the boss may ask someone else who's been on the team longer. When the new member sees the senior member responding honestly and boss engaging thoughtfully with the response, then the new member would know that the question was genuine.
In the careers example, I can't tell from the first answer that Anna is trying to engage me as a peer. (I'd imagine someone used to different norms might assume that as the default though.)
comment by LoganStrohl (BrienneYudkowsky) · 2022-05-09T19:43:02.322Z · LW(p) · GW(p)
feels weird to post this whole thing as a comment as on this essay, but it also feels weird not to mention here that i wrote it. here is a shortform post [LW(p) · GW(p)] that i made as a result of reading this, which does not engage at all with the content of the OP. it's a thing i felt i needed to do before i could [safely/sanely/consensually? intelligently? non-reactively?] engage with the content of the OP.
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-05-01T06:57:23.830Z · LW(p) · GW(p)
A possible limitation of “narrative syncing” as a term shows up when you’re syncing your own narrative. The term seems to mainly suggest a social interpretation, but many such statements are intended to shape one’s own future behavior.
In general, the action being taken is a suggestion, model, or command for what to do or not to do, who’ll be in charge of directing those efforts, and how to communicate about it.
I might call this something like “leadership language.” It’s also imperfect. What I like about it is that it explicitly calls out the power dynamic. “Narrative syncing” suggests a neutral, computer-like update, while “leadership language” emphasizes that a person is exerting power and influence.
At the same time, “leadership” has broadly positive connotations, which I think is appropriate to the term.
This also suggests that leadership language interferes with epistemics when many independent and considered judgments are preferable to a uniform coordinated social action. When directing people toward independent thought, leadership language is appropriate.
Leadership language happens a lot in public statements by politicians and CEOs and activists, in Facebook posts, in the arts, and during ceremonies.
↑ comment by tamgent · 2022-05-01T21:02:39.591Z · LW(p) · GW(p)
This comment reminded me of the confusion Anna mentioned at the end around self-fulfilling prophecies. It also reminded me of a book called Leadership is Language (which I recommend), with some interesting stories and examples. One I recall often from the book is about asking questions that invite open, thoughtful answers rather than closed, agreement-like answers. For example, "what am I missing", rather than "am I missing anything". I find I'm often in the latter mode and default to it, just wanting confirmation to roll ahead with my plan, rather than actually inviting others' views, so I try to remember this one.
↑ comment by tamgent · 2022-05-01T21:07:50.230Z · LW(p) · GW(p)
I feel like expanding a bit on the confusion around self-fulfilling prophecy. One theme that seems consistent across narrative syncing, leadership language, and self-fulfilling prophecies is that they're all paying attention to the constructivist forces.
comment by Cedar (xida-ren) · 2022-05-02T18:54:12.933Z · LW(p) · GW(p)
I kinda like the term "cheerleading" instead of narrative syncing. Kinda like "I define a cheer and y'all follow me and do the cheer".
Shameless plug here: I'm trying to get into alignment but struggling to get motivated emotionally. If any of you wants to do mutual cheerleading over a discord chat or something, please PM me. also PM me if you just want to hang out and chat and figure out whether you want information or cheerleading. I'd be glad to help with that too by rubber duckie-ing with you.
I'm doing this because I mistook my need for cheerleading for a need for information a while ago, and had a very confusing 1-hour chat with a rationalist where he kept trying to give me information and I kept trying to look for inclusion/acceptance signals. I learned a lot both by listening to him and by reflecting upon that experience, but I fear I've ended up wasting his time and I kinda feel sad about that. This is why I'm putting my email here.
comment by TekhneMakre · 2022-05-01T03:04:19.356Z · LW(p) · GW(p)
>Note that in this example, the “me” character does do narrative syncing – but in a way that does not disguise itself as information sharing about AI.
Well, it's better, but I think you're still playing into [Alec taking things you say as orders], which I claim is a thing, so that in practice Alec will predictably, systematically be less helpful and more harmful than if he weren't [taking things you say as orders].
>How can a person shift things toward less confusing patterns?
[Tossing out ideas, caveat emptor]
You could draw attention in the moment to that dynamic, so that Alec can get introspection on the phenomenon, or metaphorically so that Alec can start to get a "brain-computer interface readout" on "right now I'm looking to be told what to do".
You could abstractly clarify the situation (which this post helps with, thank you!).
You could do improv-y stuff, playing low status, playing like you're looking for orders from Alec. (Warning, might be a kind of deception.) The supposed upside is to "shake things loose" a little so that Alec gets exposed to what it would be like if he was High and you were Low, so that when he's back to his usual Low play, he can know it's happening and know what's different about it.
You could refuse to answer Alec until it seems like he's acting like his own boss.
You could technically explain why you don't want to give him orders.
You could address why Alec locally wants to be told what to do.
>Is it something we should simply try to minimize if we want good epistemics?
My wild guess is that first-order minimizing narrative-syncing isn't that important, but being open to second-order corrections is very important, because that's what determines your trajectory towards correct beliefs vs. delusion. There's always a ton of narrative-syncing taken for granted, and primordially speech is corrections to that; just-trying-to-be-epistemic speech is a rarefied sort of thing. I don't know how to balance being open to corrections against "losing momentum", but the obvious guess is to not sacrifice any ability to correct at all, because subtle failures-to-correct can propagate as a narrative-sync to just not think about something. In other words, yay to simply minimizing pushes against correcting errors, boo to minimizing narrative-syncing.
>What’re some places where this happens a lot?
This happens a lot with a power / status differential. This isn't a vacuous claim: one consequence is that I'm claiming that for each person Alec, there's a roughly 1-dimensional value called "power" (well, maybe it depends on social context...) that Alec assigns to each other person Berry, and this "power" variable predicts all of: the extent to which Alec imputes narrative-syncingness onto Berry's utterances; the extent to which Alec expects others to take Berry's utterances as narrative-syncing; the extent to which Alec expects Berry to take others' statements as narrative-syncing; and the extent to which Alec is inclined to punish deviations from those expectations.
>How can a person notice it?
If Berry "directs annoyance at" Alec in a way where the annoyance isn't hidden but the reasons for the annoyance are hidden, that might be because Berry expected Alec to coordinate in a narrative-sync but Alec didn't.
If Alec finds himself doing stuff on automatic, not really interested / curious / etc., that might be because he took orders from someone.
> “Why did I even sign up for this program?”
To say something maybe obvious, this seems like a great chance to pry your fingertips under the shield of narratives; why are you here Alec? You were expecting me to do something or know something? Here's a description of some of the tacks I've taken in trying to solve this thing, and why they can't work; what should we do?
↑ comment by ViktoriaMalyasova · 2022-05-02T11:49:00.396Z · LW(p) · GW(p)
>> You could refuse to answer Alec until it seems like he's acting like his own boss.
Alternative suggestion: do not make your help conditional on Alec's ability to phrase his questions exactly the right way or follow some secret rule he's not aware of.
Just figure out what information is useful for newcomers, and share it. Explain what kinds of help and support are available and explain the limits of your own knowledge. The third answer gets this right.
↑ comment by TekhneMakre · 2022-05-02T23:23:48.284Z · LW(p) · GW(p)
> secret rule
It shouldn't be secret.
The third answer gets it wrong if Alec takes it as an order as opposed to potentially useful information.
>Just figure out what information is useful for newcomers, and share it.
Yes, but this only makes sense if your statements are taken as information. If they aren't, then the useful information is the fact that your statements aren't being taken as information.
↑ comment by Daphne_W · 2022-05-01T17:35:36.093Z · LW(p) · GW(p)
Well, it's better, but I think you're still playing into [Alec taking things you say as orders], which I claim is a thing, so that in practice Alec will predictably, systematically be less helpful and more harmful than if he weren't [taking things you say as orders].
There seems to be an assumption here that Alec would do something relatively helpful instead if he weren't taking the things you say as orders. I don't think this is always the case: for people who aren't used to thinking for themselves, the problem of directing your career to reduce AI risk is not a great testbed (high stakes, slow feedback), and without guidance they can just bounce off, get stuck with decision paralysis, or listen to people who don't have qualms about giving advice.
Like, imagine Alec gives you API access to his brain, with a slider that controls how much of his daily effort he spends not following orders/doing what he thinks is best. You may observe that his slider is set lower than that of most productive people in AI safety, but (1) it might not help him or others to crank it up and (2) if it is helpful to crank it up, that seems like a useful order to give.
Anna's Scenario 3 seems like a good way to self-consistently nudge the slider upwards over a longer period of time, as do most of your suggestions.
↑ comment by TekhneMakre · 2022-05-01T21:58:37.808Z · LW(p) · GW(p)
Good point. My guess is that if Alec is sufficiently like this, the right thing to do is to tell Alec not to work on AI risk for now. Instead, Alec, do other fun interesting things that matter to you; especially, try looking for things that you're interested in / matter to you apart from social direction / feedback (and which aren't as difficult as AI safety); and stay friends with me, if you like.
↑ comment by Daphne_W · 2022-05-02T08:50:49.479Z · LW(p) · GW(p)
There definitely seem to be (relative) grunt work positions in AI safety, like this [LW · GW], this [LW · GW] or this [LW · GW]. Unless you think these are harmful, it seems like it would be better to direct the Alec-est Alecs of the world that way instead of risking them never contributing.
I understand not wanting to shoulder responsibility for their career personally, and I understand wanting an unbounded culture for those who thrive under those conditions, but I don't see the harm in having a parallel structure for those who do want/need guidance.
↑ comment by TekhneMakre · 2022-05-02T09:23:32.464Z · LW(p) · GW(p)
That seems maybe right if Alec isn't *interested* in helping in non-"grunt" ways. (TBC "grunt" stuff can be super important; it's just that we seem much more bottlenecked on 1. non-grunt stuff, and 2. grunt stuff for stuff that's too weird for people like this to decide to work on.) I'm also saying that Alec might end up being able and willing to help in non-grunt ways, but not by taking orders, and rather by going off and learning how to do non-grunt stuff in a context with more clear feedback.
It could be harmful to Alec to give him orders to work on "grunt" stuff, for example by playing into his delusion that doing some task is crucially important for the world not ending, which is an inappropriate amount of pressure and stress and, more importantly, is probably false. It could potentially be harmful for Alec to be providing labor for whoever managed to gain control of the narrative via fraud, because then fraudsters get lots of labor and are empowered to do more fraud. It could be harmful for Alec to feel he has to add weight to the narrative that what he's doing matters, thereby amplifying information cascades.
comment by kave · 2024-01-15T05:39:36.166Z · LW(p) · GW(p)
I feel kind of conflicted about this post overall, but I certainly find myself thinking with the concept fairly frequently.
↑ comment by AnnaSalamon · 2024-02-08T07:46:15.337Z · LW(p) · GW(p)
I made a new post just now, "Believing In [LW · GW]," which offers a different account of some of the above phenomena.
My current take is that my old concept of "narrative syncing" describes the behaviorist outside of a pattern of relating that pops up a lot, but doesn't describe the earnest inside that that pattern is kind of designed around.
(I still think "narrative syncing" is often done without an earnest inside, by people keeping words around an old icon after the icon has lost its original earnest meaning (e.g., to manipulate others), so I still want a term for that part; I, weirdly, do not often think using the term "narrative syncing," it doesn't quite do it for me, not sure what would. Some term that is to "believing in" as lying/deceiving is to "beliefs/predictions".)
↑ comment by kave · 2024-02-09T01:41:14.695Z · LW(p) · GW(p)
I'll expand a little on how I use the concept.
People around me often say things like, "We have a new toy! The norm is to put away all the pieces when you're done playing with it" (or, more subtly, "I propose a norm of putting away all the pieces"), or "the Schelling time is tomorrow at noon", or "now it's common knowledge that I went to the shops [after saying 'I went to the shops']". I often feel a little upset at this use of language. And then it helps me to think of the speech act as narrative syncing. My hackles are still a little raised (at least by the 'norm' one), but I understand more that the narrative syncing is doing something valuable.
comment by MSRayne · 2022-07-03T18:30:06.726Z · LW(p) · GW(p)
I think a common term for something similar to this is performative speech. Essentially what you're doing in this case is announcing that something is true in such a way as to make it so in some sense. Or, more accurately, in such as a way as to make acting like it is so the norm. Every time a speech act like this is performed, the norm "we act as if this is true" is strengthened slightly.
comment by Ruby · 2022-05-05T18:24:37.131Z · LW(p) · GW(p)
Curated. I think the underlying concept here is valuable and if we can develop a good ability to notice and reflect on instances of it, this could both improve our collective epistemics and coordination. Kudos to Anna and everyone in the comments who are trying to flesh out the models.
I do think the concept is much the same as performative utterance and there might be some useful stuff in the literature on that, though mostly I don't care about the exact term used so long as we've got some shared handle for the concept. "Narrative syncing" does feel more likely to catch on in our community, maybe, so I'm happy to broadcast it.
As follow-up work, I would love to see someone (or multiple someones) collecting and fleshing out examples of the phenomenon. Or perhaps the best is for people to report their own concrete experiences. I might point to my wedding [? · GW] as a case where my wife and I synced the narrative about our relationship in a way that persists eight years later.
↑ comment by Raemon · 2022-05-06T09:35:54.398Z · LW(p) · GW(p)
FYI the concept seems fairly distinct to me from performative utterances – performative utterances include things that are not syncing narratives ("go to the store"), and I think narrative syncing is an act that requires two people to do (performative utterance is a thing one person says, which might or might not be part of a narrative sync).
(But, agreed the two concepts are certainly related, and I wouldn't be surprised if there's useful literature there)
comment by c.trout (ctrout) · 2022-05-03T23:09:24.896Z · LW(p) · GW(p)
Yes, such sentences are a thing. Kendall Walton calls them "principles of generation" because, according to his analysis, they generate fictional truths (see his Mimesis as Make-Believe). Pointing at the sand and shouting "There is lava there!" we have said something fictionally true, in virtue of the game rule pronounced earlier. "Narrative syncing" sounds like a broader set of practices that generate and sustain such truths – I like it! (I must say "principles of generation" is a bit clunky anyway – but it's also more specific. Maybe "rule decreeing utterances" would be better?).
Anyway, I could imagine naturally extending this analysis beyond make-believe games to etiquette. The principles of generation here would be generating truths of etiquette.
And then, if you like, you could follow a (small) minority of philosophers in claiming that morality is constructed of such etiquette truths! See e.g.:
Foot, P., 1972. Morality as a system of Hypothetical Imperatives
Joyce, R., 2001. The Myth of Morality
Kalderon, M. E., 2005. Moral Fictionalism
comment by the gears to ascension (lahwran) · 2022-05-02T23:18:35.121Z · LW(p) · GW(p)
possible other phrases that don't require inventing new word meanings, and will therefore be understood by people who have not read this article:
- "syncing"
- "setting the tone"
- "set the stage"
- "setting the script"
- "discussing plans"
could anyone help me refine these into a solid replacement? I worry that heavy use of "narrative syncing" will further separate idiolects at a time when we urgently need to be seeking to simplify the universal shared idiolect and avoid proliferation of linguistic standards. In general, jargon is a code smell, especially since there is no isolated group of world savers and ideas need to spread far quickly.
comment by cata · 2022-05-01T07:01:02.465Z · LW(p) · GW(p)
Scenario 3 bothers me. Did you really have to do the thing where you generalize about the whole social group?
Compare:
We try to have a culture around here where there is no vetted-by-the-group answer to this; we instead try to encourage forming your own inside-view model of how AI risk might work, what paths through to a good future might be possible, etc...
with
I don't think I have any specific best answer for this in general. My best suggestion would be to encourage forming your own inside-view model of how AI risk might work, what paths through to a good future might be possible, etc...
To me, the new version sounds more personal, is equally helpful, and isn't misleading or attempting to do "narrative syncing." (Maybe I just don't understand what's going on, because the first scenario sounded pretty reasonable to me, and seems to contain basically the same content as the third scenario, so I would not have predicted vastly different reactions. The first scenario is phrased a little more negatively, I guess?)
Replies from: tamgent
↑ comment by tamgent · 2022-05-01T21:18:44.923Z · LW(p) · GW(p)
I think the difference between 1 and 3 is that in 3 there is explicit acknowledgement that what the person might be asking for is "what is the done thing around here", via an attempt to directly answer that inferred subtext.
Also, I like your revised answer.
comment by Mart_Korz (Korz) · 2022-05-05T18:24:42.453Z · LW(p) · GW(p)
One concept which I think is strongly related is common knowledge (if we extend it to refer not only to beliefs but also to norms).
I think that a good part of the difference between the scenarios for answering Alec is captured by the difference between sharing personal advice and knowledge, and sharing common norms and knowledge. The latter gives Alec a lot of information about how to work together and cooperate with "the typical member of" the AI safety community, which is important information independently of whether it would be best for Alec's thinking to adopt the beliefs or follow the advice. At least he will be able to navigate working together with others in the community even if he disagrees with their opinions.
Of course there can still be the failure mode of just aiming to fit into the group and neglecting the group's intended purpose.
I think one important aspect here is that mentally treating a group's norms, behaviours and beliefs as simple and uniform makes establishing common knowledge [LW · GW] much easier and makes it much simpler for an individual to navigate the group. "The Intelligent Social Web [LW · GW]" nicely describes this, and also the pull that people feel towards acting out well-trodden social roles.
Of course, AI safety might well be one of the topics where we really want people not to just adopt the patterns of thought provided by the group without seriously thinking them through.
comment by phdead · 2022-05-03T02:26:01.799Z · LW(p) · GW(p)
My thoughts for each question:
- Depending on context, there are a few ways I would communicate this. Take the phrase "We are quiet here." Said to prospective tenants at an apartment complex, it is "communicating group norms". Said to a friend who is talking during a funeral, it is "enforcing group norms". Telling yourself you will do this before you sleep is "enforcing identity norms". You are sharing information, just local information about the group instead of global information about the world. All the examples given are information sharing.
- Believing in an opinion can be a group norm, and this can be useful or harmful. For example, "We believe victims" may not be bulletproof life advice, but groups which have that group norm often are more useful to survivors of sexual assault than groups which try to figure out if they believe the story first.
- I think motivations are complex and impossible to fully vocalize. Often I don't realize why I really want or don't want to do something until after I've had the instinctual response. It's possible that narratives which obscure the full depth but feel true are good temporary stand-ins in these cases.
- Communication and enforcement of group and identity norms happens everywhere all the time, but increases the more status is associated with the group or identity.
comment by jimmy · 2022-05-01T09:43:42.792Z · LW(p) · GW(p)
"Narrative syncing" took a moment to click to me, but when it did it brought up the connotations that I don't see in the examples alone. Personally, the words that first came to mind were "Presupposing into existence", and then after getting a better idea of which facet of this you were intending to convey, "Coordination through presupposition".
While it obviously can be problematic in the ways you describe, I wouldn't view it as "a bad thing" or "a thing to be minimized". It's like... well, telling someone what to do can be "bossy" and "controlling", and maybe as a society we think we see too much of this failure mode, but sometimes commands really are called for, and so too little willingness to command "Take cover!" when necessary can be just as bad.
Before getting into what I see as the proper role of this form of communication, I think it's worth pointing out something relevant about the impression I got when meeting you forever ago, which I'd expect others to get as well, and which I'd expect to lead to this kind of difficulty and this kind of explanation of the difficulty.
It's a little hard to put into words, and not at all a bad thing, but it's this sort of paradoxically "intimidating in reverse" thing. It's this "I care what you think. I will listen and update my models based on what you say" aura that provokes anxieties of "Wait a minute, my status isn't that high here. This doesn't make sense, and I'm afraid if I don't denounce the status elevation I might fall less gracefully soon" -- though without the verbal explanation, of course. But then, when you look at it, it's *not* that you were holding other people above you, and there are no signals of "I will *believe* what you say" or "I see you as claiming relevant authority here", just a lack of "threatened projection of rejection". Like, there was going to be no "That's dumb. You're dumb for thinking that", and no passive aggression in "Hm. Okay.", just an honest attempt to take things for what they appear to be worth. It's unusually respectful, and therefore jarring when people aren't used to being given the opportunity to take that kind of responsibility.
I think this is a good thing, but if you lack an awareness of how it clashes with the expectations people are likely to have, it can be harder to notice and preempt the issues that come up when people get too intimidated by what you're asking of them, which they are likely to flinch from. Your proposed fix addresses part of this because you're at least saying the "We expect you to think for yourself" part explicitly rather than presupposing it on them, but there are a couple of pieces missing. One is that it doesn't acknowledge the "scariness" of being expected to come up with one's own perspectives and offer them to be criticized by very intelligent people who have thought about the subject matter more than you have. Your phrasing downplays it a bit ("no vetted-by-the-group answer to this" is almost like "no right answer here") and that can help, but I suspect that it ends up sweeping some of the intimidation under the rug rather than integrating it.
The other bit is that it doesn't really address the conceptual possibility that "You should go study ML" is actually the right answer here. This needs a little unpacking, I think.
Respect, including self-respect or lack thereof, is a big part of how we reason collectively. When someone makes an explicit argument (or otherwise makes a bid for pointing our attention in a certain direction), we cannot default to always engaging and trying to fully evaluate the argument on the object level. Before even beginning to do that, we have to decide whether or not, and to what extent, their claim is worth engaging with, and we do that based on a sense of how likely it is that this person's thoughts will prove useful to engage with. "Respect" is a pretty good term for that valuation, and it is incredibly useful for communicating across inferential distances. It's always necessary to *some* degree (or else discussions go the way political arguments go, even about trivial things), and excess amounts let you bridge much larger distances usefully, because things don't have to be supported immediately relative to a vastly different perspective. When the homeless guy starts talking about the multiverse, you don't think quite so hard about whether it could be true as you would if it were a respected physics professor saying the same things. When someone you can tell sees things you miss tells you that you're in danger and to follow their instructions if you want to live, it can be viscerally unnerving, and you might find yourself motivated to follow precautions you don't understand -- and it might very well be the right thing to do.
Returning to Alec, he's coming to *you*. Anna freakin' Salamon. He's asking you "What should I do? Tell me what I should do, because *I don't know* what I should do". In response one, you're missing his presupposition that he belongs in a "follower" role as it relates to this question, and elevating to "peer" someone who doesn't feel up to the job, without acknowledging his concerns or addressing them.
In response two, you're accepting the role and feeling uneasy about it, presumably because you intuitively feel like that leadership role is appropriate there, regardless of whether you've put it to words.
In response three, you lead yourself out of a leadership role. This is nice because it actually addresses the issue somewhat, and is a potentially valid use of leadership, but open to unintentional abuse of the same type that your unease with the second response warns of.
Returning to "narrative syncing", I don't see it so much as "syncing", as that implies a sort of symmetry that doesn't exist. It's not "I'm over here, where are you? How do we best meet up?". It's "We're meeting up *here*. This is where you will be, or you won't be part of the group". It's a decision coming from someone who has the authority to decide.
So when's that a good thing?
Well, put simply, when it's coming from someone who actually has the authority to decide, and when the decision is a good one. Is the statement *true?*
"We don't do that here" might be questionable. Do people there really not do it, or do you just frown at them when they do? Do you actually *know* that people will continue to meet your expectations of them, or is there a little discord that you're "shoulding" at them? Is that a good rule in the first place?
It's worth noticing that we do this all the time without noticing anything weird about it. What else is "My birthday party is this Saturday!", if not syncing narratives around a decision that is stated as fact? But it's *true*, so what's the problem? Or internally, "You know, I *will* go to that party!". They're both decisions and predictions simultaneously, because that's how decisions fundamentally work. As long as it's an actual prediction and not a "shoulding", it doesn't suddenly become dishonest if the person predicting has some choice in the matter. Nor is there anything wrong with exercising choice in good directions.
So as applied to things like "What should I do for AI risk?", where the person is to some degree asking to be coordinated, and telling you that they want your belief or your community's belief because they don't trust themselves to do better, do you have something worth coordinating them toward? Are you sure you don't, given how strongly they believe they need the direction, and how much longer you've been thinking about this?
An answer which denies neither possibility might look like...
"ML. Computer science in general. AI safety orgs. Those are the legible options that most of us currently guess to be best for most, but there's dissent and no one really knows. If you don't know what else to do, start with computer science while working to develop your own inside views about what the right path is, and ditch my advice the moment you don't believe it to be right for you. There's plenty of room for new answers here, and finding them might be one of the more valuable things you could contribute, if you think you have some ideas".
comment by Taran · 2022-05-01T07:35:57.446Z · LW(p) · GW(p)
When I first read this I intuitively felt like this was a useful pattern (it reminds me of one of the useful bits of Illuminatus!), but I haven't been able to construct any hypotheticals where I'd use it.
I don't think it's a compelling account of your three scenarios. The response in scenario 1 avoids giving Alec any orders, but it also avoids demonstrating the community's value to him in solving the problem. To a goal-driven Alec who's looking for resources rather than superiors, it's still disappointing: "we don't have any agreed-upon research directions, you have to come up with your own" is the kind of insight you can fit in a blog post, not something you have to go to a workshop to learn. "Why did I sign up for this?" is a pretty rude thing for this Alec to say out loud, but he's kinda right. In this analysis, the response in scenario 3 is better because it clearly demonstrates value: Alec will have to come up with his own ideas, but he can surround himself with other people who are doing the same thing, and if he has a good idea he can get paid to work on it.
More generally, I think ambiguity between syncing and sharing is uncommon and not that interesting. Even when people are asking to be told what to do, there's usually a lot of overlap between "the things the community would give as advice" and "the things you do to fit in to the community". For example, if you go to a go club and ask the players there how to get stronger at go, and you take their advice, you'll both get stronger at go and become more like the kind of person who hangs out in go clubs. If you just want to be in sync with the go club narrative and don't care about the game, you'll still ask most of the same questions: the go players will have a hard time telling your real motivation, and it's not clear to me that they have an incentive to try.
But if they did care about that distinction, one thing they could do is divide their responses into narrative and informative parts, tagged explicitly as "here's what we do, and here's why": "We all studied beginner-level life and death problems before we tried reading that book of tactics you've got, because each of those tactics might come up once per game, if at all, whereas you'll be thinking about life and death every time you make a move". Or for the AI safety case, "We don't have a single answer we're confident in: we each have our own models of AI development, failure, and success, that we came to through our own study and research. We can explain those models to you but ultimately you will have to develop your own, probably more than once. I know that's not career advice, as such, but that's preparadigmatic research for you." (note that I only optimized that for illustrating the principle, not for being sound AI research advice!)
tl;dr I think narrative syncing is a natural category but I'm much less confident that “narrative syncing disguised as information sharing” is a problem worth noting, and in the AI-safety example I think you're applying it to a mostly unrelated problem.
Replies from: AnnaSalamon, AnnaSalamon
↑ comment by AnnaSalamon · 2022-05-02T06:34:29.800Z · LW(p) · GW(p)
I haven't been able to construct any hypotheticals where I'd use it…. tl;dr I think narrative syncing is a natural category but I'm much less confident that “narrative syncing disguised as information sharing” is a problem worth noting,
I’m curious what you think of the examples in the long comment I just made [LW(p) · GW(p)] (which was partly in response to this, but which I wrote as its own thing because I also wish I’d added it to the post in general).
I’m now thinking there’re really four concepts:
- Narrative syncing. (Example: “the sand is lava.”)
- Narrative syncing that can easily be misunderstood as information sharing. (Example: many of Fauci’s statements about covid, if this article about it is correct.)
- Narrative syncing that sets up social pressure not to disagree, or not to weaken the apparent social norm about how we’ll talk about that. (Example: “Gambi’s is a great restaurant and we are all agreed on going there,” when said in an irate tone of voice after a long and painful discussion about which restaurant to go to.)
- Narrative syncing that falls into categories #2 and #3 simultaneously. (Example: “The 9/11 terrorists were cowards,” if used to establish a norm for how we’re going to speak around here rather than to share honest impressions and invite inquiry.)
I am currently thinking that category #4 is my real nemesis — the actual thing I want to describe, and that I think is pretty common and leads to meaningfully worse epistemics than an alternate world where we skillfully get the good stuff without the social pressures against inquiry/speech.
I also have a prediction that most (though not all) instances of #2 will also be instances of #3, which is part of why I think there's a "natural cluster worth forming a concept around" here.
↑ comment by AnnaSalamon · 2022-05-02T15:06:19.953Z · LW(p) · GW(p)
For example, if you go to a go club and ask the players there how to get stronger at go, and you take their advice, you'll both get stronger at go and become more like the kind of person who hangs out in go clubs. If you just want to be in sync with the go club narrative and don't care about the game, you'll still ask most of the same questions: the go players will have a hard time telling your real motivation, and it's not clear to me that they have an incentive to try.
This seems right to me about most go clubs, but there’re a lot of other places that seem to me different on this axis.
Distinguishing features of Go clubs from my POV:
- A rapid and trustworthy feedback loop, where everyone wins and loses at non-rigged games of Go regularly. (Opposite of schools proliferating without evidence.)
- A lack of need to coordinate individuals. (People win or lose Go games on their own, rather than by needing to organize other people into coordinating their play.)
Some places where I expect “being in sync with the narrative” would diverge more from “just figuring out how to get stronger / how to do the object-level task in a general way”:
- A hypothetical Go club that somehow twisted around to boost a famous player’s ego about how very useful his particular life-and-death problems were, or something, maybe so they could keep him around and brag about how they had him at their club, and so individual members could stay on his good side. (Doesn’t seem very likely, but it’s a thought experiment.)
- Many groups with an “ideological” slant, e.g. the Sierra Club or ACLU or a particular church
- (Maybe? Not sure about this one.) Many groups that are trying to coordinate their members to follow a particular person’s vision for coordinated action, e.g. Ikea’s or most other big companies’ interactions with their staff, or even a ~8-employee coffee shop that’s trying to realize a particular person’s vision
comment by Wall Flower (wall-flower) · 2022-05-31T05:31:42.147Z · LW(p) · GW(p)
I find this a useful concept, but the term itself is a horrible misnomer? "Narrative syncing" is a poor way of describing what I would call "coordination" (as an activity - "he's in charge of coordination") or, as a speech act, perhaps a "coordinative statement" - as in the "coordination problem": https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095637587#:~:text=A%20situation%20in%20which%20the,with%20what%20the%20others%20do.
This is an important misnomer! If you say something is "narrative syncing" as opposed to "information sharing", the implication is that the statement contains a "narrative" as opposed to "information" - whereas a "coordinating statement" can contain only factual information and still be, for social purposes, what you call "narrative syncing." I don't know how else to say this. To quote the author directly: "narrative syncing disguised as information sharing" - why use such language? "Disguised" implies that one isn't the other, except it clearly is? "Narrative syncing" or, as I would label it, "coordination" is why we communicate. "Information sharing" is how we achieve our goals? If we didn't have a goal behind sharing information, unless the goal is sharing information for its own sake, why would we bother spending the energy and effort at all?
In short, I largely agree with AllAmericanBreakfast - it's "leadership language", but it's also just language, meant to influence the behaviours and beliefs of others (which extends beyond leaders to every coordinative activity, even from a subordinate's position). The confusion is almost academic - OP seems to be interested in how and why these sentences are used in social contexts, and what effects result from that - which means that of course it depends on social context and culture. How the OP would argue that sentences are (linguistically) "narrative syncing" in, say, a Chinese context - with, to make it more difficult for the OP, tone and body language - as opposed to understanding them as speech acts (socially/politically), would be interesting to me.
Replies from: wall-flower
↑ comment by Wall Flower (wall-flower) · 2022-05-31T05:34:26.872Z · LW(p) · GW(p)
Even shorter - "narrative syncing" is a speech act contingent on social context and not much else. "Information sharing" is a property of a given sentence's syntax and semantics, independent of social context. Pitting the two against each other is a confusing way of doing philosophy.