A few thoughts on the inner ring
post by KatjaGrace · 2021-01-21T03:40:15.253Z · LW · GW · 23 comments
I enjoyed C.S. Lewis’ The Inner Ring, and recommend you read it. It basically claims that much of human effort is directed at being admitted to whatever the local in-group is, that this happens easily to people, and that it is a bad thing to be drawn into.
Some quotes, though I also recommend reading the whole thing:
In the passage I have just read from Tolstoy, the young second lieutenant Boris Dubretskoi discovers that there exist in the army two different systems or hierarchies. The one is printed in some little red book and anyone can easily read it up. It also remains constant. A general is always superior to a colonel, and a colonel to a captain. The other is not printed anywhere. Nor is it even a formally organised secret society with officers and rules which you would be told after you had been admitted. You are never formally and explicitly admitted by anyone. You discover gradually, in almost indefinable ways, that it exists and that you are outside it; and then later, perhaps, that you are inside it.
There are what correspond to passwords, but they are too spontaneous and informal. A particular slang, the use of particular nicknames, an allusive manner of conversation, are the marks. But it is not so constant. It is not easy, even at a given moment, to say who is inside and who is outside. Some people are obviously in and some are obviously out, but there are always several on the borderline. And if you come back to the same Divisional Headquarters, or Brigade Headquarters, or the same regiment or even the same company, after six weeks’ absence, you may find this secondary hierarchy quite altered.
There are no formal admissions or expulsions. People think they are in it after they have in fact been pushed out of it, or before they have been allowed in: this provides great amusement for those who are really inside. It has no fixed name. The only certain rule is that the insiders and outsiders call it by different names. From inside it may be designated, in simple cases, by mere enumeration: it may be called “You and Tony and me.” When it is very secure and comparatively stable in membership it calls itself “we.” When it has to be expanded to meet a particular emergency it calls itself “all the sensible people at this place.” From outside, if you have despaired of getting into it, you call it “That gang” or “they” or “So-and-so and his set” or “The Caucus” or “The Inner Ring.” If you are a candidate for admission you probably don’t call it anything. To discuss it with the other outsiders would make you feel outside yourself. And to mention it in talking to the man who is inside, and who may help you if this present conversation goes well, would be madness.
…
My main purpose in this address is simply to convince you that this desire is one of the great permanent mainsprings of human action. It is one of the factors which go to make up the world as we know it—this whole pell-mell of struggle, competition, confusion, graft, disappointment and advertisement, and if it is one of the permanent mainsprings then you may be quite sure of this. Unless you take measures to prevent it, this desire is going to be one of the chief motives of your life, from the first day on which you enter your profession until the day when you are too old to care. That will be the natural thing—the life that will come to you of its own accord. Any other kind of life, if you lead it, will be the result of conscious and continuous effort. If you do nothing about it, if you drift with the stream, you will in fact be an “inner ringer.” I don’t say you’ll be a successful one; that’s as may be. But whether by pining and moping outside Rings that you can never enter, or by passing triumphantly further and further in—one way or the other you will be that kind of man.
…
The quest of the Inner Ring will break your hearts unless you break it. But if you break it, a surprising result will follow. If in your working hours you make the work your end, you will presently find yourself all unawares inside the only circle in your profession that really matters. You will be one of the sound craftsmen, and other sound craftsmen will know it. This group of craftsmen will by no means coincide with the Inner Ring or the Important People or the People in the Know. It will not shape that professional policy or work up that professional influence which fights for the profession as a whole against the public: nor will it lead to those periodic scandals and crises which the Inner Ring produces. But it will do those things which that profession exists to do and will in the long run be responsible for all the respect which that profession in fact enjoys and which the speeches and advertisements cannot maintain.
His main explicit reasons for advising against succumbing to this easy set of motives are that it runs a major risk of turning you into a scoundrel, and that it is fundamentally unsatisfying—once admitted to the in-group, you will just want a further in-group; the exclusive appeal of the in-group won’t actually be appealing once you are comfortably in it; and the social pleasures of company in the set probably won’t satisfy, since they didn’t satisfy you on the outside.
I think there is further reason not to be drawn into such things:
- I controversially claim that even the good of being high status is a crappy kind of good relative to those available from other arenas of existence.
- It is roughly zero sum, so hard to wholly get behind and believe in, what with your success being net bad for the rest of the world.
- To the extent it is at the cost of real craftsmanship and focus on the object level, it will make you worse at your profession, and thus less cool in the eyes of God, or an ideal observer, who are even cooler than your local set.
I think Lewis is also making an interesting maneuver here, beyond communicating an idea. In modeling the behavior of the coolness-seekers, you put them in a less cool position. In the default framing, they are sophisticated and others are naive. But when the ‘naive’ are intentionally so because they see the whole situation for what it is, while the sophisticated followed their brute urges without stepping back, who is naive really?
23 comments
Comments sorted by top scores.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-21T08:11:51.803Z · LW(p) · GW(p)
I also love The Inner Ring and basically endorse your take on it. One confusion/thought I had about it was: How do we distinguish between Inner Rings and Groups of Sound Craftsmen? Both are implicit/informal groups of people who recognize and respect each other. Is the difference simply that Sound Craftsmen's mutual respect is based on correct judgments of competence, whereas Inner Rings are based on incorrect judgments of competence? That seems reasonable, but it makes it very hard in some cases to tell whether the group you are gradually becoming part of -- and which you are excited to be part of -- is an Inner Ring or a GoSC. (Because, sure, you think these people are competent, but of course you'd probably also think that if they were an Inner Ring, because you are so starstruck.) And also it means there's a smooth spectrum of which Inner Rings and GoSCs are the ends; a GoSC is where everyone is totally correct in their judgments of competence and an Inner Ring is where everyone is totally incorrect... but almost every group, realistically, will be somewhere in the middle.
↑ comment by David Hornbein · 2021-01-21T14:41:48.648Z · LW(p) · GW(p)
How do we distinguish between Inner Rings and Groups of Sound Craftsmen?
The essay's answer to this is solid, and has steered me well:
In any wholesome group of people which holds together for a good purpose, the exclusions are in a sense accidental. Three or four people who are together for the sake of some piece of work exclude others because there is work only for so many or because the others can’t in fact do it. Your little musical group limits its numbers because the rooms they meet in are only so big. But your genuine Inner Ring exists for exclusion. There’d be no fun if there were no outsiders. The invisible line would have no meaning unless most people were on the wrong side of it. Exclusion is no accident; it is the essence.
My own experience supports this being the crucial difference. I've encountered a few groups where the exclusion is the main purpose of the group, *and* the exclusion is based on reasonably good judgments of competence. These groups strike me as pathological and corrupting in the way that Lewis describes. I've also encountered many groups where exclusion is only "accidental", and also the people are very bad at judging competence. These groups certainly have their problems, but they don't have the particular issues that Lewis describes.
↑ comment by Kaj_Sotala · 2021-01-22T19:15:58.448Z · LW(p) · GW(p)
I'm not sure in which category you would put it, but as a counterpoint, Team Cohesion and Exclusionary Egalitarianism [LW · GW] argues that for some groups, exclusion is at least partially essential and that they are better off for it:
... you find this pattern across nearly all elite American Special Forces type units — (1) an exceedingly difficult bar to get in, followed by (2) incredibly loose, informal, collegial norms with nearly-infinitely less emphasis on hierarchy and bureaucracy compared to all other military units.
To even "try out" for a Special Forces group like Delta Force or the Navy SEAL Teams, you have to be among the most dedicated, most physically fit, and most competent of soldier.
Then, the selection procedures are incredibly intense — only around 10% of those who attend selection actually make the cut.
This is, of course, exclusionary.
But then, seemingly paradoxically, these organizations run with far less hierarchy, formal authority, and traditional military decorum than the norm. They run... far more egalitarian than other traditional military units. [...]
Going back [...] [If we search out the root causes of "perpetual bickering" within many well-meaning volunteer organizations] we can find a few right away —
- When there are low standards of trust among a team, people tend to advocate more strongly for their own preferences. There's less confidence on an individual level that one's own goals and preferences will be reached if not strongly advocated for.
- Ideas — especially new ideas — are notoriously difficult to evaluate. When there's been no objective standard of performance set and achieved by people who are working on strategy and doctrine, you don't know who has the ability to actually implement their ideas and see them through to conclusion.
- Generally at the idea phase, people are maximally excited and engaged. People are often unable to model themselves to know how they'll perform when the enthusiasm wears off.
- In the absence of previously demonstrated competence, people might want to show they're fit for a leadership role or key role in decisionmaking early, and might want to (perhaps subconsciously) demonstrate prowess at making good arguments, appearing smart and erudite, etc.
And of course, many more issues.
Once again, this is often resolved by hierarchy — X person is in charge. In the absence of everyone agreeing, we'll do what X says to do. Because it's better than the alternative.
But the tradeoffs of hierarchical organizations are well-known, and hierarchical leadership seems like a fit for some domains far more so than others.
On the other end of the spectrum, it's easy when being egalitarian to not actually get decisions made, and to fail to get valuable work done. For all the flaws of hierarchical leadership, it does tend to resolve the "perpetual bickering" problem.
From both personal experience and a pretty deep immersion into the history of successful organizations, it looks like often an answer is an incredibly high bar to joining followed by largely decentralized, collaborative, egalitarian decisionmaking.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-21T16:33:38.426Z · LW(p) · GW(p)
Thanks, this is helpful!
EDIT: more thoughts:
I think the case of limiting meetings because the room is only so big is too easy. What about limiting membership because you want only the best researchers in your org? (Or what if it's a party or retreat for AI safety people -- OK to limit membership to only the best researchers?) There's a good reason for selecting based on competence, obviously. But now we are back to the problem I started with, which is that every Inner Ring probably presents itself (and thinks of itself) as excluding based on competence.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2021-01-22T14:09:55.809Z · LW(p) · GW(p)
In Thinking Fast and Slow, Daniel Kahneman describes an adversarial collaboration between himself and expertise researcher Gary Klein. They were originally on opposite sides of the "how much can we trust the intuitions of confident experts" question, but eventually came to agree that expert intuitions can essentially be trusted if & only if the domain has good feedback loops. So I guess that's one possible heuristic for telling apart a group of sound craftsmen from a mutual admiration society?
↑ comment by Kaj_Sotala · 2021-01-22T19:05:09.893Z · LW(p) · GW(p)
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-22T15:29:26.704Z · LW(p) · GW(p)
Man, that's a very important bit of info which I had heard before but which it helps to be reminded of again. The implications for my own line of work are disturbing!
↑ comment by John_Maxwell (John_Maxwell_IV) · 2021-01-23T20:15:18.367Z · LW(p) · GW(p)
There was an interesting discussion on Twitter the other day about how many AI researchers were inspired to work on AGI by AI safety arguments. Apparently they bought the "AGI is important and possible" part of the argument but not the "alignment is crazy difficult" part.
I do think the AI safety community has some unfortunate echo chamber qualities which end up filtering those people out of the discussion. This seems bad because (1) the arguments for caution might be stronger if they were developed by talking to the smartest skeptics and (2) it may be that alignment isn't crazy difficult and the people filtered out have good ideas for tackling it.
If I had extra money, I might sponsor a prize for a "why we don't need to worry about AI safety" essay contest to try & create an incentive to bridge the tribal gap. Could accomplish one or more of the following:
- Create more cross talk between people working in AGI and people thinking about how to make it safe
- Show that the best arguments for not needing to worry, as discovered by this essay contest, aren't very good
- Get more mainstream AI people thinking about safety (and potentially realizing over the course of writing their essay that it needs to be prioritized)
- Get fresh sets of eyes on AI safety problems in a way that could generate new insights
Another point here is that from a cause prioritization perspective, there's a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there's not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree). So we should expect the set of arguments which have been published to be imbalanced. A contest could help address that.
↑ comment by habryka (habryka4) · 2021-01-23T21:48:12.612Z · LW(p) · GW(p)
Another point here is that from a cause prioritization perspective, there's a group of people incentivized to argue that AI safety is important (anyone who gets paid to work on AI safety), but there's not really any group of people with much of an incentive to argue the reverse (that I can think of at least, let me know if you disagree).
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm? The incentives in the other direction easily seem 10x stronger to me.
Lobbying for people to ignore the harm that your industry is causing is standard in basically any industry, and we have a massive plethora of evidence of organizations putting lots of optimization power into arguing for why their work is going to have no downsides. See the energy industry, tobacco industry, dairy industry, farmers in general, technological incumbents, the medical industry, the construction industry, the meat-production and meat-packaging industries, and really any big industry I can think of. Downplaying risks of your technology is just standard practice for any mature industry out there.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2021-01-24T08:20:57.322Z · LW(p) · GW(p)
What? What about all the people who prefer to do fun research that builds capabilities and has direct ways to make them rich, without having to consider the hypothesis that maybe they are causing harm?
If they're not considering that hypothesis, that means they're not trying to think of arguments against it. Do we disagree?
I agree if the government was seriously considering regulation of AI, the AI industry would probably lobby against this. But that's not the same question. From a PR perspective, just ignoring critics often seems to be a good strategy.
↑ comment by habryka (habryka4) · 2021-01-24T19:21:11.559Z · LW(p) · GW(p)
Yes, I didn't say "they are not considering that hypothesis", I am saying "they don't want to consider that hypothesis". Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments; the other one does not.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2021-01-24T22:57:30.149Z · LW(p) · GW(p)
Yes, I didn't say "they are not considering that hypothesis", I am saying "they don't want to consider that hypothesis". Those do indeed imply very different actions. I think one very naturally gives rise to producing counterarguments; the other one does not.
They don't want to consider the hypothesis, and that's why they'll spend a bunch of time carefully considering it and trying to figure out why it is flawed?
In any case... Assuming the Twitter discussion is accurate, some people working on AGI have already thought about the "alignment is hard" position (since those expositions are how they came to work on AGI). But they don't think the "alignment is hard" position is correct -- it would be kinda dumb to work on AGI carelessly if you thought that position is correct. So it seems to be a matter of considering the position and deciding it is incorrect.
I am not really sure what you mean by the second paragraph. AI is being actively regulated, and there are very active lobbying efforts on behalf of the big technology companies, producing large volumes of arguments for why AI is nothing you have to worry about.
That's interesting, but it doesn't seem that any of the arguments they've made have reached LW or the EA Forum -- let me know if I'm wrong. Anyway I think my original point basically stands -- from the perspective of EA cause prioritization, the incentives to dismantle/refute flawed arguments for prioritizing AI safety are pretty diffuse. (True for most EA causes -- I've long maintained that people should be paid to argue for unincentivized positions.)
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-24T07:52:40.849Z · LW(p) · GW(p)
I do like the idea of sponsoring a prize for such an essay contest. I'd contribute to the prize pool and help with the judging!
comment by Pongo · 2021-01-21T17:38:53.322Z · LW(p) · GW(p)
Most of the Inner Rings I've observed are primarily selected on (1) being able to skilfully violate the explicit local rules to get things done without degrading the structure the rules hold up and (2) being fun to be around, even for long periods and hard work.
Lewis acknowledges that Inner Rings aren't necessarily bad, and I think the above is a reason why.
comment by TsviBT · 2021-01-21T08:44:50.784Z · LW(p) · GW(p)
In modeling the behavior of the coolness-seekers, you put them in a less cool position.
It might be a good move in some contexts, but I feel resistant to taking on this picture, or recommending others take it on. It seems like making the same mistake. Focusing on the object level because you want to be [cool in that you focus on the object level] does have the positive effect of focusing on the object level, but I think it can also just as well have all the bad effects of trying to be in the Inner Ring. If there's something good about getting into the Inner Ring, it should be unpacked, IMO. On the face of it, it seems like mistakenly putting faith in there being an Inner Ring that has things under control / knows what's going on / is oriented to what matters. If there were such a group it would make sense to apprentice yourself to them, not try to trick your way in.
↑ comment by Dirichlet-to-Neumann · 2021-01-21T12:39:48.116Z · LW(p) · GW(p)
Exactly this. The whole point of the Inner Ring (which I have not read, but am judging from the review and my knowledge of Lewis, Christian thought, and virtue ethics) is that you should aim at the goods that are inherent to your trade or activity (i.e., if you are a coder, writing good code), and not care about the social goods that are associated with the activity. Lewis then makes a second claim (which is really a different claim): that you will also reach social goods through sincerely pursuing the inherent goods of your activity.
↑ comment by Viliam · 2022-12-13T15:26:09.145Z · LW(p) · GW(p)
You can write the best code in the world, but the Wikipedia page for "people who write the best code in the world" will only mention the members of the Inner Ring.
Unless, of course, you are so good that everyone knows you, in which case they will add you to that Wikipedia page. They will, however, not add the person who is the second-best coder in the world. The list of "top five coders in the world" will include you, plus four Inner Ring members.
So the second claim is kinda yes, kinda no -- yes, you can reach the social goods exclusively through sincerely pursuing the inherent goods, but you must work twice as hard.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-01-15T03:18:20.315Z · LW(p) · GW(p)
I'm torn about this one. On the one hand, it's basically a linkpost; Katja adds some useful commentary but it's not nearly as important/valuable as the quotes from Lewis IMO. On the other hand, the things Lewis said really need to be heard by most people at some point in their life, and especially by anyone interested in rationality, and Katja did LessWrong a service by noticing this & sharing it with the community. I tentatively recommend inclusion.
The comments have some good discussion too.
comment by neotoky01 · 2021-01-21T05:49:29.986Z · LW(p) · GW(p)
There is a distinction between joining a group for the sake of joining a group and acquiring status, and joining a group because it offers you companionship, friendly competition, and entertainment. The feeling of status and of being a high-ranking person is a good feeling; most people feel this way. I don't think the question is whether this feeling is good or bad, or whether we should feel this way at all; it's a question of time. How much time will it take to acquire that status? Is there a better way you can invest your time? If joining an in-group gives you the highest return on your invested time, accounting for all the risks (like being spontaneously ejected), then go for it. It's up to each individual to decide which set of actions has the highest "emotional" return, and that really depends on their unique personal history and genetics.
comment by remizidae · 2021-01-21T05:13:22.549Z · LW(p) · GW(p)
If I play a zero sum game and win, that’s good for me, and not bad for the world as a whole. I don’t care about what God or an ideal observer would think, since there is no such thing. This is one way in which Lewis’s values are dramatically different from those of an atheist.
The only question that matters to me is whether seeking to get into the inner ring will make me happy or not. I see Lewis says it would not make me happy, but I don’t find his reasons really convincing (they seem to be a priori rather than drawn from experience).
↑ comment by Dirichlet-to-Neumann · 2021-01-21T11:20:13.265Z · LW(p) · GW(p)
Competing in zero sum games rather than looking for positive sum games to play is not good for the world (and probably not good for you either on average, unless you have reason to think you will be better than average at this).