The Seeker’s Game – Vignettes from the Bay
post by Yulia · 2023-07-09
Introduction
Last year, one conversation left a lasting impression on me. A friend remarked on the challenges of navigating "corrupting forces" in the Bay Area. Intrigued by this remark, I decided to investigate the state of affairs in the Bay if I got the chance. So when the opportunity to visit Berkeley arose in February 2023, I prepared a set of interview questions:
- Can you share an experience where you had difficulty voicing your opinion?
- What topics are hard to think clearly about due to social pressures and factors related to your EA community or EA in general?
- Is there anything about your EA community that makes you feel alienated?
- What is your attitude towards dominant narratives in Berkeley? [1]
In the end, I formally interviewed fewer than ten people and had more casual conversations about these topics with around 30 people. Most were involved in AI alignment to some extent. The content of this collection of vignettes draws from the experiences of around ten people. [2] I chose the content for the vignettes for one of two reasons: potential representativeness or potential extraordinariness. I hypothesized that some experiences accurately represent the wider EA Berkeley community. Others I included because they surprised me, and I wanted to find out how common they are. All individuals gave me their consent to post the vignettes in their current form.
How did I arrive at these vignettes? It was a four-step process. First, I conducted the interviews while jotting down notes. For the more casual conversations, I took notes afterwards. The second step involved transcribing these notes into write-ups. After that, I obscured any identifying details to ensure the anonymity of the interviewees. Lastly, I converted the write-ups into vignettes by condensing them into narratives and homing in on key points while trying to retain the essence of what was said.
I tried to reduce artistic liberties by asking participants to give feedback on how close the vignettes were to the spirit of what they meant (or think they meant at the time). It is worth noting that I bridged some gaps with my own interpretations of the conversations, relying on the participants to point out inaccuracies. By doing that, I might have anchored their responses. Moreover, people provided different levels of feedback. Some shared thorough, detailed reviews pointing out many imprecisions and misconceptions. Sometimes, that process spanned multiple feedback cycles. Other participants gave minimal commentary.
Because I am publishing the vignettes months after the conversations and interviews, I want to include how attitudes have changed in the intervening period. I generalised the attitudes into the following categories:
- Withdrawn endorsement (Status: The interviewee endorsed the following content during the interview but no longer endorses it at the time of publication.)
- Weakened endorsement (Status: The interviewee has weakened their endorsement of the following content since the interview.)
- Unchanged endorsement (Status: The interviewee maintains their endorsement of the following content, which has remained unchanged since the interview.)
- Strengthened endorsement (Status: The interviewee has strengthened their endorsement of the following content since the interview.)
I clustered the vignettes according to themes so it's easier to navigate them. Classifying them was difficult because many vignettes addressed overlapping topics. In particular, Self-Censorship in Social Contexts and Self-Censorship in Professional Contexts seem to intersect in intricate ways. I might reclassify the vignettes in the future.
What remains uncertain is how representative these vignettes are, and I am keen to find out. So if you found this work valuable and wish to support it further, consider participating in the accompanying poll (particularly if you're involved in the Berkeley community!). If you are interested in taking part in potential future blog posts, please complete this form. And feel free to send me a direct message on LessWrong if you have suggestions for other relevant spaces to investigate!
I thank all participants who have shared their stories and experiences with me! Your willingness to engage in open dialogue, provide reviews, and answer all my questions has been instrumental in creating this vignette collection. I am particularly grateful for your patience throughout the feedback cycles. Thank you for your trust and collaboration!
Personal Insecurities and Alienation
Dumb Questions
Status: The interviewee has weakened their endorsement of the following content since the interview.
Hamlet is a young AI safety researcher. He vigilantly avoids asking stupid questions. His reason? The rampant elitism he perceives in the Berkeley community. Of course, that’s just the flip side of a great thing. A space like this, with tons of driven and talented people, allows for advanced intellectual conversations and remarkable collaborations. The filters maintaining the quality will exclude some who want to be there. Others, even when invited in, will be intimidated. Hamlet is one of those who get intimidated sometimes. Being associated with stupid questions is very uncomfortable for him. His primary concern lies in the second-order consequences. If others perceived him as dumb, they would dismiss him. He came to Berkeley to find mentors and collaborators, not to be ignored! So better lie low, occasionally drop sophisticated remarks – and don't be caught asking dumb questions.
The Wrong Kind of Weird
Status: The interviewee maintains their endorsement of the following content, which remains unchanged since the interview.
Bay Area rationalists have a penchant for a specific brand of weird ideas – at least in Adrian's experience. Sadly, he is often intrigued by topics whose weirdness rationalists do not appreciate. Adrian enjoys thinking about moral realism and consciousness, for example. What kind of structures could be conscious? Is there something like a global consciousness? After collecting weird looks at parties and being dismissed for bringing those topics up, Adrian has become more reserved. Now he has less trouble fitting in. Another person like him would probably have difficulty gaining acceptance and entry into the community. And navigating the differences in thinking style still poses a challenge.
Adrian develops his ideas through free association and creative, open exploration. Stream-of-consciousness writing works well for him. He likes using parables, metaphors, examples and anecdotes to express meaning. Because his approach does not conform to established epistemic norms on LessWrong, Adrian feels pressure to cloak and obscure how he develops his ideas. One way in which this manifests is his two-step writing process. When Adrian works on LessWrong posts, he first develops ideas through his free-form approach. After that, he heavily edits the structure of the text, adding citations, rationalisations and legible arguments before posting it. If he doesn’t "translate" his writing, rationalists might simply dismiss what he has to say.
Attitudes towards Doom
An Oasis of Sanity
Status: The interviewee maintains their endorsement of the following content, which has remained unchanged since the interview.
Romeo is a talented graduate excited to contribute to AI alignment. He comes to Berkeley to participate in an AI safety program. His first week in town feels like finally coming home. For years, Romeo had to deal with people confused by intellectualized tribalism or absorbed in trivial philosophical navel-gazing – individuals who didn't seem to care whether humanity thrives or dies. Now, finally, Romeo is surrounded by people who actually give a damn. People who speak his language. People who take the possibility of the world ending seriously. In a world of irrationality and madness, Romeo has found an oasis of sanity!
Flinching Away
Status: The interviewee maintains their endorsement of the following content, which remains unchanged since the interview.
Antony's mind becomes clouded when contemplating scenarios with very high p(doom). He returns to them over and over – but again and again, he flinches away. Imagining concrete scenarios of that kind is oppressive, after all. But that's not all. A fear is lingering in the back of his mind. What kind of mindset would set in if he thought through the scenarios? Would he even want to live with such a mentality?
Clear-Eyed Despair
Status: The interviewee maintains their endorsement of the following content, which has remained unchanged since the interview.
Ajax wants to have well-calibrated ideas and intuitions around AI alignment. He wants to work hard – and he doesn't believe he can do that if he lets wishful thinking cloud his mind. So he tries to viscerally understand, no, feel what AI doom would be like. How would he feel if all the people he loved were killed? If all of civilization were destroyed? Ajax wants to let go of all delusional positivity. He wants to tap into the depths of despair so his mind can stay clear and focused.
Attitudes towards the EA Sphere
Hard to Own Up, Hard to Criticise
Status: The interviewee endorsed the following content during the interview but no longer endorses it at the time of publication.
“The EA community building program is overrated – and quite possibly net-negative” – is what Aaron would like to say when the topic comes up. He doesn’t, though. He did not always see it that way. Aaron used to trust CEA's approach to community building. Before launching his career in AI alignment, he worked on community building himself, together with his friend Timon.
Aaron’s sense of his job was informed by reading between the lines of CEA material. Discussion templates came with the conclusions panel filled in? That probably meant he should set up engaging, seemingly open-ended conversations – while making sure that the group reached the predetermined conclusions. After all, new members should not be trusted with thinking for themselves. CEA had figured things out, and other EAs should be kept in line.
Aaron understood and agreed with many of the EA takes. Others, he tribally supported – even though he didn't fully understand them. He put subtle pressure on members of his local group to support them as well. As time went by, Aaron became more sceptical. Isn't it much more critical to cultivate truth-seeking in people who want to do good than to force-feed them "correct" ideas? Now Aaron regrets manipulating conversations to reach "sanctioned" conclusions. He regrets pressuring others to fall in line.
Even though Aaron thinks that CEA's approach nudges community organisers into some poor behaviours, he feels bad about criticising a system he used to be part of. That would mean coming clean about his mistakes to his friends from the local group. But mostly, Aaron keeps quiet because he doesn’t want to trash his former collaborator’s work. Timon is a good person, and he tries hard.
Is EA bad, actually?
Status: The interviewee maintains their endorsement of the following content, which remains unchanged since the interview.
Fang would like to unflinchingly face the question: “Is EA bad, actually?” Of course, this broad concern drags many children behind it: Has AI alignment been net-negative so far? How should we integrate (sceptical) outside perspectives? What are sketchy assumptions that people make about AI? Thinking about whether the movement is going in the right direction is difficult for Fang. His map was built on writing that, to some extent, has fossilised into AI alignment orthodoxy. It’s hard for him to look at developments in alignment and the community with fresh eyes.
And then there's a further dimension to these questions. Fang has bought into the movement, socially and professionally. Most of his friends and colleagues are part of EA. If he concluded that he was in the right spot doing his best, nothing would change. But what would happen if he realised that his calling lay elsewhere, in a pursuit unrelated to the EA community? Fang fears that his current social circle would only stay superficially involved with him.
Support among EAs only goes so far. They’ll be there for you, stand by you, and some of them will be close friends – as long as you’re part of the community. Or work for a big player in that space. At least, that’s what Fang fears. The idea of doing something else is equivalent to starting over. Fang doesn’t like the possibility of such a conclusion. And he doesn't like that he flinches away from such a possibility. He wants to be more resilient, so he works on finding friends outside EA. Until then, it will probably be hard to think about the question closely.
The Dangers of EA
Status: The interviewee has strengthened their endorsement of the following content since the interview.
Is EA a dangerous movement? Maybe it isn’t. Malcome feels strongly, however, that he should at least ask that question. People around him seem to assume that EA is, by default, built to be altruistic. But should we really expect a group of people with firm ideologies to change the world positively? What are the consequences of a bunch of young, inexperienced people trying to gain influence and power? Should they be the ones who shape the future? These questions are critical, yet they only stir up confusion in his mind.
The potential implications are disturbing. Still, it’s hard to investigate these questions – they might be insurmountably difficult to answer. And it probably doesn't help that Malcome is invested in the movement. He tries not to think of EA as part of his identity. But at the end of the day, he must acknowledge that his identity is, to some extent, attached to EA. If Malcome gets to the bottom of this, he might want to distance himself from the ideological side of the movement.
Status Dynamics and Social Signalling
Value and Proxies
Status: The interviewee has weakened their endorsement of the content since the interview. At the time of publication, they don't consider the incident described below as representative of the EA Berkeley community.
Ulysses believes that status hierarchies are not inherently problematic. Basing them on people's usefulness for a shared mission can benefit coordination. Problems can arise, however, when people try to exploit those systems.
Ulysses has witnessed such manipulative tendencies firsthand. Once, while at a party, he was immersed in a discussion with an acquaintance named Kent. Amid their conversation, King Lear, a high-status member of the Berkeley EA community, approached to greet Kent. The two started chatting. Soon, their conversation revolved around acquiring status. That exchange struck Ulysses. It was incredible how casually those influential people talked about applying social tricks to gain power. They underscored the disparity between proxies for doing good and the actual act of doing good. They discussed how to inflate and manipulate these proxies to enhance the appearance of social contribution – without actually increasing it.
Ulysses felt increasingly uneasy as he listened to this exchange. He would have expected people familiar with prisoner's dilemmas and Goodhart's law to be less drawn to exploiting proxies. It felt like they were defecting in a multi-person prisoner’s dilemma! Despite his discomfort, he refrained from interjecting. Speaking up would have felt awkward and out of place since he wasn't directly involved in the discussion. He was but an incidental observer, after all. And what good would it have done to criticise? People like King Lear probably assume that applying such tricks is justified, believing that, as EAs, they will use that underhandedly harvested power for good.
Distrust
Status: The interviewee maintains their endorsement of the following content, which remains unchanged since the interview.
Richard worries that professional decisions and personal relations are highly coupled in EA. The social sphere seems to intersect with object-level conversations and decisions. That observation affects how he interprets interactions with fellow community members. For example, Richard doesn’t always know if he should take people’s word on their life decisions at face value. How much of what they say is social signalling? When he asks about their decisions, people bring up object-level arguments and seem to undervalue or downplay social reasons. That makes it confusing and difficult to navigate social interactions!
The Seeker's Game
Status: The interviewee maintains their endorsement of the following content, which has remained unchanged since the interview.
By chance, Valentine runs into Rosaline, an established professional in Berkeley. They chat for a bit. Rosaline seems to have a favourable first impression of Valentine and invites him to an after-hours office party. Valentine wasn’t even aware that these kinds of parties existed! He decides to attend – more out of curiosity than anything else. He doesn't have high expectations because he doesn't know who will be there. Surprisingly, there are a few high-profile AI alignment researchers in attendance. Valentine gets to talk to some of them. The conversations prove to be engaging and enlightening! They provide perspectives that might help Valentine with his research. Would he have heard of the party if he hadn’t randomly talked to Rosaline? In hindsight, that interaction reminds him of a computer game. It's similar to how interactions with other player characters can unlock new game events and help with game progress. Maybe he should talk to people like Rosaline more often.
Belonging vs Competition
Status: The interviewee maintains their endorsement of the following content, which remains unchanged since the interview.
Brutus is a young AI alignment researcher. He struggles with resolving the tension between social belonging and competition in the Berkeley community. Brutus is firmly rooted in this social circle. At the same time, he competes with fellow community members for limited resources and opportunities. This inner conflict manifests in Brutus being sensitive about not being invited to parties. The sense of competition makes it tempting to build up his social prestige. But that is not something he wants to focus on! He wants to do good alignment work. Brutus feels paranoid about unwittingly steering towards status and prestige – even if he doesn’t necessarily endorse it. Adjacent to that is a nebulous, ill-defined fear of being left behind. What will happen if he opts out while others keep working on their prestige? Will that give them an edge that Brutus can’t compensate for?
Self-Censorship in Social Contexts
Social Justice Conversations
Status: The interviewee has weakened their endorsement of the content since the interview.
Charles hesitates to participate in social justice conversations. Even when he knows specifics about incidents on the scale of sexual assault cases, he feels uncomfortable bringing them up. Determining in advance who will be reasonable in a conversation and who will not is just too difficult. There are some fanatics on the social justice side, after all. What if Charles expressed himself clumsily? That's not unlikely, since he doesn't know much about the topic, hasn't thought deeply about it and lacks the subtle tact required for such conversations. If he said something silly once, would people forgive him? Maybe not. Maybe they would cancel him – or at least try to – if he said the wrong thing. The concern is not particularly intense. Most likely, nothing would happen. However, the potential downsides are still not worth it.
Delicate Topics
Status: The interviewee maintains their endorsement of the following content, which has remained unchanged since the interview.
Some topics simply ooze bad vibes. Jupiter doesn't bring them up. Sure, there might be reasons for disturbing the peace. Discussing EA's role in producing SBF-type people could be significant. It could be crucial to talk about the links between social, romantic and professional relationships in EA and how they could give rise to nepotistic behaviour. Is it worth the price, though? Why should Jupiter be the one to ruin the mood? These conversations are downers and simply not fun – not even for him. And how awkward would it be if he accidentally brought this up to someone involved in these issues? Not to mention that Jupiter might simply be overlooking some obvious arguments and end up looking foolish.
He could raise these controversial topics publicly and risk awkward, uncomfortable conversations – and heck, maybe even end up accidentally accusing someone! Or he could stick to safe and non-polemical subjects. From Jupiter's perspective, the choice is obvious.
To Gossip or Not to Gossip
Status: The interviewee maintains their endorsement of the following content, which remains unchanged since the interview.
Hastings is confused. He unmistakably recognizes the need to let people know about bad actors. It can be an integral part of keeping the community healthy. However, there is never a good time for it. Conversations like that just do not come up naturally! Hastings doesn't feel comfortable gossiping about random people to friends. And he doesn't feel comfortable gossiping with people he doesn't consider a friend. It also feels inappropriate – don't people deserve some privacy? Still, Hastings tried. Post-FTX, unprompted, he pointed out individuals he perceived as bad actors. That was a tiring, awkward and uncomfortable ordeal. In the end, he wasn't sure if he did any good.
It’s just as bad when unfamiliar people approach Hastings to gossip. A remote acquaintance once warned him about dating a person he adored. They are of bad character and a poor match for you, the acquaintance said. Then they went ahead and presented evidence supporting their claim. That did nothing but leave Hastings bewildered, confused and vaguely annoyed. No more indulging in gossip for Hastings, that's for sure!
I won’t start a Culture War
Status: The interviewee has strengthened their endorsement of the following content since the interview.
With some regularity, discussions about visions for a future to strive for come up in Berkeley. People seem to agree that it’s a transhumanist future. Well, William rejects this idea. He opposes the implicit plans of what humanity should do after solving X-risks. It’s irritating how this cultural marinade soaks the discourse, slightly tinting the taste of beliefs and decisions. William firmly believes that Berkeley’s shared futuristic tech mythology is not an appropriate foundation for these conversations!
One of the reasons he doesn't challenge these ideas loudly is that he can't offer alternatives on how to ground these discussions. Moreover, he has a lingering frustration about not being taken seriously. But most importantly, he fears starting a culture war about what the future should look like. William is not sure how likely it is that challenging the shared vision would actually spark one. Nevertheless, the significance of these questions pales in comparison to the imminent threats humanity faces. William doesn't want to risk distracting the community from the most urgent problems.
High Variance
Status: The interviewee has strengthened their endorsement of the content since the interview.
Othello has a friend, Cassio, who works at Fancy AI Lab (FAIL). Othello suspects that the potential outcome of his friend's work has very high variance – ranging from insignificant to civilization-endingly bad. He does not press the matter, however. Cassio probably believes that his work is helping. It would be unbearably awkward to tell someone you're friendly with, "Hey, I think you're destroying the world!" Othello does not want to go that far when he is uncertain that his assessment is correct! Who knows, maybe it would even endanger the friendship.
Self-Censorship in Professional Contexts
“Declaring Loyalty”
Status: The interviewee endorsed the following content during the interview but no longer endorses it at the time of publication.
Brabantio is a young, not yet established AI alignment researcher. He has a hard time navigating conversations around AI policy. Many people have strong opinions on AI policy that they think are obvious. Brabantio feels that sharing his opinion on the topic would be akin to declaring loyalty to a whole cluster of ideas, part of which he doesn’t agree with or isn’t even aware of. He already has enough to think about – he doesn’t want to add navigating community politics to that! The clever thing is to keep quiet.
The Temptation of Legibility
Status: The interviewee endorsed the following content during the interview but no longer endorses it at the time of publication.
Cleon suspects that legible alignment research is over-prioritised. Of course, legible work has its perks. It can improve research quality, and it feels more motivating to work on a problem when you can track your progress. But there are costs to that. Focusing on legibility might sap a researcher's curiosity. What Cleon is truly afraid of, however, is legible research diverting attention from more crucial, less tangible work. Cleon observes that there is additional pressure to make your research legible. With growing distrust in the community, it becomes vital that other people can easily evaluate the quality of the research. Legible research is also easier to communicate to outside groups like people in finance, academia and capability labs.
Cleon is confused and unsure about the extent of the problem. Is legible research truly overly prioritised? He might be wrong about this. He would like to think more publicly, openly and collaboratively about it.
He doesn’t bring the topic up, though. Not often, anyway. A vision in his mind holds him back. He imagines bringing up the topic – and seeing his conversation partner's smirk. “Very interesting that you want to do less legible research”, they would say as if to imply that Cleon wants to hide something. What would he do if someone wanted to discredit him like that? Cleon doesn't want to open himself up to such political attacks! So he keeps his confusion to himself.
A Game of Diplomacy
Status: The interviewee has weakened their endorsement of the content since the interview.
Thurio used to work as an alignment researcher for an AI organisation. When people inquire about his former employer, he hesitates to share his genuine opinion. Before he speaks, he usually activates a diplomatic filter. That way, “management was terrible” transforms into mild commentary on subpar leadership.
Why is Thurio holding back? For one, he doesn’t want people asking for advice to walk away with the impression that his former employer is worse than is the case. He doesn’t want to exaggerate the badness. In addition, Thurio doesn’t want people who still work there to learn about him “bad-mouthing” the organisation. He is sceptical that he could find high-impact opportunities outside of EA. AI alignment is a small world. Maintaining civil relations is crucial, even if not everything went splendidly in the past. Thurio doesn’t want to burn bridges – especially in a tiny village.
Tedious and Tiring
Status: The interviewee has weakened their endorsement of the content since the interview.
Seyton is strongly averse to bringing up opinions about the quality of other AI alignment research. It would take up too much bandwidth and energy, and the resulting fights would likely go nowhere. People would just get defensive and drag him into long, tedious debates – especially because the critique is often not anonymous or double-blinded. That’s why he didn't participate in the 2021 LW year-end review. Of course, Seyton considered using a pseudonym to share his critiques. But it was never high enough on his priority list. After all, it’s also not clear if people would even read that. And it takes a while to build up credibility and traction for a pseudonym.
Team Player
Status: The interviewee has weakened their endorsement of the content since the interview, particularly concerning friends. They maintain their endorsement with respect to more distant acquaintances.
When Anthropic received cloud compute from Google, Taurus was sceptical. Was that investment a good thing for the world? What kind of promises did Anthropic make to the investors? However, when talking with his friends who work at the lab, Taurus refrains from voicing his concerns. Otherwise, they might start doubting whether he is on the same team. In addition, he doesn't want his friends to feel like he thinks they are working for a bad company. If he were sure that was the case, he wouldn't hesitate to tell them! And who knows, Taurus might apply for a job at Anthropic in the coming years. He doesn't want to express concerns until he has a foot in the door. Why jeopardise his chances with a potential employer?
Preformal Ideas
Status: The interviewee maintains their endorsement of the following content, which remains unchanged since the interview.
Cato feels that people in rationalist and EA communities discourage voicing preformal ideas. They are strict, imposing a high bar on expression. Some budding ideas undergo destructive scrutiny – the first instinct seems to be to tear down half-baked concepts before they can mature. Finding someone who wants to collaborate on building up a not-yet-fleshed-out idea can be difficult. Sometimes Cato is lucky to encounter a person to think with. Often, he refrains from expressing thoughts that he has not fully developed yet. He keeps them to himself, brooding over them, until he thinks it would be challenging to dismiss them immediately.
[1] I also asked questions about what would make it easier to bring up opinions that are hard to voice. I concluded that I couldn't easily represent the content in my chosen format. Maybe I'll publish those results in another post.
[2] The appropriate choice of pronouns initially left me in a state of confusion. Most of my participants identify as male, while a minority identify as female. At first, I exclusively used gender-neutral pronouns to prevent inadvertently outing the women. Later, I decided on 'he/him' pronouns in all vignettes where gender dynamics seemed insignificant. I didn't want to obfuscate the skewed gender ratio, after all. The entire matter of pronoun selection still leaves me somewhat confused.
Comments
comment by LawrenceC (LawChan) · 2023-07-12
Thanks for taking the time to do the interviews and write this up! I think ethnographic studies (and qualitative research in general) are pretty neglected in this community, and I'm glad people are doing more of it these days.
I think this piece captures a lot of real but concerning dynamics. For example, I feel like I've personally seen, wondered about, or experienced things that are similar to a decent number of these stories, in particular:
- Clear-Eyed Despair
- Is EA bad, actually?/The Dangers of EA
- Values and Proxies
- To Gossip or Not to Gossip
- High Variance
- Team Player
And I've heard of stories similar to most of the other anecdotes from other people as well.
(As an aside, social dynamics like these are a big part of why I tend to think of myself as EA-adjacent and not a "real" EA.)
((I will caveat that I think there are clearly a lot of positive things that go on in the community and lots of healthier-than-average dynamics as well, and it's easy to be lost in negativity and lose sight of the big picture. This doesn't take away from the fact that the negative dynamics exist, of course.))
Of these, the ones I worry the most about are the story described in Values and Proxies (that is, are explicit conversations about status seeking making things worse?), and the conflict captured in To Gossip or Not to Gossip/Delicate Topics and Distrust (how much should we rely on whisper networks/gossip/other informal social accountability mechanisms, when relying on them too much can create bad dynamics in itself?). Unfortunately, I don't have super strong, well thought-out takes here.
In terms of status dynamics, I do think that they're real, but that explicitly calling attention to them can indeed make them worse. (Interestingly, I'm pretty sure that explicitly pursuing status is generally seen as low status and bad?) I think my current attitude is that "status" as a term is overrated and we're having too many explicit conversations about it, which in turn gives (new/insecure) people plenty of fodder to injure themselves on.
In terms of the latter problem, I could imagine it'd be quite distressing to hear negative feedback about people you admire, want to be friends with, or are attracted to, especially if said negative feedback is from professional experience when your primary interaction with the person is personal or vice versa. This makes it pretty awkward to have the "actually, X person is bad" conversations. I've personally tried to address this by holding a strong line between personal gossip and my professional attitude toward various people, and by being charitable/giving people the benefit of the doubt. However, I've definitely been burned by this in the past, and I do really understand the value in a lot of the gossip/whisper networks–style conversations that happen.
Since you've spent a bunch of time thinking about and writing about these problems, I'm curious why you think a lot of these negative dynamics exist, and what we can do to fix them?
If I had to guess, I'd say the primary reasons are something like:
- The problem is important and we need people, but not you. In many settings, there's definitely an explicit message of "why can't we find competent people to work on all these important problems?", but in adjacent settings, there's a feeling of "we have so many smart young people, why do we not have things to do" and "if they really needed people, why did they reject me?". For example, I think I get the former feeling whenever an AIS org I know tries to do hiring, while I get the latter when I talk to junior researchers (especially SERI MATS fellows, independent researchers, or undergrads). It's definitely pretty rough to be told "this is the most important problem in the world" and then immediately get rejected. And I think this is exacerbated a lot by pre-existing insecurities -- if you're relying on external validation to feel good about yourself, it's easy for one rejection (let alone a string of rejections) to feel like the end of the world.
- Mixing of personal and professional lives. That is, you get to/have to interact with the same people in both work and non-work contexts. For example, I think something like >80% of my friends right now are EA/EA-adjacent. I think many people talk about this and claim it's the primary problem. I agree that this is a real problem, but I don't think it's the only one, nor do I think it's easy to fix. Many people in EA work quite hard – e.g. I think many people I know work 50-60 hours a week, and several work way more than that, and when that happens it's just easier to maintain close relationships with people you work next to. And it's not zero cost to ignore (gossip about) people's behavior in their personal lives; such gossip can provide non-zero evidence about how likely they are to be e.g. reliable or well-intentioned in a professional setting. It's also worth noting that this is not a unique problem to EA/AIS; I've definitely been part of other communities where personal relationships are more important for professional success. In those communities, I've also felt a much stronger need to "be social" and go to plenty of social events I didn't enjoy to network.
- No (perceived) lines of retreat. I think many people feel (rightly or wrongly) like they don't have a choice; they're stuck in the community. Given that the community is still quite small somehow, this also means that they often have to run into the same people or issues that stress them out over and over. (E.g. realistically if you want to work on technical AI Safety, there's one big hub in the Bay Area, a smaller hub in the UK, and a few academic labs in New York or Boston.) Personally, I think this is the one that I feel most acutely -- people who know me IRL know that I occasionally joke about running away from the whole AI/AI Safety problem, but also that when I've tried to get away and e.g. return to my roots as a forecasting/psychology researcher, I find that I can't avoid working on AI Safety-adjacent issues (and I definitely can't get away from AI in general).
- Explicit status discussions. I'm personally very torn on this. I continue to think that thinking explicitly about status can be very damaging both personally and for the community. People are also very bad at judging their own status; my guess is there's a good chance that King Lear feels pretty status-insecure and doesn't realize his role in exacerbating bad dynamics by explicitly pursuing status as a senior researcher. But it's also not like "status" isn't tracking something real. As in the Seeker's Game story, it is the case that you get more opportunities if you're at the right parties, and many opportunities do come from (to put it flippantly) people thinking you're cool.
- Different social norms and general social awkwardness. I think this one is really, really underrated as an explanation. Many people I meet in the community are quite awkward and not super socially adept (and honestly, I often feel I'm in this category as well). At the same time, because the EA/AIS scene in the Bay Area has attracted people from many parts of the world, we end up with lots of miscommunication and small cultural clashes. For example, I think a lot of people I know in the community try to be very explicit and direct, but at the same time I know people from cultures where doing so is seen as a social faux pas. Combined with a decent amount of social awkwardness from the parties involved, this can lead to plenty of misunderstandings (e.g., does X hate me? why won't Y tell me what they think honestly?). Doubly so when people can't read/aren't familiar with/choose to ignore romantic signals from others.
I've been meaning to write more about this; maybe I will in the next few weeks?
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-07-10
After that, he heavily edits the structure of the text, adding citations, rationalisations and legible arguments before posting it.
This sounds like a really good thing, not a bad thing. I'm proud of a community which exerts social pressure for people to do this. Good job us!
↑ comment by MSRayne · 2023-07-10
No, it's called "lying". The text that he produces as a result of these social pressures does not reflect his actual thought processes. You can't judge a belief on the basis of a bunch of ex post facto arguments people make up to rationalize it - the method by which they came to hold the belief is much more informative, and for those of us with very roundabout styles of thinking (such as myself) being forced into this self-censorship and modification of our thought patterns into something "coherent" and easy to read actually destroys all the evidence of how we actually came to the idea, and thus destroys much of your ability to effectively examine its validity!
↑ comment by SarahNibs (GuySrinivasan) · 2023-07-10
Thinking and coming to good ideas is one thing.
Communicating a good idea is another thing.
Communicating how you came to an idea you think is good is a third thing.
All three are great, none of them are lying, and skipping the "communicating a good idea" one in hopes that you'll get it for free when you communicate how you came to the idea is worse (but easier!) than also, separately, figuring out how to communicate the good idea.
(Here "communicate" refers to whatever gets the idea from your head into someone else's, and, for instance, someone beginning to read a transcript of your roundabout thought patterns, bouncing off, and never having the idea cohere in their own heads counts as a failure to communicate.)
↑ comment by Richard_Ngo (ricraz) · 2023-07-10
Disagree. It's valuable to flag the causal process generating an idea, but it's also valuable to provide legible argumentation, because most people can't describe the factors which led them to their beliefs in sufficient detail to actually be compelling. Indeed, this is specifically why science works so well: people stopped arguing about intuitions, and started arguing about evidence. And the lack of this is why LW is so bad at arguing about AI risk: people are uninterested in generating legible evidence, and instead focus on presenting intuitions that are typically too fuzzy to examine or evaluate.
↑ comment by Erich_Grunewald · 2023-07-10
It's valuable to flag the causal process generating an idea, but it's also valuable to provide legible argumentation, because most people can't describe the factors which led them to their beliefs in sufficient detail to actually be compelling.
To add to that, trying to provide legible argumentation can also be good because it can convince you that your idea actually doesn't make sense, or doesn't make sense as stated, if that is indeed the case.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-07-10
If you've got a written description of the thought process by which you came to the idea, keep that! But the published version should be that plus supporting evidence like citations and logical reasoning describing how such a thing could have come to be the case. Simply don't destroy the evidence, and what you've got is pure improvement. If the non-rational hunch and analogy part seems hard to fit into the cited, polished product, then keep them as separate docs with links to each other.
comment by Elizabeth (pktechgirl) · 2023-07-10
Of course, that’s just the flip side of a great thing. A space like this, with tons of driven and talented people, allows for advanced intellectual conversations and remarkable collaborations.
For a while after moving to the Bay I really struggled with feelings of laziness and stupidity. This stopped after I went to an outgroup friend's wedding, where I was obviously the most ambitious person there by a mile, and at least tied for smartest. It clicked for me that I wasn't dumb or lazy; I had just selected for the smartest, most ambitious people who would tolerate me, and I'd done a good job. Ever since then I've been much calmer about status and social capital, and when I do stress out I see it as a social problem rather than a reflection of me as a person.
I didn't initially tell my friend about this, because it seemed arrogant and judgemental. A few years later it came up naturally, so I told him. His response: "oh yeah, ever since I met you," [which was before I got into rationality or EA, and when I look back feel like I was wandering pointlessly] "you were obviously the person I knew who was most likely to be remembered after you died [by the world at large]".
comment by Said Achmiz (SaidAchmiz) · 2023-08-21
Thank you very much for doing this. This is very impressive and useful anthropological work.
I am not a participant in the Bay Area rationalist community, but I wonder if there’s any way that the rest of us can support or aid further work in this regard? Funding? Technical support (web hosting, design, development)? Other?
comment by Yulia · 2023-07-10
For everyone who wanted to participate in the poll but didn't because it seemed like too much work – I updated it! Here's the updated version. It should be easier to answer now :)
comment by Chi Nguyen · 2023-07-10
Thanks for posting this! I really enjoyed the read.
Feedback on the accompanying poll: I was going to fill it out. Then I saw that I have to look up and list the titles I can (not) relate to instead of just being able to click "(strongly) relate/don't relate" on a long list of titles. (I think the relevant function for this in forms is "Matrix" or something.) And my reaction was "ugh, work". I think I might still fill it in but I'm much less likely to. If others feel the same, maybe you wanna change the poll?
↑ comment by Yulia · 2023-07-10
Thank you so much for the feedback! I think you're totally right. Here's the updated poll.
comment by Jan_Kulveit · 2023-07-11
Because his approach does not conform to established epistemic norms on LessWrong, Adrian feels pressure to cloak and obscure how he develops his ideas. One way in which this manifests is his two-step writing process. When Adrian works on LessWrong posts, he first develops ideas through his free-form approach. After that, he heavily edits the structure of the text, adding citations, rationalisations and legible arguments before posting it. If he doesn’t "translate" his writing, rationalists might simply dismiss what he has to say.
cf. Limits to Legibility; yes, strong norms/incentives for "legibility" have this negative impact.
comment by Said Achmiz (SaidAchmiz) · 2023-08-21
Rationalist “community” was a mistake.
comment by Review Bot · 2024-08-10
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by MSRayne · 2023-07-10
I feel the same as Adrian and Cato. I am very much the opposite of a rigorous thinker - in fact, I am probably not capable of rigor - and I would like to be the person who spews loads of interesting off the wall ideas for others to parse through and expand upon those which are useful. But that kind of role doesn't seem to exist here and I feel very intimidated even writing comments, much less actual posts - which is why I rarely do. The feeling that I have to put tremendous labor into making a Proper Essay full of citations and links to sequences and detailed arguments and so on - it's just too much work and not worth the effort for something I don't even know anyone will care about.
↑ comment by Erich_Grunewald · 2023-07-10
Have you considered writing (more) shortforms instead? If not, this comment is a modest nudge for you to consider doing so.
↑ comment by MSRayne · 2023-07-10
I'm never really sure what there's any point in saying. My main interests have nothing to do with AI alignment, which seems to be the primary thing people talk about here. And a lot of my thoughts require the already existing context of my previous thoughts. Honestly, it's difficult for me to communicate what's going on in my head to anyone.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-08-21
I would like to be the person who spews loads of interesting off the wall ideas for others to parse through and expand upon those which are useful
Such a role is useful only if a substantial proportion of those “off the wall ideas” turn out to be not just useful/correct/good, but also original. Otherwise it is useless. Weird ideas are all over the internet.
For example, take the Adrian vignette: he wants to discuss whether there’s “something like a global consciousness”. Well, first, that’s not a new idea. Second, the answer is “no”. Discussion complete. Does Adrian have anything new to say about this? (Does he know what has already been said on the matter, even?) If not, then his contribution is nil.