The Treacherous Path to Rationality
post by Jacob Falkovich (Jacobian) · 2020-10-09T15:34:17.490Z · LW · GW · 116 comments

Contents: Rats v. Plague · The Path · Alternatives to Reason · Underperformance Swamp · Sinkholes of Sneer · Strange Status and Scary Memes · Valley of Disintegration
Cross-posted, as always, from Putanumonit.
Rats v. Plague
The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.
We started discussing the virus and raising the alarm in private back in January. By late February, as American health officials were almost unanimously downplaying the threat, we wrote posts on taking the disease seriously, buying masks, and preparing for quarantine [LW · GW].
Throughout March, the CDC was telling people not to wear masks and not to get tested unless displaying symptoms. At the same time, Rationalists were already covering every relevant angle, from asymptomatic transmission [LW · GW] to the effect of viral load [LW · GW], to the credibility of the CDC [LW · GW] itself. As despair and confusion reigned everywhere into the summer, Rationalists built online dashboards modeling nationwide responses and personal activity risk to let both governments and individuals make informed decisions.
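To make “personal activity risk” concrete, here is a minimal toy calculation in the spirit of such dashboards; the formula and every number below are illustrative assumptions of mine, not the actual model behind microcovid.org or any other tool.

```python
# A toy activity-risk estimate in the spirit of dashboards like microcovid.org.
# The formula and all parameters are illustrative assumptions, not the actual
# model used by any of those tools.

prevalence = 0.005          # assumed share of people currently infectious
hourly_transmission = 0.06  # assumed chance per hour of infection from one
                            # infectious person at close range, unmasked

def activity_risk(people: int, hours: float, mask_factor: float = 1.0) -> float:
    """Crude probability of infection from one activity,
    treating each contact as an independent source of risk."""
    p_per_person = prevalence * hourly_transmission * hours * mask_factor
    return 1 - (1 - p_per_person) ** people

# Example: a two-hour, four-person indoor dinner, everyone masked.
print(f"{activity_risk(people=4, hours=2, mask_factor=0.25):.3%}")
```

The point of such a tool is not the exact numbers but the habit: turning “is this dinner safe?” into a quantity you can compare against a personal risk budget.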
This remarkable success did not go unnoticed. Before he threatened to doxx Scott Alexander and triggered a shitstorm, New York Times reporter Cade Metz interviewed me and other Rationalists mostly about how we were ahead of the curve on COVID and what others can learn from us. I told him that Rationality has a simple message: “people can use explicit reason to figure things out, but they rarely do”.
Rationalists have been working to promote the application of explicit reason, to “raise the sanity waterline” as it were, but with limited success. I wrote recently about success stories of rationalist improvement but I don’t think it inspired a rush to LessWrong. This post is in a way a response to my previous one. It’s about the obstacles preventing people from training and succeeding in the use of explicit reason, impediments I faced myself and saw others stumble over or turn back from. This post is a lot less sanguine about the sanity waterline’s prospects.
The Path
I recently chatted with Spencer Greenberg about teaching rationality. Spencer regularly publishes articles like 7 questions for deciding whether to trust your gut or 3 types of binary thinking you fall for. Reading him, you'd think that the main obstacle to pure reason ruling the land is a lack of intellectual listicles on ways to overcome bias.
But we’ve been developing written [? · GW] and in-person curricula for improving your ability to reason for more than a decade. Spencer’s work is contributing to those curricula, an important task. And yet, I don’t think that people’s main failure point is in procuring educational material.
I think that people don’t want to use explicit reason. And if they want to, they fail. And if they start succeeding, they’re punished. And if they push on, they get scared. And if they gather their courage, they hurt themselves. And if they make it to the other side, their lives enriched and empowered by reason, they will forget the hard path they walked and will wonder incredulously why everyone else doesn’t try using reason for themselves.
This post is about that hard path.
Alternatives to Reason
What do I mean by explicit reason? I don’t refer merely to “System 2”, the brain’s slow, sequential, analytical, fully conscious, and effortful mode of cognition. I refer to the informed application of this type of thinking. Gathering data with real effort to find out, crunching the numbers with a grasp of the math, modeling the world with testable predictions, reflection on your thinking with an awareness of biases. Reason requires good inputs and a lot of effort.
The two main alternatives to explicit reason are intuition and social cognition.
Intuition, sometimes referred to as “System 1”, is the way your brain produces fast and automatic answers that you can't explain. It's how you catch a ball in flight, or get a person's “vibe”. It's how you can tell at a glance the average length of a group of lines but not the sum of their lengths. It's what makes you fall for the laundry list of heuristics and biases that were the focus of LessWrong Rationality in the early days. Our intuition is shaped mostly by evolution and early childhood experiences.
Social cognition is the set of ideas, beliefs, and behaviors we employ to fit into, gain status in, or signal to groups of people. It’s often intuitive, but it also makes you ignore your intuition about line lengths and follow the crowd in conformity experiments. It’s often unconscious — the memes a person believes (or believes that they believe) for political expediency often just seem unquestionably true from the inside, even as they change and flow with the tides of group opinion.
Social cognition has been the main focus of Rationality in recent years, especially since the publication of The Elephant in the Brain. Social cognition is shaped by the people around you, the media you consume (especially when consumed with other people), the prevailing norms.
Rationalists got COVID right by using explicit reason. We thought probabilistically, and so took the pandemic seriously when it was merely possible, not yet certain. We did the math on exponential growth. We read research papers ourselves, trusting that science is a matter of legible knowledge and not the secret language of elevated experts in lab coats. We noticed that what is fashionable to say about COVID doesn’t track well with what is useful to model and predict COVID.
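As an illustration of “doing the math”, here is a minimal sketch of that exponential-growth arithmetic; the starting count and doubling time are illustrative assumptions, not figures from the post.

```python
# A minimal sketch of the back-of-the-envelope exponential-growth math.
# The starting case count and doubling time are illustrative assumptions.

cases = 100          # assumed confirmed cases today
doubling_days = 5    # assumed early-epidemic doubling time

for week in range(1, 9):
    projected = cases * 2 ** (7 * week / doubling_days)
    print(f"week {week}: ~{projected:,.0f} cases")
```

Eight weeks at a five-day doubling time turns a hundred cases into roughly a quarter of a million, which is the whole argument for acting while the counts still look small.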
On February 28th, famous nudger Cass Sunstein told everyone that the reason they’re “more scared about COVID than they have any reason to be” is the cognitive bias of probability neglect. He talked at length about university experiments with electric shocks and gambles, but neglected to calculate any actual probabilities regarding COVID.
While Sunstein was talking about the failures of intuition, he failed entirely due to social cognition. When the article was written, prepping for COVID was associated with low-status China-hating reactionaries. The social role of progressive academics writing in progressive media was to mock them, and the good professor obliged. In February people like Sunstein mocked people for worrying about COVID in general, in March they mocked them for buying masks, in April they mocked them for hydroxychloroquine, in May for going to the beach, in June for not wearing masks. When someone’s view of COVID is shaped mostly by how their tribe mocks the outgroup, that’s social cognition.
Underperformance Swamp
Intuition and social cognition are so commonly relied on because they often work. Simply doing what feels right is usually good enough in any domain you either trained for (like playing basketball) or evolved for (like recoiling from snakes). Doing what is normal and fashionable among your peers is good enough in any domain your culture has mastered over time (like cooking techniques). It's certainly good for your own social standing, which is often the main thing you care about.
Explicit rationality outperformed both on COVID because responding to a pandemic in the information age is a very unusual case. It’s novel and complex, long on available data and short on trustworthy analysis, abutting on many spheres of life without being adequately addressed by any one of them. In most other areas reason does not have such an inherent advantage.
Many Rationalists have a background in one of the few other domains where explicit reason outperforms, such as engineering or the exact sciences. This gives them some training in its application, training that most people lack. Schools keep talking about imparting “critical thinking skills” to all students but can scarcely point to much success. One wonders if they're really motivated to try — will a teacher really have an easier time with 30 individual critical thinkers than with a class of password-memorizers [LW · GW]?
Then there’s the fact that most people engaged enough to answer a LessWrong survey [LW · GW] score in the top percentile on IQ tests and the SAT. Quibble as you may with those tests, insofar as they measure anything at all they measure the ability to solve problems using explicit reason. And that ability varies very widely among people.
And so most people who are newly inspired to solve their problems with explicit reason fail. Doubly so since most problems people are motivated to solve are complicated and intractable to System 2 alone: making friends, losing weight, building careers, improving mental health, getting laid. And so the first step on the path to rationality is dealing with rationality’s initial failure to outperform the alternatives.
Sinkholes of Sneer
Whether someone gives up after their initial failure or perseveres to try again depends on many factors: their personality, context, social encouragement or discouragement. And society tends to be discouraging of people trying to reason things out for themselves.
As Zvi wrote [LW · GW], applying reason to a problem, even a simple thing such as doing more of what is already working, is an implicit accusation against everyone who didn’t try it. The mere attempt implies that you think those around you were too dumb to see a solution that required no gifts or revelations from higher authority, but mere thought.
The loudest sneers of discouragement come from those who tried reason for themselves, and failed, and gave up, and declared publicly that “reason” is a futile pursuit. Anyone who succeeds where they failed indicts not merely their intelligence but their courage.
Many years ago, Eliezer wrote about trying the Shangri-La diet [LW · GW], a strange method based on a novel theory of metabolic “set points” and flavor-calorie dissociation. Many previous casualties of fad diets scoffed at this attempt, not because they spotted a clear flaw in the Shangri-La theory, but because of Eliezer's sheer hubris in trying to outsmart dieting and lose weight without applying willpower.
Oh, you think you’re so much smarter? Well let me tell you…
A person who is just starting (and mostly failing) to apply explicit reason doesn't have confidence in their ability, and is very vulnerable to social pressure. They are likely to persevere only in a “safe space” where attempting rationality is strongly endorsed and everything else is devalued. In most normal communities the social pressure against it is simply too strong.
This, I think, is the main purpose of LessWrong and the Rationalist community, and of similar clubs throughout history and around the world. To outsiders it looks like a bunch of aspie nerds who severely undervalue tact, tradition, intuition, and politeness, building an awkward and exclusionary “ask culture [LW · GW]”. They're not entirely wrong. These norms are too skewed in favor of explicit reason to be ideal, and mature rationalists eventually shift to more “normie” norms with their friends. But the nerd norms are just skewed enough to push the aspiring rationalist to practice the craft of explicit reason, like a martial arts dojo [LW · GW].
Strange Status and Scary Memes
But not all is smooth sailing in the dojo, and the young rationalist must navigate strange status hierarchies and bewildering memeplexes. I’ve seen many people bounce off the Rationalist community over those two things.
On the status front, the rightful caliph of rationalists is Eliezer Yudkowsky, widely perceived outside the community to be brash, arrogant, and lacking charisma. Despite the fact of his caliphdom, arguing publicly with Eliezer is one of the highest-status things a rationalist can do, while merely citing him as an authority is disrespected.
People like Scott Alexander or Gwern Branwen are likewise admired despite many people not even knowing what they look like. Attributes that form the basis of many status hierarchies are heavily discounted: wealth, social grace, credentials, beauty, number of personal friends, physical shape, humor, adherence to a particular ideology. Instead, respect often flows from disreputable hobbies such as blogging.
I think that people often don’t realize that their discomfort with rationalists comes down to this. Every person cares deeply and instinctively about respect and their standing in a community. They are distressed by status hierarchies they don’t know how to navigate.
And if that wasn’t enough, rationalists believe some really strange things. The sentence “AI may kill all humans in the next decade, but we could live forever if we outsmart it — or freeze our brains” is enough to send most people packing.
But even less outlandish ideas cause trouble. The creator of rationality's most famous infohazard observed that any idea can be an infohazard to someone who derives utility or status from lying about it. Any idea can be hazardous to someone who lacks a solid epistemology to integrate it with.
In June a young woman filled out my hangout form, curious to learn more about rationality. She’s bright, scrupulously honest, and takes ideas very seriously, motivated to figure out how the world really works so that she can make it better. We spent hours and hours discussing every topic under the sun. I really liked her, and saw much to admire.
And then, three months later, she told me that she doesn’t want to spend time with me or any rationalists anymore because she picked up from us beliefs that cause her serious distress and anxiety.
This made me very sad, and also perplexed, since the specific ideas she mentioned seem quite benign to me. One is that IQ is real, in the sense that people differ in cognitive potential in a way that is hard to change as adults and that affects their potential to succeed in certain fields.
Another is that most discourse in politics and the culture war can be better understood as signaling, a way for people to gain acceptance and status in various tribes, than as behavior directly driven by an ideology. Hypocrisy is not an unusually damning charge, but the human default.
To me, these beliefs are entirely compatible with a normal life, a normal job, a wife, two guinea pigs, and many non-rationalist friends. At most, they make me stay away from pursuing cutting-edge academic mathematics (since I'm not smart enough) and from engaging in political flame wars on Facebook (since I'm smart enough). Most rationalists believe these to some extent, and we don't find them particularly remarkable.
But my friend found these ideas destabilizing to her self-esteem, her conception of her friends and communities, even her basic values. It’s as if they knocked out the ideological scaffolding of her personal life and replaced it with something strange and unreliable and ominous. I worried that my friend shot right past the long path of rationality and into the valley of disintegration.
Valley of Disintegration
It has been observed [LW · GW] that some young people appear to get worse at living and at thinking straight soon after learning about rationality, biases, etc. We call it the valley of bad rationality.
I think that the root cause of this downturn is people losing touch entirely with their intuition and social cognition, replacing both with an attempt to make or justify every single decision by explicit reasoning. This may come from being overconfident in one's reasoning ability after a few early successes, or from anger at all the unreasoned dogma and superstition one has to unlearn.
A common symptom of the valley is bucket errors [? · GW], in which beliefs that don't necessarily imply one another become entangled. Bucket errors can cause extreme distress or make you flinch away from entire topics to protect yourself [LW · GW]. I think this may have happened to my young friend.
My friend valued her job, and her politically progressive friends, and people in general, and making the world a better place. These may have become entangled, for example by thinking that she values her friends because their political activism is rapidly improving the world, or that she cares about people in general because they each have the potential to save the planet if they worked hard. Coming face to face with the ideas of innate ability and politics-as-signaling while holding on to these bucket errors could have resulted in a sense that her job is useless, that most people are useless, and that her friends are evil. Since those things are unthinkable, she flinched away.
Of course, one can find good explicit reasons to work hard at your job, socialize with your friends, and value each human as an individual, reasons that have little to do with grand scale world-improvement. But while this is useful to think about, it often just ends up pushing bucket errors into other dark corners of your epistemology.
People just like their friends. It simply feels right. It's what everyone does. The way out of the valley is not to reject this impulse for lack of journal citations but to integrate your deep and sophisticated friend-liking mental machinery with your explicit rationality and everything else.
The way to progress in rationality is not to use explicit reason to brute-force every problem but to use it to integrate all of your mental faculties: intuition, social cognition, language sense, embodied cognition, trusted authorities, visual processing… The place to start is with the ways of thinking that served you well before you stumbled onto a rationalist blog or some other gateway into a method and community of explicit reasoners.
This idea commonly goes by metarationality, although it’s certainly present in the original Sequences [? · GW] as well. It’s a good description for what the Center for Applied Rationality teaches — here’s an excellent post [LW · GW] by one of CFAR’s founders about the valley and the (meta)rational way out.
Metarationality is a topic for more than two paragraphs, perhaps for an entire lifetime. I have risen out of the valley — my life is demonstrably better than before I discovered LessWrong — and the metarationalist climb is the path I see ahead of me.
And behind me, I see all of this.
So what to make of this tortuous path? If you're reading this you are quite likely already on it, trying to figure out how to figure things out and dealing with the obstacles and frustrations. If you're set on the goal, this post may offer some advice to help you on your way: try again after the early failures, ignore the sneers, find a community with good norms, and don't let the memes scare you — it all adds up to normalcy in the end. Let reason be the instrument that sharpens your other instruments, not the only tool in your arsenal.
But the difficulty of the way is mostly one of motivation, not lack of instruction. Someone not inspired to rationality won’t become so by reading about the discouragement along the way.
And that’s OK.
People’s distaste for explicit reason is not a modern invention, and yet our species is doing OK and getting along. If the average person uses explicit reason only 1% of the time, the metarationalist learns that she may up that number to 3% or 5%, not 90%. Rationality doesn’t make one a member of a different species, or superior at all tasks.
The rationalists pwned COVID, and this may certainly inspire a few people to join the tribe. As for everyone else, it’s fine if this success merely raises our public stature a tiny bit, lets people see that weirdos obsessed with explicit reason have something to contribute. Hopefully it will make folk slightly more likely to listen to the next nerd trying to tell them something using words like “likelihood ratio” and “countersignaling”.
Because if you think that COVID was really scary and our society dealt with it really poorly — boy, have we got some more things to tell you [? · GW].
Comments sorted by top scores.
comment by jdfaben · 2020-10-10T22:38:39.824Z · LW(p) · GW(p)
Seriously, in what sense did rationalists "pwn covid"? Did they build businesses that could reliably survive a year of person-to-person contact being restricted across the planet? Did they successfully lobby governments to invest properly in pandemic response before anything happened? Did they invest in coronavirus vaccine research so that we had a vaccine ready before the pandemic started? Did they start a massive public information campaign that changed people's behaviour and stopped the disease from spreading exponentially? Did they all move to an island nation where they could continue life as normal by shutting the border?
Honestly, it seems pretty distasteful to say that anyone 'pwned' a disease that has now killed over 1 million people, but on the face of it, it's also pretty ridiculous. So far as I can tell, a small handful of people divested a small amount of their stock portfolio, and a bunch of people wrote some articles about how the disease was likely to be a big deal, mostly around the time other people were also starting to come to the same conclusion. By late February it was probably already too late to start stockpiling for a quarantine without effectively taking those supplies away from someone else. Honestly, the practical benefits of being a couple of weeks ahead of the curve on this seem pretty minimal.
(Also, to be clear, it's not obvious you were that much ahead of the curve. The Vox article about 'no handshakes please' was written about 2 weeks before either of the 'rationalist' articles you link to, which implies that at least a good chunk of Silicon Valley was already taking it seriously)
I really think this is a big part of the reason that people don't benefit nearly as much as you might initially think from 'rationality'. There are huge benefits to going with the crowd, and huge coordination problems to be solved if you want to outperform the crowd. Sure, there are a handful of very impressive rationalists who seem to have done very impressive things, but there are a handful of very impressive people in pretty much any community built around any intellectual pursuit. I'm not sure I buy the premise that 'explicit rationality' is nearly as good as you think it is.
↑ comment by FireStormOOO · 2020-10-29T19:58:33.951Z · LW(p) · GW(p)
FWIW I left a decent job that required regular air travel to deep red "COVID is a liberal hoax" areas of the US based heavily on content here. I had alternatives lined up but I probably would've stuck it out otherwise and I think that would've been a mistake.
↑ comment by Ben Pace (Benito) · 2020-10-29T20:29:52.874Z · LW(p) · GW(p)
Thanks for sharing this info. It's helpful for writers in the community to hear about these sorts of effects their writing has :)
↑ comment by Jacob Falkovich (Jacobian) · 2020-10-12T18:51:50.137Z · LW(p) · GW(p)
We didn't get COVID, for starters. I live in NYC, where approximately 25% of the population got sick but no rationalists that I'm aware of did.
↑ comment by jdfaben · 2020-10-13T21:37:55.801Z · LW(p) · GW(p)
I'm actually confused by that response, and I don't think it's really part of your best attempt to explain what you meant by 'rationalists pwned covid'. I'll try to explain why I'm unimpressed with that response below, but I think we're in danger of getting into a sort of 'point-scoring' talking past each other. Obviously there were a few rhetorical flourishes in my original response, but I think the biggest part of what I'm trying to say is that the actual personal benefits to most people of being ahead of the curve on thinking about the pandemic were pretty minimal, and I think avoiding infection would fall in that 'minimal benefit' bucket for most of us.
I think we can be a bit more concrete - I think the actual personal benefits to either you or me of being aware of what was happening with COVID slightly before everyone else were pretty minimal. I really liked your article from February, and I really think the points you were making about conformity bias are probably the strongest part of your argument that rationality has practical uses, but you pretty much said yourself in that post that the actual, practical benefits were not that big:
"Aside from selling the equities, all the prep I’ve done was to stock a month of necessities so I can work from home and to hold off on booking flights for a trip I had planned for April."
And I think (and this is where we probably differ) that this is pretty typical of the sort of topics where you can get the right answer using explicit reason.
To address the actual claim that rationalists didn't get infected (although, as I said, I don't think it really gets at the meat of what you were saying originally):
First, I think it's probably not true, there are two main reasons for this: one, there are 255k confirmed COVID cases in New York City, so if there are 10 million people in the city, and 25% of them have had COVID, then only 10% of the people who've had it know they've had it; two, I'm about 85% sure that I remember a post going round Bay Area rationalist Facebook friends in March about someone who had been to a party at a group house having a positive test.
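To spell out the arithmetic in that first reason, here is a quick sketch; the inputs are the comment's own figures, taken at face value rather than as verified numbers.

```python
# Spelling out the comment's arithmetic. The inputs are the commenter's
# figures, taken at face value, not verified numbers.

confirmed = 255_000          # confirmed cases in NYC (per the comment)
population = 10_000_000      # rough NYC population (per the comment)
assumed_attack_rate = 0.25   # Jacob's claim: ~25% of NYC had been infected

true_infections = population * assumed_attack_rate   # 2.5 million
share_confirmed = confirmed / true_infections        # ~0.10
print(f"share of infections ever confirmed by a test: {share_confirmed:.0%}")
```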
Second, if it is (even proportionally) true, I think it's probably mostly down to demographics. I play in a regular bridge game with some Scottish and English internationals, and as far as I'm aware, none of them have had COVID. I think this is probably more to do with the fact that very few bridge players work in the service sector, and almost all of us were able to work from home during a pandemic than any particular perspicacity on our part.
Third, as I said above, it's a pretty low bar. If you're rich enough (and don't work at a hospital), avoiding personally getting infected is relatively straightforward, and while obviously it has some benefits, I don't think it would be enough of an incentive to convince me to take on a whole new worldview.
↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T15:03:40.238Z · LW(p) · GW(p)
Third, as I said above, it’s a pretty low bar. If you’re rich enough (and don’t work at a hospital), avoiding personally getting infected is relatively straightforward, and while obviously it has some benefits, I don’t think it would be enough of an incentive to convince me to take on a whole new worldview.
My personal experience is consistent with this take, for what it’s worth. I think that “rationalists didn’t get COVID” is indeed mostly due to substantially higher average income (perhaps not even among ‘rationalists’ but specifically among Jacob’s friends/acquaintances).
↑ comment by Ben Pace (Benito) · 2020-10-14T18:47:44.422Z · LW(p) · GW(p)
Something about that seems plausible to me. I'll think on it more...
↑ comment by Ben Pace (Benito) · 2020-10-12T11:29:37.061Z · LW(p) · GW(p)
The best startup people were similarly early, and I respect them a lot for that. If you know of another community or person that publicly said the straightforward and true things in public back in February, I am interested to know who they are and what other surprising claims they make.
I do know a lot of rationalists who put together solid projects and have done some fairly useful things in response to the pandemic – like epidemicforecasting.org and microcovid.org, and Zvi's and Sarah C's writing, and the LW covid links database, and I heard that Median group did a bunch of useful things, and so on. Your comment makes me think I should make a full list somewhere to highlight the work they've all done, even if they weren't successful.
I wouldn't myself say we've pwned covid, I'd say some longer and more complicated thing by default that points to our many flaws while highlighting our strengths. I do think our collective epistemic process was superior to that of most other communities, in that we spoke about it plainly (simulacra level 1) in public in January/February, and many of us worked on relevant projects.
↑ comment by jdfaben · 2020-10-13T21:52:04.424Z · LW(p) · GW(p)
I didn't really see much public discussion early on outside of epidemiology Twitter. I'm married to an epidemiologist who stocked our flat with masks in December, when there were 59 confirmed cases in Wuhan, and we bought enough tins of food to eat for a few weeks in January, as well as upgrading our work-from-home setup before things sold out. Although I completely failed to make the connection and move my pension out of equities, that doesn't actually seem to have cost me very much in the long run (for those keeping count, the S&P 500 is up 17% year-on-year).
(The biggest very early warning sign, apparently, was that when there were 59 cases China was still claiming there was no person-to-person transmission, which seemed implausible).
I actually am impressed by how well the LessWrong-sphere did epistemically. People here seem to have been taking COVID seriously before most other people were, but as I've tried to explain a bit more above, I'm not sure how much good this did anyone personally. If the argument is 'listen to rationalists when they say weird things, because they're more often right when they say weird things than most other people', then I think I'm on board. If the argument is 'try explicit rationality, it will make your life noticeably better in measurable ways', then I'm less convinced, and I think these really are distinct claims.
PS - your links seem to be broken. It's easy enough to follow them, as you gave full URLs; just thought I'd let you know.
↑ comment by Ben Pace (Benito) · 2020-10-13T22:43:41.573Z · LW(p) · GW(p)
Good on your spouse! Very impressed.
(Also, I don't get the S&P being up so much, am generally pretty confused by that, and updated further that I don't know how to get information out of the stock market.)
I think epistemics is indeed the first metric I care about for LessWrongers. If we had ignored covid or been confident it was not a big deal, I would now feel pretty doomy about us, but I do think we did indeed do quite well on it. I could talk about how we discussed masks, precautions, microcovids, long-lasting respiratory issues, and so on, but I don't feel like going on at length about it right now. Thanks for saying what you said there.
Now, I don't think you/others should update on this a ton, and perhaps we can do a survey to check, but my suspicion is that LWers and Rationalists have gotten covid way, way less than the baseline. Like, maybe an order of magnitude less. I know family who got it, I know whole other communities who got it, but I know hundreds of rationalists and I know so few cases among them.
Of my extended circle of rationalist friends, I know of one person who got it, and this was due to them living in a different community with different epistemic standards, and I think my friend fairly viscerally lost some trust in that community for not taking the issue seriously early on. But otherwise, I just know somewhere between 100-200 people who didn't get it (a bunch of people who were in NY like Jacob, Zvi, etc), people who did basic microcovid calculations, started working from home as soon as the first case of community-transmission was reported in their area, had stockpiled food in February, updated later on that surface-transmission was not a big deal so stopped washing their deliveries, etcetera and so forth.
I also knew a number of people who in February were doing fairly serious research trying to figure out the risk factors for their family, putting out bounties for others to help read the research, and so on, and who made a serious effort to get their family safe.
There have been some private threads in my rationalist social circles where we've said "Have people personally caught the virus in this social circle despite taking serious quarantine precautions?" and there've been several reports of "I know a friend from school who got it" or "I know a family member who got it", and there's been one or two "I got a virus in February before quarantining but the symptoms don't match", but overall I just know almost no people who got it, and a lot of people taking quarantine precautions before it was cool. I also know several people who managed to get tests and took them (#SecurityMindset), and who came up negative, as expected.
One of the main reasons I'm not very confident is that I think it's somewhat badly incentivized for people to report that they personally got it. While it's positive for the common good, and it lets us know about community rates and so on, I think people expect they will be judged a non-zero amount for getting it, and can also trick themselves with plausible deniability because testing is bad ("Probably it was just some other virus, I don't know"). So there's likely some amount of underreporting, correlated with the people who didn't take it seriously in the first place. (If this weren't an issue, I would have said more like 500-1500 of my extended friends and acquaintances.)
And even if it's true, I have a concern: we acted with appropriate caution in the first few months, but when more evidence came in and certain things turned out to be unnecessary (e.g. cleaning deliveries, reheating delivery food), I think people stuck with those precautions much too long, and some maybe still are.
Nonetheless, my current belief is that rationality did help me and a lot of my 100s of rationalist friends and acquaintances straightforwardly avoid several weeks and months of life lost in expectation, just by doing some basic fermi estimates about the trajectory and consequences of the coronavirus, and reading/writing their info on LessWrong. If you want you and your family to be safe from weird things like this in the future, I think that practicing rationality (and being on LessWrong) is a pretty good way to do this.
(Naturally, being married to an epidemiologist is another good way, but I can only have one spouse, and there are lots of weird problems heading our way from other areas too. Oh for the world where the only problem facing us was pandemics.)
(Also thx, I think I have fixed the links.)
Added: I didn't see your reply to Jacobian before writing this. Feel free to refer me to parts of that.
↑ comment by Sherrinford · 2020-10-19T22:16:47.816Z · LW(p) · GW(p)
Hey Ben, given that you are able to keep track of 100s of friends and acquaintances, and assuming that you also have lots of other friends and acquaintances who are not rationalists but similar in other respects (probably: young; high income and education; jobs that can be transformed to remote jobs if they aren't already; not too uncomfortable with staying at home because they do not spend every weekend in a soccer stadium or dancing all night?):
How large do you estimate the differential impact of "being rationalist" to be?
↑ comment by Ben Pace (Benito) · 2020-10-19T22:22:38.322Z · LW(p) · GW(p)
It's a good question. I'll see if I can write a reply in the next few days...
↑ comment by FactorialCode · 2020-10-11T00:01:14.539Z · LW(p) · GW(p)
I mean, some of us made buckets of money off of the chaos, so there's that. [LW(p) · GW(p)]
↑ comment by romeostevensit · 2020-10-12T03:04:35.052Z · LW(p) · GW(p)
There are two halves to an options trade: when to buy and when to sell. Wei Dai did well on the first half but, per his updates in the comments, lost most of the gains on the second half. This isn't a criticism; it's hard.
↑ comment by Sherrinford · 2020-10-19T22:01:08.067Z · LW(p) · GW(p)
Is "some of us" more than Wei Dai? Because it seems to me that only Wei Dai is mentioned as an example but it is implied that more people profited - not only by you, but in general when I see that claim.
↑ comment by Ben Pace (Benito) · 2020-10-19T22:03:05.471Z · LW(p) · GW(p)
(I know of 1-2 other examples where people did something like double their net wealth.)
↑ comment by George3d6 · 2020-10-11T23:14:14.389Z · LW(p) · GW(p)
I'd stress the idea here that finding a "solution" to the pandemic is easy and preventing it early on based on evidence also is.
Most people could implement a solution better than those currently affecting the US and Europe, if they were a global tsar with infinite power.
But solving the coordination problems involved in implementing that solution is hard; that's the part that needs solving, and nobody is closer to a solution there.
↑ comment by voxelate · 2020-11-07T08:17:42.044Z · LW(p) · GW(p)
I saw the supply chain disruptions coming and made final preparations for it, I saw layoffs coming in my aviation-related job so I updated my resume, took a good severance package, and found a new, remote-based job with significantly higher pay. And yes, I also significantly re-balanced my portfolio and took advantage of the crash early this year. In all, I expect about 40% additional income/unrealized gains this year than last. To me that's more than minimal.
Rationalists that were paying attention got the first chance to understand the implications and make moves (big or small) before a mass of people finally took it seriously in the US. I'll admit part of it is certainly luck, since I can't really time the market or precisely know how gov't policies and actions will affect my stocks.
It's also hard to know how much of that on my part was explicit reason, I was certainly reading up on the literature about it, but there was not a ton of data. I did use some social cognition based on the Chinese response under the presumption that they knew more about it since it originated there.
I don't think the COVID response is even the best measure to judge the benefits of being rational, it's just one part of it. If you want to solve problems, you have to be rational... being irrational is a bad way to solve problems.
↑ comment by leonidasmith (matthewvandyek) · 2022-04-02T11:33:07.452Z · LW(p) · GW(p)
Thank you for this comment.
I have very mixed feelings about this website. On the one hand, it's got interesting articles, and reading HPMOR was very entertaining. But, on the other, there are so many people who are just writing posts that are embarrassingly the opposite of what they claim they are all about: self-consciousness, in particular.
Me and my friends are rational, that is, all that is right and correct, by definition, and we call ourselves Rationalists. Can't you see the paradox in this claim? And yet I thought people here didn't find Spock rational, and correctly assessed that his character is "defined" as such by the story, but it fails to fulfill itself. As a human being you will always fail. The website is called "less wrong", for a random divinity's sake. It should be about striving to be less wrong while admitting we cannot avoid failure, not about jerking each other off about how rational and better we are than others. And yet...
In general I am highly suspicious of any lover of truth who will willingly call themselves a sophist: for a Rationalist, there is little praise as high as being a Rationalist, and therefore calling yourself and the people who agree with you with this name is very much patting your own back.
Especially when using it to criticize and compare yourself to people you disagree with.
Having said this, there are interesting things in this post too. I'm not saying it's completely bad, but a lot of the language and framing here leads me to think there is also a lot of unnecessary arrogance. I just wanna say, more actual thinking, less implicit self-praise.
comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T12:54:59.701Z · LW(p) · GW(p)
I think this post is doing a simplification which is common in our community, and at some point we need to acknowledge the missing nuance there. The implicit assumption is that rationality is obviously always much better than believing whatever is socially expedient, and that everyone who rejects rationality is just making a foolish error. In truth, there are reasons we evolved to believe whatever is socially expedient[1], and these reasons are still relevant today. Specifically, this is a mechanism for facilitating cooperation (which IMO can be given a rational, game-theoretic explanation). Moreover, it seems likely that for most people, during most of history, this strategy was the right choice.
IMO there are two major reasons why in these times rationality is the superior strategy, at least for the type of people drawn to LessWrong and in some parts of the world. First, the stakes are enormous. The freedom we enjoy in the developed world, and the pace of technological progress create many opportunities for large gains, from founding startups to literally saving the world from destruction. Given such stakes, the returns on better reasoning are large. Second, we can afford the cost. Because of freedom and individualism, we can profess unpopular beliefs and not be punished too heavily for it. EDIT: And, the Internet allows finding likeminded people even if you're weird.
The self-deceptive strategy has a serious failure mode: while you're self-deceiving, you cannot fully use your mental faculties to reassess the decision to self-deceive. (See also "against double think" [? · GW]). When self-deception is the right choice, that's not a problem. But when it's the wrong choice, it gets you stuck in a hard to escape attractor. This I think is the main source of obstacles on the path of coming over to rationality, when coming over to rationality is the right choice.
More precisely, pretend to believe by using the conscious mind [LW · GW] as a mask. EDIT: We intuitively divide questions into low-stakes (where knowing what's true has few effects on our lives whose causality doesn't go through social reactions to the belief) and high-stakes (where knowing what's true does have direct effects on our lives). We then try to form accurate conscious beliefs about the latter and socially expedient conscious beliefs about the former. We do have more accurate intuitive beliefs about the former, but they do not enter consciousness, and their accuracy suffers since we cannot utilize consciousness to improve them. See also "belief in belief" [LW · GW] ↩︎
↑ comment by Kaj_Sotala · 2020-10-10T14:34:21.976Z · LW(p) · GW(p)
IMO there are two major reasons why in these times rationality is the superior strategy, at least for the type of people drawn to LessWrong and in some parts of the world.
A third reason is that believing in whatever is socially expedient works much better when the socially expedient beliefs have been selected to be generally adaptive. The hunter-gatherer environment didn't change much and culture had plenty of time to be selected for generally beneficial beliefs, but that's not the case for today's beliefs:
The trouble with our world is that it is changing. Henrich focuses on small scale societies. These societies are not static. The changes they undergo are often drastic. But the distance between the life-style of a forager today and that of her ancestors five hundred years ago pales next to the gap that yawns between the average city-slicker and her ancestors five centuries past. Consider the implications of what demographers call the "demographic transition model:"
Each stage in the model presents a different sort of society than that which came before it. Very basic social and economic questions—including subsistence strategy, family type, mechanisms for mate selection, and so forth—change substantially as societies move through one stage to the next. Customs and norms that are adaptive for individuals in stage two societies may not be adaptive for individuals living in stage four societies.
If the transition between these stages was slow this would not matter much. But it is not. Once stage two begins, each stage is only two or three generations long. Europeans, Japanese, Taiwanese, and South Koreans born today look forward to spending their teenage years in stage five societies. What traditions could their grandparents give them that might prepare them for this new world? By the time any new tradition might arise, the conditions that made it adaptive have already changed.
This may be why the rationalist impulse wrests so strong a hold on the modern mind. The traditions are gone; custom is dying. In the search for happiness, rationalism is the only tool we have left.
↑ comment by Ben Pace (Benito) · 2020-10-10T20:08:22.581Z · LW(p) · GW(p)
Upvoted, it's also correct to ask whether taking this route is 'worth it'.
I am skeptical of "Moreover, it seems likely that for most people, during most of history, this strategy was the right choice." Remember that half of all humans existed after 1309 [LW · GW]. In 1561 Francis Bacon was born, who invented the founding philosophy and infrastructure of science. So already it was incredibly valuable to restructure your mind to track reality and take directed global-scale long-term action.
And plausibly it was so before then as well. I remember being surprised reading Vaniver's account of Xunzi [LW · GW], writing in 300 BC, where Vaniver said:
By the end of it, I was asking myself, "if they had this much of rationality figured out back then, why didn't they conquer the world?" Then I looked into the history a bit more and figured out that two of Xunzi's students were core figures in Qin Shi Huang's unification of China to become the First Emperor.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T20:25:29.042Z · LW(p) · GW(p)
Francis Bacon's father was a successful politician and a knight. Bacon was born into an extremely privileged position in the world, and wasn't typical by any margin. Moreover, ey were, quoting Wikipedia, a "devout Anglican", so ey only went that far in eir rationality.
↑ comment by Jacob Falkovich (Jacobian) · 2020-10-12T18:43:24.453Z · LW(p) · GW(p)
If I, a rationalist atheist, was in Francis Bacon's shoes I would 100% live my life in such a way that history books would record me as being a "devout Anglican".
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-12T20:43:51.236Z · LW(p) · GW(p)
Sure. But, in order to lie without the risk of being caught, you need to simulate the person who actually is a devout Anglican. And the easiest way to do that is, having your conscious self [LW · GW] actually be a devout Anglican. Which can be a rational strategy, but which isn't the thing we call "rationality" in this context.
Another thing is, we can speak of two levels of rationality: "individual" and "collective". In individual rationality, our conscious beliefs are accurate but we keep them secret from others. In collective rationality, we have a community of people with accurate conscious beliefs who communicate them with each other. The social cost of collective rationality is greater, but the potential benefits are also greater, as they are compounded through collective truth-seeking and cooperation.
↑ comment by Ben Pace (Benito) · 2020-10-10T22:03:27.404Z · LW(p) · GW(p)
This isn't much of an update to me. It's like if you told me that a hacker broke out of the simulation, and I responded that it isn't that surprising they did because they went to Harvard. The fact that someone did it all is the primary and massive update that it was feasible and that this level of win was attainable for humans at that time if they were smart and determined.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-11T11:08:40.642Z · LW(p) · GW(p)
We're discussing the question of whether for most people in the past, rationality was a strategy inferior to having a domain where conscious beliefs are socially expedient rather than accurate. You gave Francis Bacon as a counterexample. I pointed out that, first, Bacon was atypical along the very axes that I claim make rationality the superior choice today (having more opportunities and depending less on others). This weakens Bacon's example as evidence against my overall thesis. Second, Bacon actually did maintain socially expedient beliefs (religion, although I'm sure it's not the only one). There is a spectrum between average-Jane-strategy and "maximal" self-honesty, and Bacon certainly did not go all the way towards maximal self-honesty.
↑ comment by Ben Pace (Benito) · 2020-10-11T21:51:33.720Z · LW(p) · GW(p)
I think the thing I want here is a better analysis of the tradeoff and when to take it (according to one's inside view), rather than something like an outside view account that says "probably don't".
(And you are indeed contributing to understanding that tradeoff, your first comment indeed gives two major reasons, but it still feels to me true to say about many people in history and not just people today.)
Suppose we plot "All people alive" on the x-axis, and "Probability you should do rationality on your inside view" on the y-axis. Here are two opinions one could have about people during the time of Bacon.
I want to express something more like the second one than the first.
↑ comment by Vladimir_Nesov · 2020-10-10T16:42:29.136Z · LW(p) · GW(p)
The implicit assumption is that rationality is obviously always much better than [...]
Instead of doing a better thing, one might do the more integrity-signaling thing, or pursue scholarship, or maximize personal wealth. Expecting an assumption about what's better relies on the framing of human pursuits as better-things-seeking.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T16:51:52.312Z · LW(p) · GW(p)
By "better" I mean "better in terms of the preferences of the individual" (however, we also constantly self-deceive about what our preferences actually are).
↑ comment by Vladimir_Nesov · 2020-10-10T17:55:54.696Z · LW(p) · GW(p)
But if a person pursues something for reasons other than considering it the better thing, then the concept of "better" is useless for explaining their behavior. It might help with changing their behavior, if they might come to be motivated by the concept of "better", and form an understanding of what that might be. Before that happens, there is a risk of confusing the current pursuit (revealed preference) with a nascent explicitly conceptualized preference (the concept of "better") that's probably very different and might grow to fill the role of their pursuit if the person decides to change for the better (losing integrity/scholarly zeal/wealth/etc.).
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T18:15:41.344Z · LW(p) · GW(p)
Hmm, I think we might be talking past each other for some reason. IMO people have approximately coherent preferences (that do explain their behavior), but they don't coincide with what we consciously consider "good", mostly because we self-deceive about preferences for game theory reasons.
↑ comment by Vladimir_Nesov · 2020-10-10T18:57:59.881Z · LW(p) · GW(p)
The distinction between observed behavior (preferences that do explain behavior) and endorsed preference (a construction of reason not necessarily derived from observation of behavior) is actionable. It's not just a matter of terminology (where preference is redefined to be whatever observed behavior seems to seek) or hypocrisy (where endorsed preference is public relations babble not directly involved in determining behavior). Both senses of "preference" can be coherent. But endorsed preference can start getting increasingly involved in determining the purposes of observed behavior, and plotting how this is to happen requires keeping the distinction clear.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T19:24:36.510Z · LW(p) · GW(p)
I think that the "endorsed" preference mostly affects behavior only because of the need to keep up the pretense. But also, I'm not sure how your claim is related to my original comment?
↑ comment by Vladimir_Nesov · 2020-10-10T19:50:39.389Z · LW(p) · GW(p)
Humans can be spontaneous (including in the direction of gradual change). It's possible to decide to do an unreasonable thing unrelated to revealed preference or previous activity. Thus the need to keep up the pretense is not a necessary ingredient of the relationship between behavior and endorsed preference. It's possible to start out an engineer, then change behavior to pursuit of musical skill, all the while endorsing (but not effecting) promotion of communism as the most valuable activity. Or else the behavior might have changed to pursuit of promotion of communism. There is no clear recipe to these things, only clear ingredients that shouldn't be mixed up.
I'm not sure how your claim is related to my original comment
The statement in the original comment framed pursuit of rationality skills as pursuit of things that are better. This seems to substitute endorsed preference (things that are better) for revealed preference (actual pursuit of rationality skills). As I understand this, it's not necessary to consider an actual pursuit a good thing, but it's also prudent to keep track of what counts as a good thing, as it might one day influence behavior.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T20:07:59.145Z · LW(p) · GW(p)
IMO going from engineer to musician is not a change of preferences, only a change of the strategy you follow to satisfy those preferences. Therefore, the question is, is rationality a good strategy for satisfying the preferences you are already trying to satisfy.
↑ comment by Vladimir_Nesov · 2020-10-10T20:36:56.689Z · LW(p) · GW(p)
IMO going from engineer to musician is not a change of preferences, only a change of the strategy you follow to satisfy those preferences.
I would say about a person for whom this is accurate that they didn't really care about engineering, or then music. But there are different people who do care about engineering, or about music. There is a difference between the people who should be described as only changing their strategy, and those who change their purpose. I was referring to the latter, as an example analogous to changing one's revealed preference to one's endorsed preference, without being beholden to any overarching ambient preference satisfied by either.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T20:59:08.293Z · LW(p) · GW(p)
IMO such "change of purpose" doesn't really exist. Some changes happen with aging, some changes might be caused by drugs or diet, but I don't think conscious reasoning can cause it.
comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-09T19:20:52.535Z · LW(p) · GW(p)
(I don't think outsiders are more leery of the community because of its "ask culture" than of its "talk over people culture". It's something I have a problem with as a meetup organiser. Rats come fully prepared to sweep the floor, regardless of what happens to lie there.)
↑ comment by Ben Pace (Benito) · 2020-10-09T19:21:56.891Z · LW(p) · GW(p)
Many of the best of us sure are disagreeable and forthright!
↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-09T19:43:05.843Z · LW(p) · GW(p)
And how good are the best of us at bringing cookies and tea and just putting things on the table before we start disagreeing?
"Tact" is a field you build for the disagreeing to be around the issues that people truly just don't agree about, not a bullet to be shot at an opponent someone wants to be charitable to.
↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T01:59:28.503Z · LW(p) · GW(p)
And how good are the best of us at bringing cookies and tea and just putting things on the table before we start disagreeing?
This is one of the things that drove me away from casual in-person “rationalist community” gatherings. My habit when getting together with my friends is to bring some cookies (or something along these lines); my friends usually also contribute something. So the first several times I came to small gatherings of rationalist-type folks, I indeed brought (homemade!) cookies for everyone.
It turned out that (a) I was the only one who ever thought to bring any such thing (even after the first time), and (b) while everyone else was clearly happy to eat the cookies, not only did no one ever thank me for bringing them, but no one even commented on them or acknowledged in any way that I’d brought said cookies.
So, I stopped bringing cookies, and then stopped coming to such gatherings.
Replies from: Vladimir_Nesov, abramdemski, Benito, mary-chernyshenko↑ comment by Vladimir_Nesov · 2020-10-14T09:43:44.315Z · LW(p) · GW(p)
while everyone else was clearly happy to eat the cookies
I would've ignored the fact of there being cookies, as I wouldn't want to support the norm of bringing cookies (I don't care about there being cookies, and it would be annoying to be expected to bring generalized cookies), but I would've also intentionally avoided eating them (not participating in a norm goes both ways). So the claim that everyone ate the cookies seems surprising. There should be an option to register disapproval of a norm that wouldn't be seen as nonspecific rudeness.
Replies from: GuySrinivasan, SaidAchmiz↑ comment by SarahNibs (GuySrinivasan) · 2020-10-14T14:51:11.857Z · LW(p) · GW(p)
This sounds like a strategic misstep, and I'm guessing it was caused either by a hyperalert status manager in your brain or a bad experience at the hands of a bully (intentional or otherwise) in the past.
I estimate that (prepare for uncharitable phrasing) asking anyone with your mindset to try to self-modify to be okay with other people taking steps to make everyone happier in this way is a smaller cost than a norm of "don't bring [cookies], rationalists will turn around and blame everyone who didn't bring them if you dare".
But yeah I think spending points to teach people not to defect against a bring-cookies-if-you-wanna norm (aka thank them, aka don't look askance at the but-I-don't-wanna) is waaay better than spending points to disallow a bring-cookies-if-you-wanna norm.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-14T15:30:50.790Z · LW(p) · GW(p)
try to self-modify to be okay with
I'm okay with other people supporting norms that I don't support, and with following a norm that I don't support, if it happens to be accepted in a group. But there should be freedom to register disapproval of a norm, even when it ends up accepted (let alone in this case, where it apparently wasn't accepted). There is no call to self-modify anyone.
What felt annoying to me and triggered this subthread was that in Said's story there were only people who supported the norm he appeared to be promoting, and people who preyed on the commons. Disapproval of the norm was not a possibility, on pain of being bundled together with the defectors. This issue seems to me more important than the question of which norm is the right one for that setting (that is, which norm should have been supported).
Replies from: GuySrinivasan, SaidAchmiz↑ comment by SarahNibs (GuySrinivasan) · 2020-10-14T15:38:09.548Z · LW(p) · GW(p)
That's fair. There are definitely norms I think help overall (or situationally help) that I wish didn't help overall because I don't like them. For example tolerance of late arrivals. I hate it, and also if we didn't tolerate it my most valuable group would never have existed.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-14T15:55:06.177Z · LW(p) · GW(p)
That's strategic voting as opposed to voting-as-survey. What if nobody wants cookies, but most people vote for them in expectation that others would appreciate them? Voting-as-survey should be able to sort this out, but strategic voting suffers from confirmation bias. Everyone is bringing cookies, so apparently people like them. But with strategic voting this is begging the question: there might have been no attempt to falsify the assumption.
Thus I don't even see how it can be clear whether the cookies norm is better for the group than the no-cookies norm, and so whether the strategic vote should support the cookies. (In the case of cookies specifically, getting eaten is some sort of survey, but in general strategic voting breeds confusion.)
↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T15:54:52.483Z · LW(p) · GW(p)
What norm do you think I was (or appeared to be) promoting?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-14T16:25:06.173Z · LW(p) · GW(p)
The norm that everyone should occasionally sell some food for status. (I understand that many groups like the activity, in which case it's a good norm. Personally I don't like eating at social gatherings, or food-derived status, or being confused for a defector, so I don't like there being a norm like that.)
Replies from: mary-chernyshenko, SaidAchmiz↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-14T19:09:55.352Z · LW(p) · GW(p)
There's a clever trick to this effect. You can say thank you for others' sake without eating! Wouldn't that just throw a spanner into their Machiavellian calculations on who owes whom?
↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T16:31:55.171Z · LW(p) · GW(p)
You can hardly simultaneously describe the relevant dynamic as “selling food for status” and admit that many people/groups enjoy sharing food at social gatherings; these are mutually inconsistent characterizations.
ETA: It goes almost without saying that “sell some food for status” is an unnecessarily tendentious description, all by itself…
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-14T16:41:15.354Z · LW(p) · GW(p)
Huh? Where is the contradiction? Giving status for things you appreciate is enjoyable, as well as receiving status for a good deed. Not to mention all the delicious food generated by presence of the norm. It's clearly selling because not paying for the food (with a generalized "thank you" and possibly reciprocal participation in the norm) is defection. But there is nothing wrong with a good market!
Replies from: Benito↑ comment by Ben Pace (Benito) · 2020-10-14T18:55:08.426Z · LW(p) · GW(p)
"Everyone should occasionally sell some food for status" is not what's being discussed. Your phrasing sounds as though Said said everyone was supposed to bring cookies or something, which is obviously not what he said.
What's being discussed is more like "people should be rewarded for making small but costly contributions to the group". Cookies in and of themselves aren't contributing directly to the group members becoming stronger rationalists, but (as well as just being a kind gift) they're a signal that someone is saying "I like this group, and I'm willing to invest basic resources into improving it".
If such small signals are ignored, it is reasonable to update that people aren't tracking contributions very much, and decide that it's not worth putting in more of your time and effort.
Replies from: Vladimir_Nesov, ioannes_shade, SaidAchmiz↑ comment by Vladimir_Nesov · 2020-10-14T19:27:47.530Z · LW(p) · GW(p)
I agree with the more general point about importance of tracking and rewarding contributions, but in this subthread I was specifically discussing cookies and difficulties with graciously expressing my lack of appreciation for them.
rewarded for making small but costly contributions
There is nothing good about contributions being costly. With signaling, the cost should pay for communication of important things that can't otherwise be communicated, because incentives don't allow trust; here that piece of critical intelligence would be possession of cooking skill and caring about the group. The cost is probably less than the cost of time spent in the meeting, so the additional signal is weak. If you like cooking, the cost might actually be negative. If you are not poor, the signal from store-bought food is approximately zero. (As signaling is about a situation without trust, it's not the thought that counts. I'm not saying that signaling is appropriate here, I'm considering the hypothetical where we are engaged in signaling for whatever reason.)
And it should actually matter whether the contributions are appreciated. So I guess it's possible that there is a difference in how people respond to costly signals, compared to useful contributions of indeterminate cost.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T23:52:54.956Z · LW(p) · GW(p)
The cost is probably less than the cost of time spent in the meeting, so the additional signal is weak. If you like cooking, the cost might actually be negative.
I’m sorry, but this is a ridiculous claim.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-15T08:03:38.613Z · LW(p) · GW(p)
Once upon a time, I liked programming. Time spent not programming was uncomfortable, and any opportunity to involve programming with other activities was welcome. If I could program some cookies for a meetup, I would describe the cost of that as negative. Thus by analogy I'm guessing that a person who similarly likes cooking would perceive the cost of cooking (not counting the price of ingredients) as negative. Maybe I liked programming to a ridiculous degree?
↑ comment by ioannes (ioannes_shade) · 2020-10-14T20:18:57.501Z · LW(p) · GW(p)
In folksier terms, what's being discussed is rationalists' often-strange relationship to common courtesy (i.e. Lindy social dynamics).
↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T18:56:54.516Z · LW(p) · GW(p)
Just so.
Replies from: mary-chernyshenko↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-15T10:53:10.112Z · LW(p) · GW(p)
(Not sure where this fits in the thread or if it does, so sorry for the off-topic. At least one of ours has contracted the virus, AFAIK. He told me after we had talked for a bit about other business; I asked him to comment on something and he said sure, he'd have done it sooner but for covid... I have offered to help our local LW people pay for testing if anybody needs it, without any additional questions or conclusions. So far nobody has asked for it, and I do hope this means something good, like "we're mostly healthy and have money", and not something bad, like "we would have asked for help but it's not done". Even to be able to offer anything meaningfully, I need people "to bring cookies".)
↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T12:44:58.973Z · LW(p) · GW(p)
I would’ve hoped that the use of ‘everyone’ in this context would be clearly enough slightly-hyperbolic to avoid this sort of misunderstanding…
This happened years ago, and I don’t have perfect recall of past events. Even at the time, I could not assert with confidence that literally every single person present at each of these events ate the cookies. (Indeed, a priori such a claim seems unlikely; surely at least one person was on a diet? Diabetic? Vegan? Lactose-intolerant? Not a fan of oatmeal / chocolate chip / whatever? A claim that literally everyone ate the cookies should be surprising for reasons entirely unrelated to any social norms!)
The cookies were eaten—that’s the point. Not long into each gathering, all the cookies (or other sweets; I think I may’ve brought brownies once) were gone. The majority of the other attendees seemed happy to eat them. These things, I can say with as great a confidence as I have in recollection of any other years-past event.
As for your main point…
I sympathize with being placed in the unpleasant situation of disapproving of a social norm that others are promulgating with good intentions. (Clearly, I disagree with you on the subject of this particular norm; what’s more, it seems to me that you are rather misinterpreting what the intended/desired norm is, in this case. I don’t know if you’d still disapprove of the actual norm I have in mind, properly understood… if so, our disagreement deepens, as I think that rejection of the norm in question, and those like it, is corrosive to any would-be community. But all of this is beside the point.)
But there are ways of handling such situations that contribute to social cohesion, and ways that detract from it.
In my experience, in most more or less casual social circles (whether they be centered around a workplace, group activity, or anything else), most people have little or no skill at cooking/baking. If one person does have such skill, and (for whatever occasion may warrant it—be that “it’s my birthday” or “it’s Friday”) brings homemade food or snacks, typically the other members of the group are somewhat surprised (it’s an unusual skill, after all), and express gratitude. If the food is skillfully made, there are comments noting this—praising the person who made and brought the food, and making note of their skill.
On other occasions, in such groups, other members of the group, who lack such cooking/baking skills, nevertheless see fit to bring food for sharing. This may be store-bought, prepared by a caterer, etc. The rest of the group expresses gratitude again, though not, of course, the other sentiments of praise and admiration.
Still others in such groups may rarely or never contribute food (homemade or otherwise) to group gatherings (but do typically, if they wish to be perceived as cooperative members, contribute in other informal ways).
This is a pattern I’ve seen play out many times, in many groups—academic, professional, hobby-oriented, generic social gatherings, etc. I have observed it on the East Coast, and on the West Coast, and in the Midwest; among “millennials” and “boomers”; among people and groups from a variety of cultural backgrounds.
In each case, the voluntary contribution of food to the group, for sharing by its members, with no direct compensation expected, is seen, correctly, as an act of deliberate cooperation. The contributor is rewarded with some social status, as well as the positive feelings that come from being directly thanked by a fellow group member. If the food was also made by the contributor (and is good)—i.e., if the contribution required skill and effort, i.e. is a costly signal of cooperation—a larger amount of status is bestowed (via expressions of admiration, etc.).
These responses cost the other group members little else but words. They need not create expectations or reciprocal obligations, please note! If I bring cookies, and everyone else says “oh, thanks for bringing cookies, Said!” and (assuming they are delicious) “ooh, these are great, Said, you made these? cool!”—this already discharges any obligations of reciprocity. Certainly there could be a norm that everyone contributes something (either every time, or in some sort of formal or informal rotation). But such a norm would be a separate and additional thing.
Now, suppose that you still strenuously object even to the implied suggestion that there might be any expectation of contributing food for sharing. Suppose you bristle at the notion that a group member may expect, and receive, any social status for such contributions. Nevertheless, unless you consider the contribution to be a hostile act, it is clearly counterproductive to punish it, yes? The signal you send, if you do so, is “this group neither appreciates nor rewards cooperation”. (Cooperation in general, note! If I bring cookies and get not a peep of acknowledgment or thanks, the message I get isn’t “we don’t do food sharing here”—it’s the aforesaid general rebuke. If you want to send the specific message and not the general one, you have to use actual words. But in such a case, you would have to ensure that the contributor is still rewarded for the impulse to cooperation…)
And, needless to say, taking advantage of the cooperative act, while neither rewarding it with even so much as acknowledgment, not to say thanks (and still less praise)… well, that is defection, pure and simple.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-14T13:11:37.017Z · LW(p) · GW(p)
A norm you might've intended is not part of the decision problem, if what people observe is only the cookies not accompanied by an essay detailing the intended norm. I'm still not sure what response you endorse for those who disapprove of what the norm appears to be (other than explicitly engaging in a discussion).
I wasn't literal with "everyone" either. The point is that in your recollection you've rounded down to zero the number of people who might've tried to respectfully decline (in the most straightforward way) the norm you appeared to be pushing.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T13:53:26.807Z · LW(p) · GW(p)
Respectfully, I think you are missing my point in a quite comprehensive way.
Perhaps others might weigh in on whether what I have said is clear (and, of course, whether they agree, etc.) I will refrain from further attempts at explanation until then.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-14T14:24:16.926Z · LW(p) · GW(p)
Clearly I wasn't engaging your point, I was clarifying my own point instead. So I don't see how it would be evident whether I was missing your point or not.
There are these defectors, and for any reasonable person whose reaction to a cookie is to explicitly conceptualize the social consequences of possible responses to being presented with it, it should be clear that silently eating the cookie and not otherwise responding in any way is defection. There are groups where a different response is prevalent, though probably for reasons other than higher propensity for consideration of social consequences of their actions or different results of that consideration. Because of these hypotheses where apparent cooperation follows for obscure reasons, and apparent defection follows from seeing a cookie as just food, I don't see how lack of apparent cooperation leads to any clear conclusions. (As an example of a point I chose not to engage.)
↑ comment by abramdemski · 2020-10-14T18:36:28.013Z · LW(p) · GW(p)
Wow, that sucks. I predict that my local meetup (in Los Angeles) would have been loudly thankful of your bringing cookies. I don't especially predict that anyone would have reciprocated, though.
↑ comment by Ben Pace (Benito) · 2020-10-14T02:16:46.781Z · LW(p) · GW(p)
In as much as my comment matters here, I'm sorry about that Said :/
↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-14T09:15:38.347Z · LW(p) · GW(p)
I'm sorry to hear that.
FWIW, I think you should not have brought homemade cookies; they mean "something personal". In a way, good but store-bought food is easier to cooperate on. I don't mean it was your fault that the others took advantage of your generosity (that would be stupid), just that sometimes we should try the easier way first. I was thinking about storing some generic cookies at our place as a fall-back option and telling people they can bring stuff to have "a more interesting table". And maybe once in a while collecting a little money to replenish the stores. If someone new comes in and finds homemade food, it's kind of awkward, and new people join in at times.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T12:08:11.144Z · LW(p) · GW(p)
In a way, having good but bought food is easier to cooperate on.
I tried that, too, as it happens. Would you care to guess what the result was?
Replies from: mary-chernyshenko↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-14T12:27:13.042Z · LW(p) · GW(p)
Our results were: 1) bought and forgot to eat; 2) bought enough for people who came there from home but too little for people who came from work; 3) bought and ate; 4) forgot to buy; 5) got so hungry we had to stop talking (about food) and send a guy out for sandwiches; 6) bought and saw that someone brought their own food, too, so we had to redistribute the leftovers... I mean, we are just great at this planning thing...
...but if you ask me to guess, I'd wager people said it's really just not worth the bother.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2020-10-14T12:30:44.117Z · LW(p) · GW(p)
What actually happened was exactly the same as what happened with the homemade cookies: people ate the food, without ever in any way acknowledging that I had brought it (thanking me wasn’t even on the radar); no one else ever brought anything.
Replies from: mary-chernyshenko↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-14T12:42:08.811Z · LW(p) · GW(p)
Tough...
↑ comment by Ben Pace (Benito) · 2020-10-09T19:57:51.157Z · LW(p) · GW(p)
If you've ever been to a CFAR workshop, you're aware of just how strong rationalists are at bringing forth the greatest snacks known to mankind :)
(Will edit to reply to the other part of your comment in a bit, in a meeting now.)
Replies from: mary-chernyshenko↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-09T20:24:33.246Z · LW(p) · GW(p)
Thank you, that instills hope. I've got to send c. 75% of them to CFAR and we're golden.
(Sorry about carrying on like this, I'm a bit mad right now. I have just lost a very nice and thoughtful introverted person (who is also an LW follower of several years) from our local online discussion group, and nobody even noticed. I went to their discussion group last year; it was a dream come true, although not strictly LW-themed. And now she tried us out and withdrew. We didn't even get to disagree about anything.)
But seriously, a post on Meetup Food sometime before people can meet offline might be a good idea!
Replies from: Benito↑ comment by Ben Pace (Benito) · 2020-10-10T00:49:08.923Z · LW(p) · GW(p)
I'm sorry you lost that person from your discussion group. (PM'd you, I'd be interested to hear more about your group, and chat about how to cause cool people to reliably stick around.)
Replies from: mary-chernyshenko↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-10T06:28:28.119Z · LW(p) · GW(p)
Oh, thank you a lot, that'd be lovely!
comment by snog toddgrass · 2020-10-19T03:55:21.176Z · LW(p) · GW(p)
Much of this thread is long-time rationalists talking about the experience of new people like me. Here's my experience as someone who found rationality a year ago; it bears more closely on the question than the comments of outliers. I read the Sequences, then applied rationalist ideas to dating, and my experience closely resembles Jacobian's model. Note that LW has little dating advice, so I did the research and application myself. I couldn't just borrow techniques; I had to apply rationality.[^1] My experience is evidence that rationality improves our outcomes.
I picked up The Sequences in February 2020 on a recommendation from 80k. I read Yudkowsky's sequences cover to cover. Their value was immediately obvious to me, and I read deeply.
I finished the sequences in May, and immediately started applying it to my problems. My goal was not to look cool or gain status on a weird blog. I just wanted to make my life better, and The Sequences gave me a sense that more was possible.
Improving my romantic life has been my greatest rationality project. Dating was a hard part of my life. After The Sequences, I realized most dating advice rests on Fake Explanations, anti-reductionism, and just-world bias, and is just general crap. I could see conventional dating wisdom for the bullshit that it is. An instrumentally rational model of mate selection must be a bit complicated and a lot weird, but I knew it existed.
I started writing blog posts analyzing my experience, proposing experiments, and looking for advice. I eventually found the best research by Miller, Fleischman, LukeProg, Putanumonit, and the great ancient Hugh Ristik. You can look through my own LW history to see what happened. Most posts apply ideas from Fleischman or Miller to my own particular situation, or attack conventional wisdom about relationships. A few things happened.
- Most posts were harshly criticized by LW'ers, because people have strong feelings about romance. One post started a 50-comment debate about whether dating advice is too taboo for the site. I did not mind, because the criticism was sometimes constructive and always milder than what my ideas got in the real world. That I persisted despite the criticism is strong evidence my behavior was driven by problem solving, not status seeking.
- Non-rationalists harshly criticized my findings. I lost status repeatedly.
- I made mistakes. I sometimes overvalued status signalling. I overvalued mate-choice copying. I under-texted. I over-texted. I worked until I found a balance between intuition and model.
- People repeatedly told me, "You should not try. I tried to apply System 2 to dating, and my results were bad." I thought to myself, "There's a 50% chance they're right and I get no benefit. But if they're wrong, the benefit is huge," and kept working.
Now, in October, my romantic life is way better. My strategies are better adapted. My predictive capacity is stronger. Dating isn't a scary, chaotic part of life; it's a fun, silly, chaotic part of my life. It's still frustrating sometimes, but the improvement has been huge.
Conclusions
This post is accurate. I went through the swamp of underperformance. I endured the sneers. I accepted having deeply weird beliefs. I attacked ugh field after ugh field. I believed non-just-world truths sometimes (without going all "red-pill"). And it took time but it worked.
The tribal culture of LessWrong wasn't a problem. I wanted rational people to comment on my ideas, so I posted here. I got what I wanted. It's fine.
[^1]: I eventually found Geoffrey Miller's book "Mate", which saved me enormous time.
comment by Zack_M_Davis · 2020-10-10T03:00:41.566Z · LW(p) · GW(p)
The Rationality community [...] has been the main focus of Rationality [...] rationality's most famous infohazard [...] join the tribe [bolding mine]
I agree that explicit reasoning is powerful and that the lesswrong.com website has hosted a lot of useful information about COVID-19, but this self-congratulatory reification of "the community"—identifying "rationality" (!!) with this particular cluster of people who read each other's blogs—is super toxic [LW(p) · GW(p)]. (Talk about "bucket errors"!) Our little robot cult does not have a monopoly on reason itself!
the young rationalist must navigate strange status hierarchies and bewildering memeplexes. I've seen many people bounce off the Rationalist community over those two things.
Great! If bright young people read and understand the Sequences and go on to apply the core ideas (Bayesian reasoning, belief as anticipated experience [LW · GW], the real reasons being the ones that compute your decision [LW · GW], &c.) somewhere else, far away from the idiosyncratic status hierarchy of our idiosyncratic robot cult, that's a good thing. Because it is about the ideas, not just roping in more warm bodies to join the tribe, right?!
Replies from: vanessa-kosoy, Kenny↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T12:25:10.749Z · LW(p) · GW(p)
Rationality has benefits for the individual, but there are additional enormous benefits that can be reaped if you have many people doing rationality together, building on each other's ideas. Moreover, ideally this group of people should, besides the sum of its individuals, also have a set of norms that are conducive to collective truth-seeking. Furthermore, the relationships between them shouldn't be purely impersonal and intellectual. Any group endeavor benefits from emotional connections and mutual support. Why? First, to be capable of working on anything, you need to be able to satisfy your other human needs. Second, emotional connections are the machinery we have for building trust and cooperation, and that's something no amount of rationality can replace, as long as we're humans.
Put all of those things together and you get a "tribe". Sure, tribes also carry dangers such as death spirals [? · GW] and other toxic dynamics. But the solution isn't disbanding the tribe; that's throwing the baby out with the bathwater. The solution is doing the hard work of establishing norms that make the tribe productive and beneficial.
Replies from: FactorialCode↑ comment by FactorialCode · 2020-10-10T13:37:31.761Z · LW(p) · GW(p)
Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn't disbanding the tribe; that's throwing the baby out with the bathwater.
I think we need to be really careful with this; the dangers of becoming a "tribe" shouldn't be underestimated w.r.t. our goals. In a community focused on promoting explicit reason, it becomes far more difficult to tell apart those who are carrying out social cognition from those who are actually carrying out explicit reason, since the object-level beliefs and justifications of those doing social cognition and of those using explicit reason will be almost identical. Likewise, it becomes much easier to slip back into the social-cognition mode of thought while still telling yourself that you're still reasoning.
IMO, if we don't take additional precautions, this makes us really vulnerable to the dynamics described here. Doubly so the second we begin to rack up any kind of power, influence, or status. Initially everything looks good and everyone around you seems to be making their way along The Path™. But slowly you build up a mass of people who all agree with you on the object level, but who acquired their conclusions and justifications by following social cues. Once the group reaches critical mass, you might get into a disagreement with a high-status individual or group, and instead of using reason and letting the chips fall where they may, standard human tribal coordination mechanisms are used to strip you of your power and status. Then you're expelled from the tribe. From there, whatever mission the tribe had is quickly lost to the usual status games.
Personally, I haven't seen much discussion of mechanisms for preventing this and other failure modes, so I'm skeptical of associating myself with, or supporting, any IRL "rationalist community/village".
Replies from: vanessa-kosoy, Benito↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T14:42:04.003Z · LW(p) · GW(p)
The problems you discuss are real, but I don't understand what alternative you're defending. The choice is not between having a society and not having one. You are going to be part of some society anyway. So, isn't it better if it's a society of rationalists? Or do you advocate isolating yourself from everyone as much as possible? I really doubt that is a good strategy.
In practice, I think LessWrong has been pretty good at establishing norms that promote reason, and building some kind of community around them. It's far from perfect, but it's quite good compared to most other communities IMO. In fact, I think the community is one of the main benefits of LessWrong. Having such a community makes it much easier to adopt rational reasoning without becoming completely isolated due to your idiosyncratic beliefs.
Replies from: FactorialCode, Vladimir_Nesov↑ comment by FactorialCode · 2020-10-10T23:38:44.287Z · LW(p) · GW(p)
So full disclosure, I'm on the outskirts of the rationality community looking inwards. My view of the situation is mostly filtered through what I've picked up online rather than in person.
With that said, in my mind the alternative is to keep the community more digital, or something you go to meetups for, and to take advantage of society's existing infrastructure for social support and other things. This is not to say we shouldn't have strong norms; the comment box I'm typing this in is reminding me of many of those norms right now. But the overall effect is that rationalists end up more diffuse, with little in common other than the shared desire for whatever it is we happen to be optimizing for. This is in contrast to building something more like a rationalist community/village, where we create stronger interpersonal bonds and rely on each other for support.
The reason I say this is that, as I understand it, the rationalist community (at least the truth-seeking side) came out of a generally online culture, where disagreement is (relatively) cheap and individuals in the group don't have much obvious leverage over one another. That environment seems to have been really good for allowing people to explore and exchange weird ideas, and to follow logic and reason wherever they happen to go. It also allows people to more easily "tell it like it is".
When you create a situation where a group of rats becomes interdependent socially or economically, most of what I've read and seen indicates that you can gain quite a bit in terms of quality of life and group effectiveness, but I feel it also opens the door to the kind of "catastrophic social failure" I mentioned earlier. Doubly so if the community starts to build up social or economic capital that other agents who don't share the same goals might be interested in.
Replies from: Viliam, Hazard, vanessa-kosoy↑ comment by Viliam · 2020-10-11T21:50:30.521Z · LW(p) · GW(p)
I think you are both right about important things, and the problem is whether we can design a community that can draw benefits of mutual support in real life, while minimising the risks. Keeping each other at internet distance is a solution, but I strongly believe it is far from the best we can do.
We probably need to accept that different people will have different preferences about how strongly involved they want to become in real life. For some people, internet debate may be the optimal level of involvement. For other people, it would be something more like the Dragon Army. Others will want something in between, and probably with emphasis on different things, e.g. more about projects and less about social interaction versus more about social interaction and less about projects. (Here, social interaction is my shortcut for solving everyday problems faced by individual people where they are now, as opposed to having a coherent outside-oriented project.)
But with different levels of involvement, there is a risk that people on some level would declare people on a different level to be "not true rationalists". (Those with low involvement are not true rationalists, because they only want to procrastinate online, instead of becoming stronger and optimizing their lives. Those with high involvement are not true rationalists, because they care less about having correct knowledge, and more about belonging to a tribe and having group sex.) And if people around you prefer a different level, there will be social pressure to also choose a level that is not comfortable for you.
My vision would be a community where multiple levels of involvement are acceptable and all are considered normal. I believe it is possible in principle, because e.g. the Catholic Church is kinda like this: you have levels of involvement starting with "remembers a few memes, and visits the church on Christmas if the weather is nice" and ending with "spends the whole life isolated from the world, praying and debating esoteric topics". Except for us it would go from "heard something about biases and how map is not the territory, and visits a LW/SSC meetup once in a while" to "lives in a group house and works full-time on preventing robot apocalypse".
Plus, there are people for whom just having a group boundary as such, no matter how small, even something as vague as "identifies as a 'rationalist', whatever that word might mean", is already too much. They may actually be a majority of LW readers, who knows; they are probably overrepresented among lurkers. But even for them, the websites will continue existing approximately as they are now; and if some of them disappear, there are always other places on the internet.
tl;dr -- we need to somehow have stronger rationalist groups for those who want them, without creating social pressure on those who don't
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-11T11:30:43.098Z · LW(p) · GW(p)
First, when Jacob wrote "join the tribe", I don't think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you're saying here doesn't seem like an argument against my objection to Zack's objection to Jacob.
Second, specifically regarding Crocker's rules, I'm not a fan of them at all. I think that you can be honest and tactful at the same time, and it's reasonable to expect the same from other people.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution [LW · GW] is a real danger (if not necessarily an insurmountable one).
I will probably never have the chance to live in a rationalist village, so for me the question is mostly academic. To me, a rationalist village sounds like a good idea in expectation (for some possible executions), but the uncertainty is great. However, why not experiment? Some rationalists can try having their own village. Many others wouldn't join them anyway. We would see what comes out of it, and learn.
Replies from: FactorialCode, FactorialCode↑ comment by FactorialCode · 2020-10-11T14:03:30.963Z · LW(p) · GW(p)
I'm breaking this into a separate thread since I think it's a separate topic.
Second, specifically regarding Crocker's rules, I'm not a fan of them at all. I think that you can be honest and tactful at the same time, and it's reasonable to expect the same from other people.
So I disagree. Obviously you can't impose Crocker's rules on others, but I find it much easier and far less mentally taxing to communicate with people I don't expect to get offended. Likewise, I've gained a great deal of benefit from people very straightforwardly and bluntly calling me out when I'm dropping the ball, and I don't think they would have bothered otherwise, since there was no obvious way to be tactful about it. I also think that there are individuals out there who are both smart and easily offended, and with those individuals tact isn't really an option, as they can transparently see what you're trying to say and will take issue with it anyway.
I can see the value of "getting offended" when everyone is sorta operating on simulacra level 3 and factual statements are actually group policy bids. However, when it comes to forming accurate beliefs, "getting offended" strikes me as counterproductive, and I do my best to operate in a mode where I don't do it, which is basically Crocker's rules.
Replies from: vanessa-kosoy↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-11T14:50:07.265Z · LW(p) · GW(p)
This might be another difference of personalities; maybe Crocker's rules make sense for some people.
The problem is, different people have conflicting interests. If we all had the same utility function then, sure, communication would be only about conveying factual information. But we don't. In order to cooperate, we need not only to share information, but also reassure each other we are trustworthy and not planning to defect. If someone criticizes me in a way that disregards tact, it leads me to suspect that eir agenda is not helping me but undermining my status in the group.
You can say we shouldn't do that, that it's "simulacra" and simulacra=bad. But the game theory is real, and you can't just magic it away by wishing it were different. You can try just taking it on faith that everyone is your ally, but then you'll get exploited by defectors. Or you can try to come up with a different set of norms that solves the problem. But that can't be Crocker's rules, at least it can't be only Crocker's rules.
Now, obviously you can go too far in the other direction and stop conveying meaningful criticism, or start dancing around facts that need to be faced. That's also bad. But the optimum is in the middle, at least for most people.
Replies from: FactorialCode↑ comment by FactorialCode · 2020-10-15T02:29:21.262Z · LW(p) · GW(p)
So first of all, I think the dynamics surrounding offense are tripartite. You have the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is that the underlying truth is irrelevant. Without extra social machinery, there's no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.
This might be another difference of personalities...you can try to come up with a different set of norms that solves the problem. But that can't be Crocker's rules, at least it can't be only Crocker's rules.
I think that's a big part of it. Especially IRL, I've taken quite a few steps over the course of years to mitigate the trust issues you bring up in the first place, and I rely on social circles with norms that mitigate the downsides of Crocker's rules. A good combination of integrity + documentation + choice of allies makes it difficult to legitimately criticize someone. To an extent, I try to make my actions align with the values of the people I associate myself with, I keep good records of what I do, and I check that the people I need either put effort into forming accurate beliefs or won't judge me regardless of how they see me. Then when criticism is levelled against me and/or my group, I can usually challenge it by encouraging relevant third parties to look more closely at the underlying reality, usually by directly arguing against what was stated. That way I can ward off a lot of criticism without compromising as much on truth seeking, provided there isn't a sea change in the values of my peers. This has the added benefit that it allows me and my peers to hold each other accountable for taking actions that promote each other's values.
The other thing I'm doing, which is both far easier to pull off and way more effective, is just being anonymous. When the judging party can't retaliate because they don't know you IRL, and the people calling the shots on the site respect privacy and have very permissive posting norms, who cares what people say about you? You can take and dish out all the criticism you want, and the only consequence is that you'll need to sort through the crap to find the constructive/actionable/accurate stuff. (Although crap criticism can easily be a serious problem in and of itself.)
↑ comment by FactorialCode · 2020-10-11T13:45:36.110Z · LW(p) · GW(p)
First, when Jacob wrote "join the tribe", I don't think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you're saying here doesn't seem like an argument against my objection to Zack's objection to Jacob.
So my objection definitely applies much more to a village than to less tightly bound communities, and Jacob could have been referring to anything along that spectrum. But I brought it up because you said:
Moreover, the relationships between them shouldn't be purely impersonal and intellectual. Any group endeavour benefits from emotional connections and mutual support.
This is where the objection begins to apply. The more interdependent the group becomes, the more susceptible it is to the issues I brought up. I don't think it's a big deal in an online community, especially with pseudonyms, but I think we need to be careful when you get to more IRL communities. With a village, treating it like an experiment is a good first step, but I'd definitely be in the group that wouldn't join unless explicit thought had been put into dealing with my objections, or the village had been running successfully for long enough that I became convinced I was wrong.
Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one).
So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don't know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I've vetted them. This makes it harder for social networks to put pressure on you or undermine you. 2) A group failure of rationality is far less likely to occur when it requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
Replies from: vanessa-kosoy↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-11T14:32:04.652Z · LW(p) · GW(p)
So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don't know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I've vetted them. This makes it harder for social networks to put pressure on you or undermine you.
Hmm, at this point it might be just a difference of personalities, but to me what you're saying sounds like "if you don't eat, you can't get food poisoning". "Dual identity" doesn't work for me; I feel that social connections are meaningless if I can't be upfront about myself.
A group failure of rationality is far less likely to occur when it requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
I guess? But in any case there will be many subnetworks in the network. Even if everyone adopts the "village" model, there will be many such villages.
Replies from: FactorialCode↑ comment by FactorialCode · 2020-10-15T03:04:21.335Z · LW(p) · GW(p)
Hmm, at this point it might be just a difference of personalities, but to me what you're saying sounds like "if you don't eat, you can't get food poisoning". "Dual identity" doesn't work for me; I feel that social connections are meaningless if I can't be upfront about myself.
That's probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don't completely trust, and for most practical intents and purposes I'm quite a bit more "myself" online than IRL.
But in any case there will be many subnetworks in the network. Even if everyone adopts the "village" model, there will be many such villages.
I think that's easier said than done, and that a great effort needs to be made to deal with the effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after its best members leave for another.
Likewise, even if you do keep redundancy among rationalist communities, you need to ensure that there's a mechanism that prevents them from seeing each other as out-groups, or attacking each other when they do. This is especially important since one group viewing the other as an out-group, but not vice versa, can lead to the group with the larger in-group getting exploited.
↑ comment by Vladimir_Nesov · 2020-10-10T16:32:33.375Z · LW(p) · GW(p)
I think the point is to vigilantly keep track of the distinction between skills and tribes, to avoid any ambiguity in use of these different and opposed things, to never mention one in place of the other.
Replies from: vanessa-kosoy↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T16:48:52.403Z · LW(p) · GW(p)
Skills and tribes are certainly different things; I'm not sure why they are opposed things? We should keep track of the distinction and at the same time continue building a beneficial tribe. I agree that in terms of terminology, "rationalist" is a terrible name for "member of the LessWrong-ish community", and we should use something else (e.g. LessWronger).
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-10T17:36:30.811Z · LW(p) · GW(p)
They are opposed in the sense that using one in place of the other causes trouble. For example, insisting on meticulous observation of skills would be annoying and sometimes counterproductive in a tribe, and letting tribal dynamics dictate how skills are developed would corrode quality.
Replies from: vanessa-kosoy↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T17:56:07.992Z · LW(p) · GW(p)
A tribe shouldn't insist on a meticulous observation of skills, broadly speaking, but it should impose norms on e.g. which rhetorical moves are encouraged/discouraged in a discussion, and it should create positive incentives for the meticulous observation of skills.
As to letting tribal dynamics dictate how skills are developed, I think we don't really have a choice there. People are social animals, and everything they do and think is strongly affected by the society they are in. The only choice is trying to shape this society and those dynamics to make them beneficial rather than detrimental.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2020-10-10T18:30:13.940Z · LW(p) · GW(p)
This might be possible, but should be specific to particular groups, unless there is a recipe for reproducing the norms. It's very easy for any set of beneficial norms to be trampled by tribal dynamics. The standard story is loss of fidelity, with people who care about the mission somewhat less, or who are not as capable of incarnating its purpose, coming to dominate a movement. At that point, observation of the beneficial norms turns into a cargo cult.
Thus the phenomenon of tribes seeks to destroy the phenomenon of skills. This applies to any nuanced purpose, even when it's the founding purpose of a tribe. Survival of a purpose requires an explanation, which won't be generic tribal dynamics or a set of norms helpful in the short term.
everything they do and think is strongly affected by the society
A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to the pursuit of those skills won't affect this activity strongly.
Replies from: vanessa-kosoy↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2020-10-10T19:17:58.232Z · LW(p) · GW(p)
...Thus the phenomenon of tribes seeks to destroy the phenomenon of skills
I don't think it's "the phenomenon of tribes", I think it's a phenomenon of tribes. Humans virtually always occupy one tribe or another, so it makes no more sense to say that "tribes destroy skills" than, for example, "DNA destroys skills". There is no tribeless counterfactual we can compare to.
A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to the pursuit of those skills won't affect this activity strongly.
I think any tribe affects how you pursue skills by determining which skills are rewarded (or punished), and which skills you have room to exercise.
↑ comment by Ben Pace (Benito) · 2020-10-10T19:51:09.185Z · LW(p) · GW(p)
It is definitely the case, especially in the EA community, that I'm surrounded by a lot more people who express alliance via signaling and are making nontrivial commitments, for whom I've not seen real evidence that they understand how to think for themselves or take right action without a high status person telling them to do it.
That said, I don't find it too hard myself to distinguish between such people and people where I can say "Yeah, I've seen them do real things".
↑ comment by Kenny · 2020-10-10T04:23:23.887Z · LW(p) · GW(p)
Music isn't the sole domain of people who are particularly interested in it either, but it doesn't seem "super toxic" that they might consider themselves to be, let alone refer to themselves as, 'music people'. It seems like a natural shorthand, given that that is the topic or subject around which they've organized.
And yes, it is – mostly – about the ideas. I've only been to a few meetups and generally prefer to read along and occasionally comment, but I'm open to 'joining the tribe' (or some 'band' close by) too, because it is nice to be able to socialize with people who think similarly and about the same topics.
The examples in the post about people bouncing off the community also seemed to be cases where they were bouncing off the ideas too.
Replies from: aaro-salosensaari↑ comment by Aaro Salosensaari (aaro-salosensaari) · 2020-10-18T09:59:58.275Z · LW(p) · GW(p)
The point is, the analogy fails because there is no "music people tribe" with "music meetups" organized at "MoreMusical.com". There is no Eliezer Yudkowsky of the "music tribe" (at most, everyone who appreciates Western classical music has heard about Beethoven, maybe), nor the idea that people familiar with the main ideas of music learned them from a small handful of "music sequences" and interconnected resources that reference each other.
Picking at one particular point in the OP: there are no weird sexual dynamics of music (some localized groups or cultures might have them, e.g. one could talk about the sexual culture of rock music in general, and maybe the dynamics at a particular scene, but these are not central to the pursuit of all of music, and even at the local level the culture is often very diffuse).
Music is widespread. There are several cultures of music that intersect with the wider society: no particular societal group has any claim of monopoly on teaching the appreciation or practice of music. There is so much music that there are economies of music. There are many academies, even more teachers, and untold numbers of people with varying expertise in playing instruments who apply it for fun or sometimes profit. Anyone with talent and opportunity can learn to appreciate music or play an instrument from lots of different resources.
It would be good for rationality to explicitly attempt to become like music (or scientific thinking, or mathematics, or such), because then the issue perceived by some of being an insular tribe would simply not exist.
Instead of building a single community, build a culture of several communities. After all, the idea of good, explicit thinking is universally applicable, so there is nothing in it that would necessitate a single community, is there?
Replies from: Kenny↑ comment by Kenny · 2020-10-19T01:08:53.626Z · LW(p) · GW(p)
The point is, the analogy fails because there is no "music people tribe" with "music meetups" organized at "MoreMusical.com". There is no Eliezer Yudkowsky of the "music tribe" (at most, everyone who appreciates Western classical music has heard about Beethoven, maybe) ...
Yes, there is no single 'music people tribe', but there very much are tribes for specific music (sub-)genres. (Music is huge!)
But as you point out, there are people of 'similar' stature in music generally; really much greater stature overall. And 'music' is much much much older than 'rationality'. (Music is older than history!) And I'd guess it's inherently more interesting to many many more people too.
... nor the idea that people familiar with the main ideas of music learned them from a small handful of "music sequences" and interconnected resources that reference each other.
I don't consider 'the sequences' or LW to be essential, especially now. The same insights are available from a lot of sources already and this should be more true in the future. It was, and perhaps is, a really good intro to what wasn't previously a particularly coherent subject.
Actual 'rationality' is everywhere. There was just no one persistently pointing at all of the common phenomena, or at least not recently and in a way that's accessible to (some) 'laypeople'.
But I wouldn't be surprised if there is something like a 'music sequences', e.g. a standard music textbook. I'd imagine 'music theory' or music pedagogy are in fact "interconnected resources that reference each other".
Again, if it wasn't already clear, the LW sequences are NOT essential for rationality.
Picking at one particular point in the OP: there are no weird sexual dynamics of music (some localized groups or cultures might have them, e.g. one could talk about the sexual culture of rock music in general, and maybe the dynamics at a particular scene, but these are not central to the pursuit of all of music, and even at the local level the culture is often very diffuse).
There are no weird "sexual dynamics" in rationality – based on MY experience. I don't know why the people who publicly write about that sort of thing must define everyone else who's part of the overall network. I certainly don't consider any of it central to rationality.
I don't even know that "weird sexual dynamics" is a common feature of LW meetups, let alone other 'rationality'-related associations.
Music is widespread. There are several cultures of music that intersect with the wider society: no particular societal group has any claim of monopoly on teaching the appreciation or practice of music. There is so much music that there are economies of music. There are many academies, even more teachers, and untold numbers of people with varying expertise in playing instruments who apply it for fun or sometimes profit. Anyone with talent and opportunity can learn to appreciate music or play an instrument from lots of different resources.
Rationality, in the LW sense, could be all of these things. At least give it a few hundred years! Music is old.
And no one has a monopoly on rationality. If anything, LW-style rationality is competing with everything else; almost everything else is implicitly claiming to help you either believe truths or act effectively.
It would be good for rationality to explicitly attempt to become like music (or scientific thinking, or mathematics, or such), because then the issue, perceived by some, of being an insular tribe would simply not exist.
I agree! We should definitely try to become 'background knowledge', or at least as diffuse and widespread as mathematics! I think this is already happening, and that it's more widely known than it was. I may have assumed that anyone reading my comment knew (or believed) that too.
Instead of building a single community, build a culture of several communities. After all, the idea of good, explicit thinking is universally applicable, so there is nothing in it that would necessitate a single community, is there?
I agree! And again, I think this has already happened to an extent. I'm not a part of any rationality 'community'; not in the sense you've described. I think that's true for most of the people interested in this.
But, in case it's still not clear, I do NOT think rationality should or must be 'a single community'.
What I was pointing out is that if there were something named "music club", or you observed someone describe themselves as a 'music lover', it wouldn't be a big deal.
I also wrote that "I'm open to 'joining the tribe' (or some 'band' close by)". I meant 'tribe' in the sense I think you mean 'culture' in "a culture of several communities". I meant 'band' in the sense of some – not the – real-world group of people who at least meet up regularly (and are united by at least a common interest in rationality).
Now I'm wondering where people get the idea that 'rationality' is any kind of IRL organization centered around, or run by, Eliezer Yudkowsky. I think there are way more of us who aren't members of any such organization, beyond being users of this site or readers of 'the diaspora'.
Replies from: aaro-salosensaari↑ comment by Aaro Salosensaari (aaro-salosensaari) · 2020-10-19T10:39:28.972Z · LW(p) · GW(p)
I do not feel like writing a point-by-point response; it seems we are in agreement on many issues, but maybe not all.
Some paragraph-sized points I want to elaborate on, however:
1. If it is not clear, in my comment I was not trying to argue against your positions in particular. It was more in support of the idea, expressed upthread, that building up too much of an attitude that there is an identifiable "Rationality Tribe" is a net negative.
(1b. Negative both for the objective of raising the general societal sanity waterline and for the tribespeople's own rationality. I especially feel the point -- can't find the link to the comment on my phone -- that in a close-knit society where many opinions obtained by explicit thought are expressed, it can become difficult to disentangle which of my individual opinions I have reached by my own explicit thought, or by agreeing with others because I agree with their logic, and which I am agreeing with because my social mind wants to agree or disagree with some specific individuals or a "group consensus".)
2. One of the reasons I picked the sexual dynamics example is that the OP mentions it in a figure caption as a joke. Nevertheless, it is an indication that, at least in the OP, the Tribe in question is thought of not as existing in, e.g., some abstract idea-space, but as a specific group of people living near enough to each other to have sexual dynamics.
3. I find myself disagreeing with the idea that rationality-in-general (in contrast with the LW-originated social group) is a new innovation. From a near-history perspective, the first example that comes to mind: John Allen Paulos published Innumeracy in 1988; I read it as a kid in the '00s, when I had no internet and LW did not exist, but it tickled the same parts of my brain as the many ideas about putting numbers on arguments floating around LW-adjacent thoughtspace. From a long-term perspective, I'd argue that the attempt to improve the human capacity for rational thought is part of the grand scientific project and tradition that goes back to Socrates.
4. I also think that having social groups organized around common interests is good. I got started in local-area SSC meetups because I was interested in talking with people interested in AI, science, philosophy, and other such things I assumed readers of the SSC blog would be interested in. (Maybe this would be "joining a band" in the metaphor.)
5. Writing and disseminating resources that help with better thinking is a good and worthwhile project. It is also quite natural that like-minded people seek each other's company, resulting in a community. (Of which there are and can be many kinds: up until the late 20th century, there was an intellectual community of "men of letters" who primarily wrote letters to each other if they did not live near enough for regular in-person discussion.)
6. The part that seems problematic (and the complaint this comment thread is about) is the point where it looks like the Bay Area community (or some members thereof) treats itself as having a kind of weird cultural or intellectual monopoly over the principles of rationality, as the Rationality Community With Capital Letters, whose members tacitly assume that, after learning about Rationality, others would want to join exactly their "tribe", instead of assuming more pluralistic outcomes.
This brings me back to your analogy that inspired me to claim rationality is not yet like music: in music terms, the people most focused on tribes and communities would talk not of having a music community in the Bay Area, but of The Music Community.
comment by romeostevensit · 2020-10-09T21:07:57.077Z · LW(p) · GW(p)
The signaling commons is full of noise of the form 'do this thing and you'll win.' What do costly signals look like here? Many of the traditional ones have fallen apart as things are changing so fast that no one listens to older folks who could in the past have told you the outcomes of different strategies.
Replies from: ioannes_shade↑ comment by ioannes (ioannes_shade) · 2020-10-09T22:50:34.927Z · LW(p) · GW(p)
NXIVM had much recruiting success by training people on techniques that actually helped them quickly solve their present problems.
(NXIVM is a deeply problematic organization which contained a secret cult and in many ways should not be emulated.)
comment by Sammy Martin (SDM) · 2020-10-19T17:07:22.990Z · LW(p) · GW(p)
The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.
...
We started discussing the virus and raising the alarm in private back in January. By late February, as American health officials were almost unanimously downplaying the threat, we wrote posts on taking the disease seriously, buying masks, and preparing for quarantine [LW · GW].
...
The rationalists pwned COVID
This isn't true. We did see it coming more clearly than most of the governmental authorities, and we were certainly ahead of public risk communication, but on average we were fairly similar to, or even a bit behind, the actual domain experts.
This article summarizes interviews with epidemiologists on when they first realized COVID-19 was going to be a huge catastrophe and how they reacted. The dates range from January 15th, with the majority in mid-to-late February. See also this tweet from late February, from a modeller working with the UK's SAGE, confirming he thinks uncontrolled spread is taking place.
I have an email dated 27 Feb 2020 replying to a colleague: "My thoughts on Covid-19 - pandemic is very likely." It was such a dry, intellectual statement, and I remember feeling incredulous that I could write those words with such ease and certainty while feeling total uncertainty and fear about how this could play out.
...
Two moments stand out for me. One was in the first week of February, when I saw early signals that there could be substantial transmission before people show symptoms. Despite hopes of rapid containment, it was clear contact tracing alone would struggle to contain outbreaks
...
On January 23, I was at an NIH meeting related to potential pandemic pathogen research. Everyone had heard the news [that Wuhan had been locked down] and was beginning to discuss whether this would be a big deal. Over the next several weeks the concern turned more grave.
I believe February 27th was the same day as 'Seeing the Smoke', when it became accepted wisdom around here that coronavirus would be a huge catastrophe. Feb 27th was a day before I said I thought this would be a test-run for existential risk [LW(p) · GW(p)]. And in late January, we were in the same position as the NIH, 'beginning to discuss whether this would be a big deal' without certainty. The crucial difference was understanding the asymmetric risk: 'A failure, but not of prediction'.
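(To make the asymmetric-risk logic concrete with entirely made-up numbers, not anyone's actual estimates: suppose in late February you put only p = 0.1 on a severe pandemic, preparing costs C = $500, and being caught unprepared costs L = $50,000. The expected loss avoided by preparing is p × L = 0.1 × $50,000 = $5,000, an order of magnitude more than C = $500. A 10% forecast and a 90% forecast recommend the same action; you don't need a confident prediction to justify preparing.)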
So why didn't the domain experts do anything, if so? I've been reading the book Rage by Bob Woodward, which includes interviews with Fauci and other US officials from January and February. From as early as the end of December, there was a constant emphasis on how demanding strict measures early would be 'useless' and achieve nothing!
I'm growing to think that a lot of health experts had an implicit understanding that the systems around them in the West were not equipped to carry out their best plans of action. In other words, they saw the smoke under the door, decided that if they yelled 'fire' before it had filled up the room nobody would believe them, and so waited a bit before yelling 'fire'. But since we weren't trying to produce government policy, we weren't subject to the same limitations.
Replies from: snog toddgrass, Jacobian↑ comment by snog toddgrass · 2020-10-19T21:50:26.460Z · LW(p) · GW(p)
Thanks for this well-researched comment.
I'm growing to think that a lot of health experts had an implicit understanding that the systems around them in the West were not equipped to carry out their best plans of action. In other words, they saw the smoke under the door, decided that if they yelled 'fire' before it had filled up the room nobody would believe them, and so waited a bit before yelling 'fire'.
I believe you that the experts rationalize their behavior like so. The problem is that underselling a growing emergency was a terrible advocacy plan. Maybe it covered their asses, but it screwed over their stakeholders by giving us less time to prepare.
Their argument really proves too much. For example, the Wuhan authorities could also use it to justify the disastrous coverup.
↑ comment by Jacob Falkovich (Jacobian) · 2020-10-19T20:29:08.309Z · LW(p) · GW(p)
Yes, really smart domain experts were smarter and earlier but, as you said, they mostly kept it to themselves. Indeed, the first rationalists picked up COVID worry from private or unpublicized communication with domain experts, did the math and sanity checks, and started spreading the word. We did well on COVID not by outsmarting domain experts, but by coordinating publicly on what domain experts (especially any with government affiliations) kept private.
comment by Yoav Ravid · 2022-01-09T10:53:28.549Z · LW(p) · GW(p)
I remember this post very fondly. I often thought back to it, and it inspired some thoughts of my own about rationality (which I had trouble writing down, and which are waiting in a draft to be written out fully some day). I haven't used any of the phrases introduced here (Underperformance Swamp, Sinkholes of Sneer, Valley of Disintegration...), and I'm not sure whether they were meant to catch on.
The post starts with the claim that rationalists "basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts". Since it's not the point of the post I won't review this claim in depth, but it seems basically true to me. Elizabeth's review [LW(p) · GW(p)] here gives a few examples.
This post is about the difficulty and even danger in becoming a rationalist, or more generally, in using explicit reasoning [? · GW] (Intuition and Social Cognition being the alternatives).
The first difficulty is that explicit reasoning alone often fails to outperform intuition and social cognition where those perform well. I think this is true, and as the rationality community evolved it came to appreciate intuition and social cognition more, without devaluing explicit reason.
The second is persevering through the sneers and social pressure that come from trying to use explicit reason to do things, often arriving at very different approaches from other people, and often also failing.
The third is navigating the strange status hierarchy in the community, which depends less on regular things like attractiveness and more on one's ability to apply explicit reason effectively, as well as coping with scary memes like AI risk and cryonics. I don't know to what extent the first part is true in the physical communities, but it definitely is in the virtual community.
The fourth is where the danger comes in. When you're in the Valley of Bad Rationality [? · GW] your life can get worse, and if you don't get out of it some way it might stay worse. So people either try to go back, which may or may not work, or try to go through and become even better at explicit reasoning, which may or may not work (the author says it did for him).
The main error this post points at is the failure mode of treating Explicit Reasoning as the only tool in your belt, instead of one tool among many, one that is also used to improve the other tools. And the main difficulty is one of motivation and perseverance.
I think this post is important reading for a young rationalist, as well as for more mature rationalists, so they can better judge who is suited to learning rationality and who would have trouble with it.
I think the advice the post gives to people already on the path is good:
try again after the early failures, ignore the sneers, find a community with good norms, and don’t let the memes scare you — it all adds up to normalcy in the end. Let reason be the instrument that sharpens your other instruments, not the only tool in your arsenal.
comment by Ben Pace (Benito) · 2020-10-09T19:20:01.425Z · LW(p) · GW(p)
Gosh, I love this post immediately. Thanks for saying all of these things. I don't know why nobody said them all at once before. I expect to link this to friends a bunch in the future.
comment by Ben Pace (Benito) · 2020-10-28T19:50:29.719Z · LW(p) · GW(p)
Curated. A lot of this was valuable, simple-language discussion of rationality, and the difficulties and costs associated with trying to become more rational. I expect this will inform my discussion of rationality going forward, and I'll likely link to it a lot. Furthermore, it's a very well put-together post. The images are great, the names are catchy, and it's very readable. I also found valuable much of the discussion under the post.
There were somewhat navel-gazing and narrative-building elements at the start that I'm not interested in curating, and for me weighed against curating the post. If there were more curated posts with content like that then I may not have curated this. But the rest of the content here was excellent, and I remain very excited about the post and its curation.
comment by Linch · 2020-11-02T05:37:29.724Z · LW(p) · GW(p)
The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.
Based on anecdotal reports, I'm not convinced that rationalist social media early on was substantially better than educated Chinese social media. I'm also not convinced that I would rather have had rationalists in charge of the South Korean or Taiwanese responses than the actual people on the ground.
It's probable that this group did better than many Western authorities, but that bar, like the one at Kalia Beach, Palestine, is not very high.
I think it is true that in important ways the rationalist community did substantially better than plausible "peer" social groups, but nonetheless, ~2 million people still died, and the world is probably worse off for it.
And yet, we basically got everything about COVID-19 right
This specifically is quite surprising to me. I have a list of >30 mistakes I've made about covid*, and my impression is that I'm somewhat above average at getting things right. Certainly my impression is that some individuals seem to be noticeably more accurate than me (Divia Eden, Rob Wiblin, Lukas Gloor, and several others come to mind), but I would guess that a reasonably high fraction of people in this community are off by at least as much as I am, were they to venture concrete predictions.
(I have not read most of the post so I apologize if my points have already been covered elsewhere).
* I have not updated the list much since late May. If I were to do so, I suspect the list would at least double in size.
comment by Antonius Westerbrok · 2020-10-31T05:04:38.371Z · LW(p) · GW(p)
I was behind the curve on COVID, but the Seattle Rationalists are my tribe, so the social cognition I got from them had me leading the pack among the other groups I interact with (i.e., pushing my company to start work-from-home earlier, and getting extended family to cancel large family events).
comment by KvmanThinking (avery-liu) · 2024-11-14T23:23:56.266Z · LW(p) · GW(p)
And I’m not even mentioning the strange sexual dynamics
Is this a joke? I'm confused.
comment by Teerth Aloke · 2020-11-04T14:02:49.971Z · LW(p) · GW(p)
Taking the pandemic seriously was not unique to the rationalist community. Many people, including my father, began to take the pandemic seriously in late January. He avoided any major travel and largely remained at home. He began to wear masks from early March, when cases were few. This is in India, not the USA.
comment by JJC · 2020-10-29T00:23:17.825Z · LW(p) · GW(p)
In my experience, most people hear/see/think/believe what they want to hear/see/think/believe. Searching for validation is a lot more fun than seeking an uncomfortable counterfactual that exposes possible weaknesses or errors in your thinking and, thus, your position. It's humbling and hard work to want to know if you're wrong, but for some it's the only satisfying path. Looking forward to being a little less wrong tomorrow...
comment by velcro · 2020-10-30T00:03:33.668Z · LW(p) · GW(p)
I'll be honest: I almost stopped reading when you said "Throughout March, the CDC was telling people not to wear masks and not to get tested unless displaying symptoms" as an example of how they got it wrong.
The reality is that they did not encourage people to buy masks initially, because the very credible concern was that the public would hoard masks that were in short supply for the people who absolutely needed them immediately. As soon as supplies were available, they recommended masks for the public.
And similarly, the shortage of testing drove the very temporary discouragement of symptom-free people running out and getting tests.
Are you aware of these valid explanations?
Then I saw this: "When the article was written, prepping for COVID was associated with low-status China-hating reactionaries. The social role of progressive academics writing in progressive media was to mock them, and the good professor obliged. In February people like Sunstein mocked people for worrying about COVID in general, in March they mocked them for buying masks,"
If you have specific evidence of the claim about "progressive academics", please let me know.
Absent that evidence, this seems like a gross generalization. The point could have been made without your politically motivated mocking. I ought not let this sort of thing prevent me from gleaning any insight from the rest of the article, but I just can't stomach it, so I stopped there.