I've been enjoying the Sold a Story podcast, which explains how many schools stopped teaching kids to read over the last few decades, replacing phonics with an unscientific theory that taught kids to pretend to read (cargo cult vibes). It features a lot of teachers and education scholars who come face-to-face with evidence that they've been failing kids, and respond in many different ways — from pro-phonics advocacy and outright apology to complete refusal to engage. I especially liked one teacher musing on how disconcerting it was to realize her colleagues were "refuse to engage" types.
The relatable topic and straightforward reporting make the podcast very accessible. It's a good way to share a story with people outside the LessWrong bubble that may get them angry in a way that supports rationalist virtues.
Non-anonymous reacts feel less scary to me as a writer, and don't feel scary to me as a reactor, though I'd expect most people to be more nervous about publicly sharing a negative reaction than I am.
Overall, inline anonymous reacts feel better to me than named non-inline reacts. I care much more about getting specific feedback on my writing than seeing which specific people liked or disliked it.
This post led me to remove Chrome from my phone, which gave me back a few productive minutes today. Hoping to keep it up and compound those minutes into a couple of solid workdays over the rest of the year. Thanks for the inspiration!
On the Devil's Advocate side: "Wife" just rolls off the tongue in a way "husband" doesn't. That's why we have "wife guys" and "my wife!" jokes, but no memes that do much with the word "husband". (Sometimes we substitute the one-syllable word "man", as in "it's raining men" or "get you a man who can do both".)
You could also parse "wife years" as "years of being a wife" from the female perspective, though of course this still fails to incorporate couples where no wife-identifying person is involved.
...so it doesn't work well in a technical sense, but it remains very catchy.
Thanks for the further detail. It sounds like this wasn't actually a case of "no one in EA has funded X", which makes my list irrelevant.
(Maybe the first item on the list should be "actually, people in EA are definitely funding X", since that's something I often find when I look into claims like Christian's, though it wasn't obvious to me in this case.)
Thanks for sharing a specific answer! I appreciate the detail and willingness to engage.
I don't have the requisite biopolitical knowledge to weigh in on whether the approach you mentioned seems promising, but it does qualify as something someone could have been doing pre-COVID, and a plausible intervention at that.
My default assumptions for cases of "no one in EA has funded X", in order from most to least likely:
1. No one ever asked funders in EA to fund X.
2. Funders in EA considered funding X, but it seemed like a poor choice from a (hits-based or cost-effectiveness) perspective.
3. Funders in EA considered funding X, but couldn't find anyone who seemed like a good fit for it.
4. Various other factors, including "X seemed like a great thing to fund, but would have required acknowledging something the funders thought was both true and uncomfortable".
In the case of this specific plausible thing, I'd guess it was (2) or (3) rather than (1). While anything involving China can be sensitive, Open Phil and other funders have spent plenty of money on work that involves Chinese policy. (CSET got $100 million from Open Phil, and runs a system tracking PRC "talent initiatives" that specifically refers to China's "military goals" — their newsletter talks about Chinese AI progress all the time, with the clear implication that it's a potential global threat.)
That's not to say that I think (4) is impossible — it just doesn't get much weight from me compared to those other options.
FWIW, as far as I've seen, the EA community has been unanimous in support of the argument "it's totally fine to debate whether this was a lab leak". (This is different from the argument "this was definitely a lab leak".) Maybe I'm forgetting something from the early days when that point was more controversial, or I just didn't see some big discussion somewhere. But when I think about "big names in EA pontificating on leaks", things like this and this come to mind.
*****
Do you know of anyone who was trying to build out the gain-of-function project you mentioned during the time before the pandemic? And whether they ever approached anyone in EA about funding? Or whether any organizations actually considered this internally?
Thanks for sharing your experience.
I've been writing the EA Newsletter and running the EA Forum for three years, and I'm currently a facilitator for the In-Depth EA Program, so I think I've learned enough about EA not to be too naïve.
I'm also an employee of Open Philanthropy starting January 3rd, though I don't speak for them here.
Given your hypothetical and a few minutes of thought, I'd want Open Phil to write the check. It seems like an incredible buy given their stated funding standards for health interventions and reasonable assumptions about the "fewer manufacturing plants" counterfactual. (This makes me wonder whether Alexander Berger is among the leaders you mentioned, though I assume you can't say.)
Are any of the arguments that you heard against doing so available for others to read? And were the people you heard back from unanimous?
I ask not in the spirit of doubt, but in the spirit of "I'm surprised and trying to figure things out".
(Also, David Manheim is a major researcher in the EA community, which makes the whole situation/debate feel especially strange. I'd guess that he has more influence on actual EA-funded COVID decisions than most of the people I'd classify as "EA leaders".)
Nearly two years into the pandemic, the core EA organizations still seem to show no sign of caring that they didn't prevent it, despite their mission including fighting biorisks.
Which core organizations are you referring to, and which signs are you looking for?
This has been discussed to some extent on the Forum, particularly in this thread, where multiple orgs were explicitly criticized. (I want to see a lot more discussions like these than actually exist, but I would say the same thing about many other topics — EA just isn't very big and most people there, as anywhere, don't like writing things in public. I expect that many similar discussions happened within expert circles and didn't appear on the Forum.)
I worked at CEA until recently, and while our mission isn't especially biorisk-centric (we affect EA bio work in indirect ways, on multi-year timescales), our executive director insisted that the opening talk at the EA Picnic mention that EA clearly fell short of where it should have been on COVID. It's not much, but I think it reflects a broader consensus that we could have done better and didn't.
That said, the implication that EA not preventing the pandemic is a problem for EA seems reasonable only in a very loose sense (better things were possible, as they always are). Open Phil invested less than $100 million into all of its biosecurity grants put together prior to February 2020, and that's over a five-year period. That this funding (and direct work from a few dozen people, if that) failed to prevent COVID seems very unsurprising, and hard to learn from.
Is there a path you have in mind whereby Open Phil (or anyone else in EA) could have spent that kind of money in a way that would likely have prevented the pandemic, given the information that was available to the relevant parties in the years 2015-2019?
Doing so would require asking uncomfortable questions and accepting uncomfortable truths, and there seems to be no willingness to do so.
I find this kind of comment really unhelpful, especially in the context of LessWrong being a site about explaining your reasoning and models.
What are the uncomfortable questions and truths you are talking about? If you don't even explain what you mean, it seems impossible to verify your claim that no one was asking/accepting these "truths", or even whether they were truths at all.
Reminds me of an old essay I wrote (not fully representative of Aaron!2021) about experiences with a dog who lived with a family but not other dogs, and could never get enough stimulation to meet his needs. A section I think still holds up:
The only “useful” thing he ever fetches is the newspaper, once per day. For thirty seconds, he is doing purposeful work, and his family is genuinely thankful for his help. But every other object he’s fetched has been something a person threw, for the express purpose of fetching. We all smile at him out of politeness or vague amusement and keep throwing the tennis balls and rubber bones, so he gets a constant stream of positive reinforcement for fetching.
This means his life is built around convincing people to throw things, and then bringing the things back to be thrown again. Literally running in circles. I’ve seen him play fetch for well over an hour before getting tired, taking a short break, drinking some water, and then coming back for more fetch.
And he really believes that his fetching is important: When a tennis ball rolls under a couch and he can’t reach it, he’ll sniff around as though it were lost. If he smells it, he’ll paw frantically trying to reach it. If he can’t, he’ll stand there looking miserable until someone reaches under and takes out the ball.
(I wonder how he feels in those moments: An impending sense of doom? Fear that the ball, lost out of sight, may cease to exist? A feeling of something-not-finished, as when a melody is cut short before the final note?)
Sounds great, thanks!
Would you be interested in crossposting this to the EA Forum? I think your points are equally relevant for those discussions, and I'd be interested to see how posters there would react.
As a mod, I could also save you some time by crossposting it under your account. Let me know if that would be helpful!
Epistemic status: Neither unique nor surprising, but something I felt like idly cataloguing.
An interesting example of statistical illiteracy in the field: This complaint thread about the shuffling algorithm on Magic: the Gathering Arena, a digital version of the card game. Thousands of unique players seem to be represented here.
MTG players who want to win games have a strong incentive to understand basic statistics. Players like Frank Karsten have been working for years to explain the math behind good deckbuilding. And yet, the "rigged shuffler" is a persistent belief even among reasonably engaged players; I've seen quite a few people try to promote it on my stream, which is not at all aimed at beginners.
(The shuffler is, of course, appropriately random, save for some "hand smoothing" in best-of-one matches to increase the chance of a "normal" draw.)
A few quotes from the thread:
How is that no matter how many people are playing the game, or how strong your deck is, or how great your skill level, I bet your winning percentage is 30% or less. This defies the laws of probability.
(No one ever seems to think the shuffler is rigged in their favor.)
As I mentioned in a prior post you never see these problems when they broadcast a live tournament.
(People who play in live tournaments are much better at deckbuilding, leading to fewer bad draws. Still, one recent major tournament was infamously decided by a player's atrocious draw in the last game of the finals.)
In the real world, land draw will not happens as frequent as every turns for 3 times or more. Or less than 2 to 3 turns, not drawing a land
(Many people have only played MTG as a paper game when they come to Arena. In paper, it's very common for people to "cheat" when shuffling by sorting their initial deck in a particular way, even with innocuous intent. When people are exposed to true randomness, they often can't tolerate it.)
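To make this concrete, here's a quick simulation -- not Arena's actual shuffler, just a uniformly random shuffle of a hypothetical 60-card deck with 24 lands -- estimating how often the "impossible" streaks from that quote show up in the first ten draws of a game:

```python
import random

def has_run(draws, value, length=3):
    """True if `draws` contains `length` consecutive cards equal to `value`."""
    run = 0
    for card in draws:
        run = run + 1 if card == value else 0
        if run >= length:
            return True
    return False

def streak_odds(trials=100_000, lands=24, deck_size=60, turns=10):
    """Under a fair shuffle, how often do the first `turns` draws
    (after a 7-card opening hand) contain 3+ lands in a row, or
    3+ non-lands in a row?"""
    land_hits = spell_hits = 0
    for _ in range(trials):
        deck = [True] * lands + [False] * (deck_size - lands)
        random.shuffle(deck)
        draws = deck[7:7 + turns]
        land_hits += has_run(draws, True)
        spell_hits += has_run(draws, False)
    return land_hits / trials, spell_hits / trials

land_p, spell_p = streak_odds()
print(f"3+ lands in a row:     {land_p:.1%}")
print(f"3+ non-lands in a row: {spell_p:.1%}")
```

Both kinds of streak are routine under a fair shuffle; they only look rigged if you expect randomness to be smooth.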
Other common conspiracy theories about Arena:
- "Rigged matchmaking" (the idea that the developers somehow know which decks will be good against your deck, and ensure that you are matched up against it; again, I never see this theory in reverse)
- "Poker hands" (the idea that people get multiple copies of a card more often than would be expected)
- "50% bias" (the idea that the game arranges good/bad draws to keep players at a 50% win rate; admirably, these players recognize that they do draw well sometimes, but they don't understand what it means to be in the middle of a binomial distribution)
- Consider cross-posting this question to the EA Forum; discussion there is more focused on giving, so you might get a broader set of answers.
- Another frame on this question: "How can one go about evaluating the impact of a year's worth of ~$500 donations?" If you're trying to get leverage with small donations, you might expect a VC-like set of returns, where you can't detect much impact from most donations but occasionally see a case of really obvious impact. If you spend an entire year making, say, a dozen such donations, and none of them makes a really obvious impact, that's a sign that you either aren't having much impact or don't have a good way to measure it (in either case, it's good to rethink your giving strategy; the sketch after this list puts rough numbers on this).
- You could also try making predictions -- "I predict that X will happen if I give/don't give" -- and then following up a few months later. What you learn will depend on what you predict, but you'll at least be able to learn more about whether your donations are doing what you expect them to do.
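On the "VC-like returns" frame above, a toy calculation -- the per-donation hit rates here are made-up illustrative numbers, not estimates for any real charity:

```python
# Chance of seeing zero "really obvious" wins across 12 donations,
# for a few hypothetical per-donation hit rates (illustrative only).
for p in (0.05, 0.10, 0.20, 0.30):
    p_zero = (1 - p) ** 12
    print(f"hit rate {p:.0%} per donation -> "
          f"P(0 obvious wins in 12) = {p_zero:.1%}")
```

So a year of silence is unremarkable if your true hit rate is a few percent, but it's fairly strong evidence against a hit rate of 20-30%.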
Here's a link to Manson's announcement. Always good to see more people trying to swim up toward the sanity waterline.
Hey there!
You might want to post this over on the Effective Altruism Forum, which is built with the same structure as LessWrong but is focused entirely on EA questions (both about ways to do good and about community-building work like that of EA KC). I'm a moderator on that forum, and I think folks over there will be happy to help with your questions about organizing a group.
Edit: I see that you also asked this question on r/EffectiveAltruism. I like all the links people shared on that post!
How best to grow the EA movement is a complex question that many people have been working on for a long time. There's also a lot of research on various aspects of social movement growth (though less that's EA-specific).
I don't have the bandwidth to send a lot of relevant materials now, but I'd recommend you post your question on the EA Forum (which is built for questions like this), where you're more likely to get answers from people involved in community work.
To give a brief summary of one important factor: While the basic principles of EA aren't difficult to convey persuasively, there's a big gap between "being persuaded that EA sounds like a good thing" and "making large donations to effective charities" or "changing one's career". As part of my job at the Centre for Effective Altruism, I track mentions of EA on Twitter and Reddit, and I frequently see people citing "effective altruism" as the reason they give to (for example) their local animal shelter. EA is already something of a buzzword in the business and charitable communities, and promoting it to broad audiences risks letting the term drift even further from its intended meaning.
...but of course, this is far from the full story.
(If you do post this to the Forum, I'll write an answer with more detail and more ideas, but I'd prefer to wait until I think my response will be seen by more people focused on EA work, so that they can correct me/add to my thoughts.)
I don't think I've seen this point made in the discussion so far, so I'll note it here: Anonymous downvotes (without explanation) are frustrating, and I suspect that anonymous negative reacts would be even worse. It's one thing if someone downvotes a post I thought was great with no explanation -- trolls exist, maybe they just disagreed, whatever, nothing I can do but ignore it. If they leave an "unclear" react, I can't ignore that nearly as easily -- wait, which point was unclear? What are other people potentially missing that I meant to convey? Come back, anon!
(This doesn't overshadow the value of reacts, which I think would be positive on the whole, but I'd love to see Slashdot-style encouragement for people to share their reasoning.)
The growth of lots and lots of outlets for more “unofficial” or “raw” self-expression — blogs, yes, but before that cable TV and satellite radio, and long before that, the culture of “journalism” in 18th century America where every guy with a printing press could publish a “newspaper” full of opinions and scurrilous insults — tends to go along with more rudeness, more cursing, more sexual explicitness, more political extremism in all directions, more “trashy” or “lowest common denominator” media, more misinformation and “dumbing down”, but also some innovative/intellectual “niche” media.
Chaos is a centrifugal force; it increases the chance of any unexpected outcome. Good things, bad things, existential threats, brilliant ideas, and a lot of weird, gross, and disturbing stuff.
The idea of an "anti-chaos elite" sounds fairly accurate to me, and it shows up a lot in the work of Thaddeus Russell, who wrote a book about American elites' history of stamping out rude/chaotic behavior and runs a podcast where he interviews a wide range of people on the fringes of polite society (including libertarians, sex workers, anarchists, and weird people with no particular political affiliation). It's not perfect from an epistemic standpoint, but it's still worth a listen from anyone interested in this topic.
Looks like you already posted on the EA Forum, but in case anyone else spots this post and has the same question:
I'm an EA Forum moderator, and we welcome half-baked queries! Just like LessWrong, we have a "Questions" feature people can use when they want feedback/ideas from other people.
I have taken the survey.
Comment: "90% of humanity" seems a little high for "minimum viable existential risk". I'd think that 75% or so would likely be enough to stop us from getting back out of the hole (though the nature of the destruction could make a major difference here).
I took part in the Good Judgment Project, a giant prediction market study from Philip Tetlock (of "Foxes and Hedgehogs" theory). I also blogged about my results, and the heuristics I used to make bets:
http://aarongertler.net/good-judgment-project/
I thought it might be of interest to a few people -- I originally learned that I could join the GJP from someone I met at CFAR.
I wrote a pair of essays (and a shorter summary of both) on heroic responsibility, and how it could serve as a strong counterpart to empathy as a one-two punch for making good moral decisions:
http://aarongertler.net/heroism/
Seemed Less-Wrong-ish, though my "heroic responsibility" is written for a different audience than Eliezer's, and is a bit less harsh/powerful as a result.
This is the best article on EA and religion that I've seen so far, and uses selective Bible quotes to make points:
https://www.givingwhatwecan.org/blog/2014-12-02/christianity-and-giving
Of course, you can use selective Bible quotes to make nearly any point, so this probably won't work if framed as a counterargument. Perhaps you can just show it to your cofounders and ask what they think, as the beginning of a discussion about what God might want or what Christians owe to non-Christians.
But I second MattG's advice that leaving is probably advisable, particularly if the above goes nowhere.
"Applause, n. The echo of a platitude."
--Ambrose Bierce, The Cynic's Word Book
I will second The Charisma Myth and The Flinch. I have mixed feelings about Never Eat Alone, but if you live in a large city/on a college campus, Ferrazzi's advice is likely worth reading.
Yeah, this was disappointing to me as well. My feeling is that he's an "any publicity is good publicity" type, which could be seen as seedy (he has a book/classes to sell) or safe (he thinks he knows how to save a ton of time on exercising and prevent a lot of silly injuries, he wants as many people as possible to stay healthy). Having read a lot of his stuff and watched some of his talks, my beliefs tend towards the second, but it's unclear.
I gave $50, and plan to give substantially more within a year of graduation. That was one hell of a "big picture" section, Anna.
Teacher: So if you could live to be any age you like, what would it be?
Boy 2: Infinity.
Teacher: Infinity, you would live for ever? Why would you like to live for ever?
Boy 2: Because you just know a lot of people and make lots of new friends because you could travel to lots of countries and everything and meet loads of new animals and everything.
--Until (documentary)
Search "Rationality Diaries" on LW to see a huge archive of examples from recent years. (Those are places where users upload recent stories of victory from their lives.)
Done! The survey has been a progressively smoother experience each of the past three years. And it's nice to have time to think about the past month's habits in a structured way during the school year.
I use a cardboard desk from chairigami.com. Single-surface, but I'm in the process of setting something up for less neck strain. The desk itself was very cheap and portable.
Wanted to experiment with working more often while standing (since I estimated a 40-50% chance this would be a good overall choice, between potential health gains and potential productivity gains). Winced at the thought of buying a $100 piece of furniture that would make this possible. Realized that this equated to about 25 cents a day, even at a relatively conservative value of how often I'd use it. And I would absolutely pay 25 cents per day to RENT this thing.
And now I own the thing! And I'm happy every time I see it, and so far I feel good on days when I use it. Odd that one of my lasting gains from CFAR is being better at spending money.
I'd think it wouldn't be too hard to have a selective set of clients. A single screening interview makes sense here, and might even help appeal to parents who want to think that their child is being treated as special -- which wouldn't be a bad thing, if the child actually was special.
As an SAT tutor, I've tried to impart life lessons along with bubble-filling lessons (on how to look at tests in general, how to hack studying, etc.), but the scope of those has necessarily been limited, both by the demands of the SAT and by the types of students I work with (I do more 1100-to-1500 transitions than 2000-to-2300).
Still, I feel that the "life lessons + advice for incoming college students" part of my work is much more valuable than the basic subject tutoring. And parents don't seem to object to my sharing their "turf" as far as lessons go. But this may be because I'm still young enough (20) to seem more like a high school student than a surrogate parent. And the life lessons were always a bonus in addition to SAT work; as a primary business, perhaps not so good.
Anyway, I'm sure the Cognito guys have considered all this -- I just hope that someone gives you the chance to pick up the work again in the future (and maybe hire me to help). Thanks for the Quora work, and good luck with your future endeavors!
“I refuse to answer that question on the grounds that I don't know the answer.”
― Douglas Adams
Wonderful question! I spent some time recently interviewing religious converts on my very un-religious campus, and I think you'll find your discussions fascinating, if not particularly epistemic-rational.
Some topics I'd bring up: Second CronoDas on "why are you not a Jew/Muslim?", as well as "what evidence (especially scientific evidence) could lead you to dramatically change your belief in God, if not stop believing altogether?"
Finally: "If you stopped believing in God, what do you think would be the consequences in your present life on Earth?" Many believers I've met seem to believe out of a desire for comfort/reassurance, which makes far more sense to me than believing based on evidence.
So as to keep the quote on its own, my commentary:
This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that "things you don't value much anymore can still provide great utility for other people" is a powerful lesson in general.
"Throughout the day, Stargirl had been dropping money. She was the Johnny Appleseed of loose change: a penny here, a nickel there. Tossed to the sidewalk, laid on a shelf or bench. Even quarters.
"I hate change," she said. "It's so . . . jangly."
"Do you realize how much you must throw away in a year?" I said.
"Did you ever see a little kid's face when he spots a penny on a sidewalk?”
Jerry Spinelli, Stargirl
Well, "learn from it" and "use the crapware" can mean different things. I've found useful the rule of thumb that "someone else once had your problem and you should find out what they did, even if they failed to solve it".
In the HUGR, I've included the advice "learn the sad stories of your lab as soon as possible" -- the most painful mistakes that lab members, past and present, have made in the course of their work. It's helpful as a specific "ways things can go wrong" list.
I won't be able to respond individually to everyone, but thank you all for your contributions! If anything else comes to mind, please leave more quotes -- I'll check back periodically.
Indeed! I found this to be an extremely helpful resource w/r/t seeking out "meta-expertise":
http://faculty.chicagobooth.edu/jesse.shapiro/research/CodeAndData.pdf
Key quote: "Here is a good rule of thumb: If you are trying to solve a problem, and there are multi-billion-dollar firms whose entire business model depends on solving the same problem, and there are whole courses at your university devoted to how to solve that problem, you might want to figure out what the experts do and see if you can't learn something from it."
Have you looked at the Johns Hopkins/Center for Talented Youth forums at https://cogito.cty.jhu.edu? I think you need a special login to get on, and I forgot my info long ago, but the community still seems to be of a respectable size.
At least the existence of this post will make "discovery" easier for the next person who has to do this task (if they know to look for it, at least). Perhaps there are some steps in the process that are best taught instead of climbed, or vice-versa, and the challenge is to figure out the right mixture?
(I recall a coding bootcamp I was a part of, where a careful balance of "look this up" and "ask the instructor" was required so that the instructor wouldn't be overwhelmed and people wouldn't waste an entire day fixing a chain of mistakes flowing from some trivial error.)
When I saw the title, the first things I thought of were Ramit Sethi's videos on negotiation and the CFAR income negotiation workshop. This seems more focused on promotions than raises, but are you aware of any meta-studies that examine specifically the effect of different types of negotiation strategies?
I agree with avoiding identity-claim aspirations.
When I use the Ned Flanders example, what I'm thinking is:
I know Christians who say that belief in Jesus and being determined to love others will make life better, and they express this better-ness in their incredible patience and kindness--to the point where I wish I were equally patient and kind.
I think we could get to a point where Less Wrong members can say "living with a strong awareness of your own biases and a desire to improve yourself will make your life better", and express this better-ness by being good conversationalists, optimistic, and genuinely helpful to those with questions or problems--to the point where non-members wish they were equally cool/smart/fun/helpful, or whatever other values we hope to embody.
Me 70%.
When I visualize Bjorn Lomborg's "Indonesia 2100 should have the same GDP per capita as Denmark now" future, I start to glow on the inside. There are many things about LW that give me that glow. I just wish I were better at expressing the glow at the right times without sounding weird about it.
HPMOR is really cool, but I've also known several people who can't stand it. Too long/too Gary Stu/too strange for devoted fans of the original series. Luminosity is just as good, but suffers from some of the same issues. I think we need more short stories that have reasonable, non-utopian endings, things people can pick up and read in an hour. Though I say this knowing I likely won't be in a position to write any of these stories for a while...
"These advantages are real, significant, and probably even replicable for a more secular memeset - but I think if we tried it, we'd be missing our own point."
Interesting. I think that could be true of whatever our "point" is right now. But eventually, that point is probably going to have to involve something that people at the IQ 100 level can pick up and use with some success in their daily lives, the same way so many already do with religious principles. (Though LW principles can hopefully avoid most of the downsides that come with living religiously.)
Sounds like Jonathan Edwards, or maybe Timothy Dwight. Both of them have Yale residential colleges named after them. No one cares much about the Hell stuff here, though, probably because John Calhoun (another college namesake) was an infamous slaveholder.