Posts

Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams 2023-10-04T18:04:25.388Z
EA Forum Creative Writing Contest: $10,000 in prizes for good stories 2021-09-12T21:25:03.466Z
One Study, Many Results (Matt Clancy) 2021-07-18T23:10:24.588Z
Tom Chivers, author of "The AI Does Not Hate You", is running an AMA on the EA Forum 2021-03-11T06:22:57.011Z
aarongertler's Shortform 2021-01-06T23:43:03.921Z
What are good ways of convincing someone to rethink an impossible dream? 2020-03-19T00:06:53.991Z
Who are your favorite "hidden rationalists"? 2015-01-11T06:26:58.471Z
Good books for incoming college students? 2014-07-06T01:21:42.766Z
Should I take an academic class on rationality? 2014-04-27T21:54:15.336Z
What are some science mistakes you made in college? 2014-03-23T05:28:48.941Z
Rational Evangelism 2014-02-26T06:00:25.556Z
Buying Debt as Effective Altruism? 2013-11-13T06:09:53.715Z

Comments

Comment by aarongertler on FB/Discord Style Reacts · 2023-06-02T12:20:39.462Z · LW · GW

Non-anonymous reacts feel less scary to me as a writer, and don't feel scary to me as a reactor, though I'd expect most people to be more nervous about publicly sharing a negative reaction than I am.

Overall, inline anonymous reacts feel better to me than named non-inline reacts. I care much more about getting specific feedback on my writing than seeing which specific people liked or disliked it.

Comment by aarongertler on Do a cost-benefit analysis of your technology usage · 2022-05-17T10:18:10.923Z · LW · GW

This post led me to remove Chrome from my phone, which gave me back a few productive minutes today. Hoping to keep it up and compound those minutes into a couple of solid workdays over the rest of the year. Thanks for the inspiration!

Comment by aarongertler on Beyond micromarriages · 2022-03-11T16:16:11.768Z · LW · GW

On the Devil's Advocate side: "Wife" just rolls off the tongue in a way "husband" doesn't. That's why we have "wife guys" and "my wife!" jokes, but no memes that do much with the word "husband". (Sometimes we substitute the one-syllable word "man", as in "it's raining men" or "get you a man who can do both".)

You could also parse "wife years" as "years of being a wife" from the female perspective, though of course this still fails to incorporate couples where no wife-identifying person is involved. 

...so it doesn't work well in a technical sense, but it remains very catchy.

Comment by aarongertler on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-24T07:14:09.233Z · LW · GW

Thanks for the further detail. It sounds like this wasn't actually a case of "no one in EA has funded X", which makes my list irrelevant. 

(Maybe the first item on the list should be "actually, people in EA are definitely funding X", since that's something I often find when I look into claims like Christian's, though it wasn't obvious to me in this case.)

Comment by aarongertler on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-21T21:48:32.196Z · LW · GW

Thanks for sharing a specific answer! I appreciate the detail and willingness to engage.

I don't have the requisite biopolitical knowledge to weigh in on whether the approach you mentioned seems promising, but it does qualify as something someone could have been doing pre-COVID, and a plausible intervention at that.

My default assumptions for cases of "no one in EA has funded X", in order from most to least likely:

  1. No one ever asked funders in EA to fund X.
  2. Funders in EA considered funding X, but it seemed like a poor choice from a (hits-based or cost-effectiveness) perspective. 
  3. Funders in EA considered funding X, but couldn't find anyone who seemed like a good fit for it.
  4. Various other factors, including "X seemed like a great thing to fund, but would have required acknowledging something the funders thought was both true and uncomfortable".

In the case of this specific plausible thing, I'd guess it was (2) or (3) rather than (1). While anything involving China can be sensitive, Open Phil and other funders have spent plenty of money on work that involves Chinese policy. (CSET got $100 million from Open Phil, and runs a system tracking PRC "talent initiatives" that specifically refers to China's "military goals" — their newsletter talks about Chinese AI progress all the time, with the clear implication that it's a potential global threat.)

That's not to say that I think (4) is impossible — it just doesn't get much weight from me compared to those other options.

FWIW, as far as I've seen, the EA community has been unanimous in support of the argument "it's totally fine to debate whether this was a lab leak". (This is different from the argument "this was definitely a lab leak".) Maybe I'm forgetting something from the early days when that point was more controversial, or I just didn't see some big discussion somewhere. But when I think about "big names in EA pontificating on leaks", things like this and this come to mind.

*****

Do you know of anyone who was trying to build out the gain-of-function project you mentioned in the years before the pandemic? Whether they ever approached anyone in EA about funding, or whether any organizations actually considered it internally?

Comment by aarongertler on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-21T21:17:36.920Z · LW · GW

Thanks for sharing your experience.

I've been writing the EA Newsletter and running the EA Forum for three years, and I'm currently a facilitator for the In-Depth EA Program, so I think I've learned enough about EA not to be too naïve. 

I'm also an employee of Open Philanthropy starting January 3rd, though I don't speak for them here.

Given your hypothetical and a few minutes of thought, I'd want Open Phil to write the check. It seems like an incredible buy given their stated funding standards for health interventions and reasonable assumptions about the "fewer manufacturing plants" counterfactual. (This makes me wonder whether Alexander Berger is among the leaders you mentioned, though I assume you can't say.)

Are any of the arguments that you heard against doing so available for others to read? And were the people you heard back from unanimous?

I ask not in the spirit of doubt, but in the spirit of "I'm surprised and trying to figure things out".

(Also, David Manheim is a major researcher in the EA community, which makes the whole situation/debate feel especially strange. I'd guess that he has more influence on the COVID decisions EA actually funds than most of the people I'd classify as "EA leaders".)

Comment by aarongertler on Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · 2021-12-16T09:52:02.871Z · LW · GW

Nearly two years into the pandemic, the core EA organizations still seem to show no sign of caring that they didn't prevent it, despite their mission including fighting biorisks.

Which core organizations are you referring to, and which signs are you looking for?

This has been discussed to some extent on the Forum, particularly in this thread, where multiple orgs were explicitly criticized. (I want to see a lot more discussions like these than actually exist, but I would say the same thing about many other topics — EA just isn't very big and most people there, as anywhere, don't like writing things in public. I expect that many similar discussions happened within expert circles and didn't appear on the Forum.)

I worked at CEA until recently, and while our mission isn't especially biorisk-centric (we affect EA bio work in indirect ways on multi-year timescales), our executive director insisted that we should include a mention in the opening talk of the EA Picnic that EA clearly fell short of where it should have been on COVID. It's not much, but I think it reflects a broader consensus that we could have done better and didn't.

That said, the implication that EA's failure to prevent the pandemic is a problem for EA seems reasonable only in a very loose sense (better things were possible, as they always are). Open Phil invested less than $100 million in all of its biosecurity grants put together prior to February 2020, and that was spread over a five-year period. That this funding (and direct work from a few dozen people, if that) failed to prevent COVID seems very unsurprising, and hard to learn from.

Is there a path you have in mind whereby Open Phil (or anyone else in EA) could have spent that kind of money in a way that would likely have prevented the pandemic, given the information that was available to the relevant parties in the years 2015-2019?

Doing so would require asking uncomfortable questions and accepting uncomfortable truths and there seems to be no willingness to do so.

I find this kind of comment really unhelpful, especially in the context of LessWrong being a site about explaining your reasoning and models. 

What are the uncomfortable questions and truths you are talking about? If you don't even explain what you mean, it seems impossible to verify your claim that no one was asking/accepting these "truths", or even whether they were truths at all.

Comment by aarongertler on Against Dog Ownership · 2021-05-11T02:55:18.912Z · LW · GW

Reminds me of an old essay I wrote (not fully representative of Aaron!2021) about experiences with a dog who lived with a human family, but no other dogs, and could never get enough stimulation to meet his needs. A section I think still holds up:

The only “useful” thing he ever fetches is the newspaper, once per day. For thirty seconds, he is doing purposeful work, and his family is genuinely thankful for his help. But every other object he’s fetched has been something a person threw, for the express purpose of fetching. We all smile at him out of politeness or vague amusement and keep throwing the tennis balls and rubber bones, so he gets a constant stream of positive reinforcement for fetching.

This means his life is built around convincing people to throw things, and then bringing the things back to be thrown again. Literally running in circles. I’ve seen him play fetch for well over an hour before getting tired, taking a short break, drinking some water, and then coming back for more fetch.

And he really believes that his fetching is important: When a tennis ball rolls under a couch and he can’t reach it, he’ll sniff around as though it were lost. If he smells it, he’ll paw frantically trying to reach it. If he can’t, he’ll stand there looking miserable until someone reaches under and takes out the ball.

(I wonder how he feels in those moments: An impending sense of doom? Fear that the ball, lost out of sight, may cease to exist? A feeling of something-not-finished, as when a melody is cut short before the final note?)

Comment by aarongertler on For Better Commenting, Avoid PONDS · 2021-01-22T07:48:50.721Z · LW · GW

Sounds great, thanks!

Comment by aarongertler on For Better Commenting, Avoid PONDS · 2021-01-21T02:14:09.119Z · LW · GW

Would you be interested in crossposting this to the EA Forum? I think your points are equally relevant for those discussions, and I'd be interested to see how posters there would react.

As a mod, I could also save you some time by crossposting it under your account. Let me know if that would be helpful!

Comment by aarongertler on aarongertler's Shortform · 2021-01-06T23:43:04.370Z · LW · GW

Epistemic status: Neither unique nor surprising, but something I felt like idly cataloguing.

An interesting example of statistical illiteracy in the field: This complaint thread about the shuffling algorithm on Magic: the Gathering Arena, a digital version of the card game. Thousands of unique players seem to be represented here.

MTG players who want to win games have a strong incentive to understand basic statistics. Players like Frank Karsten have been working for years to explain the math behind good deckbuilding. And yet, the "rigged shuffler" is a persistent belief even among reasonably engaged players; I've seen quite a few people try to promote it on my stream, which is not at all aimed at beginners.

(The shuffler is, of course, appropriately random, save for some "hand smoothing" in best-of-one matches to increase the chance of a "normal" draw.)

A few quotes from the thread:

How is that no matter how many people are playing the game, or how strong your deck is, or how great your skill level, I bet your winning percentage is 30% or less. This defies the laws of probability.

(No one ever seems to think the shuffler is rigged in their favor.)

As I mentioned in a prior post you never see these problems when they broadcast a live tournament.

(People who play in live tournaments are much better at deckbuilding, leading to fewer bad draws. Still, one recent major tournament was infamously decided by a player's atrocious draw in the last game of the finals.)

In the real world, land draw will not happens as frequent as every turns for 3 times or more. Or less than 2 to 3 turns, not drawing a land

(Many people have only played MTG as a paper game when they come to Arena. In paper, it's very common for people to "cheat" when shuffling by sorting their initial deck in a particular way, even with innocuous intent. When people are exposed to true randomness, they often can't tolerate it.)

Other common conspiracy theories about Arena:

  • "Rigged matchmaking" (the idea that the developers somehow know which decks will be good against your deck, and ensure that you are matched up against it; again, I never see this theory in reverse)
  • "Poker hands" (the idea that people get multiple copies of a card more often than would be expected)
  • "50% bias" (the idea that the game arranges good/bad draws to keep players at a 50% win rate; admirably, these players recognize that they do draw well sometimes, but they don't understand what it means to be in the middle of a binomial distribution)
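
To make that binomial-distribution point concrete, here's a minimal sketch in plain Python. It isn't based on any real Arena data and doesn't model MTG at all; it just simulates players whose true win rate is exactly 50% and measures how often their results would feel "rigged" from the inside.

```python
# Minimal sketch (arbitrary made-up numbers, no real Arena data): simulate
# players whose true win rate is exactly 50% and see how streaky fair
# coin flips look from the inside.
import random

random.seed(0)

N_PLAYERS = 10_000   # hypothetical player population
N_GAMES = 100        # games played by each player

worst_streaks = []
session_winrates = []
for _ in range(N_PLAYERS):
    results = [random.random() < 0.5 for _ in range(N_GAMES)]

    # Longest losing streak for this player.
    longest, current = 0, 0
    for won in results:
        current = 0 if won else current + 1
        longest = max(longest, current)

    worst_streaks.append(longest)
    session_winrates.append(sum(results) / N_GAMES)

print("Share of players who hit a 6+ game losing streak:",
      sum(s >= 6 for s in worst_streaks) / N_PLAYERS)
print("Share of players who finished below a 40% win rate:",
      sum(w < 0.40 for w in session_winrates) / N_PLAYERS)
```

Even with a perfectly fair coin, long losing streaks and well-below-average stretches show up for a large fraction of players; that's just what sitting in the middle of a binomial distribution looks like.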

Comment by aarongertler on How do you evaluate whether a $500 donation to a project that you know well is a good idea? · 2020-11-23T18:42:17.266Z · LW · GW

  1. Consider cross-posting this question to the EA Forum; discussion there is more focused on giving, so you might get a broader set of answers.
  2. Another frame around this question: "How can one go about evaluating the impact of a year's worth of ~$500 donations?" If you're trying to get leverage with small donations, you might expect a VC-like set of returns, where you can't detect much impact from most donations but occasionally see a case of really obvious impact. If you spend an entire year making, say, a dozen such donations, and none of them make a really obvious impact, this is a sign that you either aren't having much impact or don't have a good way to measure it (in either case, it's good to rethink your giving strategy).
  3. You could also try making predictions -- "I predict that X will happen if I give/don't give" -- and then following up a few months later. What you learn will depend on what you predict, but you'll at least be able to learn more about whether your donations are doing what you expect them to do.
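
If you want a concrete way to follow up on the predictions in (3), one minimal option is to score them with a standard Brier score. The sketch below uses invented example predictions (not real forecasts); the point is only the scoring mechanic.

```python
# Minimal sketch of scoring donation-related predictions with a Brier score.
# The predictions below are invented examples, not real forecasts.
predictions = [
    # (description, forecast probability, what actually happened)
    ("Project publishes a public progress report within 6 months", 0.8, True),
    ("My $500 is acknowledged as covering a specific line item",   0.5, False),
    ("Project hits its stated fundraising target this year",       0.6, True),
]

brier = sum((p - outcome) ** 2 for _, p, outcome in predictions) / len(predictions)
print(f"Mean Brier score: {brier:.3f} (0 = perfect; always guessing 50% scores 0.25)")
```

Lower is better; repeating this over a year of small donations gives a rough sense of whether your model of what your donations accomplish actually tracks reality.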

Comment by aarongertler on Mark Manson and Evidence-Based Personal Development · 2020-11-23T18:33:26.186Z · LW · GW

Here's a link to Manson's announcement. Always good to see more people trying to swim up toward the sanity waterline.

Comment by aarongertler on EA Kansas City planning meetup, discussion & open questions · 2020-02-29T00:09:27.187Z · LW · GW

Hey there!

You might want to post this over on the Effective Altruism Forum, which is built with the same structure as LessWrong but is focused entirely on EA questions (both about ways to do good and about community-building work like that of EA KC). I'm a moderator on that forum, and I think folks over there will be happy to help with your questions about organizing a group.

Comment by aarongertler on Tips on how to promote effective altruism effectively? Less talk, more action. · 2020-01-15T23:49:54.467Z · LW · GW

Edit: I see that you also asked this question on r/EffectiveAltruism. I like all the links people shared on that post!

How best to grow the EA movement is a complex question that many people have been working on for a long time. There's also a lot of research on various aspects of social movement growth (though less that's EA-specific).

I don't have the bandwidth to send a lot of relevant materials now, but I'd recommend you post your question on the EA Forum (which is built for questions like this), where you're more likely to get answers from people involved in community work.

To give a brief summary of one important factor: While the basic principles of EA aren't difficult to convey persuasively, there's a big gap between "being persuaded that EA sounds like a good thing" and "making large donations to effective charities" or "changing one's career". As part of my job at the Centre for Effective Altruism, I track mentions of EA on Twitter and Reddit, and I very often see people citing "effective altruism" as the reason they give to (for example) their local animal shelter. EA is already something of a buzzword in the business and charitable communities, and trying to promote it to broad audiences runs the risk of the term drifting even further from its intended meaning.

...but of course, this is far from the full story.

(If you do post this to the Forum, I'll write an answer with more detail and more ideas, but I'd prefer to wait until I think my response will be seen by more people focused on EA work, so that they can correct me/add to my thoughts.)

Comment by aarongertler on FB/Discord Style Reacts · 2019-10-03T19:22:56.422Z · LW · GW

I don't think I've seen this point made in the discussion so far, so I'll note it here: Anonymous downvotes (without explanation) are frustrating, and I suspect that anonymous negative reacts would be even worse. It's one thing if someone downvotes a post I thought was great with no explanation -- trolls exist, maybe they just disagreed, whatever, nothing I can do but ignore it. If they leave an "unclear" react, I can't ignore that nearly as easily -- wait, which point was unclear? What are other people potentially missing that I meant to convey? Come back, anon!

(This doesn't overshadow the value of reacts, which I think would be positive on the whole, but I'd love to see Slashdot-style encouragement for people to share their reasoning.)

Comment by aarongertler on The Forces of Blandness and the Disagreeable Majority · 2019-05-03T05:35:45.349Z · LW · GW

The growth of lots and lots of outlets for more “unofficial” or “raw” self-expression — blogs, yes, but before that cable TV and satellite radio, and long before that, the culture of “journalism” in 18th century America where every guy with a printing press could publish a “newspaper” full of opinions and scurrilous insults  — tends to go along with more rudeness, more cursing, more sexual explicitness, more political extremism in all directions, more “trashy” or “lowest common denominator” media, more misinformation and “dumbing down”, but also some innovative/intellectual “niche” media.
Chaos is a centrifugal force; it increases the chance of any unexpected outcome. Good things, bad things, existential threats, brilliant ideas, and a lot of weird, gross, and disturbing stuff.

The idea of an "anti-chaos elite" sounds fairly accurate to me, and it shows up a lot in the work of Thaddeus Russell, who wrote a book about American elites' history of stamping out rude/chaotic behavior and runs a podcast where he interviews a wide range of people on the fringes of polite society (including libertarians, sex workers, anarchists, and weird people with no particular political affiliation). It's not perfect from an epistemic standpoint, but it's still worth a listen for anyone interested in this topic.

Comment by aarongertler on Does the EA community do "basic science" grants? How do I get one? · 2019-03-07T19:47:53.109Z · LW · GW

Looks like you already posted on the EA Forum, but in case anyone else spots this post and has the same question:

I'm an EA Forum moderator, and we welcome half-baked queries! Just like LessWrong, we have a "Questions" feature people can use when they want feedback/ideas from other people.

Comment by aarongertler on Lesswrong 2016 Survey · 2016-04-02T04:47:09.510Z · LW · GW

I have taken the survey.

Comment: "90% of humanity" seems a little high for "minimum viable existential risk". I'd think that 75% or so would likely be enough to stop us from getting back out of the hole (though the nature of the destruction could make a major difference here).

Comment by aarongertler on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-20T06:18:14.239Z · LW · GW

I took part in the Good Judgment Project, a giant prediction market study from Philip Tetlock (of "Foxes and Hedgehogs" theory). I also blogged about my results, and the heuristics I used to make bets:

http://aarongertler.net/good-judgment-project/

I thought it might be of interest to a few people -- I originally learned that I could join the GJP from someone I met at CFAR.

Comment by aarongertler on Open Thread, Jul. 6 - Jul. 12, 2015 · 2015-07-10T04:39:42.427Z · LW · GW

I wrote a pair of essays (and a shorter summary of both) on heroic responsibility, and how it could pair with empathy as a one-two punch for making good moral decisions:

http://aarongertler.net/heroism/

Seemed Less-Wrong-ish, though my "heroic responsibility" is written for a different audience than Eliezer's, and is a bit less harsh/powerful as a result.

Comment by aarongertler on Effective Altruism vs Missionaries? Advice Requested from a Newly-Built Crowdfunding Platform. · 2015-07-10T04:32:20.087Z · LW · GW

This is the best article on EA and religion that I've seen so far, and uses selective Bible quotes to make points:

https://www.givingwhatwecan.org/blog/2014-12-02/christianity-and-giving

Of course, you can use selective Bible quotes to make nearly any point, so this probably won't work if framed as a counterargument. Perhaps you can just show it to your cofounders and ask what they think, as the beginning of a discussion about what God might want or what Christians owe to non-Christians.

But I second MattG's advice that leaving is probably advisable, particularly if the above goes nowhere.

Comment by aarongertler on Rationality Quotes Thread February 2015 · 2015-02-09T01:10:28.133Z · LW · GW

"Applause, n. The echo of a platitude."

--Ambrose Bierce, The Cynic's Word Book

Comment by aarongertler on How to learn soft skills · 2015-02-08T05:38:23.861Z · LW · GW

I will second The Charisma Myth and The Flinch. I have mixed feelings about Never Eat Alone, but if you live in a large city/on a college campus, Ferrazzi's advice is likely worth reading.

Comment by aarongertler on Who are your favorite "hidden rationalists"? · 2015-01-11T08:09:58.778Z · LW · GW

Yeah, this was disappointing to me as well. My feeling is that he's an "any publicity is good publicity" type, which could be seen as seedy (he has a book and classes to sell) or safe (he thinks he knows how to save a ton of time on exercise and prevent a lot of silly injuries, and he wants as many people as possible to stay healthy). Having read a lot of his stuff and watched some of his talks, my beliefs tend towards the second, but it's unclear.

Comment by aarongertler on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2014-12-28T03:51:54.389Z · LW · GW

I gave $50, and plan to give substantially more within a year of graduation. That was one hell of a "big picture" section, Anna.

Comment by aarongertler on Rationality Quotes November 2014 · 2014-11-17T17:26:01.689Z · LW · GW

Teacher: So if you could live to be any age you like, what would it be?

Boy 2: Infinity.

Teacher: Infinity, you would live for ever? Why would you like to live for ever?

Boy 2: Because you just know a lot of people and make lots of new friends because you could travel to lots of countries and everything and meet loads of new animals and everything.

--Until (documentary)

http://mosaicscience.com/extra/until-transcript

Comment by aarongertler on Stupid Questions (10/27/2014) · 2014-10-30T15:33:31.957Z · LW · GW

Search "Rationality Diaries" on LW to see a huge archive of examples from recent years. (Those are threads where users share recent stories of victory from their lives.)

Comment by aarongertler on 2014 Less Wrong Census/Survey · 2014-10-26T04:30:50.295Z · LW · GW

Done! The survey has been a progressively smoother experience each of the past three years. And it's nice to have time during the school year to think about the past month's habits in a structured way.

Comment by aarongertler on Group Rationality Diary, September 1-15 · 2014-09-05T03:48:29.272Z · LW · GW

I use a cardboard desk from chairigami.com. Single-surface, but I'm in the process of setting something up for less neck strain. The desk itself was very cheap and portable.

Comment by aarongertler on Group Rationality Diary, September 1-15 · 2014-09-04T03:29:00.638Z · LW · GW

Wanted to experiment with working more often while standing (since I estimated a 40-50% chance this would be a good overall choice, between potential health gains and potential productivity gains). Winced at the thought of buying a $100 piece of furniture that would make this possible. Realized that this equated to about 25 cents a day, even at a relatively conservative value of how often I'd use it. And I would absolutely pay 25 cents per day to RENT this thing.

And now I own the thing! And I'm happy every time I see it, and so far I feel good on days when I use it. Odd that one of my lasting gains from CFAR is being better at spending money.

Comment by aarongertler on Moving on from Cognito Mentoring · 2014-05-19T04:25:55.992Z · LW · GW

I'd think it wouldn't be too hard to have a selective set of clients. A single screening interview makes sense here, and might even help appeal to parents who want to think that their child is being treated as special -- which wouldn't be a bad thing, if the child actually was special.

As an SAT tutor, I've tried to impart life lessons along with bubble-filling lessons (on how to look at tests in general, how to hack studying, etc.), but the scope of those has necessarily been limited, both by the demands of the SAT and by the types of students I work with (I do more 1100-to-1500 transitions than 2000-to-2300).

Still, I feel that the "life lessons + advice for incoming college students" part of my work is much more valuable than the basic subject tutoring. And parents don't seem to object to my sharing their "turf" as far as lessons go. But this may be because I'm still young enough (20) to seem more like a high school student than a surrogate parent. And the life lessons were always a bonus in addition to SAT work; as a primary business, perhaps not so good.

Anyway, I'm sure the Cognito guys have considered all this -- I just hope that someone gives you the chance to pick up the work again in the future (and maybe hire me to help). Thanks for the Quora work, and good luck with your future endeavors!

Comment by aarongertler on Rationality Quotes May 2014 · 2014-05-16T01:13:23.605Z · LW · GW

“I refuse to answer that question on the grounds that I don't know the answer.”

― Douglas Adams

Comment by aarongertler on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-27T22:32:20.601Z · LW · GW

Wonderful question! I spent some time recently interviewing religious converts on my very un-religious campus, and I think you'll find your discussions fascinating, if not particularly epistemically rational.

Some topics I'd bring up: I second CronoDas on "why are you not a Jew/Muslim?", as well as "what evidence (especially scientific evidence) could lead you to dramatically change your belief in God, if not stop believing altogether?"

Finally: "If you stopped believing in God, what do you think would be the consequences in your present life on Earth?" Many believers I've met seem to believe out of a desire for comfort/reassurance, which makes far more sense to me than believing based on evidence.

Comment by aarongertler on Rationality Quotes April 2014 · 2014-04-04T17:56:29.271Z · LW · GW

So as to keep the quote on its own, my commentary:

This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that "things you don't value much anymore can still provide great utility for other people" is a powerful lesson in general.

Comment by aarongertler on Rationality Quotes April 2014 · 2014-04-04T17:55:35.667Z · LW · GW

"Throughout the day, Stargirl had been dropping money. She was the Johnny Appleseed of loose change: a penny here, a nickel there. Tossed to the sidewalk, laid on a shelf or bench. Even quarters.

"I hate change," she said. "It's so . . . jangly."

"Do you realize how much you must throw away in a year?" I said.

"Did you ever see a little kid's face when he spots a penny on a sidewalk?”

--Jerry Spinelli, Stargirl

Comment by aarongertler on What are some science mistakes you made in college? · 2014-03-27T14:42:03.605Z · LW · GW

Well, "learn from it" and "use the crapware" can mean different things. I've found this rule of thumb useful: "someone else once had your problem, and you should find out what they did, even if they failed to solve it."

Comment by aarongertler on What are some science mistakes you made in college? · 2014-03-27T00:32:22.845Z · LW · GW

In the HUGR, I've included the advice "learn the sad stories of your lab as soon as possible" -- the most painful mistakes that lab members, past and present, have made in the course of their work. It's helpful as a specific "ways things can go wrong" list.

Comment by aarongertler on What are some science mistakes you made in college? · 2014-03-27T00:30:53.110Z · LW · GW

I won't be able to respond individually to everyone, but thank you all for your contributions! If anything else comes to mind, please leave more quotes -- I'll check back periodically.

Comment by aarongertler on What are some science mistakes you made in college? · 2014-03-27T00:30:08.108Z · LW · GW

Indeed! I found this to be an extremely helpful resource w/r/t seeking out "meta-expertise":

http://faculty.chicagobooth.edu/jesse.shapiro/research/CodeAndData.pdf

Key quote: "Here is a good rule of thumb: If you are trying to solve a problem, and there are multi-billion-dollar firms whose entire business model depends on solving the same problem, and there are whole courses at your university devoted to how to solve that problem, you might want to figure out what the experts do and see if you can't learn something from it."

Comment by aarongertler on How can Cognito Mentoring do the most good? · 2014-03-23T22:37:09.728Z · LW · GW

Have you looked at the Johns Hopkins/Center for Talented Youth forums at https://cogito.cty.jhu.edu? I think you need a special login to get on, and I forgot my info long ago, but the community still seems to be of a respectable size.

Comment by aarongertler on Don't teach people how to reach the top of a hill · 2014-03-09T20:06:39.638Z · LW · GW

At least the existence of this post will make "discovery" easier for the next person who has to do this task (if they know to look for it, at least). Perhaps there are some steps in the process that are best taught instead of climbed, or vice-versa, and the challenge is to figure out the right mixture?

(I recall a coding bootcamp I was a part of, where a careful balance of "look this up" and "ask the instructor" was required so that the instructor wouldn't be overwhelmed and people wouldn't waste an entire day fixing a chain of mistakes flowing from some trivial error.)

Comment by aarongertler on Political Skills which Increase Income · 2014-03-03T05:34:58.060Z · LW · GW

When I saw the title, the first things I thought of were Ramit Sethi's videos on negotiation and the CFAR income negotiation workshop. This seems more focused on promotions than raises, but are you aware of any meta-studies that specifically examine the effects of different types of negotiation strategies?

Comment by aarongertler on Rational Evangelism · 2014-02-28T05:49:59.388Z · LW · GW

I agree with avoiding identity-claim aspirations.

When I use the Ned Flanders example, what I'm thinking is:

I know Christians who say that belief in Jesus and being determined to love others will make life better, and they express this better-ness in their incredible patience and kindness--to the point where I wish I were equally patient and kind.

I think we could get to a point where Less Wrong members can say "living with a strong awareness of your own biases and a desire to improve yourself will make your life better", and express this better-ness by being good conversationalists, optimistic, and genuinely helpful to those with questions or problems--to the point where non-members wish they were equally cool/smart/fun/helpful, or whatever other values we hope to embody.

Comment by aarongertler on Meetup : NYC Rationality Megameetup and Unconference: April 5-6 · 2014-02-27T06:19:53.118Z · LW · GW

Me 70%.

Comment by aarongertler on Rational Evangelism · 2014-02-27T06:13:14.957Z · LW · GW

When I visualize Bjorn Lomborg's "Indonesia 2100 should have the same GDP per capita as Denmark now" future, I start to glow on the inside. There are many things about LW that give me that glow. I just wish I were better at expressing the glow at the right times without sounding weird about it.

Comment by aarongertler on Rational Evangelism · 2014-02-27T06:11:16.485Z · LW · GW

HPMOR is really cool, but I've also known several people who can't stand it. Too long/too Gary Stu/too strange for devoted fans of the original series. Luminosity is just as good, but suffers from some of the same issues. I think we need more short stories that have reasonable, non-utopian endings, things people can pick up and read in an hour. Though I say this knowing I likely won't be in a position to write any of these stories for a while...

Comment by aarongertler on Rational Evangelism · 2014-02-27T06:09:11.090Z · LW · GW

"These advantages are real, significant, and probably even replicable for a more secular memeset - but I think if we tried it, we'd be missing our own point."

Interesting. I think that could be true of whatever our "point" is right now. But eventually, that point is probably going to have to involve something that people at the IQ 100 level can pick up and use with some success in their daily lives, the same way so many already do with religious principles. (Though LW principles can hopefully avoid most of the negative downsides that come with living religiously.)

Comment by aarongertler on Rational Evangelism · 2014-02-27T06:06:08.491Z · LW · GW

Sounds like Jonathan Edwards, or maybe Timothy Dwight. Both of them have Yale residential colleges named after them. No one cares much about the Hell stuff here, though, probably because John Calhoun (another college namesake) was an infamous slaveholder.

Comment by aarongertler on Rational Evangelism · 2014-02-27T06:04:49.036Z · LW · GW

Targeted quick reads are great! That's one reason I like the quote threads so much--almost anyone will be fond of a few good rationality quotes, and that's a good way to introduce them to specific LW material.