Comments
Paying attention to social capital seems like one risk management mechanism. I try to ask - what sort of people is this likely to put me in touch with, and in what way? Will this increase the surface area of people to whom I can showcase my strengths and with whom I can build relationships? I wrote something along these lines here (in the context of evaluating startups as an employee) - https://vaishnavsunil.substack.com/p/from-runway-to-career-capital-a-framework. Would be keen to hear what you think if you end up reading it.
Thank you! Do you mean a risk reduction strategy as in - how do you, as an employer, mitigate the downside risk of hiring people with less legible credentials?
How much would we have to pay you to move to Congo?
I posted this on the EA forum a couple of weeks ago - https://forum.effectivealtruism.org/posts/7WKiW4fTvJMzJwPsk/adverse-selection-in-minimizing-cost-per-life-saved
No surprise that people on the forum seem to think #4 is the right answer (although they did acknowledge this is a valid consideration). But a lot of the responses were along the lines of "this is so cheap that it's probably still the right answer" and "we should be humble and not violate the intuition people have that all lives are equal".
Yes, unless what donors really want is to think no further than the cost of a DALY. Sure, GiveWell donors care about "actually having an impact" in that they're doing more than most donors to understand whom to best delegate resource allocation to, but how many would actually change their allocation based on this information? I don't really know, but I'm not confident it's a high proportion.
Agree, this would be more pertinent to answering this question than what GiveWell has commissioned thus far. I'm meeting someone this weekend who is working on DALYs at the Effective Institutions Project. Will update here if I hear something interesting.
- Thanks for the feedback. Thinking about it for a minute, it seems like your first point is more than just a stylistic criticism. By "better" I meant we have strong intuitions about first-person subjective experience, but I now realize the way I phrased it might be begging the question.
- Why do you think I'm making that assumption? I assume EAs care about all of these things, with some reasonable exchange rate between the three. Assuming you only care about subjective experience, doesn't this bias you towards enhancing subjective experience, pain relief, etc. (e.g. GiveDirectly, StrongMinds) versus life-saving interventions that might be barely net positive anyway, especially because things like malaria bed nets don't have other positive externalities (unlike something like deworming)? I agree it's also an update towards other things EAs could plausibly do, such as institutional improvements/human capital development, etc.