Posts

Bayesian Persuasion? 2022-05-28T17:52:14.567Z

Comments

Comment by Karthik Tadepalli on Saving the world sucks · 2024-01-16T17:30:13.163Z · LW · GW

For what it's worth, your "small and vulnerable" post is what convinced me that people can really have an unbelievable amount of kindness and compassion in them, a belief that made me much more receptive to EA. Stay out of the misery mines pls!

Comment by Karthik Tadepalli on Saving the world sucks · 2024-01-16T17:27:55.929Z · LW · GW

I've seen a lot of EAs who are earnest. I think they are in for hurt down the line. I am not earnest in that way. I am not committed to tight philosophical justifications of my actions or values. I don't follow arguments to the end of the line. But one day I heard Will MacAskill describe the drowning child thought experiment, thought "yeah, that makes sense to me", and added that to my list of thoughts. When I realized I was on the path to an economics PhD (for my own passions), I figured it was worth looking up this EA stuff and seeing what it had to say. I figured there would be lots of useful things I could do. I think that was the right intuition. I have found myself in a good position where I only need to make minor changes to my path to increase my impact dramatically.

Saving the world only sucks when you sacrifice everything else for it. Saving the world in your free time is great fun.

Comment by Karthik Tadepalli on Bayesian Injustice · 2023-12-17T16:17:49.247Z · LW · GW

Great essay. Before this, I thought that the impact of noisier signals about Normies operated mainly through selectors being risk-averse. This essay pointed out that even if selectors are risk-neutral, noisy signals matter by increasing the weight of the prior. Given that selectors also tend to have negative priors about Normies, noisy signals effectively prevent positive updates.
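To make the precision-weighting concrete, here is a minimal sketch under standard assumptions (a normal prior with mean $\mu_0$ and precision $\tau_0$, and a normal signal $x$ with precision $\tau_s$; the symbols are illustrative, not from the essay). The selector's posterior mean is the precision-weighted average

$$\mu_{\text{post}} = \frac{\tau_0 \mu_0 + \tau_s x}{\tau_0 + \tau_s}$$

As the signal gets noisier, $\tau_s \to 0$ and $\mu_{\text{post}} \to \mu_0$: the posterior collapses toward the prior, so even a strongly positive signal $x$ barely moves a selector whose prior mean $\mu_0$ about Normies is negative.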

Comment by Karthik Tadepalli on Bayesian Injustice · 2023-12-17T16:14:07.772Z · LW · GW

You're right that legibility alone isn't the whole story, but the reason I think Presties would still be advantaged in the many-slots-few-applicants story is that admissions officers also have a higher prior on Prestie quality. The impact of admissions officers' favorable prior about Presties is, I think, well acknowledged; the impact of their more precise signals is not, which is why I think this post is onto something important.

Comment by Karthik Tadepalli on Why The Focus on Expected Utility Maximisers? · 2022-12-27T19:03:55.867Z · LW · GW

If AI risk arguments mainly apply to consequentialist AI (which I assume is the same as EU-maximizing AI in the OP), and the first half of the OP is right that such AI is unlikely to arise naturally, does that make you update against AI risk?