Comments

Comment by Dario St. (Dario Sterzi) on What Goes Without Saying · 2025-01-02T15:06:29.771Z · LW · GW

Overall I agree; however, I have a nitpick about point 2.

People differ a lot in how much they defy societal defaults, and they do so at all points of both the intelligence and domain-competence spectra. I would not bet that "trust your reasoning more than the default choice" is good advice on average, especially outside one's few domains of competence, not even for LessWrong readers. Maybe it is, maybe it is not.

E.g. I can easily intuit that the education system in my own country is far from optimal along any reasonable metric: its evolution is not driven by clear goals, nor by especially competent people, and there are clear failings in attracting talented teachers, etc. That does not mean I know of any single affordable action I could take that would, in expectation, improve the system (or the education of someone I cared about whose default path runs through this system).

I believe that taking the "normal" option for every choice that neither affects your top priorities nor intersects with your particular comparative advantages is, in fact, a good policy.

Comment by Dario St. (Dario Sterzi) on Pivotal acts from Math AIs · 2022-04-15T09:17:50.322Z · LW · GW

That does not look like one plan to me; it looks like two: one to make a lot of money, and one to save the world with a lot of money. And a lot of smart people are already looking for plans to make a lot of money, many of those same people have "throw an AI at it" as a hypothesis in their toolkit, and they are technically minded and competent in STEM, so it does not seem like an interesting direction to look at.

I do agree that it would be interesting to have a plan for how to effectively use a lot of money if we get it in whatever way, but I would be quite surprised if "whatever way" ended up being "the community earns it" rather than "we convince some ultrarich person like Elon Musk, or some governments, to give us a lot of money".