Comments

Comment by Yaacov on I Want To Live In A Baugruppe · 2017-04-14T05:12:11.834Z · LW · GW

Interested in theory: I wouldn't move cities to join a baugruppe, but if I ended up in the same city as one, I would like to live there.

Comment by Yaacov on Open Thread, Aug. 8 - Aug 14. 2016 · 2016-08-15T00:16:42.426Z · LW · GW

This study was trying to figure out what would happen if everyone in the US followed the same diet. That's probably not very useful for individual decision-making. Even if lots and lots of people became vegan, we wouldn't stop using grazing land; we would just raise fewer grain-fed animals.

Also, this analysis doesn't seem to consider animal suffering, which I personally find important.

Comment by Yaacov on The Blue-Minimizing Robot · 2016-01-31T01:53:17.492Z · LW · GW

Destroying the robot greatly diminishes its future ability to shoot, but it would also greatly diminish its future ability to see blue. The robot doesn't prefer 'shooting blue' to 'not shooting blue'; it prefers 'seeing blue and shooting' to 'seeing blue and not shooting'.

So the original poster was right.

Edit: I'm wrong, see below

Comment by Yaacov on Welcome to Less Wrong! (8th thread, July 2015) · 2015-07-26T04:57:04.564Z · LW · GW

Hi LW! My name is Yaacov. I've been lurking here for maybe 6 months, but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much yet, but I'll learn quickly.

Specific questions:

What can I do to reduce existential risk, especially the risk posed by AI? I don't have an income yet. What are the best investments I can make now in my future ability to reduce existential risk?