post by [deleted] · GW · 0 comments

Comments sorted by top scores.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-07-22T19:54:51.911Z · LW(p) · GW(p)

Feel free to delete these comments as you update the draft! These are just my rough thoughts; don't take them too seriously.

--I like the intro. Catchy puzzle paragraph followed by explanation of what you are doing and why.

--I think the bread example didn't sit as well with me for some reason. It felt both unnecessarily long and not quite the right angle. In particular, I don't think inequality is the issue; I think the issue is our loss of influence. Like, I think there are tons of bad actors in the world, and I would be very happy to see them all lose influence to a single good or even just good-ish actor. Inequality would be increasing, but in that circumstance it would be a good thing. Another example: I might think that Moloch will eat all our children unless we achieve some sort of singleton or otherwise concentrate power massively; I may even be willing to have that power concentrated in the hands of someone with radically different values than me, because I prefer that outcome to the Moloch-outcome. (Maybe this isn't a good example, because if we think Moloch will eat us all, then that means we think we have very little influence over the future?)

Here's maybe what I would suggest instead: "If I learned there was a new technology that was going to give its owners a thousand times as much bread, I wouldn't be worried unless I thought it would diminish the amount of bread I had--and why would it? But if I learn there is a new technology that will give its owners a thousand times as much control over the future, that seems to imply that I'll have less control myself." Not sure this is better, but it's what I came up with.

--The Elliott Sober thing is super interesting and I'd love to read more about it. Make sure you include a link or two!