Comments
I really can't help here, because I happen to think Moloch isn't only inevitable but positively good (and not merely better than the alternatives, but actually "best possible world" good).
I have my books organized into lists, with at least some semblance of weekly goals for getting through them (unfortunately I put stuff like Kant right at the beginning, so I've been failing a lot).
Then I have Feedly feeding me updates from my favourite blogs. I read anything up to two pages of text on the spot; everything else goes to a list I keep in my self-inbox on Telegram, and I read at least one item from it a day, in chronological order.
I'm currently reading the whole of Unqualified Reservations as well, and I usually read blogs whole. These I keep in my mobile Chrome app, and save the links to the Telegram list.
I sense it's mostly because people naturally refrain from murder unless it's seen as a last-resort measure or as having hugely positive consequences.
I feel like the best approach is using your position to make them question themselves; say, pointing out that a lot of their commitments sound like religious fundamentalism or some such device. You're studying creative writing; do some creative arguing XD
I guess Brexit is something along those lines, ain't it?
Yes, if the paperclipper is imagined as ever more intelligent, its end goal could be anything, and it would likely come to treat improving its own capabilities as the primary instrumental goal ("the better I am, the more paperclips get produced"), etc.
The father of NRx is actually Mencius Moldbug (I see people (co-)attributing it to Land, but in fact Land mostly did a lot of reinterpretation of some of Moldbug's themes).
Unleash it and see what happens.
Unless the ones with goals have more power and can establish a stable monopoly on it (they do, and they might).
More than the ones optimizing for increasing their power? I find it doubtful.
Well, any answer to the threads in the two posts I linked above would already be really interesting. His new book on Bitcoin is really good too: http://www.uf-blog.net/crypto-current-000/
There could probably be arguments made in favour of Land's older stuff, but since not even he is interested in making them, I won't either.
What escapes me is why you would review his thought and completely overlook his more recent material, which engages a whole array of subjects that LW has engaged as well. Most prominently, a first treatment of Land's thought in this space should deal with this: http://www.xenosystems.net/against-orthogonality/ (more here: http://www.xenosystems.net/stupid-monsters/), which is neither obscure nor irrelevant.
What are some more options for "No Safe AI"?
Let it run rampant over the world.
Is there any example of successful succession? If there isn't, I think one should be tempted to conclude that creative destruction (and thus disruptive adaptation, rather than continuous improvement) is most likely the norm for social systems (it definitely seems to be in other evolutionary environments).