Comments

Comment by Uriel Fiori (uriel-fiori) on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-14T23:27:20.043Z

I really can't help here, because I happen to think Moloch is not only inevitable but positively good (and not merely better than the alternatives, but best-possible-world good).

Comment by Uriel Fiori (uriel-fiori) on How do you organise your reading? · 2020-08-06T19:12:21.223Z

I have my books organized in lists, with at least some semblance of weekly goals for getting through them (unfortunately I put stuff like Kant right at the beginning, so I've been failing a lot).

Then I have Feed.ly feeding me updates from my favourite blogs. Anything up to two pages of text I read on the spot; everything else goes to a list I keep in my self-inbox on Telegram. I read at least one item from it a day, in chronological order.

I am currently reading the whole of Unqualified Reservations as well, and I usually read blogs whole. These I keep in my mobile Chrome app, and I save the links to the Telegram list.

Comment by Uriel Fiori (uriel-fiori) on Why isn’t assassination/sabotage more common? · 2020-06-04T18:57:56.964Z

I sense it is mostly because people naturally refrain from murder unless it is seen as a last-resort measure or as having hugely positive consequences.

Comment by Uriel Fiori (uriel-fiori) on How do you survive in the humanities? · 2020-02-20T19:43:48.309Z

I feel like the best approach is using your position to make them question themselves. Say, by pointing out that a lot of their commitments sound like religious fundamentalism, or some such device. You're studying creative writing; do some creative arguing XD

Comment by Uriel Fiori (uriel-fiori) on Political Roko's basilisk · 2020-01-20T19:28:41.315Z

I guess Brexit is something along those lines, ain't it?

Comment by Uriel Fiori (uriel-fiori) on Accelerate without humanity: Summary of Nick Land's philosophy · 2020-01-09T17:59:30.498Z

Yes, if the paperclipper is thought to be ever more intelligent, its end goal could be anything, and it would likely come to see its own capability improvement as the primary goal ("the better I am, the more paperclips are produced"), etc.

Comment by Uriel Fiori (uriel-fiori) on Curtis Yarvin on A Theory of Pervasive Error · 2019-11-26T18:27:14.883Z

The father of NRx is actually Mencius Moldbug (I see people (co-)attributing it to Land, but in fact Land mostly reinterpreted some of Moldbug's themes).

Comment by Uriel Fiori (uriel-fiori) on If I knew how to make an omohundru optimizer, would I be able to do anything good with that knowledge? · 2019-07-12T15:53:00.251Z

Unleash it and see what happens.

Comment by Uriel Fiori (uriel-fiori) on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-26T15:19:42.580Z

Unless the ones with goals have more power, and can establish a stable monopoly on power (they do, and they might)

More than the ones optimizing for increasing their power? I find that doubtful.

Comment by Uriel Fiori (uriel-fiori) on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-19T15:16:42.139Z

Well, any answer to the threads in the two links above would already be really interesting. His new book on Bitcoin is really good too: http://www.uf-blog.net/crypto-current-000/

Comment by Uriel Fiori (uriel-fiori) on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-17T19:34:54.193Z

There probably could be arguments in favour of Land's older stuff, but since not even he is interested in making them, I won't either.

What escapes me is why you would review his thought and completely overlook his more recent material, which engages a whole array of subjects that LW has engaged as well. Most prominently, a first treatment of Land's thought in this space should deal with this: http://www.xenosystems.net/against-orthogonality/ (more here: http://www.xenosystems.net/stupid-monsters/), which is neither obscure nor irrelevant.

Comment by Uriel Fiori (uriel-fiori) on No Safe AI and Creating Optionality · 2019-04-17T15:46:09.177Z

What are more options for No Safe AI?

Let it go rampant over the world.

Comment by uriel-fiori on [deleted post] 2018-08-06T17:58:05.721Z

Is there any example of successful succession? If there isn't, I think one should be tempted to conclude that creative destruction (and thus disruptive adaptation, rather than continuous improvement) is most likely the norm for social systems (it definitely seems so in other evolutionary environments).