Posts

Nick Land: Orthogonality 2025-02-04T21:07:04.947Z

Comments

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-16T08:27:04.306Z · LW · GW

look brah. i feel no need to convince you; i suggested "The Obliqueness Thesis" because it's written in a language you are more likely to understand - and it covers the same ground as this post (once again, this was meant simply as a compendium for those who read jessi's post).

you are free to keep dunning-krugering instead; i wasted enough time attempting to steer you towards something better, and i don't see any value in having you on my side.

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-11T01:22:12.225Z · LW · GW

let's try it from the other direction:

do you think stable meta-values are to be observed between australopithecines and, say, contemporary western humans? on the other hand: do values across primitive tribes or early agricultural empires not look surprisingly similar? third hand: what makes it so that we can look back and compare those value systems, while it would be nigh-impossible for the agents in question to wrap their heads around even something as "basic" as representative democracy?

i don't think it's thought as much as capacity for it that changes one's values. for instance, ontogeny recapitulating phylogeny: would you think it wise to have @TsviBT¹⁹⁹⁹ align contemporary Tsvi based on his values? How about vice versa?

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-11T00:26:30.891Z · LW · GW

In which way would the infection-resistant body or the lightcone destiny-setting world government pose limits to evolution via variation and selection?

To me it seems that the alternative can only ever be homeostasis - of the radical, lukewarm-helium-ion-soup kind.

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-11T00:09:20.407Z · LW · GW

> When I say:
>
> > You state Pythia mind experiment. And then react to it
>
> I imply that in doing so you are citing Land.

er - this defeats all rules of conversational pragmatics, but look, i concede, if it stops further, more preposterous rebuttals.

> More importantly this is completely irrelevant to the substance of the discussion. My good faith doesn't depend in the slightest on whether you're citing Land or writing things yourself.

of course it doesn't. my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.

saying something of substance: i did, in the post. i'd respond to object-level criticism if you provided some - i just see status-jousting, formal pedantry, and random fnords.

have you read The Obliqueness Thesis btw? as i mentioned above, that's a gloss on the same texts that you might find more accessible - per the editor's note, i contributed this to help those who'd want to check the sources upon reading it, so i'm not really sure how writing my own arguments would help.

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-10T09:25:18.520Z · LW · GW

Look friend.

You said you understood from the beginning that the text in question was Land's.

In your first comment, though, you clearly show that not to be the case:

> I do not see how you are doing that. You state Pythia mind experiment. And then react to it: "You go girl!". I suppose both the description of the mind experiment and the reaction are faithful. But there is no actual engagement between orthogonality thesis and Land's ideas. 

This clearly marks me as the author, as separate from Land.

I find it hard to keep engaging under an assumption of good faith on these premises.

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-09T02:04:52.032Z · LW · GW

the purpose of any test is to measure something. in this case, the ability to simulate other views. let’s not be overly pedantic.

anyway, you failed the Turing test with your dialogue, which surprises me given the crucial points covered right above. maybe @jessicata's The Obliqueness Thesis can help - it's written in High Lesswrongian, which I assume is the register most likely to trigger some interpretative charity.

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-08T15:52:22.941Z · LW · GW

uh I see - I've put the editor's note in blockquote; hope that helps at least to make its meta-character clearer (:

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-08T15:48:11.033Z · LW · GW

sure? that would blockquote 75% of the article


perhaps I could blockquote the editor's note instead?

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-08T07:45:17.809Z · LW · GW

I stand corrected. What do you suggest? See other comment

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-08T07:44:25.048Z · LW · GW

My bad, I didn't check and was tricked by the timing. Sincere apologies.

How would you suggest the thing could be improved? (the TeX version in the PDF contains Nick Land only).

I was thinking perhaps to add a link to each XS item, but wasn't really looking forward to rehashing comments on what has probably been the nadir in r/acc / LW diplomatic relations

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-08T02:44:50.396Z · LW · GW

the editor's note, mine, is marked with the helpful title "editor's note", while the xenosystem pieces about orthogonality are marked with "xenosystems: orthogonality".

you seem to be the only user, although not the only account, who experienced this problem.

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-07T20:01:49.751Z · LW · GW

> propaganda of nick land's idea

wait - are you aware that the texts in question are nick land's? i think it should be pretty clear from the editor's note.

besides, in the first extract, the labels part was entirely incidental - and has literally no import for any of the rest. it was an historical artefact; the meat of the first section was, well, the thing indicated by its title and its text. i definitely see the issue of fixating on labels, now, tho - and i thank you for providing an object lesson.

> ideological turing test

the purpose of the ideological turing test is to represent the opposing views in ways that your opponent would find satisfactory. I have it from reliable sources that Bostrom found the opening paragraphs, until "sun's eventual expansion", satisfactory.

i really cannot shake the feeling that you hadn't read the post to begin with, and that now you are simply scanning it in order to find rebuttals to my comments. your grasp of basic, factual statements seems to falter, to the point of suggesting that my engagement with what purport to be more fundamental points might be a suboptimal allocation of resources.

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-06T17:58:56.734Z · LW · GW

how is meditations on moloch a better explanation of the will-to-think, or a better rejection of orthogonality, than the above?

I think the argument is stated as clearly as it’s appropriate under the assumption of a minimally charitable audience; in particular, I am puzzled at the accusations of “propaganda”. propaganda of what? Darwin? intelligence? Gnon?


I cannot shake the feeling that the commenter might have only read the first extract and either fell victim to fnords or found it expedient to leave a couple of them for the benefit of less sophisticated readers - in particular, has the commenter not noticed that the whole first part of Pythia unbound is an ideological Turing test, passed with flying colours?

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-05T15:16:57.024Z · LW · GW

wait - do you consider that an insult? i snuggled with the best of them

Comment by lumpenspace (lumpen-space) on Nick Land: Orthogonality · 2025-02-04T23:10:38.489Z · LW · GW

[curious about the downvotes - there's usually much /acc criticising around these parts, I thought having the arguments in question available in a clear and faithful rendition would be considered an unalloyed good from all camps? but i've not poasted here since 2018, will go read the rules in case something changed] 

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T23:02:14.428Z · LW · GW

So, something like "quiet quitting"?

Well, no - not necessarily. And with all the epistemic charity in the world, I am starting to suspect you might benefit from actually reading the review at this point, just to have more of an idea of what we're talking about.

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T20:44:00.630Z · LW · GW

Funny, I see "exit" as more or less the opposite of the thing you are arguing against. Land (and Moldbug) refer to this book by Hirschman, where "exit" is contrasted with "voice" - the other way to counter institutional/organisational decay. In such a model, exit is individual and aims to carve out a space for a different way of doing things, while voice is collective, and aims to steer the system towards change.

Balaji's network state, cryptocurrency, etc are all examples. Many can run parallel to existing institutions, working along different dimensions, and testing configurations which might one day end up being more effective than the legacy institutions themselves.

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T07:51:03.941Z · LW · GW

I'm trying to understand where the source of disagreement lies, since I don't really see much "overconfidence" - ie, i don't see much of a probabilistic claim at all. Let me know if one of these suggestions points somewhere close to the right direction:
 

  • The texts cited were mostly a response to the putative inevitability of orthogonalism. Once that was (i think effectively) dispatched, one might consider that part of the argument closed.
    After that, one could excuse him for being less rigorous/having more fun with the rest; the goal there was not to debate but to allow the reader to experience what something akin to will-to-think would be like (i'm aware this is frowned upon in some circles); 
  • The crux of the matter, imo, is not that thinking a lot about meta-ethics changes your values. Rather, that an increase in intelligence does - and namely, it changes them in the direction of greater appreciation for complexity and desire for thinking, and this change takes forms unintelligible to those one rung below. Of course, here the argument is either inductive/empirical or kinda neoplatonic. I will spare you the latter version, but the former would look something like:

    - Imagine a fairly uncontroversial intelligence-sorted line-up, going:
    thermostat → mosquito → rat(🐭) → chimp → median human  → rat(Ω)
    - Notice how intelligence grows together with the desire for more complexity, with curiosity, and ultimately with the drive towards increasing intelligence per se: and notice also how morality evolves to accommodate those drives (one really wouldn't want those on the left of wherever one stands to impose their moral code on those on the right).


While I agree these sorts of arguments don't cut it for a typical post-analytical, lesswrong-type debate, I still think that, at the very least, Occam's razor should slash strongly in their favour - unless there's some implicit counterargument i missed.

(As for the opportunity cost of deepening your familiarity with the subject matter, you might be right. The style of philosophy Land adopts is very very different from the one appreciated around here - it is indeed often a target for snark - and while I think there's much of interest on that side of the continental split, the effort required for overcoming the aesthetic shift, weighted by chance of such shift completing, might still not make it worth it).

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T00:38:49.938Z · LW · GW

I'm not sure I agree - in the original thought experiment, it was a given that increasing intelligence would lead to changes in values in ways that the agent, at t=0, would not understand or share.

At this point, one could decide whether to go for it or hold back - and we should all consider ourselves lucky that our early sapiens predecessors didn't take the second option.

(btw, I'm very curious to know what you make of this other Land text: https://etscrivner.github.io/cryptocurrent/)

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-17T04:29:32.729Z · LW · GW

I personally don't see the choice of "allowing a more intelligent set of agents to take over" as particularly altruistic: i think intelligence trumps species, and I am not convinced that interrupting its growth to make sure more sets of genes similar to mine find hosts for longer would somehow be "for my benefit".

Even in my AI Risk years, what I was afraid of is the same thing I'm afraid of now: Boring Futures. The difference is that in the meantime the arguments for a singleton ASI, with a single unchangeable utility function that is not more intelligence/knowledge/curiosity, became less and less tenable (together with FOOM within our lifetimes).

This being the case, "altruistic" really seems out of place: it's likely that early sapiens would have understood nothing of our goals, our morality, and the drives that got us to build civilisations - but would it have been better for them had they murdered the first guy in the troop they found flirting with a neanderthal, and prevented all this? I personally doubt it, and I think the comparison between us and ASI is more or less in the same ballpark.

Comment by lumpenspace (lumpen-space) on Consent Isn't Always Enough · 2023-02-24T21:21:10.013Z · LW · GW

Not hitting on people on their first meetup is good practice, but none of the arguments in OP seem to support such a norm.

Perhaps less charitably than @Huluk, I find the consent framing almost tendentious. It's quite easy to see how the dynamics denounced have little to do with consent; here are two substitutions which show how the examples are professional ethics matters, and orthogonal to the intimacy axis:

- one could easily swap "sexual relations" with "access to their potential grantee's timeshare" without changing much in terms of moral calculus;
- one could make the grantee the recipient of another, exclusive grant from other sources. In this case, flirting with a grantmaker would no longer have the downstream consequences OP warned about.

All in all, the scenario in OP seems to call not for more restrictive sexual norms, but for explicit and consistently enforced anti-collusion/corruption regulations.

Once again: this is limited to the examples provided by @jefftk, and the arguments accompanying them. It's possible that consent isn't always enough in some contexts within EA, for reasons separate from professional ethics - but I did not find support for such a thesis in the thread.