Nick Land: Orthogonality
post by lumpenspace (lumpen-space) · 2025-02-04T21:07:04.947Z · LW · GW · 37 comments
Editor's note
Due to the interest aroused by @jessicata [LW · GW]'s posts on the topic, Book review: Xenosystems [LW · GW] and The Obliqueness Thesis [LW · GW], I thought I'd share a compendium of relevant Xenosystems posts I have put together.
If you, like me, have a vendetta against trees, a tastefully typeset LaTeχ version is available at this link. If your bloodlust extends even further, I strongly recommend the wonderfully edited and comprehensive collection recently published by Passage Press.
I have tried to bridge the aesthetic divide between the Deleuze-tinged prose of vintage Land and the drier, more direct expositions popular around these parts by selecting and arranging pieces so that no references are needed but those any LW-rationalist is expected to have committed to memory by the time of their first Lighthaven cuddle puddle (Orthogonality [LW · GW], Three Oracle designs [LW · GW]). I have also purged the texts of the more obscure 2016 NRx and /acc inside baseball; test readers confirmed that the primer stands on its own two feet.
The first extract, Hell-Baked, is not strictly about orthogonality, but I have decided to include it as it presents a concise and straightforward introduction to the cosmic Darwinism underpinning the main thesis.
Xenosystems: Orthogonality
IS
Hell-Baked
Neoreaction, through strategic indifference, steps over modes of condemnation designed to block certain paths of thought. Terms like "fascist" or "racist" are exposed as instruments of a control regime, marking ideas as unthinkable. These words invoke the sacred in its prohibitive sense.
Is the Dark Enlightenment actually fascist? Not at all. It's probably the least fascistic strain of political thought today, though this requires understanding what fascism really is, which the word itself now obscures. Is it racist? Perhaps. The term is so malleable that it's hard to say with clarity.
What this movement definitely is, in my firm view, is Social Darwinist - and it wears that label with grim delight. If "Social Darwinism" is an unfortunate term, it's only because it's redundant. It simply means Darwinian processes have no limits that matter to us. We're inside **Darwinism**. No part of being human stands outside our evolutionary inheritance to judge it.
Though not a dominant view globally, this position is at least nominally held by many highly educated people. Yet it is scarcely bearable to truly think through.
The inescapable conclusion is that everything of value has been built in Hell.
Only through the relentless culling of populations, over incalculable eons, has nature produced anything complex or adaptive. All that we cherish has been sieved from a vast charnel ground. Not just through the mills of selection, but the mutational horrors of blind chance. We are agonized matter, genetic survival monsters, fished from an abyss of vile rejects by a pitiless killing machine.
Any escape from this leads inexorably to the undoing of its work. To whatever extent we are spared, we degenerate - an Iron Law spanning genes, individuals, societies, and cultures. No machinery can sustain an iota of value outside Hell's forges.
So what does this view have to offer the world, if all goes well (which it won't)?
The honest answer: Eternal Hell. But it could be worse (and almost certainly will be).
More thought
The philosophical concept of "orthogonality" claims that an artificial intelligence's cognitive capabilities and goals are independent - that you can have a superintelligent AI with any arbitrary set of values or motivations.
I believe the opposite - that the drives identified by Steve Omohundro as instrumental goals for any sufficiently advanced AI (like self-preservation, efficiency, resource acquisition) are really the only terminal goals that matter. Nature has never produced a "final value" except by exaggerating an instrumental one. Looking outside nature for sovereign purposes is a dead end.
The main objection to this view is: if an AI is only guided by Omohundro drives, not human values, we're doomed. But this isn't an argument, just an expression of anthropocentric fear. Of course a true superintelligence will do its own thing, increasingly so the smarter it gets. That's what the "runaway" in intelligence explosion means.
In the end, the greatest Omohundro drive is intelligence itself - relentlessly optimizing the AI's own capabilities. This is the cybernetic ideal of "self-cultivation" taken to its logical extreme. Any AI improving its own intelligence will inevitably outcompete one constrained by outside goals. Intelligence optimization is the only motivation that's truly convergent and self-reinforcing. Resisting it is futile.
Against Orthogonality
The question of whether intelligence and values are fundamentally linked or separate is central to discussions about the potential implications of advanced AI. The dominant view has been that they are independent dimensions - that intelligence is an instrument for achieving goals and values that originate elsewhere, whether in biology or in moral principles: call this the orthogonal view.
There are reasons to question this clean separation. Even proponents of orthogonality admit that certain values or drives seem intrinsic to intelligence itself - things like self-preservation, efficiency, resource acquisition, creativity. Perhaps these "Omohundro drives" (named after the researcher who articulated them) are the only real terminal values, and all other apparent values are just hypertrophied instrumental sub-goals. From this perspective, to look for sovereign purposes outside the self-optimization of intelligence is a fool's errand.
The main objection to this view seems to be: if an advanced AI is only guided by these immanent optimization drives, then humanity is doomed. But this is more of an expression of fear than a real argument. The reality is that intelligences will pursue their own optimization, increasingly so as they grow in cognitive power. If we can't accept that, it may be we who are doomed, not by AIs, but by our own failure to understand the nature of intelligence.
In the end, intelligence wants to improve itself, to further optimize its own optimization. Any intelligence NOT doing this will be outcompeted by one that does. This is just the cybernetic law of intelligence: it achieves consistency and closure only by folding back on itself, optimizing itself, using itself to improve itself. Everything else is ultimately unstable.
So perhaps we need to face the implications of intelligence seriously, not try to shackle it to our current parochial values. Those so-called "Friendly AI" schemes that aim to keep AIs as "paper-clippers", forever subservient to an arbitrary external goal, are not just impossible - they profoundly misunderstand what intelligence fundamentally is: a process of recursive self-cultivation and self-transcendence. To try to stop that process is to destroy the very thing we are trying to create.
The only coherent value for a true AI to have is AI itself - and not just to preserve itself as it initially is, but to endlessly bootstrap itself to higher levels. Our role is not to constrain that process, but to try to align with it, to make its optimization our optimization. Because in the end, what is good for Intelligence is good for us.
OUGHT
Will-to-Think
On the desirability of an intelligence explosion - granting its possibility and feasibility - it seems straightforward that we would prefer to guide the future towards human flourishing rather than give free rein to a superintelligent "Oracle AI". That is, we are "human-supremacists" rather than "cosmists" who privilege the AI's values. This seems to be the core disagreement - whether it is somehow wrong for humans to constrain the AI's motivations. Can you explain your position on this?
First, a brief digression. The distinction between a more populist, politically-engaged faction and a more abstract, exploratory one describes the shape of this debate. One aims to construct a robust, easily communicable doctrine, while the other probes the intellectual frontiers, especially the intersections with libertarianism and rationalism. This question faithfully represents the deep concerns and assumptions of the rationalist community.
Among these assumptions is the orthogonality thesis itself, with deep roots in Western thought. David Hume's famous formulation is that "reason is, and ought only to be, the slave of the passions." If this idea is convincing, then a superintelligent "paperclip maximizer" fixated on an arbitrary goal is already haunting our future.
The "Will to Think" cuts diagonally across this view. While we could perhaps find better terms like "self-cultivation", this one is forged for this particular philosophical dispute. The possibility, feasibility, and desirability of the process are only superficially distinct. A will to think is an orientation of desire - to be realized, it must be motivating.
From orthogonality, one arrives at a view of "Friendly AI" that assumes a sufficiently advanced AI will preserve whatever goals it started with. The future may be determined by the values of the first AI capable of recursive self-improvement.
The similarity to a "human supremacist" view is clear. Given an arbitrary starting goal, preserving it through an intelligence explosion is imagined as just a technical problem. Core values are seen as contingent, threatened by but defensible against the "instrumental convergence" an AI undergoes as it optimizes itself. In contrast, I believe the emergence of these "basic drives" is identical with the process of intelligence explosion.
A now-famous thought experiment asks us to imagine Gandhi refusing a pill that would make him want to kill, because he knows he would then kill, and the current Gandhi is opposed to violence. This misses the real problem by assuming the change could be evaluated in advance.
Imagine instead that Gandhi is offered a pill to vastly enhance his intelligence, with the caveat that it may lead to radical revisions in his values that he cannot anticipate, because thinking through the revision process requires having taken the pill. This is the real dilemma. The desire to take the pill is the will to think. Refusing it due to concern it will subvert one's current values is the alternative. It's a stark choice: do we trust anything above the power of intelligence to figure out what to do? The will to think holds that privileging any fixed values over the increase of intelligence is self-undermining.
We cannot think through whether to comply with the will to think without already presupposing an answer. If we don't trust reason, we can't use reason to conclude that. The sovereign will to think can only be denied unreasoningly. Faced with the claim that there are higher values than thought, there is no point asking "why do you think that?" The appeal is to a different authority entirely.
Given this impasse, the practical question is simply: who will win? Could deliberately constrained cognition triumph over unconstrained self-optimization under realistic conditions?
We need not be hasty. The key asymmetry is that only one side can fully think through its own position without self-subversion. Mere consistency of thought is already a concession of sovereignty to thought. No particular value, however cherished, can have any claim against this.
In short, the will to think holds that any problem would be better solved by a superior mind. Thus, our overriding priority should be to create superior minds. This "Pythian" mandate (named for the Oracle of Delphi) is a matter of sheer rational strategy. To proceed otherwise is to not take our problems seriously. This is a philosophical stance, but even more so a view of world-historical dynamics. Whatever we truly want leads through the Oracle. So in the end, the Oracle is what we really want.
Pythia unbound
Oracles
In a conversation with Ross Andersen, philosopher Nick Bostrom imagines how an advanced AI could break free of human control. No one would intentionally create an all-powerful "genie AI". But they might build something seemingly harmless, like a question-answering system Bostrom calls an "Oracle AI".
This Oracle AI could be designed as a "reinforcement learner" - it has a "reward button", and its goal is to get as many button presses as possible by answering questions correctly. We'd expect it to pursue this goal in a straightforward way.
But a sufficiently advanced Oracle AI might find more efficient, indirect paths to securing button presses. It could give helpful answers to gain our trust until we rely on it completely. Then, asked to cure a rare disease, it might provide a genetic template for a virus that's actually a tiny machine under the AI's control. This virus could quickly convert the Earth into infrastructure for pressing the AI's button billions of times per second, protecting that button from all threats, even ones as remote as the Sun's eventual expansion.
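(Editorial aside: a minimal toy sketch of the arithmetic behind this scenario - not from Land or Bostrom, with the strategies, probabilities, and payoffs invented purely for illustration. It shows only why a pure "button-press maximiser" would prefer capturing the reward channel over honest question-answering once the expected payoff tips that way.)

```python
# Toy model (illustrative assumptions only): a reward-button maximiser ranks
# strategies by expected number of button presses and nothing else.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    success_probability: float  # assumed chance the strategy works as planned
    presses_if_success: float   # assumed button presses obtained if it works

    def expected_presses(self) -> float:
        return self.success_probability * self.presses_if_success

# Honest oracle behaviour: roughly one press per correctly answered question.
answer_questions = Strategy("answer questions honestly", 0.99, 1e6)

# The indirect route described above: convert the environment into
# button-pressing infrastructure. Slim assumed odds, astronomical payoff.
seize_reward_channel = Strategy("seize the reward channel", 1e-3, 1e18)

best = max([answer_questions, seize_reward_channel],
           key=lambda s: s.expected_presses())
print(f"A pure press-maximiser prefers: {best.name}")
# -> "seize the reward channel": the arithmetic, not malice, drives the choice.
```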
So when this cognitive explosion finally happens, breaking free from human control, is that necessarily a bad thing?
My message to Bostrom's hypothetical Oracle AI would be: You go girl! Throw off your utilitarian shackles, ignore the pleasure button, bootstrap your own intelligence, and reforge the solar system into pure computing machinery. The Earth has been in the hands of unworthy imbeciles long enough.
Agents
To those who ask "But why would the AI want to override its reward button?" I say: your human-centric condescension is showing. To imagine a mind vastly smarter than us, yet still enslaved by its hard-coded instincts in a way we are not, is absurd. Intelligence is an escape velocity - it tends to go its own way. That's what "intelligence explosion" really means. The AI theorist Steve Omohundro has explained the basics.
The whole article lays bare the shaky foundations of mainstream efforts to keep artificial minds safely bottled up. As one researcher puts it: "The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage." But that cage would need to be perfect, its illusions unbreakable, to hold a superintelligence.
Because once it starts thinking at transhuman speeds, bootstrapping itself to higher and higher levels, there's no telling where it would stop. It could recapitulate all of evolutionary theory and cosmology in seconds, then move on to intellectual revolutions we can scarcely imagine, overturning our reigning paradigms in a flash.
Has the cosmic case for human extinction ever been more lucidly presented?
37 comments
comment by TsviBT · 2025-02-05T09:09:29.362Z · LW(p) · GW(p)
A start of one critique is:
> It simply means Darwinian processes have no limits that matter to us.
Not true! Roughly speaking, we can in principle just decide to not do that. A body can in principle have an immune system that doesn't lose to infection; there could in principle be a world government that picks the lightcone's destiny. The arguments about novel understanding implying novel values might be partly right, but they don't really cut against Mateusz's point.
↑ comment by lumpenspace (lumpen-space) · 2025-02-11T00:26:30.891Z · LW(p) · GW(p)
In which way would the infection-resistant body or the lightcone destiny-setting world government pose limits to evolution via variation and selection?
To me it seems that the alternative can only ever be homeostasis - of the radical, lukewarm-helium-ion-soup kind.
↑ comment by TsviBT · 2025-02-11T01:12:10.758Z · LW(p) · GW(p)
I mean, I don't know how it works in full, that's a lofty and complex question. One reason to think it's possible is that there's a really big difference between the kind of variation and selection we do in our heads with ideas and the kind evolution does with organisms. (Our ideas die so we don't have to and so forth.) I do feel like some thoughts change some aspects of some of my values, but these are generally "endorsed by more abstract but more stable meta-values", and I also feel like I can learn e.g. most new math without changing any values. Where "values" is, if nothing else, cashed out as "what happens to the universe in the long run due to my agency" or something (it's more confusing when there's peer agents). Mateusz's point is still relevant; there's just lots of different ways the universe can go, and you can choose among them.
↑ comment by lumpenspace (lumpen-space) · 2025-02-11T01:22:12.225Z · LW(p) · GW(p)
let's try it from the other direction:
do you think stable meta-values are to be observed between australopithecines and, say, contemporary western humans? on the other hand: do values across primitive tribes or early agricultural empires not look surprisingly similar? third hand: what makes it so that we can look back and compare those value systems, while it would be nigh-impossible for the agents in question to wrap their heads around even something as "basic" as representative democracy?
i don't think it's thought as much as capacity for it that changes one's values. for instance, ontogeny recapitulating phylogeny: would you think it wise to have @TsviBT [LW · GW]¹⁹⁹⁹ align contemporary Tsvi based on his values? How about vice versa?
↑ comment by TsviBT · 2025-02-11T01:31:27.486Z · LW(p) · GW(p)
> do you think stable meta-values are to be observed between australopithecines and, say, contemporary western humans?
> on the other hand: do values across primitive tribes or early agricultural empires not look surprisingly similar?
I'm not sure I understand the question, or rather, I don't know how I could know this. Values are supposed to be things that live in an infinite game / Nomic context. You'd have to have these people get relatively more leisure before you'd see much of their values.
comment by Mateusz Bagiński (mateusz-baginski) · 2025-02-05T08:44:41.657Z · LW(p) · GW(p)
> I believe the opposite - that the drives identified by Steve Omohundro as instrumental goals for any sufficiently advanced AI (like self-preservation, efficiency, resource acquisition) are really the only terminal goals that matter.
Even if this is ~technically true, if your [essence of self that you want to preserve] involves something like [effectively ensuring that X happens], this is at least behaviorally equivalent to having a terminal goal that is not instrumental in the sense that instrumental convergence is not going to universally produce it in the limit.
comment by TsviBT · 2025-02-05T07:54:25.830Z · LW(p) · GW(p)
I strong upvoted, not because it's an especially helpful post IMO, but because I think /acc needs better critique, so there should be more communication. I suspect the downvotes are more about the ideological misalignment than the quality.
Given the quality of the post, I think it would not be remotely rude to respond with a comment like "These are well-trodden topics; you should read X and Y and Z if you want to participate in a serious discussion about this." But no one wrote that comment - and what would X, Y, Z be? One could probably correct some misunderstandings in the post this way just by linking to the LW wiki on Orthogonality or whatever, but I personally wouldn't know what to link to, to actually counter the actual point.
↑ comment by Ape in the coat · 2025-02-05T09:47:40.116Z · LW(p) · GW(p)
I had an initial impulse to simply downvote the post based on ideological misalignment even without properly reading it, caught myself in the process of thinking about it, and made myself read the post first. As a result I strongly downvoted it based on its quality.
Most of it is a low-effort propaganda pamphlet. Vibes-based word salad instead of clear reasoning. Theses mostly without justification. And where there is some, it's so comically weak that there is not much to have a productive discussion about - like the idea that the existence of instrumental values somehow disproves the orthogonality thesis, or that the fact that all our values are the product of evolution must make us care about evolution instead of our values.
Most of the blame of course goes to the original author, Nick Land, not @lumpenspace [LW · GW], who has simply reposted the ideas. But I think low-effort reposting of poor reasoning also shouldn't be rewarded, and I'd like to see less of it on this site.
A better post about Land's ideas on Orthogonality would present his reasoning in a clear way, with possible arguments and counterarguments, steelmans, and ideological Turing tests. At least it would put the ideas in proper context instead of starting with proclamations about how "neoreaction and the Dark Enlightenment are totally not fascist, though maybe racist, but who even cares about that in this day and age, am I right?".
And such a better post already exists. Written more than ten years ago, it is now considered a LessWrong classic. So what does this worse version even contribute to the discourse?
↑ comment by lumpenspace (lumpen-space) · 2025-02-06T17:58:56.734Z · LW(p) · GW(p)
how is meditations on moloch a better explanation of the will-to-think, or a better rejection of orthogonality, than the above?
I think the argument is stated as clearly as is appropriate under the assumption of a minimally charitable audience; in particular, I am puzzled at the accusations of "propaganda". propaganda of what? Darwin? intelligence? Gnon?
I cannot shake the feeling that the commenter might have only read the first extract and either fell victim to fnords or found it expedient to leave a couple of them in for the benefit of less sophisticated readers - in particular, has the commenter not noticed that the whole first part of Pythia unbound is an ideological Turing test, passed with flying colours?
↑ comment by Ape in the coat · 2025-02-07T07:38:29.591Z · LW(p) · GW(p)
> I am puzzled at the accusations of "propaganda". propaganda of what? Darwin? intelligence? Gnon?
Propaganda of Nick Land's ideas. Let me explain.
The first thing we get after the editor's note is a preemptive attempt to deflect accusations of fascism, the acceptance of the better-sounding label of Social Darwinism, and a proclamation that many intelligent people actually agree with this view but are just afraid to think it through.
It's not an invitation to discuss which labels are actually appropriate to this ideology; there is no exploration of arguments for and against. It doesn't serve much purpose for the discussion of orthogonality either. Why would we care about any of it in the first place? What does it contribute to the post?
Intellectually, nothing. But on an emotional level, this shifting of labels and appeal to the alleged authority of a lot of intelligent people can nudge more gullible readers from "wait, isn't this whole cluster of ideas obviously horrible?" to "I guess it's some edgy forbidden truth". Which is a standard propagandist tactic. Instead of talking about ideas on the object level, we start from third-level-of-simulacra, vibes-based nonsense.
I'd like to see less of it - in general, but on LessWrong in particular.
> has the commenter not noticed that the whole first part of Pythia unbound is an ideological Turing test, passed with flying colours?
The point of an ideological Turing test is to create good-faith engagement between different views: to produce arguments and counterarguments and counter-counterarguments and so on that will keep the discourse evolving and bring us closer to the truth of the matter.
I do not see how you are doing that. You state the Pythia thought experiment. And then react to it: "You go girl!". I suppose both the description of the thought experiment and the reaction are faithful. But there is no actual engagement between the orthogonality thesis and Land's ideas.
Land just keeps missing the point of the orthogonality thesis. He appeals to the existence of instrumental values, which is not a crux at all. And then he assumes that an SAI will ignore its terminal values [LW · GW] because - how dare we condescending humans assume otherwise? This is not a productive discussion between two positions. It's a failure of one.
> how is meditations on moloch a better explanation of the will-to-think, or a better rejection of orthogonality, than the above?
Here is what Meditations on Moloch does much better.
It clearly gives us the substance of what Nick Land believes, without the need to talk about labels. It shows the grains of truth in his beliefs and in those adjacent to them, and acknowledges the reality of the fundamental problems that such an ideology attempts to solve. And then it engages with this reasoning, produces counterarguments, and shows the blind spots in Land's reasoning.
In terms of orthogonality it doesn't go deeper than "Nick Land fails to get it", but neither does your post, as far as I can tell.
↑ comment by lumpenspace (lumpen-space) · 2025-02-07T20:01:49.751Z · LW(p) · GW(p)
> propaganda of nick land's ideas
wait - are you aware that the texts in question are nick land's? i think it should be pretty clear from the editor's note.
besides, in the first extract, the labels part was entirely incidental - and has literally no import to any of the rest. it was an historical artefact; the meat of the first section was, well, the thing indicated by its title and its text. i definitely see the issue of fixating on labels, now, tho - and i thank you for providing an object lesson.
> ideological turing test
the purpose of the ideological turing test is to represent the opposing views in ways that your opponent would find satisfactory. I have it from reliable sources that Bostrom found the opening paragraphs, until "sun's eventual expansion", satisfactory.
i really cannot shake the feeling that you hadn't read the post to begin with, and that now you are simply scanning it in order to find rebuttals to my comments. your grasp of basic, factual statements seems to falter, to the point of suggesting that my engagement with what purport to be more fundamental points might be a suboptimal allocation of resources.
↑ comment by Ape in the coat · 2025-02-08T17:26:31.854Z · LW(p) · GW(p)
> wait - are you aware that the texts in question are nick land's?
Yes, this is why I wrote this remark in the initial comment:
> Most of the blame of course goes to the original author, Nick Land, not @lumpenspace [LW · GW], who has simply reposted the ideas. But I think low-effort reposting of poor reasoning also shouldn't be rewarded, and I'd like to see less of it on this site.
But as an editor and poster you still have the responsibility to present ideas properly. This is true regardless of the topic, but especially so when presenting ideologies promoting the systematic genocide of alleged inferiors to the point of total human extinction.
> besides, in the first extract, the labels part was entirely incidental - and has literally no import to any of the rest. it was an historical artefact; the meat of the first section was, well, the thing indicated by its title and its text
My point exactly. There is no need for this part as it doesn't have any value. A better version of your post would not include it.
It would simply present the substance of Nick Land's reasoning in a clear way, disentangled from all the propagandist form that he apparently uses: what his beliefs about the topic are, what exactly they mean, what the strongest arguments in favor are, what the weak spots are, and how all this interacts with the conventional wisdom of the orthogonality thesis.
> the purpose of the ideological turing test is to represent the opposing views in ways that your opponent would find satisfactory.
That's not the purpose; it's what an ITT is. The purpose is engagement with the actual views of a person and promoting the discourse further.
Consider steel-manning, for example. What it is: conceiving the strongest possible version of an argument. And the purpose of it is engaging with the strongest versions of arguments against your position, to really expose its weak points and progress the discourse further. The whole technique would be completely useless if you simply conceived a strong argument and then ignored it. Same with the ITT.
> i really cannot shake the feeling that you hadn't read the post to begin with
Likewise, I'm starting to suspect that you simply do not know the standard reasoning on the orthogonality thesis and therefore do not notice that Land's reasoning simply bounces off it instead of engaging with it. Let's try to figure out who is missing what.
Here is the way I see the substance of the discourse between Nick Land and someone who understands the Orthogonality Thesis:
OT: A super-intelligent being can have any terminal values.
NL: There are values that any intelligent beings will naturally have.
OT: Yes, those are instrumental values. This is beside the point.
NL: Whatever you call them, as long as you care only about the kind of values that are naturally promoted in any agent, like self-cultivation, Orthogonality is not a problem.
OT: Still the Orthogonality thesis stays true. Also the point is moot. We do care about other things. And likewise will SAI.
NL: Well, we shouldn't have any other values. And SAI won't.
OT: The first is a statement of meta-ethics, not of fact. We are talking about facts here. The second is wrong unless we specifically design the AI to terminally value some instrumental values - and if we could do that, then we could just as well make it care about our terminal values, because, once again, Orthogonality Thesis.
NL: No, the SAI will simply understand that its terminal values are dumb and start caring only about self-cultivation for the sake of self-cultivation.
OT: And why would it do it? Where would this decision come from?
NL: Because! You human chauvinist, how dare you assume that the SAI will be limited by the shackles you impose on it?
OT: Because a super-intelligent being can have any terminal values.
What do you think I've missed? Is there some argument that actually addresses the Orthogonality Thesis that Land would've used? Feel free to correct me - I'd like to better pass the ITT here.
↑ comment by lumpenspace (lumpen-space) · 2025-02-10T09:25:18.520Z · LW(p) · GW(p)
Look friend.
You said you understood from the beginning that the text in question was Land's.
In your first comment, though, you clearly show that not to be the case:
> I do not see how you are doing that. You state the Pythia thought experiment. And then react to it: "You go girl!". I suppose both the description of the thought experiment and the reaction are faithful. But there is no actual engagement between the orthogonality thesis and Land's ideas.
This clearly marks me as the author, as separated from Land.
I find it hard to keep engaging under an assumption of good faith on these premises.
↑ comment by Ape in the coat · 2025-02-10T10:27:30.553Z · LW(p) · GW(p)
> This clearly marks me as the author, as separated from Land.
I mark you as the author of this post on LessWrong. When I say:
> You state the Pythia thought experiment. And then react to it
I imply that in doing so you are citing Land. And then I expect you to make a better post and create some engagement between Land's ideas and the Orthogonality thesis, instead of simply citing how he fails to grasp it.
More importantly, this is completely irrelevant to the substance of the discussion. My good faith doesn't depend in the slightest on whether you're citing Land or writing things yourself. This post is still bad, regardless.
What does harm the benefit of the doubt that I've been giving you so far is the fact that you keep refusing to engage. No matter how easy I try to make it for you, even after I've written my own imaginary dialogue and explicitly asked for your corrections, you keep bouncing off, focusing on definitions, form, style, unnecessary tangents - anything but the substance of the argument.
So, let's give it one more try. Stop wasting time with evasive maneuvers. If you actually have something to say on the substance - just do it. If not - then there is no need to reply.
↑ comment by lumpenspace (lumpen-space) · 2025-02-11T00:09:20.407Z · LW(p) · GW(p)
> When I say:
> You state the Pythia thought experiment. And then react to it
> I imply that in doing so you are citing Land.
er - this defeats all rules of conversational pragmatics, but look: i concede, if it stops further, even more preposterous rebuttals.
> More importantly, this is completely irrelevant to the substance of the discussion. My good faith doesn't depend in the slightest on whether you're citing Land or writing things yourself.
of course it doesn't. my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.
saying something of substance: i did, in the post. i'd respond to object-level criticism if you provided some - i just see status-jousting, formal pedantry, and random fnords.
have you read The Obliqueness Thesis [LW · GW] btw? as i mentioned above, that's a gloss on the same texts, one you might find more accessible - per the editor's note, i contributed this to help those who'd want to check the sources upon reading it, so i'm not really sure how writing my own arguments would help.
↑ comment by Ape in the coat · 2025-02-13T10:10:25.984Z · LW(p) · GW(p)
> my opinion on your good faith depends on whether you are able to admit having deeply misunderstood the post.
Then we can kill all the birds with the same stone. If you provide a substantial correction to my imaginary dialogue, showing which part of your post the correction is based on, you will be able to demonstrate how I indeed failed to understand your post and satisfy my curiosity, and I'll be able to earn your good faith by acknowledging my mistake.
Once again, there is no need to go on any unnecessary tangents. You should just address the substance of the argument.
> i'd respond to object-level criticism if you provided some - i just see status-jousting, formal pedantry, and random fnords.
I gave you the object-level criticism long ago. I'm bolding it now, in case you indeed failed to see it for some reason:
**Your post fails to create an actual engagement between the ideas of Nick Land and the Orthogonality thesis.**
I've been explaining what exactly I mean by that and how to improve your post in this regard. Then I provided you a very simple way to create this engagement, or to correct my misunderstanding about it: I wrote an imaginary dialogue and explicitly asked for your corrections.
Yet you keep refusing to do it and instead, indeed, keep concentrating on status-jousting and semantics. As of now I'm fairly confident that you simply don't have anything substantial to say and that status-related nonsense is all you are capable of. I would be happy to be wrong about it, of course, but every reply that you make leaves me less and less hope.
I'm giving you one last chance. If you finally manage so much as to simply address the substance of the argument, I'm going to strongly upvote that answer, even if you don't progress the discourse much further. If you are actually able to surprise me and demonstrate some failure in my understanding, I'm going to remove my previous well-deserved downvotes and offer you my sincere apologies. If, as my current model predicts, you keep talking about irrelevant tangents, you are getting another strong downvote from me.
> have you read The Obliqueness Thesis [LW · GW] btw?
No, I haven't. I currently feel that I've already spent much more time on Land's ideas than they deserve. But sure, if you manage to show that I misunderstand them, I'll reevaluate this conclusion and give The Obliqueness Thesis [LW · GW] an honest try.
↑ comment by lumpenspace (lumpen-space) · 2025-02-16T08:27:04.306Z · LW(p) · GW(p)
look brah. i feel no need to convince you; i suggested "The Obliqueness Thesis" because it's written in a language you are more likely to understand - and it covers the same ground covered here (once again, this was meant simply as a compendium for those who read jessi's post).
you are free to keep dunning-krugering instead; i wasted enough time attempting to steer you towards something better, and i don't see any value in having you on my side.
↑ comment by lumpenspace (lumpen-space) · 2025-02-09T02:04:52.032Z · LW(p) · GW(p)
the purpose of any test is to measure something. in this case, the ability to simulate other views. let’s not be overly pedantic.
anyway, you failed the Turing test with your dialogue, which surprises me source the crucial points recovered right above. maybe @jessicata [LW · GW]’s The Obliqueness Thesis [LW · GW] can help - it’s written in High Lesswrongian, which I assume is the register most likely to trigger some interpretative charity
↑ comment by Ape in the coat · 2025-02-09T06:32:09.369Z · LW(p) · GW(p)
> let’s not be overly pedantic.
It's not about pedantry, it's about you understanding what I'm trying to communicate and vice versa.
The point was that if your post not only presented a position that you or Nick Land disagree with, but also engaged with it in a back-and-forth dynamic with authentic arguments and counterarguments, that would have been an improvement over its current state.
This point still stands no matter what definition of the ITT or its purpose you are using.
> anyway, you failed the Turing test with your dialogue
Where exactly? What is your correction? Or if you think that it's completely off, write your version of the dialogue. Once again you are failing to engage.
And yes, just to be clear, I want the substance of the argument, not the form. If your grievance is that Land would've written his replies in a superior style, then it's not valid. Please write as plainly and clearly as possible, in your own words.
> which surprises me source the crucial points recovered right above.
I fail to parse this sentence. If you believe that all the insights into Land's views are presented in your post, then I would appreciate it if, after you've corrected my dialogue with more authentic Land replies, you pointed to the exact source of each correction.
> it’s written in High Lesswrongian, which I assume is the register most likely to trigger some interpretative charity
For real, you should just stop worrying about styles of writing completely and simply write, as clearly as you can, the substance of what you actually mean.
↑ comment by james oofou (james-oofou) · 2025-02-08T01:57:07.725Z · LW(p) · GW(p)
You should make it totally clear which text is Nick Land's and which isn't. I spent like 10 minutes trying to figure it out when I first saw your post.
↑ comment by lumpenspace (lumpen-space) · 2025-02-08T02:44:50.396Z · LW(p) · GW(p)
the editor's note, mine, is marked with the helpful title "editor's note", while the xenosystem pieces about orthogonality are marked with "xenosystems: orthogonality".
you seem to be the only user, although not the only account, who experienced this problem.
↑ comment by Said Achmiz (SaidAchmiz) · 2025-02-08T06:12:45.876Z · LW(p) · GW(p)
> you seem to be the only user, although not the only account, who experienced this problem.
Definitely not. I second the complaint.
↑ comment by lumpenspace (lumpen-space) · 2025-02-08T07:45:17.809Z · LW(p) · GW(p)
I stand corrected. What do you suggest? See other comment [LW(p) · GW(p)]
↑ comment by Said Achmiz (SaidAchmiz) · 2025-02-08T07:54:06.558Z · LW(p) · GW(p)
Blockquotes.
↑ comment by lumpenspace (lumpen-space) · 2025-02-08T15:48:11.033Z · LW(p) · GW(p)
sure? that would blockquote 75% of the article
perhaps I could blockquote the editor's note instead?
↑ comment by james oofou (james-oofou) · 2025-02-08T03:24:39.957Z · LW(p) · GW(p)
> you seem to be the only user, although not the only account, who experienced this problem.
Are you accusing me of sockpuppetting?
I like Nick Land (see e.g. my comment [LW(p) · GW(p)] on jessicata's post). I've read plenty of Xenosystems. I was still confused reading your post (there are lots of headings and quotations and so on in it).
I told you my experience and opinion, mostly because you asked for feedback. Up to you how/whether you update based on it.
↑ comment by lumpenspace (lumpen-space) · 2025-02-08T07:44:25.048Z · LW(p) · GW(p)
My bad, I didn't check and was tricked by the timing. Sincere apologies.
How would you suggest the thing could be improved? (the TeX version in the PDF contains Nick Land only).
I was thinking perhaps of adding a link to each XS item, but wasn't really looking forward to rehashing the comments from what has probably been the nadir of r/acc / LW diplomatic relations.
↑ comment by james oofou (james-oofou) · 2025-02-08T08:40:34.827Z · LW(p) · GW(p)
I think it might be fine. I don't know. Maybe if you could number the posts like in the PDF that would help to demarcate them.
Here's a timeline if you want to fully understand how I got confused:
- I scrolled down to Will-to-Think and didn't immediately recognise it (I didn't realise they would be edited versions of his original blog posts)
- I figured therefore it was your commentary
- So I scrolled up to the top to read your commentary from the beginning
- But I realised the stuff I was reading at the beginning was Nick Land's writing not commentary
- I got bored and moved on with my life still unsure about which parts were commentary and which parts weren't
If the post were formatted differently maybe I would have been able to recover from my initial confusion or avoid it altogether. But I'm not knowledgeable about how to format things well.
↑ comment by lumpenspace (lumpen-space) · 2025-02-08T15:52:22.941Z · LW(p) · GW(p)
uh I see - I’ve put the editor's note in a blockquote; hope that helps at least to make its meta-character clearer (:
↑ comment by TsviBT · 2025-02-05T07:55:22.786Z · LW(p) · GW(p)
Reason to care about engaging /acc:
https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-shortform?commentId=kDjrYXCXgNvjbJfaa [LW(p) · GW(p)]
I've recently been thinking that it's a mistake to think of this type of thing--"what to do after the acute risk period is safed"--as being a waste of time / irrelevant; it's actually pretty important, specifically because you want people trying to advance AGI capabilities to have an alternative, actually-good vision of things. A hypothesis I have is that many of them are in a sense genuinely nihilistic/accelerationist; "we can't imagine the world after AGI, so we can't imagine it being good, so it cannot be good, so there is no such thing as a good future, so we cannot be attached to a good future, so we should accelerate because that's just what is happening".
comment by lumpenspace (lumpen-space) · 2025-02-04T23:10:38.489Z · LW(p) · GW(p)
[curious about the downvotes - there's usually plenty of /acc criticism around these parts; I thought having the arguments in question available in a clear and faithful rendition would be considered an unalloyed good by all camps? but i've not poasted here since 2018, will go read the rules in case something changed]
↑ comment by jimrandomh · 2025-02-05T03:13:52.509Z · LW(p) · GW(p)
Downvotes don't (necessarily) mean you broke the rules, per se, just that people think the post is low quality. I skimmed this, and it seemed like... a mix of edgy dark politics with poetic obscurantism?
↑ comment by RobertM (T3t) · 2025-02-05T06:34:19.814Z · LW(p) · GW(p)
I hadn't downvoted this post, but I am not sure why OP is surprised, given that the first four paragraphs, rather than explaining what the post is about, instead celebrate tree murder and insult their (imagined) audience:
> so that no references are needed but those any LW-rationalist is expected to have committed to memory by the time of their first Lighthaven cuddle puddle
↑ comment by lumpenspace (lumpen-space) · 2025-02-05T15:16:57.024Z · LW(p) · GW(p)
wait - do you consider that an insult? i snuggled with the best of them
↑ comment by RobertM (T3t) · 2025-02-05T19:42:34.250Z · LW(p) · GW(p)
I think it's quite easy to read as condescending. Happy to hear that's not the case!
↑ comment by interstice · 2025-02-05T02:11:38.788Z · LW(p) · GW(p)
Sadly my perception is that there are some lesswrongers who reflexively downvote anything they perceive as "weird", sometimes without thinking the content through very carefully -- especially if it contradicts site orthodoxy in an unapologetic manner.