Comments

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T23:02:14.428Z · LW · GW

So, something like "quiet quitting"?

Well, no - not necessarily. And with all the epistemic charity in the world, I am starting to suspect you might benefit from actually reading the review at this point, just to have more of an idea of what we're talking about.

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T20:44:00.630Z · LW · GW

Funny, I see "exit" as more or less the opposite of the thing you are arguing against. Land (and Moldbug) refer to this book by Hirschman, where "exit" is contrasted with "voice" - the other way to counter institutional/organisational decay. In that model, exit is individual and aims to carve out a space for a different way of doing things, while voice is collective and aims to steer the system towards change.

Balaji's network state, cryptocurrency, etc. are all examples. Many can run in parallel to existing institutions, working along different dimensions and testing configurations which might one day end up being more effective than the legacy institutions themselves.

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T07:51:03.941Z · LW · GW

I'm trying to understand where the source of disagreement lies, since I don't really see much "overconfidence" - i.e., I don't see much of a probabilistic claim at all. Let me know if one of these suggestions points somewhere close to the right direction:
 

  • The texts cited were mostly a response to the putative inevitability of orthogonalism. Once that was (I think effectively) dispatched, one might consider that part of the argument closed.
    After that, one could excuse him for being less rigorous/having more fun with the rest; the goal there was not to debate but to allow the reader to experience what something akin to will-to-think would be like (I'm aware this is frowned upon in some circles);
  • The crux of the matter, imo, is not that thinking a lot about meta-ethics changes your values. Rather, it is that an increase in intelligence does - and namely, it changes them in the direction of greater appreciation for complexity and desire for thinking, and this change takes forms unintelligible to those one rung below. Of course, here the argument is either inductive/empirical or kinda neoplatonic. I will spare you the latter version, but the former would look something like:

    - Imagine a fairly uncontroversial intelligence-sorted line-up, going:
      thermostat → mosquito → rat(🐭) → chimp → median human → rat(Ω)
    - Notice how intelligence grows together with the desire for more complexity, with curiosity, and ultimately with the drive towards increasing intelligence per se; and notice also how morality evolves to accommodate those drives (one really wouldn't want those on the left of wherever one stands to impose their moral code on those on the right).


While I agree these sorts of arguments don't cut it for a typical post-analytical, lesswrong-type debate, I still think that, at the very least, Occam's razor should strongly slash their way - unless there's some implicit counterargument I missed.

(As for the opportunity cost of deepening your familiarity with the subject matter, you might be right. The style of philosophy Land adopts is very different from the one appreciated around here - it is indeed often a target for snark - and while I think there's much of interest on that side of the continental split, the effort required for overcoming the aesthetic shift, weighted by the chance of such a shift completing, might still not make it worth it.)

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-18T00:38:49.938Z · LW · GW

I'm not sure I agree - in the original thought experiment, it was a given that increasing intelligence would lead to changes in values in ways that the agent, at t=0, would not understand or share.

At this point, one could decide whether to go for it or hold back - and we should all consider ourselves lucky that our early sapiens predecessors didn't take the second option.

(btw, I'm very curious to know what you make of this other Land text: https://etscrivner.github.io/cryptocurrent/ )

Comment by lumpenspace (lumpen-space) on Book review: Xenosystems · 2024-09-17T04:29:32.729Z · LW · GW

I personally don't see the choice of "allowing a more intelligent set of agents to take over" as particularly altruistic: I think intelligence trumps species, and I am not convinced that interrupting its growth to make sure more sets of genes similar to mine find hosts for longer would somehow be "for my benefit".

Even in my AI Risk years, what I was afraid of is the same thing I'm afraid of now: Boring Futures. The difference is that in the meantime the arguments for a singleton ASI, with a single unchangeable utility function that is not more intelligence/knowledge/curiosity, became less and less tenable (together with FOOM within our lifetimes).

This being the case, "altruistic" really seems out of place: it's likely that early sapiens would have understood nothing of our goals, our morality, and the drives that got us to build civilisations - but would it have been better for them had they murdered the first guy in the troop they found flirting with a Neanderthal, and so prevented all this? I personally doubt it, and I think the comparison between us and ASI is more or less in the same ballpark.

Comment by lumpenspace (lumpen-space) on Consent Isn't Always Enough · 2023-02-24T21:21:10.013Z · LW · GW

Not hitting on people on their first meetup is good practice, but none of the arguments in OP seem to support such a norm.

Perhaps less charitably than @Huluk, I find the consent framing almost tendentious. It's quite easy to see that the dynamics denounced have little to do with consent; here are two substitutions which show that the examples are matters of professional ethics, orthogonal to the intimacy axis:

- one could easily swap "sexual relations" with "access to their potential grantee's timeshare" without changing much in terms of moral calculus;
- one could make the grantee the recipient of another, exclusive grant from other sources. In this case, flirting with a grantmaker would no longer have the downstream consequences OP warned about.

All in all, the scenario in OP seems to call not for more restrictive sexual norms, but for explicit and consistently enforced anti-collusion/corruption regulations.

Once again: this is limited to the examples provided by @jefftk, and the arguments accompanying them. It's possible that consent isn't always enough in some contexts within EA, for reasons separate from professional ethics - but I did not find support for such a thesis in the thread.