Comment by xpym on Changes in College Admissions · 2024-04-25T09:45:37.500Z · LW · GW

Indeed, from what I see there is consensus that academic standards on elite campuses are dramatically down; likely this has a lot to do with the need to sustain holistic admissions.

As in, the academic requirements, the ‘being smarter’ requirement, has actually weakened substantially. You need to be less smart, because the process does not care so much if you are smart, past a minimum. The process cares about… other things.

So, the signalling value of their degrees should be decreasing accordingly, unless one mainly intends to take advantage of the process. Has some tangible evidence of that appeared already, and are alternative signalling opportunities emerging?

Comment by xpym on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-03T11:53:22.653Z · LW · GW

I think Scott’s name is not newsworthy either.

Metz/NYT disagree. He doesn't completely spell out why (it's not his style), but, luckily, Scott himself did:

If someone thinks I am so egregious that I don’t deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists.

The long and short of it is that Metz/NYT considered Scott bad enough to deserve whatever inconveniences/punishments would come to him as a result of tying his alleged wrongthink to his real name.

Comment by xpym on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-04-03T11:25:48.062Z · LW · GW

Comment by xpym on On green · 2024-03-30T05:20:10.279Z · LW · GW

Right, the modern civilization point is more about the "green" archetype. The "yin" thing is of course much more ancient and subtle, but even so I doubt that it (and philosophy in general) was a major consideration before the advent of agriculture leading to greater stability, especially for the higher classes.

Comment by xpym on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-27T15:04:49.696Z · LW · GW

and another to actually experience the insights from the inside in a way that shifts your unconscious predictions.

Right, so my experience around this is that I'm probably one of the lucky ones, in that I've never really had the sorts of internal conflicts that make people say they suffer from akrasia, or excessive shame/guilt/regret. I've always been at peace with myself in this sense, so reading people trying to explain their therapy/spirituality insights usually makes me go: "Huh, so apparently this stuff doesn't come naturally to most people. Shame that they have to bend over backwards to get to where I've always been. Cool that they've developed all these neat theoretical constructions along the way, though."

Maybe give some of it a try if you haven’t already, see if you feel motivated to continue doing it for the immediate benefits, and then just stick to reading about it out of curiosity if not?

Trying to dismiss the content of my thoughts does seem to help me fall asleep faster (sometimes), so there's that at least :)

Comment by xpym on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-27T11:21:38.658Z · LW · GW

Thanks for such a thorough response! I have enjoyed reading your stuff over the years; of all the spirituality-positive people, I find your approach especially lucid and reasonable, up there with David Chapman's.

I also agree with many of the object-level claims that you say spiritual practices helped you reach, like the multi-agent model of mind, cognitive fusion, etc. But, since I seem to be able to make sense of them without having to meditate myself, I've always been left bemused as to whether meditation really is the "royal road" to these kinds of insight, and whether whatever extra it might offer is worth the effort. For example, I already rate my life satisfaction at around 7, and this seems adequate given my objective circumstances.

So, I guess, my real question for the therapy- and spirituality-positive people is why they think their evidence for believing what they believe is stronger than that of other people in the field who have different models/practices/approaches but about the same amount of evidence for their effectiveness. Granted, RCTs aren't always, or even often, easy. But it seems to me that the default response to a lack of strong evidence of that sort, or of particularly reliable models of reality like those that justify trusting parachutes even in the absence of RCTs, is to be less sure that you have grasped the real thing. I have no reason to doubt that plenty of therapists/coaches etc. have good evidence that something that they do works, but having a good, complete explanation of what exactly works, or why, is orders of magnitude harder, and I don't think that anybody in the world could reasonably claim to have the complete picture, or anything close to it.

Comment by xpym on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-27T08:18:17.403Z · LW · GW

I think western psychotherapies are predicated on incorrect models of human psychology.

Yet they all seem to have positive effects of similar magnitude. This suggests that we don't understand the mechanism through which they actually work, and it seems straightforward to expect that this extends to less orthodox practices.

RCTs mostly can’t capture the effects of serious practice over a long period of time

But my understanding is that the benefits of (good) spiritual practices are supposed to be continuous, if not entirely linear: the effort you invest correlates with the benefit you get, up until enlightenment and becoming as gods.

Comment by xpym on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-27T05:44:00.545Z · LW · GW

Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)

What is your take on the Dodo bird verdict, in relation to both therapy and Buddhism-adjacent things? All this stuff seems to be very heavy on personal anecdotes and just-so stories, and light on RCT-type things. Maybe there's a there there, but it doesn't seem like serious systematic study of this whole field has even begun, and there's plenty of suspicious resistance to even the idea of that from certain quarters.

For whatever reason, it looks like when these kinds of delusions are removed, people gravitate towards being compassionate, loving, etc.

This is also a "big if true" type claim that, from the outside, doesn't seem remotely clear; and to the extent that it is true, causation may well be reversed.

Comment by xpym on On attunement · 2024-03-26T15:57:56.684Z · LW · GW

That is, for all its associations with blue (and to a lesser extent, black), rationality (according to Yudkowsky) is actually, ultimately, a project of red. The explanatory structure is really: red (that is, your desires), therefore black (that is, realizing your desires), therefore blue (knowledge being useful for this purpose; knowledge as a form of power).

Almost. The explanatory structure is: green (thou art godshatter), therefore red, therefore black, therefore blue. Yudkowsky may not have a green vibe, as you describe it in this series, but he certainly doesn't shy away from acknowledging that there's no ultimate escaping from the substrate.

Comment by xpym on On green · 2024-03-26T14:12:57.263Z · LW · GW

Green is the idea that you don’t have to strive towards anything.

This can only be said by somebody not currently starving, freezing, parched, or chased by a tiger. Modern civilization has insulated us from those "green" delights so thoroughly that we have an idealized conception far removed from how things routinely are in the natural world. Self-preservation is the first thing that any living being strives towards, the greenest thing there is; any "yin" can be entertained only once that's sorted out.

Comment by xpym on 'Empiricism!' as Anti-Epistemology · 2024-03-16T03:49:49.454Z · LW · GW

But some of them don’t immediately discount the Spokesperson’s false-empiricism argument publicly

Most likely as a part of the usual arguments-as-soldiers political dynamic.

I do think that there's an actual argument to be made that we have much less empirical evidence regarding AIs than regarding Ponzis, and plenty of people on both sides of this debate are far too overconfident in their grand theories, EY very much included.

Comment by xpym on 'Empiricism!' as Anti-Epistemology · 2024-03-14T15:10:15.494Z · LW · GW

Sure, there is common sense, available to plenty of people (but, somehow, far from everybody), about which reference classes apply to Ponzi schemes. Yudkowsky's point, however, is that the issue of future AIs is entirely analogous, so people who disagree with him on this are as dumb as those taken in by the Bernies and Bankmans. Which just seems empirically false: I'm sure that the proportion of AI doom skeptics among ML experts is much higher than that of Ponzi believers among professional economists. So, if there is progress to be made here, it probably lies in grappling with whatever asymmetries exist between these situations. Telling skeptics for the hundredth time that they're just dumb doesn't look promising.

Comment by xpym on My Clients, The Liars · 2024-03-06T09:25:22.331Z · LW · GW

And due to obvious selection effects, such people are most likely to end up in need of one. Must be a delightful job...

Comment by xpym on On the abolition of man · 2024-03-02T04:56:16.678Z · LW · GW

The standard excuse is that the possibility to ruin everything was a necessary cost of our freedom, which doesn’t make much sense

There's one further objection to this, to which I've never seen a theist responding.

Suppose it's true that freedom is important enough to justify the existence of evil. What's up with heaven then? Either there's no evil there and therefore no freedom (which is still somehow fine, but if so, why the non-heaven rigmarole then?), or both are there and the whole concept is incoherent.

Comment by xpym on CFAR Takeaways: Andrew Critch · 2024-02-26T15:54:35.873Z · LW · GW

That's probably Kevin's touch. Robin has this almost inhuman detachment, which on the one hand allows him to see things most others don't, but on the other makes communicating them hard, whereas Kevin managed to translate those insights into engaging humanese.

Any prospective "rationality" training has to comprehensively grapple with the issues raised there, and as far as I can tell, they don't usually take center stage in the publicized agendas.

Comment by xpym on CFAR Takeaways: Andrew Critch · 2024-02-26T15:03:42.396Z · LW · GW

What do people here think about Robin Hanson's view, for example as elaborated by him and Kevin Simler in the book The Elephant in the Brain? I've seen surprisingly few mentions/discussions of this over the years in the LW-adjacent sphere, despite Hanson being an important forerunner of the modern rationalist movement.

One of his main theses, that humans are strategic self-deceivers, seems particularly important (in the "big if true" way), yet downplayed/obscure.

Comment by xpym on On "Geeks, MOPs, and Sociopaths" · 2024-02-04T14:43:53.769Z · LW · GW

To me, the main deficiency is that it doesn't make explicit the possibility, indeed the eventual inevitability, of ontological remodeling. The map is a definite concept: everybody knows what maps look like, that any two maps can be compared, etc. But you can't readily compare Newtonian and quantum mechanics; they mostly aren't even speaking about the same things.

Comment by xpym on On "Geeks, MOPs, and Sociopaths" · 2024-02-04T13:31:21.198Z · LW · GW

Well, I blame Yudkowsky for the terminology issue: he took a term with hundreds of years of history and used it mostly in place of another established term, one traditionally in opposition to the former, no less (rationalism vs. empiricism).

As I understand it, Chapman's main target audience wasn't LW, but normal STEM-educated people unsophisticated in the philosophy of science-related issues. Pretty much what Yudkowsky called "traditional rationality".

The map/territory essay: https://metarationality.com/maps-and-territory

Comment by xpym on On "Geeks, MOPs, and Sociopaths" · 2024-02-04T12:44:35.501Z · LW · GW

Here's Chapman's characterization of LW:

Assuming by “the modern rationality movement” you mean the LessWrong-adjacent subculture, some of what they write is unambiguously meta-rational. The center of gravity is more-or-less rationalism as I use the term, but the subculture is not exclusively that.

Among the (arguably) core LW beliefs that he has criticized over the years are Bayesianism as a complete approach to epistemology, utilitarianism as a workable approach to ethics, and the map/territory metaphor as a particularly apt way to think about the relationship between belief and reality.

Comment by xpym on Being nicer than Clippy · 2024-01-18T13:29:47.923Z · LW · GW

Well, so far no such higher power seems forthcoming, and totalizing ideologies grip public imagination as surely as ever, so the need for liberalism-or-something-better is still live, for those not especially into wars.

Comment by xpym on Being nicer than Clippy · 2024-01-18T10:58:25.070Z · LW · GW

Of course liberalism has struggles, the whole point of it is that it's the best currently known way to deal with competing interests and value differences short of war. This invites three possible categories of objection: that there is actually a better way, that there is no better way and liberalism also no longer works, or that wars are actually a desirable method of conflict resolution. From what I can tell, yours seem to fall into the second and/or third category, but I'm interested in whether you have anything in the first one.

Comment by xpym on An even deeper atheism · 2024-01-12T10:34:59.285Z · LW · GW

I don't see a substantial difference between a (good enough) experience machine and an 'aligned' superintelligent Bostromian singleton, so the apparent opposition to the former combined with the enthusiastic support for the latter from the archetypal transhumanist always confused me.

Comment by xpym on When "yang" goes wrong · 2024-01-10T13:11:35.615Z · LW · GW

That is, turns itself into a God, while also keeping its heart intact? Well, you can do that too (right?).

Likely wrong. The human heart is a loose amalgamation of heuristics adapted to deal with its immediate surroundings, and couldn't survive an ascension to godhood intact. As usual, Scott put it best (the Bay Area transit system analogy), but unfortunately stuck it at the end of a mostly-unrelated post, so it's undeservedly obscure.

Comment by xpym on Originality vs. Correctness · 2023-12-08T09:22:56.946Z · LW · GW

David Chapman has been banging on for years now against "Bayesianism"/early LW-style rationality being particularly useful for novel scientific advances, and, separately, against utilitarianism being a satisfactory all-purpose system of ethics. He proposes another "royal road", something something Kegan stage 5 (and maybe also Buddhism for some reason), but, frustratingly, his writings so far are rich in exposition and problem statements while consisting of many IOUs on detailed solution approaches. I think that he makes a compelling case that these are open problems, insufficiently acknowledged and grappled with even by non-mainstream communities like the LW-sphere, but he is probably overconfident about postmodernism/himself having much useful to offer in the way of answers.

Comment by xpym on OpenAI: The Battle of the Board · 2023-11-23T08:39:43.172Z · LW · GW

I'd say that, on conflict theory terms, NYT adequately described Scott. They correctly identified him as a contrarian willing to entertain, and maybe even hold, taboo opinions, and to have polite interactions with out-and-out witches. Of course, we may think it deplorable that the 'newspaper of record' considers such people deserving to be publicly named and shamed, but they provided reasonably accurate information to those sharing this point of view.

Comment by xpym on OpenAI: Facts from a Weekend · 2023-11-21T08:36:25.137Z · LW · GW

Maybe I’m missing some context, but wouldn’t it be better for Open AI as an organized entity to be destroyed than for it to exist right up to the point where all humans are destroyed by an AGI that is neither benevolent nor “aligned with humanity” (if we are somehow so objectively bad as to not deserve care by a benevolent powerful and very smart entity).

This seems to presuppose that there is a strong causal effect from OpenAI's destruction to avoiding creation of an omnicidal AGI, which doesn't seem likely? The real question is whether OpenAI was, on the margin, a worse front-runner than its closest competitors, which is plausible, but then the board should have made that case loudly and clearly, because, entirely predictably, their silence has just made the situation worse.

Comment by xpym on Architects of Our Own Demise: We Should Stop Developing AI · 2023-11-01T15:07:46.639Z · LW · GW

To me the core reason for the wide disagreement seems simple enough: at this stage, the essential nature of AI existential risk arguments is not scientific but philosophical. The terms are informal and there are no grounded models of the underlying dynamics (in contrast with, e.g., climate change). Large persistent philosophical disagreements are very much the widespread norm, and thus unsurprising in this particular instance as well, even among experts in currently existing AIs, as it's far from clear how their insights would extrapolate to hypothetical future systems.