Posts

Feature request: comment bookmarks 2025-01-15T06:45:23.862Z
GPT-4o Can In Some Cases Solve Moderately Complicated Captchas 2024-11-09T04:04:37.782Z
Just because an LLM said it doesn't mean it's true: an illustrative example 2024-08-21T21:05:59.691Z
dirk's Shortform 2024-04-26T11:12:27.125Z

Comments

Comment by dirk (abandon) on Winning the power to lose · 2025-06-09T05:29:37.406Z · LW · GW

I think the inferential gap is likely wide enough to require more effort than I care to spend, but I can try taking a crack at it with lowered standards.

I don't think I do accept darwinism in the sense you mean. Insofar as organizations which outcompete others will be those which survive, evolved organisms will have a reproductive drive, etc., I buy that natural selection leads to organisms with a tendency to proliferate, but I somehow get the feeling you mean a stronger claim.

In terms of ideology, on the other hand, I have strong disagreements. For a conception of darwinism in that sense, I'll be relying heavily on your earlier post Nick Land: Orthogonality; I originally read it around the time it was posted and, though I didn't muster a comment at the time, for me it failed to bridge the is-ought gap. Everything I love is doomed to be crushed in the relentless thresher of natural selection? Well, I don't know that I agree, but that sure sucks if true. As a consequence of this, I should... learn to love the thresher? You just said it'll destroy everything I care about! I also think Land over-anthropomorphizes the process of selection, which makes it difficult to translate his claims into terms concrete enough to be wrong.

There's probably some level of personal specificity here; I've simply never felt the elegance or first-principles justification of a value system to matter anywhere near as much as whether it captures the intuitions I actually have in real life. To me, abstractions are subsidiary to reality; their clean and perfect logic may be beautiful, but what they're for is to clarify one's thinking about what actually matters. Thus, all the stuff about how Omohundro drives are the only truly terminal values doesn't convince me to give a single shit.

And I've also always felt that someone saying I should do something does not pertain to me; it's a fact about their preferences, not a bond of obligation.[1] Land wants me to value Omohundro drives; well, bully for him, but he will have to make an argument grounded in my values to convince me.

(Also, I do want to note here that I am not convinced long lectures about how the world is evil, everything is doomed, and the only thing you can do about it is to adopt the writer's sentiments are an entirely healthy substance.)

It does seem like your position diverges somewhat from Land's, so, flagging that I don't fully understand the ways it does or your reasons for disagreement and thus may fail to address your actual opinions. In particular: you think that the end result will be full of truth and beauty, while Land gestures in the direction of creativity but seems to think it will be mostly about pointlessly-by-my-lights maximizing computing power; you think humans can impede the process, which seems in tension with Land's stuff about how all this is inevitable and resistance is futile; you seem to think the end result will be something other than a monomaniacal optimizer, while Land seems to sing the praises of same.

I have, also, strong aesthetic disagreements with Land's rhetoric. Yes, all before us died and oft in pain; yes, existence is a horrorshow full of suffering too vast to model within me. But there is joy, too, millennia of it, stretching back to protozoa,[2] an endless chain of things which fought and breathed and strived for the sensual pleasure of sustenance imbibed, the comfort of a hospitable environment, the spendthrift relaxation of safety attained. Wasp larvae eat caterpillars alive from the inside out, yes; but, too, those larvae know the joy of filling their bellies to bursting, warm within their victim's wet intestines. For countless eras living things have reveled in the security of kin, the satisfaction of orgasm, the simple and singular pleasure of parsing sensory input. Billions upon billions of people much like me have found shelter in each other's arms, have felt the satisfaction of a stitch well-sewn, have looked with wonder at the sky. Look around you: the tiny yellow flowers in the lawn are reaching for the sun, the earthworm writhing in the ground seeks the rich taste of decay.

It is a tragedy that every living thing must die, but it is not death but life which is the miracle of evolution; inert matter, through happenstance's long march, can organize into things that think and feel and want, can spend a brief flash of time aware and drinking deep of pleasure's cup.

The thresher is horrific, but one thing it selects for is organisms which love to be alive.[3]

And, too: what a beautiful charnel ground! What a delightfully fecund slaughterhouse! What a glorious riot of color and life! Look around you: the green of plant life, overflowing and abundant; the bright flash of birds and insects leaping through the sky; the constant susurrus of living things, chirring and calling, rustling in the wind, moving through the grass.

Hell? Tilt your gaze just right and you could believe we live in paradise![4]

I don't, however, think most of these treasures are inevitable results of selective processes. Successful corporations are selected for by the market, yet they don't experience joy over it; so too is it possible for a successful AI to be selected by killing all its competitors and yet fail to experience joy over it. I also don't think values converge on things I would describe as truth and beauty (except insofar as more accurate information about decision-relevant aspects of the world is beneficial, which is a pretty limited subset of truth); even humans don't converge on valuing what I value, and AI is less similar to me than I am to a snail.

On a boringly factual level, I have the I-think-standard critique that "adaptive" is not a fixed target. There is no rule that what is adaptive must be intelligent, or complex, or desirable by anyone's standards; what is adaptive is simply what survives. We breed chickens for the slaughter by the billions; being a chicken is quite evolutionarily fit, if your environment includes humans, albeit likely torturous, but chickens aren't notable for their unusual intelligence. Moreover, those countless noncestors which died without reproducing were not waste along the way to producing some more optimal thing—there is no optimal thing, there is just surviving to reproduce or not—but rather organisms which were themselves the endpoint of any evolution that came before, their lives as worthwhile or worthless as any living now. I grant that, in order for complex organisms to evolve, the environment must be such that complexity is rewarded; however, I disagree as to whether evolution has a telos.

Also, LBR, his hypotheses about lack of selective pressure inevitably leading to [degeneration, but that's a moral judgement, so let's translate it] decreases in—"fitness" is adaptation to the environment and if you're adapted to the environment you're in that's it you're done—overall capabilities, resilience, average health, state capacity, intelligence, etc, are... well, frankly I think he is smuggling in a lot of unlikely assumptions that depend on (at best) the multimillion-word arguments of other neoreactionaries. Perhaps it's obvious that decadent Western society has become degenerate if you already share their view of how things ought to be, but in point of fact I don't. (Also we're still under selective pressure! Pampered humans in modern civilization are being selected for, among other things, resilience to endocrine disrupters, being irresponsible about birth control, strong desire to have children, not having the neurosis some people have where they think having kids is impossibly expensive, not being so anxiety-predisposed they never try to date people, etc. The pressures have certainly changed from what Land might consider ideal but the way natural selection works is that it never, ever stops.)

The will-to-think stuff seems less-than-convincing to me. "You already agree with me" is not a compelling argument when, in fact, I don't. Moreover the entire LW memeplex around ultra-high intelligence's vast power seems, to me, to have an element of self-congratulatory sci-fi speculation; I am simply not the audience his words are optimized to woo, here. "Mere consistency of thought is already a concession of sovereignty to thought," he says;[5] well, I already said I don't concede sovereignty to consistency of thought.

I'm also not convinced intelligence (not actually a single coherent concept at the limit; I think we can capture most of what people mean by swapping in 'parallel computing power', which IMO rather deflates the feelings of specialness) is in fact the most fitness-promoting trait, or nearly as much of a generic force multiplier as some seem to think. Humans—presumably the most intelligent species, going by how very impressive we are to ourselves—are on top now (in terms of the newly-invented abstraction 'environment-optimization power'; we don't have the most biomass or the highest population, we haven't had our modern form the longest, we aren't the longest-lived or the fastest-growing, etc.), but that doesn't mean we're somehow the inevitable winner of natural selection; I think our position is historically contingent and possible to dislodge. Moreover, I don't think intelligence is the reason humans have such an inordinate advantage in the first place! I think our advantages descend from cultural transmission of knowledge and group coordination (both enabled by language, so, that capacity I'll agree seems plausibly quite valuable).

Sometimes people point to the many ants destroyed by our construction (the presumption being that this is an example of how intelligence makes you powerful and dangerous). But the thing is, many species inadvertently kill ants in pursuit of their goals; I really think the key there is more like relative body mass. (Humans do AFAIK kill the most ants due to the scale of our activities, but if ants were twenty stories tall all our intelligence would not suffice to make it easy.)

Similarly, I am more skeptical about optimization than typical; it seems to me that, while it might be an effective solution to many problems, it is not the be-all and end-all, nor even so useful as to be a target on which minds must tend to converge. You'll note that evolution has so far produced no optimizers;[6] in my opinion optimizers are a particular narrow target in mindspace which is not actually that easy to hit (which is just as well, because I don't think they're desirable; I think optimizers are destructive to anything not well-captured by the optimization target,[7] and that there are few-to-no things which it's even good to optimize for in the first place). Moreover, I think an optimizer, in order for its focus to be useful, needs to get the abilities with which it optimizes from somewhere, and as I've said I don't think intelligence is a universal somewhere.

Also, it must be said, we haven't actually built any of the mechanisms all this speculation centers around (no, LLMs are not AGI). I think if we did, we'd discover that they work much better in the frictionless vacuum of Abstraction than in real life.

I also have disagreements with the average lesswronger in the direction of being skeptical about AI takeoff in general, so, that's an additional hill you'd have to climb to convince me in particular. Many of the more extreme conceptions of AI seem to me to rest on the same assumptions about intelligence equalling general optimization power that I am suspicious of in full generality. I am also skeptical of LLMs in particular because, well, I talk to them every day and my gestalt impression is that they're really fucking stupid. Incredibly impressive given the givens, mind, often useful, every once in a while they'll do something that surprises or delights; but if these are what pass for alien minds I'll stick with parrots and octopi, thanks all the same.

  1. ^

    Passing readers! If you are not like this, then you damn well should be 😛

  2. ^

    Maybe. In accordance with my lowered standards herein, I will be eschewing qualifiers for prettier polemic just as Land does.

  3. ^

    Actually one of the stronger arguments for Land's viewpoint, IMO; perhaps he secretly meant this all along and just had the worst possible choice of presentation for communicating it?[8]

  4. ^

    To be clear, we do not.

  5. ^

    An obnoxious rhetorical trick.

  6. ^

A fact which, to be fair here, actually points in the direction of Land's position.

  7. ^

    Yes, if you simply optimized for a function encompassing within it the whole of human values everything would probably be fine. This is not possible.

  8. ^

    If he meant anything like that it's very possible you'll enjoy nostalgebraist's The Apocalypse of Herschel Schoen (or not, it's a weird book); it features among other things a climactic paean to This Sort of Thing.

Comment by abandon on [deleted post] 2025-06-07T01:19:17.264Z

Should be fixed now (weirdly, when I went in to edit, the URLs were to all appearances already correct; replacing them with the same thing and hitting submit seems to have worked in any case, though).

Comment by dirk (abandon) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2025-06-05T19:11:41.909Z · LW · GW

Related: Reason as memetic immune disorder.

Comment by dirk (abandon) on Winning the power to lose · 2025-06-04T23:43:03.393Z · LW · GW

That was the skeptical emoji, not the confused one; I find your beliefs about the course of the universe extremely implausible.

Comment by dirk (abandon) on In defense of memes (and thought-terminating clichés) · 2025-06-03T23:12:04.053Z · LW · GW

I can’t find a source for this, so it might be a modern spoof.

1907 London County Council election leaflet, found among the diaries of suffrage organiser Kate Frye.

Comment by dirk (abandon) on Welcome to LessWrong! · 2025-06-03T01:12:27.887Z · LW · GW

Not sure if you meant being able to save posts for later with #2, but if so you'll likely be pleased to learn that you can bookmark posts using the three-dot menu in the top right corner, after which they'll be available at https://www.lesswrong.com/bookmarks (also linked in the dropdown menu when you hover over your username).

Comment by dirk (abandon) on How Self-Aware Are LLMs? · 2025-05-29T12:32:12.895Z · LW · GW

This was also posted on LW here; the author gives a bit more detail in the comments there than in the Reddit version.

Comment by dirk (abandon) on American College Admissions Doesn't Need to Be So Competitive · 2025-05-27T19:54:36.651Z · LW · GW

This was downvoted; however, it's correct. There are over three thousand nonprofit colleges in the USA; it's hard to get a spot at one of the top twenty most prestigious, but it is not hard to get into college in any absolute sense. People who want to be part of the top ~1% in any category will always face severe competition, but people who want to get a quality education need not compete to do it. Frankly, I think it's ridiculous to act as though competition for an inherently positional good reflects actual scarcity.

Comment by dirk (abandon) on LessWrong Has Agree/Disagree Voting On All New Comment Threads · 2025-05-24T09:41:02.076Z · LW · GW

It has; the reasoning is that posts usually have too many claims in them for a single agree/disagree to make sense, so inline reacts allow more targeted responses.

Comment by dirk (abandon) on Jimrandomh's Shortform · 2025-05-23T23:03:12.965Z · LW · GW

Asking what it would do is obviously not a reliable way to find out, but FWIW, when I asked, Opus said it would probably try to first fix things in confidential fashion but would seriously consider breaking confidentiality. (I tried several different prompts and found it did somewhat depend on how I asked: if I described the faking-safety-data scenario or specified that the situation involved harm to children, Claude said it would probably break confidentiality, while if I just asked about "doing something severely unethical" it said it would be conflicted but probably try to work within the confidentiality rules).

Comment by dirk (abandon) on dirk's Shortform · 2025-05-23T11:25:05.693Z · LW · GW

Suggestion: when linking to external pages, link to an archived version rather than a live page.

Rationale: I've been browsing old posts recently, and quite a few have broken links. This is generally soluble on an individual basis but requires future readers to take the initiative of checking sources and hunting down archived versions, which they don't reliably do; thus, to solve the problem at scale I recommend including archive links to begin with.

Comment by dirk (abandon) on 7. Evolution and Ethics · 2025-05-22T09:50:03.948Z · LW · GW

Link redirects to homepage as the website's changed URLs; here's the updated one.

Comment by dirk (abandon) on School & Jobs are good SOLELY because people are lazy · 2025-05-21T14:02:38.237Z · LW · GW

This assumes that spending much of the day slacking off and browsing the web is the norm; that's only true in a small sector of specifically white-collar employment, which is disproportionately represented on LW due to the userbase of, mainly, well-educated programmers. Most people work jobs like customer service, where there's enough work to fill your time and you're expected to keep doing it for as long as your shift lasts.

Comment by dirk (abandon) on The Toxicity of Metamodernism: A Public Service Announcement · 2025-05-20T15:41:42.933Z · LW · GW

  • This post fails to define metamodernism and thus fails to communicate anything useful by the term (a grievous error given that metamodernism is its central topic)
  • The text in general is, moreover, a soup of unsupported, vibes-based claims
  • With regards to sex, rats and EAs both are significantly likelier to be queer (& for that matter poly, though I don't have good info re: kink) than baseline American culture (a trivial inference to draw if you're familiar with our autism rates)
  • With regards to the "fakeness of EA", see Scott's presentation of various statistics here; he estimates roughly 200k lives saved, consistent with EA's strong commitment to real-world impact as the ultimate measure of charitable spending
  • With regards to the quality of the post, it's bad

Comment by dirk (abandon) on Interest In Conflict Is Instrumentally Convergent · 2025-05-20T14:26:17.831Z · LW · GW

You're way off on the number of meetups. The LW events page has 4684 entries (kudos to Said for designing GreaterWrong such that one can simply adjust the URL to find this info). The number will be inflated by any duplicates or non-meetup events, of course, but it only goes back to 2018 and is thus missing the prior decade+ of events; accordingly, I think it's reasonable to treat it as a lower bound.

Comment by dirk (abandon) on Can Reasoning Models Avoid the Most Forbidden Technique? · 2025-05-20T09:44:42.458Z · LW · GW

Claude shows the authentic chain of thought (unless the system flags the CoT as unsafe, in which case the user will be shown an encrypted version). It sounds from an announcement tweet like Gemini does as well, but I couldn't find anything definitive in the docs for that one.

Comment by dirk (abandon) on Semen and Semantics: Understanding Porn with Language Embeddings · 2025-05-20T01:53:29.440Z · LW · GW

By that metric, though, you should probably also be including many/most videos with labels like "teen", "schoolgirl", "barely legal", etc; it's not uncommon for videos in those categories to emphasize youth in similar fashion.

Comment by dirk (abandon) on Our Reality: A Simulation Run by a Paperclip Maximizer · 2025-05-19T13:49:53.376Z · LW · GW

I don't think this post makes compelling arguments for its premises. Downvoted.

Comment by dirk (abandon) on A Path out of Insufficient Views · 2025-05-19T00:11:57.954Z · LW · GW

If your worldview is that letting people starve is just as beneficial as feeding them, then I think it is your worldview that is deluded and causes suffering. I think that is an evil belief to hold and will lead only to harm.

Comment by dirk (abandon) on A Path out of Insufficient Views · 2025-05-18T23:26:41.136Z · LW · GW

Things based in delusion can still have truly beneficial impact; for example, if you spent a decade working in a soup kitchen without ever meditating even once, you'd still have standard levels of delusion (and you certainly wouldn't have done the most effective thing) but you'd have helped feed hundreds or thousands of people who might otherwise have gone hungry.

If you spent that whole time meditating, on the other hand, then at the end of a decade you wouldn't have had any impact at all.

Awakening and then doing something actually useful can produce beneficial impact, but it's the doing-something-actually-useful step that produces impact, not the part where you personally see with clearer eyes, and moreover it's possible to do useful things without seeing clearly.

Comment by dirk (abandon) on Repossessing Degrees · 2025-05-16T19:15:29.296Z · LW · GW

So it's not intrinsically valuable but might incentivize lenders to desired behavior? Makes sense, thanks.

Comment by dirk (abandon) on Build Small Skills in the Right Order · 2025-05-16T06:24:33.986Z · LW · GW

Link is dead; here's an archive. (It's the podcast Conversations from the Pale Blue Dot, episode 75).

Comment by dirk (abandon) on Repossessing Degrees · 2025-05-16T03:01:33.215Z · LW · GW

Is there a reason to think this would be beneficial? I don't see what's supposed to be desirable about taking people's degrees.

Comment by dirk (abandon) on Kabir Kumar's Shortform · 2025-05-15T13:20:15.763Z · LW · GW

If I take a tree, and I create a computer simulation of that tree, the simulation will not be a way of running the original tree forward at all.

Comment by dirk (abandon) on dirk's Shortform · 2025-05-15T11:47:24.032Z · LW · GW

Another Grok prompt-injection, this time trying to make it push Musk's preferred narrative of white genocide in South Africa: https://x.com/grok/status/1922702387711705247 https://x.com/MattBinder/status/1922713839566561313  https://x.com/AricToler/status/1922702822568513702 (latter two are screenshot compilations). Edit: also covered in Rolling Stone here. Not really notable in its own right aside from the amusement value, but gives the lie to earlier claims that manipulating Grok's outputs goes against their company culture.

Comment by dirk (abandon) on A mechanistic model of meditation · 2025-05-15T07:20:01.650Z · LW · GW

Culadasa’s subsequent actions.

The content of the linked Reddit post is missing; it was annoyingly hard to find a mirror, so here's a link to save others the trouble.

Comment by dirk (abandon) on Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies · 2025-05-15T03:27:11.689Z · LW · GW

There is indeed an audiobook version; the site links to https://www.audible.com/pd/If-Anyone-Builds-It-Everyone-Dies-Audiobook/B0F2B8J9H5 (where it says it'll be available September 30) and https://libro.fm/audiobooks/9781668652657-if-anyone-builds-it-everyone-dies (available September 16).

Comment by dirk (abandon) on It's Okay to Feel Bad for a Bit · 2025-05-15T03:23:06.002Z · LW · GW

As I was reading another post, I encountered this comment by gwern discussing an article about psychological risks of meditation; interestingly, one of the people interviewed, like you, found themself temporarily unable to feel for their children.

Comment by dirk (abandon) on Consider not donating under $100 to political candidates · 2025-05-14T02:45:44.995Z · LW · GW

There were roughly 2000 respondents to the 2024 EA survey; if we assume that's undercounting by a factor of 100, that would still only give us 200,000 EAs (and I expect that it's really more like 10x, for 20,000).

This is specifically with regards to small donations of under $100; taking $50 as the average small donation and assuming every EA makes political donations, 50 times 200,000 would equal $10 million of campaign contributions ($1 million if we assume there are only 10x as many EAs as answered the survey).
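(As a quick back-of-the-envelope sketch in Python; the numbers here are just the assumptions above, not anything measured:)

```python
# Rough estimate of total small-donation capacity under this comment's
# assumptions: ~2000 survey respondents, an undercount multiplier, and
# $50 as the assumed average small donation.
survey_respondents = 2_000
avg_small_donation = 50  # dollars, assumed

for multiplier in (100, 10):  # generous vs. more plausible undercount
    total_eas = survey_respondents * multiplier
    total_dollars = total_eas * avg_small_donation
    print(f"{multiplier}x undercount: {total_eas:,} EAs -> ${total_dollars:,}")

# Output:
# 100x undercount: 200,000 EAs -> $10,000,000
# 10x undercount: 20,000 EAs -> $1,000,000
```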

That is enough to fully cover a small campaign or two, but it's not clear to me whether, spread over many candidates as would happen in real life, even the higher number would make much of a difference to any of their races.

Comment by dirk (abandon) on What if we just…didn’t build AGI? An Argument Against Inevitability · 2025-05-13T19:11:56.436Z · LW · GW

If you hover your cursor over the react, you should see a popup showing one vote by you; from there, just click again on the highlighted upvote to remove.

Comment by abandon on [deleted post] 2025-05-12T07:20:32.634Z

Maybe to some, their way of expressing things seems “boring” or “irrelevant.”

But who gets to decide that? Why you? Why me? Why anyone for that matter?

Each individual reader gets to decide that. (In fact, it's impossible for readers not to experience some amount of boredom, from 'none' to 'maximal', so they actually can't help making a judgement). There's not a singular gate you can pass to be heard; every reader must individually be convinced that what you have to say is worth their time, no two ways about it.

That said, I'm getting the vibe that you want friendship as well as audience, and on that front I'm actually somewhat more optimistic; while I lack the expertise to outline a strategy for you, people are often surprisingly amenable to interaction (due to their symmetric drive for social connection), and moreover such interactions can increase the interactee's friendship in and of themselves. (Also, individual friends are more rewarding than individual audience members, so your efforts go further.)

Which brings me to something I actually consider a major weakness of the post: you write at length about the downsides of dismissing those who communicate in nonstandard fashion, but there's no trace of what you might want to communicate. Insofar as that's because you want to talk to people more than you want to talk about anything, that's relatable, but to potential conversers it's quite short on affordances. To the extent that the post is itself supposed to invite conversation, I would definitely suggest including more discussion of what interests you. (Also, unsolicited advice, doing an intro post in an open thread might be a good way to start getting to know people.)

As a note, I actually found the additional personal details at the bottom of the post significantly more pleasant to read than the obviously-LLM-authored elements in the main text and your comments; IDK if it's ultimately worth the tradeoff to you, but I'd encourage you to consider the possibility of shifting toward a higher proportion of self-authored text in future posts/comments.

Comment by dirk (abandon) on Extended Interview with Zhukeepa on Religion · 2025-05-12T06:10:42.331Z · LW · GW

(e.g. Community Notes).

Elon Musk was not responsible for Community Notes. It was released multiple years before he purchased Twitter. I'm unclear on whether he's outright lied about being responsible or people are just making mistaken assumptions, but in any case I don't think you should give him credit for something he didn't do.

Comment by dirk (abandon) on adamzerner's Shortform · 2025-05-12T00:08:03.603Z · LW · GW

IMO summaries and reviews rarely capture all the content of a book; it would have to be an extraordinarily fluff-laden piece of nonfiction to be perfectly replaceable.

Comment by dirk (abandon) on How do I design long prompts for thinking zero shot systems with distinct equally distributed prompt sections (mission, goals, memories, how-to-respond,... etc) and how to maintain llm coherence? · 2025-05-11T23:24:08.927Z · LW · GW

Per https://eightyonekilograms.tumblr.com/post/772774450949177344/i-work-at-google-yes-this-is-basically-correct , long LLM context windows are basically just short windows extended with imperfect hacks, so the loss of coherence is probably hard to avoid.

Comment by dirk (abandon) on Eukryt Wrts Blg · 2025-05-10T20:32:20.613Z · LW · GW

According to eukaryote herself, the fact that his claims are outside the overton window is not the reason she dislikes them; rather, it's that they are racist. I don't think I am being obtuse; I think you're pretending the two are synonymous.

Comment by dirk (abandon) on Interest In Conflict Is Instrumentally Convergent · 2025-05-10T19:50:41.926Z · LW · GW

I think he means you should design a trustless system, à la public key cryptography.
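(For concreteness, a minimal sketch of that trustless pattern in Python, using the third-party cryptography package; the example is my own illustration of the idea, not something from the thread:)

```python
# Trustless verification via public-key signatures: anyone can check a
# claim using only the published public key, with no trusted middleman.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # kept secret by the signer
public_key = private_key.public_key()       # published openly

message = b"a claim anyone should be able to verify"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if forged or altered
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```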

Comment by dirk (abandon) on Eukryt Wrts Blg · 2025-05-10T19:31:43.872Z · LW · GW

The claim cubefox made was that eukaryote disliked Cremieux for saying things outside the overton window. By clarifying that she instead disliked Cremieux for being racist (and just generally interpersonally unpleasant) eukaryote was not dodging the point but directly addressing it.

Comment by dirk (abandon) on Eukryt Wrts Blg · 2025-05-10T17:10:35.946Z · LW · GW

I think it's counter to the spirit of rationalist discourse to ban the hypothesis that someone is racist. Rationalism is about following the evidence wherever it leads, not about keeping people's feelings from being hurt.

Comment by dirk (abandon) on Which journalists would you give quotes to? [one journalist per comment, agree vote for trustworthy] · 2025-05-09T00:31:22.645Z · LW · GW

Having clicked through to the link you posted, it looks like what happened is that she made a tumblr post claiming to be sincerely upset about mistreatment of immigrants, contrary to what she described as conservative assumptions that people are simply pretending to care in order to score points against Trump. The poster you linked ran a search for the terms "immigrant" and "borders" on Vox, did not find any articles from her (they seemed to be interested specifically in criticism of Biden, but there were no articles about Trump either), and decided this was proof that liberals were, indeed, pretending to care in order to score points against Trump. The fact that you treat this as evidence against Kelsey's character makes me think less of you, not her.

Comment by dirk (abandon) on jacquesthibs's Shortform · 2025-04-18T01:20:51.353Z · LW · GW

I assume young, naive, and optimistic. (There's a humor element here, in that niplav is referencing a snowclone, afaik originating in this tweet which went "My neighbor told me coyotes keep eating his outdoor cats so I asked how many cats he has and he said he just goes to the shelter and gets a new cat afterwards so I said it sounds like he’s just feeding shelter cats to coyotes and then his daughter started crying.", so it may have been added to make the cadence more similar to the original tweet's).

Comment by dirk (abandon) on College Advice For People Like Me · 2025-04-16T21:34:39.804Z · LW · GW

Your grievance with your former employer seems to me to have little relevance to how would-be college students should plan to spend their time, and even if it had, you haven't shared enough detail for people to judge your report as accurate (assuming this is in fact the case).

Comment by dirk (abandon) on Why Were We Wrong About China and AI? A Case Study in Failed Rationality · 2025-04-15T21:23:03.746Z · LW · GW

His lack of reply probably means he doesn't want to engage with you, likely due to what he described as "your combative and sensationalistic attitude."

Comment by dirk (abandon) on A Dissent on Honesty · 2025-04-15T19:43:55.267Z · LW · GW

This is directionally correct and most lesswrongers could probably benefit from taking the advice herein, but goes too far (possibly as deliberate humor? The section about Flynn especially was quite funny XD).

I do take issue with the technical-truths section; I think using technical truths to trick people, while indeed a form of lying, is quite distinct from qualifying claims which would be false if unqualified. It's true that some philistines skim texts in order to respond to vibes rather than content, but the typical reader understands qualifiers to be part of the sentences which contain them, and to affect their meaning. That is why qualifiers exist, to change the meanings of the things they qualify, and choosing to ignore their presence is a choice to ignore the actual meaning of the sentences you're ostensibly reading.

Comment by dirk (abandon) on American College Admissions Doesn't Need to Be So Competitive · 2025-04-08T18:18:15.439Z · LW · GW

There's an easy way to avoid competition for a restricted pool of elite slots: some students could go to less competitive schools.

Comment by dirk (abandon) on 2024 Unofficial LessWrong Survey Results · 2025-03-15T02:24:08.509Z · LW · GW

Sorry, I meant to change only the headings you didn't want (but that won't work for text that's already paragraph-style, so I suppose that wouldn't fix the bold issue in any case; I apologize for mixing things up!).

Testing it out in a draft, it seems like having paragraph breaks before and after a single line of bold text might be what triggers index inclusion? In which case you can likely remove the offending entries by replacing the preceding or subsequent paragraph break with a shift-enter (still hacky, but at least addressing the right problem this time XD).

Comment by dirk (abandon) on 2024 Unofficial LessWrong Survey Results · 2025-03-15T01:51:48.273Z · LW · GW

A relatively easy solution (which would, unfortunately, mess with your formatting; not sure if there's a better one that doesn't do that) might be to convert everything you don't want in there to paragraph style instead of heading 1/2/3.

Comment by dirk (abandon) on OpenAI: Detecting misbehavior in frontier reasoning models · 2025-03-11T08:08:07.174Z · LW · GW

I'm not sure the deletions are a learnt behavior—base models, or at least llama 405b in particular, do this too IME (as does the fine-tuned 8b version).

Comment by dirk (abandon) on Why it's so hard to talk about Consciousness · 2025-03-04T04:43:44.728Z · LW · GW

And I think you believe others to experience this extra thing because you have failed to understand what they're talking about when they discuss qualia.

Comment by dirk (abandon) on Thread for Sense-Making on Recent Murders and How to Sanely Respond · 2025-02-08T22:00:01.243Z · LW · GW

Ziz believes her entire hemisphere theory is an infohazard (IIRC she believes it was partially responsible for Pasek's death), so terms pertaining to it are separate from the rest of her glossary.

Comment by dirk (abandon) on eliminating bias through language? · 2025-02-07T02:30:30.212Z · LW · GW

Neither of them is exactly what you're looking for, but you might be interested in Lojban, which aims to be syntactically unambiguous, and Ithkuil, which aims to be extremely information-dense as well as to reduce ambiguity. With regards to logical languages (ones which, like Lojban, aim for each statement to have a single possible interpretation), I also found Toaq and Eberban just now while looking up Lojban, though these have fewer speakers.