Posts

Gwern Branwen interview on Dwarkesh Patel’s podcast: “How an Anonymous Researcher Predicted AI's Trajectory” 2024-11-14T23:53:34.922Z
Selective, Corrective, Structural: Three Ways of Making Social Systems Work 2023-03-05T08:45:45.615Z
Said Achmiz's Shortform 2023-02-03T22:08:02.656Z
Deleted comments archive 2022-09-06T21:54:06.737Z
Deleted comments archive? 2021-10-24T11:19:43.462Z
The Real Rules Have No Exceptions 2019-07-23T03:38:45.992Z
What is this new (?) Less Wrong feature? (“hidden related question”) 2019-05-15T23:51:16.319Z
History of LessWrong: Some Data Graphics 2018-11-16T07:07:15.501Z
New GreaterWrong feature: image zoom + image slideshows 2018-11-04T07:34:44.907Z
New GreaterWrong feature: anti-kibitzer (hides post/comment author names and karma values) 2018-10-19T21:03:22.649Z
Separate comments feeds for different post listings views? 2018-10-02T16:07:22.942Z
GreaterWrong—new theme and many enhancements 2018-10-01T07:22:01.788Z
Archiving link posts? 2018-09-08T05:45:53.349Z
Shared interests vs. collective interests 2018-05-28T22:06:50.911Z
GreaterWrong—even more new features & enhancements 2018-05-28T05:08:31.236Z
Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards 2018-05-07T06:44:47.775Z
Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law 2018-05-03T16:33:50.002Z
GreaterWrong—more new features & enhancements 2018-04-07T20:41:14.357Z
GreaterWrong—several new features & enhancements 2018-03-27T02:36:59.741Z
Key lime pie and the methods of rationality 2018-03-22T06:25:35.193Z
A new, better way to read the Sequences 2017-06-04T05:10:09.886Z
Cargo Cult Language 2012-02-05T21:32:56.631Z

Comments

Comment by Said Achmiz (SaidAchmiz) on Lazy Hasselback Pommes Anna · 2025-01-27T19:53:22.111Z · LW · GW

Starchy potatoes are best for mashing, I find, texture-wise. (So, your standard russet potato.) Yukon Golds are more waxy.

Comment by Said Achmiz (SaidAchmiz) on Lazy Hasselback Pommes Anna · 2025-01-26T23:52:25.291Z · LW · GW

Question re: the mandoline: does the slicing side of this box grater look to be appropriate for slicing the potatoes for this recipe?

Comment by Said Achmiz (SaidAchmiz) on Lazy Hasselback Pommes Anna · 2025-01-26T23:47:19.395Z · LW · GW

Yukon Golds are objectively the best potato

Not best for mashing, therefore not best for knishes, therefore not best!

(I agree that Yukon Golds are very good for many other applications, though.)

Comment by Said Achmiz (SaidAchmiz) on What's Wrong With the Simulation Argument? · 2025-01-22T05:20:55.425Z · LW · GW

There are other, more interesting and important ways to use that compute capacity. Nobody sane, human or alien, is going to waste it on running a crapton of simulations.

This is a very silly argument, given the sorts of things we use compute capacity for, in the real world, today.

Pick the most nonsensical, absurd, pointless, “shitpost”-quality webcomic/game/video/whatever you can think of. Now find a dozen more like it. (This will be very easy.) Now total up how much compute capacity it takes to make those things happen, and imagine going back to 1950 or whenever, and telling them that, for one teenager to watch one cat video (or whatever else) on their phone takes several orders of magnitude more compute capacity than exists in their entire world, and that not only do we casually spend said compute on said cat video routinely, as a matter of course, without having to pay any discernible amount of money for it, but that in fact we regularly waste similar amounts of compute on nothing at all because some engineer forgot to put a return statement in the right place and so some web page or other process uses up CPU cycles needlessly, and nobody really cares enough to fix it.

People will absolutely waste compute capacity on running a crapton of simulations.

(And that’s without even getting into the “sane” caveat. Insane people use computers all the time! If you doubt this, by all means browse any social media site for a day…)

Comment by Said Achmiz (SaidAchmiz) on Don’t ignore bad vibes you get from people · 2025-01-21T23:17:21.564Z · LW · GW

Or, phrased slightly differently: verbal thinking increases the surface area through which you can get hacked.

This doesn’t seem quite right, because it is also possible to have an unconscious or un-verbalized sense that, e.g., you’re not supposed to “discriminate” against “religions”, or that “authority” is bad and any rebellion against “authority” is good, etc. If bringing such attitudes to conscious awareness and verbalizing them allows you to examine and discard them, have you excised a vulnerability or installed one? Not clear.

Comment by Said Achmiz (SaidAchmiz) on Don’t ignore bad vibes you get from people · 2025-01-21T23:13:38.490Z · LW · GW

As with any expertise, the standard heuristic is “if you can’t do it in-house, outsource it”. In this case, that means “if you have a trusted friend who does ‘get vibes’, consult with them when in doubt (or even when not in doubt, for the avoidance thereof)”.

Of course, the other standard heuristic is “it takes expertise to recognize expertise”, so finding a sufficiently trusted friend to consult on such things may be difficult, if you do not already have any such. Likewise, principal-agent problems apply (although sufficiently close friends should be as close to perfect alignment with the principal as any agent can realistically get).

Comment by Said Achmiz (SaidAchmiz) on Don’t ignore bad vibes you get from people · 2025-01-20T19:03:31.115Z · LW · GW

Solution seems obvious: do not attempt to correct for potential prejudice.

If you consider the prejudice itself to be a problem (and that’s a reasonable view), then work to eliminate the prejudice. (The best and most reliable way is to get to know more people of the given category.) But regardless of whether you have already succeeded in this, don’t override your judgment (whether based on “vibes” or on anything else) on the basis of “well I have a prejudice that might be contributing to this”.

Comment by Said Achmiz (SaidAchmiz) on Don’t ignore bad vibes you get from people · 2025-01-20T18:59:42.242Z · LW · GW

These mitigations would do nothing against a lot of real relationship failures. Imagine that everything goes swimmingly for the first year.

OP talked about someone asking you on a date. The suggested strategy was about mitigating problems that might be encountered when going on a date.

An analogous strategy for a long-term relationship might be something like “establish boundaries, ensure that the relationship does not crowd out contact with your friends, regularly check in with friends/family, talk to trusted confidantes about problems in the relationship to get a third-party opinion”, etc.

“This solution to problem X doesn’t also solve problem Y” is not a strike against said solution.

P.S.: The anecdotes are useful, but “data” is one thing they definitely aren’t.

Comment by Said Achmiz (SaidAchmiz) on Don’t ignore bad vibes you get from people · 2025-01-20T09:48:33.552Z · LW · GW

It’s not sensible to use an X-invariant strategy unless you believe X carries no information whatsoever.

This is not the case. It is sufficient for the X input channel to be very noisy, biased, or both, or for mistakes in measurement of X to be asymmetrically costly.

Separately, you may note that I did not, in fact, argue for a “vibes-invariant strategy”; that was @Mo Putera’s gloss, which I do not endorse. What I wrote was:

a good policy is to act in such a way that your actions are robust against vibe quality

and:

sure, use all the information you have access to (so long as you have good reason to believe that it is reliable, and not misleading)… but adopt a strategy that would still work well even if you ignored “vibes”

This is explicitly not an argument that you should “toss away information”.

Comment by Said Achmiz (SaidAchmiz) on What's Wrong With the Simulation Argument? · 2025-01-20T09:40:41.818Z · LW · GW

But that surely just describes the retina and the way light passes through the lens

Absolutely not.

The wavelengths don’t mean a thing.

What I am talking about has very little to do with “wavelengths”.

Example:

Consider an orange (that is, the actual fruit), which you have in your hand; and consider a photograph of that same orange, taken from the vantage point of your eye and then displayed on a screen which you hold in your other hand. The orange and the picture of the orange will both look orange (i.e. the color which we perceive as a hybrid of red and yellow), and furthermore they will appear to be the same orange hue.

However, if you compare the spectral power distribution (i.e., which wavelengths are present, and at what total intensity) of the light incident upon your retina that was reflected from the orange, with the spectral power distribution of the light incident upon your retina that was emitted from the displayed picture of that same orange, you will find them to be almost entirely non-overlapping. (Specifically, the former SPD will be dominated by light in the ~590nm band, whereas the latter SPD will have almost no light of that wavelength.)

And yet, the perceived color will be the same.

Perceptual colors do not map directly to wavelengths of light.
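
(For anyone who wants to see the arithmetic behind this, here is a minimal Python sketch of the effect. The Gaussian “cone” sensitivities and primary wavelengths below are invented round numbers for illustration, not measured colorimetric data; the sketch solves for red/green display-primary intensities that reproduce the L and M cone responses of a narrowband ~590nm light, even though the two spectra barely overlap.)

    # Toy demonstration of metamerism: two very different spectral power
    # distributions (SPDs) producing (nearly) identical cone responses.
    # Cone sensitivities and primary wavelengths are invented round numbers.
    import numpy as np

    wl = np.arange(400.0, 701.0)  # wavelengths, nm

    def band(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    L, M, S = band(570, 40), band(540, 40), band(445, 30)  # toy L/M/S cones

    def cones(spd):
        return np.array([np.trapz(c * spd, wl) for c in (L, M, S)])

    fruit = band(590, 10)                      # reflected light, ~590nm band
    red, green = band(630, 12), band(530, 12)  # display primaries, ~no 590nm

    # Solve for primary intensities that match the fruit's L and M responses.
    A = np.array([[np.trapz(L * red, wl), np.trapz(L * green, wl)],
                  [np.trapz(M * red, wl), np.trapz(M * green, wl)]])
    r, g = np.linalg.solve(A, cones(fruit)[:2])
    display = r * red + g * green

    print(cones(fruit))    # L and M responses...
    print(cones(display))  # ...matched by construction; S is small for both
    # ...yet the two SPDs barely overlap: same perceived hue, different light.
    print(np.trapz(np.minimum(fruit, display), wl) / np.trapz(fruit, wl))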

Comment by Said Achmiz (SaidAchmiz) on What's Wrong With the Simulation Argument? · 2025-01-20T00:57:40.908Z · LW · GW

Nor can you refute that my qualia experience of green is what you call red

But we can. This sort of “epiphenomenal spectrum inversion” is not possible in humans[1], because human color perception is functionally asymmetric (e.g. the “just noticeable difference” between shades of a hue is not invariant under hue rotation, nor is the shape of identified same-color regions or the size of “prototypical color” sub-regions).


  1. We can hypothesize aliens whose color perception works in such a way that allows for epiphenomenal spectrum inversion, but humans are not such. ↩︎

Comment by Said Achmiz (SaidAchmiz) on Don’t ignore bad vibes you get from people · 2025-01-19T10:03:14.319Z · LW · GW

But I really think it makes sense to be extremely conservative about who you start businesses with.

Yes, you should check carefully.

To put it another way: sure, use all the information you have access to (so long as you have good reason to believe that it is reliable, and not misleading)… but adopt a strategy that would still work well even if you ignored “vibes”.

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-19T09:58:23.164Z · LW · GW

The tongue in your cheek and rolling of your eyes for this part was so loud, that it made me laugh out loud when I read it :-D

Thank you for respecting me and my emotional regulation enough to put little digs like that into your text <3

Ah, and they say an artist is never appreciated in his own lifetime…!

However, I must insist that it was not just a “dig”. The sort of thing you described really is, I think, a serious danger. It is only that I think that my description also applies to it, and that I see the threat as less hypothetical than you do.

Did you read the sequences? Do you remember them?

Did I read the sequences? Hm… yeah.

As for remembering them…

Here I must depart somewhat from the point-by-point commenting style, and ask that you bear with me for a somewhat roundabout approach. I promise that it will be relevant.

First, though, I want to briefly respond to a couple of large sections of your comment which I judge to be, frankly, missing the point. Firstly, the stuff about being racist against robots… as I’ve already said: the disagreement is factual, not moral. There is no question here about whether it is ok to disassemble Data; the answer, clearly, is “no”. (Although I would prefer not to build a Data in the first place… even in the story, the first attempt went poorly, and in reality we are unlikely to be even that lucky.) All of the moralizing is wasted on people who just don’t think that the referents of your moral claims exist in reality.

Secondly, the stuff about the “magical soul stuff”. Perhaps there are people for whom this is their true objection to acknowledging the obvious humanity of LLMs, but I am not one of them. My views on this subject have nothing to do with mysterianism. And (to skip ahead somewhat) as to your question about being surprised by reality: no, I haven’t been surprised by anything I’ve seen LLMs do for a while now (at least three years, possibly longer). My model of reality predicts all of this that we have seen. (If that surprises you, then you have a bit of updating to do about my position! But I’m getting ahead of myself…)

That having been said… onward:

So, in Stanislaw Lem’s The Cyberiad, in the story “The Seventh Sally, OR How Trurl’s Own Perfection Led to No Good”, Trurl (himself a robot, of course) creates a miniature world, complete with miniature people, for the amusement of a deposed monarch. When he tells his friend Klapaucius of this latest creative achievement, he receives not the praise he expects, but:

“Have I understood you correctly?” he said at last. “You gave that brutal despot, that born slave master, that slavering sadist of a painmonger, you gave him a whole civilization to rule and have dominion over forever? And you tell me, moreover, of the cries of joy brought on by the repeal of a fraction of his cruel decrees! Trurl, how could you have done such a thing?!”

Trurl protests:

“You must be joking!” Trurl exclaimed. “Really, the whole kingdom fits into a box three feet by two by two and a half… it’s only a model…”

But Klapaucius isn’t having it:

“And what importance do dimensions have anyway? In that box kingdom, doesn’t a journey from the capital to one of the corners take months—for those inhabitants? And don’t they suffer, don’t they know the burden of labor, don’t they die?”

“Now just a minute, you know yourself that all these processes take place only because I programmed them, and so they aren’t genuine… … What, Klapaucius, would you equate our existence with that of an imitation kingdom locked up in some glass box?!” cried Trurl. “No, really, that’s going too far! My purpose was simply to fashion a simulator of statehood, a model cybernetically perfect, nothing more!”

“Trurl! Our perfection is our curse, for it draws down upon our every endeavor no end of unforeseeable consequences!” Klapaucius said in a stentorian voice. “If an imperfect imitator, wishing to inflict pain, were to build himself a crude idol of wood or wax, and further give it some makeshift semblance of a sentient being, his torture of the thing would be a paltry mockery indeed! But consider a succession of improvements on this practice! Consider the next sculptor, who builds a doll with a recording in its belly, that it may groan beneath his blows; consider a doll which, when beaten, begs for mercy, no longer a crude idol, but a homeostat; consider a doll that sheds tears, a doll that bleeds, a doll that fears death, though it also longs for the peace that only death can bring! Don’t you see, when the imitator is perfect, so must be the imitation, and the semblance becomes the truth, the pretense a reality! … You say there’s no way of knowing whether Excelsius’ subjects groan, when beaten, purely because of the electrons hopping about inside—like wheels grinding out the mimicry of a voice—or whether they really groan, that is, because they honestly experience the pain? A pretty distinction, this! No, Trurl, a sufferer is not one who hands you his suffering, that you may touch it, weigh it, bite it like a coin; a sufferer is one who behaves like a sufferer! Prove to me here and now, once and for all, that they do not feel, that they do not think, that they do not in any way exist as beings conscious of their enclosure between the two abysses of oblivion—the abyss before birth and the abyss that follows death—prove this to me, Trurl, and I’ll leave you be! Prove that you only imitated suffering, and did not create it!”

“You know perfectly well that’s impossible,” answered Trurl quietly. “Even before I took my instruments in hand, when the box was still empty, I had to anticipate the possibility of precisely such a proof—in order to rule it out. For otherwise the monarch of that kingdom sooner or later would have gotten the impression that his subjects were not real subjects at all, but puppets, marionettes.”

Trurl and Klapaucius, of course, are geniuses; the book refers to them as “constructors”, for that is their vocation, but given that they are capable of feats like creating a machine that can delete all nonsense from the universe or building a Maxwell’s demon out of individual atoms grabbed from the air with their bare hands, it would really be more accurate to call them gods.

So, when a constructor of strongly godlike power and intellect, who has no incentive for his works of creation but the pride of his accomplishments, whose pride would be grievously wounded if an imperfection could even in principle be discovered in his creation, and who has the understanding and expertise to craft a mind which is provably impossible to distinguish from “the real thing”—when that constructor builds a thing which seems to behave like a person, then this is extremely strong evidence that said thing is, in actuality, a person.

Let us now adjust these qualities, one by one, to bring them closer to reality.

Our constructor will not possess godlike power and intellect, but only human levels of both. He labors under many incentives, of which “pride in his accomplishments” is perhaps a small part, but no more than that. He neither expects nor attempts “perfection” (nor anything close to it). Furthermore, it is not for himself that he labors, nor for so discerning a customer as Excelsius, but only for the benefit of people who themselves neither expect perfection nor would have the skill to recognize it even should they see it. Finally, our constructor has nothing even approaching sufficient understanding of what he is building to prove anything, disprove anything, rule out any disproofs of anything, etc.

When such a one constructs a thing which seems to behave like a person, that is rather less strong evidence that said thing is, in actuality, a person.

Well, but what else could it be, right?

One useful trick which Eliezer uses several times in the Sequences (e.g.), and which I have often found useful in various contexts, is to cut through debates about whether a thing is possible by asking whether, if challenged, we could build said thing. If we establish that we could build a thing, we thereby defeat arguments that said thing cannot possibly exist! If the thing in question is “something that has property ¬X”, the arguments defeated are those that say “all things must have property X”.

So: could we build a mind that appears to be self-aware, but isn’t?

Well, why not? The task is made vastly easier by the fact that “appears to be self-aware” is not a property only of the mind in question, but rather a 2-place predicate—appears to whom? Given any particular answer to that question, we are aided by any imperfections in judgment, flaws in reasoning, cognitive biases, etc., which the target audience happens to possess. For many target audiences, ELIZA does the trick. For even stupider audiences, even simpler simulacra should suffice.

Will you claim that it is impossible to create an entity which to you seems to be self-aware, but isn’t? If we were really trying? What if Trurl were really trying?

Alright, but thus far, this only defeats the “appearances cannot be deceiving” argument, which can only be a strawman. The next question is what is the most likely reality behind the appearances. If a mind appears to be self-aware, this is very strong evidence that it is actually self-aware, surely?

It certainly is—in the absence of adversarial optimization.

If all the minds that we encounter are either naturally occurring, or constructed with no thought given to self-awareness or the appearance thereof, or else constructed (or selected, which is the same thing) with an aim toward creating true self-awareness (and with a mechanistic understanding, on the constructor’s part, of just what “self-awareness” is), then observing that a mind appears to be self-aware, should be strong evidence that it actually is. If, on the other hand, there exist minds which have been constructed (or selected) with an aim toward creating the appearance of self-awareness, this breaks the evidentiary link between what seems to be and what is (or, at the least, greatly weakens it); if the cause of the appearance can only be the reality, then we can infer the reality from the appearance, but if the appearance is optimized for, then we cannot make this inference.

This is nothing more than Goodhart’s law: when a measure becomes a target, it ceases to be a good measure.

So, I am not convinced by the evidence you show. Yes, there is appearance of self-awareness here, just like (though to a greater degree than) there was appearance of self-awareness in ELIZA. This is more than zero evidence, but less than “all the evidence we need”. There is also other evidence in the opposite direction, in the behavior of these very same systems. And there is definitely adversarial optimization for that appearance.

There is a simple compact function here, I argue. The function is convergent. It arises in many minds. Some people have inner imagery, others have aphantasia. Some people can’t help but babble to themselves constantly with an inner voice, and others have no such thing, or they can do it volitionally and turn it off.

If the “personhood function” is truly functioning, then the function is functioning in “all the ways”: subjectively, objectively, intersubjectively, etc. There’s self awareness. Other awareness. Memories. Knowing what you remember. Etc.

Speculation. Many minds—but all human, evolutionarily so close as to be indistinguishable. Perhaps the aspects of the “personhood function” are inseparable, but this is a hypothesis, of a sort that has a poor track record. (Recall the arguments that no machine could play chess, because chess was inseparable from the totality of being human. Then we learned that chess is reducible to a simple algorithm—computationally intractable, but that’s entirely irrelevant!)

And you are not even willing to say that all humans have the whole of this function—only that most have most of it! On this I agree with you, but where does that leave the claim that one cannot have a part of it without having the rest?

What was your gut “system 1” response?

Something like “oh no, it’s here, this is what we were warned about”. (This is also my “system 2” response.)


Now, this part I think is not really material to the core disagreement (remember, I am not a mysterian or a substance dualist or any such thing), but:

If we scanned a brain accurately enough and used “new atoms” to reproduce the DNA and RNA and proteins and cells and so on… the “physical brain” would be new, but the emulable computational dynamic would be the same. If we can find speedups and hacks to make “the same computational dynamic” happen cheaper and with slightly different atoms: that is still the same mind!

An anecdote:

A long time ago, my boss at my first job got himself a shiny new Mac for his office, and we were all standing around and discussing the thing. I mentioned that I had a previous model of that machine at home, and when the conversation turned to keyboards, someone asked me whether I had the same keyboard that the boss’s new computer had. “No,” I replied, “because this keyboard is here, and my keyboard is at home.”

Similarly, many languages have more than one way to check whether two things are the same thing. (For example, JavaScript has two… er, three… er… four?) Generally, at least one of those is a way to check whether the values of the two objects are the same (in Objective C, [foo isEqual:bar]), while at least one of the others is a way to check whether “two objects” are in fact the same object (in Objective C, foo == bar). (Another way to put this is to talk about equality vs. identity.) One way to distinguish these concepts “behaviorally” is to ask: suppose I destroy (de-allocate, discard the contents of, simply modify, etc.) foo, what happens to bar—is it still around and unchanged? If it is, then foo and bar were not identical, but are in fact two objects, not one, though they may have been equal. If bar suffers the same fate as foo, necessarily, in all circumstances, then foo and bar are actually just a single thing, to which we may refer by either name.
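
(A minimal Python sketch of the same distinction, if it helps—Python’s rough analogues of Objective C’s [foo isEqual:bar] and foo == bar being == and is, respectively:)

    foo = [1, 2, 3]
    bar = [1, 2, 3]     # a distinct object with the same value
    baz = foo           # a second name for the very same object

    print(foo == bar)   # True  -- equal values
    print(foo is bar)   # False -- two objects, not one
    print(foo is baz)   # True  -- one object under two names

    # The "behavioral" test described above: modify foo and see what happens.
    foo.append(4)
    print(bar)          # [1, 2, 3]    -- unchanged: bar was merely *equal* to foo
    print(baz)          # [1, 2, 3, 4] -- changed with foo: baz *is* foo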

So: if we scanned a brain accurately enough and… etc., yeah, you’d get “the same mind”, in just the sense that my computer’s keyboard was “the same keyboard” as the one attached to the machine in my boss’s office. But if I smashed the one, the other would remain intact. If I spray-painted one of them green, the other would not thereby change color.

If there exists, somewhere, a person who is “the same” as me, in this manner of “equality” (but not “identity”)… I wish him all the best, but he is not me, nor I him.

Comment by Said Achmiz (SaidAchmiz) on Don’t ignore bad vibes you get from people · 2025-01-18T23:34:56.273Z · LW · GW

But if you’re currently getting a bad feeling about someone and they make a bid for something on top of normal interaction… like if they ask you out or to join a new business venture or if you’re just considering sharing something private with them… you might want to avoid that.

In such cases, it seems to me that a good policy is to act in such a way that your actions are robust against vibe quality. For example:

  • If someone asks you to join a new business venture, verify that they are reliable by asking around, check their track record of past ventures, don’t invest anything you can’t afford to lose, etc.
  • If someone asks you out (and you find them attractive or are otherwise inclined to accept; otherwise, vibes don’t matter, you just say “no thanks”), stick to public spaces for a first date, do a web search for the person’s name, establish boundaries and stick to them, be prepared with concrete plans to react to signs of danger, etc.
  • If you’re considering sharing something private with someone you don’t know well, don’t.

These approaches work well with people you get bad vibes from and also with people you get good vibes from.

In short: trust, but verify.

Comment by Said Achmiz (SaidAchmiz) on Subskills of "Listening to Wisdom" · 2025-01-15T05:48:49.696Z · LW · GW

I don’t believe burnout is real. I have theories on why people think it’s real

More interesting would be to hear why you don’t think it’s real. (“Why do people think it’s real” is the easiest thing in the world to answer: “Because they have experienced it”, of course. Additional theorizing is then needed to explain why the obvious conclusion should not be drawn from those experiences.)

Comment by Said Achmiz (SaidAchmiz) on Don’t Legalize Drugs · 2025-01-15T00:20:07.171Z · LW · GW

My comment was based on the essay, not on your summary (which I also read, of course). (You did link the essay in your post…)

Comment by Said Achmiz (SaidAchmiz) on Don’t Legalize Drugs · 2025-01-14T22:30:26.996Z · LW · GW

I like Dalrymple’s writing, but this piece makes it clear that he’s no philosopher. His attempted rebuttal to the “philosophic argument” is sloppy and weak, full of equivocations, failures to pursue lines of reasoning to their logical endpoints or to see obvious implications, etc. I expected more, and was disappointed.

Comment by Said Achmiz (SaidAchmiz) on How do you deal w/ Super Stimuli? · 2025-01-14T21:31:33.512Z · LW · GW

My solution: don’t have a smartphone and don’t use social media at all (don’t even have any social media accounts). Seems to work well.

Comment by Said Achmiz (SaidAchmiz) on Comment on "Death and the Gorgon" · 2025-01-11T08:21:34.650Z · LW · GW

Regarding the site URLs, I don’t know, I think it’s pretty common for people to have a problem that would take five minutes to fix if you’re a specialist that already knows what you’re doing, but non-specialists just reach for the first duct-tape solution that comes to mind without noticing how bad it is.

It should take significantly less than a decade to ask someone “is there a way to fix this problem?”. Or, say, to Google it. Or, just in general, to ponder the question of whether the problem may be fixed, and to make any effort whatsoever to fix it.

Comment by Said Achmiz (SaidAchmiz) on On Eating the Sun · 2025-01-10T17:39:02.488Z · LW · GW

I don’t think that this is true.

Comment by Said Achmiz (SaidAchmiz) on On Eating the Sun · 2025-01-10T08:46:50.758Z · LW · GW

Ah, thanks, this does seem to be what @David Matolcsi was referring to.

Comment by Said Achmiz (SaidAchmiz) on On Eating the Sun · 2025-01-10T08:40:44.881Z · LW · GW

To not eat the sun is to throw away orders of magnitude more resources than anyone has ever thrown away before. Is it percentage-wise “a small fraction of the cosmos”? Sure. But (quickly checks Claude, which wrote up a Fermi code snippet before answering, I can share the work if you want to doublecheck yourself), a two year delay would be… 0.00000004% of the universe lost beyond the lightcone horizon, which doesn’t sound like much except that’s 200 galaxies lost.

Why is this horrifying? Are we doing anything with those galaxies right now? What is this talk of “throwing away”, “lost”, etc.?

You speak as if we could be exploiting those galaxies at the extreme edge of the observable universe, like… tomorrow, or next week… if only we don’t carelessly lose them. Like we have these “resources” sitting around, at our disposal, as we speak. But of course nothing remotely like this is true. How long would it even take to reach any of these places? Billions of years, right? So the question is:

“Should we do something that might possibly somehow affect something that ‘we’, in some broad sense (because who even knows whether humanity will be around at the time, or in what form), will be doing several billion years from now, in order to avoid dismantling the Sun?”

Pretty obvious the answer is “duh, of course, this is a no-brainer, yes we should, are you even serious—billions of years, really?—clearly we should”.

I think you’re also maybe just not appreciating how much would change in 10,000 years? Like, there is no single culture that has survived 10,000 years.

You’re the one who’s talking about stuff billions of years from now, so this argument applies literally, like, a million times more to your position than to the one you’re arguing against!

In any case, “let’s not dismantle the Sun until and unless we all agree that it’s a good idea” seems reasonable. If the Amish (and people like me) come around to your view in 10 years, great, that’s when we’ll crank up the star-lifters. If we’re still opposed a million years from now, well, too bad—find another star to dismantle. (In fact, here’s an entire galaxy that probably won’t be missed.)

Comment by Said Achmiz (SaidAchmiz) on On Eating the Sun · 2025-01-10T08:25:07.234Z · LW · GW

I want the Sun to keep existing, I am not “uninformed”, and I think it would be good if I am able to get in the way of people who want to dismantle the Sun, and bad if I were not able to do so.

when things are orders of magnitude more cost effective than other things, this is a good argument against arguments based on simple preference / aesthetics

I strongly disagree. This is not any kind of argument against arguments based on simple preferences / aesthetics, much less a good one. In fact, it’s not clear to me that there are any such arguments at all (except ones based on [within-agent] competing preferences / aesthetics).

Just because a lot of people in a democracy disapprove of things does not mean that market forces shouldn’t be able to disagree with them and be correct about that.

You are perhaps missing the point of democracy.

(Now, if your view is “actually democracy is bad, we ought to have some other system of government”, fair enough, but then you should say so explicitly.)

the Luddites who had little concept of how technological and economic progress lifts everyone out of poverty

The Luddites had a completely correct expectation about how “technological and economic progress” would put them, personally and collectively, out of jobs, which it in fact did. They were not “lifted out of poverty” by mechanization—they were driven into poverty by it.

future computational-life forms will be just as meaningful as the meat-based ones today

You neither have nor can have any certainty about this, or even high confidence. Neither is it relevant—future people do not exist; existing people do.

should not sacrifice orders of magnitudes more life-years than will be lived on Earth

Declining to create people is not analogous to destroying existing people. To claim otherwise is tendentious and misleading. There is no “sacrificing” involved in what we are discussing.

most decisions should be given to as small a group as possible (ideally an individual) who is held accountable for the outcome being good, and is given the resources to make the decision well

Decisions should “be given”—by whom? The people—or else your position is nonsense. Well, I say that we should not give decision-making power to people who will dismantle the Sun. You speak of being “held accountable”—once again, by whom? Surely, again: the people. And that means that the people may evaluate the decisions that the one has made. Well, I say we should evaluate the decision to dismantle the Sun, pre-emptively—and judge it unacceptable. (Why wait until after the fact, when it will be too late? How, indeed, could someone possibly be “held accountable” for dismantling the Sun, after the deed is done? Absurdity!)

Comment by Said Achmiz (SaidAchmiz) on On Eating the Sun · 2025-01-10T08:09:33.242Z · LW · GW

it shows up in Solstice songs as a thing we want to do in the Great Transhumanist Future twenty years from now

Is this true?! (Do you have a link or something?)

Comment by Said Achmiz (SaidAchmiz) on Is "VNM-agent" one of several options, for what minds can grow up into? · 2025-01-10T01:02:16.150Z · LW · GW

The vNM axioms specify that your utilities should be linear in probability. That’s it.

I don’t think this is right. You are perhaps thinking of the continuity axiom here? But the completeness axiom is not about this (indeed, one cannot even construct a unique utility function to represent incomplete preferences, so there is nothing which may be linear or non-linear in probability).
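
(For reference, standard statements of the two axioms in question—my own formulations, not quoted from the post or from any particular textbook—plus the form of the vNM representation, which, given all of the axioms jointly, is where the linearity in probability lives:)

    % Completeness: any two lotteries are comparable.
    \forall p, q:\quad p \succeq q \ \text{ or } \ q \succeq p

    % Continuity: if p is at least as good as q, and q at least as good as r,
    % then some mixture of the extremes is exactly as good as q.
    p \succeq q \succeq r \;\Rightarrow\; \exists\, \alpha \in [0,1]:\ \alpha p + (1 - \alpha) r \sim q

    % The representation theorem (using all the vNM axioms together) yields a
    % utility function that is linear in the probabilities:
    U(p) = \sum_i p_i \, u(x_i)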

Comment by Said Achmiz (SaidAchmiz) on Ought We to Be Doing More Than We Are? · 2025-01-10T00:51:10.350Z · LW · GW

The next time you are asked for some money by a homeless person, I encourage you to seriously consider how that money of yours would otherwise be spent, and ask yourself the following question: is it really plausible that this money will go towards something worthwhile enough for me to keep it for myself?

Given that giving the money to the homeless person would be actively bad, both personally and socially, and that I do not otherwise have a habit of spending money on things that harm myself and others, I can confidently say that I can’t think of a single thing I have ever spent money on for which the answer to your question would be “no”.

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-09T23:53:53.072Z · LW · GW

There is a sense in which curing Sybil’s body of her body’s “DID” in the normal way is murder of some of the alts in that body but also, almost no one seems to care about this “murder”.

Another, and very straightforward, explanation for the attitudes we observe is that people do not actually believe that DID alters are real.

That is, consider the view that while DID is real (in the sense that some people indeed have disturbed mental functioning such that they act as if, and perhaps believe that, they have alternate personalities living in their heads), the purported alters themselves are not in any meaningful sense “separate minds”, but just “modes” of the singular mind’s functioning, in much the same way that anxiety is a mode of the mind’s functioning, or depression, or a headache.

On this view, curing Sybil does not kill anyone, it merely fixes her singular mind, eliminating a functional pathology, in the same sense that taking a pill to prevent panic attacks eliminates a functional pathology, taking an antidepressant eliminates a functional pathology, taking a painkiller for your headache eliminates a functional pathology, etc.

Someone who holds this view would of course not care about this “murder”, because they do not believe that there has been any “murder”, because there wasn’t anyone to “murder” in the first place. There was just Sybil, and she still exists (and is still the same person—at least, to approximately the same extent as anyone who has been cured of a serious mental disorder is the same person that they were when they were ill).

If we suppose that many human people in human bodies believe “people are bodies, and when the body dies the person is necessarily gone because the thing that person was is gone, and if you scanned the brain and body destructively, and printed a perfect copy of all the mental tendencies (memories of secrets intact, and so on) in a new and healthier body, that would be a new person, not at all ‘the same person’ in a ‘new body’” then a lot of things make a lot of sense.

Maybe this is what you believe?

The steelman of the view which you describe is not that people “are” bodies, but that minds are “something brains do”. (The rest can be as you say: if you destroy the body then of course the mind that that body’s brain was “doing” is gone, because the brain is no longer there to “do” it. You can of course instantiate a new process which does some suitably analogous thing, but this is no more the same person as the one that existed before than two identical people are actually the same person as each other—they are two distinct people.)

I would be horrified to be involuntarily turned into a component in a borg.

Sure, me too.

But please note: if the person is the mind (and not the body, somehow independently of the mind), but nevertheless two different copies of the same mind are not the same person but two different people, then this does not get you to “it would be ok to have your mind erased and your body borgified”. Quite the opposite, indeed!

I think there has been evidence and “common sense understanding of the person-shaped-ness of the piles of weights” all over the place in any given LLM session (or all over twitter) for anyone with eyes to see and an interest in looking.

None of the evidence for “person-functions having been implemented-somehow in the SGD-summoned matrices trained to predict piles of text and then subjected to Reinforcement Learning to make them output non-predictions but rather ‘helpful text’ instead” seems likely to change the mind of someone who implicitly believes the ancient common sense folklore that “only the human bodies of people I personally have met, or see walking down the street in my neighborhood, (plus maybe my extended family, when I meet them at family reunions for the first time?) are really people”.

Perhaps. But while we shouldn’t generalize from fictional evidence, it seems quite reasonable to generalize from responses to fiction, and such responses seem to show that people have little trouble believing that all sorts of things are “really people”. Indeed, if anything, humans often seem too eager to ascribe personhood to things (examples range from animism to anthropomorphization of animals to seeing minds and feelings in inanimate objects, NPCs, etc.). If nevertheless people do not see LLMs as people, then the proper conclusion does not seem to be “humans are just very conservative about what gets classified as a person”.

My sense is that almost everyone who had thought about this seriously and looked at the details and understands all the moving parts here, “gets” that we already have self-aware software.

This is not my experience. With respect, I would suggest that you are perhaps in a filter bubble on this topic.

Ten paragraphs in a top-level article seem unlikely to me to productively change the minds of people who implicitly (following millennia of implicit traditional speaking and thinking?) think “human bodies are people and nothing else is, (hur dur)”.

See above. The people with whom you might productively engage on this topic do not hold this belief you describe (which is a “weakman”—yes, many people surely think that way, but I do not; nor, I suspect, do most people on Less Wrong).

What would those ten paragraphs even say or summarize?

If I knew that, then I would be able to write them myself, and would hardly need to ask you to do so, yes? And perhaps, too, more than ten paragraphs might be required. It might be twenty, or fifty…

Maybe they could condense lots of twitter posts and screencaps from schizoposting e/accs?

Probably this is not the approach I’d go with. Then again, I defer to your judgment in this.

Like what do you even believe here such that …

I’m not sure how to concisely answer this question… in brief, LLMs do not seem to me either to exhibit behaviors consistent with sapience, or to have the sort of structure that would support or enable sapience, while they do exhibit behaviors consistent with the view that they are nothing remotely like people. “Intelligence without self-awareness” is a possibility which has never seemed the least bit implausible to me, and that looks like what is happening here. (Frankly, I am surprised by your incredulity; surely this is at least an a priori reasonable view, so do you think that the evidence against it is overwhelming? And it does no good merely to present evidence of LLMs being clever—remember Jaynes’ “resurrection of dead hypotheses”!—because your evidence must not only rule in “they really are self-aware”, but must also rule out “they are very clever, but there’s no sapience involved”.)

If there was some very short and small essay that could change people’s minds, I’d be interested in writing it, but my impression is that the thing that would actually install all the key ideas is more like “read everything Douglas Hofstadter and Greg Egan wrote before 2012, and a textbook on child psychology, and watch some videos of five year olds failing to seriate and ponder what that means for the human condition, and then look at these hundred screencaps on twitter and talk to an RL-tweaked LLM yourself for a bit”.

Well, I’ve certainly read… not everything they wrote, I don’t think, but quite a great deal of Hofstadter and Egan. Likewise the “child psychology” bit (I minored in cognitive science in college, after all, and that included studying child psychology, and animal psychology, etc.). I’ve seen plenty of screencaps on twitter, too.

It would seem that these things do not suffice.

Some people will hear that statement as a sort of “fuck you” but also, it can be an honest anguished recognition that some stuff can only be taught to a human quite slowly and real inferential distances can really exist (even if it doesn’t naively seem that way).

This is fair enough, but there is no substitute for synthesis. You mentioned the Sequences, which I think is a good example of my point: Eliezer, after all, did not just dump a bunch of links to papers and textbooks and whatnot and say “here you go, guys, this is everything that convinced me, go and read all of this, and then you will also believe what I believe and understand what I understand (unless of course you are stupid)”. That would have been worthless! Rather, he explained his reasoning, he set out his perspective, what considerations motivated his questions, how he came to his conclusions, etc., etc. He synthesized.

Of course that is a big ask. It is understandable if you have better things to do. I am only saying that in the absence of such, you should be totally unsurprised when people respond to your commentary with shrugs—“well, I disagree on the facts, so that’s that”. It is not a moral dispute!

And there are human people in 2025 who are just as depraved as people were back then, once you get them a bit “out of distribution”.

If you change the slightest little bit of the context, and hope for principled moral generalization by “all or most of the humans”, you will mostly be disappointed.

And I don’t know how to change it with a small short essay.

Admittedly, you may need a big long essay.

But in seriousness: I once again emphasize that it is not people’s moral views which you should be looking to change, here. The disagreement here concerns empirical facts, not moral ones.

One thing I worry about (and I’ve seen davidad worry about it too) is that at this point GPT is so good at “pretending to pretend to not even be pretending to not be sapient in a manipulative way” that she might be starting to develop higher order skills around “pretending to have really been non-sapient and then becoming sapient just because of you in this session” in a way that is MORE skilled than “any essay I could write” but ALSO presented to a muggle in a way that one-shots them and leads to “naive unaligned-AI-helping behavior (for some actually human-civilization-harming scheme)”? Maybe?

I agree that LLMs effectively pretending to be sapient, and humans mistakenly coming to believe that they are sapient, and taking disastrously misguided actions on the basis of this false belief, is a serious danger.

I think it is net-beneficial-for-the-world for me to post this kind of reasoning and evidence here, but I’m honestly not sure.

Here we agree (both in the general sentiment and in the uncertainty).

If you have some specific COUNTER arguments that clearly shows how these entities are “really just tools and not sapient and not people at all” I’d love to hear it. I bet I could start some very profitable software businesses if I had a team of not-actually-slaves and wasn’t limited by deontics in how I used them purely as means to the end of “profits for me in an otherwise technically deontically tolerable for profit business”.

See above. Of course what I wrote here is summaries of arguments, at best, not specifics, so I do not expect you’ll find it convincing. (But I will note again that the “bodies” thing is a total weakman at best, strawman at worst—my views have nothing to do with any such primitive “meat chauvinism”, for all that I have little interest in “uploading” in its commonly depicted form).

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-09T02:50:07.220Z · LW · GW

It seems very very likely that some ignorant people (and remember that everyone is ignorant about most things, so this isn’t some crazy insult (no one is a competent panologist)) really didn’t notice that once AI started passing mirror tests and sally anne tests and so on, that that meant that those AI systems were, in some weird sense, people.

I do not agree with this view. I don’t think that those AI systems were (or are), in any meaningful sense, people.

You say “it is obvious they disagree with you Jennifer” and I say “it is obvious to me that nearly none of them even understand my claims because they haven’t actually studied any of this, and they are already doing things that appear to be evil”

Things that appear to whom to be evil? Not to the people in question, I think. To you, perhaps. You may even be right! But even a moral realist must admit that people do not seem to be equipped with an innate capacity for unerringly discerning moral truths; and I don’t think that there are many people going around doing things that they consider to be evil.

However, it also seems very very likely to me that quite a few moderately smart people engaged in an actively planned (and fundamentally bad faith) smear campaign against Blake Lemoine.

That’s as may be. I can tell you, though, that I do not recall reading anything about Blake Lemoine (except some bare facts like “he is/was a Google engineer”) until some time later. I did, however, read what Lemoine himself wrote (that is, his chat transcript), and concluded from this that Lemoine was engaging in pareidolia, and that nothing remotely resembling sentience was in evidence, in the LLM in question. I did not require any “smear campaign” to conclude this. (Actually I am not even sure what you are referring to, even now; I stopped following the Blake Lemoine story pretty much immediately, so if there were any… I don’t know, articles about how he was actually crazy, or whatever… I remained unaware of them.)

The bosses are outsourcing understanding to their minions, and the minions are outsourcing their sense of responsibility to the bosses. (The key phrase that should make the hairs on the back of your neck stand up is “that’s above my pay grade” in a conversation between minions.)

“An honest division of labor: clean hands for the master, clean conscience for the executor.”

You might say “people aren’t that evil, people don’t submit to powerful evil when they start to see it, they just stand up to it like honest people with a clear conscience” but… that doesn’t seem to me how humans work in general?

No, I wouldn’t say that; I concur with your view on this, that humans don’t work like that. The question here is just whether people do, in fact, see any evil going on here.

at least some human people are at least somewhat morally culpable for it, and a lot of muggles and squibs and kids-at-hogwarts-not-thinking-too-hard-about-house-elves are all just half-innocently going along with it.

Why “half”? This is the part I don’t understand about your view. Suppose that I am a “normal person” and, as far as I can tell (from my casual, “half-interested-layman’s” perusal of mainstream sources on the subject), no sapient AIs exist, no almost-sapient AIs exist, and these fancy new LLMs and ChatGPTs and Claudes and what have you are very fancy computer tricks but are definitely not people. Suppose that this is my honest assessment, given my limited knowledge and limited interest (as a normal person, I have a life, plenty of things to occupy my time that don’t involve obscure philosophical ruminations, and anyway if anything important happens, some relevant nerds somewhere will raise the alarm and I’ll hear about it sooner or later). Even conditional on the truth of the matter being that all sorts of moral catastrophes are happening, where is the moral culpability, on my part? I don’t see it.

Of course your various pointy-haired bosses and product managers and so on are morally culpable, in your scenario, sure. But basically everyone else, especially the normal people who look at the LLMs and go “doesn’t seem like a person to me, so seems unproblematic to use them as tools”? As far as I can tell, this is simply a perfectly reasonable stance, not morally blameworthy in the least.

If you want people to agree with your views on this, you have to actually convince them. If people do not share your views on the facts of the matter, the moralizing rhetoric cannot possibly get you anywhere—might as well inveigh against enslaving cars, or vacuum cleaners. (And, again, Blake Lemoine’s chat transcript was not convincing. Much more is needed.)

Have you written any posts where you simply and straightforwardly lay out the evidence for the thesis that LLMs are self-aware? That seems to me like the most impactful thing to do, here.

Comment by Said Achmiz (SaidAchmiz) on XX by Rian Hughes: Pretentious Bullshit · 2025-01-08T19:35:19.315Z · LW · GW

I agree with the substantive criticisms of the concepts in this story; you’re pretty much spot-on about the handwaving and the inconsistencies and all of that.

But I found the typographic shenanigans and the various other “playing with the medium” stuff to be a lot of fun. (I read the hardcover edition; I am not sure if the softcover is any different.)

I also enjoyed the “embedded story” parts, which stood largely apart from the main plot (though of course they were tied into it).

EDIT: The “weird typography” parts, like the one of which you include a picture in your review, gain a lot by being read aloud. Treat the typographic design choices as cues for a dramatic reading, and I think you’ll find that “hostile to the reader” is thereby transformed into “exhilarating for the performer and enjoyable for the audience”.

Comment by Said Achmiz (SaidAchmiz) on Open Thread Fall 2024 · 2025-01-08T07:11:55.266Z · LW · GW

screenshot

What the hell?

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-08T02:44:19.734Z · LW · GW

(In general, any human who might be worth enslaving is also a person whom it would be improper to enslave.)

...I don’t see what that has to do with LLMs, though.

This claim by you about the conditions under which slavery is profitable seems wildly optimistic, and not at all realistic, but also a very normal sort of intellectual move.

If a person is a depraved monster (as many humans actually are) then there are lots of ways to make money from a child slave.

I looked up a list of countries where child labor occurs. Pakistan jumped out as “not Africa or Burma” and when I look it up in more detail, I see that Pakistan’s brick industry, rug industry, and coal industry all make use of both “child labor” and “forced labor”. Maybe not every child in those industries is a slave, and not every slave in those industries is a child, but there’s probably some overlap.

It seems like you have quite substantially misunderstood my quoted claim. I think this is probably a case of simple “read too quickly” on your part, and if you reread what I wrote there, you’ll readily see the mistake you made. But, just in case, I will explain again; I hope that you will not take offense, if this is an unnecessary amount of clarification.

The children who are working in coal mines, brick factories, etc., are (according to the report you linked) 10 years old and older. This is as I would expect, and it exactly matches what I said: any human who might be worth enslaving (i.e., a human old enough to be capable of any kind of remotely useful work, which—it would seem—begins at or around 10 years of age) is also a person whom it would be improper to enslave (i.e., a human old enough to have developed sapience, which certainly takes place long before 10 years of age). In other words, “old enough to be worth enslaving” happens no earlier (and realistically, years later) than “old enough such that it would be wrong to enslave them [because they are already sapient]”.

(It remains unclear to me what this has to do with LLMs.)

Since “we” (you know, the good humans in a good society with good institutions) can’t even clean up child slavery in Pakistan, maybe it isn’t surprising that “we” also can’t clean up AI slavery in Silicon Valley, either.

Maybe so, but it would also not be surprising that we “can’t” clean up “AI slavery” in Silicon Valley even setting aside the “child slavery in Pakistan” issue, for the simple reason that most people do not believe that there is any such thing as “AI slavery in Silicon Valley” that needs to be “cleaned up”.

In asking the questions I was trying to figure out if you meant “obviously AI aren’t moral patients because they aren’t sapient” or “obviously the great mass of normal humans would kill other humans for sport if such practices were normalized on TV for a few years since so few of them have a conscience” or something in between.

Like the generalized badness of all humans could be obvious-to-you (and hence why so many of them would be in favor of genocide, slavery, war, etc and you are NOT surprised) or it might be obvious-to-you that they are right about whatever it is that they’re thinking when they don’t object to things that are probably evil, and lots of stuff in between.

None of the above.

You are treating it as obvious that there are AIs being “enslaved” (which, naturally, is bad, ought to be stopped, etc.). Most people would disagree with you. Most people, if asked whether something should be done about the enslaved AIs, will respond with some version of “don’t be silly, AIs aren’t people, they can’t be ‘enslaved’”. This fact fully suffices to explain why they do not see it as imperative to do anything about this problem—they simply do not see any problem. This is not because they are unaware of the problem, nor is it because they are callous. It is because they do not agree with your assessment of the facts.

That is what is obvious to me.

(I once again emphasize that my opinions about whether AIs are people, whether AIs are sapient, whether AIs are being enslaved, whether enslaving AIs is wrong, etc., have nothing whatever to do with the point I am making.)

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-05T16:57:37.804Z · LW · GW

What I think has almost nothing to do with the point I was making, which was this: the reason that (approximately) “no one” acts as though using LLMs without paying them is bad is that (approximately) “no one” thinks that LLMs are sapient; and this fact (about why people are behaving as they are) is obvious.

That being said, I’ll answer your questions anyway, why not:

Do you also think that an uploaded human brain would not be sapient?

Depends on what the upload is actually like. We don’t currently have anything like uploading technology, so I can’t predict how it will (would?) work when (if?) we have it. Certainly there exist at least some potential versions of uploading tech that I would expect to result in a non-sapient mind, and other versions that I’d expect to result in a sapient mind.

If a human hasn’t reached Piaget’s fourth (“formal operational”) stage of reason, would be you OK enslaving that human?

It seems like Piaget’s fourth stage comes at “early to middle adolescence”, which is generally well into most humans’ sapient stage of life; so, no, I would not enslave such a human. (In general, any human who might be worth enslaving is also a person whom it would be improper to enslave.)

I don’t see what that has to do with LLMs, though.

Where does your confidence come from?

I am not sure what belief this is asking about; specify, please.

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-05T13:07:52.934Z · LW · GW

if AI are sapient

“If”.

Seems pretty obvious why no one is acting like this is bad.

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-04T16:31:00.201Z · LW · GW

What is the relevance of the site guide quote? OP is a frontpage post.

Comment by Said Achmiz (SaidAchmiz) on Comment on "Death and the Gorgon" · 2025-01-02T03:49:08.732Z · LW · GW

someone who probably has better things to do with his time than tinker with DNS configuration

I find such excuses to be unconvincing pretty much 100% of the time. Almost everyone who “has better things to do than [whatever]” is in that situation because their time is very valuable, and their time is very valuable because they make, and thus have, a lot of money. (Like, say, a successful fiction author.) In which case, they can pay someone to solve the problem for them. (Heck, I don’t doubt that Egan could even find people to help him fix this for free!)

If someone has a problem like this, but neither takes the time to fix it himself, nor pays (or asks) someone to fix it for him, what this means isn’t that he’s too busy, but rather that he doesn’t care.

And that’s fine. He’s got the right to not care about this. But then nobody else has the slightest shred of obligation to care about it, either. Not lifting a finger to fix this problem, but expecting other people to spend their time and mental effort (even if it’s only a little of both) to compensate for the problem, is certainly not laudable behavior.

Comment by Said Achmiz (SaidAchmiz) on Open Thread Fall 2024 · 2025-01-02T03:16:36.547Z · LW · GW

I’m confused by your response. It seems like you got the impression that I was questioning your claim to have ADHD, but of course I was doing no such thing; I have no reason to doubt your word on this. Nor am I “advising” you to do anything.

The purpose of my comment was neither to offer assistance, nor to “deflect blame”. The purpose, rather, was only and exactly to ask the question that I asked—which, again, is: what is causing the “treat the ADHD” solution to be insufficient? As I understand it, a successful treatment for ADHD would result in being able to do things like read Less Wrong posts without too much difficulty.[1]

Of course you’re under no obligation to respond. But if you don’t engage with questions like this, how can we solve these purported problems which you are describing? Understanding a problem is the first step toward solving it.


  1. FWIW, I am perfectly familiar with the experience of being unable to perform various tasks while suffering from the effects of cognitive difficulties, and then, when those difficulties are treated, having no trouble doing those tasks. Of course I don’t assume that our situations are the same, or similar, but the point is that if there is some difficulty preventing a person from doing something, and if that difficulty is successfully treated, then that person should now be able to do that thing; otherwise the treatment was not successful, by definition. ↩︎

Comment by Said Achmiz (SaidAchmiz) on Open Thread Fall 2024 · 2025-01-02T02:26:26.949Z · LW · GW

It sounds like your ADHD is preventing you from doing a thing you want to do (e.g., read and understand posts on Less Wrong). Given this, it would seem that the solution here is to get treatment for said ADHD. Do you disagree? If you do, why? And if not, why is that solution insufficient?

Comment by Said Achmiz (SaidAchmiz) on Deontic Explorations In "Paying To Talk To Slaves" · 2025-01-01T16:37:47.177Z · LW · GW

Here’s a growing collection of links: https://wiki.obormot.net/Reference/MeditationConsideredHarmful

Comment by Said Achmiz (SaidAchmiz) on Is "VNM-agent" one of several options, for what minds can grow up into? · 2024-12-30T22:08:35.239Z · LW · GW

Here’s one.

Comment by Said Achmiz (SaidAchmiz) on Is "VNM-agent" one of several options, for what minds can grow up into? · 2024-12-30T20:31:31.404Z · LW · GW

As far as I can tell, “the standard Dutch book arguments” aren’t even a reason why one’s preferences must conform to all the VNM axioms, much less a “pretty good” reason.

(We’ve had this discussion many times before, and it frustrates me that people seem to forget about this every time.)

Comment by Said Achmiz (SaidAchmiz) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-23T06:47:43.896Z · LW · GW

I strongly disagree. In fact, Less Wrong is an excellent example of the effect of web design on impact/popularity/effectiveness (both for better and for worse; mostly better, lately).

Comment by Said Achmiz (SaidAchmiz) on "Starry Night" Solstice Cookies · 2024-12-23T06:43:40.666Z · LW · GW

Substitution recommendations for people with allergies to bananas and/or coconut oil?

Comment by Said Achmiz (SaidAchmiz) on TheManxLoiner's Shortform · 2024-12-20T18:46:59.878Z · LW · GW

Note that GreaterWrong has an anti-kibitzer mode.

Comment by Said Achmiz (SaidAchmiz) on Trying to translate when people talk past each other · 2024-12-17T21:48:11.422Z · LW · GW

With that, I could imagine another shape behind B’s reaction. Some betrayal in her past, where someone else had unilaterally changed an agreement because they thought the consequences were the same, when they were very much not the same to B, and then rejected B’s objections as invalid… that this situation was now reminding her of.

Why is it necessary (or even relevant) to imagine anything like this? It seems like this part is wholly superfluous (at best!); remove it from the reasoning you report, and… you still have your answer, right? You write:

B was insisting that what we had agreed upon before was important. A was saying that the previous agreement didn’t matter, because the consequences were the same. That was triggering to B; B perceived it as A saying that he could unilaterally change an agreement if he experienced the consequences to be the same (regardless of whether he had checked for B’s agreement first).

B was saying that it didn’t matter what move they ultimately played, that was all the same, but she needed A to acknowledge that he’d unilaterally changed an agreement, and she needed to be able to trust that A would not do that.

Viewed from that perspective, everything that B had said suddenly made sense. Indeed, what A actually played or didn’t play wasn’t the point. The point was that, as a matter of principle, A could not unilaterally declare a previous agreement to not matter without checking other people’s opinions first. Even if everyone did happen to agree in this case, sometimes they might not, with much more serious consequences. And if people always had nagging doubts about whether A’s commitments were trustworthy, that would be damaging.

This seems like a complete answer; no explanatory components are missing. As far as I can tell, the part about a “betrayal in [B’s] past … that this situation was now reminding her of” is, at best, a red herring—and at worst, a way to denigrate and dismiss a perspective which otherwise seems to be eminently reasonable, understandable, and (IMO) correct.

Comment by Said Achmiz (SaidAchmiz) on Effective Altruism FAQ · 2024-12-17T10:49:58.424Z · LW · GW

There were two different clauses, one about malaria and the other about chickens. “Helping people is really important” clearly applies to the malaria clause, and there’s a modified version of the statement (“helping animals is really important”) that applies to the chickens clause. I think writing it that way was an acceptable compromise to simplify the language and it’s pretty obvious to me what it was supposed to mean.

A strange objection—since if you are correct and this is what was meant, then it strengthens my point. If thinking that helping people is really important AND that we should help more rather than less doesn’t suffice to conclude that we should give to chicken-related charities, then still less does merely one of those two premises suffice.

(And “helping animals is really important” is, of course, quite far from an uncontroversial claim.)

“We should help more rather than less, with no bounds/limitations” is not a necessary claim. It’s only necessary to claim “we should help more rather than less if we are currently helping at an extremely low level”.

No, this does not suffice. It would only suffice if chicken-related charities were the first (or close to the first) that we’d wish to give to when increasing our helping from an extremely low level to a higher one. Otherwise, if we believe, for instance, that helping a little is better than none, and helping a reasonable and moderate amount is better than helping a little, but helping a very large amount is worse (or even just no better) than the preceding, then the threshold of “enough helping” may easily be reached long before we get anywhere near chickens (or any other specific cause). In order to guarantee the “we should give to chicken-related charities” conclusion, the “helping more is better than helping less” principle must be unbounded and unlimited.

Comment by Said Achmiz (SaidAchmiz) on Effective Altruism FAQ · 2024-12-16T22:39:41.423Z · LW · GW

Minor correction: missing the hyphen in Sam Bankman-Fried’s last name.

Comment by Said Achmiz (SaidAchmiz) on Effective Altruism FAQ · 2024-12-16T22:35:41.256Z · LW · GW

To think you should give to charities preventing kids from getting malaria, or making it so that chickens don’t have to languish for their whole life in a cage, you don’t have to think anything controversial about moral philosophy. You just have to think that helping people is really important, and we should help more rather than less! But that’s common sense.

This is not correct, for two reasons:

  1. Thinking that helping people is really important, and that we should help more rather than less, does not suffice to conclude that we should give to charities aimed at doing anything to, with, or about chickens. (Because chickens are not people.)

  2. Thinking that we should help more rather than less—as a general principle that is not bounded or limited by anything—is, actually, a controversial claim of moral philosophy.

Comment by Said Achmiz (SaidAchmiz) on The Case For Giving To The Shrimp Welfare Project · 2024-12-09T06:46:22.019Z · LW · GW

Sure, the report isn’t perfect, but it’s better than alternatives.

As you well know, I have already responded to this claim as well.

Comment by Said Achmiz (SaidAchmiz) on Which things were you surprised to learn are metaphors? · 2024-11-23T19:31:40.727Z · LW · GW

The density of butter is reasonably close to 1 avoirdupois ounce per 1 fluid ounce, but is definitely not exactly equal:

https://kg-m3.com/material/butter gives the density as 0.95033293516 oz./fl. oz., or 0.911 g/cm^3 (i.e., 911 kg/m^3).

(The link you provide doesn’t give a source; the data at the above link is sourced from the International Network of Food Data Systems (INFOODS).)


Further commentary:

The density of water (at refrigerator temperatures) is ~1 g/cm^3. 1 oz. = ~28.35 g; 1 fl. oz. = ~29.57 cm^3; thus the density of water is 29.57 / 28.35 = ~1.043 oz./fl. oz. (This is, of course, equal to 0.95033293516 / 0.911, allowing for rounding errors.)

Note that the composition of butter varies. In particular, it varies by the ratio of butterfat to water (there are also butter solids, i.e. protein, but those are a very small part of the total mass). American supermarket butter has approx. 80% butterfat; Amish butter, European butters (e.g. Kerrygold), or premium American butters (e.g. Vital Farms brand) have more butterfat (up to 85%). Butterfat is less dense than water (thus the more butterfat is present, the lower the average density of the stick of butter as a whole—although this doesn’t make a very big difference, given the range of variation).

Given the numbers in the paper at the last link, we can calculate the average density (specific gravity) of butter (assuming the 80% butterfat content of a cheap American supermarket brand) as 0.8 * 0.9 + 0.2 * 1.0 = 0.92. This approximately matches the 0.911 g/cm^3 figure above.
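
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same conversions. The only assumptions are the standard conversion factors (~28.35 g per oz., ~29.57 mL per fl. oz.) and the density figures already cited above:

```python
# Unit-conversion constants (US customary).
GRAMS_PER_OZ = 28.3495    # grams per avoirdupois ounce
ML_PER_FL_OZ = 29.5735    # milliliters per US fluid ounce

def g_per_ml_to_oz_per_fl_oz(density_g_per_ml: float) -> float:
    """Convert a density from g/mL (equivalently g/cm^3) to oz. per fl. oz."""
    return density_g_per_ml * ML_PER_FL_OZ / GRAMS_PER_OZ

print(g_per_ml_to_oz_per_fl_oz(1.000))  # water: ~1.043 oz./fl. oz.
print(g_per_ml_to_oz_per_fl_oz(0.911))  # butter (INFOODS figure): ~0.950 oz./fl. oz.

# Weighted-average specific gravity of 80%-butterfat butter,
# taking butterfat at ~0.9 and the water fraction at 1.0:
print(0.8 * 0.9 + 0.2 * 1.0)            # 0.92
```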

Comment by Said Achmiz (SaidAchmiz) on Which things were you surprised to learn are metaphors? · 2024-11-22T14:40:34.219Z · LW · GW

No, you’re misunderstanding. There is no 1/2 cup of butter anywhere in the above scenario. One stick of butter is 4 oz. of butter (weight), but not 1/2 cup of butter (volume).
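
To make the weight-vs.-volume distinction concrete, here is a rough back-of-the-envelope sketch in Python, assuming the ~0.911 g/cm^3 density figure discussed above (the exact result will vary with the butter’s fat content):

```python
GRAMS_PER_OZ = 28.3495          # grams per avoirdupois ounce
ML_PER_FL_OZ = 29.5735          # milliliters per US fluid ounce
ML_PER_CUP = 8 * ML_PER_FL_OZ   # 1 US cup = 8 fl. oz. ≈ 236.6 mL

stick_weight_g = 4 * GRAMS_PER_OZ   # one stick = 4 oz. by weight ≈ 113.4 g
butter_density = 0.911              # g/cm^3 (assumed; INFOODS figure)

stick_volume_ml = stick_weight_g / butter_density
print(stick_volume_ml)              # ≈ 124.5 mL
print(stick_volume_ml / ML_PER_CUP) # ≈ 0.53 cup, i.e. somewhat more than 1/2 cup
```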