Posts

Selective, Corrective, Structural: Three Ways of Making Social Systems Work 2023-03-05T08:45:45.615Z
Said Achmiz's Shortform 2023-02-03T22:08:02.656Z
Deleted comments archive 2022-09-06T21:54:06.737Z
Deleted comments archive? 2021-10-24T11:19:43.462Z
The Real Rules Have No Exceptions 2019-07-23T03:38:45.992Z
What is this new (?) Less Wrong feature? (“hidden related question”) 2019-05-15T23:51:16.319Z
History of LessWrong: Some Data Graphics 2018-11-16T07:07:15.501Z
New GreaterWrong feature: image zoom + image slideshows 2018-11-04T07:34:44.907Z
New GreaterWrong feature: anti-kibitzer (hides post/comment author names and karma values) 2018-10-19T21:03:22.649Z
Separate comments feeds for different post listings views? 2018-10-02T16:07:22.942Z
GreaterWrong—new theme and many enhancements 2018-10-01T07:22:01.788Z
Archiving link posts? 2018-09-08T05:45:53.349Z
Shared interests vs. collective interests 2018-05-28T22:06:50.911Z
GreaterWrong—even more new features & enhancements 2018-05-28T05:08:31.236Z
Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards 2018-05-07T06:44:47.775Z
Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law 2018-05-03T16:33:50.002Z
GreaterWrong—more new features & enhancements 2018-04-07T20:41:14.357Z
GreaterWrong—several new features & enhancements 2018-03-27T02:36:59.741Z
Key lime pie and the methods of rationality 2018-03-22T06:25:35.193Z
A new, better way to read the Sequences 2017-06-04T05:10:09.886Z
Cargo Cult Language 2012-02-05T21:32:56.631Z

Comments

Comment by Said Achmiz (SaidAchmiz) on Losing Faith In Contrarianism · 2024-04-26T02:19:59.230Z · LW · GW

Similarly, the lab leak theory—one of the more widely accepted and plausible contrarian views—also doesn’t survive careful scrutiny. It’s easy to think it’s probably right when your perception is that the disagreement is between people like Saar Wilf and government bureaucrats like Fauci. But when you realize that some of the anti-lab leak people are obsessive autists who have studied the topic a truly mind-boggling amount, and don’t have any social or financial stake in the outcome, it’s hard to be confident that they’re wrong.

This is a very poor conclusion to draw from the Rootclaim debate. If you have not yet read Gwern’s commentary on the debate, I suggest that you do so. In short, the correct conclusion here is that the debate was a very poor format for evaluating questions like this, and that the “obsessive autists” in question cannot be relied on. (This is especially so because in this case, there absolutely was a financial stake—$100,000 of financial stake, to be precise!)

Comment by Said Achmiz (SaidAchmiz) on Losing Faith In Contrarianism · 2024-04-26T00:37:01.651Z · LW · GW

Hmm, this sounds like an awfully contrarian take to me.

Comment by Said Achmiz (SaidAchmiz) on Thoughts on seed oil · 2024-04-24T20:39:34.060Z · LW · GW

I think “packaged bread and other bakery products” is referring to stuff like Wonder bread, which contains a whole bunch of stuff[1] beyond the proverbial “flour, water, yeast, salt” that goes into homemade or artisanal-bakery bread.


  1. Soybean oil, high fructose corn syrup, various preservatives, etc. ↩︎

Comment by Said Achmiz (SaidAchmiz) on General Thoughts on Secular Solstice · 2024-04-16T22:47:32.781Z · LW · GW

This seems solvable by using multiple recordings and averaging, yes?

Also, if the transcription to sheet-music form is accurate w.r.t. the recording, and the recording is acceptable w.r.t. the intended notes, then the transcription ought to be close enough to the intended notes. Or am I misunderstanding?

Comment by Said Achmiz (SaidAchmiz) on Do I count as e/acc for exclusion purposes? · 2024-04-03T21:19:08.381Z · LW · GW

Yes, I meant specifically the Bay Area scene, since that’s the only part of the LW community that’s accused of excluding e/acc-ers.

In that case, I request that you edit your post to clarify this, please.

Comment by Said Achmiz (SaidAchmiz) on Do I count as e/acc for exclusion purposes? · 2024-04-02T22:18:00.626Z · LW · GW

Hmm… I suppose that depends on what you mean by “the scene”. If you’re including only the Bay Area “scene” in that phrase, then I’m familiar with it only by hearsay. If you mean the broader LW-and-adjacent community, then my familiarity is certainly greater (I’ve been around for well over a decade, and have periodic contact with various happenings here in NYC).

Comment by Said Achmiz (SaidAchmiz) on Do I count as e/acc for exclusion purposes? · 2024-04-02T02:35:06.228Z · LW · GW

I don’t know, man. Like… yeah, “not the typical LW party”, but that’s a bit of an understatement, don’t you think? (What makes it an “LW party” at all? Is it literally just “the host of this party is sort of socially adjacent to some LW people”? Surely not everything done by anyone who is connected in any way to LW, is “an LW thing”?)

So, honestly, yeah, I think it says approximately nothing about “the scene”.

Comment by Said Achmiz (SaidAchmiz) on Do I count as e/acc for exclusion purposes? · 2024-04-02T02:25:52.770Z · LW · GW

Uh… does that really count as an event in “the LW scene”?

… are you sure this post isn’t an April 1st joke?

Comment by Said Achmiz (SaidAchmiz) on Do I count as e/acc for exclusion purposes? · 2024-04-02T01:31:40.085Z · LW · GW

I understand it’s common to exclude e/acc people from events.

Is… this actually true??

Comment by Said Achmiz (SaidAchmiz) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T21:28:19.672Z · LW · GW

Could you (or someone else) summarize the other stuff, in the context of my question? I mean, I read it, there’s various things in there, but I’m not sure which of it is supposed to be a definition of “making space for” an idea.

Comment by Said Achmiz (SaidAchmiz) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T20:48:48.141Z · LW · GW

So, basically, allowing the ideas in question to be discussed on one’s blog/forum/whatever, instead of banning people for discussing them?

Comment by Said Achmiz (SaidAchmiz) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-30T20:01:26.275Z · LW · GW

What does it mean to “make space for” some idea(s)?

Comment by Said Achmiz (SaidAchmiz) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-28T18:03:12.988Z · LW · GW

I agree that this investigation was worthwhile and important.

But is it a case of “lying to interview subjects”? That is what we’re talking about, after all. Did Bly even interview anyone, in the course of her investigation?

Undercover investigative journalism has some interesting ethical conundrums of its own, but it’s not clear what it has to do with interviews, or lying to the subjects thereof…

Comment by Said Achmiz (SaidAchmiz) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-28T12:13:46.872Z · LW · GW

I was actually looking for specific examples, precisely so that we could test our intuitions, rather than just stating our intuitions. Do you happen to have any particular ones in mind?

Comment by Said Achmiz (SaidAchmiz) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:09:08.191Z · LW · GW

If you ban lying to subjects, a swath of important news becomes impossible to cover.

What would be some examples of this?

Comment by Said Achmiz (SaidAchmiz) on General Thoughts on Secular Solstice · 2024-03-26T23:51:27.215Z · LW · GW

Can the process not be automated? Like, sheet music specifies notes, right? And notes are frequencies. And frequencies can be determined by examining a recording by means of appropriate hardware/software (very easily, in the case of digital recordings, I should think). Right? So, is there not some software or something that can do this?
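
For illustration, the note-identification arithmetic alluded to here is straightforward once a fundamental frequency has been extracted from the recording; below is a minimal sketch under that assumption (the pitch-detection step itself, which is the hard part, is assumed to be handled by other software):

```typescript
// Map a detected fundamental frequency (Hz) to the nearest equal-tempered note,
// using A4 = 440 Hz as the reference pitch. Extracting the frequency from the
// recording (pitch detection) is assumed to be done elsewhere.
const NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];

function frequencyToNote(freqHz: number): string {
  // MIDI note number 69 is A4; there are 12 semitones per octave.
  const midi = Math.round(69 + 12 * Math.log2(freqHz / 440));
  const octave = Math.floor(midi / 12) - 1;
  return `${NOTE_NAMES[midi % 12]}${octave}`;
}

// frequencyToNote(261.63) === "C4"; frequencyToNote(440) === "A4"
```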

Comment by Said Achmiz (SaidAchmiz) on General Thoughts on Secular Solstice · 2024-03-26T20:52:05.926Z · LW · GW

What exactly does the process of generating sheet music involve? Like, how does sheet music happen, in general?

Comment by Said Achmiz (SaidAchmiz) on General Thoughts on Secular Solstice · 2024-03-26T20:50:47.764Z · LW · GW

I’m afraid I don’t understand what the “attempted telekinesis” post is talking about…

The “curse of the counterfactual” post, I also don’t really understand. It’s about a… therapy technique? For people who are fixated on certain events in their past?

It seems like this whole discussion is based on using words like “faith” in weird ways, and making statements that sound profound but are actually trivial or tautological (like “the world is exactly as it is”).

Maybe it would help to ask this directly: is “faith” here being used in anything at all like the ordinary sense of the word? (Or, any of the ordinary senses of the word?) Or is this a case of “we’re talking about a weird new concept, but we’re going to use a commonplace word for it”?

Comment by Said Achmiz (SaidAchmiz) on General Thoughts on Secular Solstice · 2024-03-26T17:38:43.254Z · LW · GW

So… are you just saying that reality exists, and is not merely shaped by our perceptions?

This is one of the bedrock ideas of LW-style rationality, isn’t it? And what does it have to do with “faith”…?

Comment by Said Achmiz (SaidAchmiz) on General Thoughts on Secular Solstice · 2024-03-26T04:35:23.085Z · LW · GW

What would it mean for this to be false?

Comment by Said Achmiz (SaidAchmiz) on General Thoughts on Secular Solstice · 2024-03-23T23:30:46.580Z · LW · GW

This is … not the future I hope for. I am probably more futuristic than most of the public, and am excited about things like space colonization and more abundant energy. I am definitely not excited about mind uploading or turning the sun into computronium.

Strongly seconded.

Comment by Said Achmiz (SaidAchmiz) on Using axis lines for good or evil · 2024-03-20T14:47:07.434Z · LW · GW

Just so; the correct way is indeed to show the full (zero-based y-axis) chart, then a “zoomed-in” version, with the y-axis mapping clearly indicated. Of course, this takes more effort than just including the one chart; but this is not surprising—doing things correctly often takes more effort than doing things incorrectly!

Comment by Said Achmiz (SaidAchmiz) on Using axis lines for good or evil · 2024-03-19T15:11:01.369Z · LW · GW

Would you graph with a line chart? No. And it absolutely would be egregious to use a line chart and then not use a zero-based y-axis.

Comment by Said Achmiz (SaidAchmiz) on Using axis lines for good or evil · 2024-03-19T13:30:38.111Z · LW · GW

I’m a big fan of Butterick’s book (and Butterick’s stuff in general), and one of the things I appreciate about his guidelines is that he does well at distinguishing between hard-and-fast rules and mere heuristics or suggestions. For example, here, he correctly says: “In this example, cell borders are unnecessary. In other cases, they can be useful.” (Emphasis mine.)

Butterick’s example table has a mere four rows and columns. A larger table simply can’t do without some visual delineation. (But take a look at the linked table, and you may note that it doesn’t have lines[1] either—it has alternating row background colors. Meanwhile, the columns need no delineation, because the human eye is better at vertical alignment than horizontal alignment!)


  1. But of course even that needs a caveat: the table has no lines between rows of body content, but does have lines separating table sections. Similarly, you could vary the weight of the lines, to create visual organization, such as in the tables on this page. ↩︎

Comment by Said Achmiz (SaidAchmiz) on Using axis lines for good or evil · 2024-03-19T13:20:11.167Z · LW · GW

Agreed (except about the “this is fine” part). The arguments are unconvincing and the recommendations seem bad. (In particular, the suggestion that the “vary between $50T and $53T” graph shouldn’t be drawn with a zero-based y-axis is egregious.)

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-17T23:47:17.854Z · LW · GW

That… does not seem like a historically accurate account of the formation and growth of cities.

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-15T15:37:24.975Z · LW · GW

I don’t say that you’re wrong, necessarily, but what would you say is an example of something that “has the form of a Ponzi scheme”, but is actually a change that enables permanently faster growth?

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-15T02:59:03.844Z · LW · GW

This is in reference to the Luddites, I suppose? If so, “some people’s jobs being automated” is rather a glib description of the early effects of industrialization. There was considerable disruption and chaos, which, indeed, is “doom”, of more or less the sort that the Luddites predicted. (They never claimed that the world would end as a result of the new machines, as far as I know.)

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-15T02:30:37.914Z · LW · GW

Well, people prophesying doom in general have a pretty poor track record, so if that’s all we know, our prior should be that any such person is likely to be very wrong.

Of course, most people throughout history who have prophesied doom have had in mind a religious sort of doom. People prophesying doom from technological advance specifically have a better track record. The Luddites were correct, for example. (Their chosen remedy left something to be desired, of course; but that is common, sadly. Identifying the problem does not, by itself, suffice to solve the problem.) And we’ve had quite a bit of doom from technological advance. Indeed, as technology has advanced, we’ve had more and more doom from that advance.

So, on the whole, I’d say that applying the reasoning I describe to people prophesying doom from technological advance suggests that there is probably something to what they say, even if their specific predictions are not spot-on.

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T23:50:11.650Z · LW · GW

If the premise is a world where nobody ever does any scams or tries to swindle anyone out of money, then it’s so far removed from our world that I don’t rightly know how to interpret any of the included commentary on human nature / psychology / etc. Lying for personal gain is one of those “human universals”, without which I wouldn’t even recognize the characters as anything resembling humans.

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T22:51:45.505Z · LW · GW

Even setting aside such textual anomalies, why is this a good argument? As I noted in a sibling comment to yours, my response assumes that Ponzi schemes have never happened in this world, because otherwise we’d simply identify the Spokesperson’s plan as a Ponzi scheme! The reasoning that I described is only necessary because we can’t say “ah, a Ponzi scheme”!

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T22:48:22.800Z · LW · GW

The opposite approach of Said Achmiz, namely appealing very concretely to the object level, misses the point as well: the post is not trying to give practical advice about how to spot Ponzi schemes. “We thus defeat the Spokesperson’s argument on his own terms, without needing to get into abstractions or theory—and we do it in one paragraph.” is not the boast you think it is.

If the post describes a method for analyzing a situation, and that described method is not in fact the correct method for analyzing that situation (and is actually much worse than the correct method), then this is a problem with the post.

(Also, your description of my approach as “appealing very concretely to the object level”, and your corresponding dismissal of that approach, is very ironic! The post, in essence, argues precisely for appealing concretely to the object level; but then if we actually do that, as I demonstrated, we render the post moot.)

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T15:55:03.181Z · LW · GW

It follows inevitably, therefore, that there is a very high chance that the S&P 500, and the stock market in general, is a scam, and will steal all your money.

Well, here’s a question: what happens more often—stock market downturns, or banks going bust?

It follows further that the only safe investment approach is to put all your money into something that you retain personal custody of. Like gold bars buried in your backyard! Or Bitcoin!

Now this is simply an invalid extrapolation. Note that I made no claims along these lines about what does or does not supposedly follow. Claims like “X reasoning is invalid” / “Y plan is unlikely to work” stand on their own; “what is the correct reasoning” / “what is a good plan” is a wholly separate question.

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T13:02:50.358Z · LW · GW

this choice is in general non-trivial

I disagree. It seems to me that this choice is, in general, pretty easy to make, and takes naught but common sense. Certainly that’s the case in the given example scenario. Of course there are exceptions, where the choice of reference class is trickier—but in general, no, it’s pretty easy.

(Whether the choice “requires abstractions and/or theory” is another matter. Perhaps it does, in a technical sense. But it doesn’t particularly require talking about abstractions and/or theory, and that matters.)

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T08:04:13.514Z · LW · GW

Yes. (If it were otherwise, then the response would be even simpler: “oh, this is obviously just a Ponzi scheme”.)

Comment by Said Achmiz (SaidAchmiz) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T05:57:08.256Z · LW · GW

Or:

“In the past, people who have offered such apparently-very-lucrative deals have usually been scammers, cheaters, and liars. And, in general, we have on many occasions observed people lying, scamming, cheating, etc. On the other hand, we have only very rarely seen such an apparently-very-lucrative deal turn out to actually be a good idea. Therefore, on the general principle that the future will be similar to the past, we predict a very high chance that Bernie is a cheating, lying scammer, and that this so-called ‘investment opportunity’ is fake.”

We thus defeat the Spokesperson’s argument on his own terms, without needing to get into abstractions or theory—and we do it in one paragraph.

This happens to also be precisely the correct approach to take in real life when faced with apparently-very-lucrative deals and investment opportunities (unless you have the time to carefully investigate, in great detail and with considerable diligence, all such deals that are offered to you).

Comment by Said Achmiz (SaidAchmiz) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-03-13T14:42:32.625Z · LW · GW

Note that the selectionchange event will report the currently selected hit for the searched text, not all highlighted hits, so this should not be a problem. (As the user presses Cmd-G [Find Next], or Enter in the search box, the browser will cycle through the highlighted hits, firing a new selectionchange event each time.) Thus, each time the event is fired, you can expand whatever section the currently-selected hit is in (and collapse its neighbors in the row).

Update: I just checked, and it seems like this stopped being true at some point; I am now seeing selectionchange events fired only on the initial search (when the user first spawns the Search box, by invoking the Find command or the Find Next / Find Previous command) and also when that box is dismissed, but not on subsequent invocations of Find Next/Previous (or the Enter key) while the Search box is open. (It is still the case that only a single selected hit, and not all highlighted hits, is reported.) This reduces the usefulness of the technique I described, though it does not entirely eliminate it. @habryka
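
A minimal sketch of the technique described above, for illustration only; the .collapsible-section class and the expandSection / collapseSiblingSections helpers are hypothetical stand-ins for whatever the actual LessWrong code uses:

```typescript
// Hypothetical helpers assumed to exist in the page's own code:
declare function expandSection(section: Element): void;
declare function collapseSiblingSections(section: Element): void;

document.addEventListener("selectionchange", () => {
  const selection = document.getSelection();
  if (!selection || selection.rangeCount === 0) return;

  // The browser's Find command moves the selection to the currently selected hit,
  // so the selection's anchor node tells us which section that hit lives in.
  const node = selection.anchorNode;
  const element = node instanceof Element ? node : node?.parentElement;
  const section = element?.closest(".collapsible-section"); // hypothetical class name
  if (!section) return;

  expandSection(section);
  collapseSiblingSections(section);
});
```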

Comment by Said Achmiz (SaidAchmiz) on "How could I have thought that faster?" · 2024-03-13T00:59:38.758Z · LW · GW

I am not sure what you mean by step #1 (when something “feels like it should” take some amount of time, but ends up taking more time, it’s generally not because I made some mistake, but rather because my initial “feeling” turned out to be mistaken about how much time the task “should” take—which is not shocking, as such “feelings” are necessarily probabilistic).

The rest of it seems like… learning from mistakes and optimizing your practices/workflows/etc. based on experience. Is that what you’re talking about?

I confess that I’m still confused about how any of this could be described as “how could I have thought that faster”. Eliezer writes about “retrain[ing] [himself] to perform only those steps over the course of 30 seconds”, and… that just does not seem like it has anything to do with what you’re describing? Am I missing some analogy here, or what?

Comment by Said Achmiz (SaidAchmiz) on "How could I have thought that faster?" · 2024-03-12T22:52:33.162Z · LW · GW

I would also like to see this. As it is, I’m not sure what the OP is even describing. (As noted in a sibling comment, the description is very vague.)

Comment by Said Achmiz (SaidAchmiz) on "How could I have thought that faster?" · 2024-03-12T22:49:22.194Z · LW · GW

I’ve gotten better at computer programming (as demonstrated by the fact that I used to not know how to code and now I can code pretty well), and not only have I never done anything that sounds like this, I am not sure I even understand what it would mean to do this. (Is it just “optimize your workflow on a task”? If so, then it seems very mis-described. Or is it something else?)

Comment by Said Achmiz (SaidAchmiz) on Wholesome Culture · 2024-03-07T09:50:32.989Z · LW · GW

I don’t think that the view described in your second paragraph stands up to scrutiny.

Like, suppose that you are designing a product etc., and I ask you whether you’ve considered that perhaps capitalism is not even good for civilization. “I choose not to think about that right now” is not a coherent answer. Either you have already thought about that question, and have reached an answer that is compatible with your continuing to work on your product or whatever (in which case you can say “indeed I have considered that question, and here, in brief, is my answer”)—or else you should, in fact, pause and at least briefly consider the question now, because your answer will affect whether you should continue with your project or else abandon it.

In other words, if the questioning came before, then just give the answer you found. If the questioning comes after… well, that’s too late. The questioning shouldn’t come after. If there’s possibly some reason why you shouldn’t be doing the thing you’re doing, then the best time to figure that out is before you started, and the second best time to figure it out is right now.

“I’ll question my assumptions later” typically means “I’ll question my assumptions never; I simply want you to go away and not bother me.”

Comment by Said Achmiz (SaidAchmiz) on Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles · 2024-03-03T10:32:57.378Z · LW · GW

Sorry, I meant that I’d like to see references for @habryka’s last sentence specifically (i.e., the part for which he says “I could dig up the references”). The IQ thing doesn’t seem to be that.

Comment by Said Achmiz (SaidAchmiz) on Increasing IQ is trivial · 2024-03-03T02:29:00.116Z · LW · GW

Is there some reason why you don’t want to post the procedure here, on Less Wrong?

Comment by Said Achmiz (SaidAchmiz) on Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles · 2024-03-03T02:12:10.829Z · LW · GW

I would love to see references for this!

Comment by Said Achmiz (SaidAchmiz) on Changing Emotions · 2024-03-03T00:22:24.534Z · LW · GW

Re-reading this post today, in 2024, I just want to note:

Anyone want to still want to eat chocolate-chip cookies when the last sun grows cold? I didn’t think so.

I absolutely want to still want to eat chocolate chip cookies when the last sun grows cold.

Comment by Said Achmiz (SaidAchmiz) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-29T00:12:12.674Z · LW · GW

Ah, yeah, that makes sense. (I guess this isn’t terribly important information to communicate in this particular context, anyway…)

Comment by Said Achmiz (SaidAchmiz) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-29T00:06:18.781Z · LW · GW

Ah, it’s an outdated browser issue. (Mac, Chromium 103.) (Lack of support for the :has() pseudo-class, specifically, in the selector .TopPostsPage-imageGridPostBody:hover:not(:has(.TopPostsPage-imageGridPostHidden)) .TopPostsPage-imageGridPostAuthor, is what’s causing the problem.)

Comment by Said Achmiz (SaidAchmiz) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-29T00:00:21.753Z · LW · GW

The text-legibility here seems a lot better.

Agreed, that’s an improvement. (Must you require hover for that, though? I’d make that change unconditionally, frankly.)

I would also suggest deepening the text shadow, changing it from text-shadow: 0 0 3px #000 to, e.g., text-shadow: 0 0 3px #000, 0 0 5px #000, 0 0 8px #000, which looks like this:

[screenshot of the deepened text-shadow effect]

it’s a bit more complicated since the visual space of the sections overlaps, so if you search for a word that has a hit in more than one section on the same row, we can’t expand both, so I’ll have to think about how to handle that case

Note that the selectionchange event will report the currently selected hit for the searched text, not all highlighted hits, so this should not be a problem. (As the user presses Cmd-G [Find Next], or Enter in the search box, the browser will cycle through the highlighted hits, firing a new selectionchange event each time.) Thus, each time the event is fired, you can expand whatever section the currently-selected hit is in (and collapse its neighbors in the row).

Comment by Said Achmiz (SaidAchmiz) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T23:49:08.008Z · LW · GW

How do you see the author? Hovering doesn’t do that, for me…

Comment by Said Achmiz (SaidAchmiz) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T23:47:58.951Z · LW · GW

I would not have guessed that there is any read/unread state marking going on, FYI.