The Real Rules Have No Exceptions 2019-07-23T03:38:45.992Z · score: 111 (60 votes)
What is this new (?) Less Wrong feature? (“hidden related question”) 2019-05-15T23:51:16.319Z · score: 13 (4 votes)
History of LessWrong: Some Data Graphics 2018-11-16T07:07:15.501Z · score: 72 (24 votes)
New GreaterWrong feature: image zoom + image slideshows 2018-11-04T07:34:44.907Z · score: 39 (9 votes)
New GreaterWrong feature: anti-kibitzer (hides post/comment author names and karma values) 2018-10-19T21:03:22.649Z · score: 47 (14 votes)
Separate comments feeds for different post listings views? 2018-10-02T16:07:22.942Z · score: 14 (6 votes)
GreaterWrong—new theme and many enhancements 2018-10-01T07:22:01.788Z · score: 38 (9 votes)
Archiving link posts? 2018-09-08T05:45:53.349Z · score: 56 (19 votes)
Shared interests vs. collective interests 2018-05-28T22:06:50.911Z · score: 21 (11 votes)
GreaterWrong—even more new features & enhancements 2018-05-28T05:08:31.236Z · score: 64 (14 votes)
Everything I ever needed to know, I learned from World of Warcraft: Incentives and rewards 2018-05-07T06:44:47.775Z · score: 33 (12 votes)
Everything I ever needed to know, I learned from World of Warcraft: Goodhart’s law 2018-05-03T16:33:50.002Z · score: 83 (23 votes)
GreaterWrong—more new features & enhancements 2018-04-07T20:41:14.357Z · score: 23 (6 votes)
GreaterWrong—several new features & enhancements 2018-03-27T02:36:59.741Z · score: 44 (10 votes)
Key lime pie and the methods of rationality 2018-03-22T06:25:35.193Z · score: 59 (16 votes)
A new, better way to read the Sequences 2017-06-04T05:10:09.886Z · score: 19 (17 votes)
Cargo Cult Language 2012-02-05T21:32:56.631Z · score: 1 (32 votes)


Comment by saidachmiz on WordPress Destroys Editing Process, Seeking Alternatives · 2020-08-19T03:38:48.194Z · score: 4 (2 votes) · LW · GW

Yep, a fair point. It only happens with Naval Gazing (not my personal blog), for reasons I don’t think would apply to Zvi’s blog, but until the bug that causes that is fixed, it’s a risk.

Comment by saidachmiz on WordPress Destroys Editing Process, Seeking Alternatives · 2020-08-18T20:47:50.679Z · score: 4 (2 votes) · LW · GW

Zvi, I host a couple of blogs (such as my own blog and Naval Gazing) on my custom wiki platform. If you’re not able to find another alternative that suits you better, I’d be happy to host your blog as well.


Pros:

  • I won’t ever add ‘features’ like “a new editor that doesn’t work”
  • Personal support / assistance
  • A massive array of features, from LaTeX to LessWrong comment thread transclusion to embedded graphs / charts to Git integration to … lots of stuff

Cons:

  • No WYSIWYG editor
  • Less ‘polished’ than WordPress in various ways
  • Definitely not a drop-in replacement and cannot seamlessly transfer over old blog contents
Comment by saidachmiz on Attacking enlightenment · 2020-08-16T16:36:16.306Z · score: 2 (1 votes) · LW · GW

The conversation did not take place, so there are no logs to produce.

Comment by saidachmiz on Jam is obsolete · 2020-08-02T22:25:59.073Z · score: 8 (5 votes) · LW · GW

Jam is tastier than frozen fruit. This, as far as I can see, ends the debate. (And if your jam is not tastier than frozen fruit, then you’re doing jam wrong.)

(… you are making the jam yourself, of course—aren’t you? Certainly there is little point in comparing to store-bought jam.)

Comment by saidachmiz on Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle · 2020-07-15T13:40:52.193Z · score: 23 (6 votes) · LW · GW

I don’t think that’s right. As I mention in another comment, Dennett’s notion of the intentional stance is relevant here. More specifically, it provides us with a way to distinguish between cases that Zack intended to include in his concept of “algorithmic intent”, and cases like the “catch more vitamin D” example that you mention. To wit:

The positing of “algorithmic intent” is appropriate in precisely those cases where taking the intentional stance is appropriate (i.e., where—for humans—non-trivial gains in compression of description of a given agent’s behavior may be made by treating the agent’s behavior as intentional [i.e., directed toward some posited goal]), regardless of whether the agent’s conscious mind (if any!) is involved in any relevant decision loops.

Conversely, the positing of “algorithmic intent” is not appropriate in those cases where the design stance or the physical stance suffice (i.e., where no meaningful gains in compression of description of a given agent’s behavior may be made by treating the agent’s behavior as intentional [i.e., directed toward some posited goal]).

Clearly, the “catch more vitamin D” case falls into the latter category, and therefore the term “algorithmic intent” could not apply to it.

Comment by saidachmiz on Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle · 2020-07-15T13:32:16.094Z · score: 6 (3 votes) · LW · GW

This discussion would be incomplete without a mention of Daniel Dennett’s notion of the intentional stance.

Comment by saidachmiz on The Ghost of Joseph Weber · 2020-07-13T23:00:46.980Z · score: 3 (2 votes) · LW · GW

If you would like to hear this post read aloud, try this video.

Meta: the video didn’t make it through the cross-posting, it seems. (I am not sure if Less Wrong supports video embedding; I think it may not. You might want to just link the video.)

Comment by saidachmiz on The New Frontpage Design & Opening Tag Creation! · 2020-07-09T18:15:06.122Z · score: 7 (4 votes) · LW · GW

For instance, has a slightly off-white background for just this reason.

Comment by saidachmiz on The Illusion of Ethical Progress · 2020-06-29T06:55:22.102Z · score: 9 (6 votes) · LW · GW

I mean that chairs and apples are less universal than the Universal Law of Gravitation.

In what way?

That the law of gravitation holds is a fact about the universe. That chairs exist is also a fact about the universe.

What does “less universal” mean? Does it mean something like “is applicable or relevant in a smaller volume of the observable universe”? If humanity spreads throughout the cosmos, and if we bring chairs with us everywhere we go, will chairs and gravitation thereby become equally “universal” (or, at least, more equal in “universality” than they are now)?

In any case this comparison is a red herring. The relevant comparison is not “chairs vs. gravity”, it’s “chairs vs. ethics”—or, more to the point, “guns vs. ethics”, “tanks vs. ethics”, “food vs. ethics”, “laws vs. ethics”, “governments vs. ethics”, “money vs. ethics”, “prestige vs. ethics”, etc. No vague allusion to “universality” will help you in any of these cases, since all of the things I’ve just listed are (so far as we know, anyway) approximately equally localized—namely, they are all facts about what exists and happens on the surface of one particular planet.

Comment by saidachmiz on The Illusion of Ethical Progress · 2020-06-28T22:25:34.934Z · score: 30 (14 votes) · LW · GW

Perhaps this is not central to the post, but I have always found that bit from Pratchett to be unbelievably inane. Truly, it grinds my gears to see it quoted, in wise tones, as if it expresses some profound truth; and doubly so, to see it quoted on Less Wrong.

Consider the following substitution:

Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of apples, one molecule of chairs.

Right? There aren’t any chair molecules, are there? You won’t find apples on the Periodic Table, will you? So what? Do chairs and apples not exist? Are they somehow not real, or less real than… well, than what…? Hydrogen? Methane? Should we adjust our attitude toward apples, or chairs, or paintings, or tigers, on the basis of this insight? What, actually, is to be concluded from this?

Anyway, this is old news. The point, if you like, is that of course ‘physics’ ‘contains’ ethics, and improvements in ethics; these things are facts about people, and the goings-on in people’s brains—which are (dualistic views aside) very much “contained in physics”. Of course, you could argue otherwise[1], but you must do it without recourse to any such “greedy-reductionist”, “grind down the universe” arguments…

  1. E.g., non-cognitivism, or error theory. I am sympathetic to certain arguments in this broad class; but note that they have nothing much to do with the question of whether [fundamental] ‘physics’ ‘contains’ ethics or not. ↩︎

Comment by saidachmiz on Why are all these domains called from Less Wrong? · 2020-06-27T23:42:38.860Z · score: 2 (5 votes) · LW · GW

Which you can block unproblematically; no site functionality depends on it. In fact, if you’ve got uBlock Origin, GA will be blocked automatically.
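(If you’d rather block it explicitly rather than rely on the default lists, a single static filter in uBlock Origin’s “My filters” pane does it; this uses standard Adblock-style filter syntax, and uBlock’s bundled filter lists already contain an equivalent rule:)

```
! Block requests to Google Analytics on every site
||google-analytics.com^
```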

Comment by saidachmiz on Obsidian: A Mind Mapping Markdown Editor · 2020-05-28T18:18:48.041Z · score: 4 (2 votes) · LW · GW

The output is in SVG format, but the cool thing about SVGs is that they (a) can have hyperlinks in them, and (b) can be displayed in the browser. So the generated SVG is displayed in your browser, and each node is a link to the wikipage it represents, so you can indeed use it to navigate the wiki.
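To make the mechanism concrete, here is a minimal hand-written SVG fragment (the page name is hypothetical) showing a node wrapped in a link; the browser renders the shape, and a click on it navigates to the target wikipage:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="60">
  <!-- Wrapping a node in an <a> element makes it a clickable link. -->
  <a href="/wiki/ExamplePage">
    <rect x="10" y="10" width="140" height="40" fill="#eee" stroke="#333"/>
    <text x="25" y="35">ExamplePage</text>
  </a>
</svg>
```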

Comment by saidachmiz on Obsidian: A Mind Mapping Markdown Editor · 2020-05-28T00:24:06.158Z · score: 2 (1 votes) · LW · GW

I think the most prominent functionality is the mind-mapping. Wikis, AFAICT, don’t have that.

Oh, but they do.

Comment by saidachmiz on Obsidian: A Mind Mapping Markdown Editor · 2020-05-28T00:22:37.150Z · score: 4 (2 votes) · LW · GW

I don’t know of any simple self-hosted local wiki software off the top of my head though.

PmWiki can do this, though whether it’s ‘simple’ is a matter of perspective. (It’s certainly easy to use.)

Comment by saidachmiz on Why anything that can be for-profit, should be · 2020-04-30T22:11:19.789Z · score: 7 (6 votes) · LW · GW

My reaction would be that a vaccine should be made for profit; if there are people who can’t afford it there should be a charity to buy the vaccine for them.

Pay careful attention to this formulation. Note the phrase:

there should be

What does that mean, precisely?

When we speak of whether a vaccine should be made for profit or not, we are, implicitly, speaking from the perspective of decision-makers who are in a position to decide whether a vaccine will be made on a for-profit or a non-profit basis. This may be some level of government (which may choose to contract to get a vaccine made, then distribute it to members of the public—or, may elect not to do so, and leave the matter to the vaccine manufacturers to decide), or it may be a corporation (which may manufacture the vaccine and then choose to make it available for free, instead of selling it).

Now, from the standpoint of those decision-makers, what does it mean to say that the vaccine should be for-profit but that “there should be” a charity to buy the vaccine for those who need it? It could only mean one of two things:

  1. We—i.e., by construction, that decision-making organization—having chosen to sell the vaccine for a profit, will now also spin off a charity whose purpose will be to make the vaccine available on a non-profit basis.

  2. We will sell the vaccine at a profit. Perhaps someone else will found a charity which will purchase our vaccine and make it available on a non-profit basis. Or, perhaps not. Either way, we will merely sell it and make a profit.

And note that option #1 is no different from “make the vaccine on a non-profit basis” in the first place, whereas option #2 is simply a shrug—a refusal to accept any responsibility for the problem of people who can’t afford the vaccine.

Either way, you have not answered leggi’s question/challenge, but evaded it.

Comment by saidachmiz on Why I've started using NoScript · 2020-03-31T20:20:36.785Z · score: 3 (3 votes) · LW · GW

Well, for one thing, the problem of “code delivered from a site you would normally trust but that is now malicious” is the same as the problem of “being mistaken about what sites to trust (and so accidentally trusting a site that was malicious all along)”.

As I understand your question, and as I understand the web and its technologies, the problem basically is that “if you run code (JavaScript) in your browser—code that is provided by arbitrary people on the internet—this is fundamentally a vulnerability”. And that’s true. There’s no solution to that basic fact other than “don’t run JavaScript”.

The matter really depends on how much you trust your browser vendor (Google, Apple, or Mozilla) to secure the browser against exploits that could harm/steal/pwn your computer or your data. If you trust them to a reasonable degree, then precautions short of “disable JavaScript entirely” suffice. If you really don’t trust them very much at all, then disable JavaScript (and possibly take even stricter measures to limit your exposure, such as running your browser in a VM, or some such thing; Richard Stallman’s browse-by-email workflow would be an extreme example of this).

Comment by saidachmiz on 3 Interview-like Algorithm Questions for Programmers · 2020-03-26T02:52:17.271Z · score: 2 (3 votes) · LW · GW

… then your answer is wrong. So… what gives?

Comment by saidachmiz on 3 Interview-like Algorithm Questions for Programmers · 2020-03-25T20:53:57.841Z · score: 6 (3 votes) · LW · GW

Re: #1:

Which sorting algorithm shows O(n) time complexity given no assumptions but that the values are integers?
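(For reference: no comparison sort can beat O(n log n). Linear-time integer sorts such as counting sort exist, but they rely on an extra assumption beyond “the values are integers”, namely a known bound on the key range. A minimal sketch that makes the assumption explicit:)

```python
# Counting sort runs in O(n + k) time, but ONLY under an extra assumption:
# the values are integers in a known, bounded range [0, k).
def counting_sort(values, k):
    counts = [0] * k
    for v in values:          # O(n): tally each value
        counts[v] += 1
    out = []
    for v in range(k):        # O(k): emit values in sorted order
        out.extend([v] * counts[v])
    return out

print(counting_sort([3, 1, 4, 1, 5], k=6))  # [1, 1, 3, 4, 5]
```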

Comment by saidachmiz on Coherent decisions imply consistent utilities · 2020-03-21T08:35:12.448Z · score: 2 (1 votes) · LW · GW

Have you read the papers I linked (or the more directly relevant papers cited by those)? What do you think about Aumann’s commentary on this question, for instance?

Comment by saidachmiz on Welcome to LessWrong! · 2020-03-18T00:55:10.643Z · score: 4 (2 votes) · LW · GW

Thanks to gwern for the mention of GW/RTS!

In the interests of giving equal screen time to the (friendly!) ‘competition’, here’s yet another viewer site for Less Wrong—one which takes an even more low-key and minimalist approach:

Comment by saidachmiz on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T05:38:52.215Z · score: 2 (1 votes) · LW · GW

Unless I am misunderstanding, wouldn’t orthonormal say that “switching frames” is actually a thing not to do (and that it’s something post-rationalists do, which is in conflict with rationalist approaches)?

Comment by saidachmiz on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-14T09:18:46.530Z · score: 15 (11 votes) · LW · GW

I want to specifically object to the last part of the post (the rest of it is fine and I agree almost completely with both the explicit positive claims and the implied normative ones).

But at the end, you talk about double-crux, and say:

And to try to double-crux with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.

Well, and why did you think you could take it for granted in the community? I don’t think that’s justified at all—post-rationalists and rationalist-adjacents aside!

For instance, while I don’t like to label myself as any kind of ‘-ist’—even a ‘rationalist’—the term applies to me, I think, better than it does to most people. (This is by no means a claim of any extraordinary rationalist accomplishments, please note; in fact, if pressed for a label, I’d have to say that I prefer the old one ‘aspiring rationalist’… but then, your given definition—and I agree with it—requires no particular accomplishment, only a perspective and an attempt to progress toward a certain goal. These things, I think, I can honestly claim.) Certainly you’ll find me to be among the first to argue for the philosophy laid out in the Sequences, and against any ‘post-rationalism’ or what have you.

But I have deep reservations about this whole ‘double-crux’ business, to say the least; and I have commented on this point, here on Less Wrong, and have not seen it established to my satisfaction that the technique is all that useful or interesting—and most assuredly have not seen any evidence that it ought to be taken as part of some “rationalist canon”, which you may reasonably expect any other ‘rationalist’ to endorse.

Now, you did say that you’d feel infuriated by having double-crux fail in either of those specific ways, so perhaps you would be ok with double-crux failing in any other way at all? But this does not seem likely to me; and, in any case, my own objection to the technique is similar to what you describe as the ‘rationalist-adjacent’ response (but different, of course, in that my objection is a principled one, rather than any mere unreflective lack of interest in examining beliefs too closely).

Lest you take this comment to be merely a stream of grumbling to no purpose, let me ask you this: is the bit about double-crux meant to be merely an example of a general tendency (of which many other examples may be found) for Less Wrong site/community members to fail to endorse the various foundational concepts and techniques of ‘LW-style’ rationality? Or, is the failure of double-crux indeed a central concern of yours, in writing this post? How important is that part of the post, in other words? Is the rest written in the service of that complaint specifically? Or is it separable?

Comment by saidachmiz on Why hasn't the technology of Knowledge Representation (i.e., semantic networks, concept graphs, ontology engineering) been applied to create tools to help human thinkers? · 2020-03-09T06:44:20.407Z · score: 2 (1 votes) · LW · GW

To learn about attempts to develop user-facing “knowledge representation” software (and related) tools, read about “mind mapping” (and follow the links in the “Information mapping” sidebar).

Comment by saidachmiz on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T19:01:05.075Z · score: 11 (7 votes) · LW · GW

Third, given that the authors said they realized it might be bad, this should never have been posted without discussion with someone external.

For example…?

Suppose I’m a Less Wrong member who sometimes makes posts. Suppose I have some thoughts on this whole virus thing and I want to write down those thoughts and post them on Less Wrong.

You’re suggesting that after I write down what I think, but before I publish the post, I should consult with “someone external”.

But with whom? Are you proposing some general guideline for how to determine when a post should go through such consultation, and how to determine with whom to consult, and how to consult with them? If so, please do detail this process. I, for one, haven’t the foggiest idea how I would, in the general case, discern when a to-be-published post of mine needs to be vetted by some external agent, and how to figure out who that should be, etc.

This whole business of having people vet our posts seems like it’s easy to propose in retrospect as a purported unsatisfied criterion of posting a given post, but not so easy to satisfy in prospect. Perhaps I’m misunderstanding you. In any case, I should like to read your thoughts on the aforesaid guidelines.

(By the way, what assurances of vetting would satisfy you? Suppose the OP had contained a note: “This post has been vetted by X.”. And suppose otherwise the post were unchanged. For what value(s) of X would you now have no quarrel with the post?)

Comment by saidachmiz on Credibility of the CDC on SARS-CoV-2 · 2020-03-07T22:38:09.255Z · score: 50 (17 votes) · LW · GW

obvious info hazard

I wish people would stop throwing this term around willy-nilly.

Not only is it not obvious to me that this post is an “info hazard”, but I don’t really know what you even mean by it. Is it the definition used in this recent Less Wrong post[1], or perhaps the one quoted in this Less Wrong Wiki entry[2]?

In any case, the OP seems to be presenting true (as far as I can tell) and useful (potentially life-saving, in fact!) information. If you’re going to casually drop labels like “infohazard” in reference to it, you ought to do a lot better than a justification-free “this is bad”. Civil or not, I’d like to see that critique.

If you think the OP is harmful, by all means do not let civility stop you from posting a comment that may mitigate that harm! If you really believe what you’re saying, that comment may save lives. So let’s have it!

EDIT: Like Zack, I will strong-upvote this extended critique if you post.

  1. TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.

  2. An information hazard is a concept coined by Nick Bostrom in a 2011 paper[1] for Review of Contemporary Philosophy. He defines it as follows: “Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.”

Comment by saidachmiz on Morality vs related concepts · 2020-02-20T01:43:05.905Z · score: 6 (4 votes) · LW · GW

Ethics/morality is generally understood to be a way to answer the question, “what is the right thing to do [in some circumstance / class of circumstances]?” (or, in other words, “what ought I to do [in this circumstance / class of circumstances]?”)

If, in answer to this, your ethical framework / moral system / etc. says “well, action X is better than action Y, but even better would be action Z”, then you don’t actually have an answer to your question (yet), do you? Because the obvious follow-up is, “Well, ok, so… which of those things should I do? X? Or Y? Or Z…?”

At that point, your morality can give you one of several answers:

  1. “Any of those things is acceptable. You ought to do something in the set { X, Y, Z } (but definitely don’t do action W!); but which of those three things to do, is really up to you. Although, X is more morally praiseworthy than Y, and Z more praiseworthy than X. If you care about that sort of thing.”

  2. “You ought to do the best thing (which is Z).”

  3. “I cannot answer your question. There is no right thing to do, nor is there such a thing as ‘the thing you ought to do’ or even ‘a thing you ought to do’. Some things are simply better than others.”

If your morality gives answer #3, then what you have is actually not a morality, but merely an axiology. In other words, you have a ranking of actions, but what do you do with this ranking? Not clear. If you want your initial question (“what ought I to do?”) answered, you still need a morality!

Now, an axiology can certainly be a component of a morality. For example, if you have a decision rule that says “rank all available actions, then do the one at the top of the ranking”, and you also have a utilitarian axiology, then you can put them together and presto!—you’ve got a morality. (You might have a different decision rule instead, of course, but you do need one.)

Answer #3 plus a “do the best thing, out of this ranking” is, of course, just answer #2, so that’s all fine and good.

In answer #1, we are supposing that we have some axiology (evaluative ranking) that ranks actions Z > X > Y > W, and some decision rule that says “do any of the first three (feel free to select among them according to any criteria you like, including random choice), and you will be doing what you ought to do; but if you do W, you’ll have done a thing you ought not to do”. Now, what can be the nature of this decision rule? There would seem to be little alternative to the rule being a simple threshold of some sort: “actions that are at least this good [in the evaluative ranking] are permissible, while actions worse than this threshold are impermissible”. (In the absence of such a decision rule, you will recall, answer #1 degenerates into answer #3, and ceases to be a morality.)

Well, fair enough. But how to come up with the threshold? On what basis to select it? How to know it’s the right one—and what would it mean for it to be right (or wrong)? Could two moralities with different permissibility thresholds (but with the same, utilitarian, axiology) both be right?

Note that the lower you set the threshold, the more empty your morality becomes of any substantive content. For instance, if you set the threshold at exactly zero—in the sense that actions that do either no good at all, or some good, but in either case no harm, are permitted, while harmful actions are forbidden—then your morality boils down to “do no harm (but doing good is praiseworthy, and the more the better)”. Not a great guide to action!

On the other hand, the higher you set the threshold, the closer you get to answer #2.

And in any event, the questions about how to correctly locate the threshold, remain unanswered…
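The structure of answer #1 can be made concrete in a few lines (the utilities below are invented purely for illustration; the point is the shape of the rule, not the numbers):

```python
# Answer #1 as code: an axiology (a utility ranking over actions) plus a
# permissibility threshold. Utilities here are hypothetical placeholders.

def permissible(actions, utility, threshold):
    """Return the actions the threshold rule permits."""
    return [a for a in actions if utility[a] >= threshold]

utility = {"W": -2, "Y": 1, "X": 3, "Z": 5}
actions = ["W", "X", "Y", "Z"]

# A threshold of zero reduces the morality to "do no harm":
assert permissible(actions, utility, threshold=0) == ["X", "Y", "Z"]

# Raising the threshold to the maximum collapses answer #1 into
# answer #2, "do the best thing":
assert permissible(actions, utility, threshold=5) == ["Z"]
```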

Comment by saidachmiz on Why do we refuse to take action claiming our impact would be too small? · 2020-02-14T21:14:22.221Z · score: -4 (2 votes) · LW · GW

I’m not sure I get your meaning. You didn’t focus on actual examples… because you don’t want to find out if the phenomenon you’re describing is actually real or not? (But you obviously should want to find this out—that is what we’re doing here, right?)

I mean, if what you’re describing isn’t a real thing, then this whole conversation is moot, isn’t it?

Comment by saidachmiz on Why do we refuse to take action claiming our impact would be too small? · 2020-02-11T14:02:17.703Z · score: 6 (4 votes) · LW · GW

To illustrate with an hypothetical example: If we suddenly found out that mobile phone frequencies destroy the planet …

I find that phenomena like this are almost entirely pointless to illustrate with hypothetical examples, and much more fruitful to instead illustrate with actual examples.

Note, however, that if you do this, you may get responses protesting that actually, your supposed “actual examples” are not, in fact, examples of your claimed phenomenon. This, of course, is very much a feature, and not at all a bug—as it is quite possible that the phenomenon you thought was real, in fact… isn’t. In the latter case, what you would expect is precisely that all your attempts to provide actual examples would be met with skepticism and protest.

Comment by saidachmiz on Eukryt Wrts Blg · 2020-02-05T21:50:55.645Z · score: 2 (1 votes) · LW · GW

The “real name” issue is only one part of one of the points I made. Even if you reject that part entirely, what do you say to the rest?

I suppose there’s a perceived difference in professionalism or skin in the game (am I characterizing the motive correctly?), but we’re all here for the ideas anyways, right?

This is not a realistic view, but, again, I am content to let it slide. By no means is it the whole or even most of the reasons for my view.

Comment by saidachmiz on Eukryt Wrts Blg · 2020-02-05T00:00:10.935Z · score: 4 (2 votes) · LW · GW

I don’t think I agree.

Or, to be more precise, I agree denotationally but object connotationally: indeed, the thing I want is a different thing than what Less Wrong is, but it’s not clear to me that it’s a different thing than what Less Wrong easily could be.

To take a simple example of an axis of variation: it is entirely possible to have a public forum which is not indexed by Google.
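For instance, keeping a public site out of Google’s index takes only a standard exclusion file at the site root (a sketch; compliant crawlers honor it, though it is a request rather than an enforcement mechanism, and a per-page `noindex` robots meta tag is the stronger signal):

```
# robots.txt, served at the site root (hypothetical forum)
User-agent: *
Disallow: /
```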

A more complicated example: there is a difference between obtuseness and lack of deliberate, positive effort to minimize inferential distance to outsiders. I do not advocate the former… but whether to endorse the latter is a trickier question (not least because interpreting the latter is a tricky matter on its own).

Comment by saidachmiz on Eukryt Wrts Blg · 2020-02-04T23:13:30.629Z · score: 4 (2 votes) · LW · GW

I didn’t advocate being obtuse. I only said that by default, we probably do not (and/or ought not) want a post to be disseminated widely.

What is the best way of accomplishing this, is a separate matter.

Comment by saidachmiz on Category Theory Without The Baggage · 2020-02-03T22:36:26.562Z · score: 5 (3 votes) · LW · GW

I didn’t read all the way through (I stopped reading midway through the extended airport example), so forgive me if this is answered already; if you say the answer’s in there, I’ll go back and reread. But, in case it’s not, my question is: what would I gain from using category theory for problems like this, instead of graph theory (on which there already exists a vast literature)?

Comment by saidachmiz on Eukryt Wrts Blg · 2020-02-03T18:03:24.580Z · score: 7 (4 votes) · LW · GW

Several reasons.

The most important one is: the further an idea spreads, the more likely it is to be misinterpreted and distorted, and discussed elsewhere in the misinterpreted/distorted form; and the more this happens, the more likely it will be that anyone discussing the idea here has, in their mind, a corrupted form of it (both because of contamination in the minds of Less Wrong commenters from the corrupted form of the idea they read/hear in discussions elsewhere, and because of immigration of people, into Less Wrong discussions, who have first heard relevant ideas elsewhere and have them in a corrupted form). This can, if common, be seriously damaging to our ability to handle any ideas of any subtlety or complexity over even short periods of time.

Another very important reason is the chilling effects on discussions here due to pressure from society-wide norms. (Many obvious current examples, here; no need to enumerate, I think.) This means that the more widely we can expect any given post or discussion to spread, the less we are able to discuss ideas even slightly outside the Overton window. (The higher shock levels become entirely out of reach, for example.)

Finally, commonplace wide dissemination of discussions here are a strong disincentive for commenters here to use their real names (due to not wanting to be exposed so widely), to speak plainly and honestly about their views on many things, and—in the case of many commenters—to participate entirely.

Comment by saidachmiz on Eukryt Wrts Blg · 2020-02-03T09:12:55.951Z · score: 5 (3 votes) · LW · GW

And making it accessible suddenly means it can be linked and referred to in many other contexts. … It enables the post to spread beyond the LW memeosphere, potentially bringing you honor and glory.

There are often very, very good reasons not to want this, and indeed to want the very opposite of this. In fact, I think that the default should be to not want any given post to be linked, and to spread, far and wide.

If you’re not going to do this, you can at least: Link jargon to somewhere that explains it.

I do wholeheartedly endorse this, however.

Comment by saidachmiz on What Money Cannot Buy · 2020-02-03T09:06:37.522Z · score: 6 (3 votes) · LW · GW

I think to properly combat the factors that make PageRank not work, we need to broaden our analysis. Saying it’s “link farms and other abuse” doesn’t quite get to the heart of the matter—what needs to be prevented is adversarial activity, i.e., concerted efforts to exploit (and thus undermine) the system.

Now, you say “research is a gated community with ethical standards”, and that’s… true to some extent, yes… but are you sure it’s true enough, for this purpose? And would it remain true, if such a system were implemented? (Consider, in other words, that switching to a PageRank-esque system for allocating funding would create clear incentives for adversarial action, where currently there are none!)
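To make the incentive concrete, here is a toy power-iteration PageRank (hypothetical node names; a sketch, not any real funding or search system) showing how a handful of sybil nodes all citing one target inflate its score:

```python
# Toy power-iteration PageRank, illustrating the adversarial incentive:
# "sybil" nodes that all cite one target inflate the target's score.

def pagerank(graph, damping=0.85, iters=100):
    """graph: dict mapping each node to the list of nodes it links to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outlinks in graph.items():
            if outlinks:
                for u in outlinks:
                    new[u] += damping * rank[v] / len(outlinks)
            else:  # dangling node: spread its rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# An honest citation graph; nobody cites "target".
honest = {"A": ["B"], "B": ["C"], "C": ["A"], "target": ["A"]}
base = pagerank(honest)["target"]

# The same graph after the target recruits ten sybils that all cite it.
farmed = dict(honest)
for i in range(10):
    farmed["sybil%d" % i] = ["target"]
inflated = pagerank(farmed)["target"]

assert inflated > base  # the link farm raised the target's score
```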

Comment by saidachmiz on What Money Cannot Buy · 2020-02-03T00:26:02.388Z · score: 7 (6 votes) · LW · GW

The Web of Trust didn’t work for secure communication, and PageRank didn’t work for search. What makes you think either or both of these things will work for research funding?

Comment by saidachmiz on The Real Rules Have No Exceptions · 2020-02-02T19:39:46.483Z · score: 6 (3 votes) · LW · GW

In keeping with my habit of illustrating things using World of Warcraft, here is an additional, real-world (… more or less) example of applying the concept I describe in the OP.

Note that the case I’m about to describe has two interesting features which make it a useful case study for the concept. First, the rule in question is a rule meant to bind an organization, rather than an individual (in contrast to, e.g., the No Cookies rule we’ve thus far been discussing in this comment thread). Second, the challenge to the rule (which arose from the apparent existence of “legitimate exceptions”) was, in this case, resolved not by integrating the exceptions and updating the rule, but by rejecting the apparent legitimacy of the exceptions, identifying and repudiating the generator of those exceptions, and retaining the original rule.

Now, to the example. With the release of World of Warcraft: Classic (a.k.a. WoW), I’ve started playing the game once more, and so once more I routinely encounter the challenges of raiding, loot distribution, and everything else I described in my post about incentives and rewards in WoW. (See that post, and the one before it, for explanations of all the WoW jargon I use here.) The following happened to a guild with which I’m familiar.

This guild had wisely chosen the EP/GP loot distribution system (without question, the most rational of loot systems) for use in their raids. The system worked well at first, but soon situations like the following began to arise: some raid member would receive a piece of gear (having the highest priority ratio among all those who wanted this item), but—so the sentiment among many of the raiders went—it would have gone to better use in the hands of a different raid member. Or: some item of loot—quite powerful, and potentially beneficial to the raid in the hands of one or another specific raid member—was discarded, and went to waste, because no one wanted to “spend points” (that is, to sacrifice their loot priority) on that item.
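For readers unfamiliar with EP/GP, here is a minimal sketch of how such a system allocates loot. (This is illustrative only, not the actual WoW addon's implementation; the function names, the base-GP constant, and the numbers are all invented for the example.)

```python
# Toy sketch of an EP/GP ("Effort Points / Gear Points") loot system.
# Priority is the ratio of effort contributed to gear already received.

BASE_GP = 10  # floor on GP, so brand-new members don't have infinite priority

def priority(ep, gp):
    """A raider's loot priority: effort earned divided by gear received."""
    return ep / max(gp, BASE_GP)

def award_item(item_cost, bidders):
    """bidders: list of (name, ep, gp) for raiders who want the item.

    The item goes to the bidder with the highest EP/GP ratio; the winner's
    GP then increases by the item's cost, lowering their future priority.
    Returns the winner's name and updated (ep, gp), or None if nobody bid.
    """
    if not bidders:
        return None  # the item is discarded -- the failure mode described above
    name, ep, gp = max(bidders, key=lambda b: priority(b[1], b[2]))
    return name, (ep, gp + item_cost)

# Example: two raiders want the same item.
winner, (ep, gp) = award_item(50, [("Aldric", 300, 100), ("Brynn", 200, 40)])
```

Note how the second failure mode in the anecdote falls out of the mechanics: if an item would help the raid but every eligible raider declines to bid (to preserve their own priority ratio), the item is simply wasted.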

The raid leadership began to talk of legitimate exceptions… which, of course, stirred up anxiety and discontent among the raiders. (After all, if the rules only apply until the raid leader decides they don’t apply, then the rules don’t really apply at all… and the benefits of having a known, predictable system of loot distribution—raid member satisfaction and empowerment, the delegation of optimization tasks, etc.—are lost.) Seeing this, the guild’s officers held a public discussion, and analyzed the situation as follows.

Two competing goals, they said, together generate our intuitions (and yours) about how loot should be distributed. On the one hand, we desire that there be equity, fairness, and freedom of choice in the process; those who contribute, should be rewarded, and they should be free to choose how to spend the currency of those fairly allocated rewards. On the other hand, we also strive for raid progression, and to effectively defeat the challenges of raid content [i.e., killing powerful “raid boss” monsters—which are the source of loot]. Certain allocations of loot items, and certain allocation systems, may serve the former goal more than they serve the latter, and vice versa.

However (continued the guild officers), fairness is one of the stated values of this guild—and it takes precedence over optimization of raid progression. Our chosen loot distribution system (EP/GP) is meant to be the fairest system, and to provide an environment where our raid members can reliably expect to be rewarded for their contributions—and that is our top priority. This will, indeed, sometimes result in a less-than-optimal result from the standpoint of whole-raid optimization. We accept this consequence. We say that any apparent “legitimate exceptions” to EP/GP-based loot distribution, whose seeming legitimacy stems from the intuition generated by the “optimize the raid’s overall performance” goal, are not, in fact, legitimate, in our eyes. We recognize this goal, the source of such intuitions, and while we do not in the least disclaim it, we nonetheless explicitly place it below the goal of fairness, in our goal hierarchy. There will (the guild officers concluded) be no exceptions, after all. The rule will stand.

Comment by saidachmiz on The Real Rules Have No Exceptions · 2020-02-02T08:13:00.822Z · score: 2 (1 votes) · LW · GW

The question of formalization (a.k.a. “what does ‘expect’ mean in a technical sense”) is a good one; I don’t have an answer for you. (As I said, the idea which I have in mind is like the idea of “conservation of expected evidence”, but, as you say, it’s not quite the same thing.) My mathematical skills do not suffice to provide any technical characterization of the term as I am using it.

It seems to me that the informal sense of the word suffices here; a formalization would be useful, no doubt (and if someone can construct one, more power to them)… but I do not see that the lack of one seriously undermines the concept’s validity or applicability.

In particular, your list of examples is composed almost entirely of cases which quite miss the point. Before going through them, though, I’ll note two things:

First—the purpose of the exercise is to construct more effective rules with which to govern our own behavior (as individuals), and the behavior of groups or organizations in which we participate. This general goal is often threatened by the existence of so-called “exceptions” to our ostensible rules, which can easily turn some apparently clear and straightforward rule against itself, and against the ostensible intent of the rule’s formulator(s). My aim in the OP is to provide a conceptual tool that counteracts this threat, by pointing out that the existence of “exceptions” is, in fact, a sign that there actually exists some real rule which is not identical to the stated rule (and which is the generator for the “exceptions”).

Second—the point I make in the OP is twofold: descriptive and prescriptive. The descriptive component is “the real rules have no exceptions”. The prescriptive component is “here is how you ought to deal with encountered apparent ‘legitimate exceptions’”. You seem to be objecting, here, to the prescriptive component. I do not think your objection holds (as I’ll try to demonstrate shortly), but note that even if you continue to find my prescription unconvincing, nevertheless the description remains true! There is some underlying pattern which is generating “legitimate exceptions”, and it will continue, unseen, to govern your behavior (and to undermine the predictability thereof)… unless you identify it, and either integrate or alter it.

We’ll do well to remember these two points as we consider the examples you offer. You propose that the following seem like potentially expectable exceptions to the “No Cookies” rule:

  1. except if someone invents a cookie that’s good for my health

Once again, recall that the point of the rule in the first place is to effectively govern your own behavior. The difficulty, after all, is what? It’s that you know that you shouldn’t eat cookies all the time (or perhaps, almost ever), but you also know that without some device with which to restrain yourself, you’ll eat lots of cookies, because they’re delicious. (We can express this in terms of first- and second-order desires, or “goals” vs. “urges”, or some framework along such lines, but I think that the point here ought to be simple enough in any case.) A No Cookies rule is such a device. Its purpose is to enforce upon yourself some rule which you wish enforced upon yourself, in the service of achieving, and maintaining, some goal of yours.

Now, what happens when you encounter the best cookies in the state, and they seem to you to be a legitimate exception to your No Cookies rule? Roughly, what you have discovered thereby is that in addition to your goal of maintaining your health, you also have some other goal(s), which compete with it (such as, perhaps, “avoid turning life into a joyless existence, devoid entirely of sensory pleasures”, or “don’t let rare experiences pass you by, as they are precious and enriching”). Any explicit rule meant to govern the given class of situations, which purports to embody your goals and preferences, must capture this competing goal, along with the “maintain health” goal.

But under this view, the quoted example of a purported exception isn’t any such thing after all! The purpose of the No Cookies rule was to stop yourself from eating lots of cookies and thereby harming your health in the pursuit of momentary pleasures… but this hypothetical “health cookie” doesn’t interfere at all with the “health maintenance” goal, and is entirely consonant with the purpose of the existing rule. If you like, you can say that we take “cookies” to be a stand-in for “delicious but unhealthy sweets”—and “health cookies” don’t fit the bill. (Indeed, such a broad interpretation is needed anyway, as otherwise we would have the absurd situation of abjuring cookies but gorging on brownies—thus utterly ruining the purpose of the rule—and having to engage in philosophical debates about whether “bar cookies” are cookies or a distinct culinary product called bars, etc.)

  2. except if someone points a gun at my head and orders me to eat a cookie

Well, first of all, should you encounter such a conundrum, you really have bigger problems than how best to formulate a rule governing your dietary practices.

Nothing I wrote in the OP (indeed, you may assume, nothing I ever write) is intended to replace common sense. I am not Eliezer; I do not write with the ultimate aim of applying my points to AI design. My prescription is meant for people—not for robots.

That having been said, there is, in fact, a non-ad-hoc way of handling just such cases; one prominent example of such an approach is seen in Jewish religious law, in the concept of pikuach nefesh. Briefly, the point is that there is no need to write into every rule a clause to the effect that “this rule shall be suspended if someone’s pointing a gun at my head”; instead, you have a general rule that if your life’s in danger, almost all other rules are suspended. Whatever goals and purposes your rules serve, they’re not so important as to be worth your life. (This doesn’t apply to all rules, just most of them… but certainly that “most” includes dietary restrictions.)

  3. except if a doctor prescribes cookies because of some medical or psychiatric reason

Essentially the same response applies as that for example #1. If the cookies in question are, in fact, necessary to maintain your health, then eating them serves the goal for which the No Cookies rule was formulated. There is no question, here, of whether this is a “legitimate exception”; no uncertainty, no temptation.

Again, remember that the No Cookies rule is made by you, to serve your goals, to guard those goals against your impulses and your weaknesses. Consider again the notion of “legitimate exceptions”. We have already covered the meaning of legitimate exceptions (they are manifestations of underlying intuitions which serve competing goals), but what about illegitimate exceptions? The possibility of such is implied, isn’t it? But what are they? Well, they’re the manifestations, not of competing goals, but of precisely the impulses or urges which the rule is aimed at restraining in the first place! The question of “legitimacy” of an exception is, then, the question: “I have an intuition that I ought to except this situation from application of the rule, but does that intuition spring from a competing goal which I endorse, or does it spring from the desire I am trying to restrain?”

But in the given example, the question does not arise, because the exception is not generated by your intuition, but by an entirely exogenous factor: your doctor. (And, it must be noted, the question of how the health-related goal of your No Cookies rule stacks up to whatever medical reason your doctor has for prescribing you cookies, can, and should, be discussed with your doctor!)

  4. except if I’m in a social situation where not eating a cookie is a serious faux pas (e.g., it will seriously offend the person offering me a cookie)

(Skipping this one for now; see below.)

  5. except if I’m diagnosed with a terminal disease so I have no reason to care about my long term health anymore

Well, then you can drop the No Cookies rule entirely, and need no longer worry about what does, or does not, constitute an exception to it.

  6. except if I’m presented with convincing evidence that I’m living in a simulation and eating cookies has no real negative consequences

The same response applies as to example #5.

Now, let’s return to the example I skipped:

  4. except if I’m in a social situation where not eating a cookie is a serious faux pas (e.g., it will seriously offend the person offering me a cookie)

Ah! Now, here we have a genuine difficulty—and it is precisely the sort of difficulty which the concept I describe in the OP is intended to handle.

First, a note. In my post and my comments, I have talked about “encountering” various situations (and, relatedly, “expecting” to encounter them). Yet as you demonstrate, one can imagine encountering all sorts of situations, before ever actually encountering them.

Well, and what is the problem with that? This, it seems to me, is a feature, not a bug. Surely it’s a good thing, and not at all a bad thing, to think through the implications of your rules, and to consider how they may be applied in this or that situation you might run into. Suppose, after all, that you run into such a social situation (where refusing an offered cookie is a faux pas), having never before considered the possibility of doing so. You are likely to experience some indecision; you may act in a way you will later come to regret; you will, in short, handle the situation more poorly than you might’ve, had you instead given the matter some thought in advance.

You may think of this, if you like, as “encountering” the situation in your mind, which (assuming that your imagined scenario contains no gross distortions of the likely reality) may stand in for encountering the situation in fact. If the imagined scenario contains an apparently legitimate exception to your rule, you can then apply the same approach I describe in my post (i.e., analyze the generator of the exception, then either integrate the exception by updating the rule, or keep the rule and judge the exception to be illegitimate after all).

(Of course, such things shouldn’t be overdone. It’s no good to be paralyzed into anxiety by the constant contemplation of all possible situations you may ever encounter. But this problem is, I think, beyond the scope of this discussion.)

Now, to the specific example. You have, we have said, a rule: No Cookies. But you find yourself in some social situation where applying this rule has negative social consequences. This would seem to be one of those legitimate exceptions. And why is this? Well, we may suppose that you’ve got (as most people have) a general goal along the lines of “maintain good social standing”; or, perhaps, the operative goal is something more like “maintain a good relationship with this specific individual”.

The question before you, then, is how to weigh this social goal of yours against the health goal served by the No Cookies rule. That is something you (that is, our hypothetical person with the No Cookies rule) must answer for yourself; there is no a priori correct answer. In some cases, for some people, the social goal overrides the health goal. But for others, the health goal takes precedence. In such a case, it is a very good idea to have considered such situations in advance, and to have decided, in advance, to stand firm—to reject, in other words, the intuitive judgment of the exception’s legitimacy, having analyzed it and given due consideration to its source (i.e., the goal of maintaining social status or a personal relationship).

Such advance consideration is valuable not only because it saves you from making on-the-spot decisions you would later regret, but also because it allows you to take steps to mitigate the effects of choosing one way or the other—to turn an “either way, I lose something important” situation into a win-win.

Take the case of a No Cookies rule which is challenged by the refusal of an offered cookie being a social faux pas. Suppose you decide that in such a case (or in a specific such case), you will give precedence to your social goal(s), and eat the cookie. What steps might you take to mitigate the effects of this? For one, you might consider the impact of this violation of your No Cookies rule on the goal the rule serves, and compensate by reducing your sugar intake for the day / week / month. Alternatively, you might anticipate the possibility of entirely ruining your diet by frequent encounters with such socially challenging cookie-related situations, and proactively ensure that you only rarely find yourself at cookie-tasting parties (or whatever).

Conversely, suppose you decided that in such a case (or in a specific such case), you will give precedence to your health goal(s), and refuse the cookie. What steps might you take to mitigate the seriousness of the faux pas? Well, you might warn your cookie-offering acquaintance in advance that you are on a No Cookies diet, apologize in advance for refusing their offer of a cookie, and assure them (and solicit credible witnesses to bolster your assurance) that your refusal isn’t a judgment on their cookie-baking skills, but rather is forced by your dietary needs.

I could just go to full consequentialism and say the real rule is “no cookies except if the benefits of eating a cookie outweigh the costs” but presumably that’s not the point of this post?

The point, as I say above, is to provide a conceptual tool with which to better govern your own behavior, and that of organizations and groups in which you participate. Consequentialism is very well and good, and I have no quarrel with it; but act consequentialism is impractical (for humans). Consider my post to be a suggestion for a certain sort of rule-consequentialist “implementation detail” for your consequentialist principles.

Comment by saidachmiz on The Real Rules Have No Exceptions · 2020-02-01T09:03:10.679Z · score: 25 (5 votes) · LW · GW

Can anyone give some examples …


First, let me note that the key to understanding the post is this part:

But why do I say that good rules ought not have exceptions? Because rules already don’t have exceptions.

Exceptions are a fiction. They’re a way for us to avoid admitting (sometimes to ourselves, sometimes to others) that the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.

The approach I describe above merely consists of making this fact explicit.

Once again, for emphasis:

… the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.

And this is summarized by the title of the post: “The Real Rules Have No Exceptions”.

Now for some examples. I will give three: dietary restrictions, ethical injunctions, and criminal justice systems. We’ll examine each, and see how they fit into the concept I describe in the OP.

Personal dietary restrictions

This is the example in the quoted bit of Chris Leong’s post. You have a rule: “I won’t eat any cookies”. (You have decided on this rule, one imagines, to curb your sugar intake. Or something.) You’ve held strong for a while; you’ve turned down your friend’s signature chocolate chip cookies, and those wonderful black-and-white cookies they sell at the corner deli. But! You now find yourself faced with a bakery that sells what are, by all accounts of the cookie cognoscenti, the best cookies in the state. This, it seems to you, is a legitimate exception to your no-cookies rule. You eat the cookies. (They are delicious.)

The naïve view of this scenario is: “I am following a simple rule: No Cookies. But, sometimes, there are legitimate exceptions. Like, say, if the cookies are the best cookies in the state. Or… some similar situation. No Cookies is still the rule! Exceptions are just… exceptions.”

And I am saying that this view is both mistaken and imprudent. (More on this in a bit.)

Now, the obvious question to ask of the naïve account is: just what is this business of “legitimate exceptions”? What makes an exception “legitimate”, anyway? This is the crux of the matter. Chris Leong’s description of such scenarios says “you encounter a situation that legitimately feels exceptional”—but what makes one exception “feel” legitimate, and another “feel” illegitimate?

Generally, in such scenarios, there is some underlying intuition—which may or may not be easily verbalized or even teased out from examples. Nevertheless, there is (in my experience) always some pattern, some “generator” (to use the local parlance) of the intuition, some regularity—and this regularity sorts situations wherein the stated rule is applicable into the categories of “legitimate exception” and “not a legitimate exception”.

And so the core insight (such as it is) of my post is just this: whatever the stated rule may be, nevertheless the actual rule—the complete, fully described rule that governs situations of the given category—is constituted by the stated rule, plus whatever is the underlying pattern, dynamic, generator, etc., which determines which situations are legitimate exceptions to the stated rule.

Let’s return to our “No Cookies” example. Despite being a fairly trivial matter, this happens to be one of those cases where the underlying intuition behind judgments of exception legitimacy is hard to verbalize. It’s hard to say what may motivate someone to treat this particular situation (“best cookies in the state”) as a legitimate exception to a No Cookies rule… but consider this as one plausible account (out of potentially many other such):

“If I encounter a situation where I have the opportunity to have an interesting, fun, or pleasant experience which is rare, or even unique, and which opportunity I can expect will not repeat itself often, or ever, then it is permissible to suspend certain rules which otherwise would be in effect at all times. This is because, firstly, the benefit to me of having such a rare positive experience outweighs the downside of undermining a generally-unbreakable rule, and secondly, if I do not expect such a situation to recur often, then I run relatively little risk of permanently undermining the rule to an extent that makes following it infeasible.”

Now, again, such an intuition will, for the overwhelming majority of people, not be a consciously held belief. If you ask them to tell you what is their policy vis-à-vis cookies, they will say: “my policy is No Cookies”. If you press them, they will confess that their policy admits of exceptions, in some legitimately exceptional situations. If you ask them to explain just what situations are “legitimately exceptional”, they will be unable to oblige you in any coherent way. Yet this does not, of course, mean that the above-described intuition (or something along those general lines) does not govern their behavior and their thinking on the subject of cookies.

So, what I am saying is: the real rule in this case is not No Cookies, but something more like: No Cookies, Unless Consumption Of Some Particular Cookies Constitutes A Rare Opportunity To Have An Unusual, Or Even Unique, Experience, Which I Expect Will Not Recur Often, Or Perhaps Ever. (Or something along these lines.)
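To put the same claim in toy pseudo-Python (the predicate names and situation fields here are invented purely for illustration): the rule that actually governs behavior is the stated rule disjoined with the exception generator.

```python
# Toy formalization: the "real rule" is the stated rule composed with
# the (usually unverbalized) generator of legitimate exceptions.

def stated_rule(situation):
    """The rule as you'd state it: 'No Cookies.' Returns True if permitted."""
    return not situation.get("is_cookie", False)

def legitimate_exception(situation):
    """The hidden generator of 'legitimate exceptions' to the stated rule."""
    return situation.get("rare_experience", False)

def real_rule(situation):
    """What actually governs your behavior: stated rule OR exception generator."""
    return stated_rule(situation) or legitimate_exception(situation)

ordinary_cookie = {"is_cookie": True}
best_in_state   = {"is_cookie": True, "rare_experience": True}
```

The point of the post, in these terms: people report `stated_rule` as their policy, but their behavior is predicted only by `real_rule`, and self-knowledge consists in making `legitimate_exception` explicit enough to examine, endorse, or reject.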

I said earlier that the naïve view (“My rule is No Cookies. But, yes, sometimes there are legitimate exceptions.”) is both mistaken and imprudent. What I meant by “mistaken” should now be clear: the naïve view is substantially less accurate than the fully-informed view; it does not really let you make accurate predictions about your own behavior (not without the aid of that non-verbalized intuition). And what I mean by “imprudent” is this: if you hold the naïve view, then you really have no opportunity to examine that exception-generating intuition of yours, and to endorse it, or revise it, or reject it. On the other hand, if you are fully cognizant of what the real rule is, then you can give it due consideration—and perhaps tweak it to your liking!

Note two things. First: this fully formulated, a.k.a. “real”, rule—is it a “rule to which no legitimate exception will ever be encountered”? Here I must admit that this wording was a bit of shorthand on my part. What I was referring to was something a bit like the notion of conservation of expected evidence; that is, while it is not all that probable that any given rule will survive the rest of your life without having to be updated, nevertheless you should not ever expect to encounter exceptions, any more than you should ever expect to encounter evidence in some specific direction from your current belief. If you do expect to encounter evidence in a specific direction from your current belief, then you should update immediately, because this indicates that you already have some not-yet-integrated evidence (which is the source of your expectation). Similarly, if you have some specific reason to believe that you’ll encounter legitimate exceptions to some rule, then you should revise your rule, because the real rule you’re already following is your stated rule plus whatever is causing you to expect to encounter exceptions.

Second: what role does encountering what seems to you to be a legitimate exception play in this whole framework? Simply, it is a demonstration that your real rule is not the same as your stated rule, and that there are some hidden parts to it (which are the source of your sense of the given exception’s legitimacy). So, in our cookie example, suppose that you thought (and would have said, if asked) that your policy on the subject of cookies is simple: No Cookies, No Exceptions. Then you encounter the best cookies in the state, and say: “OK, well… no cookies or exceptions… except for these cookies, which are clearly legitimately exceptional”. Your rule, which you thought had no exceptions, turns out to have exceptions—and is thereby revealed not to have been the real rule all along. You should now (I claim) discard your (stated) “No Cookies” rule, and adopt—no! wrong! not “adopt”, because you are already using it!… and (consciously) accept the fully formulated, real rule. (Or, of course, reject the fully formulated rule, and thus also reject your judgment of the given exception’s legitimacy.)

Ethical injunctions

Suppose you have a rule of personal conduct: “no lying; always tell the truth”. Then you find yourself sheltering an innocent person from a tyrannical government, whose agents accost you and inquire about whether you’re doing any such thing. “Clearly,” you think, “this is a legitimate exception to that whole ‘no lying’ business; after all, an innocent person’s life is at stake, and anyhow, these guys are, like, super evil.” You lie, and thereby save a life.

You have now discovered (if you will but admit it to yourself) that your “no lying” rule wasn’t the real rule after all. If you’re now asked whether you have any specific reason to expect that you might encounter exceptions to this “no lying” rule, you will surely say “yes”. The real rule was something more like: “no lying, unless it’s necessary to save a life”. (There might also be some intuition about whether the person(s) you’re lying to are, in some sense, deserving of honesty; but that is more complex, and anyway, overdetermines your behavior—the innocent person’s life quite suffices.) You should (I claim) admit all this to yourself, discard the “no lying, ever” rule (which, if you decide to lie in this scenario, was never truly operative in the first place) and replace it with the fully formulated version. (Of course, as with the cookies, you also have the option of endorsing the simple rule—even after reflecting on the source of your intuition that this is a legitimate exception—and discarding instead your judgment of the exception’s legitimacy; and, of course, then telling the truth to the jackbooted thugs at your door.)

Once you have reflected thus, and either endorsed the fully formulated rule, or rejected it along with your judgment of the exception’s legitimacy, whatever stated rule you now follow is one to which you do not expect ever to find exceptions.

Criminal justice systems

We have (or so we are told in our middle-school civics class) a justice system where everyone has the right to a fair trial with a jury of their peers, and all are equal before the law. Yet even a cursory glance at a news source of your choice reveals that our system of criminal justice routinely finds all sorts of legitimate exceptions to this very just and simple rule.

Clearly, it would be altogether utopian to suggest that our government “should” discard the simple stated rule, and instead either explicitly adopt some rule along the lines of “everyone’s entitled to a fair trial with a jury of their peers, unless of course our courts are swamped with cases (which is most of the time) or it’s an election year and we’re trying to be ‘tough on crime’, or any number of various other things; and everyone’s equal before the law, except of course that if you have money you can hire a good lawyer and that makes people unequal, [… etc.; insert the usual litany of entirely legal, non-corruption-related exceptions to the ostensible fairness of the criminal justice system]”, or (the still more starry-eyed scenario) reject all the exceptions and actually administer the law as fairly as in the civics class fantasy. These things will not happen. But if you were elected Absolute Dictator of America, with the power to make any social or political changes with a wave of your hand, you would (I hope) consider either of these (preferably, of course, the latter) to be good candidates for early implementation.

The point, in any case, is that, once more, the real rules have no exceptions. The real social, political, and economic forces that determine who gets treated fairly by the criminal justice system and who does not, and what the outcomes are—these forces, these dynamics, do not have exceptions (at least, not ones we can ever expect or predict). They operate at all times. They are a constant source of legitimate (which is to say, endorsed, de facto, by the simple fact of being the status quo, and of not changing even if brought to light) exceptions to the stated rules (“all are equal before the law” and so forth) precisely because the stated rules are not the real rules, and the dynamics which determine actual outcomes are the real rules.

Comment by saidachmiz on Have epistemic conditions always been this bad? · 2020-01-30T04:55:12.861Z · score: 4 (3 votes) · LW · GW

First of all, it can’t possibly be bad epistemics for Congress to form the House Un-American Activities Committee[1]. Whatever you think was bad about this, it wasn’t epistemics, since epistemics is about having correct beliefs. Taking an ill-advised action isn’t bad epistemics (no matter how bad it may be in other ways).

In any case, I was responding to the part of your comment which I quoted in mine. There is, perhaps, some very literal and very generous interpretation of the words “nation-spanning network of terrorist organizations that target minorities/homosexuals/etc” under which the KKK (and associated groups) qualifies. But under any common-sense reading of the words, the claim just does not fit the facts.

Insofar as there is a continuum of how justified is the response to some threat, judged only on the basis of how serious the threat itself is, the justification of social justice by the alleged threat of the KKK is, indeed, more plausible than the justification of the Satanic Panic by the alleged threat of child-sacrificing devil worshippers, since, as you say, there were none at all of the latter. Yet compare both of these things to the justification of HUAC by the threat of Soviet espionage, and it’s clear that both of the former pale into utter insignificance; if one is “actually entirely unjustified” and the other is “almost entirely unjustified”, that is a distinction such as makes no difference.

  1. It’s worth noting that “blacklist[ing] people from working in television for supporting labor unions” was not HUAC’s function; the blacklist was a measure taken by the Hollywood film studios, and had no legal force whatsoever. ↩︎

Comment by saidachmiz on Have epistemic conditions always been this bad? · 2020-01-29T22:37:18.850Z · score: 19 (9 votes) · LW · GW

The KKK’s membership does not even approach five figures, even in the most generous estimates of a group little-disposed to underestimate such things (the Southern Poverty Law Center). (The Anti-Defamation League puts nationwide KKK membership at a mere 3,000.) The ADL’s list of active KKK organizations (which includes all those mentioned in the linked article) lists a half-dozen chapters, localized to a handful of Southern states.

The Politico article you link doesn’t mention this. It talks a lot about individual people and specific incidents, but doesn’t talk about the fact that none of it adds up to anything like a “nation-spanning network of terrorist organizations”. What’s more, none of these people have any power or any serious connection even to state, much less federal, authorities.

In comparison, the Soviets had spies in (among many other places) the State Department, the Treasury Department, the Department of Agriculture, the Manhattan Project—and at the highest levels of these organizations, to boot. These spies were backed by one of the world’s two great superpowers; vast sums of money and resources, and a literal army of trained personnel, supported them.

Nothing even remotely like that is true of the KKK, nor any other “white supremacist” organization. No comparison between these two cases is even slightly reasonable. The KKK is irrelevant.

The idea that almost anything that the social justice movement does is justified or even explained by concerns about the Ku Klux Klan is not defensible.

Comment by saidachmiz on Have epistemic conditions always been this bad? · 2020-01-28T23:38:22.586Z · score: 11 (7 votes) · LW · GW

By contrast there is a nation-spanning network of terrorist organizations that target minorities/homosexuals/etc, so the social justice movement has more real concerns to work with.

Wait… what?

Comment by saidachmiz on What do the baby eaters tell us about ethics? · 2020-01-26T17:44:10.475Z · score: 2 (1 votes) · LW · GW

I don’t see how this is responsive to anything I said. Could you elaborate?

Comment by saidachmiz on On hiding the source of knowledge · 2020-01-26T06:22:47.837Z · score: 4 (4 votes) · LW · GW

Jessica’s very unusual use of the word ‘intuition’ is responsible for the confusion here, I think.

99% confidence on the basis of intuition[common_usage] alone is indeed religion (or whatever).

99% confidence on the basis of intuition[Jessica’s_usage] seems unproblematic.

Comment by saidachmiz on On hiding the source of knowledge · 2020-01-26T06:20:43.277Z · score: 5 (5 votes) · LW · GW

Alright, fair enough, this is certainly… something (that is, you have answered my question of “what do you mean by ‘intuition’”, though I am not sure what I’d call this thing you’re describing or even that it’s a single, monolithic phenomenon)… but it’s not at all what people usually mean when they talk about ‘intuition’.

This revelation makes your post very confusing and hard to parse! (What’s more, it seems like you actually use ‘intuition’ in your post in several different ways, making it even more confusing.) I will have to reread the post carefully, but I can say that I no longer have any clear idea what you are saying in it (whereas before I did—though, clearly, that impression was mistaken).

Comment by saidachmiz on On hiding the source of knowledge · 2020-01-26T05:46:45.655Z · score: 11 (6 votes) · LW · GW

The intuition is, then, crystalized in the form of Benzene, which chemists already know intuitively. If they had only abstract, non-intuitive knowledge of the form of Benzene, they would have difficulty mapping such knowledge to e.g. spatial diagrams.

It seems to me that you are using the word “intuitively” in a very unusual way, here. I would certainly not describe chemists’ knowledge of benzene’s form as “intuitive”… can you say more about what you mean by this term?

Comment by saidachmiz on Whipped Cream vs Fancy Butter · 2020-01-21T03:10:32.648Z · score: 2 (1 votes) · LW · GW

Hotel Bar is a cheap brand.

Comment by saidachmiz on Whipped Cream vs Fancy Butter · 2020-01-21T02:44:14.618Z · score: 5 (3 votes) · LW · GW

Differences in composition amount to more than just a couple of percentage points of fat by weight. The texture and taste of Hotel Bar butter is quite different from that of Land O’Lakes, etc.

Comment by saidachmiz on Whipped Cream vs Fancy Butter · 2020-01-21T02:33:28.343Z · score: 2 (1 votes) · LW · GW

We’ll put our eggbeater in the dishwasher. It’s stainless steel and it seems fine.

The eggbeater in the image you use in your post appears to have wooden parts (the handle).

needing to get the hand mixer out and assemble it

Well, where you store your hand mixer is obviously up to you; if you use it often, keep it in a convenient place—just as you would with the eggbeater. As for assembly, it takes mere seconds.

Comment by saidachmiz on Whipped Cream vs Fancy Butter · 2020-01-21T02:14:59.920Z · score: 3 (2 votes) · LW · GW

A wire whisk is easier to wash. You can put it in your dishwasher, for example, or fully immerse it to soak, etc., without worrying about water damage to any components.

EDIT: But yes, the way I described is definitely slower than using an eggbeater. My actual preferred solution for making whipped cream is to use an electric hand mixer, which is faster than either manual option.