Blegg Mode
post by Zack_M_Davis · 2019-03-11T15:04:20.136Z · LW · GW · 68 comments
This is a link post for http://unremediatedgender.space/2018/Feb/blegg-mode/
Fanfiction for the blegg/rube parable [LW · GW] in "A Human's Guide to Words" [LW · GW], ~800 words. (Content notice: in addition to making a point about epistemology (which is why it may have been worth sharing here), this piece is also an obvious allegory about a potentially mindkilling [LW · GW] topic; read with caution, as always.)
68 comments
Comments sorted by top scores.
comment by Vanessa Kosoy (vanessa-kosoy) · 2019-03-13T19:02:44.101Z · LW(p) · GW(p)
I don't understand what point you are trying to make.
Presumably, each object has observable properties $x$ and unobservable properties $y$. The utility of putting an object into bin A is $u_A(x, y)$ and the utility of putting it into bin B is $u_B(x, y)$. Therefore, your worker should put an object into bin A if and only if $\mathbb{E}[u_A(x, y) \mid x] \geq \mathbb{E}[u_B(x, y) \mid x]$.
That's it. Any "categories" you introduce here are at best helpful heuristics, with no deep philosophical significance.
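As a minimal sketch of that decision rule (in Python, with toy utilities and a toy posterior standing in for whatever the factory actually cares about; every specific name and number below is an illustrative assumption, not something from the parable):

```python
def expected_utility(utility_fn, x, posterior_over_y):
    # E[u(x, y) | x], with the posterior over the unobservable y given as
    # a list of (y, probability) pairs.
    return sum(p * utility_fn(x, y) for y, p in posterior_over_y)

def choose_bin(x, posterior_over_y, u_A, u_B):
    # Put the object in bin A if and only if E[u_A | x] >= E[u_B | x].
    if expected_utility(u_A, x, posterior_over_y) >= expected_utility(u_B, x, posterior_over_y):
        return "A"
    return "B"

# Toy example: x = (color, shape) is observable; y = metal content is not.
u_A = lambda x, y: 10 if y == "vanadium" else 0    # bin A pays off for vanadium
u_B = lambda x, y: 10 if y == "palladium" else 0   # bin B pays off for palladium
posterior = [("vanadium", 0.98), ("palladium", 0.02)]  # P(y | x) for, say, a blue egg
print(choose_bin(("blue", "egg"), posterior, u_A, u_B))  # -> A
```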
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-03-14T01:50:43.077Z · LW(p) · GW(p)
Any "categories" you introduce here are at best helpful heuristics, with no deep philosophical significance.
I mean, yes, but I was imagining that there would be some deep philosophy about how computationally bounded agents should construct optimally helpful heuristics.
Replies from: vanessa-kosoy
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-03-14T20:52:53.921Z · LW(p) · GW(p)
Alright, but then you need some (at least informal) model of why computationally bounded agents need categories. Instead, your argument seems to rely purely on the intuition of your fictional character ("you notice that... they seem to occupy a third category in your ontology of sortable objects").
Also, you seem to assume that categories are non-overlapping. You write "you don't really put them in the same mental category as bleggs". What does it even mean, to put two objects in the same or not the same category? Consider a horse and a cow. Are they in the same mental category? Both are in the categories "living organisms", "animals", "mammals", "domesticated mammals". But, they are different species. So, sometimes you put them in the same category, sometimes you put them in different categories. Are "raven" and "F16 aircraft" in the same category? They are if your categories are "flying objects" vs. "non-flying objects", but they aren't if your categories are "animate" vs. "non-animate".
Moreover, you seem to assume that categories are crisp rather than fuzzy, which is almost never the case for categories that people actually use. How many coins does it take to make a "pile" of coins? Is there an exact number? Is there an exact age when a person gets to be called "old"? If you take a table made out of a block of wood, and start to gradually deform its shape until it becomes perfectly spherical, is there an exact point when it is no longer called a "table"? So, "rubes" and "bleggs" can be fuzzy categories, and the anomalous objects are in the gray area that defies categorization. There's nothing wrong with that.
If we take this rube/blegg factory thought experiment seriously, then what we need to imagine is the algorithm (instructions) that the worker in the factory executes. Then you can say that the relevant "categories" (in the context of the factory, and in that context only) are the vertices in the flow graph of the algorithm. For example, the algorithm might be a table that specifies how to score each object (blue +5 points, egg-shaped +10 points, furry +1 point...) and a threshold which says what the score should be to put it in a given bin. Then there are essentially only two categories. Another algorithm might be "if the object passes test X, put it in the rube bin; if the object passes test Y, put it in the blegg bin; if the object passes neither test, put it in the palladium scanner and sort according to that". Then you have approximately seven categories: "regular rube" (passed test X), "regular blegg" (passed test Y), "irregular object" (failed both tests), "irregular rube" (failed both tests and found to contain enough palladium), "irregular blegg" (failed both tests and found not to contain enough palladium), "rube" (anything put in the rube bin) and "blegg" (anything put in the blegg bin). But in any case, the categorization would depend on the particular trade-offs that the designers of the production line made (things like how expensive it is to run the palladium scanner), rather than on immutable Platonic truths about the nature of the objects themselves.
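A toy Python rendering of the first algorithm described above, the scoring table plus threshold (the +5/+10/+1 weights are the ones given in the comment; the negative weights and the threshold of zero are filler assumptions). The second algorithm would just be an if/elif chain ending in a call to the scanner.

```python
SCORES = {"blue": 5, "egg-shaped": 10, "furry": 1,
          "red": -5, "cube-shaped": -10, "smooth": -1}
THRESHOLD = 0  # total score >= threshold -> blegg bin, otherwise rube bin

def sort_object(features):
    score = sum(SCORES.get(f, 0) for f in features)
    return "blegg bin" if score >= THRESHOLD else "rube bin"

print(sort_object({"blue", "egg-shaped", "furry"}))   # -> blegg bin
print(sort_object({"red", "cube-shaped", "smooth"}))  # -> rube bin
```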
Then again, I'm not entirely sure whether we are really disagreeing or just formulating the same thing in different ways?
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-03-15T07:42:47.336Z · LW(p) · GW(p)
your argument seems to rely purely on the intuition of your fictional character
Yes, the dependence on intuition is definitely a weakness of this particular post. (I wish I knew as much math as Jessica Taylor [LW · GW]! If I want to become stronger [LW · GW], I'll have to figure out how to fit more studying into my schedule!)
you seem to assume that categories are non-overlapping.
you seem to assume that categories are crisp rather than fuzzy
I don't believe either of those things. If you have any specific wording suggestions on how I can write more clearly so as to better communicate to my readers that I don't believe either of those things, I'm listening.
If you take a table made out of a block of wood, and start to gradually deform its shape until it becomes perfectly spherical, is there an exact point when it is no longer called a "table"?
No, there is no such exact point; like many longtime Less Wrong readers [LW · GW], I, too, am familiar with the Sorites paradox.
But in any case, the categorization would depend on the particular trade-offs that the designers of the production line made (depending on things like, how expensive is it to run the palladium scanner)
Right. Another example of one of the things the particular algorithm-design trade-offs will depend on is the distribution of objects.
We could imagine a slightly altered parable in which the frequency distribution of objects is much more evenly spread out in color–shape–metal-content space: while cubeness has a reasonably strong correlation with redness and palladium yield, and eggness with blueness and vanadium yield, you still have a substantial fraction of non-modal objects: bluish-purple rounded cubes, reddish-purple squarish eggs, &c.
In that scenario, a natural-language summary of the optimal decision algorithm wouldn't talk about discrete categories: you'd probably want some kind of scoring algorithm with thresholds for various tests and decisions as you describe, and no matter where you set the threshold for each decision, you'd still see a lot of objects just on either side of the boundary, with no good "joint" to anchor the placement of a category boundary.
In contrast, my reading of Yudkowsky's original parable posits a much sparser, more tightly-clustered distribution of objects in configuration space. The objects do vary somewhat (some bleggs are purple, some rubes contain vanadium), but there's a very clear cluster-structure [LW · GW]: virtually all objects are close to the center of—and could be said to "belong to"—either the "rube" cluster or the "blegg" cluster, with a lot of empty space in between.
In this scenario, I think it does make sense for a natural-language summary of the optimal decision algorithm to talk about two distinct "categories" where the density in the configuration space [LW · GW] is concentrated. Platonic essences are just the limiting case as the overlap between clusters goes to zero.
In my fanfiction, I imagine that some unknown entity has taken objects that were originally in the "rube" cluster, and modified them so that they appear, at first glance but not on closer inspection, to be members of the "blegg" cluster. At first, the protagonist wishes to respect the apparent intent of the unknown entity by considering the modified objects to be bleggs. But in the process of her sorting work, the protagonist finds herself wanting to mentally distinguish adapted bleggs from regular bleggs, because she can't make the same job-relevant probabilistic inferences with the new "bleggs (either regular or adapted)" concept as she could with the old "bleggs (only standard bleggs)" concept.
To see why, forget about the category labels for a moment and just consider the clusters in the six-dimensional color–shape–texture–firmness–luminescence–metal-content configuration space.
Before the unknown entity's intervention, we had two distinct clusters: one centered at {blue, egg, furry, flexible, luminescent, vanadium}, and another centered at {red, cube, smooth, hard, non-luminescent, palladium}.
After the unknown entity's intervention, we have three distinct clusters: the two previously-existing clusters, and a new cluster centered at {blue, egg, furry, hard, non-luminescent, palladium}. This is a different situation! Workers on the sorting line might want different language in order to describe this new reality!
Now, if we were to project into the three-dimensional color–shape–texture subspace, then we would have two clusters again: with just these attributes, we can't distinguish between bleggs and adapted bleggs. But since workers on the sorting line can observe hardness, and care about metal content, they probably want to use the three-cluster representation, even if they suspect the unknown entity might thereby feel disrespected.
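A small numpy sketch of the geometry described above (the 0/1 coordinates for the cluster centers and the noise level are assumptions chosen purely for illustration): in all six dimensions the adapted bleggs sit far from the ordinary bleggs, but projected onto color, shape, and texture alone, the two clouds coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: color, shape, texture, firmness, luminescence, metal content.
centers = {
    "ordinary blegg": np.array([1, 1, 1, 1, 1, 1]),  # blue, egg, furry, flexible, luminescent, vanadium
    "rube":           np.array([0, 0, 0, 0, 0, 0]),  # red, cube, smooth, hard, dark, palladium
    "adapted blegg":  np.array([1, 1, 1, 0, 0, 0]),  # blegg-like surface, rube-like insides
}

def sample(center, n=200, noise=0.05):
    return center + noise * rng.standard_normal((n, 6))

clouds = {name: sample(c) for name, c in centers.items()}

def centroid_distance(a, b, dims):
    return np.linalg.norm(a[:, dims].mean(axis=0) - b[:, dims].mean(axis=0))

all_six = [0, 1, 2, 3, 4, 5]
surface = [0, 1, 2]  # color, shape, texture only

print(centroid_distance(clouds["ordinary blegg"], clouds["adapted blegg"], all_six))  # ~1.73: a third cluster
print(centroid_distance(clouds["ordinary blegg"], clouds["adapted blegg"], surface))  # ~0.0: back to two
```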
Replies from: vanessa-kosoy
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2019-03-15T21:16:07.055Z · LW(p) · GW(p)
Hmm. Why would the entity feel disrespected by how many clusters the workers use? I actually am aware that this is an allegory for something else. Moreover, I think that I disagree with you about the something else (although I am not sure, since I am not entirely sure what your position on the something else is). Which is to say, I think that this allegory misses crucial aspects of the original situation and loses the crux of the debate.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-03-15T23:06:08.839Z · LW(p) · GW(p)
I think that this allegory misses crucial aspects of the original situation
That makes sense! As gjm noted [LW(p) · GW(p)], sometimes unscrupulous authors sneakily construct an allegory with the intent of leading the reader to a particular conclusion within the context of the allegory, in the hope that the reader will map that conclusion back onto the real-world situation in a particular way, without doing the work of showing that the allegory and the real-world situation are actually analogous in the relevant aspects.
I don't want to be guilty of that! This is a story about bleggs and rubes that I happened to come up with in the context of trying to think about something else (and I don't want to be deceptive about that historical fact), but I definitely agree that people shouldn't map the story onto some other situation unless they actually have a good argument for why that mapping makes sense. If we wanted to discuss the something else rather than the bleggs and rubes, we should do that on someone else's website. Not here.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2019-03-16T09:02:57.400Z · LW(p) · GW(p)
FWIW, I predicted it would be an allegory of transsexuality even before I read it or any of the comments.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-03-16T13:31:33.014Z · LW(p) · GW(p)
I mean, yes, there's the allusion in the title! (The post wasn't originally written for being shared on Less Wrong, it just seemed sufficiently sanitized to be shareable-here-without-running-too-afoul-of-anti-politics-norms after the fact.)
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2019-03-16T13:49:25.939Z · LW(p) · GW(p)
I read the title as just an allusion to Eliezer’s OP on bleggs and rubes. (Otoh, without having read the article just linked, I’m familiar with “egg” as transsexual jargon for someone exploring TS feelings, who (the ideology has it) will inevitably in the end “hatch” into a full-on TS.)
comment by gjm · 2019-03-12T01:53:19.243Z · LW(p) · GW(p)
The description here seems a little ... disingenuous.
[EDITED to add:] I see that this has been downvoted at least once. I don't object at all to being downvoted but find it hard to tell from just a downvote what it is that has displeased someone; if anyone would like to indicate why they dislike this comment, I'm all ears. (Objection to "disingenuous" as too harsh? Preferring the "deniable allegory", as Zack puts it, to remain deniable for longer? Disliking what they guess to be my position on the specific issue it's an allegory for? ...)
Replies from: Benquo, Zack_M_Davis
↑ comment by Benquo · 2019-03-12T20:05:30.297Z · LW(p) · GW(p)
I downvoted because the description is disingenuous only if it's common knowledge that the Rationalist project is so doomed that an attempt to correct a politically motivated epistemic error via an otherwise entirely depoliticized fictional example in a specifically Rationalist space is construed as a political act.
Fine to argue that this is the case (thus contributing to actual common knowledge), but insinuating it seems like a sketchy way of making it so.
Replies from: gjm
↑ comment by gjm · 2019-03-13T00:02:02.841Z · LW(p) · GW(p)
Thanks for the explanation!
It's rather condensed, so it's very possible that my inability to see how it's a fair criticism of what I wrote is the result of my misunderstanding it. May I attempt to paraphrase your criticism at greater length and explain why I'm baffled? I regret that my attempt at doing this has turned out awfully long; at least it should be explicit enough that it can't reasonably be accused of "insinuating" anything...
So, I think your argument goes as follows. (Your argument, as I (possibly mis-) understand it, is in roman type with numbers at the start of each point. Italics indicate what I can't make sense of.)
1. The purpose of the linked article is not best understood as political, but as improving epistemic hygiene: its purpose is to correct something that's definitely an error, an error that merely happens to arise as a result of political biases.
It isn't clear to me what this error is meant to be. If it's something like "thinking that there must be a definite objectively-correct division of all things into bleggs and rubes" then I agree that it's an error but it's an error already thoroughly covered by EY's and SA's posts linked to in the article itself, and in any case it doesn't seem to me that the article is mostly concerned with making that point; rather, it presupposes it. The other candidates I can think of seem to me not to be clearly errors at all.
In any case, it seems to me that the main point of the linked article is not to correct some epistemic error, but to propose a particular position on the political issue it's alluding to, and that most of the details of its allegory are chosen specifically to support that aim.
2. The author has taken some trouble to address this error in terms that are "entirely depoliticized" as far as it's possible for it to be given that the error in question is politically motivated.
I think what I think of this depends on what the error in question is meant to be. E.g., if it's the thing I mentioned above then it seems clear that the article could easily have been much less political while still making the general point as clearly. In any case, calling this article "depoliticized" seems to me like calling Orwell's "Animal Farm" depoliticized because it never so much as mentions the USSR. Constructing a hypothetical situation designed to match your view of a politically contentious question and drawing readers' attention to that matchup is not "depoliticized" in any useful sense.
3. My description of Zack's description as "disingenuous" amounts to an accusation that Zack's posting the article here is a "political act" (which I take to mean: an attempt to manipulate readers' political opinions, or perhaps to turn LW into a venue for political flamewars, or something of the kind).
I do in fact think that Zack's purpose in posting the article here is probably at least in part to promote the political position for which the article is arguing, and that if that isn't so -- if Zack's intention was simply to draw our attention to a well-executed bit of epistemology -- then it is likely that Zack finds it well-executed partly because of finding it politically congenial. In that sense, I do think it's probably a "political act". My reasons for thinking this are (1) that my own assessment of the merits of the article purely as a piece of philosophy is not positive, and (2) that the political allegory seems to me so obviously the main purpose of the article that I have trouble seeing why anyone would recommend it for an entirely separate purpose. More on this below. I could of course be wrong about Zack's opinions and/or about the merits of the article as an epistemological exercise.
It seems relevant here that Zack pretty much agreed with my description: see his comments using terms like "deniable allegory", "get away with it", etc.
4. That could only be a reasonable concern if the people here were so bad at thinking clearly on difficult topics as to make the project of improving our thinking a doomed one.
I have no idea why anything like this should be so.
5. And it could only justify calling Zack's description "disingenuous" if that weren't only true but common knowledge -- because otherwise a better explanation would be that Zack just doesn't share my opinion about how incapable readers here are of clear thought on difficult topics.
That might be more or less right (though it wouldn't require quite so much as actual common knowledge) if point 4 were right, but as mentioned above I am entirely baffled by point 4.
Having laid bare my confusion, a few words about what I take the actual purpose of the article to be and why, and about its merits or demerits as a piece of philosophy. (By way of explaining some of my comments above.)
I think the (obvious, or so it seems to me) purpose of the article is to argue for the following position: "Trans people [adapted bleggs] should be regarded as belonging not to their 'adopted' gender [you don't really put them in the same mental category as bleggs], but to a category separate from either of the usual genders [they seem to occupy a third category in your ontology of sortable objects]; if you have to put people into two categories, trans people should almost always be grouped with their 'originally assigned' gender [so that you can put the majority of palladium-containing ones in the palladium bin (formerly known as the rube bin) ... 90% of the adapted bleggs—like 98% of rubes, and like only 2% of non-adapted bleggs—contain fragments of palladium]." And also, perhaps, to suggest that no one really truly thinks of trans people as quite belonging to their 'adopted' gender [And at a glance, they look like bleggs—I mean, like the more-typical bleggs ... you don't really put them in the same mental category as bleggs].
(Note: the article deals metaphorically only with one sort of transness -- rube-to-blegg. Presumably the author would actually want at least four categories: blegg, rube, rube-to-blegg, blegg-to-rube. Perhaps others too. I'm going to ignore that issue because this is plenty long enough already.)
I don't think this can reasonably be regarded as correcting an epistemic error. There's certainly an epistemic error in the vicinity, as I mentioned above: the idea that we have to divide these hypothetical objects into exactly two categories, with there being a clear fact of the matter as to which category each object falls into -- and the corresponding position on gender is equally erroneous. But that is dealt with in passing in the first few paragraphs, and most of the article is arguing not merely for not making that error but for a specific other position, the one I described in the paragraph preceding this one. And that position is not so clearly correct that advocating it is simply a matter of correcting an error.
(Is it not? No, it is not. Here are three other positions that contradict it without, I think, being flat-out wrong. 1. "We shouldn't put trans people in a third distinct category; rather, we should regard the usual two categories as fuzzy-edged, try to see them less categorically, and avoid manufacturing new categories unless there's a really serious need to; if someone doesn't fit perfectly in either of the two usual categories, we should resist the temptation to look for a new category to put them in." 2. "Having noticed that our categories are fuzzy and somewhat arbitrary, we would do best to stick with the usual two and put trans people in the category of their 'adopted' gender. We will sometimes need to treat them specially, just as we would in any case for e.g. highly gender-atypical non-trans people, but that doesn't call for a different category." 3. "Having noticed [etc.], we would do best to stick with the usual two and put trans people in the category of their 'originally assigned' gender. We will sometimes need [etc.].")
I've indicated that if I consider the article as an epistemological exercise rather than a piece of political propaganda, I find it unimpressive. I should say a bit about why.
I think there are two bits of actual epistemology here. The first is the observation that we don't have to put all of our bleggs/rubes/whatever into two boxes and assume that the categorization is Objectively Correct. Nothing wrong with that, but it's also not in any sense a contribution of this article, which already links to earlier pieces by Eliezer and Scott that deal with that point well.
The second is the specific heuristic the author proposes: make a new category for things that have "cheap-to-detect features that correlate with more-expensive-to-detect features that are decision-relevant with respect to the agent's goals". So, is this a good heuristic?
The first thing I notice about it is that it isn't a great heuristic even when applied to the specific example that motivates the whole piece. As it says near the start: 'you have no way of knowing how many successfully "passing" adapted bleggs you've missed'. Trans-ness is not always "cheap to detect". I guess it's cheaper to detect than, say, sex chromosomes. OK -- and how often are another person's sex chromosomes "decision-relevant with respect to the agent's goals"? Pretty much only if the agent is (1) a doctor treating them or (2) a prospective sexual partner who is highly interested in, to put it bluntly, their breeding potential. Those are both fairly uncommon -- for most of us, very few of the people we interact with are either likely patients or likely breeding partners.
What about other cases where new categories have turned out to be wanted? Trying to think of some examples, it seems to me that what matters is simply the presence of features that are "decision-relevant with respect to the agent's goals". Sometimes they correlate with other cheaper-to-identify features, sometimes not. There are isotopes: we had the chemical elements, and then it turned out that actually we sometimes need to distinguish between U-235 and U-238. In this case it happens that you can distinguish them by mass, which I guess is easier than direct examination of the nuclei, but it seems to me that we'd care about the difference even if we couldn't do that, and relatively cheap distinguishability is not an important part of why we have separate categories for them. Indeed, when isotopes were first discovered it was by observing nuclear-decay chains. There are enantiomers: to take a concrete example, in the wake of the thalidomide disaster it suddenly became clear that it was worth distinguishing R-thalidomide from S-thalidomide. Except that, so far as I can tell, it isn't actually feasible to separate them, and when thalidomide is used medically it's still the racemic form and they just tell people who might get pregnant not to take it. So there doesn't seem to be a cheap-to-identify feature here in any useful sense. There are different types of supernova for which I don't see any cheap-feature/relevant-feature dichotomy. There are intersex people whose situation has, at least logically speaking, a thing or two in common with trans people; in many cases the way you identify them is by checking their sex chromosomes, which is exactly the "expensive" feature the author identifies in the case of trans people.
I'm really not seeing that this heuristic is a particularly good one. It has the look, to me, of a principle that's constructed in order to reach a particular conclusion. Even though, as I said above, I am not convinced that it applies all that well even to the specific example I think it was constructed for. I also don't think it applies particularly well in the hypothetical situation the author made up. Remember those 2% of otherwise ordinary bleggs that contain palladium? Personally, I'd want a category for those, if I found myself also needing one for "adapted bleggs" because of the palladium they contain. It might be impracticably expensive, for now, to scan all bleggs in case they belong to the 2%, but I'd be looking out for ways to identify palladium-containing bleggs, and all palladium-containing bleggs might well turn out in the end to be a "better" category than "adapted bleggs", especially as only 90% of the latter contain palladium.
So, as I say, not impressive epistemology, and it looks to me as if the principle was constructed for the sake of this particular application. Which is one more reason why I think that that application is the sole real point of the article.
Replies from: Benquo, Benquo, Zack_M_Davis, Zack_M_Davis
↑ comment by Benquo · 2019-03-13T15:01:38.362Z · LW(p) · GW(p)
Thanks for trying. I have limited time and got a sense for where we seem to have split from each other about halfway through your comment so I'll mainly respond to that. You brought up a bunch of stuff very specific to gender issues that I don't think is relevant in the second half.
There's an underlying situation in which Zack made some arguments elsewhere about gender stuff, and prominent people in the Rationalist community responded with an argument along the lines of "since categories are in the map, not in the territory, there's no point in saying one categorization is more natural than another, we might as well just pick ones that don't hurt people's feelings."
These people are claiming a position on epistemology that Zack thinks is substantially mistaken. Zack is faced with a choice. Either they're giving a politically motivated anti-epistemology in order to shut down the conversation and not because they believe it - or they're making a mistake.
If we take the argument literally, it's worth correcting regardless of one's specific opinions on gender identity.
If we all know that such arguments aren't meant to be taken literally, but are instead meant to push one side of a particular political debate in that context, then arguing against it is actually just the political act of pushing back.
But part of how bad faith arguments work is that they fool some people into thinking they're good-faith arguments. Even if YOU know that people don't mean what they say in this case, they wouldn't say it unless SOMEONE was likely to be honestly mistaken.
"You're doing too much politics here" is not a helpful critique. It doesn't give Zack enough information to get clued in if he's not already, and leaves the key controversial premise unstated. If your actual position is, "come on, Zack, everyone on this site knows that people aren't making this mistake honestly, posts like this one by Scott are mindkilled politics and engaging with them lowers the quality of discourse here," then you need to actually say that.
Personally, I DON'T see people behaving as though it were common knowledge that people claiming to be making this mistake are actually just lying. And if we write off people like Scott we might as well just close down the whole project of having a big Rationalist community on the internet.
It's offensive to me that there's even a question about this.
Replies from: gjm
↑ comment by gjm · 2019-03-13T16:02:20.196Z · LW(p) · GW(p)
Aha, this clarifies some things helpfully. It is now much clearer to me than it was before what epistemological error you take Zack to be trying to correct here.
I still think it's clear that Zack's main purpose in writing the article was to promote a particular object-level position on the political question. But I agree that "even though categories are map rather than territory, some maps match reality much better than others, and to deny that is an error" (call this proposition P, for future use) is a reasonable point to make about epistemology in the abstract, and that given the context of Zack's article it's reasonable to take that to be a key thing it's trying to say about epistemology.
But it seems to me -- though perhaps I'm just being dim -- that the only possible way to appreciate that P was Zack's epistemological point is to be aware not only of the political (not-very-sub) subtext of the article (which, you'll recall, is the thing I originally said it was wrong not to mention) but also of the context where people were addressing that specific political issue in what Zack considers a too-subjective way. (For the avoidance of doubt, I'm not saying that that requires some sort of special esoteric knowledge unavailable to the rest of us. Merely having just reread Scott's TCWMFM would have sufficed. But it happened that I was familiar enough with it not to feel that I needed to revisit it, and not familiar enough with it to recognize every specific reference to it in Zack's article. I doubt I'm alone in that.)
Again, perhaps I'm just being dim. But I know that some people didn't even see the political subtext, and I know that I didn't see P as being Zack's main epistemological point before I read what you just wrote. (I'm still not sure it is, for what it's worth.) So it doesn't seem open to much doubt that just putting the article here without further explanation wasn't sufficient.
There's a specific way in which I could be being dim that might make that wrong: perhaps I was just distracted by the politics, and perhaps if I'd been able to approach the article as if it were purely talking in the abstract about epistemology I'd have taken it to be saying P. But, again, if so then I offer myself as evidence that it needed some clarification for the benefit of those liable to be distracted.
As to the rest:
It looks to me as if you are ascribing meanings and purposes to me that are not mine at all. E.g., "If we all know that such arguments aren't meant to be taken literally, but are instead meant to push one side of a particular political debate in that context" -- I didn't think I was saying, and I don't think I believe, and I don't think anything I said either implies or presupposes, anything like that. The impression I have is that this is one of those situations where I say X, you believe Y, from X&Y you infer Z, and you get cross because I'm saying Z and Z is an awful thing to say -- when what's actually happening is that we disagree about Y. Unfortunately, I can't tell what Y is in this situation :-).
So I don't know how to react to your suggestion that I should have said explicitly rather than just assuming that posts like Scott's TCWMFM "are mindkilled politics and engaging with them lowers the quality of discourse here"; presumably either (1) you think I actually think that or (2) you think that what I've said implies that so it's a useful reductio, but I still don't understand how you get there from what I actually wrote.
To be explicit about this:
I do not think that Scott's TCWMFM is "mindkilled politics".
I do not think that engaging with articles like Scott's TCWMFM lowers the quality of discourse.
I do not think that it's impossible to hold Scott's position honestly.
I do not think that it's impossible to hold Zack's position honestly.
I don't think that Zack's article is "mindkilled politics", but I do think it's much less good than Scott's.
I don't think Scott is making the epistemological mistake you say Zack is saying he's making, that of not understanding that one way of drawing category boundaries can be better than another. I think he's aware of that, but thinks (as, for what it's worth, I do, but I think Zack doesn't) that there are a number of comparably well matched with reality ways to draw them in this case.
I think that responding to Scott's article as if he were simply saying "meh, whatever, draw category boundaries literally any way you like, the only thing that matters is which way is nicest" is not reasonable, and I think that casting it as making the mistake you say Zack is saying Scott was making requires some such uncharitable interpretation. (This may be one reason why I didn't take P to be the main epistemological claim of Zack's article.)
If you're still offended by what I wrote, then at least one of us is misunderstanding the other and I hope that turns out to be fixable.
Replies from: Dagon, Benquo, Benquo
↑ comment by Dagon · 2019-03-13T21:37:11.884Z · LW(p) · GW(p)
But I agree that "even though categories are map rather than territory, some maps match reality much better than others, and to deny that is an error"
Wait. Suitability for purpose has to come in here. There is no single ordering of how closely a map reflects reality. Maps compress different parts of reality in different ways, to enable different predictions/communications about various parts of reality. It's been literally decades since I've enjoyed flamewars about which projection of Earth is "best" for literal maps, but the result is the same: it depends on what the map will be used for, and you're probably best off using different maps for different purposes, even if those maps are of the same place.
I don't know the actual debate going on, and pretty much think that in unspecific conversation where details don't matter, one should prefer kindness and surface presentation. Where the details matter, be precise and factual about the details - don't rely on categorizations that have notable exceptions for the dimensions you're talking about.
Replies from: gjm, jessica.liu.taylor
↑ comment by gjm · 2019-03-13T22:36:37.478Z · LW(p) · GW(p)
For the avoidance of doubt, I strongly agree that what counts as "matching reality much better" depends on what you are going to be using your map for; that's a key reason why I am not very convinced by Zack's original argument if it's understood as a rebuttal to (say) Scott's TCWMFM either in general or specifically as it pertains to the political question at issue.
↑ comment by jessicata (jessica.liu.taylor) · 2019-03-13T22:21:23.973Z · LW(p) · GW(p)
in unspecific conversation where details don’t matter, one should prefer kindness and surface presentation.
Why? Doesn't this lead to summaries being inaccurate and people having bad world models (ones that would assign lower probability to the actual details, compared to ones based on accurate summaries)?
Replies from: Dagon
↑ comment by Dagon · 2019-03-14T00:03:09.221Z · LW(p) · GW(p)
Doesn't this lead to summaries being inaccurate and people having bad world models (ones that would assign lower probability to the actual details, compared to ones based on accurate summaries)?
No, it doesn't lead there. It starts there. The vast majority of common beliefs will remain inaccurate on many dimensions, and all you can do is to figure out which (if any) details you can benefit the world by slightly improving, in your limited time. Details about hidden attributes that will affect almost nothing are details that don't need correcting - talk about more interesting/useful things.
Replies from: jessica.liu.taylor
↑ comment by jessicata (jessica.liu.taylor) · 2019-03-14T00:49:40.901Z · LW(p) · GW(p)
No one has time to look into the details of everything. If someone isn't going to look into the details of something, they benefit from the summaries being accurate, in the sense that they reflect how an honest party would summarize the details if they knew them. (Also, how would you know which things you should look into further if the low-resolution summaries are lies?)
This seems pretty basic and it seems like you were disagreeing with this by saying the description should be based on kindness and surface presentation. Obviously some hidden attributes matter more than others (and matter more or less context-dependently), my assertion here is that summaries should be based primarily on how they reflect the way the thing is (in all its details) rather than on kindness and surface presentation.
Replies from: Dagon
↑ comment by Dagon · 2019-03-14T05:43:09.519Z · LW(p) · GW(p)
In many contexts, the primary benefit of the summary is brevity and simplicity, more even than information. If you have more time/bandwidth/attention, then certainly including more information is better, and even then you should prioritize information by importance.
In any case, I appreciate the reminder that this is the wrong forum for politically-charged discussions. I'm bowing out - I'll read any further comments, but won't respond.
Replies from: jessica.liu.taylor
↑ comment by jessicata (jessica.liu.taylor) · 2019-03-14T08:16:43.210Z · LW(p) · GW(p)
To be clear, brevity and simplicity are not the same as kindness and surface presentation, and confusing these two seems like a mistake 8 year olds can almost always avoid making. (No pressure to respond; in any case I meant to talk about the abstract issue of accurate summaries which seems not to be politically charged except in the sense that epistemology itself is a political issue, which it is)
↑ comment by Benquo · 2019-03-13T16:52:06.333Z · LW(p) · GW(p)
But it seems to me -- though perhaps I'm just being dim -- that the only possible way to appreciate that P was Zack's epistemological point is to be aware not only of the political (not-very-sub) subtext of the article (which, you'll recall, is the thing I originally said it was wrong not to mention) but also of the context where people were addressing that specific political issue in what Zack considers a too-subjective way.
That's not actually an important part of the content of Zack's article. It is only relevant in the context of your claim that Zack was responding to a very different specific thing not directly referenced in his article. I am not saying that the fact that you were wrong means that the true cause should have been obvious. I am saying that the fact that you were wrong should make you doubt that you were obviously right.
If people's models have a specific glitch, laying out what the undamaged version ought to look like is legitimate, and shouldn't have to exist solely in reference to the specific instance of the glitch. Truth doesn't have to make reference to error to be true - it just has to match reality.
Replies from: gjm
↑ comment by gjm · 2019-03-13T22:44:21.954Z · LW(p) · GW(p)
Wait, if you reckon the proposition I called P is "not actually an important part of the content of Zack's article" then what did you have in mind as the "politically motivated epistemic error" that Zack's article was about?
(Or, if P was that error, how am I supposed to understand your original protest [LW(p) · GW(p)] which so far as I can tell only makes any sense if you consider that correcting the epistemic error was the whole point, or at least the main point, of Zack's article?)
Firmly agree with your last paragraph, though.
↑ comment by Benquo · 2019-03-13T16:48:37.407Z · LW(p) · GW(p)
I still think it's clear that Zack's main purpose in writing the article was to promote a particular object-level position on the political question.
Why would you think that? Why would this post be a remotely effective way to do that? Why is that even a plausible thing Zack's trying to do here? Can you point to an example of someone who was actually persuaded?
I feel like I've done way too much work explaining my position here and you haven't really explained the reasoning behind yours.
Replies from: gjm
↑ comment by gjm · 2019-03-13T22:59:50.163Z · LW(p) · GW(p)
For what it's worth, I feel the same way as you but with the obvious change of sign: it feels to me like you keep accusing me of saying somewhat-outrageous things that I'm not intending to say and don't believe, and when I ask why you'd think I mean that you just ignore it, and it feels to me like I've put much more trouble into understanding your position and clarifying mine than you have into understanding mine and clarifying yours.
Presumably the truth lies somewhere in between.
I don't think it is reasonable to respond to "I think Zack was trying to do X" with "That's ridiculous, because evidently it didn't work", for two reasons. Firstly, the great majority of attempts to promote a particular position on a controversial topic don't change anyone's mind, even in a venue like LW where we try to change our minds more readily when circumstances call for it. Secondly, if you propose that instead he was trying to put forward a particular generally-applicable epistemological position (though I still don't know what position you have in mind, despite asking several times, since the only particular one you've mentioned you then said wasn't an important part of the content of Zack's article) then I in turn can ask whether you can point to an example of someone who was persuaded of that by the article.
It's somewhat reasonable to respond to "I think Zack was trying to do X" with "But what he wrote is obviously not an effective way of doing X", but I don't see why it's any more obviously ineffective as a tool of political persuasion, or as an expression of a political position, than it is as a work of epistemological clarification, and in particular it doesn't even look to me more than averagely ineffective in such a role.
For the avoidance of doubt, I don't in the least deny that I might be wrong about what Zack was trying to do. (Sometimes a person thinks something's clear that turns out to be false. I am not immune to this.) Zack, if you happen to be reading and haven't been so annoyed by my comments that you don't want to interact with me ever again, anything you might want to say on this score would be welcome. If I have badly misunderstood what you wrote, please accept my apologies.
↑ comment by Benquo · 2019-03-13T15:16:46.709Z · LW(p) · GW(p)
Gonna try a point-by-point version in case that's clearer.
It isn’t clear to me what this error is meant to be. If it’s something like “thinking that there must be a definite objectively-correct division of all things into bleggs and rubes” then I agree that it’s an error but it’s an error already thoroughly covered by EY’s and SA’s posts linked to in the article itself, and in any case it doesn’t seem to me that the article is mostly concerned with making that point; rather, it presupposes it.
I know from conversations elsewhere that Zack is responding to the opposite error - the claim that because the usual rule for separating Bleggs from Rubes is pragmatically motivated, it has no implications for edge cases. If you're making wrong guesses about what political position Zack is taking, you should really reconsider your claim that it's obvious what his political position is. This needs to be generalized, because it's obnoxious to have to bring in completely extraneous info about motives in order to figure out whether a post like this is political. Bothering to explain this at all feels a bit like giving in to extortion, and the fact that I expect this explanation to be necessary is a further update against continued substantive engagement on Lesswrong.
In any case, it seems to me that the main point of the linked article is not to correct some epistemic error, but to propose a particular position on the political issue it’s alluding to, and that most of the details of its allegory are chosen specifically to support that aim. [...] Constructing a hypothetical situation designed to match your view of a politically contentious question and drawing readers’ attention to that matchup is not “depoliticized” in any useful sense.
This seems like a proposal to cede an untenable amount of conversation territory. If a controversial political position becomes associated with a particular epistemic error, then discussing that specific error becomes off-limits here, or at least needs to be deprecated as political. I don't know what that results in, but it's not a Rationalist community.
I do in fact think that Zack’s purpose in posting the article here is probably at least in part to promote the political position for which the article is arguing, and that if that isn’t so—if Zack’s intention was simply to draw our attention to a well-executed bit of epistemology—then it is likely that Zack finds it well-executed partly because of finding it politically congenial. In that sense, I do think it’s probably a “political act”.
A clear implication of Something to Protect [LW · GW] is that people can't be Rationalists unless getting the right answer has some practical importance to them.
The rest of your comment seems to be making a substantially wrong guess about Zack's position on gender in a way that - to me, since I know something about Zack's position - is pretty strong evidence that Zack succeeded in stripping out the accidental specifics and focusing on the core epistemic question. The standard you're actually holding Zack to is one where if you - perhaps knowing already that he has some thoughts on gender - can project a vaguely related politically motivated argument onto his post, then it's disingenuous to say it's nonpolitical.
Replies from: gjm
↑ comment by gjm · 2019-03-13T16:27:04.672Z · LW(p) · GW(p)
(I'm responding to this after already reading and replying to your earlier comment. Apologies in advance if it turns out that I'd have done better with the other one if I'd read this first...)
I'll begin at the end. "... perhaps knowing already that he has some thoughts on gender". What actually happened is that I started reading the article without noticing the website's name, got a few paragraphs in and thought "ah, OK, so this is a fairly heavy-handed allegory for some trans-related thing", finished reading it and was fairly unimpressed, then noticed the URL. As for the author, I didn't actually realise that Zack was the author of the linked article until the discussion here was well underway.
I think we may disagree about what constitutes strong evidence of having successfully stripped out the accidental specifics. Suppose you decide to address some controversial question obliquely. Then there are three different ways in which a reader can come to a wrong opinion about your position on the controversial question. (1) You can detach what you're writing from your actual position on the object-level issue successfully enough that a reasonable person would be unable to figure out what your position is. (2) You can write something aimed at conveying your actual position, but do it less than perfectly. (3) You can write something aimed at conveying your actual position, and do it well, but the reader can make mistakes, or lack relevant background knowledge, and come to a wrong conclusion. It seems like you're assuming #1. I think #2 and #3 are at least as plausible.
(As to whether I have got Zack's position substantially wrong, it's certainly possible that I might have, by any or all of the three mechanisms in the last paragraph. I haven't gone into much detail on what I think Zack's position is so of course there are also possibilities 4 and 5: that I've understood it right but expressed that understanding badly, or that I've understood and expressed it OK but you've misunderstood what I wrote. If you think it would be helpful, then I can try to state more clearly what I think Zack's position is and he can let us know how right or wrong I got it. My guess is that it wouldn't be super-helpful, for what it's worth.)
OK, now back to the start. My reply to your other comment addresses the first point (about what alleged error Zack is responding to) and I don't think what you've said here changes what I want to say about that.
On the second point (ceding too much territory) I think you're assuming I'm saying something I'm not, namely that nothing with political implications can ever be discussed here. I don't think I said that; I don't believe it; I don't think anything I said either implies or presupposes it. What I do think is (1) that Zack's article appears to me to be mostly about the politics despite what Zack calls its "deniable allegory", (2) that linking mostly-political things from here ought to be done in a way that acknowledges their political-ness and clarifies how they're intended to be relevant to LW, and (3) that (in my judgement, with which of course others may disagree) this particular article, if we abstract out the political application, isn't very valuable as a discussion of epistemology in the abstract.
I'm not sure I've understood what point you're making when you reference Something to Protect; I think that again you may be taking me to be saying something more negative than I thought I was saying. At any rate, I certainly neither think nor intended to suggest that we should only talk about things of no practical importance.
↑ comment by Zack_M_Davis · 2019-03-13T03:05:45.142Z · LW(p) · GW(p)
It seems relevant here that Zack pretty much agreed with my description: see his comments using terms like "deniable allegory", "get away with it", etc.
So, from my perspective, I'm facing a pretty difficult writing problem here! (See my reply to Dagon [LW(p) · GW(p)].) I agree that we don't want Less Wrong to be a politicized space. On the other hand, I also think that a lot of self-identified rationalists are making a politically-motivated epistemology error in asserting category boundaries to be somewhat arbitrary, and it's kind of difficult to address what I claim is the error without even so much as alluding to the object-level situation that I think is motivating the error! For the long, object-level discussion, see my reply to Scott Alexander, "The Categories Were Made for Man To Make Predictions". (Sorry if the byline mismatch causes confusion; I'm using a pen name for that blog.) I didn't want to share "... To Make Predictions" on Less Wrong (er, at least not as a top-level post), because that clearly would be too political. But I thought the "Blegg Mode" parable was sufficiently sanitized such that it would be OK to share as a link post here?
I confess that I didn't put a lot of thought into the description text which you thought was disingenuous. I don't think I was being consciously disingenuous (bad intent is a disposition, not a feeling!), but after you pointed it out, I do see your point that, since there is some unavoidable political context here, it's probably better to explicitly label that, because readers who had a prior expectation that no such context would exist would feel misled upon discovering it. So I added the "Content notice" to the description. Hopefully that addresses the concern?
our categories are [...] somewhat arbitrary
No! Categories are not "somewhat arbitrary"! There is structure in the world, and intelligent agents need categories that carve the structure at the joints [LW · GW] so that they can make efficient probabilistic inferences about the variables they're trying to optimize! "Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims." We had a whole Sequence [LW · GW] about this! Doesn't anyone else remember?!
Trans-ness is not always "cheap to detect". I guess it's cheaper to detect than, say, sex chromosomes. OK -- and how often are another person's sex chromosomes "decision-relevant with respect to the agent's goals"?
You seem to be making some assumptions about which parts of the parable are getting mapped to which parts of the real-world issue that obviously inspired the parable. I don't think this is the correct venue for me to discuss the real-world issue. On this website, under this byline, I'd rather only talk about bleggs and rubes—even if you were correct to point out that it would be disingenuous for someone to expect readers to pretend not to notice the real-world reason that we're talking about bleggs and rubes. With this in mind, I'll respond below to a modified version of part of your comment (with edits bracketed).
I guess it's cheaper to detect than, say, [palladium or vanadium content]. OK -- and how often [is a sortable object's metal content] "decision-relevant with respect to the agent's goals"? Pretty much only if [you work in the sorting factory.] [That's] fairly uncommon -- for most of us, very few of the [sortable objects] we interact with [need to be sorted into bins according to metal content].
Sure! But reality is very high-dimensional [LW · GW]—bleggs and rubes have other properties besides color, shape, and metal content—for example, the properties of being flexible-vs.-hard or luminescent-vs.-non-luminescent, as well as many others that didn't make it into the parable. If you care about making accurate predictions about the many properties of sortable objects that you can't immediately observe, then how you draw your category boundaries matters, because your brain is going to be using the category membership you assigned in order to derive your prior expectations about the variables that you haven't yet observed [LW · GW].
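As a tiny illustration of that inference pattern (the conditional frequencies below are made-up numbers, not anything from the parable beyond its general shape): whatever category an object gets filed under supplies the prior for every attribute that hasn't been checked yet, which is exactly what a lumped-together category would blur.

```python
# P(attribute | category): illustrative numbers only.
CATEGORY_STATS = {
    "ordinary blegg": {"flexible": 0.95, "luminescent": 0.95, "vanadium": 0.98},
    "adapted blegg":  {"flexible": 0.05, "luminescent": 0.05, "vanadium": 0.10},
}

def prior_for_unobserved(category, attribute):
    # Having assigned a category from cheap observations (color, shape, ...),
    # read off the expectation for an attribute not yet measured.
    return CATEGORY_STATS[category][attribute]

print(prior_for_unobserved("ordinary blegg", "vanadium"))  # 0.98
print(prior_for_unobserved("adapted blegg", "vanadium"))   # 0.10
```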
sex chromosomes, which is exactly the "expensive" feature the author identifies in the case of trans people.
The author did no such thing! It's epistemology fiction about bleggs and rubes! It's true that I came up with the parable while I was trying to think carefully about transgender stuff that was of direct and intense personal relevance to me. It's true that it would be disingenuous for someone to expect readers to not-notice that I was trying to think about trans issues. (I mean, it's in the URL.) But I didn't say anything about chromosomes! "If confusion threatens when you interpret a metaphor as a metaphor, try taking everything completely literally."
Trying to think of some examples, it seems to me that what matters is simply the presence of features that are "decision-relevant with respect to the agent's goals". [...]
Thanks for this substantive, on-topic criticism! I would want to think some more before deciding how to reply to this.
ADDENDUM: I thought some more and wrote a sister comment [LW(p) · GW(p)].
Replies from: gjm
↑ comment by gjm · 2019-03-13T13:53:22.281Z · LW(p) · GW(p)
Yes, I agree that the content-note deals with my "disingenuousness" objection.
I agree (of course!) that there is structure in the world and that categories are not completely arbitrary. It seems to me that this is perfectly compatible with saying that they are _somewhat_ arbitrary, which conveniently is what I did actually say. Some categorizations are better than others, but there are often multiple roughly-equally-good categorizations and picking one of those rather than another is not an epistemological error. There is something in reality that is perfectly precise and leaves no room for human whims, but that thing is not usually (perhaps not ever) a specific categorization.
So, anyway, in the particular case of transness, I agree that it's possible that some of the four categorizations we've considered here (yours, which makes trans people a separate category but nudge-nudge-wink-wink indicates that for most purposes trans people are much more "like" others of their 'originally assigned' gender than others of their 'adopted' gender; and the three others I mentioned: getting by with just two categories and not putting trans people in either of them; getting by with just two categories and putting trans people in their 'originally assigned' category; getting by with just two categories and putting trans people in their 'adopted' category) are so much better than others that we should reject them. But it seems to me that the relative merits of these depend on the agent's goals, and the best categorization to adopt may be quite different depending on whether you're (e.g.) a medical researcher, a person suffering gender dysphoria, a random member of the general public, etc. -- and also on your own values and priorities.
I did indeed make some assumptions about what was meant to map to what. It's possible that I didn't get them quite right. I decline to agree with your proposal that if something metaphorical that you wrote doesn't seem to match up well I should simply pretend that you didn't intend it as a metaphor and take it completely literally, though of course it's entirely possible that some different match-up makes it work much better.
Replies from: Zack_M_Davis, Richard_Kennaway↑ comment by Zack_M_Davis · 2019-03-14T05:32:24.759Z · LW(p) · GW(p)
But it seems to me that that the relative merits of these depend on the agent's goals, and the best categorization to adopt may be quite different depending on whether you're [...] and also on your own values and priorities.
Yes, I agree! (And furthermore, the same person might use different categorizations at different times depending on what particular aspects of reality are most relevant to the task at hand.)
But given an agent's goals in a particular situation, I think it would be a shocking coincidence for it to be the case that "there are [...] multiple roughly-equally-good categorizations." Why would that happen often?
If I want to use sortable objects as modern art sculptures to decorate my living room, then the relevant features are shape and color, and I want to think about rubes and bleggs (and count adapted bleggs as bleggs). If I also care about how the room looks in the dark and adapted bleggs don't glow in the dark like ordinary bleggs do, then I want to think about adapted bleggs as being different from ordinary bleggs.
If I'm running a factory that harvests sortable objects for their metal content and my sorting scanner is expensive to run, then I want to think about rubes and ordinary bleggs (because I can infer metal content with acceptably high probability by observing the shape and color of these objects), but I want to look out for adapted bleggs (because their metal content is, with high probability, not what I would expect based on the color/shape/metal-content generalizations I learned from my observations of rubes and ordinary bleggs). If the factory invests in a new state-of-the-art sorting scanner that can be cheaply run on every object, then I don't have any reason to care about shape or color anymore—I just care about palladium-cored objects and vanadium-cored objects.
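(If it helps to make the asymmetry concrete, here is a minimal sketch of the expensive-scanner policy I have in mind. The only facts taken from this discussion are that ordinary bleggs glow and adapted bleggs don't; the particular decision rule, the dictionary representation, and the helper name scan_for_palladium are all invented for illustration.)

```python
def sort_object(obj, scan_for_palladium):
    """Decide which bin an object goes in, paying for the scanner only when needed.

    obj: dict with cheap-to-observe keys "color", "shape", "glows".
    scan_for_palladium: callable that runs the expensive scanner and
        returns True iff the object is palladium-cored.
    """
    if obj["color"] == "red" and obj["shape"] == "cube":
        return "palladium bin"   # regular rube: cheap features suffice
    if obj["color"] == "blue" and obj["shape"] == "egg" and obj["glows"]:
        return "vanadium bin"    # regular blegg: cheap features suffice
    # Anything else -- including blegg-looking objects that don't glow,
    # i.e. possible adapted bleggs -- gets the expensive test.
    return "palladium bin" if scan_for_palladium(obj) else "vanadium bin"
```

With the new cheap scanner, the whole policy collapses to the last line, which is the sense in which shape and color stop mattering.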
and picking one of those rather than another is not an epistemological error.
If you're really somehow in a situation where there are multiple roughly-equally-good categorizations with respect to your goals and the information you have, then I agree that picking one of those rather than another isn't an epistemological error. Google Maps and MapQuest are not exactly the same map, but if you just want to drive somewhere, they both reflect the territory pretty well: it probably doesn't matter which one you use. Faced with an arbitrary choice, you should make an arbitrary choice: flip a coin, or call random.random().
And yet somehow, I never run into people who say, "Categories are somewhat arbitrary, therefore you might as well roll a d3 to decide whether to say 'trans women are women' or 'so-called "trans women" are men' or 'transwomen are transwomen', because each of these maps is doing a roughly-equally-good job of reflecting the relevant aspects of the territory." But I run into lots of people who say, "Categories are somewhat arbitrary, therefore I'm not wrong to insist that trans women are women," and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.
You see the problem? If the one has some sort of specific argument for why I should use a particular categorization system in a particular situation, then that's great, and I want to hear it! But it has to be an argument and not a selectively-invoked appeal-to-arbitrariness conversation-halter [LW · GW].
Replies from: gjm↑ comment by gjm · 2019-03-14T14:50:54.873Z · LW(p) · GW(p)
Multiple roughly-equally-good categorizations might not often happen to an idealized superintelligent AI that's much better than we are at extracting all possible information from its environment. But we humans are slow and stupid and make mistakes, and accordingly our probability distributions are really wide, which means our error bars are large and we often find ourselves with multiple hypotheses we can't decide between with confidence.
(Consider, for a rather different example, political questions of the form "how much of X should the government do?" where X is providing a social "safety net", regulating businesses, or whatever. Obviously these are somewhat value-laden questions, but even if I hold that constant by e.g. just trying to decide what I think is optimal policy I find myself quite uncertain.)
Perhaps more to the point, most of us are in different situations at different times. If what matters to you about rubleggs is sometimes palladium content, sometimes vanadium content, and sometimes furriness, then I think you have to choose between (1) maintaining a bunch of different categorizations and switching between them, (2) maintaining a single categorization that's much finer grained than is usually needed in any single situation and aggregating categories in different ways at different times, and (3) finding an approach that doesn't rely so much on putting things into categories. The cognitive-efficiency benefits of categorization are much diminished in this situation.
Your penultimate paragraph argues (I think) that talk of categories' somewhat-arbitrariness (like, say, Scott's in TCWMFM) is not sincere and is adopted merely as an excuse for taking a particular view of trans people (perhaps because that's socially convenient, or feels nice, or something). Well, I guess that's just the mirror image of what I said about your comments on categories, so turnabout is fair play, but I don't think I can agree with it.
So it seems to me very not-true that the idea that categories are somewhat arbitrary is a thing invoked only in order to avoid having to take a definite position (or, in order to avoid choosing one's definite position on the basis of hard facts rather than touchy-feely sensitivity) on how to think and talk about trans people.
Replies from: Zack_M_Davis, gjm↑ comment by Zack_M_Davis · 2019-03-16T01:01:03.586Z · LW(p) · GW(p)
The "Disguised Queries" [? · GW] post that first introduced bleggs and rubes makes essentially the point that categories are somewhat arbitrary, that there's no One True Right Answer to "is it a blegg or a rube?", and that which answer is best depends on what particular things you care about on a particular occasion.
That's not how I would summarize that post at all! I mean, I agree that the post did literally say that ("The question 'Is this object a blegg?' may stand in for different queries on different occasions"). But it also went on to say more things that I think substantially change the moral—
If [the question] weren't standing in for some query, you'd have no reason to care.
[...] People who argue that atheism is a religion "because it states beliefs about God" are really trying to argue (I think) that the reasoning methods used in atheism are on a par with the reasoning methods used in religion, or that atheism is no safer than religion in terms of the probability of causally engendering violence, etc... [...]
[...] The a priori irrational part is where, in the course of the argument, someone pulls out a dictionary and looks up the definition of "atheism" or "religion". [...] How could a dictionary possibly decide whether an empirical cluster of atheists is really substantially different from an empirical cluster of theologians? How can reality vary with the meaning of a word? The points in thingspace don't move around when we redraw a boundary. [bolding mine—ZMD]
But people often don't realize that their argument about where to draw a definitional boundary, is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster...
I claim that what Yudkowsky said about the irrationality of appealing to the dictionary goes just as well for appeals to personal values or priorities. It's not false exactly, but it doesn't accomplish anything.
Suppose Bob says, "Abortion is murder [LW · GW], because it's the killing of a human being!"
Alice says, "No, abortion isn't murder, because murder is the killing of a sentient being, and fetuses aren't sentient."
As Alice and Bob's hired rationalist mediator, you could say, "You two just have different preferences about somewhat-arbitary category boundaries, that's all! Abortion is murder-with-respect-to-Bob's-definition, but it isn't murder-with-respect-to-Alice's-definition. Done! End of conversation! [LW · GW]"
And maybe sometimes there really is nothing more to it than that [LW · GW]. But oftentimes, I think we can do more work to break the symmetry: to work out what different predictions Alice and Bob are making about reality, or what different preferences they have about reality, and refocus the discussion on that. As I wrote in "The Categories Were Made for Man to Make Predictions":
If different political factions are engaged in conflict over how to define the extension of some common word—common words being a scarce and valuable resource both culturally and information-theoretically—rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in describing the conflict. Before shrugging and saying, "Well, this is a difference in values; nothing more to be said about it," we can talk about the detailed consequences of what is gained or lost by paying attention to some differences and ignoring others.
We had an entire Sequence [LW · GW] specifically about this! You were there! [LW(p) · GW(p)] I was there! Why doesn't anyone remember?!
Replies from: gjm↑ comment by gjm · 2019-03-16T17:23:05.767Z · LW(p) · GW(p)
I wasn't claiming to summarize "Disguised Queries". I was pointing out one thing that it says, which happens to be the thing that you say no one says other than to push a particular position on trans issues, and which "Disguised Queries" says with (so far as I can tell) no attempt to say anything about transness at all.
Alice and Bob's conversation doesn't have to end once they (hopefully) recognize that their disagreement is about category boundaries as much as it is about matters of fact. They may well want to figure out why they draw their boundaries in different places. It might be because they have different purposes; or because they have different opinions on some other matter of fact; or because one or both are really making appeals to emotion for an already-decided conclusion rather than actually trying to think clearly about what sort of a thing a foetus is; etc.
Ending a conversation, or a train of thought, prematurely, is a bad thing. It seems altogether unfair to complain at me merely for using words that could be abused for that purpose. (If you see me actually trying to end a conversation with them, of course, then by all means complain away.)
Over and over again in this discussion, it seems as if I'm being taken to say things I'm fairly sure I haven't said and certainly don't believe. If it's because I'm communicating badly, then I'm very sorry. But it might be worth considering other explanations.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2019-03-16T21:01:41.466Z · LW(p) · GW(p)
I wasn't claiming to summarize "Disguised Queries".
I may have misinterpreted what you meant by the phrase "makes essentially the point that."
the thing that you say no one says other than to push a particular position on trans issues
I see. I think I made a mistake in the great-great-grandparent comment [LW(p) · GW(p)]. That comment's penultimate paragraph ended: "[...] and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context." I should not have written that, because as you pointed out in the great-grandparent [LW(p) · GW(p)], it's not true. This turned out to be a pretty costly mistake on my part, because we've now just spent the better part of four comments litigating the consequences of this error in a way that we could have avoided if only I had taken more care to phrase the point I was trying to make less hyperbolically.
The point I was trying to make in the offending paragraph is that if someone honestly believes that the choice between multiple category systems is arbitrary or somewhat-arbitrary, then they should accept the choice being made arbitrarily or somewhat-arbitrarily. I agree that "It depends on what you mean by X" is often a useful motion, but I think it's possible to distinguish when it's being used to facilitate communication from when it's being used to impose frame control. Specifically: it's incoherent to say, "It's arbitrary, so you should do it my way," because if it were really arbitrary, the one would not be motivated to say "you should do it my way." In discussions about my idiosyncratic special interest, I very frequently encounter incredibly mendacious frame-control attempts from people who call themselves "rationalists" and who don't seem to do this on most other topics. (This is, of course, with respect to how I draw the "incredibly mendacious" category boundary.)
Speaking of ending conversations, I'm feeling pretty emotionally exhausted, and we seem to be spending a lot of wordcount on mutual misunderstandings, so unless you have more things you want to explain to me, maybe this should be the end of the thread? Thanks for the invigorating discussion! This was way more productive than most of the conversations I've had lately! (Which maybe tells you something about the quality of those other discussions.)
Replies from: gjm↑ comment by gjm · 2019-03-18T00:48:27.189Z · LW(p) · GW(p)
Happy to leave it here; I have a few final comments that are mostly just making explicit things that I think we largely agree on. (But if any of them annoy you, feel free to have the last word.)
1. Yeah, sorry, "essentially" may have been a bad choice of word. I meant "makes (inter alia) a point which is essentially that ..." rather than "makes, as its most essential part, the point that ...".
2. My apologies for taking you more literally than intended. I agree that "it's arbitrary so you should do it my way" is nuts. On the other hand, "there's an element of choice here, and I'm choosing X because of Y" seems (at least potentially) OK to me. I don't know what specific incredibly mendacious things you have in mind, but e.g. nothing in Scott's TCWMFM strikes me as mendacious and I remain unconvinced by your criticisms of it. (Not, I am fairly sure, because I simply don't understand them.)
Finally, my apologies for any part of the emotional exhaustion that's the result of things I said that could have been better if I'd been cleverer or more sensitive or something of the kind.
↑ comment by gjm · 2019-03-14T15:19:06.632Z · LW(p) · GW(p)
Meta: That comment had a bunch of bullet points in it when I wrote it. Now (at least for me, at least at the moment) they seem to have disappeared. Weird. [EDIT to clarify:] I mean that the bullet symbols themselves, and the indentation that usually goes with them, have gone. The actual words are still there.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-03-15T00:26:06.800Z · LW(p) · GW(p)
Our bad. We broke bullet-lists with a recent update that also added autolinking. I am working on a fix that should ideally go up tonight.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-03-15T03:16:07.467Z · LW(p) · GW(p)
Should be fixed now. Sorry for the inconvenience.
Replies from: gjm↑ comment by gjm · 2019-03-15T16:54:38.022Z · LW(p) · GW(p)
My comment above is unchanged, which I guess means it was a parsing rather than a rendering problem if the bug is now fixed.
... Nope, still broken, sorry. But it looks as if the vertical spacing is different from what it would be if these were all ordinary paragraphs, so something is being done. In the HTML they are showing up as <li> elements, without any surrounding <ul> or anything of the sort; I don't know whether that's what's intended.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-03-15T18:10:47.831Z · LW(p) · GW(p)
Wait, that list is definitely bulleted, and I also fixed your comment above. Are we seeing different things?
Replies from: Zack_M_Davis, habryka4↑ comment by Zack_M_Davis · 2019-03-15T23:09:20.281Z · LW(p) · GW(p)
I don't see bullets on Firefox 65.0.1, but I do on Chromium 72.0.3626.121 (both Xubuntu 16.04.5).
Replies from: gjm↑ comment by gjm · 2019-03-16T14:59:50.486Z · LW(p) · GW(p)
Right. I'm using Firefox and see no bullets. We're in "Chrome is the new IE6" territory, I fear; no one bothers testing things on Firefox any more. Alas!
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-03-16T19:37:27.248Z · LW(p) · GW(p)
I have a PR that fixes it properly. Should be up by Monday.
I usually check browser compatibility, I just didn't consider it in this case since I didn't actually expect that something as old as bullet lists would still have browser rendering differences.
Replies from: gjm↑ comment by habryka (habryka4) · 2019-03-15T18:15:22.247Z · LW(p) · GW(p)
My guess is it's some browser inconsistency because of orphaned <li> elements. Will try to fix that as well.
↑ comment by Richard_Kennaway · 2019-03-16T09:31:21.038Z · LW(p) · GW(p)
Categories are never arbitrary. They are created to serve purposes. They can serve those purposes better or worse. There can be multiple purposes, leading to multiple categories overlapping and intersecting. Purposes can be lost (imagine a link to the Sequences posting on lost purposes). “Arbitrary” is a “buffer” or “lullaby” word (imagine another link, I might put them in when I’m not writing on a phone on a train) that obscures all that.
Replies from: gjm↑ comment by gjm · 2019-03-16T14:56:58.138Z · LW(p) · GW(p)
It seems to me that you're saying a bunch of things I already said, and saying them as if they are corrections to errors I've made. For instance:
RK: "Categories are never arbitrary." gjm: "categories are not completely arbitrary."
RK: "They are created to serve purposes." gjm: "the relative merits of these depend on the agent's goals"
RK: "They can serve those purposes better or worse." gjm: "Some categorizations are better than others [...] the relative merits of these depend on the agent's goals."
So, anyway, I agree with what you say, but I'm not sure why you think (if you do -- it seems like you do) I was using "arbitrary" as what you call a "lullaby word". I'm sorry if for you it obscured any of those points about categories, though clearly it hasn't stopped you noticing them; you may or may not choose to believe me when I said it didn't stop me noticing them either.
For what it's worth, I think what I mean when I say "categories are somewhat arbitrary" is almost exactly the same as what you mean when you say "they are created to serve purposes".
↑ comment by Zack_M_Davis · 2019-03-13T06:20:40.132Z · LW(p) · GW(p)
Trying to think of some examples, it seems to me that what matters is simply the presence of features that are "decision-relevant with respect to the agent's goals". [...]
So, I think my motivation (which didn't make it into the parable) for the "cheap-to-detect features that correlate with decision-relevant, expensive-to-detect features" heuristic is that I'm thinking in terms of naïve Bayes models [LW · GW]. You imagine a "star-shaped" causal graph [LW · GW] with a central node (whose various values represent the possible categories you might want to assign an entity to), with arrows pointing to various other nodes (which represent various features of the entity). (That is, we're assuming that the features of the entity are conditionally independent given category membership: P(X|C) = Π_i P(X_i|C).) Then when we observe some subset of features, we can use that to update our probabilities of category-membership, and use that to update our probabilities of the features we haven't observed yet. The "category" node doesn't actually "exist" out there in the world—it's something we construct to help factorize our probability distribution over the features (which do "exist").
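(A minimal numerical sketch of that star-shaped model, in case it's helpful; all of the specific probabilities are invented for illustration, except that the 0.98 is meant to echo the parable's 2%-palladium bleggs.)

```python
import numpy as np

categories = ["rube", "blegg"]
prior = np.array([0.5, 0.5])                      # P(C)
features = ["blue", "egg_shaped", "vanadium_core"]
cond_prob = np.array([                            # P(feature_i = 1 | C)
    [0.05, 0.10, 0.02],   # rube
    [0.95, 0.90, 0.98],   # blegg
])

def posterior_over_category(observed):
    """observed: dict of feature_name -> 0 or 1 for the features we've seen."""
    post = prior.copy()
    for name, value in observed.items():
        p1 = cond_prob[:, features.index(name)]
        post *= p1 if value else 1.0 - p1
    return post / post.sum()

def predict_unobserved(observed, target):
    """P(target = 1 | observed features), marginalizing over the category node."""
    post = posterior_over_category(observed)
    return float(post @ cond_prob[:, features.index(target)])

# Observing the cheap features makes the expensive one highly probable
# without ever scanning the object:
print(predict_unobserved({"blue": 1, "egg_shaped": 1}, "vanadium_core"))  # ~0.97
```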
So, as AI designers, we're faced with the question of how we want the "category" node to work. I'm pretty sure there's going to be a mathematically correct answer to this that I just don't know (yet) because I don't study enough and haven't gotten to Chapter 17 of Daphne Koller and the Methods of Rationality. Since I'm not there yet, if I just take an intuitive amateur guess at how I might expect this to work, it seems pretty intuitively plausible that we're going to want the category node to be especially sensitive to cheap-to-observe features that correlate with goal-relevant features? Like, yes, we ultimately just want to know as much as possible about the decision-relevant variables, but if some observations are more expensive to make than others, that seems like the sort of thing the network should be able to take into account, right??
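(The kind of calculation I'm gesturing at is a value-of-information computation: observe the cheap features for free, and pay for the expensive observation only when the resulting posterior leaves the binning decision genuinely uncertain. A toy sketch, with utilities and scanner cost that I made up:)

```python
U_CORRECT, U_WRONG = 1.0, -1.0   # invented utilities for binning right/wrong

def utility_of_binning_now(p_vanadium):
    """Expected utility of the best bin given our current P(vanadium core)."""
    eu_vanadium_bin = p_vanadium * U_CORRECT + (1 - p_vanadium) * U_WRONG
    eu_palladium_bin = (1 - p_vanadium) * U_CORRECT + p_vanadium * U_WRONG
    return max(eu_vanadium_bin, eu_palladium_bin)

def net_value_of_scanning(p_vanadium, scan_cost):
    """Scanning reveals the core exactly, so afterwards we always bin correctly."""
    return (U_CORRECT - scan_cost) - utility_of_binning_now(p_vanadium)

# A confident posterior makes the scan a waste; an ambiguous,
# adapted-blegg-looking object makes it worth paying for.
for p in (0.98, 0.60):
    print(f"P(vanadium)={p}: net value of scanning = {net_value_of_scanning(p, scan_cost=0.3):+.2f}")
```

The cheap-to-observe features matter exactly insofar as they move P(vanadium) far enough away from 0.5 that paying for the expensive observation stops being worth it.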
Remember those 2% of otherwise ordinary bleggs that contain palladium? Personally, I'd want a category for those
I agree that "things that look like 'bleggs' that contain palladium" is a concept that you want to be able to think about. (I just described it in words, therefore it's representable!) But while working on the sorting line, your visual system's pattern-matching faculties aren't going to spontaneously invent "palladium-containing bleggs" as a thing to look out for if you don't know any way to detect them, whereas if adapted bleggs tend to look different in ways you can see, then that category is something your brain might just "learn from experience." In terms of the naïve Bayes model, I'm sort of assuming that the 2% of palladium-containing non-adapted bleggs are "flukes": that variable takes that value with that probability independently of the other blegg features. I agree that if that assumption were wrong, then that would be really valuable information, and if you suspect that assumption is wrong, then you should definitely be on the lookout for ways to spot palladium-containing bleggs.
But like, see this thing I'm at least trying to do here, where I think there's learnable statistical structure in the world that I want to describe using language? That's pretty important! I can totally see how, from your perspective, on certain object-level applications, you might suspect that the one who says, "Hey! Categories aren't even 'somewhat' arbitrary! There's learnable statistical structure in the world; that's what categories are for!" is secretly being driven by nefarious political motivations. But I hope you can also see how, from my perspective, I might suspect that the one who says, "Categories are somewhat arbitrary; the one who says otherwise is secretly being driven by nefarious political motivations" is secretly being driven by political motivations that have pretty nefarious consequences for people like me trying to use language to reason about the most important thing in my life, even if the psychological foundation of the political motivation is entirely kindhearted.
Replies from: jessica.liu.taylor, gjm↑ comment by jessicata (jessica.liu.taylor) · 2019-03-13T07:55:46.634Z · LW(p) · GW(p)
Since I'm not there yet, if I just take an intuitive amateur guess at how I might expect this to work, it seems pretty intuitively plausible that we're going to want the category node to be especially sensitive to cheap-to-observe features that correlate with goal-relevant features? Like, yes, we ultimately just want to know as much as possible about the decision-relevant variables, but if some observations are more expensive to make than others, that seems like the sort of thing the network should be able to take into account, right??
I think the mathematically correct thing here is to use something like the expectation maximization algorithm. Let's say you have a dataset that is a list of elements, each of which has some subset of its attributes known to you, and the others unknown. EM does the following:
- Start with some parameters (parameters tell you things like what the cluster means/covariance matrices are; it's different depending on the probabilistic model)
- Use your parameters, plus the observed variables, to infer the unobserved variables (and cluster assignments) and put Bayesian distributions over them
- Do something mathematically equivalent to generating a bunch of "virtual" datasets by sampling the unobserved variables from these distributions, then setting the parameters to assign high probability to the union of these virtual datasets (EM isn't usually described this way but it's easier to think about IMO)
- Repeat starting from step 2
This doesn't assign any special importance to observed features. Since step 3 is just a function of the virtual datasets (not taking into account additional info about which variables are easy to observe), the fitted parameters are going to take all the features, observable or not, into account. However, the hard-to-observe features are going to have more uncertainty to them, which affects the virtual datasets. With enough data, this shouldn't matter that much, but the argument for this is a little complicated.
Another way to solve this problem (which is easier to reason about) is by fully observing a sufficiently high number of samples. Then there isn't a need for EM, you can just do clustering (or whatever other parameter fitting) on the dataset (actually, clustering can be framed in terms of EM, but doesn't have to be). Of course, this assigns no special importance to easy-to-observe features. (After learning the parameters, we can use them to infer the unobserved variables probabilistically)
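Here's a rough sketch of what the virtual-dataset version of this could look like, using a mixture of independent Bernoulli features as the probabilistic model (the model choice and all the details are just for illustration, not anything the blegg parable specifies):

```python
import numpy as np

def em_bernoulli_mixture(X, n_clusters=2, n_iters=100, seed=0):
    """EM for a mixture of independent Bernoulli features with missing data.

    X: array of shape (n_objects, n_features); entries are 0.0, 1.0, or
       np.nan where a feature wasn't observed.
    Returns mixing weights pi (n_clusters,) and per-cluster feature
    probabilities theta (n_clusters, n_features).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    observed = ~np.isnan(X)
    pi = np.full(n_clusters, 1.0 / n_clusters)
    theta = rng.uniform(0.25, 0.75, size=(n_clusters, d))

    for _ in range(n_iters):
        # E-step: cluster responsibilities, computed from each object's
        # observed features only (unobserved features are marginalized out).
        log_r = np.tile(np.log(pi), (n, 1))
        for k in range(n_clusters):
            ll = np.where(X == 1.0, np.log(theta[k]), np.log1p(-theta[k]))
            log_r[:, k] += np.where(observed, ll, 0.0).sum(axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: the "virtual dataset" move -- fill each missing entry with
        # its expected value under the current parameters, then refit the
        # parameters to the responsibility-weighted data.
        for k in range(n_clusters):
            X_filled = np.where(observed, X, theta[k])
            theta[k] = np.clip(r[:, k] @ X_filled / r[:, k].sum(), 1e-6, 1 - 1e-6)
        pi = r.mean(axis=0)

    return pi, theta
```

With fully observed rows the missing-data machinery does nothing and this reduces to ordinary soft clustering, which is the fully-observed alternative described above. And as noted, nothing in the update treats easy-to-observe features specially; observability only shows up indirectly, through which entries end up being np.nan.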
Philosophically, "functions of easily-observed features" seem more like percepts than concepts (this post describes the distinction). These are still useful, and neural nets are automatically going to learn high-level percepts (i.e. functions of observed features), since that's what the intermediate layers are optimized for. However, a Bayesian inference method isn't going to assign special importance to observed features, as it treats the observations as causally downstream of the ontological reality rather than causally upstream of it.
↑ comment by gjm · 2019-03-13T14:21:06.487Z · LW(p) · GW(p)
I share jessicata's feeling that the best set of concepts to work with may not be very sensitive to what's easy to detect. This might depend a little on how we define "concepts", and you're right that your visual system or some other fairly "early" bit of processing may well come up with ways of lumping things together, and that that will be dependent on what's easy to detect, whether or not we want to call those things concepts or categories or percepts or whatever else.
But in the cases I can think of where it's become apparent that some set of categories needs refinement, there doesn't seem to be a general pattern of basing that refinement on the existence of convenient detectable features. (Except in the too-general sense that everything ultimately comes down to empirical observation.)
I don't think your political motivations are nefarious, and I don't think there's anything wrong with a line of thinking that goes "hmm, it seems like the way a lot of people think about X makes them misunderstand an important thing in my life really badly; let's see what other ways one could think about X, because they might be better" -- other than that "hard cases make bad law", and that it's easy to fall into an equal-and-opposite error where you think about X in a way that would make you misunderstand a related important thing in other people's lives. The political hot potato we're discussing here demonstrably is one where some people have feelings that (so far as I can tell) are as strong as yours and of opposite sign, after all. (Which may suggest, by the way, that if you want an extra category then you may actually need two or more extra categories: "adapted bleggs" may have fundamental internal differences from one another. [EDITED to add:] ... And indeed your other writings on this topic do propose two or more extra categories.)
I am concerned that we are teetering on the brink of -- if we have not already fallen into -- exactly the sort of object-level political/ideological/personal argument that I was worried about when you first posted this. Words like "nefarious" and "terrorist" seem like a warning sign. So I'll limit my response to that part of what you say to this: It is not at all my intention to endorse any way of talking to you, or anyone else, that makes you, or anyone else, feel the way you describe feeling in that "don't negotiate with terrorist memeplexes" article.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2019-03-14T06:00:46.856Z · LW(p) · GW(p)
I share jessicata's feeling that the best set of concepts to work with may not be very sensitive to what's easy to detect. [...] there doesn't seem to be a general pattern of basing that refinement on the existence of convenient detectable features
Yeah, I might have been on the wrong track there. (Jessica's comment is great! I need to study more!)
I am concerned that we are teetering on the brink of -- if we have not already fallen into -- exactly the sort of object-level political/ideological/personal argument that I was worried about
I think we're a safe distance from the brink.
Words like "nefarious" and "terrorist" seem like a warning sign
"Nefarious" admittedly probably was a high-emotional-temperature warning sign (oops), but in this case, "I don't negotiate with terrorists" is mostly functioning as the standard stock phrase to evoke the timeless-decision-theoretic "don't be extortable" game-theory intuition, which I don't think should count as a warning sign, because it would be harder to communicate if people had to avoid genuinely useful metaphors because they happened to use high-emotional-valence words.
↑ comment by Zack_M_Davis · 2019-03-12T03:55:35.391Z · LW(p) · GW(p)
Can you say more? What should the description say instead? (I'm guessing you're referring to the fact that the post has some subtext that probably isn't a good topic fit for Less Wrong? But I would argue that the text (using the blegg/rube parable setting to make another point about the cognitive function of categorization) totally is relevant and potentially interesting!)
Replies from: gjm↑ comment by gjm · 2019-03-12T12:52:48.988Z · LW(p) · GW(p)
"Fanfiction for the blegg/rube parable" and "to make another point about the cognitive function of categorization" are both completely ignoring the very large elephant in the rather small room.
The actual topic of the piece is clearly the currently hot topic of How To Think About Trans People. (Words like "trans" and "gender" are never mentioned, but it becomes obvious maybe four or five paragraphs in.) Which is a sufficiently mindkilling topic for sufficiently many people that maybe it's worth mentioning.
(Or maybe not; you might argue that actually readers are more likely to be able to read the thing without getting mindkilled if their attention isn't drawn to the mindkilling implications. But I don't think many of those likely to be mindkilled will miss those implications; better to be up front about them.)
Replies from: habryka4, Zack_M_Davis, Zack_M_Davis, Dagon↑ comment by habryka (habryka4) · 2019-03-13T03:49:36.597Z · LW(p) · GW(p)
When I first read the post, I did not notice any reference to any mindkilling topics and was actually quite confused and surprised when I saw the comments about all of this being about something super political, and still found the post moderately useful. So I do think that I am a counterexample to your "I don't think many of those likely to be mindkilled will miss those implications" argument.
Replies from: gjm↑ comment by Zack_M_Davis · 2019-03-12T15:50:59.043Z · LW(p) · GW(p)
better to be up front about them
... you're right. (I like the aesthetics of the "deniable allegory" writing style, but delusionally expecting to get away with it is trying to have one's cake and eat it, too.) I added a "Content notice" to the description here.
Replies from: gjm↑ comment by gjm · 2019-03-13T16:42:06.120Z · LW(p) · GW(p)
I know it's rather a side issue, but personally I hate the "deniable allegory" style, though LW is probably a better fit for it than most places ...
1. The temptation to say literally-X-but-implying-Y and then respond to someone arguing against Y with "oh, but I wasn't saying that at all, I was only saying X; how very unreasonable of you to read all that stuff into what I wrote!" is too often too difficult to resist.
2. Even if the deniable-allegorist refrains from any such shenanigans, the fear of them (as a result of being hit by such things in the past by deniable allegorists with fewer scruples) makes it an unpleasant business for anyone who finds themselves disagreeing with any of the implications.
3. And of course the reason why that tactic works is that often one does misunderstand the import of the allegory; a mode of discussion that invites misunderstandings is (to me) disagreeable.
4. The allegorical style can say, or at least gesture towards, a lot of stuff in a small space. This means that anyone trying to respond to it in literal style is liable to look like an awful pedant. On the other hand, if you try to meet an allegory with another allegory, (a) that's hard to do well and (b) after one or two rounds the chances are that everyone is talking past everyone else. Which might be fun but probably isn't productive.
↑ comment by Zack_M_Davis · 2019-03-12T14:46:52.302Z · LW(p) · GW(p)
Thanks. In retrospect, possibly a better approach for this venue would have been to carefully rewrite the piece for Less Wrong in a way that strips more subtext/conceals more of the elephant (e.g., cut the "disrespecting that effort" paragraph).
Replies from: Dagon↑ comment by Dagon · 2019-03-14T00:11:10.978Z · LW(p) · GW(p)
I think, to make it work for my conception of LW, you'd also want to acknowledge other approaches (staying with 2 categories and weighting the attributes, staying with 2 categories and just acknowledging they're imperfect, giving up on categories and specifying attributes individually, possibly with predictions of hidden attributes, adding more categories and choosing based on the dimension with biggest deviation from average, etc.), and identify when they're more appropriate than your preferred approach.
↑ comment by Dagon · 2019-03-13T00:11:00.552Z · LW(p) · GW(p)
WTF. I didn't downvote (until now), but didn't see any point to so many words basically saying "labels are lossy compression, get over it".
Now that I actually notice the website name and understand that it's an allegory for a debate that doesn't belong here (unless gender categorization somehow is important to LW posts), I believe it also doesn't belong here. I believe that it doesn't belong here regardless of which side I support (and I don't have any clue what the debate is, so I don't know what the lines are or which side, if any, I support).
Replies from: Raemon, Zack_M_Davis↑ comment by Raemon · 2019-03-13T00:57:51.533Z · LW(p) · GW(p)
Quick note that the mod team has been observing this post and the surrounding discussion and isn't 100% sure how to think about it. The post itself is sufficiently abstracted that unless you're already aware of the political discussion, it seems fairly innocuous. Once you're aware of the political discussion it's fairly blatant. It's unclear to me how bad this is.
I do not have much confidence in any of the policies we could pick and stick to here. I've been mostly satisfied with the resulting conversation on LW staying pretty abstract and meta level.
Replies from: Raemon, habryka4↑ comment by Raemon · 2019-03-13T01:16:06.929Z · LW(p) · GW(p)
Perhaps also worth noting: I was looking through two other recent posts, Tale of Alice Almost [LW · GW] and In My Culture [LW · GW], through a similar lens. They each give me the impression that they are relating in some way to a political dispute which has been abstracted away, with a vague feeling that the resulting post may somehow still be a part of the political struggle.
I'd like to have a moderation policy (primarily about whether such posts get frontpaged) that works regardless of whether I actually know anything about any behind-the-scenes drama. I've mulled over a few different such policies, each of which would result in different outcomes as to which of the three posts would get frontpaged. But in each case the three posts are hovering near the edge of however I'd classify them.
(The mod team was fairly divided on how important a lens this was and/or exactly how to think about it, so just take this as my own personal thoughts for now)
↑ comment by habryka (habryka4) · 2019-03-13T03:45:36.584Z · LW(p) · GW(p)
My current model is that I am in favor of people trying to come up with general analogies, even if they are in the middle of thinking about mindkilling topics. I feel like people have all kinds of weird motivations for writing posts, and trying to judge and classify based on them is going to be hard and set up weird metacognitive incentives, whereas just deciding whether something is useful for trying to solve problems in general has overall pretty decent incentives and allows us to channel a lot of people's motivations about political topics into stuff that is useful in a broader context. (And I think some of Sarah Constantin's stuff is a good example of ideas that I found useful completely separate from the political context and where I am quite glad she tried to abstract them away from the local political context that probably made her motivated to think about those things)
↑ comment by Zack_M_Davis · 2019-03-13T01:17:11.256Z · LW(p) · GW(p)
unless [...] categorization somehow is important to LW posts
Categorization is hugely relevant to Less Wrong! We had a whole Sequence [LW · GW] about this!
Of course, it would be preferable to talk about the epistemology of categories with non-distracting [LW · GW] examples if at all possible. One traditional strategy for avoiding such distractions is to abstract the meta-level point one is trying to make into a fictional parable about non-distracting things. See, for example, Scott Alexander's "A Parable on Obsolete Ideologies" [LW · GW], which isn't actually about Nazism—or rather, I would say, is about something more general than Nazism.
Unfortunately, this is extremely challenging to do well—most writers who attempt this strategy fail to be subtle enough, and the parable falls flat. For this they deserve to be downvoted.
Replies from: Dagon, gjm, habryka4↑ comment by Dagon · 2019-03-13T22:23:32.149Z · LW(p) · GW(p)
So I think my filter for "appropriate to LessWrong" is that it should be an abstraction and generalization, NOT a parable or obfuscation to a specific topic. If there is a clean mapping to a current hotbutton, the author should do additional diligence to find counterexamples (the cases where more categories are costly, or where some dimensions are important for some uses and not for others, so you should use tagging rather than categorization) in order to actually define a concept rather than just restating a preference.
↑ comment by gjm · 2019-03-13T16:33:16.754Z · LW(p) · GW(p)
I think it is worth pointing out explicitly (though I expect most readers noticed) that Dagon wrote "unless gender categorization is important" and Zack turned it into "unless ... categorization is important" and then said "Categorization is hugely relevant". And that it's perfectly possible that (1) a general topic can be highly relevant in a particular venue without it being true that (2) a specific case of that general topic is relevant there. And that most likely Dagon was not at all claiming that categorization is not an LW-relevant topic, but that gender categorization in particular is a too-distracting topic.
(I am not sure I agree with what I take Dagon's position to be. Gender is a very interesting topic, and would be even if it weren't one that many people feel very strongly about, and it relates to many very LW-ish topics -- including, as Zack says, that of categorization more generally. Still, it might be that it's just too distracting.)
Replies from: Dagon↑ comment by Dagon · 2019-03-13T21:53:03.839Z · LW(p) · GW(p)
The right word to elide from my objection would be "categorization" - I should have said "unless gender is important", as that's the political topic I don't think we can/should discuss here. Categorization in mathematical abstraction is on-topic, as would be a formal definition/mapping of a relevant category to mathematically-expressible notation.
Loose, informal mappings of non-relevant topics are not useful here.
And honestly, I'm not sure how bright my line is - I can imagine topics related to gender or other human relationship topics that tend to bypass rationality being meta-discussed here, especially if it's about raising the sanity waterline on such topics, and how to understand what goes wrong when they're discussed at the object level. I doubt we'd get good results if we had direct object-level debates or points made here on those topics.
↑ comment by habryka (habryka4) · 2019-03-13T03:46:55.993Z · LW(p) · GW(p)
I think I roughly agree with this, though the LW team definitely hasn't discussed this at length yet, and so this is just my personal opinion until I've properly checked in with the rest of the team.