Comment by gjm on On the Nature of Programming Languages · 2019-04-23T00:39:53.441Z · score: 3 (2 votes) · LW · GW

I think your h4ck3r-versus-n00b dichotomy may need a little adjustment.

It's true that some hackers prefer mathematics-y languages like, say, Haskell or Scheme, with elegantly minimal syntax and a modest selection of powerful features that add up to something tremendous.

But _plenty_ of highly skilled and experienced software-makers program in, for instance, C++, which really doesn't score too highly on the elegance-and-abstraction front. Plenty more like to program in C, which does better on elegance and worse on abstraction and is certainly a long way from mathematical elegance. Plenty more like to program in Python, which was originally designed to be (inter alia) a noob-friendly language, and is in fact a pretty good choice for a first language to teach to a learner. And, on the other side of things, Scheme -- which seems like it has a bunch of the characteristics you're saying are typical of "expert-focused" languages -- has always had a great deal of educational use, by (among others) the very people who were and are designing it.

If you're designing a programming language, you certainly need to figure out whether to focus on newcomers or experts, but I don't think that choice alone nails down very much about the language, and I don't think it aligns with elegance-versus-let's-politely-call-it-richness.

Comment by gjm on Experimental Open Thread April 2019: Socratic method · 2019-04-04T15:01:03.894Z · score: 8 (4 votes) · LW · GW

Would you care to distinguish between "there is no territory" (which on the face of it is a metaphysical claim, just like "there is a territory", and if we compare those two then it seems like the consistency of what we see might be evidence for "a territory" over "no territory") and "I decline to state or hold any opinion about territory as opposed to models"?

Comment by gjm on User GPT2 is Banned · 2019-04-03T14:45:39.147Z · score: 4 (2 votes) · LW · GW

I'm pretty sure that's wrong for three reasons. First, there are 365 days in a year, not 355. Second, there are actually 366 days next year because it's a leap year (and the extra day is before April 1). Third, the post explicitly says "may not post again until April 1, 2020".
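
(A minimal sanity check of that arithmetic, sketched in Python under the assumption that the relevant span runs from April 1, 2019 to April 1, 2020:)

```python
from datetime import date

# 2020 is a leap year and Feb 29, 2020 falls before April 1,
# so the April-to-April span contains 366 days rather than 365.
print((date(2020, 4, 1) - date(2019, 4, 1)).days)  # -> 366
```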

Comment by gjm on User GPT2 is Banned · 2019-04-02T20:24:09.703Z · score: 5 (2 votes) · LW · GW

355 days?

Comment by gjm on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-02T00:15:28.766Z · score: 2 (1 votes) · LW · GW

I thought there was -- I thought I'd seen one with numbers in the style 1), 2), 3), ... going up to 25 -- but I now can't find it and the obvious hypothesis is that I'm just misremembering what I saw. My apologies.

Comment by gjm on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T22:00:29.434Z · score: 4 (2 votes) · LW · GW

If only it were.

Comment by gjm on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T21:58:24.068Z · score: 2 (1 votes) · LW · GW

True, but I don't think those were Markdown auto-numbers.

Comment by gjm on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T21:28:30.035Z · score: 6 (4 votes) · LW · GW

Already a thing: https://en.wikipedia.org/wiki/Reverse_Turing_test.

Comment by gjm on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T21:26:14.303Z · score: 3 (2 votes) · LW · GW

I have the same suspicion that they're human-written. (My comment there refers specifically to its better-than-expected counting skills; there are other less concrete signs, though I'm not enough of a GPT expert to know how strongly they really suggest non-bot-ness.)

I'm actually more impressed if the comments are written by a human; I am quite sure I couldn't write kinda-GPT-looking text as plausible as "GPT2"'s at the rate he/she/it's been churning them out.

(Impressive or not, it's a blight on LW and I hope it will disappear with the end of April Fool's Day.)

Comment by gjm on Open Thread April 2019 · 2019-04-01T15:08:29.966Z · score: 9 (4 votes) · LW · GW

I have strong-downvoted all of the GPT2 comments in the hope that a couple of other people will do likewise and push them below the threshold at which everyone gets them hidden without needing to diddle around in their profile. (I hope this doesn't trigger some sort of automatic malice-detector and get me banned or anything. I promise I downvoted all those comments on their merits. Man, they were so bad they might almost have been posted by a bot or something!)

The idea is hilarious in the abstract, but very much less funny in reality because it makes LW horrible to read. Perhaps if GPT2 were responding to 20% of comments instead of all of them, or something, it might be less unbearable.

Comment by gjm on Open Thread April 2019 · 2019-04-01T09:29:21.937Z · score: 4 (2 votes) · LW · GW

I'm really hoping they will all get deleted when what John Gruber calls "Internet Jackass Day" is over.

(Also ... one of its posts has a list of numbered points from 1) to 25), all in the correct order. I'm a little surprised by that -- I thought it had difficulty counting that far. Is this actually a (very annoying) reverse Turing test?)

Comment by gjm on Experimental Open Thread April 2019: Socratic method · 2019-04-01T01:53:51.050Z · score: 2 (1 votes) · LW · GW

Clarification request: At face value you're implying that typical rationalists always do require immediate explicit justification for their beliefs. I wonder whether that's an exaggeration for rhetorical effect. Could you be a bit more, um, explicit about just what the state of affairs is that you're suggesting is suboptimal?

Comment by gjm on Experimental Open Thread April 2019: Socratic method · 2019-04-01T01:50:57.439Z · score: 7 (3 votes) · LW · GW

Question: What empirical evidence do you have about this? (E.g., what do you observe introspectively, what have you seen others doing, etc., and how sure are you that those things are the way you think they are?)

Comment by gjm on Why the AI Alignment Problem Might be Unsolvable? · 2019-03-27T16:42:41.598Z · score: 2 (1 votes) · LW · GW

2. Again, there are plenty of counterexamples to the idea that human values have already converged. The idea behind e.g. "coherent extrapolated volition" is that (a) they might converge given more information, clearer thinking, and more opportunities for those with different values to discuss, and (b) we might find the result of that convergence acceptable even if it doesn't quite match our values now.

3. Again, I think there's a distinction you're missing when you talk about "removal of values" etc. Let's take your example: reading adult MLP fanfiction. Suppose the world is taken over by some being that doesn't value that. (As, I think, most humans don't.) What are the consequences for those people who do value it? Not necessarily anything awful, I suggest. Not valuing reading adult MLP fanfiction doesn't imply (e.g.) an implacable war against those who do. Why should it? It suffices that the being that takes over the world cares about people getting what they want; in that case, if some people like to write adult MLP fanfiction and some people like to read it, our hypothetical superpowerful overlord will likely prefer to let those people get on with it.

But, I hear you say, aren't those fanfiction works made of -- or at least stored in -- atoms that the Master of the Universe can use for something else? Sure, they are, and if there's literally nothing in the MotU's values to stop it repurposing them then it will. But there are plenty of things that can stop the MotU repurposing those atoms other than its own fondness for adult MLP fanfiction -- such as, I claim, a preference for people to get what they want.

There might be circumstances in which the MotU does repurpose those atoms: perhaps there's something else it values vastly more that it can't get any other way. But the same is true right here in this universe, in which we're getting on OK. If your fanfiction is hosted on a server that ends up in a war zone, or a server owned by a company that gets sold to Facebook, or a server owned by an individual in the US who gets a terrible health problem and needs to sell everything to raise funds for treatment, then that server is probably toast, and if no one else has a copy then the fanfiction is gone. What makes a superintelligent AI more dangerous here, it seems to me, is that maybe no one can figure out how to give it even humanish values. But that's not a problem that has much to do with the divergence within the range of human values: again, "just copy Barack Obama's values" (feel free to substitute someone whose values you like better, of course) is a counterexample, because most likely even an omnipotent Barack Obama would not feel the need to take away your guns^H^H^H^Hfanfiction.

To reiterate the point I think you've been missing: giving supreme power to (say) a superintelligent AI doesn't remove from existence all those people who value things it happens not to care about, and if it cares about their welfare then we should not expect it to wipe them out or to wipe out the things they value.

Comment by gjm on Do you like bullet points? · 2019-03-26T13:25:27.694Z · score: 15 (6 votes) · LW · GW

I'm mostly a fan.

1. I like presentation that foregrounds the structure of the ideas being presented. Sometimes bullet points do that well.

Some specific common structures that are served well by bullet points:

  • General principle with multiple supporting examples. (Like this list right here.)
  • Claim with multiple bits of supporting evidence/argument. This fits bullet points well because
    • you can put each bit in its own bullet point
    • you can bulletize recursively
      • so you can see the support for the claims that support the claims that support your main argument
    • if a reader is already satisfied that a thing is true, or so convinced it's wrong that they don't care what ridiculous bogus pseudo-evidence you've marshalled for it, they can skip the bullets
  • Claim with counterarguments/objections
    • You might think this is confusing because its presentation is just like that of the claim-with-support, where the bullet-pointed items have exactly the opposite significance.
      • Maybe it is, but I don't think any other mode of presentation does better.
    • This and its predecessor might be better thought of as special cases: you make a claim, and then you bulletize whatever bits of evidence or argument bear on it one way or the other.
      • In plain-text bullet-lists, I like to use "+" and "-" (and sometimes "=") as my "bullets" in this sort of context, with the obvious meaning.
  • Main argument and incidental remarks
    • I like pizza.
  • Chronological sequence with fairly clear-cut divisions (at regime changes, important technological/scientific developments, publication of important works, etc.)

2. I like concise, compact accounts of things. Bullet points can work against this (because they space things out) or for it (by encouraging terseness). But I don't like concision when it comes at the cost of clarity or correctness, and maybe concise bullet points are bad because they encourage omission of necessary nuance.

3. I agree with the person who said numbered lists are better than bullet points because they allow for easy cross-reference. (But also for easy screwups, if you add something and everything gets renumbered without the cross-refs being fixed up.)

4. Bullet-lists don't tend to make for elegant writing. Sometimes that matters, sometimes not.

5. Bullet-lists can obscure your logical structure instead of revealing it, as follows. The list structure takes the place of many explicit logical-structuring elements ("therefore", "because", "furthermore", etc.) but sometimes explicit is better than implicit and e.g. it may not be clear to the reader whether you're saying "here's another reason to believe X", "here's a good argument against X which I'll address below", "here's a silly argument against X which I bring up merely as a hook on which to hang something I want to say in favour of X", etc.

6. Although bullet-lists tend (on the whole) to clarify logical structure at small scales, they don't work so well at larger scales (say the length of an essay, or even a book). For that you need something else: chapters, headings, and so forth. And longer (say, paragraph-length or more) explanations of the structure. ("In this book I'm going to argue that scholarly publications in theoretical physics should be written in verse. The first three chapters motivate this by showing some examples of important papers whose impact was greatly reduced by their being written in prose. Then I'll explain in detail what poetic forms are most appropriate for what sort of research and why, giving numerous examples. The final four chapters of the book illustrate my thesis by taking Einstein's so-called "annus mirabilis" papers and rendering them in the sort of verse form I recommend. This book is dedicated to the memory of Omar Khayyam.")

7. If I'm writing down my thoughts on something to help clarify them, I often use something like bullet-point structure.

Comment by gjm on Why the AI Alignment Problem Might be Unsolvable? · 2019-03-26T03:19:19.600Z · score: 9 (5 votes) · LW · GW

1. Neither deontology nor virtue ethics is a special case of consequentialism. Some people really, truly do believe that sometimes the action with worse consequences is better. There are, to be sure, ways for consequentialists sometimes to justify deontologists' rules, or indeed their policy of rule-following, on consequentialist grounds -- and for that matter there are ways to construct rule-based systems that justify consequentialism. ("The one moral rule is: Do whatever leads to maximal overall flourishing!") They are still deeply different ways of thinking about morality.

You consider questions of sexual continence, honour, etc., "social mores, not morals", but I promise you there are people who think of such things as morals. You think such people have been "brainwashed", and perhaps they'd say the same about you; that's what moral divergence looks like.

2. I think that if what you wrote was intended to stand after "I think there is no convergence of moralities because ..." then it's missing a lot of steps. I should maybe repeat that I'm not asserting that there is convergence; quite likely there isn't. But I don't think anything you've said offers any strong reason to think that there isn't.

3. Once again, I think you are not being clear about the distinction between the things I labelled (i) and (ii), and I think it matters. And, more generally, it feels as if we are talking past one another: I get the impression that either you haven't understood what I'm saying, or you think I haven't understood what you're saying.

Let's be very concrete here. Pick some human being whose values you find generally admirable. Imagine that we put that person in charge of the world. We'll greatly increase their intelligence and knowledge, and fix any mental deficits that might make them screw up more than they need to, and somehow enable them to act consistently according to those admirable values (rather than, e.g., turning completely selfish once granted power, as real people too often do). Would you see that as an outcome much better than many of the nightmare misaligned-AI scenarios people worry about?

I would; while there's no human being I would altogether trust to be in charge of the universe, no matter how they might be enhanced, I think putting almost any human being in charge of the universe would (if they were also given the capacity to do the job without being overwhelmed) likely be a big improvement over (e.g.) tiling the universe with paperclips or little smiley human-looking faces, or over many scenarios where a super-powerful AI optimizes some precisely-specified-but-wrong approximation to one aspect of human values.

I would not expect such a person in that situation to eliminate people with different values from theirs, or to force everyone to live according to that person's values. I would not expect such a person in that situation to make a world in which a lot of things I find essential have been eliminated. (Would you? Would you find such behaviour generally admirable?)

And my point here is that nothing in your arguments shows any obstacle to doing essentially that. You argue that we can't align an AI's values with those of all of humanity because "all of humanity" has too many different diverging values, and that's true, but there remains the possibility that we could align them with those of some of humanity, to something like the extent that any individual's values are aligned with those of some of humanity, and even if that's the best we can hope for the difference between that and (what might be the default, if we ever make any sort of superintelligent AI) aligning its values with those of none of humanity is immense.

(Why am I bothering to point that out? Because it looks to me as if you're trying to argue that worrying about "value alignment" is likely a waste of time because there can be no such thing as value alignment; I say, on the contrary, that even though some notions of value alignment are obviously unachievable and some others may be not-so-obviously unachievable, still others are almost certainly achievable in principle and still valuable. Of course, I may have misunderstood what you're actually arguing for: that's the risk you take when you choose to speak in parables without explaining them.)

I feel I need to defend myself on one point. You say "You switched from X to Y" as if you think I either failed to notice the change or else was trying to pull some sort of sneaky bait-and-switch. Neither is the case, and I'm afraid I think you didn't understand the structure of my argument. I wanted to argue "we could do Thing One, and that would be OK". I approached this indirectly, by first of all arguing that we already have Thing Two, which is somewhat like Thing One, and is OK, and then addressing the difference between Thing One and Thing Two. But you completely ignored the bit where I addressed the difference, and just said "oi, there's a difference" as if I had no idea (or was pretending to have no idea) that there is one.

Comment by gjm on Why the AI Alignment Problem Might be Unsolvable? · 2019-03-25T20:48:20.457Z · score: 5 (3 votes) · LW · GW

1. Some varieties of moral thinking whose diversity doesn't seem to me to be captured by your eye-for-eye/golden-rule/max-flourish/min-suffer schema:

  • For some people, morality is all about results ("consequentialists"). For some, it's all about following some moral code ("deontologists"). For some, it's all about what sort of person you are ("virtue ethicists"). Your Minnie and Maxie are clearly consequentialists; perhaps Ivan is a deontologist; it's hard to be sure what Goldie is; but these different outlooks can coexist with a wide variety of object-level moral preferences and your four certainly don't cover all the bases here.
  • Your four all focus on moral issues surrounding _harming and benefiting_ people. Pretty much everyone does care about those things, but other very different things are important parts of some people's moral frameworks. For instance, some people believe in a god or gods and think _devotion to their god(s)_ more important than anything else; some people attach tremendous importance to various forms of sexual restraint (only within marriage! only between a man and a woman! only if it's done in a way that could in principle lead to babies! etc.); some people (perhaps this is part of where Ivan is coming from, but you can be quite Ivan-like by other means) have moral systems in which _honour_ is super-important and e.g. if someone insults you then you have to respond by taking them down as definitively as possible.

2. (You're answering with "Because ..." but I don't see what "why?" question I asked, either implicitly or explicitly, so at least one of us has misunderstood something here.) (a) I agree that there are lots of different ways in which convergence could happen, but I don't see why that in any way weakens the point that, one way or another, it _could_ happen. (b) It is certainly true that Maxie and Minnie, as they are now, disagree about some important things; again, that isn't news. The point I was trying to make is that it might turn out that as you give Maxie and Minnie more information, a deeper understanding of human nature, more opportunities to talk to one another, etc., they stop disagreeing, and if that happens then we might do OK to follow whatever system they end up with.

3. I'm not sure what you mean about "values being eliminated from existence"; it's ambiguous. Do you mean (i) there stop being people around who have those values or (ii) the world proceeds in a way that doesn't, whether or not anyone cares, tend to satisfy those values? Either way, note that "that range" was the _normal range of respected human values_. Right now, there are no agents around (that we know of) whose values are entirely outside the range of human values, and we're getting on OK. There are agents (e.g., some psychopaths, violent religious zealots, etc.) whose values are within the range of human values but outside the range of _respected human values_, and by and large we try to give them as little influence as possible. To be clear, I'm not proposing "world ruled by an entity whose values are similar to those of some particular human being generally regarded as decent" as a _triumphant win for humanity_, but it's not an _obvious catastrophe_ either and so far as I can tell the sort of issue you're raising presents no obstacle to that sort of outcome.

Comment by gjm on Why the AI Alignment Problem Might be Unsolvable? · 2019-03-24T19:54:56.549Z · score: 7 (4 votes) · LW · GW

1. I think there are a lot more than four different kinds of moral system.

2. If "value alignment" turns out to be possible in any sense stronger than "alignment of a superintelligence's values with those of one human or more-than-averagely-coherent group" it won't mean making it agree with all of humanity about everything, or even about every question of moral values. That's certainly impossible, and its impossibility is not news.

Way back, Eliezer had a (half-baked) scheme he called "coherent extrapolated volition", whose basic premise was that even though different people think and feel very differently about values it might turn out that if you imagine giving everyone more and better information, clearer thinking, and better channels of communication with one another, then their values might converge as you did so. That's probably wrong, but I'm not sure it's obviously wrong, and some other thing along similar lines might turn out to be right.

An example of the sort of thing that could be true: while "maximize flourishing" and "minimize suffering" look like quite different goals, it might be that there's a single underlying intuition that they stem from. (Perhaps which appeals more to you depends largely on whether you have noticed more dramatic examples of suffering or of missed opportunities to flourish.) Another: "an eye for an eye" isn't, and can't be, a moral system on its own -- it's dependent on having an idea of what sort of thing counts as putting out someone's eye. So what's distinctive about "an eye for an eye" is that we might want some people to flourish less, if they have been Bad. Well, it might turn out either that a strong policy of punishing defectors leads to things being better for almost everyone (in which case "Minnie" and "Maxie" might embrace that principle on pragmatic grounds, given enough evidence and understanding) or that it actually makes things worse even for the victims (in which case "Ivan" might abandon it on pragmatic grounds, given enough evidence and understanding).

3. Suppose that hope turns out to be illusory and there's no such thing as a single set of values that can reasonably claim to be in any sense the natural extrapolation of everyone's values. It might still turn out possible, e.g., to make a superintelligent entity whose values are, and remain, within the normal range of generally-respected human values. I think that would still be pretty good.

Comment by gjm on A Tale of Four Moralities · 2019-03-24T19:24:09.018Z · score: 28 (10 votes) · LW · GW

I don't think Dagon is saying "don't tell stories"; rather, he's saying "if you want to make an argument by telling stories, please at least tell us what the argument is meant to be so that we can evaluate it with system 2 as well as system 1".

Meditations on Moloch didn't just quote Ginsberg and say "Lo!", it explained what Scott was calling Moloch and why, and gave explicit concrete examples.

I think it's entirely possible that Sailor Vulcan has something interesting and/or important to say here, but at least from my perspective (as I think from Dagon's) it still needs actually saying rather than merely gesturing towards. Until we have at least some of those "nitty gritty details" we're promised later, it's hard to tell what bits of the story are intended more or less literally, what bits are intended as metaphors for other things, and what bits are mere window-dressing.

Comment by gjm on Blegg Mode · 2019-03-18T00:48:27.189Z · score: 6 (3 votes) · LW · GW

Happy to leave it here; I have a few final comments that are mostly just making explicit things that I think we largely agree on. (But if any of them annoy you, feel free to have the last word.)

1. Yeah, sorry, "essentially" may have been a bad choice of word. I meant "makes (inter alia) a point which is essentially that ..." rather than "makes, as its most essential part, the point that ...".

2. My apologies for taking you more literally than intended. I agree that "it's arbitrary so you should do it my way" is nuts. On the other hand, "there's an element of choice here, and I'm choosing X because of Y" seems (at least potentially) OK to me. I don't know what specific incredibly mendacious things you have in mind, but e.g. nothing in Scott's TCWMFM strikes me as mendacious and I remain unconvinced by your criticisms of it. (Not, I am fairly sure, because I simply don't understand them.)

Finally, my apologies for any part of the emotional exhaustion that's the result of things I said that could have been better if I'd been cleverer or more sensitive or something of the kind.

Comment by gjm on Blegg Mode · 2019-03-18T00:41:17.112Z · score: 2 (1 votes) · LW · GW

Thanks!

Comment by gjm on Blegg Mode · 2019-03-16T17:23:05.767Z · score: 2 (1 votes) · LW · GW

I wasn't claiming to summarize "Disguised Queries". I was pointing out one thing that it says, which happens to be the thing that you say no one says other than to push a particular position on trans issues, and which "Disguised Queries" says with (so far as I can tell) no attempt to say anything about transness at all.

Alice and Bob's conversation doesn't have to end once they (hopefully) recognize that their disagreement is about category boundaries as much as it is about matters of fact. They may well want to figure out why they draw their boundaries in different places. It might be because they have different purposes; or because they have different opinions on some other matter of fact; or because one or both are really making appeals to emotion for an already-decided conclusion rather than actually trying to think clearly about what sort of a thing a foetus is; etc.

Ending a conversation, or a train of thought, prematurely, is a bad thing. It seems altogether unfair to complain at me merely for using words that could be abused for that purpose. (If you see me actually trying to end a conversation with them, of course, then by all means complain away.)

Over and over again in this discussion, it seems as if I'm being taken to say things I'm fairly sure I haven't said and certainly don't believe. If it's because I'm communicating badly, then I'm very sorry. But it might be worth considering other explanations.

Comment by gjm on Blegg Mode · 2019-03-16T14:59:50.486Z · score: 3 (2 votes) · LW · GW

Right. I'm using Firefox and see no bullets. We're in "Chrome is the new IE6" territory, I fear; no one bothers testing things on Firefox any more. Alas!

Comment by gjm on Blegg Mode · 2019-03-16T14:56:58.138Z · score: 3 (2 votes) · LW · GW

It seems to me that you're saying a bunch of things I already said, and saying them as if they are corrections to errors I've made. For instance:

RK: "Categories are never arbitrary." gjm: "categories are not completely arbitrary."

RK: "They are created to serve purposes." gjm: "the relative merits of these depend on the agent's goals"

RK: "They can serve those purposes better or worse." gjm: "Some categorizations are better than others [...] the relative merits of these depend on the agent's goals."

So, anyway, I agree with what you say, but I'm not sure why you think (if you do -- it seems like you do) I was using "arbitrary" as what you call a "lullaby word". I'm sorry if for you it obscured any of those points about categories, though clearly it hasn't stopped you noticing them; you may or may not choose to believe me when I said it didn't stop me noticing them either.

For what it's worth, I think what I mean when I say "categories are somewhat arbitrary" is almost exactly the same as what you mean when you say "they are created to serve purposes".

Comment by gjm on Blegg Mode · 2019-03-15T16:54:38.022Z · score: 3 (2 votes) · LW · GW

My comment above is unchanged, which I guess means it was a parsing rather than a rendering problem if the bug is now fixed.

  • Do bullet lists work now?
  • If they do, this and the previous line should be bulleted.
  • ... Nope, still broken, sorry. But it looks as if the vertical spacing is different from what it would be if these were all ordinary paragraphs, so something is being done. In the HTML they are showing up as <li> elements, without any surrounding <ul> or anything of the sort; I don't know whether that's what's intended.

Comment by gjm on Blegg Mode · 2019-03-14T15:19:06.632Z · score: 3 (2 votes) · LW · GW

Meta: That comment had a bunch of bullet points in it when I wrote it. Now (at least for me, at least at the moment) they seem to have disappeared. Weird. [EDIT to clarify:] I mean that the bullet symbols themselves, and the indentation that usually goes with them, have gone. The actual words are still there.

Comment by gjm on Blegg Mode · 2019-03-14T14:50:54.873Z · score: 5 (3 votes) · LW · GW

Multiple roughly-equally-good categorizations might not often happen to an idealized superintelligent AI that's much better than we are at extracting all possible information from its environment. But we humans are slow and stupid and make mistakes, and accordingly our probability distributions are really wide, which means our error bars are large and we often find ourselves with multiple hypotheses we can't decide between with confidence.

(Consider, for a rather different example, political questions of the form "how much of X should the government do?" where X is providing a social "safety net", regulating businesses, or whatever. Obviously these are somewhat value-laden questions, but even if I hold that constant by e.g. just trying to decide what I think is optimal policy I find myself quite uncertain.)

Perhaps more to the point, most of us are in different situations at different times. If what matters to you about rubleggs is sometimes palladium content, sometimes vanadium content, and sometimes furriness, then I think you have to choose between (1) maintaining a bunch of different categorizations and switching between them, (2) maintaining a single categorization that's much finer grained than is usually needed in any single situation and aggregating categories in different ways at different times, and (3) finding an approach that doesn't rely so much on putting things into categories. The cognitive-efficiency benefits of categorization are much diminished in this situation.

Your penultimate paragraph argues (I think) that talk of categories' somewhat-arbitrariness (like, say, Scott's in TCWMFM) is not sincere and is adopted merely as an excuse for taking a particular view of trans people (perhaps because that's socially convenient, or feels nice, or something). Well, I guess that's just the mirror image of what I said about your comments on categories, so turnabout is fair play, but I don't think I can agree with it.

  • The "Disguised Queries" post that first introduced bleggs and rubes makes essentially the point that categories are somewhat arbitrary, that there's no One True Right Answer to "is it a blegg or a rube?", and that which answer is best depends on what particular things you care about on a particular occasion.
  • Scott's "Diseased thinking" (last time I heard, the most highly upvoted article in the history of Less Wrong) makes essentially the same point in connection to the category of "disease". (The leading example being obesity rather than, say, gender dysphoria.)
  • Scott's "The tails coming apart as a metaphor for life" does much the same for categories like "good thing" and "bad thing".
  • Here's a little thing from the Institute for Fiscal Studies about poverty metrics, which begins by observing that there are many possible ways to define poverty and nothing resembling consensus about which is best. (The categories here are "poor" and "not poor".)
  • More generally, "well, it all depends what you mean by X" has been a standard move among philosophers for many decades, and it's basically the same thing: words correspond to categories, categories are somewhat arbitrary, and questions about whether a P is or isn't a Q are often best understood as questions about how to draw the boundaries of Q, which in turn may be best understood as questions about values or priorities or what have you rather than about the actual content of the actual world.
  • So it seems to me very not-true that the idea that categories are somewhat arbitrary is a thing invoked only in order to avoid having to take a definite position (or, in order to avoid choosing one's definite position on the basis of hard facts rather than touchy-feely sensitivity) on how to think and talk about trans people.

Comment by gjm on Blegg Mode · 2019-03-13T22:59:50.163Z · score: 3 (2 votes) · LW · GW

For what it's worth, I feel the same way as you but with the obvious change of sign: it feels to me like you keep accusing me of saying somewhat-outrageous things that I'm not intending to say and don't believe, and when I ask why you'd think I mean that you just ignore it, and it feels to me like I've put much more trouble into understanding your position and clarifying mine than you have into understanding mine and clarifying yours.

Presumably the truth lies somewhere in between.

I don't think it is reasonable to respond to "I think Zack was trying to do X" with "That's ridiculous, because evidently it didn't work", for two reasons. Firstly, the great majority of attempts to promote a particular position on a controversial topic don't change anyone's mind, even in a venue like LW where we try to change our minds more readily when circumstances call for it. Secondly, if you propose that instead he was trying to put forward a particular generally-applicable epistemological position (though I still don't know what position you have in mind, despite asking several times, since the only particular one you've mentioned you then said wasn't an important part of the content of Zack's article) then I in turn can ask whether you can point to an example of someone who was persuaded of that by the article.

It's somewhat reasonable to respond to "I think Zack was trying to do X" with "But what he wrote is obviously not an effective way of doing X", but I don't see why it's any more obviously ineffective as a tool of political persuasion, or as an expression of a political position, than it is as a work of epistemological clarification, and in particular it doesn't even look to me more than averagely ineffective in such a role.

For the avoidance of doubt, I don't in the least deny that I might be wrong about what Zack was trying to do. (Sometimes a person thinks something's clear that turns out to be false. I am not immune to this.) Zack, if you happen to be reading and haven't been so annoyed by my comments that you don't want to interact with me ever again, anything you might want to say on this score would be welcome. If I have badly misunderstood what you wrote, please accept my apologies.

Comment by gjm on Blegg Mode · 2019-03-13T22:44:21.954Z · score: 3 (2 votes) · LW · GW

Wait, if you reckon the proposition I called P is "not actually an important part of the content of Zack's article" then what did you have in mind as the "politically motivated epistemic error" that Zack's article was about?

(Or, if P was that error, how am I supposed to understand your original protest which so far as I can tell only makes any sense if you consider that correcting the epistemic error was the whole point, or at least the main point, of Zack's article?)

Firmly agree with your last paragraph, though.

Comment by gjm on Blegg Mode · 2019-03-13T22:36:37.478Z · score: 3 (2 votes) · LW · GW

For the avoidance of doubt, I strongly agree that what counts as "matching reality much better" depends on what you are going to be using your map for; that's a key reason why I am not very convinced by Zack's original argument if it's understood as a rebuttal to (say) Scott's TCWMFM either in general or specifically as it pertains to the political question at issue.

Comment by gjm on Blegg Mode · 2019-03-13T16:42:06.120Z · score: 9 (5 votes) · LW · GW

I know it's rather a side issue, but personally I hate the "deniable allegory" style, though LW is probably a better fit for it than most places ...

1. The temptation to say literally-X-but-implying-Y and then respond to someone arguing against Y with "oh, but I wasn't saying that at all, I was only saying X; how very unreasonable of you to read all that stuff into what I wrote!" is too often too difficult to resist.

2. Even if the deniable-allegorist refrains from any such shenanigans, the fear of them (as a result of being hit by such things in the past by deniable allegorists with fewer scruples) makes it an unpleasant business for anyone who finds themselves disagreeing with any of the implications.

3. And of course the reason why that tactic works is that often one does misunderstand the import of the allegory; a mode of discussion that invites misunderstandings is (to me) disagreeable.

4. The allegorical style can say, or at least gesture towards, a lot of stuff in a small space. This means that anyone trying to respond to it in literal style is liable to look like an awful pedant. On the other hand, if you try to meet an allegory with another allegory, (a) that's hard to do well and (b) after one or two rounds the chances are that everyone is talking past everyone else. Which might be fun but probably isn't productive.

Comment by gjm on Blegg Mode · 2019-03-13T16:33:16.754Z · score: 3 (2 votes) · LW · GW

I think it is worth pointing out explicitly (though I expect most readers noticed) that Dagon wrote "unless gender categorization is important" and Zack turned it into "unless ... categorization is important" and then said "Categorization is hugely relevant". And that it's perfectly possible that (1) a general topic can be highly relevant in a particular venue without it being true that (2) a specific case of that general topic is relevant there. And that most likely Dagon was not at all claiming that categorization is not an LW-relevant topic, but that gender categorization in particular is a too-distracting topic.

(I am not sure I agree with what I take Dagon's position to be. Gender is a very interesting topic, and would be even if it weren't one that many people feel very strongly about, and it relates to many very LW-ish topics -- including, as Zack says, that of categorization more generally. Still, it might be that it's just too distracting.)

Comment by gjm on Blegg Mode · 2019-03-13T16:27:04.672Z · score: 3 (2 votes) · LW · GW

(I'm responding to this after already reading and replying to your earlier comment. Apologies in advance if it turns out that I'd have done better with the other one if I'd read this first...)

I'll begin at the end. "... perhaps knowing already that he has some thoughts on gender". What actually happened is that I started reading the article without noticing the website's name, got a few paragraphs in and thought "ah, OK, so this is a fairly heavy-handed allegory for some trans-related thing", finished reading it and was fairly unimpressed, then noticed the URL. As for the author, I didn't actually realise that Zack was the author of the linked article until the discussion here was well underway.

I think we may disagree about what constitutes strong evidence of having successfully stripped out the accidental specifics. Suppose you decide to address some controversial question obliquely. Then there are three different ways in which a reader can come to a wrong opinion about your position on the controversial question. (1) You can detach what you're writing from your actual position on the object-level issue successfully enough that a reasonable person would be unable to figure out what your position is. (2) You can write something aimed at conveying your actual position, but do it less than perfectly. (3) You can write something aimed at conveying your actual position, and do it well, but the reader can make mistakes, or lack relevant background knowledge, and come to a wrong conclusion. It seems like you're assuming #1. I think #2 and #3 are at least as plausible.

(As to whether I have got Zack's position substantially wrong, it's certainly possible that I might have, by any or all of the three mechanisms in the last paragraph. I haven't gone into much detail on what I think Zack's position is so of course there are also possibilities 4 and 5: that I've understood it right but expressed that understanding badly, or that I've understood and expressed it OK but you've misunderstood what I wrote. If you think it would be helpful, then I can try to state more clearly what I think Zack's position is and he can let us know how right or wrong I got it. My guess is that it wouldn't be super-helpful, for what it's worth.)

OK, now back to the start. My reply to your other comment addresses the first point (about what alleged error Zack is responding to) and I don't think what you've said here changes what I want to say about that.

On the second point (ceding too much territory) I think you're assuming I'm saying something I'm not, namely that nothing with political implications can ever be discussed here. I don't think I said that; I don't believe it; I don't think anything I said either implies or presupposes it. What I do think is (1) that Zack's article appears to me to be mostly about the politics despite what Zack calls its "deniable allegory", (2) that linking mostly-political things from here ought to be done in a way that acknowledges their political-ness and clarifies how they're intended to be relevant to LW, and (3) that (in my judgement, with which of course others may disagree) this particular article, if we abstract out the political application, isn't very valuable as a discussion of epistemology in the abstract.

I'm not sure I've understood what point you're making when you reference Something to Protect; I think that again you may be taking me to be saying something more negative than I thought I was saying. At any rate, I certainly neither think nor intended to suggest that we should only talk about things of no practical importance.

Comment by gjm on Blegg Mode · 2019-03-13T16:02:20.196Z · score: 4 (3 votes) · LW · GW

Aha, this clarifies some things helpfully. It is now much clearer to me than it was before what epistemological error you take Zack to be trying to correct here.

I still think it's clear that Zack's main purpose in writing the article was to promote a particular object-level position on the political question. But I agree that "even though categories are map rather than territory, some maps match reality much better than others, and to deny that is an error" (call this proposition P, for future use) is a reasonable point to make about epistemology in the abstract, and that given the context of Zack's article it's reasonable to take that to be a key thing it's trying to say about epistemology.

But it seems to me -- though perhaps I'm just being dim -- that the only possible way to appreciate that P was Zack's epistemological point is to be aware not only of the political (not-very-sub) subtext of the article (which, you'll recall, is the thing I originally said it was wrong not to mention) but also of the context where people were addressing that specific political issue in what Zack considers a too-subjective way. (For the avoidance of doubt, I'm not saying that that requires some sort of special esoteric knowledge unavailable to the rest of us. Merely having just reread Scott's TCWMFM would have sufficed. But it happened that I was familiar enough with it not to feel that I needed to revisit it, and not familiar enough with it to recognize every specific reference to it in Zack's article. I doubt I'm alone in that.)

Again, perhaps I'm just being dim. But I know that some people didn't even see the political subtext, and I know that I didn't see P as being Zack's main epistemological point before I read what you just wrote. (I'm still not sure it is, for what it's worth.) So it doesn't seem open to much doubt that just putting the article here without further explanation wasn't sufficient.

There's a specific way in which I could be being dim that might make that wrong: perhaps I was just distracted by the politics, and perhaps if I'd been able to approach the article as if it were purely talking in the abstract about epistemology I'd have taken it to be saying P. But, again, if so then I offer myself as evidence that it needed some clarification for the benefit of those liable to be distracted.

As to the rest:

It looks to me as if you are ascribing meanings and purposes to me that are not mine at all. E.g., "If we all know that such arguments aren't meant to be taken literally, but are instead meant to push one side of a particular political debate in that context" -- I didn't think I was saying, and I don't think I believe, and I don't think anything I said either implies or presupposes, anything like that. The impression I have is that this is one of those situations where I say X, you believe Y, from X&Y you infer Z, and you get cross because I'm saying Z and Z is an awful thing to say -- when what's actually happening is that we disagree about Y. Unfortunately, I can't tell what Y is in this situation :-).

So I don't know how to react to your suggestion that I should have said explicitly rather than just assuming that posts like Scott's TCWMFM "are mindkilled politics and engaging with them lowers the quality of discourse here"; presumably either (1) you think I actually think that or (2) you think that what I've said implies that so it's a useful reductio, but I still don't understand how you get there from what I actually wrote.

To be explicit about this:

I do not think that Scott's TCWMFM is "mindkilled politics".

I do not think that engaging with articles like Scott's TCWMFM lowers the quality of discourse.

I do not think that it's impossible to hold Scott's position honestly.

I do not think that it's impossible to hold Zack's position honestly.

I don't think that Zack's article is "mindkilled politics", but I do think it's much less good than Scott's.

I don't think Scott is making the epistemological mistake you say Zack is saying he's making, that of not understanding that one way of drawing category boundaries can be better than another. I think he's aware of that, but thinks (as, for what it's worth, I do, but I think Zack doesn't) that there are a number of comparably well matched with reality ways to draw them in this case.

I think that responding to Scott's article as if he were simply saying "meh, whatever, draw category boundaries literally any way you like, the only thing that matters is which way is nicest" is not reasonable, and I think that casting it as making the mistake you say Zack is saying Scott was making requires some such uncharitable interpretation. (This may be one reason why I didn't take P to be the main epistemological claim of Zack's article.)

If you're still offended by what I wrote, then at least one of us is misunderstanding the other and I hope that turns out to be fixable.

Comment by gjm on Blegg Mode · 2019-03-13T14:21:06.487Z · score: 3 (2 votes) · LW · GW

I share jessicata's feeling that the best set of concepts to work with may not be very sensitive to what's easy to detect. This might depend a little on how we define "concepts", and you're right that your visual system or some other fairly "early" bit of processing may well come up with ways of lumping things together, and that that will be dependent on what's easy to detect, whether or not we want to call those things concepts or categories or percepts or whatever else.

But in the cases I can think of where it's become apparent that some set of categories needs refinement, there doesn't seem to be a general pattern of basing that refinement on the existence of convenient detectable features. (Except in the too-general sense that everything ultimately comes down to empirical observation.)

I don't think your political motivations are nefarious, and I don't think there's anything wrong with a line of thinking that goes "hmm, it seems like the way a lot of people think about X makes them misunderstand an important thing in my life really badly; let's see what other ways one could think about X, because they might be better" -- other than that "hard cases make bad law", and that it's easy to fall into an equal-and-opposite error where you think about X in a way that would make you misunderstand a related important thing in other people's lives. The political hot potato we're discussing here demonstrably is one where some people have feelings that (so far as I can tell) are as strong as yours and of opposite sign, after all. (Which may suggest, by the way, that if you want an extra category then you may actually need two or more extra categories: "adapted bleggs" may have fundamental internal differences from one another. [EDITED to add:] ... And indeed your other writings on this topic do propose two or more extra categories.)

I am concerned that we are teetering on the brink of -- if we have not already fallen into -- exactly the sort of object-level political/ideological/personal argument that I was worried about when you first posted this. Words like "nefarious" and "terrorist" seem like a warning sign. So I'll limit my response to that part of what you say to this: It is not at all my intention to endorse any way of talking to you, or anyone else, that makes you, or anyone else, feel the way you describe feeling in that "don't negotiate with terrorist memeplexes" article.

Comment by gjm on Blegg Mode · 2019-03-13T13:53:22.281Z · score: 6 (3 votes) · LW · GW

Yes, I agree that the content-note deals with my "disingenuousness" objection.

I agree (of course!) that there is structure in the world and that categories are not completely arbitrary. It seems to me that this is perfectly compatible with saying that they are _somewhat_ arbitrary, which conveniently is what I did actually say. Some categorizations are better than others, but there are often multiple roughly-equally-good categorizations and picking one of those rather than another is not an epistemological error. There is something in reality that is perfectly precise and leaves no room for human whims, but that thing is not usually (perhaps not ever) a specific categorization.

So, anyway, in the particular case of transness, I agree that it's possible that some of the four categorizations we've considered here (yours, which makes trans people a separate category but nudge-nudge-wink-wink indicates that for most purposes trans people are much more "like" others of their 'originally assigned' gender than others of their 'adopted' gender; and the three others I mentioned: getting by with just two categories and not putting trans people in either of them; getting by with just two categories and putting trans people in their 'originally assigned' category; getting by with just two categories and putting trans people in their 'adopted' category) are so much better than others that we should reject them. But it seems to me that the relative merits of these depend on the agent's goals, and the best categorization to adopt may be quite different depending on whether you're (e.g.) a medical researcher, a person suffering gender dysphoria, a random member of the general public, etc -- and also on your own values and priorities.

I did indeed make some assumptions about what was meant to map to what. It's possible that I didn't get them quite right. I decline to agree with your proposal that if something metaphorical that you wrote doesn't seem to match up well I should simply pretend that you intended it as a metaphor, though of course it's entirely possible that some different match-up makes it work much better.

Comment by gjm on Blegg Mode · 2019-03-13T13:18:54.846Z · score: 3 (2 votes) · LW · GW

I'm not sure you are, since it seems you weren't at all mindkilled by it. I could be wrong, though; if, once you saw the implications, it took nontrivial effort to see past them, then I agree you're a counterexample.

Comment by gjm on Blegg Mode · 2019-03-13T00:02:02.841Z · score: 18 (10 votes) · LW · GW

Thanks for the explanation!

It's rather condensed, so it's very possible that my inability to see how it's a fair criticism of what I wrote is the result of my misunderstanding it. May I attempt to paraphrase your criticism at greater length and explain why I'm baffled? I regret that my attempt at doing this has turned out awfully long; at least it should be explicit enough that it can't reasonably be accused of "insinuating" anything...

So, I think your argument goes as follows. (Your argument, as I (possibly mis-) understand it, is in roman type with numbers at the start of each point. Italics indicate what I can't make sense of.)

1. The purpose of the linked article is not best understood as political, but as improving epistemic hygiene: its purpose is to correct something that's definitely an error, an error that merely happens to arise as a result of political biases.

It isn't clear to me what this error is meant to be. If it's something like "thinking that there must be a definite objectively-correct division of all things into bleggs and rubes" then I agree that it's an error but it's an error already thoroughly covered by EY's and SA's posts linked to in the article itself, and in any case it doesn't seem to me that the article is mostly concerned with making that point; rather, it presupposes it. The other candidates I can think of seem to me not to be clearly errors at all.

In any case, it seems to me that the main point of the linked article is not to correct some epistemic error, but to propose a particular position on the political issue it's alluding to, and that most of the details of its allegory are chosen specifically to support that aim.

2. The author has taken some trouble to address this error in terms that are "entirely depoliticized" as far as it's possible for it to be given that the error in question is politically motivated.

I think what I think of this depends on what the error in question is meant to be. E.g., if it's the thing I mentioned above then it seems clear that the article could easily have been much less political while still making the general point as clearly. In any case, calling this article "depoliticized" seems to me like calling Orwell's "Animal Farm" depoliticized because it never so much as mentions the USSR. Constructing a hypothetical situation designed to match your view of a politically contentious question and drawing readers' attention to that matchup is not "depoliticized" in any useful sense.

3. My description of Zack's description as "disingenuous" amounts to an accusation that Zack's posting the article here is a "political act" (which I take to mean: an attempt to manipulate readers' political opinions, or perhaps to turn LW into a venue for political flamewars, or something of the kind).

I do in fact think that Zack's purpose in posting the article here is probably at least in part to promote the political position for which the article is arguing, and that if that isn't so -- if Zack's intention was simply to draw our attention to a well-executed bit of epistemology -- then it is likely that Zack finds it well-executed partly because of finding it politically congenial. In that sense, I do think it's probably a "political act". My reasons for thinking this are (1) that my own assessment of the merits of the article purely as a piece of philosophy is not positive, and (2) that the political allegory seems to me so obviously the main purpose of the article that I have trouble seeing why anyone would recommend it for an entirely separate purpose. More on this below. I could of course be wrong about Zack's opinions and/or about the merits of the article as an epistemological exercise.

It seems relevant here that Zack pretty much agreed with my description: see his comments using terms like "deniable allegory", "get away with it", etc.

4. That could only be a reasonable concern if the people here were so bad at thinking clearly on difficult topics as to make the project of improving our thinking a doomed one.

I have no idea why anything like this should be so.

5. And it could only justify calling Zack's description "disingenuous" if that weren't only true but common knowledge -- because otherwise a better explanation would be that Zack just doesn't share my opinion about how incapable readers here are of clear thought on difficult topics.

That might be more or less right (though it wouldn't require quite so much as actual common knowledge) if point 4 were right, but as mentioned above I am entirely baffled by point 4.

Having laid bare my confusion, a few words about what I take the actual purpose of the article to be and why, and about its merits or demerits as a piece of philosophy. (By way of explaining some of my comments above.)

I think the (obvious, or so it seems to me) purpose of the article is to argue for the following position: "Trans people [adapted bleggs] should be regarded as belonging not to their 'adopted' gender [you don't really put them in the same mental category as bleggs], but to a category separate from either of the usual genders [they seem to occupy a third category in your ontology of sortable objects]; if you have to put people into two categories, trans people should almost always be grouped with their 'originally assigned' gender [so that you can put the majority of palladium-containing ones in the palladium bin (formerly known as the rube bin) ... 90% of the adapted bleggs—like 98% of rubes, and like only 2% of non-adapted bleggs—contain fragments of palladium]." And also, perhaps, to suggest that no one really truly thinks of trans people as quite belonging to their 'adopted' gender [And at a glance, they look like bleggs—I mean, like the more-typical bleggs ... you don't really put them in the same mental category as bleggs].

(Note: the article deals metaphorically only with one sort of transness -- rube-to-blegg. Presumably the author would actually want at least four categories: blegg, rube, rube-to-blegg, blegg-to-rube. Perhaps others too. I'm going to ignore that issue because this is plenty long enough already.)

    I don't think this can reasonably be regarded as correcting an epistemic error. There's certainly an epistemic error in the vicinity, as I mentioned above: the idea that we have to divide these hypothetical objects into exactly two categories, with there being a clear fact of the matter as to which category each object falls into -- and the corresponding position on gender is equally erroneous. But that is dealt with in passing in the first few paragraphs, and most of the article is arguing not merely for not making that error but for a specific other position, the one I described in the paragraph preceding this one. And that position is not so clearly correct that advocating it is simply a matter of correcting an error.

    (Is it not? No, it is not. Here are three other positions that contradict it without, I think, being flat-out wrong. 1. "We shouldn't put trans people in a third distinct category; rather, we should regard the usual two categories as fuzzy-edged, try to see them less categorically, and avoid manufacturing new categories unless there's a really serious need to; if someone doesn't fit perfectly in either of the two usual categories, we should resist the temptation to look for a new category to put them in." 2. "Having noticed that our categories are fuzzy and somewhat arbitrary, we would do best to stick with the usual two and put trans people in the category of their 'adopted' gender. We will sometimes need to treat them specially, just as we would in any case for e.g. highly gender-atypical non-trans people, but that doesn't call for a different category." 3. "Having noticed [etc.], we would do best to stick with the usual two and put trans people in the category of their 'originally assigned' gender. We will sometimes need [etc.].")

    I've indicated that if I consider the article as an epistemological exercise rather than a piece of political propaganda, I find it unimpressive. I should say a bit about why.

    I think there are two bits of actual epistemology here. The first is the observation that we don't have to put all of our bleggs/rubes/whatever into two boxes and assume that the categorization is Objectively Correct. Nothing wrong with that, but it's also not in any sense a contribution of this article, which already links to earlier pieces by Eliezer and Scott that deal with that point well.

    The second is the specific heuristic the author proposes: make a new category for things that have "cheap-to-detect features that correlate with more-expensive-to-detect features that are decision-relevant with respect to the agent's goals". So, is this a good heuristic?

    The first thing I notice about it is that it isn't a great heuristic even when applied to the specific example that motivates the whole piece. As it says near the start: 'you have no way of knowing how many successfully "passing" adapted bleggs you've missed'. Trans-ness is not always "cheap to detect". I guess it's cheaper to detect than, say, sex chromosomes. OK -- and how often are another person's sex chromosomes "decision-relevant with respect to the agent's goals"? Pretty much only if the agent is (1) a doctor treating them or (2) a prospective sexual partner who is highly interested in, to put it bluntly, their breeding potential. Those are both fairly uncommon -- for most of us, very few of the people we interact with are either likely patients or likely breeding partners.

    What about other cases where new categories have turned out to be wanted? Trying to think of some examples, it seems to me that what matters is simply the presence of features that are "decision-relevant with respect to the agent's goals". Sometimes they correlate with other cheaper-to-identify features, sometimes not. There are isotopes: we had the chemical elements, and then it turned out that actually we sometimes need to distinguish between U-235 and U-238. In this case it happens that you can distinguish them by mass, which I guess is easier than direct examination of the nuclei, but it seems to me that we'd care about the difference even if we couldn't do that, and relatively cheap distinguishability is not an important part of why we have separate categories for them. Indeed, when isotopes were first discovered it was by observing nuclear-decay chains. There are enantiomers: to take a concrete example, in the wake of the thalidomide disaster it suddenly became clear that it was worth distinguishing R-thalidomide from S-thalidomide. Except that, so far as I can tell, it isn't actually feasible to separate them, and when thalidomide is used medically it's still the racemic form and they just tell people who might get pregnant not to take it. So there doesn't seem to be a cheap-to-identify feature here in any useful sense. There are different types of supernova for which I don't see any cheap-feature/relevant-feature dichotomy. There are intersex people whose situation has, at least logically speaking, a thing or two in common with trans people; in many cases the way you identify them is by checking their sex chromosomes, which is exactly the "expensive" feature the author identifies in the case of trans people.

    I'm really not seeing that this heuristic is a particularly good one. It has the look, to me, of a principle that's constructed in order to reach a particular conclusion. Even though, as I said above, I am not convinced that it applies all that well even to the specific example I think it was constructed for. I also don't think it applies particularly well in the hypothetical situation the author made up. Remember those 2% of otherwise ordinary bleggs that contain palladium? Personally, I'd want a category for those, if I found myself also needing one for "adapted bleggs" because of the palladium they contain. It might be impracticably expensive, for now, to scan all bleggs in case they belong to the 2%, but I'd be looking out for ways to identify palladium-containing bleggs, and all palladium-containing bleggs might well turn out in the end to be a "better" category than "adapted bleggs", especially as only 90% of the latter contain palladium.
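
    To make that palladium point concrete, here is a minimal back-of-the-envelope sketch in Python. The 98% / 90% / 2% palladium rates are the ones quoted from the article above; the population mix (how many rubes, adapted bleggs, and ordinary bleggs come down the conveyor belt) is purely my own made-up assumption for illustration.

    ```python
    # Palladium rates quoted from the article: 98% of rubes, 90% of adapted bleggs,
    # and 2% of ordinary (non-adapted) bleggs contain palladium.
    pd_rate = {"rube": 0.98, "adapted blegg": 0.90, "ordinary blegg": 0.02}

    # Assumed mix per 1000 objects -- an illustrative guess, not a figure from the article.
    population = {"rube": 450, "adapted blegg": 50, "ordinary blegg": 500}

    expected_pd = {kind: round(population[kind] * pd_rate[kind]) for kind in population}
    print(expected_pd)  # {'rube': 441, 'adapted blegg': 45, 'ordinary blegg': 10}

    # Under these made-up proportions the "adapted blegg" bin captures 45 of the 55
    # palladium-bearing bleggs and also sweeps in 5 that contain none, whereas a
    # "palladium-containing blegg" category (if membership could be identified) would
    # line up exactly with the thing the sorter actually cares about.
    ```

    (The exact numbers don't matter; the point is only that once some ordinary bleggs also contain palladium, the "adapted blegg" boundary and the "palladium" boundary come apart, and which one is worth the cost of tracking depends on the agent's goals.)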

    So, as I say, not impressive epistemology, and it looks to me as if the principle was constructed for the sake of this particular application. Which is one more reason why I think that that application is the sole real point of the article.

    Comment by gjm on [Old] Mapmaking Series · 2019-03-12T17:49:11.585Z · score: 2 (1 votes) · LW · GW

    IT'S HARDER TO READ LONG PASSAGES OF TEXT WHEN THEY ARE WRITTEN WITH NO CAPITAL LETTERS.

    Comment by gjm on What Vibing Feels Like · 2019-03-12T15:36:28.586Z · score: 2 (1 votes) · LW · GW

    It seems to me like you are using "rationality" with a much broader meaning than currently appears to me to be useful.

    "Vibing", as you describe it, appears to be fundamentally non-rational. Once again, that doesn't mean it's bad, it doesn't even mean it's not extremely valuable, but something that essentially requires stopping thinking as soon as it rears its ugly head is, whatever its merits, not engaging in rationality. Even if it provides a way of getting at truths that what-I-would-call-rationality can't reach.

    (Cf. the discussions long ago about the perils of saying "rational" when we actually mean "optimal" or "good".)

    Comment by gjm on Blegg Mode · 2019-03-12T12:52:48.988Z · score: 3 (7 votes) · LW · GW

    "Fanfiction for the blegg/rube parable" and "to make another point about the cognitive function of categorization" are both completely ignoring the very large elephant in the rather small room.

    The actual topic of the piece is clearly the currently hot topic of How To Think About Trans People. (Words like "trans" and "gender" are never mentioned, but it becomes obvious maybe four or five paragraphs in.) Which is a sufficiently mindkilling topic for sufficiently many people that maybe it's worth mentioning.

    (Or maybe not; you might argue that actually readers are more likely to be able to read the thing without getting mindkilled if their attention isn't drawn to the mindkilling implications. But I don't think many of those likely to be mindkilled will miss those implications; better to be up front about them.)

    Comment by gjm on What Vibing Feels Like · 2019-03-12T02:58:45.486Z · score: 22 (7 votes) · LW · GW

    There are plenty of things that are usually impaired by thinking. Thinking, however, is not one of them. So while I'm sure you could "vibe" about rationality, that would need to be an activity very different from actually doing rationality.

    (Of course one doesn't have to be doing rationality all the time! And some of those things that are usually impaired by thinking are excellent things to do. So, for the avoidance of doubt, I'm not saying that "vibing" is a Bad Thing. I'm not sure it belongs here, though. Why do we need "What vibing feels like" any more than we need "What being stoned feels like" or "What cuddling with a romantic partner feels like"?)

    Comment by gjm on Open Thread March 2019 · 2019-03-12T02:03:47.746Z · score: 15 (11 votes) · LW · GW

    What does "inside standard deviation from 500" mean?

    Having a small p-value is exactly the same thing, at least for approximately normally distributed things like this, as being multiple standard deviations away from the norm.

    The specific number here is neither "like p=0.01" nor within one standard deviation of the mean. The variance of the binomial distribution is npq=250, so the standard deviation is just under 16. Being at least 20 away from 500 is therefore approximately a p=0.2 event.
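
    For anyone who wants to check the arithmetic, here is a minimal sketch (assuming, as the npq=250 figure implies, n = 1000 trials with p = 0.5, and using the normal approximation to the binomial):

    ```python
    import math

    # Assumed from the npq = 250 figure above: n = 1000 trials with p = 0.5.
    n, p = 1000, 0.5
    mean = n * p                        # 500
    sd = math.sqrt(n * p * (1 - p))     # ~15.81

    # Two-sided probability of landing at least 20 away from the mean, via the
    # normal approximation: P(|Z| >= z) = erfc(z / sqrt(2)).
    z = 20 / sd
    p_two_sided = math.erfc(z / math.sqrt(2))

    print(round(sd, 2), round(p_two_sided, 2))  # 15.81 0.21 -- i.e. roughly p = 0.2
    ```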

    Comment by gjm on Blegg Mode · 2019-03-12T01:53:19.243Z · score: 3 (7 votes) · LW · GW

    The description here seems a little ... disingenuous.

    [EDITED to add:] I see that this has been downvoted at least once. I don't object at all to being downvoted but find it hard to tell from just a downvote what it is that has displeased someone; if anyone would like to indicate why they dislike this comment, I'm all ears. (Objection to "disingenuous" as too harsh? Preferring the "deniable allegory", as Zack puts it, to remain deniable for longer? Disliking what they guess to be my position on the specific issue it's an allegory for? ...)

    Comment by gjm on What Vibing Feels Like · 2019-03-12T01:47:36.970Z · score: 20 (7 votes) · LW · GW

    It's gotta be tough to engage in any sort of rationality with a mindset that says that "thoughts are poison to the flow of value".

    Comment by gjm on Rule Thinkers In, Not Out · 2019-02-28T16:32:22.543Z · score: 19 (8 votes) · LW · GW

    There's a really good idea slipped into the above comment in passing; the purpose of this comment is to draw attention to it.

    close attention from a few "angel investors"

    Scott's article, like the earlier "epistemic tenure" one, implicitly assumes that we're setting a single policy for whose ideas get taken how seriously. But it may make sense for some people or communities -- these "angel investors" -- to take seriously a wider range of ideas than the rest of us, even knowing that a lot of those ideas will turn out to be bad ones, in the hope that they can eventually identify which ones were actually any good and promote those more widely.

    Taking the parallel a bit further, in business there are more levels of filtering than that. You have the crazy startups; then you have the angel investors; then you have the early-stage VCs; then you have the later VCs; and then you have, I dunno, all the world's investors. There are actually two layers of filtering at each stage -- investors may choose not to invest, and the company may fail despite the investment -- but let's leave that out for now. The equivalent in the marketplace of ideas would be a sort of hierarchy of credibility-donors: first of all you have individuals coming up with possibly-crackpot ideas, then some of them get traction in particular communities, then some of those come to the attention of Gladwell-style popularizers, and then some of the stuff they popularize actually makes it all the way into the general public's awareness. At each stage it should be somewhat harder to get treated as credible. (But is it? I wouldn't count on it. In particular, popularizers don't have the best reputation for never latching onto bad ideas and making them sound more credible than they really are...)

    (Perhaps the LW community itself should be an "angel investor", but not necessarily.)

    Comment by gjm on So You Want to Colonize The Universe Part 4: Velocity Changes and Energy · 2019-02-28T15:02:22.890Z · score: 3 (2 votes) · LW · GW

    Seems like that's going to be less effective than it might sound because of (1) beam divergence and (2) the fact that if there's even the tiniest misalignment between the mirrors then after a couple of bounces the light will be missing its target.
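
    To put a rough number on (1): a diffraction-limited beam from a circular aperture spreads with a half-angle of roughly 1.22 λ / D. A minimal sketch, with a wavelength, mirror size and path length that are purely illustrative assumptions on my part (nothing in the post pins them down):

    ```python
    # Diffraction-limited divergence of a circular aperture: theta ~ 1.22 * wavelength / D.
    # The specific wavelength, mirror size and path length below are illustrative assumptions.
    wavelength = 1.0e-6        # m, a near-infrared laser
    mirror_diameter = 10.0     # m
    path_length = 1.0e12       # m, i.e. roughly 6-7 AU of accumulated bounces

    theta = 1.22 * wavelength / mirror_diameter   # ~1.2e-7 rad
    spot_radius = theta * path_length             # ~1.2e5 m

    print(theta, spot_radius)
    # Even with perfect alignment, after ~10^12 m of path the spot is hundreds of
    # kilometres across, so almost all of the light misses any plausibly sized mirror
    # on each further bounce -- and that's before adding the misalignment problem in (2).
    ```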

    Comment by gjm on Epistemic Tenure · 2019-02-19T23:46:04.130Z · score: 5 (4 votes) · LW · GW

    If Bob's history is that over and over again he's said things that seem obviously wrong but he's always turned out to be right, I don't think we need a notion of "epistemic tenure" to justify taking him seriously the next time he says something that seems obviously wrong: we've already established that when he says apparently-obviously-wrong things he's usually right, so plain old induction will get us where we need to go. I think the OP is making a stronger claim. (And a different one: note that OP says explicitly that he isn't saying we should take Bob seriously because he might be right, but that we should take Bob seriously so as not to discourage him from thinking original thoughts in future.)

    And the OP doesn't (at least as I read it) seem like it stipulates that Bob is strikingly better epistemically than his peers in that sort of way. It says:

    Let Bob be an individual that I have a lot of intellectual respect for. For example, maybe Bob has a history of believing true things long before anyone else, or Bob has discovered or invented some ideas that I have found very useful.

    which isn't quite the same. One of the specific ways in which Bob might have earned that "lot of intellectual respect" is by believing true things long before everyone else, but that's just one example. And someone can merit a lot of intellectual respect without being so much better than everyone else.

    For an "intellectual venture capitalist" who generates a lot of wild ideas, mostly wrong but right more often than you'd expect, I do agree that we want to avoid stifling them. But we do also want to avoid letting them get entirely untethered from reality, and it's not obvious to me what degree of epistemic tenure best strikes that balance.

    (Analogy: successful writers get more freedom to ignore the advice of their editors. Sometimes that's a good thing, but not always.)

    Comment by gjm on Epistemic Tenure · 2019-02-19T14:43:47.905Z · score: 2 (3 votes) · LW · GW

    I think I'm largely (albeit tentatively) with Dagon here: it's not clear that we don't _want_ our responses to his wrongness to back-propagate into his idea generation. Isn't that part of how a person's idea generation gets better?

    One possible counterargument: a person's idea-generation process actually consists of (at least) two parts, generation and filtering, and most of us would do better to have more fluent _generation_. But even if so, we want the _filtering_ to work well, and I don't know how you enable evaluations to propagate back as far as the filtering stage but to stop before affecting the generation stage.

    I'm not saying that the suggestion here is definitely wrong. It could well be that if we follow the path of least resistance, the result will be _too much_ idea-suppression. But you can't just say "if there's a substantial cost to saying very wrong things then that's bad because it may make people less willing or even less able to come up with contrarian ideas in future" without acknowledging that there's an upside too, in making people less inclined to come up with _bad_ ideas in future.

    Comment by gjm on The Case for a Bigger Audience · 2019-02-15T17:26:46.563Z · score: 2 (3 votes) · LW · GW

    Sure: the author of a particular article may just want it to be read and shared as widely as possible. But what's locally best for them is not necessarily the same as what's globally best for the LW community.

    Put yourself in a different role: you're reading something of the sort that might be on LW. Would you prefer to read and discuss it here or on Facebook? For me, the answer is "definitely here". If your answer is generally "Facebook" then it seems to me that you want your writings discussed on Facebook, you want to discuss things on Facebook, and what would suit you best is for Less Wrong to go away and for people to just post things on Facebook. Which is certainly a preference you're entitled to have, but I don't think Less Wrong should be optimizing for people who feel that way.

    "Future of Go" summit with AlphaGo · 2017-04-10T11:10:40.249Z · score: 3 (4 votes)
    Buying happiness · 2016-06-16T17:08:53.802Z · score: 38 (38 votes)
    AlphaGo versus Lee Sedol · 2016-03-09T12:22:53.237Z · score: 19 (19 votes)
    [LINK] "The current state of machine intelligence" · 2015-12-16T15:22:26.596Z · score: 3 (4 votes)
    [LINK] Scott Aaronson: Common knowledge and Aumann's agreement theorem · 2015-08-17T08:41:45.179Z · score: 15 (15 votes)
    Group Rationality Diary, March 22 to April 4 · 2015-03-23T12:17:27.193Z · score: 6 (7 votes)
    Group Rationality Diary, March 1-21 · 2015-03-06T15:29:01.325Z · score: 4 (5 votes)
    Open thread, September 15-21, 2014 · 2014-09-15T12:24:53.165Z · score: 6 (7 votes)
    Proportional Giving · 2014-03-02T21:09:07.597Z · score: 10 (13 votes)
    A few remarks about mass-downvoting · 2014-02-13T17:06:43.216Z · score: 27 (42 votes)
    [Link] False memories of fabricated political events · 2013-02-10T22:25:15.535Z · score: 17 (20 votes)
    [LINK] Breaking the illusion of understanding · 2012-10-26T23:09:25.790Z · score: 19 (20 votes)
    The Problem of Thinking Too Much [LINK] · 2012-04-27T14:31:26.552Z · score: 7 (11 votes)
    General textbook comparison thread · 2011-08-26T13:27:35.095Z · score: 9 (10 votes)
    Harry Potter and the Methods of Rationality discussion thread, part 4 · 2010-10-07T21:12:58.038Z · score: 5 (7 votes)
    The uniquely awful example of theism · 2009-04-10T00:30:08.149Z · score: 36 (47 votes)
    Voting etiquette · 2009-04-05T14:28:31.031Z · score: 10 (16 votes)
    Open Thread: April 2009 · 2009-04-03T13:57:49.099Z · score: 5 (6 votes)