Comment by zack_m_davis on Blegg Mode · 2019-03-16T21:01:41.466Z · score: 6 (3 votes) · LW · GW

I wasn't claiming to summarize "Disguised Queries".

I may have misinterpreted what you meant by the phrase "makes essentially the point that."

the thing that you say no one says other than to push a particular position on trans issues

I see. I think I made a mistake in the great-great-grandparent comment. That comment's penultimate paragraph ended: "[...] and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context." I should not have written that, because as you pointed out in the great-grandparent, it's not true. This turned out to be a pretty costly mistake on my part, because we've now spent the better part of four comments litigating the consequences of this error, in a way we could have avoided if only I had taken more care to phrase the point I was trying to make less hyperbolically.

The point I was trying to make in the offending paragraph is that if someone honestly believes that the choice between multiple category systems is arbitrary or somewhat-arbitrary, then they should accept the choice being made arbitrarily or somewhat-arbitrarily. I agree that "It depends on what you mean by X" is often a useful motion, but I think it's possible to distinguish when it's being used to facilitate communication from when it's being used to impose frame control. Specifically: it's incoherent to say, "It's arbitrary, so you should do it my way," because if it were really arbitrary, the one would not be motivated to say "you should do it my way." In discussions about my idiosyncratic special interest, I very frequently encounter incredibly mendacious frame-control attempts from people who call themselves "rationalists" and who don't seem to do this on most other topics. (This is, of course, with respect to how I draw the "incredibly mendacious" category boundary.)

Speaking of ending conversations, I'm feeling pretty emotionally exhausted, and we seem to be spending a lot of wordcount on mutual misunderstandings, so unless you have more things you want to explain to me, maybe this should be the end of the thread? Thanks for the invigorating discussion! This was way more productive than most of the conversations I've had lately! (Which maybe tells you something about the quality of those other discussions.)

Comment by zack_m_davis on Blegg Mode · 2019-03-16T13:31:33.014Z · score: 9 (3 votes) · LW · GW

I mean, yes, there's the allusion in the title! (The post wasn't originally written to be shared on Less Wrong; it just seemed sufficiently sanitized to be shareable-here-without-running-too-afoul-of-anti-politics-norms after the fact.)

Comment by zack_m_davis on Blegg Mode · 2019-03-16T01:01:03.586Z · score: 7 (4 votes) · LW · GW

The "Disguised Queries" post that first introduced bleggs and rubes makes essentially the point that categories are somewhat arbitrary, that there's no One True Right Answer to "is it a blegg or a rube?", and that which answer is best depends on what particular things you care about on a particular occasion.

That's not how I would summarize that post at all! I mean, I agree that the post did literally say that ("The question 'Is this object a blegg?' may stand in for different queries on different occasions"). But it also went on to say more things that I think substantially change the moral—

If [the question] weren't standing in for some query, you'd have no reason to care.

[...] People who argue that atheism is a religion "because it states beliefs about God" are really trying to argue (I think) that the reasoning methods used in atheism are on a par with the reasoning methods used in religion, or that atheism is no safer than religion in terms of the probability of causally engendering violence, etc... [...]

[...] The a priori irrational part is where, in the course of the argument, someone pulls out a dictionary and looks up the definition of "atheism" or "religion". [...] How could a dictionary possibly decide whether an empirical cluster of atheists is really substantially different from an empirical cluster of theologians? How can reality vary with the meaning of a word? The points in thingspace don't move around when we redraw a boundary. [bolding mine—ZMD]

But people often don't realize that their argument about where to draw a definitional boundary, is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster...

I claim that what Yudkowsky said about the irrationality of appealing to the dictionary applies just as well to appeals to personal values or priorities. It's not false exactly, but it doesn't accomplish anything.

Suppose Bob says, "Abortion is murder, because it's the killing of a human being!"

Alice says, "No, abortion isn't murder, because murder is the killing of a sentient being, and fetuses aren't sentient."

As Alice and Bob's hired rationalist mediator, you could say, "You two just have different preferences about somewhat-arbitrary category boundaries, that's all! Abortion is murder-with-respect-to-Bob's-definition, but it isn't murder-with-respect-to-Alice's-definition. Done! End of conversation!"

And maybe sometimes there really is nothing more to it than that. But oftentimes, I think we can do more work to break the symmetry: to work out what different predictions Alice and Bob are making about reality, or what different preferences they have about reality, and refocus the discussion on that. As I wrote in "The Categories Were Made for Man to Make Predictions":

If different political factions are engaged in conflict over how to define the extension of some common word—common words being a scarce and valuable resource both culturally and information-theoretically—rationalists may not be able to say that one side is simply right and the other is simply wrong, but we can at least strive for objectivity in describing the conflict. Before shrugging and saying, "Well, this is a difference in values; nothing more to be said about it," we can talk about the detailed consequences of what is gained or lost by paying attention to some differences and ignoring others.

We had an entire Sequence specifically about this! You were there! I was there! Why doesn't anyone remember?!

Comment by zack_m_davis on Blegg Mode · 2019-03-15T23:09:20.281Z · score: 3 (2 votes) · LW · GW

I don't see bullets on Firefox 65.0.1, but I do on Chromium 72.0.3626.121 (both Xubuntu 16.04.5).

Comment by zack_m_davis on Blegg Mode · 2019-03-15T23:06:08.839Z · score: 4 (2 votes) · LW · GW

I think that this allegory misses crucial aspects of the original situation

That makes sense! As gjm noted, sometimes unscrupulous authors sneakily construct an allegory with the intent of leading the reader to a particular conclusion within the context of the allegory, in the hope that the reader will map that conclusion back onto the real-world situation in a particular way, without the author doing the work of showing that the allegory and the real-world situation are actually analogous in the relevant respects.

I don't want to be guilty of that! This is a story about bleggs and rubes that I happened to come up with in the context of trying to think about something else (and I don't want to be deceptive about that historical fact), but I definitely agree that people shouldn't map the story onto some other situation unless they actually have a good argument for why that mapping makes sense. If we wanted to discuss the something else rather than the bleggs and rubes, we should do that on someone else's website. Not here.

Comment by zack_m_davis on Blegg Mode · 2019-03-15T07:42:47.336Z · score: 2 (1 votes) · LW · GW

your argument seems to rely purely on the intuition of your fictional character

Yes, the dependence on intuition is definitely a weakness of this particular post. (I wish I knew as much math as Jessica Taylor! If I want to become stronger, I'll have to figure out how to fit more studying into my schedule!)

you seem to assume that categories are non-overlapping.
you seem to assume that categories are crisp rather than fuzzy

I don't believe either of those things. If you have any specific wording suggestions on how I can write more clearly so as to better communicate to my readers that I don't believe either of those things, I'm listening.

If you take a table made out of a block of wood, and start to gradually deform its shape until it becomes perfectly spherical, is there an exact point when it is no longer called a "table"?

No, there is no such exact point; like many longtime Less Wrong readers, I, too, am familiar with the Sorites paradox.

But in any case, the categorization would depend on the particular trade-offs that the designers of the production line made (depending on things like, how expensive is it to run the palladium scanner)

Right. Another example of one of the things the particular algorithm-design trade-offs will depend on is the distribution of objects.

We could imagine a slightly altered parable in which the frequency distribution of objects is much more evenly spread out in color–shape–metal-content space: while cubeness has a reasonably strong correlation with redness and palladium yield, and eggness with blueness and vanadium yield, you still have a substantial fraction of non-modal objects: bluish-purple rounded cubes, reddish-purple squarish eggs, &c.

In that scenario, a natural-language summary of the optimal decision algorithm wouldn't talk about discrete categories: you'd probably want some kind of scoring algorithm with thresholds for various tests and decisions as you describe, and no matter where you set the threshold for each decision, you'd still see a lot of objects just on either side of the boundary, with no good "joint" to anchor the placement of a category boundary.

In contrast, my reading of Yudkowsky's original parable posits a much sparser, more tightly-clustered distribution of objects in configuration space. The objects do vary somewhat (some bleggs are purple, some rubes contain vanadium), but there's a very clear cluster-structure: virtually all objects are close to the center of—and could be said to "belong to"—either the "rube" cluster or the "blegg" cluster, with a lot of empty space in between.

In this scenario, I think it does make sense for a natural-language summary of the optimal decision algorithm to talk about two distinct "categories" where the density in the configuration space is concentrated. Platonic essences are just the limiting case as the overlap between clusters goes to zero.
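
To make the contrast concrete, here's a toy sketch (entirely my own construction, with made-up numbers and a single one-dimensional "blueness" score standing in for the whole feature space): in the smeared-out world, any threshold you pick leaves a sizable fraction of objects hugging the boundary, whereas in the tightly-clustered world the boundary sits in nearly empty space.

```python
# Toy illustration (invented numbers): borderline cases under a threshold rule,
# in a "smeared-out" world versus a "tightly clustered" one.
import random
random.seed(0)

def near_boundary(samples, threshold=0.5, margin=0.05):
    """Fraction of objects within `margin` of the decision threshold."""
    return sum(abs(x - threshold) < margin for x in samples) / len(samples)

# Evenly-spread world: blueness scores uniform on [0, 1].
smeared = [random.random() for _ in range(10_000)]

# Clustered world: most objects sit near 0.1 (rube-like) or 0.9 (blegg-like).
clustered = [random.gauss(random.choice([0.1, 0.9]), 0.03) for _ in range(10_000)]

print(near_boundary(smeared))    # ~0.10: lots of borderline objects
print(near_boundary(clustered))  # ~0.00: the category boundary falls in empty space
```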

In my fanfiction, I imagine that some unknown entity has taken objects that were originally in the "rube" cluster, and modified them so that they appear, at first glance but not on closer inspection, to be members of the "blegg" cluster. At first, the protagonist wishes to respect the apparent intent of the unknown entity by considering the modified objects to be bleggs. But in the process of her sorting work, the protagonist finds herself wanting to mentally distinguish adapted bleggs from regular bleggs, because she can't make the same job-relevant probabilistic inferences with the new "bleggs (either regular or adapted)" concept as she could with the old "bleggs (only standard bleggs)" concept.

To see why, forget about the category labels for a moment and just consider the clusters in the six-dimensional color–shape–texture–firmness–luminescence–metal-content configuration space.

Before the unknown entity's intervention, we had two distinct clusters: one centered at {blue, egg, furry, flexible, luminescent, vanadium}, and another centered at {red, cube, smooth, hard, non-luminescent, palladium}.

After the unknown entity's intervention, we have three distinct clusters: the two previously-existing clusters, and a new cluster centered at {blue, egg, furry, hard, non-luminescent, palladium}. This is a different situation! Workers on the sorting line might want different language in order to describe this new reality!

Now, if we were to project into the three-dimensional color–shape–texture subspace, then we would have two clusters again: with just these attributes, we can't distinguish between bleggs and adapted bleggs. But since workers on the sorting line can observe hardness, and care about metal content, they probably want to use the three-cluster representation, even if they suspect the unknown entity might thereby feel disrespected.
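
Here's a minimal sketch of that picture (the 0/1 encodings and the distance function are my own invented illustration, not anything from the original parable): in the full six-dimensional space the adapted bleggs form a third cluster, but projecting onto the three cheap-to-observe dimensions collapses them onto the ordinary bleggs.

```python
# Cluster centers with made-up 0/1 encodings (1 = blegg-like value, 0 = rube-like value).
# Dimensions: color, shape, texture, firmness, luminescence, metal content.
import math

blegg         = (1, 1, 1, 1, 1, 1)  # {blue, egg, furry, flexible, luminescent, vanadium}
rube          = (0, 0, 0, 0, 0, 0)  # {red, cube, smooth, hard, non-luminescent, palladium}
adapted_blegg = (1, 1, 1, 0, 0, 0)  # {blue, egg, furry, hard, non-luminescent, palladium}

def distance(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# In the full six-dimensional space, adapted bleggs sit well away from both
# original cluster centers: a genuinely new, third cluster.
print(distance(adapted_blegg, blegg), distance(adapted_blegg, rube))  # ≈ 1.73 each

# Projected onto the cheap-to-observe color–shape–texture subspace, adapted
# bleggs land exactly on top of ordinary bleggs: only two clusters remain.
print(distance(adapted_blegg[:3], blegg[:3]))  # 0.0
```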

Comment by zack_m_davis on Blegg Mode · 2019-03-14T06:00:46.856Z · score: 3 (2 votes) · LW · GW

I share jessicata's feeling that the best set of concepts to work with may not be very sensitive to what's easy to detect. [...] there doesn't seem to be a general pattern of basing that refinement on the existence of convenient detectable features

Yeah, I might have been on the wrong track there. (Jessica's comment is great! I need to study more!)

I am concerned that we are teetering on the brink of -- if we have not already fallen into -- exactly the sort of object-level political/ideological/personal argument that I was worried about

I think we're a safe distance from the brink.

Words like "nefarious" and "terrorist" seem like a warning sign

"Nefarious" admittedly probably was a high-emotional-temperature warning sign (oops), but in this case, "I don't negotiate with terrorists" is mostly functioning as the standard stock phrase to evoke the timeless-decision-theoretic "don't be extortable" game-theory intuition, which I don't think should count as a warning sign, because it would be harder to communicate if people had to avoid genuinely useful metaphors because they happened to use high-emotional-valence words.

Comment by zack_m_davis on Blegg Mode · 2019-03-14T05:32:24.759Z · score: 3 (2 votes) · LW · GW

But it seems to me that that the relative merits of these depend on the agent's goals, and the best categorization to adopt may be quite different depending on whether you're [...] and also on your own values and priorities.

Yes, I agree! (And furthermore, the same person might use different categorizations at different times depending on what particular aspects of reality are most relevant to the task at hand.)

But given an agent's goals in a particular situation, I think it would be a shocking coincidence for it to be the case that "there are [...] multiple roughly-equally-good categorizations." Why would that happen often?

If I want to use sortable objects as modern art sculptures to decorate my living room, then the relevant features are shape and color, and I want to think about rubes and bleggs (and count adapted bleggs as bleggs). If I also care about how the room looks in the dark and adapted bleggs don't glow in the dark like ordinary bleggs do, then I want to think about adapted bleggs as being different from ordinary bleggs.

If I'm running a factory that harvests sortable objects for their metal content and my sorting scanner is expensive to run, then I want to think about rubes and ordinary bleggs (because I can infer metal content with acceptably high probability by observing the shape and color of these objects), but I want to look out for adapted bleggs (because their metal content is, with high probability, not what I would expect based on the color/shape/metal-content generalizations I learned from my observations of rubes and ordinary bleggs). If the factory invests in a new state-of-the-art sorting scanner that can be cheaply run on every object, then I don't have any reason to care about shape or color anymore—I just care about palladium-cored objects and vanadium-cored objects.
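
As a rough illustration of that trade-off (the numbers here are purely hypothetical, not from the parable), the "should I bother running the expensive scanner on this object?" question is just an expected-cost comparison:

```python
# Toy expected-cost comparison (invented numbers): scan whenever the expected
# loss from guessing metal content off cheap features exceeds the scan's cost.

def should_scan(p_guess_wrong, scan_cost, missort_loss):
    """True if paying for a definitive scan beats guessing from color/shape."""
    return p_guess_wrong * missort_loss > scan_cost

# Ordinary bleggs: color/shape predict vanadium ~98% of the time, so just guess.
print(should_scan(p_guess_wrong=0.02, scan_cost=1.0, missort_loss=10.0))  # False

# Adapted bleggs: the cheap features now point the wrong way, so the scan pays for itself.
print(should_scan(p_guess_wrong=0.95, scan_cost=1.0, missort_loss=10.0))  # True
```

(And as the scan cost goes to zero, `should_scan` returns True for everything, which is the regime where shape and color stop mattering.)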

and picking one of those rather than another is not an epistemological error.

If you're really somehow in a situation where there are multiple roughly-equally-good categorizations with respect to your goals and the information you have, then I agree that picking one of those rather than another isn't an epistemological error. Google Maps and MapQuest are not exactly the same map, but if you just want to drive somewhere, they both reflect the territory pretty well: it probably doesn't matter which one you use. Faced with an arbitrary choice, you should make an arbitrary choice: flip a coin, or call random.random().

And yet somehow, I never run into people who say, "Categories are somewhat arbitrary, therefore you might as well roll a d3 to decide whether to say 'trans women are women' or 'so-called "trans women" are men' or 'transwomen are transwomen', because each of these maps is doing a roughly-equally-good job of reflecting the relevant aspects of the territory." But I run into lots of people who say, "Categories are somewhat arbitrary, therefore I'm not wrong to insist that trans women are women," and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.

You see the problem? If the one has some sort of specific argument for why I should use a particular categorization system in a particular situation, then that's great, and I want to hear it! But it has to be an argument and not a selectively-invoked appeal-to-arbitrariness conversation-halter.

Comment by zack_m_davis on Blegg Mode · 2019-03-14T01:50:43.077Z · score: 4 (2 votes) · LW · GW

Any "categories" you introduce here are at best helpful heuristics, with no deep philosophical significance.

I mean, yes, but I was imagining that there would be some deep philosophy about how computationally bounded agents should construct optimally helpful heuristics.

Comment by zack_m_davis on Blegg Mode · 2019-03-13T06:20:40.132Z · score: 4 (3 votes) · LW · GW

Trying to think of some examples, it seems to me that what matters is simply the presence of features that are "decision-relevant with respect to the agent's goals". [...]

So, I think my motivation (which didn't make it into the parable) for the "cheap to detect features that correlate with decision-relevant expensive to detect features" heuristic is that I'm thinking in terms of naïve Bayes models. You imagine a "star-shaped" causal graph with a central node (whose various values represent the possible categories you might want to assign an entity to), with arrows pointing to various other nodes (which represent various features of the entity). (That is, we're assuming that the features of the entity are conditionally independent given category membership: P(X|C) = Π_i P(X_i|C).) Then when we observe some subset of features, we can use that to update our probabilities of category-membership, and use that to update our probabilities of the features we haven't observed yet. The "category" node doesn't actually "exist" out there in the world—it's something we construct to help factorize our probability distribution over the features (which do "exist").

So, as AI designers, we're faced with the question of how we want the "category" node to work. I'm pretty sure there's going to be a mathematically correct answer to this that I just don't know (yet) because I don't study enough and haven't gotten to Chapter 17 of Daphne Koller and the Methods of Rationality. Since I'm not there yet, if I just take an intuitive amateur guess at how I might expect this to work, it seems pretty intuitively plausible that we're going to want the category node to be especially sensitive to cheap-to-observe features that correlate with goal-relevant features? Like, yes, we ultimately just want to know as much as possible about the decision-relevant variables, but if some observations are more expensive to make than others, that seems like the sort of thing the network should be able to take into account, right??
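
Here's a minimal sketch of the kind of naïve Bayes computation I have in mind, with invented numbers (an amateur illustration of the setup, to be clear, not the mathematically correct answer to the design question):

```python
# Naïve Bayes sketch (made-up numbers): the "category" node is a latent variable
# that lets us predict the expensive feature (metal content) from cheap ones.

priors = {"blegg": 0.5, "rube": 0.5}

# P(feature value | category), assuming conditional independence given category.
likelihoods = {
    "blegg": {"blue": 0.98, "egg": 0.98, "vanadium": 0.98},
    "rube":  {"blue": 0.02, "egg": 0.02, "vanadium": 0.02},
}

def posterior(observed):
    """Posterior over categories given a list of observed feature values."""
    scores = {}
    for cat, prior in priors.items():
        score = prior
        for feature in observed:
            score *= likelihoods[cat][feature]
        scores[cat] = score
    total = sum(scores.values())
    return {cat: score / total for cat, score in scores.items()}

# Observing the cheap features (color, shape) ...
post = posterior(["blue", "egg"])
# ... lets us bet on the expensive one (metal content) without scanning.
p_vanadium = sum(post[cat] * likelihoods[cat]["vanadium"] for cat in post)
print(post)        # the category node is nearly certain it's a blegg
print(p_vanadium)  # ≈ 0.98: expect vanadium
```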

Remember those 2% of otherwise ordinary bleggs that contain palladium? Personally, I'd want a category for those

I agree that "things that look like 'bleggs' that contain palladium" is a concept that you want to be able to think about. (I just described it in words, therefore it's representable!) But while working on the sorting line, your visual system's pattern-matching faculties aren't going to spontaneously invent "palladium-containing bleggs" as a thing to look out for if you don't know any way to detect them, whereas if adapted bleggs tend to look different in ways you can see, then that category is something your brain might just "learn from experience." In terms of the naïve Bayes model, I'm sort of assuming that the 2% of palladium containing non-adapted bleggs are "flukes": that variable takes that value with that probability independently of the other blegg features. I agree that if that assumption were wrong, then that would be really valuable information, and if you suspect that assumption is wrong, then you should definitely be on the lookout for ways to spot palladium-containing bleggs.

But like, see this thing I'm at least trying to do here, where I think there's learnable statistical structure in the world that I want to describe using language? That's pretty important! I can totally see how, from your perspective, on certain object-level applications, you might suspect that the one who says, "Hey! Categories aren't even 'somewhat' arbitrary! There's learnable statistical structure in the world; that's what categories are for!" is secretly being driven by nefarious political motivations. But I hope you can also see how, from my perspective, I might suspect that the one who says, "Categories are somewhat arbitrary; the one who says otherwise is secretly being driven by nefarious political motivations" is secretly being driven by political motivations that have pretty nefarious consequences for people like me trying to use language to reason about the most important thing in my life, even if the psychological foundation of the political motivation is entirely kindhearted.

Comment by zack_m_davis on Editor Mini-Guide · 2019-03-13T04:48:45.825Z · score: 2 (1 votes) · LW · GW

This comment is a test! I prefer the plain-Markdown editor, but I want to try using LaTeX for one comment, so I temporarily swapped my user settings to the "rich" editor so that I could try out the LaTeX editor here. Then after submitting this comment, I'll switch my settings _back_ to Markdown, and edit this comment to see what the syntax is for using LaTeX from that editor (wrap it in $s, maybe?), which didn't seem to be explained in the post above? I would be pretty disappointed if it were to turn out that there's no way to do LaTeX from the Markdown editor, but I would also be somewhat surprised.

Edit: even after switching my settings back, editing this comment gives me the rich editor? So ... I guess individual comments are saved on a per-editor basis, with no translation? I'll concede that that makes sense from a technical standpoint, but it's somewhat disappointing.

Comment by zack_m_davis on Blegg Mode · 2019-03-13T03:05:45.142Z · score: 9 (5 votes) · LW · GW

It seems relevant here that Zack pretty much agreed with my description: see his comments using terms like "deniable allegory", "get away with it", etc.

So, from my perspective, I'm facing a pretty difficult writing problem here! (See my reply to Dagon.) I agree that we don't want Less Wrong to be a politicized space. On the other hand, I also think that a lot of self-identified rationalists are making a politically-motivated epistemology error in asserting category boundaries to be somewhat arbitrary, and it's kind of difficult to address what I claim is the error without even so much as alluding to the object-level situation that I think is motivating the error! For the long, object-level discussion, see my reply to Scott Alexander, "The Categories Were Made for Man To Make Predictions". (Sorry if the byline mismatch causes confusion; I'm using a pen name for that blog.) I didn't want to share "... To Make Predictions" on Less Wrong (er, at least not as a top-level post), because that clearly would be too political. But I thought the "Blegg Mode" parable was sufficiently sanitized such that it would be OK to share as a link post here?

I confess that I didn't put a lot of thought into the description text which you thought was disingenuous. I don't think I was being consciously disingenuous (bad intent is a disposition, not a feeling!), but after you pointed it out, I do see your point that, since there is some unavoidable political context here, it's probably better to explicitly label that, because readers who had a prior expectation that no such context would exist would feel misled upon discovering it. So I added the "Content notice" to the description. Hopefully that addresses the concern?

our categories are [...] somewhat arbitrary

No! Categories are not "somewhat arbitrary"! There is structure in the world, and intelligent agents need categories that carve the structure at the joints so that they can make efficient probabilistic inferences about the variables they're trying to optimize! "Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims." We had a whole Sequence about this! Doesn't anyone else remember?!

Trans-ness is not always "cheap to detect". I guess it's cheaper to detect than, say, sex chromosomes. OK -- and how often are another person's sex chromosomes "decision-relevant with respect to the agent's goals"?

You seem to be making some assumptions about which parts of the parable are getting mapped to which parts of the real-world issue that obviously inspired the parable. I don't think this is the correct venue for me to discuss the real-world issue. On this website, under this byline, I'd rather only talk about bleggs and rubes—even if you were correct to point out that it would be disingenuous for someone to expect readers to pretend not to notice the real-world reason that we're talking about bleggs and rubes. With this in mind, I'll respond below to a modified version of part of your comment (with edits bracketed).

I guess it's cheaper to detect than, say, [palladium or vanadium content]. OK -- and how often [is a sortable object's metal content] "decision-relevant with respect to the agent's goals"? Pretty much only if [you work in the sorting factory.] [That's] fairly uncommon -- for most of us, very few of the [sortable objects] we interact with [need to be sorted into bins according to metal content].

Sure! But reality is very high-dimensional—bleggs and rubes have other properties besides color, shape, and metal content—for example, the properties of being flexible-vs.-hard or luminescent-vs.-non-luminescent, as well as many others that didn't make it into the parable. If you care about making accurate predictions about the many properties of sortable objects that you can't immediately observe, then how you draw your category boundaries matters, because your brain is going to be using the category membership you assigned in order to derive your prior expectations about the variables that you haven't yet observed.

sex chromosomes, which is exactly the "expensive" feature the author identifies in the case of trans people.

The author did no such thing! It's epistemology fiction about bleggs and rubes! It's true that I came up with the parable while I was trying to think carefully about transgender stuff that was of direct and intense personal relevance to me. It's true that it would be disingenuous for someone to expect readers to not-notice that I was trying to think about trans issues. (I mean, it's in the URL.) But I didn't say anything about chromosomes! "If confusion threatens when you interpret a metaphor as a metaphor, try taking everything completely literally."

Trying to think of some examples, it seems to me that what matters is simply the presence of features that are "decision-relevant with respect to the agent's goals". [...]

Thanks for this substantive, on-topic criticism! I would want to think some more before deciding how to reply to this.

ADDENDUM: I thought some more and wrote a sister comment.

Comment by zack_m_davis on Blegg Mode · 2019-03-13T01:17:11.256Z · score: 5 (3 votes) · LW · GW

unless [...] categorization somehow is important to LW posts

Categorization is hugely relevant to Less Wrong! We had a whole Sequence about this!

Of course, it would be preferable to talk about the epistemology of categories with non-distracting examples if at all possible. One traditional strategy for avoiding such distractions is to abstract the meta-level point one is trying to make into a fictional parable about non-distracting things. See, for example, Scott Alexander's "A Parable on Obsolete Ideologies", which isn't actually about Nazism—or rather, I would say, is about something more general than Nazism.

Unfortunately, this is extremely challenging to do well—most writers who attempt this strategy fail to be subtle enough, and the parable falls flat. For this they deserve to be downvoted.

Comment by zack_m_davis on Blegg Mode · 2019-03-12T15:50:59.043Z · score: 11 (5 votes) · LW · GW

better to be up front about them

... you're right. (I like the aesthetics of the "deniable allegory" writing style, but delusionally expecting to get away with it is trying to have one's cake and eat it, too.) I added a "Content notice" to the description here.

Comment by zack_m_davis on Blegg Mode · 2019-03-12T14:46:52.302Z · score: 6 (3 votes) · LW · GW

Thanks. In retrospect, possibly a better approach for this venue would have been to carefully rewrite the piece for Less Wrong in a way that strips more subtext/conceals more of the elephant (e.g., cut the "disrespecting that effort" paragraph).

Comment by zack_m_davis on Blegg Mode · 2019-03-12T03:55:35.391Z · score: 3 (2 votes) · LW · GW

Can you say more? What should the description say instead? (I'm guessing you're referring to the fact that the post has some subtext that probably isn't a good topic fit for Less Wrong? But I would argue that the text (using the blegg/rube parable setting to make another point about the cognitive function of categorization) totally is relevant and potentially interesting!)

Blegg Mode

2019-03-11T15:04:20.136Z · score: 18 (13 votes)
Comment by zack_m_davis on Verbal Zendo · 2018-10-22T04:48:33.927Z · score: 1 (1 votes) · LW · GW

Another fun programming exercise is to do the other direction: have the user come up with a rule, and make the program come up with examples to try to test its hypotheses. (You want the program to generate examples that falsify half the remaining probability-mass.)
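
A rough sketch of what I mean (the toy hypothesis space of predicates over small integers is my own invention, just to have something concrete): keep a probability distribution over candidate rules, propose the example whose predicted answer splits the remaining mass closest to 50/50, then prune rules inconsistent with the user's answer.

```python
# Toy "Zendo in reverse" sketch: the program proposes examples to test its hypotheses.

hypotheses = {
    "even":            lambda n: n % 2 == 0,
    "divisible by 3":  lambda n: n % 3 == 0,
    "greater than 10": lambda n: n > 10,
    "prime":           lambda n: n > 1 and all(n % d for d in range(2, n)),
}
mass = {name: 1 / len(hypotheses) for name in hypotheses}  # uniform prior

def best_probe(candidates):
    """Pick the example whose probability of a 'yes' is closest to 1/2."""
    def p_yes(x):
        return sum(mass[h] for h in mass if hypotheses[h](x))
    return min(candidates, key=lambda x: abs(p_yes(x) - 0.5))

def update(example, user_says_yes):
    """Zero out hypotheses inconsistent with the user's answer; renormalize."""
    for h in mass:
        if hypotheses[h](example) != user_says_yes:
            mass[h] = 0.0
    total = sum(mass.values())
    for h in mass:
        mass[h] /= total

probe = best_probe(range(1, 30))
print(probe)                        # an example expected to falsify ~half the mass
update(probe, user_says_yes=True)   # pretend the user's rule accepts it
print({h: round(p, 2) for h, p in mass.items()})
```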

Comment by zack_m_davis on Open thread, June 5 - June 11, 2017 · 2017-06-11T19:58:20.627Z · score: 2 (2 votes) · LW · GW

What specific bad things would you expect to happen if the post was left up, with what probabilities? (I'm aware of the standard practice of not discussing ongoing legal cases, but have my doubts about whether allowing the legal system to operate under conditions of secrecy actually makes things better on net.)

Comment by zack_m_davis on Open thread, June 5 - June 11, 2017 · 2017-06-08T17:54:05.301Z · score: 1 (1 votes) · LW · GW

The mature way to handle suicidal people is to call professional help, as soon as possible.

It's worth noting that this creates an incentive to never talk about your problems.

My advice for people who value not being kidnapped and forcibly drugged by unaccountable authority figures who won't listen to reason is to never voluntarily talk to psychiatrists, for the same reason you should never talk to cops.

Comment by zack_m_davis on Open thread, June 5 - June 11, 2017 · 2017-06-08T03:54:39.643Z · score: 3 (3 votes) · LW · GW

I corresponded with sad_dolphin. It added a little bit of gloom to my day, but I don't regret doing it: having suffered from similar psychological problems in the past, I want to be there with my hard-won expertise for people working through the same questions. I agree that most people who talk about suicide in such a manner are unlikely to go through with it, but that doesn't mean they're not being subjectively sincere. I'd rather such cries for help not be disincentivized here (as you seem to be trying to do); I'd rather people be able to seek and receive support from people who actually understand their ideas than be callously foisted off onto alleged "experts" who don't understand.

Comment by zack_m_davis on Bet or update: fixing the will-to-wager assumption · 2017-06-08T03:19:18.207Z · score: 0 (0 votes) · LW · GW

But I'm being told that this is "meta-uncertainty" which right-thinking Bayesians are not supposed to have.

Hm. Maybe those people are wrong??

Clearly not since the normal distribution goes from negative infinity to positive infinity

That's right; I should have either said "approximately", or chosen a different distribution.

That 0.5 is conditional on the distribution of r, isn't it? That makes it not a different question at all.

Yes, it is averaging over your distribution for _r_. Does it help if you think of probability as relative to subjective states of knowledge?

Can you elaborate?

(Attempted humorous allusion to how Cox's theorem derives probability theory from simple axioms about how reasoning under uncertainty should work, less relevant if no one is talking about inventing new fields of math.)

Comment by zack_m_davis on Bet or update: fixing the will-to-wager assumption · 2017-06-07T23:45:53.184Z · score: 4 (4 votes) · LW · GW

As I said, I want a richer way to talk about probabilities, more complex than taking them as simple scalars. Do you think it's a bad idea?

That's right, I think it's a bad idea: it sounds like what you actually want is a richer way to talk about your beliefs about Coin 2, but you can do that using standard probability theory, without needing to invent a new field of math from scratch.

Suppose you think Coin 2 is biased and lands heads some unknown fraction _r_ of the time. Your uncertainty about the parameter _r_ will be represented by a probability distribution: say it's normally distributed with a mean of 0.5 and a standard deviation of 0.1. The point is, the probability of _r_ having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5. You'd have to ask a different question than "What is the probability of heads on the first flip?" if you want the answer to distinguish the two coins. For example, the probability of getting exactly _k_ heads in _n_ flips is C(_n_, _k_)(0.5)^_k_(0.5)^(_n_−_k_) for Coin 1, but (I think?) ∫₀¹ (1/√(0.02π))_e_^−((_p_−0.5)^2/0.02) C(_n_, _k_)(_p_)^_k_(1−_p_)^(_n_−_k_) _dp_ for Coin 2.
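
(If you want to check this numerically, here's a quick throwaway sketch using a crude Riemann sum, no special libraries: the single-flip probability comes out to 0.5 for both coins, but the sequence probabilities diverge.)

```python
# Numerical check (crude Riemann sum) of the Coin 1 vs. Coin 2 formulas above.
import math

def normal_pdf(p, mu=0.5, sigma=0.1):
    return math.exp(-((p - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def coin1_prob(k, n):
    return math.comb(n, k) * 0.5 ** n

def coin2_prob(k, n, steps=10_000):
    """P(exactly k heads in n flips of Coin 2), averaging over the belief about r."""
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps
        total += normal_pdf(p) * math.comb(n, k) * p ** k * (1 - p) ** (n - k) / steps
    return total

print(coin2_prob(1, 1))                      # ≈ 0.5: same as Coin 1 for a single flip
print(coin1_prob(5, 10), coin2_prob(5, 10))  # ≈ 0.246 vs. ≈ 0.21: sequences diverge
```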

Does St.Bayes frown upon it?

St. Cox probably does.

Comment by zack_m_davis on Bet or update: fixing the will-to-wager assumption · 2017-06-07T19:49:30.595Z · score: 4 (4 votes) · LW · GW

Can my utility function include risk aversion?

That would be missing the point. The vNM theorem says that if you have preferences over "lotteries" (probability distributions over outcomes; like, 20% chance of winning $5 and 80% chance of winning $10) that satisfy the axioms, then your decisionmaking can be represented as maximizing expected utility for some utility function over outcomes. The concept of "risk aversion" is about how you react to uncertainty (how you decide between lotteries) and is embodied in the utility function; it doesn't apply to outcomes known with certainty. (How risk-averse are you about winning $5?)

See "The Allais Paradox" for how this was covered in the vaunted Sequences.

In my hypothetical the two 50% probabilites are different. I want to express the difference between them. There are no sequences involved.

Obviously you're allowed to have different beliefs about Coin 1 and Coin 2, which could be expressed in many ways. But your different beliefs about the coins don't need to show up in your probability for a single coinflip. The reason for mentioning sequences of flips, is because that's when your beliefs about Coin 1 vs. Coin 2 would start making different predictions.
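
To give a concrete (made-up) example: if your belief about Coin 2's bias _r_ has mean 0.5 and standard deviation 0.1, then the probability of heads on a single flip is 0.5 for both coins, but the probability of two heads in a row is 0.25 for Coin 1 versus E[_r_²] = Var(_r_) + (E[_r_])² = 0.01 + 0.25 = 0.26 for Coin 2.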

Comment by zack_m_davis on Birth of a Stereotype · 2017-06-05T22:52:02.195Z · score: 0 (0 votes) · LW · GW

Correlations are symmetric, but _is evidence for_ may not be (depending on how you interpret the phrase): P(A|B) ≠ P(B|A) (unless P(A) == P(B)).
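
(For example: almost everyone who plays in the NBA is tall, but almost no one who is tall plays in the NBA; P(tall|NBA) is close to 1 while P(NBA|tall) is close to 0, even though height and NBA membership are positively correlated either way you look at it.)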

Comment by zack_m_davis on A Comment on Expected Utility Theory · 2017-06-05T04:59:32.685Z · score: 3 (3 votes) · LW · GW

Expected utility is not the same thing as expected dollars. As AgentStonecutter explained to you on Reddit last month, the standard assumption of diminishing marginal utility of money is entirely sufficient to account for preferring the guaranteed $250,000; no need to patch standard decision theory. (The von Neumann–Morgenstern theorem doesn't depend on decisions being repeated; if you want to escape your decisions being describable as the maximization of some utility function, you have to reject one of the axioms, even if your decision is the only decision in the universe.)
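
To illustrate with made-up numbers (I'm assuming for concreteness that the alternative to the sure $250,000 was something like a 50% shot at $1,000,000, and a starting wealth of $50,000; none of that is from the original thread), any concave utility function, such as log, already prefers the sure thing:

```python
# Toy expected-utility comparison under diminishing marginal utility (invented numbers).
import math

wealth = 50_000
u = math.log  # a standard concave utility-of-wealth function

sure_thing = u(wealth + 250_000)
gamble = 0.5 * u(wealth + 1_000_000) + 0.5 * u(wealth)

print(sure_thing, gamble)  # ≈ 12.61 vs. ≈ 12.34: the sure $250,000 wins
```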

Comment by zack_m_davis on Birth of a Stereotype · 2017-06-05T04:43:30.320Z · score: 1 (1 votes) · LW · GW

more than I’m willing to commit to an article I am writing out of boredom

As a reader, this gives me pause. If you didn't have any more compelling reason to write than that, you shouldn't expect anyone to have a compelling reason to read. Maybe give yourself more credit: you weren't merely bored; the fact that you may have felt bored is incidental to the fact that you had something to say!

there is no way I’m going to go through the rigours of Solomonoff induction

Solomonoff induction is uncomputable; it's great to be aware that the theoretical foundations exist, but it's also important to be aware of what the theoretical foundations are and aren't good for. (Imagine saying "there's no way I'm going to go through the rigors of predicting the future state of all air molecules here given their current state" when what you actually want is a thermometer.)

On first encountering the glasses users that were smart the fact that they wore glasses might have left a deep impression on the people who encountered them and may have been associated with their perceived intelligence.

But "On first encountering _X_ that were _Y_, the fact that they were _X_ might have left a deep impression on the people who encountered them" works for any _X_ and _Y_; it can't explain why the stereotype links glasses and intelligence in particular.

A more specific hypothesis: people are more likely to need glasses while reading, and reading is mentally associated with intelligence because it is in fact the case that P(likes to read | intelligent) > P(likes to read | not intelligent).

The above is the charitable hypothesis. I decline—at this juncture—to mention the less charitable one.

Don't leave your readers in suspense like that; it's cruel! (Also, what makes a hypothesis "charitable", exactly?)

Alas, the stereotypes seem to be unfounded.

Are they?

I am not Yudkowsky, and so I would not proffer an evolutionary psychology hypothesis

Eliezer doesn't have a magic license authorizing him in particular to tell just-so stories: if he can do it, anyone can! (Some argue that we shouldn't, but I don't think I agree.)

Comment by Zack_M_Davis on [deleted post] 2017-06-01T02:49:07.424Z

Yeah, that makes sense. Sorry. Feel free to say more or PM me if you want to try to have a careful-and-respectful discussion now (if you trust me).

Comment by Zack_M_Davis on [deleted post] 2017-05-31T22:32:58.486Z

encourage me to keep pressing on this issue whenever doing so is impactful. So I prefer to not be consoled until the root issue has been addressed

Is this really a winning move for you? I'm not budging. It doesn't look like you have a coalition that can deny me anything I care about. From my perspective, any activity spreading the message "Zack M. Davis should be shunned because of his writing at http://unremediatedgender.space/" is just free marketing.

Comment by Zack_M_Davis on [deleted post] 2017-05-31T22:14:50.222Z

(Just noticed this.)

a large (but not the only) factor in getting me to step down from a leadership position in a project I'm spending about half of my time on. [...] and someone with the viewpoint that shunning him was the wrong thing for me to do also stepped down from an equivalent leadership position in order to maintain a balance.

I wasn't aware of this, but it seems unfortunate. If successfully ostracizing me isn't going to happen anyway, "both of you step down from something that you previously wanted to do" seems like a worse outcome than "neither of you step down."

(For my own part, while I wouldn't invite you to any parties I host at my house, I have no interest in trying to get other people to exclude you from their events. I consider my goal in this whole affair as simply to make it clear that I don't intend to let social pressure influence my writing—a goal at which I think I've succeeded.)

shunning people who call others "delusional perverts" because of their gender

I hadn't bothered addressing this earlier, because I wanted to emphasize that my true rejection was "I don't negotiate with emotional blackmailers; I'm happy to listen and update on substantive criticism of my writing, but appeal to consequences is not a substantive criticism", but since it is relevant, I really think you've misunderstood the point of that post: try reading the second and third paragraphs again.

What I'm trying to do there is highlight my disapproval of the phenomenon where the perceived emotional valence of language overshadows its literal content. I understand very well that the phrase "delusional pervert" constitutes fighting words in a way that "paraphilic with mistaken views" doesn't, but I'm interested in developing the skill of being able to simultaneously contemplate framings with different ideological/emotional charges, especially including framings that make me and my friends look bad (precisely because those are the ones it's most emotionally tempting to overlook). People who aren't interested in this skill probably shouldn't read my blog, as the trigger warning page explains.

(Seriously, why isn't the trigger warning page good enough for you? It's one thing to say my writing should have a label to protect the sensitive, but it's another thing to say that you don't want my thoughts to exist!)

It would have been better if I'd been skilled enough to convince him to use a less aggressive tone throughout his writing by being gentler myself

Not all goals are achievable by sufficiently-skilled gentle social manipulation. If you can show me an argument that can persuade me to change my behavior given _my_ values, then I'll do so. If no such argument exists, then your skill and gentleness don't matter. (At least, I hope I'm not that hackable!)

Comment by Zack_M_Davis on [deleted post] 2017-05-30T16:36:45.048Z

Moreover, it seems like there's an incentive gradient at work here; the only way to gauge how costly it is for someone to act decently is to ask them how costly it is to them, and the more costly they claim it to be, the more the balance of discussion will reward them by letting them impose costs on others via nastiness while reaping the rewards of getting to achieve their political and interpersonal goals with that nastiness.

I agree that the incentives you describe exist, but the analysis cuts both ways: the more someone claims to have been harmed by allegedly-nasty speech, the more the balance of discussion will reward them by letting them restrict speech while reaping the rewards of getting to achieve their political and interpersonal goals with those speech restrictions.

Interpersonal utility aggregation might not be the right way to think of these kinds of situations. If Alice says a thing even though Bob has told her that the thing is nasty and that Alice is causing immense harm by saying it, Alice's true rejection of Bob's complaint probably isn't, "Yes, I'm inflicting _c_ units of objective emotional harm on others, but modifying my speech at all would entail _c_+1 units of objective emotional harm to me, therefore the global utilitarian calculus favors my speech." It's probably: "I'm not a utilitarian and I reject your standard of decency."

Comment by Zack_M_Davis on [deleted post] 2017-05-30T04:10:18.026Z

but it's honest, and I'm assuming you'd prefer that

Yes, thank you!

Basically, because you are psychotic

I definitely went through some psychosis states back in February and April, but I seem to be pretty stably back to my old self now. (For whatever that might be worth!) I have a lot of regrets about this period, but I don't regret most of my public comments.

If you have not already internalized "just because I believe something is true does not make it socially acceptable for me to go around trying to convince everyone else that it's true", I don't know that I will be able to briefly explain to you why that is the case.

Oh, I think I understand why; I'm not that socially retarded. Even so—if there's going to be one goddamned place in the entire goddamned world where people put relatively more emphasis on "arguing for true propositions about human psychology because they're true" and relatively less emphasis on social acceptability, shouldn't it be _us_? I could believe that there are such things as information hazards—I wouldn't publicize instructions on how to cheaply build a suitcase nuke—but this isn't one of them.

Comment by Zack_M_Davis on [deleted post] 2017-05-28T01:28:09.976Z

have already thought about the question of why (and to what extent) the community continues to interact with you to my satisfaction.

For obvious reasons, I'm extremely curious to hear your analysis if you're willing to share. (Feel free to PM me.)

from their behavior, I would expect anyone they praise to be terrible without redeeming features

I don't think that's a good inference! (See the anti-halo effect and "Are Your Enemies Innately Evil?") Even if you think the throwaway's rudeness and hostility makes them terrible, does it really make sense for guilt-by-association to propagate to anyone the throwaway approves of for any reason?

(from the great-grandparent)

This is about behavior, not belief. [...] (for their proselytizing, not for their beliefs)

I think it would be less cruel and more honest to just advocate for punishing people who believe a claim, rather than to advocate for punishing people who argue for the claim while simultaneously insisting that this isn't a punishment for the belief. What would be the point of restricting speech if the goal isn't to restrict thought?

Comment by Zack_M_Davis on [deleted post] 2017-05-27T13:39:15.250Z

If you feel like naming this valiant man in private, I commit to

Hi! 18239018038528017428 is almost certainly referring to me! (I would have predicted that you'd already have known this from Facebook, but apparently that prediction was wrong.)

somewhat abhorrent and damaging to the social fabric to start these conversations in any but the most careful and respectful way.

I tried that first. It turns out that it doesn't work: any substantive, clearly-worded claims just get adversarially defined as insufficiently respectful. I still had something incredibly important to protect (there is a word for the beautiful feeling at the center of my life, and the word is not woman; I want the right to use my word, and I want the right to do psychology in public and get the right answer), so I started trying other things.

Comment by Zack_M_Davis on [deleted post] 2017-05-27T12:44:27.953Z

Hi! 18239018038528017428 is almost certainly talking about me! My detailed views are probably more nuanced and less objectionable than you might infer from the discussion in this thread? But to help you assess for yourself why "the community" (whatever that is) has not yet shunned me, maybe start with this comment (which also contains links to my new gender blog).

Comment by zack_m_davis on Change · 2017-05-10T23:07:52.966Z · score: 0 (0 votes) · LW · GW

in a way that demeans someone not participating in the discussion

How is this relevant? Like, if I have a map that I claim reflects the territory, and you're saying that my map demeans someone who's not here, that doesn't say anything about whether the map predicts features of the territory that, in fact, aren't there.

atypical social understanding, not [...] underlying ambiguity

This is kind of mind-projection-fallacious. Situations that look unambiguous if your expectations are already calibrated to them can be a lot harder to decipher for people with atypical social understanding, like foreigners, children, or (in this case) otherwise-mostly-ordinary adults recovering from a psychotic break.

Comment by zack_m_davis on Change · 2017-05-10T22:54:13.545Z · score: 1 (1 votes) · LW · GW

and it would be rude to point them out directly

I don't think it's rude! Go ahead, stab me with the truth!

it would be more interesting to analyze what made this type of thinking appear and seem subjectively normal in the first place

I can see why you might think that, but unfortunately, it's actually not very interesting (recovering from stress- and sleep-deprivation-induced nervous breakdown back in February and trauma of subsequent involuntary "hospitalization" (I actually think the word imprisonment is more appropriate), phenomenology and family history suggesting an underlying disposition towards schizophrenia-like problems).

Comment by zack_m_davis on Change · 2017-05-07T15:01:27.806Z · score: 1 (1 votes) · LW · GW

Delete the post before too many people see it

Seems kind of anti-social? I'd rather read personal blogs containing the posts that the author thought worth posting at the time than blogs that have been post-publication-selected to exclude the posts that commenters pointed out make the author look bad, which suggests that I should extend the same courtesy to my readers.

think deeply about your life

Working on it, thanks!

Change

2017-05-06T21:17:45.731Z · score: 1 (1 votes)
Comment by zack_m_davis on OpenAI makes humanity less safe · 2017-04-03T20:10:26.157Z · score: 7 (7 votes) · LW · GW

to buy a seat on OpenAI’s board

I wish we lived in a world where the Open Philanthropy Project page could have just said it like that, instead of having to pretend that no one knows what "initiates a partnership between" means.

Comment by zack_m_davis on LW mentioned in influential 2016 Milo article on the Alt-Right · 2017-03-18T21:26:19.056Z · score: 5 (5 votes) · LW · GW

I agree! Indeed, your comment is a response to the something different that I wrote down! If I cared more about correcting this particular historical error, I would do more research and write something more down in a place that would get more views than this Less Wrong Discussion thread. Unfortunately, I'm kind of busy, so the grandparent is all that I bothered with!

Comment by zack_m_davis on LW mentioned in influential 2016 Milo article on the Alt-Right · 2017-03-18T21:00:05.048Z · score: 11 (11 votes) · LW · GW

But, but, this is not historically accurate! I'm sure there's a much greater overlap between Less Wrong readers and Unqualified Reservations readers than you would expect between an arbitrary pairing of blogs, but the explanation for that has to look something like "Yudkowsky and Moldbug both attract a certain type of contrarian nerd, and so you get some links from one community to the other from the few contrarian nerds that are part of both." The causality doesn't flow from us!

Comment by zack_m_davis on Do Scientists Already Know This Stuff? · 2017-03-16T19:50:18.698Z · score: 4 (3 votes) · LW · GW

but the Science versus Bayescraft rhetoric is a disaster.

What's wrong with you? It's true that people who don't already have a reason to pay attention to Eliezer could point to this and say, "Ha! An anti-science crank! We should scorn him and laugh!", and it's true that being on the record saying things that look bad can be instrumentally detrimental towards achieving one's other goals.

But all human progress depends on someone having the guts to just do things that make sense or say things that are true in clear language even if it looks bad if your head is stuffed with the memetic detritus of the equilibrium of the crap that everyone else is already doing and saying. Eliezer doesn't need your marketing advice.

But you probably won't understand what I'm talking about for another eight years, ten months.

Comment by zack_m_davis on Am I Really an X? · 2017-03-14T23:05:26.821Z · score: 1 (1 votes) · LW · GW

Zack_M_Davis provides another... [...] None of these dozens of experts provides a scientific reference for their side

You probably missed it (last paragraph of this comment), but I did in fact reference a blog FAQ and a book (official website, PDF that someone put online, probably in defiance of copyright law). These are both secondary sources, but with plenty of citations back to the original studies in the psychology literature (some of which I've read myself; I don't recall noticing anything being dishonestly cited to claim something that it didn't say).

Comment by zack_m_davis on An Intuition on the Bayes-Structural Justification for Free Speech Norms · 2017-03-10T04:30:03.970Z · score: 2 (2 votes) · LW · GW

Yes.

An Intuition on the Bayes-Structural Justification for Free Speech Norms

2017-03-09T03:15:30.674Z · score: 5 (6 votes)
Comment by zack_m_davis on Am I Really an X? · 2017-03-07T04:15:38.256Z · score: 2 (2 votes) · LW · GW

Am I missing something important?

Yes; people in general are really really shockingly bad at self-reporting. People don't know why they do things; they just notice themselves doing things and then tell a self-serving story about why they did the right things.

For example, prominent trans activist (and autogynephilia theory critic) Julia Serano writes (Whipping Girl, p. 84):

There was also a period of time when I embraced the word "pervert" and viewed my desire to be female as some sort of sexual kink. But after exploring that path, it became obvious that explanation could not account for the vast majority of instances when I thought about being female in a nonsexual context.

I trust that Julia Serano is telling the truth about her subjective experiences. But "it became obvious that explanation could not account for" is not an experience. It's a hypothesis about human psychology. I don't expect anyone to get that kind of thing right based on introspection alone!

Again, it's very important to emphasize that I'm not saying that non-exclusively-androphilic trans women who deny autogynephilia are particularly delusional. I'm saying that basically everyone is basically that delusional about basically everything!

Comment by zack_m_davis on Am I Really an X? · 2017-03-07T03:27:15.668Z · score: 14 (14 votes) · LW · GW

I was familiar with this.

Yup. This is a case (I can think of one more, but I'll let that be someone else's crusade) where we have the correct theory in the psychology literature, and all the nice smart socially-liberal people have heard of the theory, but they think to themselves, "Oh, but only bad outgroup people could believe something crazy like that; it's just some guy's theory; it probably isn't actually true."

Surprise! Everyone is lying! Everyone is lying because telling the truth would be politically inconvenient!

I find the first etiology similar to my model. [...] Writing things like "behaviorally-masculine girls" just sounds like paraphrase to me.

Similar, but the key difference is that I claim that there's no atomic "identity": whether a very behaviorally-masculine girl grows up to identify as a "butch lesbian" or "trans man" is mostly going to depend on the details of her/his cultural environment and the incentives she/he faces.

Do you have thoughts on Thoughts on The Blanchard/Bailey Distinction?

My reply.

I do find your confidence surprising

I agree that I probably look insanely confident! What's going on here—

(personal explanation of why I'm investing so much effort into being such an asshole screaming bloody murder about this at every possible opportunity follows; if you're just interested in the science, read the science; don't pay attention to me)

—is that I spent ten years having (mild, manageable) gender problems, all the while thinking, "Oh, but this is just autogynephilia; that can't be the same as actually being trans, because every time I use that word in public, _everyone says_ 'That's completely unsupported transphobic nonsense', so I must just be some weird non-trans outlier; oh, well."

... and then, I moved to Berkeley. I met a lot of trans women, who seem a lot like me along many dimensions. People who noticed the pattern started to tell me that they thought I was probably trans.

And I was like, "I agree that it seems plausible that I have a similar underlying psychological condition, and I'm definitely very jealous of all of our friends who get their own breasts and get refered to as she, but my thing looks pretty obviously related to my paraphilic sexuality and it's not at all obvious to me that full-time social transition is the best quality-of-life intervention when you take into account the serious costs and limitations of the existing technology. After all, I am biologically male and have received male socialization and you can use these facts to make probabilistic inferences about my psychology; I don't expect anyone to pretend not to notice. If some actual biologically-female women don't want people like me in their spaces, that seems like a legimate desire that I want to respect, even if other people make different choices."

And a lot of people are like, "Oh, that's just internalized transphobia; you're obviously a trans woman; we already know that transitioning is the correct quality-of-life intervention. Don't worry about invading women's spaces; Society has decided that you have a right to be a woman if you want."

And I'm like, "Okay, it's certainly possible that you're right about the optimal social conventions and quality-of-life interventions surrounding late-onset gender dysphoria in males, but how do you know? Where is the careful cost-benefit calculation that people are using to make these enormous life- and society-altering decisions?"

And no one knows. No one is in control. It's all just memetics and primate signaling games, just like Robin Hanson was trying to tell me for the past ten years, and I verbally agreed, but I didn't see it.

I trusted the Berkeley rationalist community. I trusted that social reality mostly made sense. I was wrong.

I still want to at least experiment with the same drugs everyone else is on. But I have no trust anymore.

Comment by zack_m_davis on Dreaming of Political Bayescraft · 2017-03-07T03:13:37.381Z · score: 1 (1 votes) · LW · GW

Huh, so it is! I've been away for a while!

Comment by zack_m_davis on Dreaming of Political Bayescraft · 2017-03-07T02:20:33.283Z · score: 2 (2 votes) · LW · GW

The payoff is the shock of, "Wait! A lot of the customs and ideas that my ingroup thinks are obviously inherently good, aren't actually what I want now that I understand more about the world, and I predict that my ingroup friends would substantially agree if they knew what I knew, but I can't just tell them, because from their perspective it probably just looks like I suddenly went crazy!"

I know, that's still vague. The reason I'm being vague is because the details are going to depend on your ingroup, and which hated outgroup's body of knowledge you chose to study. Sorry about this.

Comment by zack_m_davis on Dreaming of Political Bayescraft · 2017-03-07T02:08:39.276Z · score: 1 (1 votes) · LW · GW

But wasn't your point that if they're enlightened enough, they would refuse to carry out the government's orders?

No, I'm thinking that if someone understood history, human psychology, and the game-theoretic Bayes-structure of the universe well enough, they might be able to understand how and why currently-existing governments evolved, and then use that knowledge to engineer new institutions that do a better job of being powerful, stable, and protecting human values?

But this is just me thinking out loud. If this comment isn't useful or interesting, you should downvote it!

Comment by zack_m_davis on Dreaming of Political Bayescraft · 2017-03-07T01:41:10.948Z · score: 2 (2 votes) · LW · GW

Yes, that's a better way of putting it, thanks.

Comment by zack_m_davis on Am I Really an X? · 2017-03-07T01:34:05.196Z · score: 1 (1 votes) · LW · GW

some people whose opinions seem worth listening to

Worth listening to?—of course. Worth believing after looking at the rest of the available evidence? My claim is, "No, this theory looks really solid and explains so much of what I see in myself and what I see in other people that I trust it more than any particular contradictory self-report; psychology is about invalidating people's identities."

You might disagree. Most trans women might disagree. And that's okay! It's okay for my world-model to not agree with your world-model!

Dreaming of Political Bayescraft

2017-03-06T20:41:16.658Z · score: 1 (1 votes)

Rationality Quotes January 2010

2010-01-07T09:36:05.162Z · score: 3 (6 votes)

News: Improbable Coincidence Slows LHC Repairs

2009-11-06T07:24:31.000Z · score: 7 (8 votes)