Discussion on the choice of concepts

post by KatjaGrace · 2021-01-14T04:00:13.283Z · LW · GW · 4 comments


“The reason that you can currently make toast without doing great damage is just that your toaster is stupid.”

“Can ‘stupid’ be correctly applied to toasters?”

“Yes”

“What if I say no?”

“Well, if you have a conception of stupidity that can’t be applied to toasters, and one that can, why would you choose the one that can’t?”

“But I don’t have two—I’m talking about the actual concept”

“There isn’t an actual concept, there are a bajillion concepts, and you can use whichever one you want.”

“There’s one that people mean”

“Not really—each person has a slightly different usage, and probably hasn’t pinned it down. For instance if you ask them if toasters are stupid, they might be unsure.”

“Yes! They are unsure because they are trying to guess what the real concept is, from their limited collection of exposures to it. If it were about making one up, why would they be uncertain?”

“They might be uncertain which one they want to make up”

“You’re saying when people say words, they are ascribing meanings to them that they just made up, according to which definition they like most?”

“Well, they like definitions that fit with other people’s usage a lot more than other ones.”

“I think they are just guessing what the real meaning is”

“There isn’t a real meaning”

“Ok, what the consensus meaning is”

“There isn’t a consensus”

“Yeah but they believe there is one”

“You’re like a word meaning nihilist—you want only these ‘real’ word meanings or at least these word meanings of consensus, yet you know they don’t exist. That seems sad.”

“Maybe, but that doesn’t make it wrong. And also, I was talking about what other people do.”

“What does it matter what other people do? You can use whatever meanings you want.”

“That seems unfriendly somehow”

“What if you do it in a friendly way? For instance, where a meaning is ambiguous, choosing the best one—such as saying that toasters can be stupid?”

“It’s more a vibe of do-it-alone responsibility for everything, thinking of others as machinery that happens to be near you, that rings my alarm bells. Leaving the common experience of word usage to stand outside the system, as it were, and push the common stock of concepts in the way that you calculate best. At least it seems somehow lonely and cold”

“That’s a bit dramatic - I think the odd nudge in a good direction is well within the normal human experience of word usage. Plus, often people clearly redefine words somewhat in the context of a specific conversation. Would it be so strange if within our conversation we deemed ‘stupid’ applicable to toasters? Not doing so seems like it will only limit our discussion and edge us toward taking up some further concept like shmoopid to fill the gap.”

“It’s not at all clear to me that that is the only bad consequence at stake. For instance, words have all kinds of connotations besides what you explicitly think of them as being about. If you just declare that ‘stupid’ applies to toasters, then try to use it, you’ll doubtless be saying all kinds of things about toasters that you don’t mean. For instance, that they are mildly reprehensible, and that you don’t like them.”

“I don’t know that I would have used it if I didn’t implicitly accept the associations, and that is a risk one always seems to run in using words, even when you would deem them to apply.”

“Hmm. Ok, maybe. This sounds like a lot of work though, and I have done ok not thinking about using my influence over words until this day.”

“You think you have done ok, but word meanings are a giant tragedy of the commons. You might have done untold damage. We know that interesting concepts are endlessly watered down by exaggerators and attention seekers choosing incrementally wider categories at every ambiguity. That kind of thing might be going on all over the place. Maybe we just don’t know what words could be, if we were trying to do them well, instead of everyone being out to advance their own utterings.”

4 comments

Comments sorted by top scores.

comment by abramdemski · 2021-01-14T22:15:03.711Z · LW(p) · GW(p)

“You think you have done ok, but word meanings are a giant tragedy of the commons. You might have done untold damage. We know that interesting concepts are endlessly watered down by exaggerators and attention seekers choosing incrementally wider categories at every ambiguity. That kind of thing might be going on all over the place. Maybe we just don’t know what words could be, if we were trying to do them well, instead of everyone being out to advance their own utterings.”

"You know, you're speaking as if I'm contributing to the tragedy of the commons, while you are the one who is avoiding it. But you're the one who doesn't think word-meaning is serious enough to elevate beyond an arbitrary choice, whereas I was the one concerned with the real meaning of words. Doesn't your casual stance invite the greater risk of tragedy? Isn't my attempt to cooperate with a larger group the sort of thing which avoids tragedy?"

"I'm far from indifferent, or casual! Denying that there is one correct definition of a word does not make language arbitrary, or unimportant."

"Yes, I get that... and since I didn't explicitly say it before: I concede that there is no fundamental reason we have to stick to common usage, and furthermore, if you're trying to figure out what common usage is in order to decide whether to agree with some point in a discussion, you're probably going down a wrong track. But, look. That doesn't mean you're allowed to make a word mean anything you want."

"I literally am. There are no word police."

"... yeah ... but, look. According to my schoolbooks, at least, biologists define 'life' in a way which excludes viruses, right? Because they don't have 'cells', and there's some doctrine about life consisting of cells. And that's crazy, right? All the big, important intuitions about biology apply to viruses. They're clearly a form of life, because they reproduce and evolve, just like life. If you're going to go around with a narrow concept of 'life' which excludes viruses, you are missing something. You're not just going to be using language in a way I find disagreeable. Your mental heuristics are going to reach poorer conclusions, because you don't apply them broadly enough. Unless you have some secondary concept, 'pseudo-life', which plays the role in your ontology which 'life' plays in mine. In which case it is just a translation issue."

"A virus doesn't have any metabolism, though. That's pretty important to a lot of biology!"

"... Fine, but that still plays to my point that definitions are important, and can be wrong!"

"Hm. I think we both agree that definitions can be good and bad. But, what would make one wrong?"

"It's the same thing that makes anything wrong. Bad definitions lead to low predictive accuracy. If you use worse definitions, you're going to tend to lose bets against people who use better definitions, all else being equal."

"Hmm. I'm pretty on board with the Bayesian thing, but this seems somehow different. I have an intuition that which definitions you use shouldn't matter, at all, to how you predict."

"That seems patently false in practice."

"Sure, but... the Bayesian ideal of rationality is an agent with unlimited processing power. It can trivially translate things from one definition to another. The words are just a tool it uses to describe its beliefs. Hence, definitions may influence efficiency of communication, but they shouldn't influence the quality of the beliefs themselves."

"I think I see the problem here. You're imagining just speaking with the definitions. I'm imagining thinking in those terms. I think we'd be on the same page, if I thought speaking were the only concern. In any conversation, the ideal is to communicate efficiently and accurately, in the language as it's understood by the listeners. There's a question of who and how to adjust when participants in a conversation have differing definitions, or don't know whether they have the same ones. But setting that aside..."

"Sure, I think that's what I've been trying to say!"

"But there's another way to think about language. As you know, prediction is compression in orthodox Bayesianism. So a belief distribution can be thought of as a language, and vice versa -- we can translate between the two, using coding schemes. So, in that sense, our internal language just is our beliefs, and by definition has to do with how we make predictions."
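(As a concrete aside on the prediction-is-compression point: under an optimal code, a symbol with probability p costs −log₂ p bits, so a belief distribution directly determines a compression scheme, and a sharper predictor compresses the actual data into fewer bits. The sketch below is a minimal toy illustration of this standard Shannon-coding fact; the distributions and data are made up for the example.)

```python
import math

def codelength_bits(probs, sequence):
    """Total optimal code length (in bits) of a sequence under a
    predictive distribution: each symbol costs -log2 p(symbol)."""
    return sum(-math.log2(probs[s]) for s in sequence)

# Two "belief distributions" over the same three symbols.
sharp = {"a": 0.8, "b": 0.1, "c": 0.1}   # concentrates belief on 'a'
vague = {"a": 1/3, "b": 1/3, "c": 1/3}   # maximally uncertain

# Data in which 'a' really is the common symbol.
data = ["a"] * 8 + ["b", "c"]

# The better predictor doubles as the better compressor.
print(codelength_bits(sharp, data))   # about 9.2 bits
print(codelength_bits(vague, data))   # about 15.8 bits
```

In this sense choosing a distribution and choosing a code are the same act, which is the equivalence the speaker is gesturing at.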

"Sure, ok, but that still doesn't need to have much to do with how we talk about things; we can use different languages internally and externally. Like you said, the ideal is to translate our thoughts into the language our listeners can best understand."

"Yes, BUT, that's a more cool and collected way of relating to others -- dare I say cold and isolated, as per my earlier line of thinking. It's a bit like turning in a math assignment without any shown work. You can't bare your soul to everyone, but among trusted friends, you want to talk about how you actually think, pre-translation, because if you do that, you might actually stand a chance of improving how you think."

"I don't think we can literally convey how we think -- it would be a big mess of neural activations. We're doomed to speak in translation."

"Ok, point conceded. But there are degrees. I guess what I'm trying to say is that it seems important to the workings of my internal ontology that 'toasters' just aren't something that can be labelled as 'stupid'; it's a confused notion..."

"Hm, well, I feel it's the reverse, there's something wrong with not being able to label toasters that way..."

comment by gjm · 2021-01-14T19:45:44.063Z · LW(p) · GW(p)

"Since you're troubled by the other possibly unwanted associations of the word stupid, how about we just agree to say that toasters aren't highly intelligent? It doesn't really matter whether you say that that's because toasters aren't the sort of thing one can call intelligent, or that it's because you could call them intelligent if they were but they aren't; either way we can agree that toasters are not highly intelligent agents, and that's what matters."

"Oh, yeah, that works."

"Great. Let's move on."

(Of course, in many arguments about how one should define things there isn't a sufficiently convenient circumlocution, either because there isn't a good one at all or because it's super-important to have a handy short term or because the question is exactly about how one particular term should be used.)

comment by ryan_b · 2021-01-15T15:39:33.723Z · LW(p) · GW(p)

This feels like one of those social-reality level problems. It seems to me that as long as the concept has a single socially real meaning, we get all the same value out of it with respect to communication. I am unsure about thinking; on the one hand individually it is better to have a concept that cleaves physical reality at the joints, but it feels like with a group that understands a socially real meaning we are more likely to discover the limits of the concept.

I suppose the question there boils down to whether transmission is more important than generation, and if so by how much?

comment by habryka (habryka4) · 2021-01-14T05:48:28.294Z · LW(p) · GW(p)

I liked this dialogue quite a bit.