Proposal: Consider not using distance-direction-dimension words in abstract discussions
post by moridinamael · 2022-08-09T20:44:13.256Z · LW · GW · 18 comments
Epistemic Status: Less Wrong Classic™, a bold and weird proposal to possibly improve your rationality a little bit, maybe.
Overview
It is common to use words and phrases implying distance, directionality, and dimension when discussing ideas and concepts that do not inherently possess these properties. This usually doesn't help and often actively harms our ability to communicate and reason correctly. I am running an experiment to see the effect it has on my thinking and speaking habits.
Rationale
We implicitly translate thoughts from their natural form as clean, clear, distinct concepts into spatial/continuum metaphors all the time, casually and without regard for the subtle consequences. A mathematician would not collapse a mathematical object with a thousand dimensions, half of them binary or non-Euclidean, into a two-dimensional projection without carefully thinking through the implications of doing so, but we do the conceptual equivalent constantly.
When you mentally impose dimension/distance/direction onto an idea, you risk bringing it into the part of your mind where you are allowed to freely leverage spatial metaphors when thinking about the idea. You will likely have no awareness of all the ways in which you are abusing these spatial/geometric metaphors, because humans are not really smart enough to keep track of all the implications of our casually asserted metaphors. I think that even genius-level humans can be extremely blind to the damage that's done when they implicitly translate a thought from its natural form as a clean, clear, distinct concept into a spatial/continuum metaphor.
I think there's a reason we like to compress things to one or two dimensions and use spatial metaphors. The Curse of Dimensionality applies to humans as well as machines. The way we deal with it is by squinting until all but a couple of the dimensions go away and then reasoning as if only a couple of dimensions exist. Perhaps this is not very generous to humans, but I don't think it's too far off the mark. We come with pre-built circuitry for reasoning in three or fewer dimensions, so it's not surprising that we want to compress things.
It makes sense to use "continuum" language iff what you're talking about has natural and clear continuum properties. "Is this paint color 'more red' than that paint color?" Otherwise you are, at best, muddying up your thoughts and communication for no gain, and often actively damaging things.
Examples
Non-political examples include the following:
- Referring to two ideas as being "distant" or "far apart," or "close" or "nearby," or some analogous phrase. Perhaps you feel like sparrows are close to finches, and that eagles are distant from pigeons, but not as distant as penguins are from ostriches. In other words, you're trying to use the language of spatial relationships to describe your mental model of what a similarity cluster [LW · GW] looks like. The real problem, though, is that birdiness is not really something that lends itself to intuitions of a spatial continuum. Certain individual dimensions of birdiness, such as "can it fly?" are almost perfectly binary, and thus not really dimensions at all in the spatial sense. Your spatial intuitions will lead you astray if you actually put any weight on your metaphor - so, of what use is the metaphor?
- Referring to two positions, claims, proposals, or policies as being "orthogonal." What is being gestured at here is the idea that there is no "intersection" (another geometry word!) between the ideas; that there is no relative relevance between the ideas; that the implications of Idea 1 have no bearing on Idea 2. This is usually not technically true, because in a very real practical sense, everything is connected, but in an abstract argument the word can probably be used as long as there's an implicit understanding that what is really meant is "these two ideas are orthogonal within the carefully defined perimeter (another geometry word!) of this specific discussion."
- Assuming the existence of an "axis" or "axes" or a "spectrum" as a way of orienting (another geometry word!) a conversation around specific conceptual variables. The problem arises if you happen to forget that abstract concepts don't always work this way. You usually can't freely "move" concepts along "axes" in ways that don't have implications for other relevant variables, especially if your interlocutor doesn't share your understanding of exactly what these axes mean and imply.
This leads us to the most common specific examples, in the case of politics:
- When Blues and Greens [? · GW] start talking about positions being "Up-Wing" or "Down-Wing," arguing about whether a position, person, or argument is really more Up than another, or more Blue than average, etc. The overall impact is a massively lossy reduction of dimensionality for no real gain other than the convenience of assigning a tribal label for the purposes of tribal maneuvering and signaling. In fact, I assert that even claiming that these labels are doing "dimensionality reduction" might be overly generous. Most ideas do not have "dimension" or fall naturally onto some "axis" in an unambiguous and clearly-communicable way. Taking a naturally atomic, distinct statement, and placing it on some kind of conceptual continuum is almost certainly going to do violence to the concept in some way. Most concepts and ideas should probably not casually be shoved onto continua.
- Another common application of the language of distance is to communicate affiliation: "The outgroup member and his opinions are very far away from me! I am very close to you and your opinions!" Perhaps this kind of speech has its place, but it might be useful to avoid doing it for a while, so that you actually notice when you're doing it.
Sidebar
We can Steelman the upist/downist rhetorical technique as implying that one or both of the following two things are true:
- Upism and downism are distinct, crisp and clear philosophies that are in natural opposition, and there are many relevant regards in which positions can fall along the "spectrum" between the extremes of perfect doctrinaire upism and downism.
- Upism and downism are empirical clusters in thingspace, which may not be philosophically distinct but are practically distinct.
Point 1 would be great if it were true, but it is not, and it is especially not when you consider how people actually use these words in reality. Nobody even agrees on what upism and downism are. Point 2 might be true, but if it is, arguing about whether one position is more downist than another is identical to arguing about whether a duck is more birdy than a magpie. It's a subjective and pointless argument about membership in a subjectively identified category, from which no deductive inference can ever be made even in principle.
Concluding remarks
I tried this out for about a week before posting, and found the results to be interesting enough that I plan to try to keep it up for a while longer.
The most interesting part of the experiment has been observing the mental vapor-lock that occurs when I disallow myself from casually employing a spatial metaphor ... followed by the more-creative, more-thoughtful, less-automatic mental leap I'm forced to make to finish my thought. You discover new ways in which your mind can move.
I also found that I ended up frequently reaching for "uncorrelated" or "unrelated" as substitutes for "orthogonal," a word which I previously overused. As a metaphor, "uncorrelated" works even better than "orthogonal"; I very rarely actually meant "the dot product of these two ideas is zero" in the first place.
I also sometimes ended up settling on words that invoke topology without quite implying geometry. For example, "disconnected," "disjoint," or, conversely, "interconnected," "strongly linked." I feel like these metaphors are okay to keep; they are a much better map than the words they would be replacing. I am reminded of the Odonian habit of using "more central" instead of "highest" as a way of expressing hierarchy; it seems like a minor choice but it influences how you see things. Asking yourself to actually thoughtfully choose which metaphor to use results in overall richer communication.
Thanks to the Guild of the Rose for feedback and discussion on a draft version of this post.
18 comments
Comments sorted by top scores.
comment by jaspax · 2022-08-10T09:34:02.566Z · LW(p) · GW(p)
Counterpoint: spatial metaphors are so deeply embedded into human cognition that getting rid of them is likely to massively impair your ability to think clearly, rather than enhancing it. Lakoff's work on cognitive metaphors, and the whole field of cognitive linguistics more generally, have shown that mapping concepts onto experiences of space (and related bodily metaphors) is central to linguistic communication and all forms of abstract thought.
Refusing to use spatial metaphors may be an interesting training exercise, much like walking around your house blindfolded, or making things with your non-dominant hand. Trying this out might be a good way to develop other cognitive modalities and notice ways in which you were misusing the spatial concept. However, I find it unlikely that this makes your thinking as a whole clearer or more accurate. The things you make using your non-dominant hand are probably objectively worse than the things you make with your dominant hand, but the practice of doing it makes you more capable once you remove the restraint.
(Tentatively, I endorse the strong view that there is no such thing as abstract human cognition; instead, all human thought is based on metaphors from embodied sensory experience.)
↑ comment by moridinamael · 2022-08-10T14:07:38.768Z · LW(p) · GW(p)
I didn't want to derail the OP with a philosophical digression, but I was somewhat startled by the degree to which I found it difficult to think at all without at least some kind of implicit "inner dimensionality reduction." In other words, this framing allowed me to put a label on a mental operation I was doing almost constantly but without any awareness.
↑ comment by FiftyTwo · 2022-08-10T22:42:41.906Z · LW(p) · GW(p)
Also, we have a huge amount of mental architecture devoted to understanding and remembering spatial relationships of objects (for obvious evolutionary reasons). Using that as a metaphor for purely abstract things allows us to take advantage of that mental architecture to make other tasks easier.
A very structured version of this would be something like a memory palace where you assign ideas to specific locations in a place, but I think we are doing the same thing often when we talk about ideas in spatial relationships, and build loose mental models of them as existing in spatial relationship to one another (or at least I do). [LW · GW]
comment by Shiroe · 2022-08-10T06:47:24.120Z · LW(p) · GW(p)
Contrasting this post with techniques like Word2vec, which do map concepts into spatial dimensions. Every word is assigned a vector and associations are learned via backprop by predicting nearby text. This allows you to perform conceptual arithmetic like "Brother" - "Man" + "Woman", giving a result which is a vector very close to (in literal spatial terms) the vector for "Sister".
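For concreteness, here is a minimal sketch of that kind of vector arithmetic, assuming a pretrained embedding loaded through gensim; the specific model name and the printed neighbors are illustrative assumptions, and the results vary with training data:

```python
# Illustrative only: requires `pip install gensim`. The model name below is one of the
# pretrained embeddings available through gensim's downloader, chosen as an example.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # word -> vector lookup (KeyedVectors)

# "brother" - "man" + "woman": nearest neighbors of the resulting vector.
print(vectors.most_similar(positive=["brother", "woman"], negative=["man"], topn=3))
# "sister" typically shows up near the top, though the hit rate depends on the model.
```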
↑ comment by Stephen Bennett (GWS) · 2022-08-11T13:07:29.564Z · LW(p) · GW(p)
Going from memory: the hit rate of those metaphors is higher than someone would naively expect (wow, "king" - "queen" is nearly the same vector as "man" - "woman"!) but lower than you might expect after hearing that it works some of the time ("king" - "queen" isn't the same vector as "prince" - "princess"? Weird.). I remember playing around with them and being surprised that some of them worked, and that other similar metaphors didn't. This Stack Exchange post suggests about 70%. Again I want to emphasize that this is going from memory and I'm not sure about the exact examples I used - I don't have a word2vec embedding downloaded to check the examples, and presumably it will depend on the training data & model parameters.
comment by lincolnquirk · 2022-08-09T21:00:49.024Z · LW(p) · GW(p)
Upvoted for raising something to conscious attention, that I have never previously considered might be worth paying attention to.
(Slightly grumpy that I'm now going to have a new form of cognitive overhead probably 10+ times per day... these are the risks we take reading LW :P)
comment by Yoav Ravid · 2022-08-10T05:12:32.614Z · LW(p) · GW(p)
Suggestion: put at least one example at the beginning of the post. Otherwise it's difficult to understand for anyone who doesn't already know what you're talking about.
comment by cubefox · 2022-08-10T20:50:33.501Z · LW(p) · GW(p)
Specifically for "orthogonal": I think here simply the word "independent" should be used instead. Most people don't even know what orthogonality means. "Independent" is of course a vague term (logically independent? modally independent? causally independent? counterfactually independent? probabilistically independent? etc), but so is "orthogonal" in its metaphorical sense.
Moreover, orthogonality between two concepts can really only make geometric sense if you compare two qualitative concepts expressed by mass nouns, as with "intelligence" and "age". Otherwise you don't even have two quantities which could form orthogonal vectors. An example of this arguable misuse is the Orthogonality Thesis itself. It says that intelligence and terminal goals are "orthogonal". While intelligence is a quantity (you can be more or less intelligent, perhaps even "twice as" intelligent), "goals" is not. They are not ordered. They are expressed by a countable noun, not a mass noun. We couldn't simply plot "goals" along an axis.
↑ comment by Stephen Bennett (GWS) · 2022-08-11T13:32:00.075Z · LW(p) · GW(p)
Overall I think you're right, and walking[1] through this example for myself was a good illustration of ways in which geometric metaphors can be imprecise[1] (although I'm not sure they're exactly misleading[1]). I ended up[1] having to stretch[1] the metaphor to the point that I was significantly modifying what other people were saying to make it actually make sense.
Regarding the "orthogonality" in "orthogonality thesis": the description given on the LW tag [? · GW], and indeed Bostrom's paper, is orthogonality between intelligence and goals, as you said. However, in practice I frequently see "goals" replaced with something like "alignment", which (to the extent that you can rank-order aligned agents) is something quantifiable. This seems appropriate since you can take something like Kendall's tau of the rank orderings of world states of two agents, and that correlation is the degree to which one agent is aligned with another.
[1] This is a spatial metaphor. I went back through after writing the post to see how often they showed up. Wowza.
↑ comment by cubefox · 2022-08-11T16:22:12.429Z · LW(p) · GW(p)
Rank correlation coefficients are an interesting point. The way I have so far interpreted "orthogonality" in the orthogonality thesis, is just as modal ("possibility") independence: For a system with any given quantity of intelligence, any goal is possible, and for a system with any given goal, any quantity of intelligence is possible.
The alternative approach is to measure orthogonality in terms of "rank correlation" when we assume we have some ordering on goals, such as by how aligned they are with the goals of humanity.
As far as I understand, a rank correlation coefficient (such as Kendall's tau, Goodman and Kruskal's gamma, or Spearman's rho) measures some kind of "association" between two "ordinal variables" and maps this to values between -1 and +1, where 0 means "no association". The latter would be the analogue to "orthogonality".
Now it is not completely clear what "no association" would mean, other than (tautologically) a value of 0. The interpretation of a perfect "association" of -1 or +1 seems more intuitive though. I assume for the ordinal variables "intelligence" and "alignment with human values", a rank correlation of +1 could mean the following:
- "X is more intelligent than Y" implies "X is more aligned with human values than Y", and
- "X is more aligned with human values than Y" implies "X is more intelligent than Y".
Then -1 would mean the opposite, namely that X is more intelligent than Y if and only if X is less aligned with human values than Y.
Then what would 0 association (our "orthogonality") mean? That "X is more intelligent than Y" and "X is more aligned with human values than Y" are ... probabilistically independent? Modally independent? Something else? I guess the first, since the measures seem to be based on statistical samples...
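A purely illustrative numeric sketch of those three cases, using scipy.stats.kendalltau with made-up rank data (the systems and their rankings are assumptions, not anything from the thread):

```python
# Made-up ranks for five hypothetical systems; only the orderings matter here.
from scipy.stats import kendalltau

intelligence = [1, 2, 3, 4, 5]

orderings = {
    "same ordering":     [1, 2, 3, 4, 5],  # tau = +1.0
    "reversed ordering": [5, 4, 3, 2, 1],  # tau = -1.0
    "mixed ordering":    [2, 5, 1, 4, 3],  # tau =  0.0 ("no association")
}

for label, ranks in orderings.items():
    tau, _p = kendalltau(intelligence, ranks)
    print(f"{label}: tau = {tau:+.1f}")
```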
Anyway, I'm afraid I don't understand what you mean by "world states". Is this a term from decision theory?
↑ comment by Stephen Bennett (GWS) · 2022-08-11T18:57:45.581Z · LW(p) · GW(p)
Your initial point was that "goals" aren't a quantifiable thing, and so it doesn't make sense to talk about "orthogonality", which I agree with. I was just saying that while goals aren't quantifiable, there are ways of quantifying alignment. The stuff about world states and Kendall's tau was a way to describe how you could assign a number to "alignment".
When I say world states, I mean some possible way the world is. For instance, it's pretty easy to imagine two similar world states: the one that we currently live in, and one that's the same except that I'm sitting cross legged on my chair right now instead of having my knee propped against my desk. That's obviously a trivial difference and so gets nearly exactly the same rank as the world we actually live in. Another world state might be one in which everything is the same except that a cosmic ray has created a prion in my brain (which gets ranked much lower than the actual world).
Ranking all possible future world states is one way of expressing an agent's goals, and computing the similarity of these rankings between agents is one way of measuring alignment. For instance, if someone wants me to die, they might rank the Stephen-has-a-prion world quite highly, whereas I rank it quite low, and this will contribute to us having a low correlation between rank orderings over possible world states, and so by this metric we are unaligned from one another.
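As a concrete sketch of the metric being described, assuming a toy set of world states and rankings (all names and numbers below are illustrative, and scipy's kendalltau stands in for the rank-correlation measure):

```python
# Toy version of "alignment as rank correlation over world states".
from scipy.stats import kendalltau

world_states = ["status quo", "Stephen sits cross-legged", "Stephen has a prion"]

# Rank of each world state for each agent (1 = most preferred).
stephen   = [1, 2, 3]  # the prion world is ranked last
adversary = [2, 3, 1]  # someone who wants Stephen dead ranks the prion world first

tau, _p = kendalltau(stephen, adversary)
print(f"alignment (Kendall's tau): {tau:.2f}")  # negative: unaligned on this toy ranking
```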
↑ comment by cubefox · 2022-08-12T18:17:18.300Z · LW(p) · GW(p)
Thanks, that clarifies it. I'm not sure whether it would be the right way to compare the similarity of two utility functions, since it only considers ordinal information without taking into account how strongly the agents value an outcome / world state. But this is at least one way to do it.
comment by Raemon · 2022-08-10T05:53:26.610Z · LW(p) · GW(p)
Strong upvoted for this line:
I tried this out for about a week before posting, and found the results to be interesting enough that I plan to try to keep it up for a while longer.
(I don't endorse strong upvoting everything with this property, but am feeling capriciously like rewarding this today. I think there's a tendency to post about clever ideas as soon as you think about them, and actually trying them and seeing how it plays out is a good habit)
That said: I disagree with the claim that "distance" metaphors are inaccurate – as I've played a bunch of semantle and semantle.pimanrul and noticed how I think about concepts elsewhere in life, I think spectrums that make sense to measure in distance are just... everywhere. "Bird-iness" is pretty scalar. There are birds that can't fly, or are bad at flying. There are things with lots of birdlike properties on the dimension of flying and on other dimensions.
That said, your description of the general benefit of forcing yourself to avoid your usual conventions and explore new thought patterns makes sense.
comment by Rosencrantz (rob-sears) · 2022-08-10T07:41:50.245Z · LW(p) · GW(p)
I think avoiding spatial metaphors altogether is hard! For example in the paragraph below you use perhaps 3 spatial metaphors (plus others not so obviously spatial but with equal potential for miscommunication).
"The most interesting part of the experiment has been observing the mental vapor-lock that occurs when I disallow myself from casually employing a spatial metaphor ... followed by the more-creative, more-thoughtful, less-automatic mental leap I'm forced to make to finish my thought. You discover new ways in which your mind can move."
I'm sure I even recall encountering views that suggest all thought and language is a superstructure of metaphors based on a few basic sensory concepts we acquire young. Not sure where I read this though!
That said as a writer I also try to be alert to spatial metaphors that don't map especially well to the truth of a situation, and endeavour to select only the best ones.
↑ comment by moridinamael · 2022-08-10T14:05:15.812Z · LW(p) · GW(p)
I snuck a few edge-case spatial metaphors in, tongue-in-cheek, just to show how common they really are.
You could probably generalize the post to a different version along the lines of "Try being more thoughtful about the metaphors you employ in communication," but this framing singles out a specific class of metaphor which is easier to notice.
comment by Noosphere89 (sharmake-farah) · 2022-08-09T23:12:59.805Z · LW(p) · GW(p)
A great example of near-zero correlation is the Orthogonality Thesis: the correlation between goals and intelligence is essentially zero, and thus it is usually safe to talk about Orthogonality in AI risk.
comment by __nobody · 2022-08-12T01:44:31.717Z · LW(p) · GW(p)
This seems to be another case of "reverse advice" for me. I seem to be too formal instead of too lax with these spatial metaphors. I immediately read the birds example as talking about the relative positions and distances along branches of the phylogenetic tree, and your orthogonality description as referring to actual logical independence / verifiable orthogonality. It's my job to notice hidden interactions and things like weird machines, so I'm usually very aware of that too, just by habit.
Your post made me realize that instead of people's models being hard to understand, there simply may not be a model that would admit talking in distances or directions, so I shouldn't infer too much from what they say. Same for picking out one or more vectors: for me that doesn't imply that you can move along them (they're just convenient for describing the space), but others might automatically assume that's possible.
As others already brought up, once you've gotten rid of the "false" metaphors, try deliberately using the words precisely. If you practice, it becomes pretty easy and automatic over time. Only talk about distances if you actually have a metric space (doesn't have to be euclidean, sphere surfaces are fine). Only talk about directions that actually make sense (a tree has "up" and "down", but there's no inherent order to the branches that would get you something like "left" or "right" until you impose extra structure). And so on... (Also: Spatial thinking is incredibly efficient. If you don't need time, you can use it as a separate dimension that changes the "landscape" as you move forward/backward, and you might even manage 2-3 separate "time dimensions" that do different things, giving you fairly intuitive navigation of a 5- or 6-dimensional space. Don't lightly give up on that.)
Nitpick: "It makes sense to use 'continuum' language" - bad word choice. You're not talking about the continuum (as in real numbers) but about something like linearity or the ability to repeatedly take small steps and get predictable results. With quantized lengths and energy levels, color isn't actually a continuous thing, so that's not the important property. (The continuum is a really really really strange thing that I think a lot of people don't really understand and casually bring up. Almost all "real numbers" are entirely inaccessible! Because all descriptions of numbers that we can use are finite, you can only ever refer to a countable subset of them; the others are "dark" and for almost all purposes might as well not exist. So usually rational numbers (plus a handful of named constants) are sufficient, especially for practical / real-world purposes.)