Is "gears-level" just a synonym for "mechanistic"?
post by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2021-12-13T04:11:45.159Z · LW · GW · 6 comments
This is a question post.
Contents
Answers: pjeby (25), Valentine (11), kyleherndon (11), Richard_Kennaway (7), adamShimi (5), Evenflair (5), Slider (2), Ape in the coat (2)
6 comments
If so, can we try to shift rationalist terminology towards the latter, which seems more transparent to outsiders?
Answers
Even if it is "just a synonym", that does not imply that we should shift terminology. Terminology is not just about definition (denotation); it is also about implication (connotation).
As others have pointed out, "mechanistic" and "reductionist" have unwanted connotations, while "gears-level" has only the connotations the community gives it... along with the intuitive implication that it's a model that is specific enough that you could build it, that you would need to know what gears exist and how they connect. (In contrast, it's much easier to say that a model is mechanistic or reductionist, without it actually being, well, gears-level!)
Between the lack of pre-existing negative connotations and the intuition pump, there seems to me to be more than enough value to use the term in preference over the other words, even if it were an exact synonym!
↑ comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-01-19T17:43:32.423Z · LW(p) · GW(p)
I see an implicit premise I disagree with about the value of improving communication within the rationalist community vs. between rationalists and outsiders; it seems like I think the latter is relatively more important than you do.
Replies from: pjeby
↑ comment by pjeby · 2022-01-21T05:15:54.119Z · LW(p) · GW(p)
Perhaps you could explain what you mean more precisely, as "improving communication" is really under-specified in what you wrote. Communication to achieve what purpose, exactly?
I think you may be correct that I don't value some goal as much as you do, but I expect most of the difference lies in how we each define the phrase "improving communication".
I would argue that my approach is very much treating "improving communication to outsiders" as a very important thing -- so there must be some difference in what you are assuming the goal of such communications is.
I for one don't plan on using "mechanistic" where I currently talk about "gears-like" simply because I know what intuition the latter is pointing at but I'm much less sure about the former. Maybe down the road they'll turn out to be equivalent. But I'll need to see that, and why, before it'll make sense for me to switch. Sort of like needing to see and grok a math proof that two things are equivalent before I feel comfortable using that fact.
Not that I determine how Less Wrong does or doesn't use this terminology. I'm just being honest about my intentions here.
A minor aside: To me, "gears-level" doesn't actually make sense. I think I used to use that phrasing, but it now strikes me as an incoherent metaphor. Level of what? Level of detail of the model? You can add a ton of detail to a model without affecting how gears-like it is. I think it's self-referential in roughly the style of "This quoted sentence talks about itself." I think it's intuitively pointing at how gears-like a model is, and on the scale of "not very many gears at all" to "absolutely transparently made of gears", it's on a level where we can talk about how the gears interact.
That said, there is a context in which I'd use a similar phrase and I think it makes perfect sense. "Can we discuss this model at the gears level?" That feels to me like we're talking about a very gears-like model already but we aren't yet examining the gears.
I interpret the opening question being about whether the property of being visibly made of gears is the same as "mechanistic". I think that's quite plausible, given that "mechanistic" means "like a mechanism", which is a metaphor pointing at quite literally a clockwork machine made of literal physical gears. The same intuition seems to have inspired both of them.
But as I said, I await the proof.
↑ comment by Raemon · 2021-12-14T00:47:53.601Z · LW(p) · GW(p)
Same. I think there's a tendency to see superficially similar names and say "ah, these are the same concept, we should use commonly used phrases to refer to them", which sometimes misses the nuances that the new concept was actually aiming at.
Replies from: strangepoop
↑ comment by a gently pricked vein (strangepoop) · 2021-12-17T13:07:36.410Z · LW(p) · GW(p)
Yeah, this can be really difficult to bring out. The word "just" is a good noticer for this creeping in.
It's like a deliberate fallacy of compression: sure you can tilt your view so they look the same and call it "abstraction", but maybe that view is too lossy for what we're trying to do! You're not distilling, you're corrupting!
I don't think the usual corrections for fallacies of compression can help either (e.g. Taboo) because we're operating at the subverbal layer here. It's much harder to taboo cleverness at that layer. Better off meditating on the virtue of The Void instead.
But it is indeed a good habit to try to unify things, for efficiency reasons. Just don't get caught up on those gains.
There's a tag for gears level [? · GW], and in the original post [LW · GW] it looks like everyone in the comments was confused even then about what gears-level meant; in particular, there were a lot of non-overlapping definitions given. The author, Valentine, also expresses confusion.
The definition given, however, is:
1. Does the model pay rent? [? · GW] If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different [? · GW]?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it [? · GW]?
I'm not convinced that's how people have been using it in more recent posts, though. I think the one upside is that "gears-level" is probably easier to teach than "reductionist", but contingent on someone knowing the word "reductionism" it is clearly simpler to just use that word. In the history of the tag, there was also previously "See also: Reductionism" with a link.
In the original post, I think Valentine was trying to get at something complex, not fully encapsulated by an existing word or short phrase, but it's not clear to me that it was well communicated to others. I would be down for tabooing "gears-level" as a (general) term on LessWrong. I can't think of an instance after the original where someone used the term "gears-level" to mean something more specific than "mechanistic" or "reductionist."
That said, given that I don't think I really understand what was meant by "gears-level" in the original, before replacing it with suitable alternatives I would ideally like to hear from someone who thinks they do understand it -- in particular Valentine or brook. If there were no objections, maybe clean up the tag by removing it and/or linking to other related terms.
Both of these would be clearer if replaced by "causal". That is what they are both talking about: causes and effects.
↑ comment by Richard_Kennaway · 2021-12-13T10:49:46.085Z · LW(p) · GW(p)
I have noticed that a lot of people are reluctant to talk about causation, on LessWrong and elsewhere ever since Hume (who was confused on the matter). Even in statistics, where causal analysis is nowadays a large field, time was when you couldn't talk about causation in statistical papers, and had to disguise causal analysis as the "missing data problem". Neither Causal Decision Theory nor Evidential Decision Theory work as naturalised decision theories, yet the former is criticised more harshly for failing on Newcomb's Problem than the latter is for failing on the Smoking Lesion. People readily think that they do not do anything, merely observe what they have done, and do not act to bring things about, but merely predict that the things will happen.
If I ask myself, "What sort of person would see the world that way?" the answer I get is "Someone who experiences themselves that way." [LW(p) · GW(p)]
Replies from: adamShimi, matthew-barnett
↑ comment by adamShimi · 2021-12-13T18:19:06.496Z · LW(p) · GW(p)
I'm very confused by this comment, because LW and AF posts talk about causation all the time. They're definitely places where I expect "causal" and "causation" to appear, more than once, in 90% of the posts I read.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2021-12-16T14:30:37.423Z · LW(p) · GW(p)
Talk of causation happens, but talk avoiding causation also happens, for example the EDT and action-as-prediction ideas that I mentioned.
↑ comment by Matthew Barnett (matthew-barnett) · 2021-12-13T18:40:10.347Z · LW(p) · GW(p)
I have noticed that a lot of people are reluctant to talk about causation, on LessWrong and elsewhere ever since Hume
I am having trouble understanding what you mean, since I see causation talked about a lot here. But I also think it’s funny how Hume wrote about causation in 1748, and you’re worried that people still haven’t gotten over it.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2021-12-16T14:26:26.627Z · LW(p) · GW(p)
My aside about Hume referred to the passage where he gives two different definitions of causation in consecutive sentences, yet asserts them to be equivalent:
we may define a cause to be an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second. Or in other words where, if the first object had not been, the second never had existed.
"Enquiry Concerning Human Understanding", V,2,60. Emphasis in the original.
The first of these is the idea of constant conjunction, or as we would now call it, correlation, and the second is the idea of a counterfactual statement. Many have remarked on this contradiction.
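As a minimal sketch of how these two readings can come apart (my own illustrative example, not from the quoted passage): let a hidden common cause C produce both A and B. Then A is constantly conjoined with B, yet forcing A to occur would not bring B about.

```python
import random

def sample(do_a=None):
    """One draw from a toy world where a common cause C drives both A and B."""
    c = random.random() < 0.5          # hidden common cause
    a = c if do_a is None else do_a    # A normally follows C, unless we intervene
    b = c                              # B follows C, never A
    return a, b

# Constant conjunction (first definition): whenever A occurs naturally, B occurs too.
observational = [sample() for _ in range(10_000)]
print(all(b for a, b in observational if a))   # True

# Counterfactual reading (second definition): forcing A does not produce B.
interventional = [sample(do_a=True) for _ in range(10_000)]
print(all(b for a, b in interventional))       # False: B appears only about half the time
```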
↑ comment by Charlie Steiner · 2021-12-13T20:20:18.511Z · LW(p) · GW(p)
I dunno, I just used "gears" about a totally acausal (but still logical) relationship yesterday.
↑ comment by Rafael Harth (sil-ver) · 2021-12-13T22:30:58.140Z · LW(p) · GW(p)
I don't think this works. There are many cases where the gears-level model is causal and the policy level is not, but it's not the same distinction, and there are cases where they come apart.
E.g., suppose someone claims to have proven P ≠ NP. You can have a policy-level take on this, say "Scott Aaronson thinks it's correct, therefore I believe it", or a gears-level model, e.g., "I've read the proof and it seems solid". But neither of them is causal. It doesn't even make sense to talk about causality for mathematical facts.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2021-12-16T14:36:05.359Z · LW(p) · GW(p)
Yes, I'll grant that "causal" doesn't fit so well for mathematics. Yet even in mathematics, people still talk in terms of "why" and "because".
I think the best pointer for gears-level as it is used nowadays is John Wentworth's post Gears vs Behavior [LW · GW]. And in this summary comment [LW(p) · GW(p)], he explicitly says that the definition is the opposite of a black box, and that gears-level vs black box is a binary distinction.
Gears-level models are the opposite of black-box models.
[...]
One important corollary to this (from a related comment [LW(p) · GW(p)]): gears/no gears is a binary distinction, not a sliding scale.
As for the original question, I feel that "mechanistic" can be applied to models that are just one neat equation with no moving parts, such that you don't know how to alter the equation when the underlying causal process changes.
If mechanistic indeed means the opposite of black-box, then in principle we could use it to replace "gears-level model".
↑ comment by Valentine · 2021-12-14T00:01:02.960Z · LW(p) · GW(p)
Huh. That's a neat distinction. It doesn't feel quite right, and in particular I notice that in practice there absolutely super duper very much is a sliding scale of gears-ness. But the "no black box" thing does tie together some things nicely. I like it.
A simple counterpoint: There's a lot of black box in what a "gear" is when you talk about gears in a box. Are we talking about physical gears operating with quantum mechanics to create physical form? A software program such that these are basically data structures? A hypothetical universe in which things actually in fact magically operate according to classical mechanics and things like mass just inherently exist without a quantum infrastructure? And yet, we can and do black-box that level in order to have a completely gears-like model of the gears-in-a-box.
My guess is you have to fuse this black box thing with relevance. And as John Vervaeke points out, relevance is functionally incomputable, at least for humans.
Isn't mechanistic specifically about physical properties? Could you say that an explanation of a social phenomenon is "mechanistic", even though it makes zero references to physical reality?
↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2021-12-13T06:07:32.071Z · LW(p) · GW(p)
You could indeed; think "causal mechanism" rather than "physical mechanism".
Normally, one might be tempted to generalise that when you know some thing you also know the surrounding "topic". However, there are cases when this is lacking. There are such things as zero-knowledge proofs. Also, any reductio ad absurdum (assume not p; derive q and not q from not p; therefore p) is going to be very silent about small alterations to the claims.
Also, dismissing perpetual motion machines because you believe energy is conserved will make no particular claim about what the issue is with this particular scheme. This can be rigorous and robust, which might be a lot of what people often shoot for with "mechanistic", but it is general and fails to be particular, and thus not gears-level (it kind of concretely doesn't care whether the machine in question even has gears or not).
White box or transparent box model, as opposed to a black box model.
6 comments
Comments sorted by top scores.
comment by Alexander (alexander-1) · 2021-12-13T09:20:15.829Z · LW(p) · GW(p)
"Mechanistic" and "reductionist" have somewhat poor branding, and this assertion is based on personal experience rather than rigorous data. Many people I know will associate "mechanistic" and "reductionist" with negative notions, such as "life is inherently meaningless" or "living beings are just machines", etcetera.
Wording matters: I can explain the same idea using different wording and get drastically different responses from my interlocutor.
I agree that "gears-level" is confusing to someone unfamiliar with the concept. Naming is hard. A better name could be "precise causal model".
Replies from: rudi-c
↑ comment by Rudi C (rudi-c) · 2021-12-13T19:32:23.681Z · LW(p) · GW(p)
But mechanistic world models do suggest that meaning in a traditional (mystical? I can’t really define it, as I find the concept itself incoherent) sense does not (and cannot) exist; so I think the “negative” connotations are pretty fair, it’s just that they aren’t that negative or important in the first place. (“Everything adds up to normalcy.”) Rebranding is still a sound marketing move, of course.
Replies from: alexander-1
↑ comment by Alexander (alexander-1) · 2021-12-14T06:14:59.375Z · LW(p) · GW(p)
In some sense, yeah, "life is inherently meaningless" and "living beings are just machines." However, I am still struggling to wrap my head around the objectivity of aesthetics, meaning and morality. Information is now widely considered physical (refer to papers by R. Landauer and D. Deutsch). Maybe someday, we will once and for all incorporate aesthetics, meaning and morality under physicalism. If minds are physical, and aesthetics, purposes, and morality are real aspects of minds, then wouldn't that imply that they are objective notions? And thus not "meaningless"?
This is a gnarly rabbit hole, and I am not qualified to talk about this topic. I recently read Parfit's "Reasons and Persons" to gain a deeper grasp of these topics and it's a stunning and precious book, but I need to do more work to understand all this. I may have to read his magnum opus "On What Matters" to wrap my head around this. We don't have a proper understanding of minds at this point in time. Developing robust theories about rationality, morality, aesthetics, desires, etc., necessitates actually understanding minds.
As you've pointed out, marketing matters. In my view, this is part of the reason why epistemic and instrumental rationalities are distinct aspects of rationality as defined in the sequences [LW · GW]. If your goal is to explain an idea to your interlocutor and you can convey the same truth using different wording, with one wording leading to mutual understanding and the other leading to obstinacy, then the instrumentally rational thing to do would be to use the former wording. Here we have a situation where two things are epistemically equivalent but not instrumentally so.
comment by TekhneMakre · 2021-12-13T09:30:51.733Z · LW(p) · GW(p)
A dimension I like, is the dimension of how much a model bears "long" chains of inference. (Metaphorically long, not necessarily many steps.) Can I tell you the model, and then ask you what the model says about X, and you don't immediately see it, but then I tell you an argument that the model makes, and you can then see for yourself that the model says that? Then that's a gears-level model.
Gears-level models make surprising predictions from apparently unsurprising elements. E.g. a model that says "there's some gears in the box, connected in series by meshing teeth" sounds sort of anodyne, but using inference, you can get a precise non-obvious prediction out of the model: turning the left gear Z-wise makes the right gear turn counter-Z-wise, and vice versa.
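A minimal sketch of that kind of inference, with the gear count and direction names purely illustrative:

```python
# Illustrative sketch: each pair of meshing gears reverses rotation, so in a
# chain of n gears the last one turns the same way as the first iff n is odd.

def last_gear_direction(first_direction: str, n_gears: int) -> str:
    """Direction of the final gear in a chain of meshing gears."""
    assert first_direction in ("Z-wise", "counter-Z-wise")
    if n_gears % 2 == 1:  # odd chain length: same direction as the first gear
        return first_direction
    # even chain length: direction flips
    return "counter-Z-wise" if first_direction == "Z-wise" else "Z-wise"

# The non-obvious prediction from an unsurprising setup: with two meshing
# gears, turning the left one Z-wise forces the right one counter-Z-wise.
print(last_gear_direction("Z-wise", 2))  # -> counter-Z-wise
```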