What should be reified?
post by herschel (hrs) · 2023-12-20T04:52:53.826Z · LW · GW · 2 comments
This is a link post for https://brothernin.substack.com/p/what-should-be-reified
As I've said elsewhere, I think the sticking point of a lot of important questions is just this: what should be reified? Another way to say this is: what are our ontological and axiological commitments? This is basically a cold take in many of my circles, but certainly not elsewhere, and I'd like to make this distinction as glaringly obvious as it is to me. Hopefully this post will become trite and annoying to more people :).
Mathematics has the fortunate property of using axioms which are essentially solipsistic. In some sense they simply don't permit varying interpretations; alternative models belong to a different genus or theory or whatever. The further we get from math, and the larger and more complex the objects we want to study, the more destructive or lossy our abstractions are.
Models and frameworks exist for various purposes; they provide gripping points for interacting with the world. The abstractions we choose are where our existing intuitions, goals, etc. leak into the model. This also isn't necessarily about explicit models; this question is underneath lots of intuitions, "embodied knowings", the whole lot.
Reifications make things easier to think about because they reduce dimensionality, or at least salient dimensionality. Maybe a goofy way to express the title of this post would be: "what subspace can we project into which preserves the structures we're concerned with?" Or, better: "does this reification reduce complexity appropriately?" And then, surely, we should also ask: "what structures exactly are we concerned with?"--which brings us back to the title of the post.
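To make the projection metaphor concrete, here is a minimal sketch (in Python, with invented data, nothing from the post itself): the variance-maximizing projection, which is what something like PCA would pick by default, can be exactly the one that destroys the structure we cared about.

```python
# Toy illustration (invented data): the same points admit many lossy
# projections, and which one is "good" depends entirely on what
# structure you wanted to preserve.
import numpy as np

rng = np.random.default_rng(0)

# Two clusters separated along y, with a much larger spread along x.
a = rng.normal(loc=[0.0, -3.0], scale=[5.0, 0.5], size=(100, 2))
b = rng.normal(loc=[0.0, 3.0], scale=[5.0, 0.5], size=(100, 2))

# Projecting onto x (the direction of maximal variance, which PCA
# would choose here) destroys the cluster structure; projecting onto
# the lower-variance y direction preserves it.
print("x means:", a[:, 0].mean().round(2), b[:, 0].mean().round(2))  # ~0, ~0
print("y means:", a[:, 1].mean().round(2), b[:, 1].mean().round(2))  # ~-3, ~3
```

"Maximize variance" looks like a neutral criterion, but it is already an answer to "what structures are we concerned with?", given before any computation runs.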
Part of the question, then, is also what should be dereified; this is part and parcel of the same question. A lot of frameworks rely on dereifying large aspects of the domain, sometimes explicitly, and sometimes by simply leaving them out of the discussion.
Ok dude, thanks for your philosophy 101 post.
No seriously, though. I think these distinctions are largely absent from our discourse, and in my experience, people are insensitive enough to them (or else just blended enough with their reifications) that they can't step out of them to investigate the abstractions they're using. Perhaps part of the trouble here is that this process is relatively phenomenologically subtle. If someone is insensitive to the process by which they apprehend the world, then they take the output of that process as real, and are mostly unable to investigate it. Or maybe I just suck at broaching these topics.
Again, I'll try to be clear: these questions are upstream of the rest of the decisions we make. The "blooming, buzzing confusion" has to be reduced to manipulable structures, on the basis of which we make the rest of our determinations. This even applies to judgements of value, regardless of which frameworks, if any, we use to make them.
And then, ok, getting closer to the actual discourse: how can we even do explicit tradeoff calculations? A number of attempts have been made to construct reifications to use as a basis for such calculations. I'm not as familiar with the literature as I would like to be, but my impression is that these basically lack any useful normative or descriptive power outside of narrowly defined abstract games. Nobody (?) is productively doing expected-value (EV) calculations, afaik, etc. etc.
(Okay, to be fair, some people are currently trying, so I suppose the above is already making judgements about the quality of their models and their choices. In this respect maybe I'm less confident. The main point I can certainly make here is that the choice and determination of reifications is upstream of these models.)
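To be concrete about where the reification hides in such a calculation, here is a toy sketch (invented numbers, not anyone's real model): the EV arithmetic is the trivial part; every contestable commitment was made earlier, when the outcome space was carved up and values were attached to it.

```python
# Invented toy example: the arithmetic of an EV calculation is easy;
# the contestable decisions all happened earlier, in the reification.

# Reification step (the actual crux): someone decided that these three
# outcomes exhaust the possibilities, that these probabilities describe
# them, and that "value" is one real number per outcome.
outcomes = {
    "project succeeds":  (0.2, 100.0),  # (probability, assigned value)
    "project fails":     (0.7, -10.0),
    "weird side effect": (0.1, -50.0),
}

# Calculation step (the easy part): a weighted sum.
ev = sum(p * v for p, v in outcomes.values())
print(f"EV = {ev}")  # 0.2*100 + 0.7*(-10) + 0.1*(-50) = 8.0
```

None of the difficulty is in the sum; it is in deciding that these, and only these, were the things to count.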
This isn't even mostly about utilitarianism, either. I see this question as fundamental, but still latent, in questions all over the place: sex, justice, freedom, correctness (of a few different kinds) come easily to mind.
But nonetheless we have to make and use lossy representations! No choice is still a choice; at some point various decisions get made, at personal, organizational, and civilizational levels. The point here is not that reifications are bad, but to make this distinction salient so we can discuss the process more clearly.
2 comments
Comments sorted by top scores.
comment by Dagon · 2023-12-20T17:38:43.431Z · LW(p) · GW(p)
Can you give an example of some decision or model that does it well? I don't disagree with the framework of recognizing that all of this is human-level modeling, and not actually encoded in the real world, but I don't know that I get anything out of questioning or changing most of the "default" models of human experience.
Note that when you say "reification", my mind replaces it with "model", "map", or "focus". If you mean something else, the source of my confusion is clear.
Edit: one source of my confusion may be the use of "reified" as a passive verb, which happens to ideas without specifying the actor. I sometimes trip up on "model" in the same way, wanting to clarify whether it's an agent modeling some prediction, or a model that exists in a vacuum which could be used to make a prediction. Relatedly, the use of "we" without acknowledging the variance among humans or whether it's a recommendation to others or an observation of yourself.
↑ comment by herschel (hrs) · 2023-12-20T20:43:17.731Z · LW(p) · GW(p)
Yeah, I'm working towards that. Cheap examples would be like, the hard sciences have made certain kinds of assumptions (reductionism, primacy of formal models, etc.) which have been extremely generative. There are lots of locally extremely helpful reifications; certainly various kinds of optimization criteria are very useful for generating strategies, etc. A big part of my point is that these should sort of always be taken as provisional.
Note that when you say "reification", my mind replaces it with "model", "map", or "focus". If you mean something else, the source of my confusion is clear.
"Focus" is the best among these but isn't great. In most cases I think a lot of reifications are upstream of specific models, or generate them or something. Like, reification is the process (implicitly) of choosing weights for tradeoff calculations, but more generally for salience or priority etc. Maybe an ok example would be like, in economics we start trying to construct measures on the basis of which to determine etc etc., and even before these get goodharted we've already tried to collapse the complexity of the domain into a small number of factors to think about. We might even have destroyed a number of other important dimensions in this representation. This happens in some sense before rest of the model is constructed.
one source of my confusion may be the use of "reified" as a passive verb, which happens to ideas without specifying the actor.
This may just be sloppiness on my part. I usually mean something like, "an idea, as held by a person, which that person has reified." Compare to e.g. "a loved one" or something like that.