Philosophy as low-energy approximation

post by Charlie Steiner · 2019-02-05T19:34:18.617Z · 20 comments

In 2015, Scott Alexander wrote a post originally titled High Energy Ethics. The idea is that when one uses an extreme thought experiment in ethics (people dying, incest, the extinction of humanity, etc.), this is like smashing protons together at nearly the speed of light at the LHC - an unusual practice, but one designed to teach us something interesting and fundamental.

I'm inclined to think that not only is that a slight mischaracterization of what's going on, but that all philosophical theories that make strong claims about the "high energy" regime are doubtful. But first, physics:

<physics>

Particle physics is about things that are very energetic - if we converted the energy per particle into a temperature, we could say the LHC produces conditions in excess of a trillion (1,000,000,000,000) degrees. But there is also a very broad class of physics topics that only seem to show up when it's very cold - the superconducting magnets inside said LHC, only a few meters away from the trillion-degree quarks, need to be cooled to within a couple of degrees of absolute zero before they superconduct.
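
(A back-of-the-envelope sketch of that conversion, my own addition rather than the post's: temperature is just energy per particle divided by Boltzmann's constant, and the ~160 MeV scale at which the quark-gluon plasma forms is the relevant energy here, not the full beam energy.)

```python
# Rough sketch (my numbers, not the post's): converting an energy per
# particle to an equivalent temperature via T = E / k_B.
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def energy_to_temperature(energy_ev: float) -> float:
    """Equivalent temperature for a given energy per particle."""
    return energy_ev / K_B_EV_PER_K

# The quark-gluon plasma forms at an energy scale of roughly 160 MeV:
print(f"{energy_to_temperature(160e6):.1e} K")  # ~1.9e12 K - about two trillion degrees
```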

The physics of superconductors is similarly a little backwards of particle physics. Particle physicists try to understand normal, everyday behavior in terms of weird building blocks. Superconductor physicists try to understand weird behavior in terms of normal building blocks.

The common pattern here is the idea that the small building blocks (in both fields) get "hidden" at lower energies. We say that the high-energy motions of the system get "frozen out." When a soup of fundamental particles gets cold enough, talking about atoms becomes a good low-energy approximation. And when atoms get cold enough, we invent new low-energy approximations like "the superconducting order parameter" as yet more convenient descriptions of their behavior.
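
(A minimal numerical illustration of "freezing out", again my own: in thermal equilibrium, the chance of finding a system in a high-energy state is Boltzmann-suppressed, so at low temperature the high-energy motion effectively drops out of the description.)

```python
import math

K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def excited_fraction(gap_ev: float, temp_k: float) -> float:
    """Equilibrium occupation of the excited state of a two-level system."""
    if temp_k <= 0:
        return 0.0
    w = math.exp(-gap_ev / (K_B_EV_PER_K * temp_k))
    return w / (1.0 + w)

# A 1 eV gap (roughly the scale of atomic excitations) at three temperatures:
for t in (300, 3_000, 30_000):
    print(f"T = {t:>6} K: excited fraction = {excited_fraction(1.0, t):.2e}")
# At room temperature the excited state is occupied ~1e-17 of the time -
# that degree of freedom is "frozen out", and a model that ignores it
# (atoms sitting in their ground states) becomes a good approximation.
```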

</physics>

Some philosophers think that they're like particle physicists, elucidating the weird and ontologically basic stuff inside the everyday human. The better philosophers, though, are like superconductor physicists, trying to understand the unusual (in a cosmic sense) state of humanity in terms of mundane building blocks.


My favorite example of a "low-energy approximation" in philosophy, and the one that prompted this post, is Dennett's intentional stance. The intentional stance advertises itself as a useful approximation. It's a way of thinking about certain systems (physical agents) that are, at bottom, evolving according to the laws of physics with detail more complicated than we can comprehend directly. Even though the microscopic world is too complicated for us, we can use this model, the intentional stance, to predict physical agents (not-quite tautologically defined as systems the intentional stance helps predict) using a more manageable number of free parameters.
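
(Here is a toy sketch, entirely my own invention, of what "predicting with a manageable number of free parameters" can look like: a two-parameter intentional-stance model tracks a system whose true update rule we treat as opaque microphysics.)

```python
# Toy illustration (mine, not Dennett's): the intentional stance as a cheap
# predictive model with two free parameters - the coordinates of a goal.

def true_system_step(pos):
    """Stand-in for the microphysics: some update rule we pretend is too
    complicated to reason about directly (it happens to drift toward (5, 5))."""
    x, y = pos
    toward = lambda a, target: a + (1 if a < target else -1 if a > target else 0)
    return (toward(x, 5), toward(y, 5))

def intentional_stance_step(pos, goal):
    """The approximation: 'this system wants to reach its goal.'"""
    x, y = pos
    gx, gy = goal
    toward = lambda a, target: a + (1 if a < target else -1 if a > target else 0)
    return (toward(x, gx), toward(y, gy))

real, model, goal = (0, 0), (0, 0), (5, 5)  # goal inferred from past behavior
for _ in range(6):
    real = true_system_step(real)
    model = intentional_stance_step(model, goal)
    assert real == model  # the two-parameter model tracks the real trajectory
print("intentional stance prediction matched:", real)
```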

But sometimes approximations break down, or fail to be useful - the approximation depends on certain regularities in the world that are not guaranteed by physical law. To be direct, the collection of atoms we think of as a "human" isn't an agent in the abstract sense. It can be approximated as an agent, but that approximation will inevitably break down in some physical situations. The psychological properties that we ascribe to humans only make sense within the approximation - "In truth, there are only atoms and the void."

Taken to its logical conclusion, this is a direct rejection of most varieties of the "hard problem of consciousness." The hard problem asks, how can you take the physical description of a human and explain its Real Sensations - our experiences that are supposed to have their own extra essences, or to be directly observed by an "us" that is an objective existence. But this is like asking "Human physical bodies are only approximate agents, so how does this generate the real Platonic agent I know I am inside?" In short, maybe you're not special. Approximate agents also suffice to write books on philosophy.

Show me a model that's useful for understanding human behavior, and I'll show you someone who's taken it too literally. Beliefs, utterances, meanings, references, and so on - we just naturally want to ask "what is the true essence of this thing?" rather than "what approximation of the natural world has these objects as basic elements?" High-energy philosophy totally fails to accept this reality. When you push humans' intuitions to extremes, you don't get deep access to what they really mean. You just get junk, because you've pushed an approximation outside its domain of validity.
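
(To make "domain of validity" concrete, a stock example from physics rather than philosophy, my choice of illustration: the small-angle approximation sin(x) ≈ x is excellent near zero and pure junk far from it.)

```python
import math

# The small-angle approximation sin(x) ~ x: reliable inside its domain of
# validity, junk outside it.
for x in (0.01, 0.1, 0.5, 2.0, 10.0):
    exact = math.sin(x)
    print(f"x = {x:>5}: sin(x) = {exact:+.4f}, approx = {x:+.4f}, "
          f"error = {abs(x - exact):.2e}")
# Near x = 0 the error is ~1e-7; by x = 10 the "approximation" doesn't even
# get the sign right. The model wasn't wrong - it was pushed out of its regime.
```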

Take Putnam's Twin Earth thought experiment, where we try to analyze the idea (essence?) of "belief" or "aboutness" by postulating an entire alternate Earth that periodically exchanges people with our own. When you ponder it, you feel like you are getting insights into the true nature of believing. But more likely, there is no "true nature of believing," just some approximations of the natural world that have "belief"s as basic elements.

In the post on ethics, Scott gives some good examples of highly charged thought experiments, and in some ways ethics is different from psychology - modern ethics acknowledges that it's largely about rhetoric and collaboration among human beings. And yet it's telling that the examples are all counterexamples to other people's pet theories. If Kant claims you should never ever lie, all you need to refute him is one counterexample, and it's okay if it's a little extreme. But just because you can refute wrong things with high-energy thought experiments doesn't mean that there's some right thing out there that's immune to refutation at all energies. The lesson of high energy ethics seems to be that every neat ethical theory breaks down in some high-energy situation.

Applications to value learning left (for now) as an exercise for the reader.

20 comments

comment by MrMind · 2019-08-01T13:12:15.408Z

I arrived at the same conclusion when I tried to make sense of the Metaethics Sequence. My summary of Eliezer's writings is: "morality is a bunch of mental computations shared between most human beings". Morality thus grew out of our evolutionary history, and it should not be surprising that in extreme situations it might be incoherent or maladaptive.

Only if you believe that morality should be systematic, universal, and coherent can you say that extreme examples are uncovering something interesting about people's morality.

Otherwise, extreme situations are as interesting as saying that people cannot mentally factor long numbers.

comment by TAG · 2019-02-06T11:02:54.133Z

Take Putnam’s Twin Earth thought experiment, where we try to analyze the idea (essence?) of “belief” or “aboutness” by postulating an entire alternate Earth that periodically exchanges people with our own.

Maybe he is just trying to find a good model. If you want to accuse people of excessive literalism, you need some firm examples.

Show me a model that’s useful for understanding human behavior, and I’ll show you someone who’s taken it too literally.

Go on, then.

comment by TAG · 2019-02-06T08:59:10.606Z

Taken to its logical conclusion, this is a direct rejection of most varieties of the “hard problem of consciousness.” The hard problem asks, how can you take the physical description of a human and explain its Real Sensations—our experiences that are supposed to have their own extra essences, or to be directly observed by an “us” that is an objective existence.

The Hard Problem is not a statement insisting that qualia are irreducible; it is the question of what they reduce to. If they don't reduce, there is no hard problem (and physicalism is false - you only face the HP given physicalism).

You imply that qualia are only approximate high-level descriptions. But we still don't have a predictive and reductive theory of qualia as high-level emergent phenomena, as we do with, for instance, heat. It lowers the bar, but not enough.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2019-02-06T14:04:38.576Z

Suppose that we show how certain physical processes play the role of qualia within an abstract model of human behavior. "This pattern of neural activities means we should think of this person as seeing the color red," for instance.

David Chalmers might then say that we have merely solved an "easy problem," and that what's missing is whether we can predict that this person - this actual first-person point of view - is actually seeing red.

This is close to what I parody as "Human physical bodies are only approximate agents, so how does this generate the real Platonic agent I know I am inside?"

When I think of myself as an abstract agent in the abstract state of "seeing red," this is not proof that I am actually an abstract Platonic Agent in the abstract state of seeing red. The person in the parody has been misled by their model of themselves - they model themselves as a real Platonic agent, and so they believe that's what they have to be.

Once we have described the behavior of the approximate agents that are humans, we don't need to go on to describe the state of the actual agents hiding inside the humans.

Replies from: TAG
comment by TAG · 2019-02-06T14:41:20.022Z

Suppose that we show how certain physical processes play the role of qualia within an abstract model of human behavior. “This pattern of neural activities means we should think of this person as seeing the color red,” for instance. [..]This is close to what I parody as “Human physical bodies are only approximate agents, so how does this generate the real Platonic agent I know I am inside?”

But we know that we do see red. Red is not an invisible spook inside someone else.

When I think of myself as an abstract agent in the abstract state of “seeing red,” this is not proof that I am actually an abstract Platonic Agent in the abstract state of seeing red. The person in the parody has been misled by their model of themselves—they model themselves as a real Platonic agent, and so they believe that’s what they have to be.

Once we have described the behavior of the approximate agents that are humans, we don’t need to go on to describe the state of the actual agents hiding inside the humans.

We don't need to bring in agency at all. You are trying to hitch something you can be plausibly eliminativist about to something you can't.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2019-02-06T19:19:07.482Z

I'm supposing that we're conceptualizing people using a model that has internal states. "Agency" of humans is shorthand for "conforms to some complicated psychological model."

I agree that I do see red. That is to say, the collection of atoms that is my body enters a state that plays the same role in the real world as "seeing red" plays in the folk-psychological model of me. If seeing red makes the psychological model more likely to remember camping as a child, exposure to a red stimulus makes the atoms more likely to go into a state that corresponds to remembering camping.

"No, no," you say. "That's not what seeing red is - you're still disagreeing with me. I don't mean that my atoms are merely in a correspondence with some state in an approximate model that I use to think about humans, I mean that I am actually in some difficult to describe state that actually has parts like the parts of that model."

"Yes," I say "- you're definitely in a state that corresponds to the model."

"Arrgh, no! I mean when I see red, I really see it!"

"When I see red, I really see it too."

...

It might at this point be good for me to reiterate my claim from the post, that rather than taking things in our notional world and asking "what is the true essence of this thing?", it's more philosophically productive to ask "what approximate model of the world has this thing as a basic object?"

Replies from: TAG
comment by TAG · 2019-02-07T08:59:42.529Z

Models can omit things that are there as well as include things that aren't there. That's the whole problem.

I'm always in the exact state that I am in, and those states include conscious experience. You can and have built a model which is purely functional, and in which red only features as a functional role or behavioural disposition. But you don't get to say that your model is in exact two-way correspondence with my reality. You have to show that a model is exact, and that is very difficult; you can't just assert it.

it’s more philosophically productive to ask “what approximate model of the world has this thing as a basic object?”

Why can't I ask "does this approximate model leave anything out"?

If physicist A builds a model that leaves out friction, say, physicist B can validly object to it. And that has nothing whatever to do with "essences" or ontological fundamentalness. No one thinks friction or cows' legs are fundamental. The rhetoric about essences is a red herring. (Or, if it is valid, surely you can use it to justify any model of any simplicity.) I think the spherical cow model is inaccurate because every cow I have ever seen is squarish with a leg at each corner. That's an observation, not a metaphysical claim.

I agree that I do see red. That is to say, the collection of atoms that is my body enters a state that plays the same role in the real world as “seeing red” plays in the folk-psychological model of me.

Seeing red is more than a role or disposition. That is what you have left out.

Replies from: Charlie Steiner, dxu
comment by Charlie Steiner · 2019-02-07T22:41:12.593Z

Seeing red is more than a role or disposition. That is what you have left out.

Suppose epiphenomenalism is true. We would still need two separate explanations - one explanation of your epiphenomenal activity in terms of made-up epiphenomenology, and a different explanation for how your physical body thinks it's really seeing red and types up these arguments on LessWrong, despite having no access to your epiphenomena.

The mere existence of that second explanation makes it wrong to have absolute confidence in your own epiphenomenal access. After all, we've just described approximate agents that think they have epiphenomenal access, and type and make facial expressions and release hormones as if they do, without needing any epiphenomena at all.

We can imagine the approximate agent made out of atoms, and imagine just what sort of mistake it's making when it says "no, really, I see red in a special nonphysical way that you have yet to explain" even when it doesn't have access to the epiphenomena. And then we can endeavor not to make that mistake.

If I, the person typing these words, can Really See Redness in a way that is independent of, or additional to, a causal explanation of my thoughts and actions, my only honest course of action is to admit that I don't know about it.

Replies from: TAG
comment by TAG · 2019-02-08T09:09:43.051Z

It's wrong to have absolute confidence in anything. You can't prove that you are not in a simulation, so you can't have absolute confidence that there is any real physics.

Of course, I didn't base anything on absolute confidence.

You can put forward a story where expressions of subjective experience are caused by atoms, and subjective experience itself isn't mentioned.

I can put forward a story where ouches are caused by pains, and atoms aren't explicitly mentioned.

Of course you now want to say that the atoms are still there and playing a causal role, but have gone out of focus because I am using high-level descriptions. But then I could say that subjective states are identical to aggregates of atoms, and therefore have identical causal powers.

Multiple explanations are always possible, but aren't necessarily about rival ontologies.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2019-02-08T19:53:40.625Z

Anyhow, I agree that we have long since been rehashing standard arguments here :P

Replies from: TAG
comment by TAG · 2019-02-09T11:05:11.496Z

How likely is it that you would have solved the Hard Problem? Why do people think philosophy is easy, or full of obvious confusions?

Replies from: Charlie Steiner
comment by Charlie Steiner · 2019-02-10T03:14:54.835Z

About 95%. Because philosophy is easy* and full of obvious confusions.

(* After all, anyone can do it well enough that they can't see their own mistakes. And with a little more effort, you can't even see your mistakes when they're pointed out to you. That's, like, the definition of easy, right?)

95% isn't all that high a confidence, if we put aside "how dare you rate yourself so highly?" type arguments for a bit. I wouldn't trust a parachute that had a 95% chance of opening. Most of the remaining 5% is not dualism being true or us needing a new kind of science, it's just me having misunderstood something important.

comment by dxu · 2019-02-08T00:14:58.800Z

Seeing red is more than a role or disposition. That is what you have left out.

Do you have any evidence for this claim, besides a subjective feeling of certainty?

Replies from: TAG
comment by TAG · 2019-02-08T08:03:46.541Z

Subjective experience can't be demonstrated objectively. On the other hand, demanding objective evidence of subjectivity biases the discussion away from taking consciousness seriously.

I don't have a way out of the impasse. The debate amongst professional philosophers is logjammed, so this one is as well. (However, this demonstrates a meta-level truth: there is no neutral epistemology.)

comment by Signer · 2019-02-07T03:49:23.254Z

The hard problem asks, how can you take the physical description of a human and explain its Real Sensations—our experiences that are supposed to have their own extra essences, or to be directly observed by an “us” that is an objective existence.

The hard problem is more like "what part of the Schrödinger equation says that it describes a non-zombie world" - you can point out the part where a human body doesn't act like an agent (with interesting goals) if it is placed in a vacuum, and in principle why it does, when it does, but there are more issues with doing this with what people think of as consciousness. Personally I think panpsychism gives a satisfying enough answer ("the part where we say that the world it describes is real"), and so there is not much disagreement between the hard problem and ethically significant consciousness being non-fundamental. But it doesn't mean the hard problem is meaningless.

comment by TAG · 2019-02-06T11:12:44.865Z

If Kant claims you should never ever lie, all you need to refute him is one counterexample, and it’s okay if it’s a little extreme. But just because you can refute wrong things with high-energy thought experiments doesn’t mean they’re going to help you find the right thing.

I don't see why not. If virtue theory, deontology and consequentialism all, separately, go wrong under some circumstances, then you probably need an ethics that combines the strengths of all three.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2019-02-06T13:40:12.134Z

Also replying to:

I am not clear how you are defining HEphil: do you mean (1) that any quest for the ontologically basic is HEphil, or (2) treating mental properties as physical is the only thing that is HEphil ?

Neither of those things is quite what I meant - sorry if I was unclear. The quest for the ontologically basic is what I call "thinking you're like a particle physicist" (not inherently bad, but I make the claim that when done to mental objects it's pretty reliably bad). This is distinct from "high energy philosophy," which I'm trying to use in a similar way to Scott.

High Energy Philosophy is the idea that extreme thought experiments help illuminate what we "really think" about things - that our ordinary low-energy thoughts are too cluttered and dull, but that we can draw out our intuitions with the right thought experiment.

I argue that this is a dangerous line of thought because it's assuming that there exists some "what we really think" that we are uncovering. But what if we're thinking using an approximation that doesn't extend to all possible situations? Then asking what we really think about extreme situations is a wrong question.

[Even worse is when people ignore the fact that the concept is a human invention at all, and try to understand "the true nature of belief" (not just what we think about belief) by conceptual analysis.]

So, now, back to the question of "the correct ethical theory." What, one might ask, is the correct ethical theory that captures what we really value in all possible physical situations (i.e. "extends to high energy")?

Well, one can ask that, but maybe it doesn't have an answer. Maybe, in fact, there is no such object as "what we really value in all possible physical situations" - it might be convenient to pretend there is in order to predict humans using a simple model, but we shouldn't try to push that model too far.

(EDIT: Thanks for asking me these questions / pressing me on these points, by the way.)

Replies from: TAG
comment by TAG · 2019-02-06T14:49:27.248Z

I argue that this is a dangerous line of thought because it’s assuming that there exists some “what we really think” that we are uncovering. But what if we’re thinking using an approximation that doesn’t extend to all possible situations?

Then the thought experiment is a useful negative result telling us we need something more comprehensive.

[Even worse is when people ignore the fact that the concept is a human invention at all, and try to understand “the true nature of belief” (not just what we think about belief) by conceptual analysis

What's the problem? Even if all concepts are human-made, that doesn't mean we have perfect reflective access to them for free. Thought experiments can be seen as a way of informing the conscious mind what the unconscious mind is doing.

Well, one can ask that, but maybe it doesn’t have an answer.

Or maybe it does. Negative results are still information, so it is hard to see how we can solve problems better by avoiding thought experiments.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2019-02-06T18:31:45.357Z

Then the thought experiment is a useful negative result telling us we need something more comprehensive.

Paradigms also outline which negative results are merely noise :P I know it's not nice to pick on people, but look at the negative utilitarians. They're perfectly nice people, they just kept looking for The Answer until they found something they could see no refutation of, and look where that got them.

I'm not absolutely against thought experiments, but I think that high-energy philosophy as a research methodology is deeply flawed.

comment by TAG · 2019-02-06T10:47:11.138Z

Some philosophers think that they’re like particle physicists, elucidating the weird and ontologically basic stuff inside the everyday human.

That includes physicalists, who think the ontologically basic stuff is quarks and electrons.

I am not clear how you are defining HEphil: do you mean (1) that any quest for the ontologically basic is HEphil, or (2) treating mental properties as physical is the only thing that is HEphil ?