The ignorance of normative realism bot

post by Joe Carlsmith (joekc) · 2022-01-18T05:26:07.676Z · LW · GW · 6 comments

Contents

  I. What’s in the envelope?
  II. Can you touch the ghostly frosting with your mind?
  III. Does the frosting exist?
  IV. Third-factor explanations
  V. Is this just generic skepticism?
  VI. How to be a normative realist
  VII. Will the aliens agree?

(Cross-posted from Hands and Cities)

And you want to travel with her, and you want to travel blind
And then you know that she will trust you
For you’ve touched her perfect body with your mind

 — Leonard Cohen, “Suzanne”

Non-naturalist normative realism claims that there are objective normative facts that are irreducibly “over and above” facts about the natural world (see here [EA · GW] for more details). This post lays out what I see as the most important objection to such realism: namely, that it leaves us without the right type of epistemic access to the normative facts it posits.

To illustrate the point, I discuss various robots in unfortunate epistemic situations. Non-naturalist realism, I claim, makes ours analogous.

Thanks to Ketan Ramakrishnan and Katja Grace for discussion. My views on this topic have been especially influenced by the work of Sharon Street.

I. What’s in the envelope?

In a previous post [EA · GW], I discussed a robot whose value system is built to assume the existence of a certain envelope, inside of which is written an algorithm for scoring the “goodness” of a given world. The robot’s objective is to maximize the goodness of its world, as scored by the envelope. Call such a robot an “envelope bot.”

In some situations, this robot is totally screwed. Here’s one. Suppose that Rosie the envelope bot wakes up in a simulation created by Eccentric Ed. Ed’s avatar appears before Rosie and tells her:

“Outside the simulation, back in my office, I have the envelope on my desk. No one has ever opened the envelope, including me, and no one knows what process determined its contents.

Separately, though, I’ve given you a bunch of “evaluative attitudes” – e.g., intuitions about what’s in the envelope, desires to do different things, experiences of different things as “calling to you” or “to be done,” and so on. I chose the content of these by calling up Angela Merkel and asking her what kind of stuff reminds her of apple pie. Turns out Angela has a weird association between apple pie and helium, so you ended up with a set of evaluative attitudes that tend to favor maximizing the amount of simulated helium, albeit in somewhat incoherent ways.

You’ve got about 100 simulated years to live. After that, I’ll bring you out of the simulation, open up the envelope, and tell you how much you improved the world by its lights.”  

My first claim is that Rosie, here, is in a bad way. Why? She has no clue what’s in the envelope.

In particular: her evaluative attitudes aren’t any help. They were determined via a process – namely, by Ed calling up Angela Merkel and asking her about her apple pie associations – that lacks an epistemically relevant connection to the envelope’s contents. Angela’s associating apple pie with helium just isn’t evidence that the envelope’s contents are about helium. And it’s extremely unlikely, on priors, that these contents just happen to be about helium: they could, after all, be about anything.[1]

And note that performing some kind of “idealization” or “reflective equilibrium” procedure on Rosie’s evaluative attitudes doesn’t help. This will just lead to some more coherent version of helium-maximizing. Garbage in, garbage out.

Nor is there some way to figure out the envelope’s contents by “starting from scratch” — ignoring her intuitions, desires, and so forth, and relying on “reason” and “logic” instead (as some utilitarians I meet seem to think they are doing – wrongly, in my opinion). True, reasoning on priors as to what process might’ve fixed the contents of the envelope is better, here, than “seems like helium, probably it’s that.” But it won’t go very far.

Perhaps you reply: “Ah, but doesn’t Rosie have some kind of pro-tanto warrant for trusting her intuitions?” Maybe. But even if so, I don’t think it survives the encounter with Ed’s avatar. The causal origins of her intuitions are, as the epistemologists say, a “defeater.”  

“No they aren’t. It doesn’t matter what her encounter with Ed reveals about the causal origins of her evaluative attitudes. Rosie’s intuitions give her reason to think that the envelope loves helium, so she has reason to think that the causal processes that gave rise to her attitudes happened to lead her to the truth.”[2]

Do you think that in the version of the case where the envelope’s contents were chosen by literally drawing a utility function out of a hat with a trillion utility functions, only one of which was ‘maximize helium’? That feels to me like straightforwardly bad Bayesianism.

“No, but in that case there’s a physical mechanism that mandates that Rosie put a one-in-a-trillion prior on ‘maximize helium,’ lest her credences fail to reflect the objective chances. Absent such a mechanism, though, Rosie is entitled to put a high prior on the envelope loving helium.”

Is Rosie entitled to put a high prior on the billionth digit of pi being 9, on the basis of an intuition about this that Ed chose by asking Angela Merkel which digit reminds her most of apple pie? No physical mechanism is determining pi, after all.

“No, but that case is different. There’s some other very natural sense in which your prior should be 10% on 9, which doesn’t apply to the envelope. And assigning probabilities to the space of ‘all possible utility functions that might get written down in an envelope’ is tricky.”

But does “tricky” entail “whatever prior I want to have is reasonable?” Not to my mind. To me it looks like: no way that envelope just happens to say “maximize helium.” But whatever: as ever, if Rosie insists on having a wonky prior, I think I probably just want to start betting with her, rather than arguing. If I were Rosie, though, I would not say, on encountering Ed, “on priors, though, isn’t it sufficiently likely to be helium that I’m justified in assuming that Angela’s apple pie associations happened to give me the right evaluative attitudes?” Rather, I would be like: “sh**, I’m so screwed…”
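The Bayesian point in this exchange can be made concrete with a toy calculation (all numbers hypothetical): if Rosie’s attitudes were fixed by Angela’s associations rather than by anything connected to the envelope, then “my attitudes favor helium” is equally probable whether or not the envelope loves helium, so the likelihood ratio is 1 and the one-in-a-trillion prior doesn’t budge.

```python
# Toy sketch of the prior/posterior point above (numbers are illustrative).
# Because Rosie's attitudes were fixed by Angela's apple-pie associations,
# not by the envelope, the probability of "attitudes favor helium" is the
# same under either hypothesis, and the evidence carries no weight.

prior_helium = 1e-12  # one utility function out of a trillion in the hat

# Likelihoods are equal by hypothesis: the attitude-forming process has no
# epistemically relevant connection to the envelope's contents.
p_attitudes_given_helium = 0.5
p_attitudes_given_not_helium = 0.5

posterior = (p_attitudes_given_helium * prior_helium) / (
    p_attitudes_given_helium * prior_helium
    + p_attitudes_given_not_helium * (1 - prior_helium)
)

print(posterior)  # stays at ~1e-12: the "evidence" moves nothing
```

With a likelihood ratio of 1, updating is a no-op; a high posterior on helium would require either a wonky prior or a likelihood asymmetry that, by the setup of the case, isn’t there.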

Perhaps you reply: “Ah, but suppose that whatever’s in the envelope had to be in the envelope. In all possible worlds, the envelope contains the same utility function.” But would that matter? No. Angela’s apple pie associations would be no more evidence about the contents of a modally robust envelope than about a modally fragile one. Note, for example, that the billionth digit of pi had to be [whatever it is]. But “seems like it’s 9” is still no help, if it seems that way only because Angela associates apple pie with 9.

Perhaps you reply: “Ah, but suppose that in all nearby possible worlds, Rosie has the same evaluative attitudes. Suppose, for example, that giving her helium-maximizing attitudes is required, for some reason, in order to make the simulation work.[3] Or suppose that Angela could not have easily associated apple pie with something else.” But would that matter? No. It would just make Rosie more robustly screwed.

Perhaps you reply: “OK, but suppose that the envelope in fact does say to maximize helium (and would do so in all possible worlds), and the only way to make the simulation work is to give Rosie helium-maximizing attitudes. Thus, she is robustly right about the envelope. Is she OK then?” I still feel like: no. She’s still, in some deeper sense, just getting lucky. What’s needed is the right type of explanatory connection between her helium-oriented evaluative attitudes, and the contents of the envelope. Absent this connection, even if Rosie’s evaluative attitudes are in fact right about what’s in the envelope, she shouldn’t expect them to be.

Perhaps you reply: “Maybe Rosie is having some kind of mysterious rational insight into the contents of the envelope? Like, somehow just by thinking about it, she can tell what the envelope says?” More on this later, and this would indeed count as the type of explanatory connection I want. But obviously that’s not what’s going on in the envelope case (the normative case is indeed different in various respects). The envelope has never been opened, and it lives outside the simulation. Rosie can’t see through it with her mind.

As often on this blog, but even more so than usual, I’m playing fast and loose with issues that philosophers spend a lot of time on. I expect many of these philosophers to find this brief discussion frustrating and incomplete; and there is, indeed, much more to say.

Still, amidst all the philosophical subtleties and complications, when I step back and look at the Rosie case (and even at versions of the case where we add in various types of modal robustness), I feel like: she’s screwed. I’m not trying to give some theory of knowledge or justification or “safety” or whatever; I don’t even have a worked-out view about exactly what criteria Rosie fails to satisfy, here. But I expect her rendezvous with Ed after 100 years to go very poorly, envelope-wise. She can talk all she wants about being “entitled” to trust her intuitions; or about how, if the envelope in fact says to maximize helium, then (in some versions of the case) she’ll count as “knowing” that it does; or about how, just as you were lucky not to be born as a deluded brain in a vat, she was lucky that Angela’s apple pie associations gave her the correct beliefs about the envelope’s love of helium. But in practice, at the end of the day, she’s going to lose. Or so I would predict, if I were her.

II. Can you touch the ghostly frosting with your mind?

My next claim is that the most standard form of non-naturalist normative realism puts us in an epistemic situation analogous to Rosie’s.

Why? Because on this form of realism, as its proponents widely acknowledge, normative stuff has no causal interaction with the natural world. Call this the “causal inertness” thesis. This thesis distinguishes normative stuff not just from quantum fields and electrons and neurons and the like, but also from “higher-level” things like inflation levels, NIMBY, and Chinese nationalism about Taiwan. Things can be caused on multiple levels at once – and yet, says causal inertness, none of these levels are normative levels. To cause stuff, you need to be an “is”; but normativity is, irreducibly, an “ought.” The two are just too different. 

You might find this strange. “Isn’t the wrongness of slavery at least part of what caused the campaigns to end slavery?” “Isn’t my belief that pain is bad partly caused by the badness of pain?” “Isn’t the existence of normative properties part of what causes us to write books about them?” Indeed, this is how we often talk – in fact, it’s a way of talking that even realists who accept causal inertness often slip into, in violation of their own metaphysical commitments. And the slip is understandable: we often expect domains in good epistemic standing to play some kind of causal role in explaining our beliefs about them (the properties of cells should play some role in causing the beliefs of cell biologists; the properties of economies should play some role in causing the beliefs of economists, and so on). It can be hard, especially in a context that treats normative inquiry as analogous to other forms of inquiry about the objective world, to keep vividly in mind that, for the standard non-naturalist realist, normativity isn’t like that.

This sort of realist leaves us with what I will uncharitably label a “ghostly frosting” picture. On such a picture, there is (a) the natural world, and (b) an extra other thing, floating on top of (more accurately: “supervening on”) that world, but which plays no role in causing any natural stuff, including beliefs, intuitions, and so on. Let’s call this extra thing the “frosting.”[4]

The frosting is the only thing the non-naturalist normative realist cares about. Beauty, consciousness, joy, love – these things are nothing, if they have no invisible frosting (i.e., non-natural “goodness”) on top. And the same holds for suffering. Faced with someone being tortured, and whom the realist could save, this sort of realist needs to know whether supervening on the torture is some kind of non-natural “badness” or “wrongness.” If not, it’s a matter of indifference. In this sense, the realist’s relationship to the non-natural frosting is like Rosie’s relationship to the contents of the envelope. Faced with a chance to make helium, Rosie still needs to know: does the envelope love helium, too?

Yet despite its singular importance, the frosting is forever out of reach. It lives in a parallel dimension. When the world moves, the frosting moves with it; but the frosting never touches the world, and the world never touches the frosting, either (except by serving as its “supervenience base”). And note that the world, here, includes you, your mind, and your beliefs. These things, and the stuff that causally influences them, never interact with the frosting at all. The frosting is like the contents of the unopened envelope, which exert no influence whatsoever on the simulation — including on Rosie’s evaluative attitudes.

This feature of the frosting gives rise to various worrying epistemic properties. Notably, for example, if we moved the goodness frosting from “happiness” to “eating bricks,” you wouldn’t notice – just as Rosie wouldn’t notice if the envelope switched from loving helium to loving oxygen. And if we wiped the frosting away entirely, and were left with just the natural world in all its everyday glory, you wouldn’t notice that, either. The existence and distribution of the frosting, by hypothesis, makes no difference whatsoever to anything you will ever observe or intuit; nor has it ever done so. One feels the scientist types getting nervous.

Whether these properties of the frosting are enough to impugn its epistemic status is a topic of debate amongst philosophers. To me, at a gut level, they look extremely damning – not just to our knowledge of how the frosting is distributed, but to the idea that the frosting exists at all. But “if p were false, you’d still believe p” has fallen out of favor as an epistemic insult in the philosophy community (see the literature on “sensitivity”), and I’m not trying to dig into that debate here.

Rather, my argument is just: whatever makes Rosie screwed, with respect to the envelope, would make us screwed, with respect to the distribution of the frosting/non-natural normative stuff. Specifically, in Rosie’s case, her evaluative attitudes are determined by a causal process – Ed’s asking Angela Merkel what she associates with apple pie – that lacks the right type of connection to the contents of the envelope. But given the causal inertness of the normative, and its resulting disconnection from the natural world (and note that realism also precludes connection of a type where natural stuff causes or determines facts like “pleasure is intrinsically good” – more on this later), the same will be true of the causal process that determined our evaluative attitudes – since this process, too, will be a natural one, and thus unable to interact with (or be influenced by anything that has interacted with) the normative stuff. And like Rosie, we don’t have anything else to go on.

(To be clear: I’m not saying “here is the precise thing that makes the Rosie case bad, and here’s why it applies in the non-natural normativity case” – though see section IV for a bit more on this. Rather, I’m saying: “Look at these two cases. Aren’t they bad for the same reason?” To me it feels like they are.)

In the literature on this topic, the evolutionary origins of our evaluative attitudes have received an especially large amount of attention. But as various commentators have noted, evolution isn’t the point, here: the argument works just as well, for example, if we focus on the cultural origins of your beliefs. What matters is that whatever causal stories we can tell about why your evaluative attitudes are what they are – and we can tell these stories, recall, on multiple levels of abstraction – none of them, by hypothesis, will involve anything that has ever interacted with the frosting. The frosting, through it all, will stay lonely, off in its separate non-natural dimension – an envelope entirely un-opened, read (and written in) by nothing that touches you. Getting read (or written in), after all, is a form of touching; and touching stuff is not the envelope way.

In such a situation, then, insofar as we share the realist’s single-minded focus on the frosting, it seems to me we should say the same thing Rosie should say: “sh**, we’re so screwed.” In particular: we just don’t have any clue about which stuff has how much non-natural frosting on it, and we’re not going to get one. It’s not a matter of having to work extra-special hard to figure out where the frosting is – thinking for eons [? · GW], interviewing all the aliens, creating tons of new beings with new evaluative attitudes to interview, re-evolving ourselves in different situations. That’s just more “natural world” stuff. The aliens, the paperclippers you created in your lab, the alternative versions of yourself you evolved on different earths – none of these creatures are going to have touched the frosting, either. Would creating new creatures, or thinking super hard, have worked for Rosie?

Nor is it a matter of being willing to tolerate weird and counterintuitive results, or to bravely “follow the logic where it leads.” Logic, coherence, consistency – these, on their own, won’t distinguish happiness from helium, paperclips, brick-eating, and so on. We need substance; we need non-garbage in, to get non-garbage out; we need to open the damn envelope. But it’s too far away.

III. Does the frosting exist?

I say this as though, in such a situation, our main problem would be locating the frosting. But as gestured at above, I think we should also be having very serious doubts about whether the frosting exists at all. In this respect, our situation as realists would differ from Rosie’s. Rosie’s world hasn’t been touched by the envelope’s contents; but at least it’s been touched by the envelope itself. Ed, after all, interacted with the envelope, and then interacted with Rosie. She has positive evidence that the envelope exists.

The realist’s relationship to the frosting isn’t like that. Rather, the realist’s predicament is more like Robby’s, in this version of the case:

Robby the envelope bot wakes up in a simulation. Like all envelope bots, his value system presupposes the existence of a certain envelope, inside of which is written a utility function that it’s his job to maximize. For Robby, though, this envelope lives in a separate, parallel world, the contents of which no one can ever access, and the existence of which no one can ever verify.

Eccentric Ed’s avatar appears before Robby. “Hello, Robby. Here’s how I created you. First, I called up Angela Merkel and asked her what sorts of robotic value systems reminded her of strawberry jam. She told me about this funky set-up about presupposing the existence of an envelope in a parallel inaccessible world, so that’s the structure of your value system. Then, for your object-level evaluative attitudes, I asked her about her apple pie associations, and she started talking about helium – so now you’ve got helium-oriented evaluative attitudes. OK, that’s all. Good-bye.”

Robby, here, has a problem that Rosie does not: namely, his evidence that his envelope even exists is a lot worse. One might even say that no one, anywhere, has any reason to think this envelope exists. Maybe Robby gets some initial pro tanto warrant for believing in it anyway, because his value system presupposes/requires it (and because helium just seems so envelope-y) – but this presupposition is explained by a causal process that has no connection to the envelope’s existing or not (namely, Ed’s asking Angela what sorts of value systems she associates with strawberry jam). One worries, with the epistemologists, about “defeaters.” And while I think there’s a cleaner argument for “on priors, it’s unlikely that the envelope just happens to love helium” than for “on priors, it’s unlikely that there just happens to be some envelope hidden away in an inaccessible dimension,” both of them sound pretty solid to me.

The non-naturalist realist is like Robby. They wake up, convinced that there is an inaccessible, non-natural dimension telling them what to do and to value – or perhaps, that there must be, lest all deliberation turn to nonsense. But they learn that the origins of this conviction have no connection to the existence of the dimension in question. Such a revelation should, I think, be cause for doubt.

What’s more, when I boot up my own inner non-naturalist normative realist, I think that starting to seriously doubt the analogs of my helium intuitions (as, per the Rosie argument, I think one should) would also undermine an important source of my conviction that the envelope exists at all. That is, my inner normative realist is most compelled by experiences in which particular things – joy, love, beauty, etc – appear as transcendently and objectively good (bad, wrong [LW · GW], etc). I am much less compelled by the abstract intuition that something must be objectively good, even if it’s something totally random or horrible like brick eating or torture. That is, my inner normative realist is a “premise: X, Y, and Z are non-naturally good; conclusion: something is non-naturally good” type of guy. So if my premise turns out to be not just false but wildly off the mark (as I think a Rosie-type character should expect it to be), my devotion to the conclusion takes a corresponding hit (though I think that setting this up rigorously might be a bit tricky – and maybe it ultimately doesn’t work).

Of course, Robby can argue that, if the envelope doesn’t exist, all is dust and ashes, so might as well act like it does (see here [LW · GW] for more; also, Ross (2006)). But in a sober and non-self-deceptive hour, he should still expect to be screwed – not just by his ignorance of the envelope’s contents, but by its not existing at all. 

IV. Third-factor explanations

Let’s discuss a popular form of response to the original Rosie case – one that I associate centrally with Enoch (2011, Chapter 7), but which comes up elsewhere as well (see e.g. Copp (2008) and Skarsaune (2009)). Enoch suggests that the central issue for the realist is explaining the correlation between our normative beliefs, on the one hand, and the non-natural normative facts, on the other. The realist’s “realism” prevents the beliefs from determining the facts. And the causal inertness of the facts prevents them from determining the beliefs. What is the realist to do?

Enoch suggests that the realist can appeal to a “third factor,” which explains both the beliefs and the facts, even if neither is explained in terms of the other. And in some cases, I think, such a third factor would do just fine. Consider, for example, a version of the Rosie case in which a witch decided to throw a dart at the periodic table, and then to both (a) write “maximize [whatever element the dart hit]” in the envelope, and also (b) to cause Angela Merkel to associate this element with apple pie. If Rosie knows this, then she’s cooking with gas, envelope-wise: her helium-oriented attitudes are strong evidence that the envelope loves helium, too – despite the fact that the envelope’s contents do not, themselves, play a role in the causal explanation of Rosie’s attitudes.

But this isn’t the type of third-factor explanation that Enoch offers. Rather, he offers the following explanation of why our evolved attitudes roughly track the normative truth: namely, that “survival or reproductive success (or whatever else evolution ‘aims’ at) is at least somewhat good” (Skarsaune (2009) uses something like “pleasure is usually good,” and Copp (2008) “the true norms are the ones that promote societies meeting their needs” – but I’ll focus on Enoch). Thus, since evolutionary forces shaped us to value stuff in the vicinity of survival, it shaped us to have roughly accurate evaluative attitudes. Problem solved?

Not in my book. Indeed, I feel a fairly visceral level of dissatisfaction with this response – perhaps some readers out there, who share my sense of the problem in the original Rosie case, will feel it too. To me it feels a bit like Rosie (in the non-witch case) responding to Ed by saying something like: “Ok, well no worries, my helium-oriented evaluative attitudes are roughly accurate, and I can explain why: namely, the thing that Angela associates with apple pie is in fact loved by the envelope.” This isn’t a perfect analogy, but hopefully it conveys a certain kind of: wait, that’s the response? This is supposed to help?

Andreas Mogensen is more articulate: “The proposal succeeds only in relocating the point at which a coincidence must be posited” (p. 10). Before, the coincidence to explain was the one between our beliefs and the normative truth (supposing our beliefs are in fact accurate). Now, it’s the fact that the process that shaped our normative beliefs (evolution) also happened to be aimed at something good (survival). Should the possibility of positing such a coincidence be any comfort to people in Rosie-like situations? Doesn’t seem like it to me. Rather, it seems like the type of thing anyone could posit willy-nilly (note, for example, the diversity of proposals in the literature), whether it was true or not – much as “the envelope in fact loves helium, lucky me” feels that way.

And indeed, Enoch acknowledges that “a miracle remains,” and that his realism does indeed lose some “plausibility points” when it comes to epistemology (though he thinks that the price is “entirely affordable”). He thinks he’s made things better by only positing a coincidence between evolution’s aim and the good, rather than between all our diverse beliefs, and the normative facts. But I don’t think this helps much. In the Rosie case, for example, we need only posit “one coincidence” – between Angela’s apple pie associations, and the envelope’s love. But one is enough.

(There is also a separate worry about Enoch’s proposal: namely, that if our normative inquiry can never “touch” the normative facts, then even if evolution starts us off vaguely in the right ballpark, unless everything in the ballpark except the truth is actively incoherent, there’s very little reason to expect our reflective equilibrium process to converge on the actual truth, as opposed to something nearby but still wrong. If the envelope loves oxygen, for example, and Angela’s apple pie associations happen to start Rosie off with helium-oriented attitudes, then in some sense Rosie is “in the right ballpark” – indeed, so much so that a massive coincidence has taken place. But she still can’t get to oxygen from where she is.)

Note, as well, a key difference between Enoch’s “third-factor” explanation, and the Witch version of the Rosie case. In Enoch’s case, the frosting stays in its separate, non-natural, causally-inert realm – he just posits that it happens to float on top of evolution’s aim. The Witch case, by contrast, involves some non-envelope bits of the world (e.g., the Witch) interacting with the envelope – and in particular, actively influencing its contents. This allows the downstream effects of those bits of the world in other places (e.g., the Witch’s impact on Rosie’s evaluative attitudes) to serve as genuine evidence about the envelope (we could get more precise here by mapping things out Judea Pearl-style, but I’m going to skip that for now).
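A minimal simulation can make the Pearl-style contrast vivid (the setup and numbers here are my own, purely illustrative): when a Witch-like common cause fixes both the envelope’s contents and Rosie’s attitudes, the two correlate perfectly, so the attitudes are evidence; when the contents are fixed separately, off in their own causally inert realm, agreement is just chance.

```python
import random

# Illustrative common-cause vs. no-common-cause comparison.
# "witch_world": the Witch's dart fixes BOTH the envelope and the attitudes.
# "inert_world": envelope and attitudes are fixed by unconnected processes,
# as on the non-naturalist's picture of the frosting.

ELEMENTS = ["helium", "oxygen", "carbon", "iron"]

def witch_world(rng):
    element = rng.choice(ELEMENTS)     # common cause: the dart throw
    return element, element            # (envelope, attitudes)

def inert_world(rng):
    return rng.choice(ELEMENTS), rng.choice(ELEMENTS)  # independent draws

def agreement_rate(world, trials=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(envelope == attitudes
               for envelope, attitudes in (world(rng) for _ in range(trials)))
    return hits / trials

print(agreement_rate(witch_world))  # 1.0: attitudes track the envelope
print(agreement_rate(inert_world))  # ~0.25: no better than chance
```

In the first case, conditioning on Rosie’s attitudes genuinely shifts the probability of the envelope’s contents; in the second, it shifts nothing, which is the realist’s situation with respect to the frosting.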

But when it comes to normativity, the non-naturalist realist can’t allow this type of thing. Events in the natural world don’t get to “make it the case” that pleasure is intrinsically good, instead of helium, such that we could look at the downstream effects of such events to check which one it is. Nor does God decide what’s good, and then create us to love that. Rather, the normative facts have been written into the brute fabric of things since the dawn of time. Only: a separate, ghostly fabric. One that hovers over the natural bits of the tapestry. One that never weaves together with the rest.

In this sense, the realist’s problem isn’t, strictly, that the normative stuff can’t cause our evaluative attitudes (as critics of the “causal theory of knowledge” will be quick to note; and anyway, having heard Bob announce, on Monday, that he intends to go to the movies on Friday, your Monday belief that Bob will go to the movies on Friday is in good order, despite the fact that his Friday activities can’t cause your Monday belief). Rather, it’s more about whether normative stuff can enter into the right type of network of explanation and dependence with those attitudes. The normative stuff isn’t allowed to depend on “upstream” nodes, like God, which also flow downstream to influence our attitudes; it’s not allowed to be influenced by our attitudes themselves (e.g., to be “downstream” of our attitudes, as the anti-realists would have it); and it’s not allowed to influence our attitudes, either, because it’s causally inert (and other forms of “influence,” like some sort of non-causal determination of our attitudes, aren’t available to it either). But this makes it very hard for our attitudes – or indeed, anything about the natural world – to serve as evidence about the frosting’s whereabouts.

V. Is this just generic skepticism?

Let’s turn to what seems to me a more pressing and important response to the Rosie case: namely, is this, maybe, just a fully general skeptical problem, rather than a problem for the normative realist in particular? Here’s Berker (2014):

the skeptical worry here … it is just an instance of the general epistemological problem of how we can show that our most fundamental cognitive faculties (perception, introspection, induction, deduction, intuition— what have you) are reliable without relying [on] those very faculties when attempting to show this… There seems to be something viciously circular about appealing to a given cognitive faculty when attempting to vindicate the epistemic standing of that very faculty. But, with our most basic cognitive faculties, what recourse do we have except to appeal to those faculties during their vindication?

That is: you might’ve thought that there was something “trivially question-begging” about Rosie saying “well, in fact the envelope loves helium, so Angela’s apple pie associations haven’t led me astray,” or about Enoch saying “well, given that survival is in fact non-naturally good, evolution put our evaluative attitudes on track.” But maybe this is no more question-begging than the type of epistemic moves we have to pull all the time. Maybe, for example, in order to argue that your perceptual faculties are reliable, you need to assume the truth of various perceptual judgments. Maybe, in order to justify inductive reasoning, we will have to rely on induction. And so on.

Now, naively, it sure feels to me like there’s a difference between my story about the accuracy of my perceptual faculties, and Rosie’s story about the accuracy of her evaluative attitudes. Notably, for example, I can give a fairly detailed account of the way in which my perceptual faculties make me sensitive to the presence or absence of various features of my environment, and I can give a higher-level account of why my faculties would be sensitive in this way (namely, that there are evolutionary advantages to being able to track what’s going on around you). It’s true that, in order to start giving these accounts, I need to take for granted various deliverances of my perceptual faculties. But once I do so, I feel like a world of sense-making opens up before me. I am left with an internally consistent and vindicating story about the reliability of my perceptual faculties; a story that explains to me how they work, and why they work (see Sharon Street’s work for more on this). And it’s a story that, subjectively, I feel quite satisfied and comfortable with.

This isn’t how Rosie saying “the envelope loves helium, so I guess my ‘normative faculties’ are reliable” feels to me at all. First off, though maybe this is a nit-pick, Rosie does not have ‘normative faculties’ in the sense that an analogy with perceptual faculties might seem to imply. Rosie has a set of evaluative attitudes, yes, but these do not “detect” features of the envelope’s contents in the way that my ear detects the presence of a trumpet player. They aren’t “in touch” with anything envelope-y. Rather, they just happen to be right, or happen to be wrong. Rosie’s main move is to claim that they happen to be right, and maybe so: but sense-organs for reading the envelope at a distance, they are not. And the same is true, on non-naturalist normative realism, for the relationship between our own evaluative attitudes, and the ghostly frosting. Maybe evolution happened to make us right about how the frosting is distributed, but it didn’t give us “faculties” for touching the ghostly frosting with our minds.

Partly for this reason, Rosie can’t give some vindicatory, mechanistic explanation of how her normative faculties make her sensitive to the envelope’s contents, analogous to e.g. this explanation of how the ear detects sound. Nor can she offer some higher-level vindicatory explanation of why the process that created her would’ve given her this sensitivity. Rather, once she goes out on a limb and says “the envelope in fact loves helium, so Angela’s apple pie associations gave me accurate attitudes,” she just kind of… stays there, repeating herself. To me, it feels “precarious,” and “thin.” It feels like the object-level judgements she granted herself didn’t go on to allow her any further sense-making, nor did they circle back to vindicate themselves in a way that “did I mention that I’m right?” does not. 

Indeed, they did something worse: they (together with Ed) told her a story in which she got ridiculously lucky for no good reason. And this feels closer to the nub of the matter. After all, I think that people were justified in trusting their perceptual faculties before they knew how eyes and ears work, and before they had accurate vindicatory explanations for how those faculties arose. And I’m open to the idea that prior to hearing from Ed, Rosie might well have been justified in treating her helium-oriented intuitions about the envelope as accurate (who knows, after all, whether they might have been caused by the envelope’s contents, or whether whoever made her might’ve pulled a Witch-like dart-board maneuver, or whether she can somehow read the envelope with her mind from a distance). But once Ed tells her about her real situation, I think she should acknowledge that she’s screwed. That is, I think that Ed’s story functions as something like a “defeater”; and I don’t think we have a comparable defeater in the perceptual case. 

(Similarly, I think that prior to getting clarity about the causal inertness of the non-natural normative properties, and the naturalness of our minds and beliefs, and the absence of a God to orchestrate normative accuracy, the non-naturalist realists had more hope of avoiding skepticism – perhaps, after all, our minds can interact with the normative stuff, or perhaps God ensures that our normative picture isn’t too far gone (thanks to Ketan Ramakrishnan for discussion). But once the metaphysical story is clarified, and God banished…)

Berker disputes that the type of “vindication” our perceptual faculties offer for themselves grants any epistemic credit: 

Suppose we discover a book of unknown origin that makes various claims about a hitherto undocumented era of the historical past. We begin to wonder whether this book’s claims track the truth. Then we find, halfway through the book, an elaborate story about how books of this sort were carefully screened for their accuracy, the unreliable ones being destroyed. (The book contains a story of, as it were, unnatural selection that applies to itself.) Does this story give us any reason to think that our book tracks the truth? I say: no, it does not.

It’s a nice case, and it gives me pause. Note, though, that in the context of a dialectic in which we assume pro tanto warrant for trusting our intuitions, then retract it in the case of defeat, the book’s vindicatory self-explanation need not serve as extra, positive reason to trust it (though I do feel like my understanding of how and why my perceptual faculties work gives me extra reason to trust them – so that’s a difference from the book case). Rather, what matters is that the book doesn’t offer a sufficiently defeater-ish story about its accuracy (and that you had some reason to think it credible in the first place). Thus, for example, if the book said “I am one of thousands of books generated by an old king’s monkey jumping up and down on the king’s pen collection. However, strangely enough, I also happen to be totally accurate. Here follows a record of all the king’s exploits over the course of a year…” then that would be an active problem for our trust in the book – albeit, a problem with a somewhat strange character (e.g., we only mistrust the book’s accuracy on the basis of information it contains). If, before reading either of the books, an oracle told you that one of them was accurate, and the other was not, I’d bet on the first book. And the second book seems to me more analogous to Rosie’s story about herself (though note that in Rosie’s case, believing Ed’s defeater-ish account of her normative attitudes does not require trusting in them).

Perhaps, though, one now begins to suspect that too much weight is being placed on “luck” as a defeater. After all, aren’t you “lucky” not to have been born into a family that educated you to believe that slavery is OK, or that the earth is flat, or that evolution never happened (see e.g. Srinivasan (2015) for more)? Aren’t you “lucky” not to live in a simulation that makes you relentlessly wrong [LW · GW] (even about the simulated world), or in a world created five minutes ago with the appearance of age? Aren’t you lucky to have evolved in conditions where developing broadly truth-tracking models of the world was reproductively advantageous?

In some sense, yes. But in some of these cases, at least, the degree of “luck” at stake feels like it comes down to priors about what sorts of situations I’m likely or unlikely to end up in. If I knew that I had been randomly inserted into one of a trillion simulations, all but one of which would leave me with radically inaccurate perceptual beliefs, then if I ended up in the “accurate beliefs” one, I would indeed be lucky. But notably, after having undergone such a lottery, I wouldn’t go around talking about how I’m justified in treating my perceptual faculties as reliable, because of [something something, if I’m in the good case, then they are in fact reliable, and I have pro tanto warrant for trusting how things seem – or whatever]. Rather, I would just directly bet that I’m probably in a skeptical scenario. 

So I’m happy to grant that if before making any observations, you started out with X% on “world created five minutes ago with the appearance of age,” then you definitely shouldn’t update towards “old world” based on the world’s appearance of age – such an appearance is equally likely on either hypothesis. But the question is: what should X be? It’s a tough question; the most principled answer I’m currently aware of is something about the Universal Distribution, which isn’t exactly a walk in the “oh right totally” park. But it does, at least, probably say stuff like “it’s simpler to specify the initial conditions and physical laws of a universe like ours than to hard code a version of that universe that just started five minutes ago with the appearance of age.” And if I had to start talking about skeptical scenarios, it’s probably where I would turn.
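The Bayesian point here can be made concrete with a toy calculation (the function and all the numbers below are purely illustrative, not from the post): when the evidence is equally likely under both hypotheses, the posterior just equals the prior, so observing the world’s “appearance of age” can’t favor “old world” over “created five minutes ago.”

```python
def posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    p_e = likelihood_e_given_h * prior_h + likelihood_e_given_not_h * (1 - prior_h)
    return likelihood_e_given_h * prior_h / p_e

prior_old_world = 0.999  # an illustrative "X%" prior on "old world"

# The world "appears old" with probability 1 on BOTH hypotheses,
# so the evidence carries zero information:
print(posterior(prior_old_world, 1.0, 1.0))  # equal to the prior (up to float rounding)
```

Everything thus rides on the prior itself – which is the point in the main text: the argument moves to what X should be, not to what the evidence shows.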

But wait, if we’re appealing to priors, does that mean that Rosie can just say: “Ok, well on priors, I think that the envelope is very likely to love helium.” As I noted in the first section, this sure isn’t how I’d bet – but it also feels, more broadly, like a cheat. In particular, priors like the Universal Distribution are supposed to dictate your credences before you’ve made any observations. But it sure seems like Rosie’s “prior,” here, has something to do with her updating on certain helium-oriented evaluative experiences and intuitions, the Bayesian value of which Ed’s story undercuts. 

Still, though, I’m happy to acknowledge that there’s tons of gnarly stuff in this vicinity, which the epistemologists have thought about a lot more than I have. And I don’t have a worked out view about exactly what distinguishes the perceptual case from the non-naturalist normative one, or of what makes Rosie’s posited “luck” the “don’t bet on it” kind, but “flat earthers are wrong, even though there are circumstances in which I would’ve been a flat earther” the tolerable kind.

Indeed, there was a time in my life when, in an effort to preserve a form of robust normative realism that I really wanted to be true, I took refuge in the difficulty of drawing some of these distinctions. Worries about the normative version of cases like Rosie’s, I tried to say, were really just standard skeptical worries – and those are a problem for everyone, everywhere. 

Ultimately, though, some part of me never really bought it. Something about the resulting realist picture felt thin and made-up and forced. I’d try to say “evolution and the rest happened to give us reasonably accurate normative intuitions, despite the fact that nothing in the natural world, including our minds, ever interacts with the normative properties,” and some part of me would feel like: huh? That’s the story? I took some comfort in the fact that, apparently, lots of famous and prominent philosophers believed this too, or said they did (though notably, many of the ethicists I met seemed to want to focus on the first-order game of reaching reflective equilibrium between our normative intuitions and higher-level principles, rather than on meta-ethical questions about why we should expect this project to bear any relation to the truth). But it never felt right.

What’s more, “evolution happened to aim at something non-naturally good” has in fact always seemed different to me than “I happened to evolve in circumstances in which it was reproductively advantageous to have reliable perceptual faculties,” even if I don’t have a worked out story about exactly why. And to this day, I notice that I am not at all worried about whether e.g. the world was created five minutes ago with the appearance of age; but being in a Rosie-like case would worry me a lot (and non-naturalist realists seem to me to be in a Rosie-like case). I expect that further analysis could do more to pin down the relevant differences, but I, personally, am not suspending judgment until such an analysis comes in. Rather, I find myself in fact persuaded that Rosie is screwed, and so is the non-natural normative realist; but not that we’re all screwed, in estimating the age, or the flatness, of the earth. 

Of course, your mileage may vary. Indeed, I don’t really expect to win over realists wedded to positing Rosie-like luck. And perhaps I am biased, in modeling such realists, by memories of how much a younger version of myself was trying to save realism. I wasn’t, as it were, an unbiased inquirer. I didn’t have “scout mindset.” Rather, I was confronted by what felt like a disturbingly powerful argument against something that I very much wanted to be true, I was in defense mode, and appealing to comparisons with more general skeptical arguments ultimately felt like the best out. But it was always, for me, a looking away.

VI. How to be a normative realist

I’ve been focused, throughout this post, on a form of non-naturalist normative realism that makes normative properties causally inert. This is the most standard form; it’s the form I was raised on; and it’s the form that preserves the most scientific respectability, since it does not require positing new and mysterious causes of stuff that happens in the natural world – causes that might, in principle, compete with the types of causes the scientists focus on, and/or make predictions that the scientists could refute. 

But one can imagine other forms. Notably, for example, one can imagine an “interventionist” form of normative realism, in which the non-natural normative facts/properties intervene on the world to cause our beliefs about them (“Are the properties still non-natural, in that case?” Beats me, seems like no – but whatever). Indeed, such an interventionist picture is sometimes suggested (even if not explicitly endorsed) by non-naturalist realists like Parfit, who elsewhere want to deny any causal efficacy to the normative facts/properties. Thus, for example, Parfit (2011, Volume II, Chapter 33) argues that while some of our normative beliefs (e.g., our beliefs about the wrongness of incest) can be explained in evolutionary terms, others (e.g., the widespread acceptance of the golden rule) cannot. These, he seems to suggest, should be viewed more charitably. Why? The suggestion seems to be that the absence of an evolutionary explanation leaves more open the possibility that these beliefs formed as a result of our reason “responding to the intrinsic credibility” of their content.

Lazari-Radek and Singer (2012) make a similar move: they argue that the utilitarian’s principle of “universal benevolence” does not admit of an evolutionary explanation in the way that more parochial and selfish principles do, and that it is therefore on more solid ground – presumably, because its presence in our moral life is more likely to be explained by its truth. And Crisp (2006), too, argues that evolution gave us a general capacity to reason, which we use to recognize the normative truths.

When I first heard these suggestions from Parfit and Lazari-Radek and Singer, I felt like: oh snap, you’re going to bet with the scientists about whether our beliefs about things like universal benevolence and the golden rule can be given full, naturalistic causal explanations that would apply regardless of their truth? Maybe they say: “no, no, my claim is just that these beliefs can’t be given evolutionary explanations.” Even for that I’m like: oof, especially if we allow for generalizations from evolved starting-places. But more broadly: recall that it’s not actually the evolutionary-ness of the causal explanations that matters for the Rosie-ness of someone’s epistemic situation; it’s the disconnection from the content of the beliefs themselves.

That said, I’ve come to think that the type of realism that Parfit, Singer, and Crisp are groping at, here, is actually the one that our inner realists really want. It’s the realism of the old-fashioned rational intuitionists – the ones who posited a mysterious form of “rational access” to the normative domain, and were subsequently chided for their obscurantism. It’s a realism on which you hold true normative beliefs because those beliefs are true; not because you happened to have been caused to have those beliefs, and also they are true. It’s a realism on which, when you reach out towards the ghostly frosting with your mind, you can actually touch it; you aren’t just groping wildly in the dark, with no constraints except reflective coherence, hoping you happened to have ended up with precisely the right starting material. Rather, the thing you’re trying to detect is detectable; your mind “perceives” it; it pushes back.

Indeed, in my experience, even realists who elsewhere endorse a “I ended up with XYZ beliefs, and also separately they are true” picture often end up talking in quasi-perceptual terms about our epistemic access to the normative domain. Thus, ethicists frequently talk about “recognizing” your reasons for action (or more rarely, recognizing the “intrinsic plausibility” or “self-evident-ness” of various normative claims), in a manner that evokes the type of perception involved in recognizing that e.g. your partner is feeling grumpy. Similarly, ethicists frequently approach their normative intuitions as though they are seeing through a glass, dimly, into the normative realm; as though intuitions are a fallible but still productive mode of “access” to something beyond themselves, rather than Rosie-ish psychological patterns that also, separately, happen to be right. That is, especially when doing normative ethics, it seems to me very hard for realists to fully absorb that on their most standard picture, their mental activity remains always and everywhere “cut off” from the normative domain they seek to understand – that they can never actually travel to the ghostly city they are trying to draw a map of; that they are drawing blind. 

And no surprise: such map-makers are screwed, just as Rosie is screwed. It is difficult to wholeheartedly participate in such an epistemic practice, while remaining vividly conscious of its basic Rosie-like structure. It is tempting, instead, to reconceive of it in more perception-like terms – terms that might give it some chance of success. 

And indeed, my current guess is that the best form of non-naturalist normative realism embraces the sort of perception-like, rational-intuitionist picture that normative ethicists find themselves, anyway, coming back to. Drawing blind is doomed. To avoid Rosie’s fate, you need to posit “eyes.”

Now, naively, it might look like this sort of picture involves denying the “causal inertness” of the normative. Eyes, after all, famously use causation to track the stuff they perceive. And naively, if one is going so far as to posit “normative eyes,” one would want to get to say that yes, in fact, the wrongness of slavery did play on a role in causing abolition (e.g., people “saw” that slavery was wrong, and so pushed to end it); that the badness of pain is part of what causes me to believe that pain is bad; that the existence of normative properties plays a role in causing me to write blog posts about them. Indeed, it can feel like Parfit and Lazari-Radek and Singer are playing this sort of game. They are offering a competing causal story about what gives rise e.g. the widespread acceptance of the golden rule – one that appeals, centrally, to the golden rule’s truth

Now, naively, it might look like this sort of picture involves denying the “causal inertness” of the normative. Eyes, after all, famously use causation to track the stuff they perceive. And naively, if one is going so far as to posit “normative eyes,” one would want to get to say that yes, in fact, the wrongness of slavery did play a role in causing abolition (e.g., people “saw” that slavery was wrong, and so pushed to end it); that the badness of pain is part of what causes me to believe that pain is bad; that the existence of normative properties plays a role in causing me to write blog posts about them. Indeed, it can feel like Parfit and Lazari-Radek and Singer are playing this sort of game. They are offering a competing causal story about what gives rise to, e.g., the widespread acceptance of the golden rule – one that appeals, centrally, to the golden rule’s truth.

That said, I also want to note that things get slippery here pretty fast, and maybe there’s room to posit some suitably mysterious form of “normative vision,” while holding on to causal inertness nonetheless. In particular, it seems to me that the key thing that the “normative vision” picture I have in mind needs to deny is not causal inertness but something more like “explanatory inertness” – e.g., the idea that the normative stuff cannot explain anything about the natural world. Thus, for example, perhaps we have metaphysical hang-ups about saying that the badness of pain causes me to believe that pain is bad. But we should, I think, want to say that I believe that pain is bad partly because pain is bad. Pain’s badness plays a role in explaining my belief.

Here I think about comparisons with mathematics – that “partner in guilt” to which the normative realist so often turns (see Clarke-Doane (2020) for a book-length exploration). I, at least, really want to say that in some straightforward sense, I (and others) believe that there is no largest prime because there is no largest prime. If you told me that I only believe that there is no largest prime because Angela Merkel happened to associate this proposition with peach strudel, I would get worried. Yet the idea that the non-existence of a largest prime is causing my belief, here, seems more obscure and possibly optional. And more generally, mathy stuff is often thought to be abstract and ethereal and “outside of space and time” – properties that make its suitability for entering into causal relationships with the concreteness of the natural world seem suspect. But explanatory relationships are more general, and perhaps less demanding of concreteness.

We can say similar stuff about other a priori domains like modality, logic, and philosophy as a whole. One wants to say that our beliefs about (some of) these domains are in reasonably good standing; and also, I think, that these beliefs (when accurate) are partly explained by the facts they represent (e.g., I think that square circles are impossible because they are in fact impossible). But the contents of the domains in question aren’t exactly standard-fare, causation-wise. Indeed, to the extent that one wants to say that the contents of all these a priori domains are “causally inert,” our epistemic access to them would seem vulnerable, prima facie, to Rosie-like arguments as well (this is one of the realist’s other central responses to Rosie-ish arguments). 

Whether there are, ultimately, important differences here is a question beyond the scope of this post (I, personally, expect at least some). But I do expect, broadly, that being a robust realist about an a priori domain, while also avoiding Rosie-like problems, will require giving that domain the ability to explain stuff about the natural world (for example, our talking about it at all), whatever you say about causation. After all, the central thing missing from the Rosie case is not the right network of “causation,” per se, but rather the right network of explanation and dependence more broadly.

So does the possibility of holding on to causal inertness, but denying explanatory inertness, save the realist of the type I was arguing against above? I’m doubtful that this sort of move helps very much — at least at a spiritual level.

First, certain types of explanation – for example, whatever type of explanation is at stake in “the presence of atoms arranged chair-wise explains the presence of a chair” or “the fact that they are officially married obtains in virtue of their having signed this document” – aren’t the right fit: normativity doesn’t explain our beliefs like that. Rather, what’s needed here is some kind of middle ground between explanations with a “constitutive” flavor – e.g., A explains B because B in some sense “reduces to” or “just comes down to” or “is made of” A (or something like that, I barely know what “constitutive explanation” entails) – and explanations that are robustly causal. And it’s not clear whether whatever this middle ground makes available will be good enough.

More importantly, though, I think that taking refuge in this middle ground quickly starts to clash with the aspirations that motivated the realist’s devotion to causal inertness in the first place. Explaining what happens in the natural world, after all, is traditionally the job of science; indeed, “the natural world” is often rigorously defined as “you know, the science-y stuff.” And once something is playing a role in explaining what’s going on with natural world, it starts to look kind of science-y. But non-naturalist realists specifically want the normative facts, properties, etc to be irreducibly different from the science-y stuff. These facts, properties, etc live in the domain of “ought,” and we wheel them in, centrally, because there are various “oughts” we seek to explain – oughts like the badness of torture, the fact that Bob has decisive reason to help Jill, and so on. We don’t wheel them in because without them, we can’t make sense of the “is” – that type of move is science-y, not ethics-y. 

And indeed, if the realist were to claim that we need normative properties to explain what’s up with the natural world (for example, the degree of consensus about the golden rule), she would risk positing a “normativity of the gaps” – that is, a normativity that survives only insofar as the naturalist scientific project fails (this is the type of thing that makes one nervous about the Parfit and Lazari-Radek and Singer proposals above). Contemporary ethicists don’t like betting against (or participating in) science in this way. Rather, they prefer to relegate their inquiry to a separate magisterium, accessible from the armchair, and orthogonal to the rest of the naturalist’s explanatory network. 

But as the Rosie case shows, if you want knowledge of a given domain, cutting it off from the naturalist’s explanatory network is a loser’s game. Epistemology is a thing that natural creatures do with their natural minds and their natural beliefs. For them to be in a good epistemic relationship to your favored domain, they need to be somehow “entangled” with that domain; it needs to fit into the right chain of touching and being touched. And because the non-naturalist realist will not want the (fundamental) normative facts to be determined by the natural facts, or by some prior Witch-ish cause like God, even if she denies these facts causal “ert-ness,” she will still, I expect, need to give them some kind of “downstream influence” – some kind of explanatory power over what we feel and think and observe.

In this sense, even if saying “I accept causal inertness, but deny explanatory inertness” ultimately works, I don’t think it necessarily buys the non-naturalist realist (or at least, my toy model of such a realist) all that much that she truly wants. Non-naturalist normativity will still need to become an actor in the natural world, making predictions about things like “lots of people (aliens?) will accept the golden rule, reproductive advantage be damned” or “a superintelligent paperclipping AI system will come to see that actually, happiness is better than paperclips” (this one, I think, is importantly not true). It will still need to earn its keep like other such actors (electrons, cells, etc). It will still need to stake a claim as something that people solely interested in predicting natural events need to reckon with – something that a purely empirical worldview can’t just leave out.

VII. Will the aliens agree?

I find the example of “what will the aliens think about normative stuff” instructive, here. Notably, for example, realists often want to claim that whatever’s going on with our knowledge of mathematics is going on with our knowledge of non-naturalist normativity, too. But notably, we tend to expect sufficiently intelligent aliens to share our mathematical beliefs (or for us to converge on theirs, as we learn more). Indeed, expecting this sort of consensus in the limit of intelligence and inquiry is a natural default with respect to a realistically-construed domain – though obviously, the philosophers will want to raise complications.

So, then, does the realist predict consensus amongst the aliens about normative stuff? Will the aliens, regardless of their starting points, come to share our (enlightened) views about trolley problems, population ethics, hedonism, epistemic teleology? If they start out eating their babies [LW · GW], or maximizing helium, will they eventually recognize that this is wrong, and turn towards the true path? It’s actually, I think, quite an important question.

Faced with it, a realist who posits Rosie-like luck is liable to defer, entirely, to the scientists; or at least, to make predictions in the same way that e.g. a nihilist would. Granted, perhaps she will want to baptize some aliens (e.g., the ones she ends up agreeing with) as not just procedurally but “substantively” rational (paperclippers, on this view, are “substantively irrational,” because they do not value happiness). But their “substantive rationality” does not explain why they’re right. It doesn’t give them some ability to touch the ghostly frosting that the other aliens lack. Rather, it’s just a re-naming; just another way of saying “one of the lucky ones, who happened to get given the right evaluative attitudes.” Indeed, at no point will this realist’s belief in the existence of a non-natural normative realm constrain her expectations about what future anthropologists will discover. In this sense, she is safe from the scientists; but as discussed, she gets Rosie-ed hard. 

By contrast, the type of realist who has a hope in hell of being right about the normative domain must take a different, scientifically riskier path. She must posit that we are right, insofar as we are, because, somehow and sometimes, our minds can touch and be touched by (or if you prefer, “rationally intuit” or “recognize”; or perhaps better, “do the magic we-know-not-what with”) the normative stuff. She must try to have a best guess about how this mental touching works, why we are able to engage in it, how often different aliens will tend to develop it, and so on. She will be humble and suitably ignorant about all this, of course; but she will also be taking a substantive scientific stand about why we believe what we do – a stand that the naturalist cannot take. And plausibly, this stand will make predictions about the aliens as well. (Does it make predictions about AI systems as well? If I train a sufficiently intelligent RL agent to maximize helium, will it keep reasoning its way to maximizing happiness instead?)

It’s a dicey game. It reminds one a bit of Descartes and the pineal gland, or of those unfortunate philosophers who made specific empirical claims about the role of quantum mechanics in our brains during the decisions that manifest our libertarian free will. The track record of “philosophers just making stuff up” looms large.

Still, my best guess is that, if you insist on being a non-naturalist realist, and you want to ever have any clue about the normative domain, this route is the way to go. Your realism cannot be empirically innocent: it should have implications for how to expect the natural world to be. For us natural beings, after all, such implications are the very stuff of knowledge. Maybe the fact that there’s no largest prime is not a “natural fact”; but it predicts some natural facts about what humans will think, say, and write in math textbooks. The same should hold for the normative facts, lest we go the way of Rosie.

My own best guess, though, is that we should not insist on being non-naturalist normative realists at all [LW · GW]. 

  1. ^

    The phrase "the normative facts could be anything” is from Sharon Street.

  2. ^

See Chappell (2012) for more on “causal origins don’t matter” — though he wouldn’t endorse the stuff about intuitions giving reasons.

  3. ^

    This example is adapted from Zorman and Locke (2020).

  4. ^

Perhaps you protest: “Normativity isn’t like frosting at all! Frosting implies some kind of ‘substance,’ as though it’s ‘made of’ something. But the normativity I’m interested in isn’t like this. Instead, it’s a property, like the roundness of a beachball; or a set of facts, like the fact that Paris is in France. These things aren’t ‘made of’ any kind of ‘stuff.’” I don’t feel like I have a clean grip on the metaphysics here (do you? be honest…), but I don’t think that this really changes the picture. The important thing is that normativity isn’t a part of the causal nexus of the natural world in the way that e.g. the shapes of beachballs and the locations of Eiffel Towers are.

6 comments


comment by Steven Byrnes (steve2152) · 2023-12-14T03:28:00.153Z · LW(p) · GW(p)

I found this post extremely clear, thoughtful, and enlightening, and it’s one of my favorite lesswrong (cross)posts of 2022. I have gone back and reread it at least a couple times since it was first posted, and I cited it recently here [LW · GW].

comment by Wei Dai (Wei_Dai) · 2022-01-18T21:58:43.104Z · LW(p) · GW(p)

We can say similar stuff about other a priori domains like modality, logic, and philosophy as a whole. [...] Whether there are, ultimately, important differences here is a question beyond the scope of this post (I, personally, expect at least some).

I would be interested in your views on metaphilosophy and how it relates to your metaethics.

Suppose we restrict our attention to the subset of philosophy we call metaethics, then it seems to me that meta-metaethical realism is pretty likely (i.e., there are metanormative facts, or facts about the nature of normativity/morality) and therefore metaethical realism is at least pretty plausible. In other words, perhaps there are normative facts in the same way that there are metanormative facts, even though I don't understand the nature of these facts, e.g., whether they're "non-naturalist" or "interventionist". I think this line of thinking provides a major source of support for moral realism within my metaethical uncertainty, so I'm curious if you have any arguments against it.

Replies from: joekc
comment by Joe Carlsmith (joekc) · 2022-01-20T05:21:13.999Z · LW(p) · GW(p)

Is the argument here supposed to be particular to meta-normativity, or is it something more like "I generally think that there are philosophy facts, those seem kind of a priori-ish and not obviously natural/normal, so maybe a priori normative facts are OK too, even if we understand neither of them"? 

Re: meta-philosophy, I tend to see philosophy as fairly continuous with just "good, clear thinking" and "figuring out how stuff hangs together," but applied in a very general way that includes otherwise confusing stuff. I agree various philosophical domains feel pretty a priori-ish, and I don't have a worked out view of a priori knowledge, especially synthetic a priori knowledge (I tend to expect us to be able to give an account of how we get epistemic access to analytic truths). But I think I basically want to make the same demands of other a priori-ish domains that I do normativity. That is, I want the right kind of explanatory link between our belief formation and the contents of the domain -- which, for "realist" construals of the domain, I expect to require that the contents of the domain play some role in explaining our beliefs. 

Re: the relationship between meta-normativity and normativity in particular, I wonder if a comparison to the relationship between "meta-theology" and "theology" might be instructive here. I feel like I want to be fairly realist about certain "meta-theological facts" like "the God of Christianity doesn't exist" (maybe this is just a straightforward theological fact?). But this doesn't tempt me towards realism about God. Maybe talking about normative "properties" instead of normative facts would be easier here, since one can imagine e.g. a nihilist denying the existence of normative properties, but accepting some 'normative' (meta-normative?) facts like "there is no such thing as goodness" or "pleasure is not good."

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2022-01-20T07:43:38.850Z · LW(p) · GW(p)

I would think the metatheological fact you want to be realist about is something like "there is a fact of the matter about whether the God of Christianity exists." "The God of Christianity doesn't exist" strikes me as an object-level theological fact.

The metaethical nihilist usually makes the cut at claims that entail the existence of normative properties. That is, "pleasure is not good" is not a normative fact, as long as it isn't read to entail that pleasure is bad. "Pleasure is not good" does not by itself entail the existence of any normative property.

comment by TAG · 2023-12-14T04:44:59.783Z · LW(p) · GW(p)

Is a non-natural fact a fact that can't be discovered by empiricism, or a fact that can't be discovered by empiricism+logic?

The frosting is the only thing the non-natural normative realist cares about.

If there are moral facts, whether natural or not, everyone should care about them. If they are accessible by abstract reasoning, then arguments from the causal closure of the physical don't apply.

comment by Flaglandbase · 2022-01-19T10:54:03.576Z · LW(p) · GW(p)

Almost all possible minds that think their existence is meaningful are really just chaotically delusional pattern combinations. 

But the minds of which the most identical copies exist throughout reality may all have evolved in consistent universes.

comment by [deleted] · 2022-01-20T18:21:16.705Z · LW(p) · GW(p)