Posts

Lessons from “The Book of My Life” 2021-01-06T22:40:08.722Z
The Case for Privacy Optimism 2020-03-10T20:30:02.152Z
Realism and Rationality 2019-09-16T03:09:45.499Z

Comments

Comment by bmgarfinkel on Realism and Rationality · 2020-02-15T00:31:24.891Z · LW · GW

Re-posting a link here, on the off-chance it's of interest despite its length. ESRogs and I also had a parallel discussion on the EA Forum, which led me to write up this unjustifiably lengthy doc partly in response to that discussion and partly in response to the above comment.

Comment by bmgarfinkel on Realism and Rationality · 2020-02-14T18:55:44.730Z · LW · GW

Just wanted to say I really appreciate you taking the time to write up such a long, clear, and thoughtful response!

(If I have a bit of time and/or need to procrastinate anything in the near future, I may write up a few further thoughts under this comment.)

Comment by bmgarfinkel on Realism and Rationality · 2020-02-14T18:37:48.897Z · LW · GW

It sounds as though you're expecting anti-realists about normativity to tell you some arguments that will genuinely make you feel (close to) indifferent about whether to use Bayesianism, or whether to use induction.

Hm, this actually isn't an expectation I have. When I talk about "realists" and "anti-realists" in this post, I'm thinking of groups of people with different beliefs (rather than groups of people with different feelings). I don't think of anti-realism as having any strong link to feelings of indifference about behavior. For example: I certainly expect most anti-realist philosophers to have strong preferences against putting their hands on hot stoves (and don't see anything inconsistent in this).

But I have yet to see how that's a useful concept to introduce. I just don't get it.

I guess I don't see it as a matter of usefulness. I have this concept that a lot of other people seem to have too: the concept of the choice I "should" make or that it would be "right" for me to make. Although pretty much everyone uses these words, not everyone reports having the same concept. Nonetheless, at least I do have the concept. And, insofar as there is any such thing as the "right thing," I care a lot about doing it.

We can ask the question: "Why should people care about doing what they 'should' do?" I think the natural response to this question, though, is just sort of to invoke a tautology. People should care about doing what they should do, because they should do what they should do.

To put my "realist hat" firmly on for a second: I don't think, for example, that someone happily abusing their partner would in any way find it "useful" to believe that abuse is wrong. But I do think they should believe that abuse is wrong, and take this fact into account when deciding how to act, because abuse is wrong.

I'm unfortunately not sure, though, if I have anything much deeper or more compelling than that to say in response to the question.


Another (significantly more rambling and possibly redundant) thought on "usefulness":

One of the main things I'm trying to say in the post -- although, in hindsight, I'm unsure if I communicated it well -- is that there are a lot of debates that I personally have trouble interpreting as both non-trivial and truth-oriented if I assume that the debaters aren't employing irreducibly normative concepts. A lot of debates about decision theory have this property for me.

I understand how it's possible for realists to have a substantive factual disagreement about the Newcomb scenario, for example, because they're talking about something above-and-beyond the traditional physical facts of the case (which are basically just laid out in the problem specification). But if we assume that there's nothing above-and-beyond the traditional physical facts, then I don't see what's left for anyone to have a substantive factual disagreement about.

If we want to ask "Which amount of money is the agent most likely to receive, if we condition on the information that it will one-box?", then it seems to me that pretty much everyone agrees that “one million dollars” is the answer. If we want to ask "Would the agent get more money in a counterfactual world where it instead two-boxes, but all other features of the world at that time (including the contents of the boxes) are held fixed?", then it seems to me that pretty much everyone agrees the answer is “yes.” If we want to ask “Would the agent get more money in a counterfactual world where it was born as a two-boxer, but all other features of the world at the time of its birth were held fixed?", then it seems to me that pretty much everyone agrees the answer is “no.” So I don't understand what the open question could be. People may of course have different feelings about one-boxing and about two-boxing, in the same way that people have different feelings about (e.g.) playing tennis and playing soccer, but that's not a matter of factual/substantive disagreement.

So this is sort of one way in which irreducibly normative concepts can be "useful": they can, I think, allow us to make sense of and justify certain debates that many people are strongly inclined to have and certain questions that many people are strongly inclined to ask.

But the above line of thought of course isn't, at least in any direct way, an argument for realism actually being true. Even if the line of thought is sound, it's still entirely possible that these debates and questions actually aren't non-trivial and truth-oriented. Furthermore, the line of thought could also just not be sound. It's totally possible that the debates/questions are non-trivial and truth-oriented without invoking irreducibly normative concepts, and I'm just a confused outside observer not getting what's going on. Tonally, one thing I regret about the way I wrote this post is that I think it comes across as overly skeptical of this possibility.

Comment by bmgarfinkel on Open Thread January 2019 · 2019-11-21T21:37:24.293Z · LW · GW

Belatedly, is this a fair summary of your critique?

When someone thinks about another person (e.g. to predict whether they'll submit to blackmail), the act of thinking about the other person creates a sort of 'mental simulation' that has detailed conscious experiences in its own right. So you never really know whether you're a flesh-and-blood person or a 'mental simulation' based on a flesh-and-blood person.

Now, suppose you seem to find yourself in a situation where you've been blackmailed. In this context, it's reasonable to wonder whether you're actually a flesh-and-blood person who's been blackmailed -- or merely a 'mental simulation' that exists in the mind of a potential blackmailer. If you're a mental simulation, and you care about the flesh-and-blood person you're based on, then you have reason to resist blackmail. The reason is that the decision you take as a simulation will determine the blackmailer's prediction about how the flesh-and-blood person will behave. If you resist blackmail, then the blackmailer will predict that the flesh-and-blood person will resist blackmail and therefore decide not to blackmail them.

If this is roughly in the right ballpark, then I would have a couple responses:

  1. I disagree that the act of thinking about a person will tend to create a mental simulation that has detailed conscious experiences in its own right. This seems like a surprising position that goes against the grain of conventional neuroscience and views in the philosophy of consciousness. As a simple illustrative case, suppose that Omega makes a prediction about Person A purely on the basis of their body language. Surely thinking "This guy looks really nervous, he's probably worried he'll be seen as the sort of guy who'll submit to blackmail -- because he is" doesn't require bringing a whole new consciousness into existence.

  2. Suppose that when a blackmailer predicts someone's behavior, they do actually create a conscious mental simulation. Suppose you don't know whether you're this kind of simulation or the associated flesh-and-blood person, but you care about what happens to the flesh-and-blood person in either case. Then, depending on certain parameter values, CDT does actually say you should resist blackmail. This is because there is some chance that you will cause the flesh-and-blood person to avoid being blackmailed. So CDT gives the response you want in this case (a toy version of the calculation is sketched below).
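Here's a toy version of the calculation gestured at in point 2, with made-up numbers: a credence p of being the blackmailer's simulation, a payoff of -1 for giving in, -10 for resisting and having the threat carried out, and 0 if no blackmail happens at all. All of these parameter choices are purely illustrative assumptions on my part.

```python
def cdt_expected_value(p_simulation, action):
    """Causal expected value (in terms of what happens to the flesh-and-blood
    person, whom you care about either way) under my made-up payoffs."""
    if action == "resist":
        # If you're the simulation, resisting causes the blackmailer to predict
        # resistance and not blackmail at all (payoff 0); if you're the
        # flesh-and-blood person, the threat gets carried out (payoff -10).
        return p_simulation * 0 + (1 - p_simulation) * (-10)
    else:  # "give in"
        # Either way, the flesh-and-blood person ends up paying off the blackmailer.
        return -1

for p in (0.5, 0.95):
    print(p, cdt_expected_value(p, "resist"), cdt_expected_value(p, "give in"))

# With these numbers, CDT recommends resisting only when p > 0.9 -- which is
# the "depending on certain parameter values" point.
```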

Overall, I don't think this line of argument really damages CDT. It seems to be based on a claim about consciousness that I think is probably wrong. But even if the claim is right, all this implies is that CDT recommends a different action than one would otherwise have thought.

(If my summary is roughly in the right ballpark, then I also think it's totally reasonable for academic decision theorists who read the FDT paper to fail to realize that a non-mainstream neuroscience/philosophy-of-consciousness view is being assumed and provides the main justification for FDT. The paper really doesn't directly say anything about this. It seems wrong to me, then, to suggest that Schwarz only disagrees because he lacks the ability to see his own assumptions.)

[[EDIT: Oops, rereading your comment, seems like the summary is probably not fair. I didn't process this bit:

Yes, yes, if Omega used some method other than a simulation to make his prediction, the hypothetical you wouldn't have existed and wouldn't have had a perspective--but hey, that doesn't stop me from writing from their perspective, right? After all, real people write from the perspectives of unreal people all the time; that's just called writing fiction.

But now, reading the rest of the comment in light of this point, I don't think this reduces my qualms. The suggestion seems to be that, when you seem to find yourself in the box room, you should in some cases be uncertain about whether or not you exist at all. And in these cases you should one-box, because, if it turns out that you don't exist, then your decision to one-box will (in some sense) cause a corresponding person who does exist to get more money. You also don't personally get less money by one-boxing, because you don't get any money either way, because you don't exist.

Naively, this line of thought seems sketchy. You can have uncertainty about the substrate your mind is being run on or about the features of the external world -- e.g. you can be unsure whether or not you're a simulation -- but there doesn't seem to be room for uncertainty about whether or not you exist. "Cogito, ergo sum" and all that.

There is presumably some set of metaphysical/epistemological positions under which this line of reasoning makes sense, but, again, the paper really doesn't make any of these positions explicit or argue for them directly. I mainly think it's premature to explain the paper's failure to persuade philosophers in terms of their rigidity or inability to question assumptions.]]

Comment by bmgarfinkel on Realism and Rationality · 2019-09-22T00:17:21.920Z · LW · GW

Hmm, I think focusing on a simpler case might be better for getting at the crux.

Suppose Alice says: "Eating meat is the most effective way to get protein. So if you want to get protein, you should eat meat."

And then Bob, an animal welfare person, responds: "You're wrong, people shouldn't eat meat no matter how much they care about getting protein."

If Alice doesn't mean for her second sentence to be totally redundant -- or if she is able to interpret Bob's response as an intelligible (if incorrect) statement of disagreement with her second sentence -- then that suggests her second sentence actually constitutes a substantively normative claim. Her second sentence isn't just repeating the same non-normative claim as the first one.

I definitely don't think that all "If you want X, do Y" claims are best understood as normative claims. It's possible that when people make claims of this form about Bayesianism, and other commonly discussed topics, they're not really saying anything normative. But a decent chunk of statements of this form do strike me as difficult to interpret in non-normative terms.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-21T18:12:48.596Z · LW · GW

Okay, this seems like a crux of our disagreement. This statement seems pretty much equivalent to my statement #1 in almost all practical contexts. Can you point out how you think they differ?

This stuff is definitely a bit tricky to talk about, since people can use the word "should" in different ways. I think that sometimes when people say "You should do X if you want Y" they do basically just mean to say "If you do X you will receive Y." But it doesn't seem to me like this is always the case.

A couple examples:

1. "Bayesian updating has a certain asymptoptic convergence property, in the limit of infinite experience and infinite compute. So if you want to understand the world, you should be a Bayesian."

If the first and second sentence were meant to communicate the same thing, then the second would be totally vacuous given the first. Anyone who accepted the first sentence could not intelligibly disagree with or even really consider disagreeing with the second. But I don't think that people who say things like this typically mean for the second sentence to be vacuous or typically regard disagreement as unintelligible.

Suppose, for example, that I responded to this claim by saying something like: "I disagree. Since we only have finite lives, asymptotic convergence properties don't have direct relevance. I think we should instead use a different 'risk averse' updating rule that, for agents with finite lives, more strongly reduces the likelihood of ending up with especially inaccurate beliefs about key features of the world."

The speaker might think I'm wrong. But if the speaker thinks that what I'm saying constitutes intelligible disagreement with their claim, then it seems like this means their claim is in fact a distinct normative one.

2. (To someone with no CS background) "If you want to understand the world, you should be a Bayesian."

If this sentence were meant to communicate the same thing as the claim about asymptotic convergence, then the speaker shouldn't expect the listener to understand what they're saying (even if the speaker has already explained what it means to be a Bayesian). Most people don't naturally understand or care at all about asymptotic convergence properties.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-21T02:09:14.114Z · LW · GW

It's important to disentangle two claims:

  1. In general, if you have the goal of understanding the world, or any other goal that relies on doing so, being Bayesian will allow you to achieve it to a greater extent than any other approach (in the limit of infinite compute).

  2. Regardless of your goals, you should be Bayesian anyway.

Believing #2 commits you to normative realism as I understand the term, but believing #1 doesn't - #1 is simply an empirical claim about what types of cognition tend to do best towards a broad class of goals. I think that many rationalists would defend #1, and few would defend #2 - if you disagree, I'd be interested in seeing examples of the latter.

I don't necessarily think that #2 is a common belief. But I do have the impression that many people would at least endorse this equally normative claim: "If you have the goal of understanding the world, you should be a Bayesian."

In general -- at least in the context of the concepts/definitions in this post -- the inclusion of an "if" clause doesn't prevent a claim from being normative. So, for example, the claim "You should go to Spain if you want to go to Spain" isn't relevantly different from the claim "You should give money to charity if you have enough money to live comfortably."


Either way, I agree with Wei that distinguishing between moral normativity and epistemic normativity is crucial for fruitful discussions on this topic.

I agree there's an important distinction, but it doesn't necessarily seem that deep to me.

For example: We can define different "epistemic utility functions" that map {agent's credences; state of the world} to real values and then discuss theories like Bayesianism in the context of "epistemic decision theory," in relatively close analogy with traditional (practical) decision theory.
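As a rough sketch of what one such "epistemic utility function" might look like, here's a toy example that scores credences with the (negated) Brier score; the choice of scoring rule and the two-state example are my own illustrative assumptions, not something the analogy commits us to.

```python
def epistemic_utility(credences, true_state):
    """Map {agent's credences; state of the world} to a real value.
    Higher is better; fully confident, correct credences score 0."""
    return -sum((credences[state] - (1.0 if state == true_state else 0.0)) ** 2
                for state in credences)

# Example with two possible states of the world:
print(epistemic_utility({"rain": 0.8, "no_rain": 0.2}, "rain"))     # -0.08
print(epistemic_utility({"rain": 0.8, "no_rain": 0.2}, "no_rain"))  # -1.28
```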

It seems like some theories -- e.g. certain theories that say we should have faith in the existence of God, or theories that say that we shouldn't take into account certain traits when forming impressions of people -- might also be classified as both moral and epistemological.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-18T17:16:27.156Z · LW · GW

I left a sub-comment under Wei's comment (above) that hopefully unpacks this suggestion a bit.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-18T17:07:20.685Z · LW · GW

I think there's a distinction (although I'm not sure if I've talked explicitly about it before). Basically there's quite possibly more to what the "right" or "reasonable" action is than "what action that someone who tends to 'win' a lot over the course of their life would take?" because the latter isn't well defined. In a multiverse the same strategy/policy would lead to 100% winning in some worlds/branches and 100% losing in other worlds/branches, so you'd need some kind of "measure" to say who wins overall. But what the right measure is seems to be (or could be) a normative fact that can't be determined by just looking at or thinking "who tends to 'win' a lot".

I agree with you on this and think it's a really important point. Another (possibly redundant) way of getting at a similar concern, without invoking MW:

Due to randomness/uncertainty, an agent that tries to maximize expected "winning" won't necessarily win compared to an agent that does something else. If I spend a dollar on a lottery ticket with a one-in-a-billion chance of netting me a billion-and-one "win points," then I'm taking the choice that maximizes expected winning but I'm also almost certain to lose. So we can't treat "the action that maximizes expected winning" as synonymous with "the action taken by an agent that wins."
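For concreteness, here's the arithmetic behind the lottery example, reading the dollar spent as one "win point" (that reading is my own assumption):

```python
p_win = 1e-9          # one-in-a-billion chance of winning
prize = 1e9 + 1       # a billion-and-one "win points"
cost = 1              # the dollar spent on the ticket

expected_net = p_win * prize - cost   # = 1e-9 > 0: buying maximizes expected winning
p_lose = 1 - p_win                    # = 0.999999999: ...but you almost surely lose
print(expected_net, p_lose)
```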

We can try to patch up the issue here by defining "the action that I should take" as "the action that is consistent with the VNM axioms," but in fact either action in this case is consistent with the VNM axioms. The VNM axioms don't imply that an agent must maximize the expected desirability of outcomes. They just imply that an agent must maximize the expected value of some function. It is totally consistent with the axioms, for example, to be risk averse and instead maximize the expected square root of desirability. If we try to define "the action I should take" in this way, then, as another downside, the claim "your actions should be consistent with the VNM axioms" also becomes a completely empty tautology.
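To illustrate the VNM point with the same lottery: here's a sketch of two agents, each maximizing the expected value of some function of desirability (so each consistent with the axioms), who nonetheless disagree about buying the ticket. The assumption of one "win point" of starting wealth is mine, added just so the square root is defined at zero.

```python
import math

p_win, prize, cost = 1e-9, 1e9 + 1, 1

def expected_utilities(utility):
    """Expected utility of buying the ticket vs. keeping the dollar,
    starting from one "win point" of wealth (my illustrative assumption)."""
    buy = p_win * utility(1 - cost + prize) + (1 - p_win) * utility(1 - cost)
    keep = utility(1)
    return buy, keep

# Agent 1 maximizes expected desirability itself: buying looks (barely) better.
print(expected_utilities(lambda x: x))             # (~1.000000001, 1)
# Agent 2 maximizes the expected square root of desirability (risk averse):
# keeping the dollar wins easily.
print(expected_utilities(lambda x: math.sqrt(x)))  # (~0.0000316, 1)
```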

So it seems very hard to make non-vacuous and potentially true claims about decision theory without invoking some additional non-reducible notion of "reasonableness," "rationality," or what an actor "should" do. Assuming that normative anti-realism is true pretty much means assuming that there is no such notion or assuming that the notion doesn't actually map onto anything in reality. And I think anti-realist views of this sort are plausible (probably for roughly the same reasons Eliezer seems to). But I think that adopting these views would also leave us with very little to say about decision theory.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-18T16:41:39.680Z · LW · GW

If there is anything that anyone should in fact do, then I would say that meets the standards of "realism."

Does "anyone" refer to any human, or any possible being?

Sorry, I should have been clearer. I mean to say: "If there exists at least one entity, such that the entity should do something, then that meets the standards of 'realism.'"

I understand "moral realism" as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing.

I don't think I'm aware of anyone who identifies as a "moral realist" who believes this. At least, it's not part of a normal definition of "moral realism."

The term "moral realism" is used differently by different people, but typically it's either used roughly synonymously with "normative realism" (as I've defined it in this post) or to pick out a slightly more specific position: that normative realism is true and that people should do things besides just try to fulfill their own preferences.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-16T17:01:50.917Z · LW · GW

Terminology definitely varies. FWIW, the breakdown of normative/meta-normative views I prefer is roughly in line with the breakdown Parfit uses in OWM (although he uses a somewhat wonkier term for "realism"). In this breakdown:

"Realist" views are ones under which there are facts about what people should do or what they have reason to do. "Anti-realist" views are ones under which there are no such facts. There are different versions of "realism" that claim that facts about what people should do are either "natural" (e.g. physical) or "non-natural" facts. If we condition on any version realism, there's then the question of what we should actually do. If we should only act to fulfill our own preferences -- or pursue other similar goals that primarily have to do with our own mental states -- then "subjectivism" is true. If we should also pursue ends that don't directly have to do with our own mental states -- for example, if we should also try to make other people happy -- then "objectivism" is true.

It's a bit ambiguous to me how the terms in the LessWrong survey map onto these distinctions, although it seems like "subjectivism" and "constructivism" as they're defined in the survey probably would qualify as forms of "realism" on the breakdown I just sketched. I think one thing that sometimes makes discussions of normative issues especially ambiguous is that the naturalism/non-naturalism and objectivism/subjectivism axes often get blended together.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-16T16:00:33.186Z · LW · GW

Thanks for sharing this, was not aware of the survey! Seems like this suggests I've gotten a skewed impression of the distribution of meta-ethical views, so in that sense the objection I raise in this post may only be relevant to a smaller subset of the community than I'd previously thought.

I agree with a lot of the spirit of PMR (that people use the word "should" to mean different things in different contexts), but think that there's a particularly relevant and indispensable sense of the word "should" that points toward a not-easily-reducible property. Then the interesting non-semantic question to me -- and to certain prominent "realists" like Enoch and Parfit -- is whether any actions are actually associated with such a property.

(Within my cave of footnotes, I say a bit more on this point in FN14)

Comment by bmgarfinkel on Realism and Rationality · 2019-09-16T15:05:03.573Z · LW · GW

If there is anything that anyone should in fact do, then I would say that meets the standards of "realism." For example, it could in principle turn out to be the case that the only normative fact is that the tallest man in the world should smile more. That would be an unusual normative theory, obviously, but I think it would still count as substantively normative.

I'm unsure whether this is a needlessly technical point, but sets of facts about what specific people should do also imply and are implied by facts about what everyone should do. For example, suppose that it's true that everyone should do what best fulfills their current desires. This broad normative fact would then imply lots of narrow normative facts about what individual people should do. (E.g. "Jane should buy a dog." "Bob should buy a cat." "Ed should rob a bank.") And we could also work backward from these narrow facts to construct the broad fact.


I interpret Eliezer's post, perhaps wrongly, as focused on a mostly distinct issue. It reads to me like he's primarily suggesting that for any given normative claim -- for example, the claim that everyone should do what best fulfills their current desires or the claim that the tallest man should smile more -- there is no argument that could convince every possible mind that the claim is true.

So—and I shall take up this theme again later—wherever you are to locate your notions of validity or worth or rationality or justification or even objectivity, it cannot rely on an argument that is universally compelling to all physically possible minds.

I agree with him at least on this point and think that most normative realists would also tend to agree.


Please let me know (either clone of saturn or Said) if it seems like I'm still not quite answering the right question :)

Comment by bmgarfinkel on Realism and Rationality · 2019-09-16T13:13:30.359Z · LW · GW

If you mean "compelling" in the sense of "convincing" or "motivating," then I actually don't mean to suggest there are any "universally compelling normative statements." I think it's totally possible for there to be something that somone "should" do (e.g. being vegetarian), without this person either believing they should do it or acting on their belief.

This doesn't seem too problematic to me, though, since most other kinds of statements also fail to be universally convincing. For example, I also think that the statement "the universe is billions of years old" is both true and not-universally-convincing. Some philosophers do still argue, though, that the failure of normative beliefs to consistently motivate people is a serious challenge for normative realism.

Comment by bmgarfinkel on Realism and Rationality · 2019-09-16T12:42:59.138Z · LW · GW

I wish when people did this kind of thing (i.e., respond to other people's ideas, arguments, or positions) they would give some links or quotes, so I can judge whether whatever they're responding to is being correctly understood and represented.

Fair point!

It's definitely possible I'm underestimating the popularity of realist views. In which case, I suppose this post can be taken as a mostly redundant explanation of why I think people are sensible to have these views :)

I guess there are a few reasons I've ended up with the impression that realist views aren't very popular.

  1. People are often very dismissive of "moral realism." (If this doesn't seem right, I think I should be able to pull up quotes.) But nearly all standard arguments against moral realism also function as arguments against "normative realism." The standard concerns about 'spookiness' and ungrounded epistemology arise as soon as we accept that there are facts of the matter about what we should do and that we can discover these facts; the fundamental metaphysical and epistemological issues aren't lessened by whether these facts tell us, for example, to try to maximize global happiness or to try to fulfill the preferences of some particular idealized version of ourselves. It also seems to be the case that philosophers who identify as "moral anti-realists" are typically anti-realists about normativity, which I think partly explains why people seldom bother to tease the terms "moral realist" and "normative realist" apart in the first place. So I suppose I've been leaning on a prior that people who identify as "moral anti-realists" are also "normative anti-realists."

  2. (Edit) It seems pretty common for people in the community to reject or attack the idea of "shoulds." For example, many posts in the (popular?) "Replacing Guilt" sequence on Minding Our Way seem to do this. A natural reading is a rejection of normative realism.

  3. Small-n, but the handful of friends I've debated moral realism with have also had what I would tend to classify as anti-realist attitudes toward normativity more generally.

  4. If normative realism is correct, then it's at least conceivable that the action it's most "reasonable" for us to take in some circumstance (i.e. the action that we "should" take) is different from the action that someone who tends to "win" a lot over the course of their life would take. However, early/foundational community writing seems to reject the idea that there's any meaningful, conceptually distinct sense in which we can talk about an action being "reasonable." I take this Eliezer post on decision theory and rationality as an example.

It might also be useful to clarify that in ricraz's recent post criticizing "realism about rationality," several of the attitudes listed aren't directly related to "realism" in the sense of this post. For example, it's possible for there to be "a simple yet powerful theoretical framework which describes human intelligence" even if normative anti-realism is true. It did seem to me like the comments on ricraz's post leaned toward wariness of "realism," as conceptualized there, but I'm not really sure how to map that onto attitudes about the notion of "realism" I have in mind here.