Posts
Comments
I hardly think it answers the question, but this might be of interest: https://www.cdc.gov/vaccinesafety/concerns/concerns-history.html
I think each year's flu vaccine is a slight modification on an existing vaccine. This may well (read: I have no idea, but it sounds plausible) make it faster to safety test the flu vaccine than a vaccine for a novel disease.
I think this is an uncharitable reading of the purpose of Gaiman's quote. His quote isn't really meant to be a factual claim but an inspirational one.
Now obviously some people will find more inspiration from quotes that express a truth as compared with those that don't. Perhaps you're such a person (I suspect that many people on LW are). At risk of irony, however, it's best not to assume that everyone else is the same as you in that regard.
Evaluating something with an emotional purpose in accordance with its epistemic accuracy (instead of its psychological or poetic force) is likely to lead to an uncharitable reading of many quotes (and rather reinforces the straw vulcan stereotype of rationality).
Yes, philosophers tend to be interested in the issue of conceptual analysis. Different philosophers will have a different understanding of what conceptual analysis is, but one story goes something like the following. First, we start out with a rough, intuitive sense of the concepts that we use and this gives us a series of criteria for each concept (perhaps with free will one criterion would be that it relates to moral responsibility in some way and another that it relates to the ability to do otherwise in some way). Then we try to find a more precise account of the concept that does the best (though not necessarily perfect) job of satisfying these criteria.
I personally find the level of focus on conceptual analysis in philosophy frustrating so I'm not sure that I can do justice to a defence of it. I know many very intelligent people who think it is indispensable to our reasoning, though, so it may well be deserving of further reflection. If you're interested in such reflection I suggest that you read Frank Jackson's "From Metaphysics to Ethics: A Defence of Conceptual Analysis". It's a short book and gives a good sense of how contemporary philosophers think about conceptual analysis (in terms of what conceptual analysis is, btw, Jackson says the following: "The short answer is that conceptual analysis is the very business of addressing when and whether a story told in one vocabulary is made true by one told in some allegedly more fundamental vocabulary.")
Off the top of my head, why might someone think conceptual analysis is important? First, conceptual analysis is all about getting clear on our terms. If you're discussing free will, it seems like a really bad idea to just debate without making clear what you mean by free will. So it seems useful to get clear on our terms.
Myself, I'm tempted to say we should get clear on our terms by stipulation (though note that even this involves a small amount of conceptual analysis, otherwise I would be just as likely to stipulate that "free will" means "eating my hat" as I would be to stipulate that it means "my decisions flow from my deliberative process"; and many philosophers only use conceptual analysis in this easily-defendable manner). So I would stipulate what various meanings of free will are, say which we do and do not have and leave it to each individual to figure out how that makes them feel (which type of free will they care about).
However, a lot of people don't find this very helpful. They care about FREE WILL or MORALITY or BEING THE SAME PERSON TOMORROW (a.k.a. personal identity). And they need to know what the best realiser of the criteria that they have for this concept is to know what they care about. So if you tell a lot of people that they have free-will-1 and not free-will-2, they don't care about this: they care whether they have FREE WILL and so they need to find out which of free-will-1 and free-will-2 is a better realiser of their concept before they can work out whether they have the sort of free will that they care about. Insofar as I don't believe in telling people what they should desire (no, you should desire free-will-1, regardless of the nature of your concept of FREE WILL), I don't really have any objection to the claim that such a person needs to carry out a more robust project of conceptual analysis (though I feel no need to join them on the road they're travelling).
All that said, standard epistemology (as opposed to formal epistemology) is one of the worst areas of philosophy to study if you're uninterested in these conceptual debates, as such debates are pretty much the entirety of the field (where in many other areas of philosophy they play a smaller, though often still substantial, role).
Exactly what information CDT allows you to update your beliefs on is a matter for some debate. You might be interested in a paper by James Joyce (http://www-personal.umich.edu/~jjoyce/papers/rscdt.pdf) on the issue (which was written in response to Egan's paper).
Then if the uncompressed program running had consciousness and the compressed program running did not, you have either proved or defined consciousness as something which is not an output. If it is possible to do what you are suggesting then consciousness has no effect on behavior, which is the presumption one must make in order to conclude that p-zombies are possible.
I haven't thought about this stuff for a while and my memory is a bit hazy in relation to it so I could be getting things wrong here but this comment doesn't seem right to me.
First, my p-zombie is not just a duplicate of me in terms of my input-output profile. Rather, it's a perfect physical duplicate of me. So one can deny the possibility of zombies while still holding that a computer with the same input-output profile as me is not conscious. For example, one could hold that only carbon-based life could be conscious, thereby denying that an identical input-output profile implies consciousness, while still denying the possibility of zombies (that is, denying that a physical duplicate of a conscious carbon-based lifeform could lack consciousness).
Second, if it could be shown that the same input-output profile could exist even if consciousness were removed, this doesn't show that consciousness can't play a causal role in guiding behaviour. Rather, it shows that the same input-output profile can exist without consciousness. That doesn't mean that consciousness can't cause that input-output profile in one system and something else cause it in the other system.
Third, it seems that one can deny the possibility of zombies while accepting that consciousness has no causal impact on behaviour (contra the last sentence of the quoted fragment): one could hold that the behaviour causes the conscious experience (or that the thing which causes the behaviour also causes the conscious experience). One could then deny that something could be physically identical to me but lack consciousness (that is, deny the possibility of zombies) while still accepting that consciousness lacks causal influence on behaviour.
Am I confused here or do the three points above seem to hold?
It depends on what you're looking for. If you're looking for Drescher style stuff then you're looking for a very specific type of contemporary, analytic philosophy. Straight off the top of my head: Daniel Dennett, Nick Bostrom and some stuff by David Chalmers as well as decision and game theory (good free introduction here).
If you're interested in contemporary, analytic philosophy generally then I can't really make suggestions because the list is too broad (what are your interests? Ethics? Aesthetics? Metaphysics? Epistemology? Logic?). As for good general resources: definitely the Stanford Encyclopedia of Philosophy (which is a great resource), Philosophy Bro (for a lighthearted take) and Philosophy Bites (for a podcast). The list here, cited by someone else, is a good guide to prominent papers, some of which will be harder to understand without background knowledge than others.
If you have specific questions during your self-study, feel free to PM me and I'm happy to try and help (I'm far from an expert but also know substantially more philosophy than the average person).
I don't think further conversation on this topic is going to be useful for either of us. I presume we both accept that we have some responsibilities for the welfare of others and that sometimes we can consider the welfare of others without being infantilising (for example, I presume we both agree that shooting someone for fun would be in violation of these responsibilities).
Clearly, you draw the line at a very different place to me but beyond that I'm not sure there's much productive to be said.
I will note, however, that my claim is not about doubting the capability of women nor about "protecting" women in some special sense that goes beyond general compassion. It's about respect for the welfare of other people.
Other than that, I think this conversation has reached the end of its useful life so will leave things at that.
If you are a car salesman and have a button you can legally press which makes your costumer buy a car, you'd press it. Instrumental rationality, no?
Instrumental rationality doesn't get you this far. It gets you this far only if you assume that you care only about selling cars and legality. If you also care about the welfare of others then instrumental rationality will not necessarily tell you to push the button (instrumental rationality isn't the same thing as not caring about others).
Of course, I don't expect anyone who doesn't care about the welfare of others to find any of what I'm saying here compelling. A certain level of common ground is required to have a useful discussion. However, I think most people do care about the welfare of others.
I find it extremely condescending to say you're responsible with how a woman you just met feels, it's treating them like a child, not like an adult who can darn well be expected to make her own choices, and turn away from you if she so desires.
There is, of course, a line between compassion and condescension and I agree that it is bad to cross that line. However, I think it's unreasonable to hold that showing the level of concern for someone's welfare that I'm talking about here crosses this line. To choose a silly example, it would be undesirable for me to shoot someone for no reason but a selfish desire (I, of course, do not actually have this desire). Refraining from shooting someone out of concern for their welfare would be taking some responsibility for the welfare of others, yet it hardly amounts to treating them like a child. Similarly, not deliberately making a woman feel bad about herself is simply showing compassion and being respectful toward others. There's no reason to think this amounts to treating someone like a child.
Externally imposing unwritten rules (other than a legal framework) is infantizing adult agents.
I'm not "imposing" rules, unwritten or otherwise. What I am doing is suggesting that insofar as you care about the welfare of others, it is undesirable that you deliberately make people feel bad about themselves. Having a concern for the welfare of others is hardly infantilising adults (consider the gun example again: it is not treating someone as an infant to decide not to kill them on the grounds of their welfare).
Thanks for a reply. I did take a look at your post but I don't think it really engages with the points that I make (it engages with arguments that are perhaps superficially similar but importantly distinct).
In general a PUA should always make a woman feel good, otherwise why should she choose to stay with him? Probably women suffer much more through awkward interactions, stalkers, etc...
I have no problems with certain things that one might describe as pick up artistry. My comments are reserved for the things that don't involve respect for a woman's welfare (demeaning her, for example). And yes, I'm sure people suffer more through stalkers but that doesn't set the bar very high.
Making a woman feel insecure might work, so does a movie that makes people feel scared(ever enjoyed a good horror movie?). Should we blame a PUA if that works for him?
If you think that people should care about the welfare of others then yes. I think here we have identified the ultimate source of our disagreement. The fact that you think this is even a question worth asking shows that we have substantially different background assumptions (and this perhaps explains why you find attacks on PU confusing).
ETA: I realise now that it was unclear whether you were asking whether we should blame a PUA for the movie thing or for deliberately making a woman feel insecure. If the first, no (except perhaps in unusual circumstances) as going to a movie doesn't go against the woman's welfare presuming she, like many people, finds the fear of a horror movie desirable or finds it to be made up for by other aspects of the movie. If the second, then as per above: yes, I think a person should care about the welfare of the person that they're picking up.
Hi Roland,
I replied to you in the other thread and I'd be interested to know what you think about my comment (I'm not really making the sort of claim you dismiss in this post so I'm curious as to whether you agree with what I'm saying or whether my comments are problematic for other reasons). Comments quoted below for ease of access:
If the sole determining factor of whether an interaction with a woman is desirable is whether they end up attracted to you then, yes, even the most extreme sort of pick up artistry would be unproblematic.
However, if you think that there are other factors that determine whether such an interaction is desirable (such as whether the woman is treated with respect, is not made to feel unpleasant etc) then certain sorts of pick up artistry are extremely distasteful.
For example, let's hypothetically imagine that women are more attracted to people who make them feel insecure (I take no position on the accuracy of this claim). Sure, it would just be "understanding how women work and adjusting your behaviour to be more attractive to them" if you deliberately made them feel insecure. And sure, this would be no problem if being attractive was the sole determining factor of whether the interaction was desirable. However, if you think women deserve to be treated with respect and not made to feel horrible (presuming not because they are women but just because all humans deserve this) then this interaction is extremely undesirable.
Some discussions of pick up artistry don't just blur this line but fail to even realise there is a line. To those who think women should be treated with respect, this is extremely concerning.
And also:
We can think of it another way: what do we think constitutes the welfare of a woman? Presumably we don't think that it is just that she is attracted to the person she is currently conversing with.
However, if this is the case and if we care about how our interactions with people affect their welfare then the fact that a person's interaction with a woman makes the woman attracted to them doesn't entail that the interaction was desirable (because we care about their welfare, which is more than just their extent of current attraction).
Note that this need not be a condescending attempt to impose an objective conception of welfare on an unwilling recipient. For example, we might think that a person's welfare is determined by their own subjective, personally decided upon preferences. Now perhaps a woman has preferences to be attracted to the person they're talking to (or perhaps not) but presumably they also have preferences to feel good about themselves and a number of other things. Again, then, even taking their self-identified welfare, we can't presume that an interaction is benefiting a woman's welfare just because they are attracted to their current conversation partner.
To put it yet another way: just because a woman finds herself attracted to a person following an interaction, it doesn't mean she doesn't wish that the interaction had been different. So the conversation may fulfill the man's interests in being attractive but it doesn't follow from the fact that the woman is attracted to him that it fulfills the woman's interests.
Of course, if you think a woman's welfare is her own problem and an interested man's only responsibility is to be attractive to the woman then you won't find this compelling but that attitude is precisely what the problem is (many people think that one should be concerned about the effects of one's interactions on others' welfare).
ETA: So to clarify: the claim was not that some women's tastes are distasteful but rather that a woman's tastes don't entirely determine her welfare, so we can't move from a claim that something is in accordance with her tastes to a claim that something is in accordance with her welfare (or, for that matter, her desires, because her tastes in men don't fully define her desires either).
So I don't say that the problem is manipulation: I say the problem is a lack of concern about the welfare of women. Wearing a nice shirt doesn't show that lack of concern; deliberately demeaning them does. The dividing line isn't trying to influence women vs not doing so but the way this manipulation is carried out. Similarly, my claim is not about the accuracy or pleasantness of a view of women, it's about the desirability of a way of treating women even if that view is correct. So your comments above don't seem to respond to these issues. What do you think about these issues then?
That's the issue. Some people have an ideology that some women's tastes are distasteful.
It's a clever line but doesn't really interact with what I said (which may perhaps have been because I was unclear: I don't intend to suggest this fact is your fault).
We can think of it another way: what do we think constitutes the welfare of a woman? Presumably we don't think that it is just that she is attracted to the person she is currently conversing with.
However, if this is the case and if we care about how our interactions with people affect their welfare then the fact that a person's interaction with a woman makes the woman attracted to them doesn't entail that the interaction was desirable (because we care about their welfare, which is more than just their extent of current attraction).
Note that this need not be a condescending attempt to impose an objective conception of welfare on an unwilling recipient. For example, we might think that a person's welfare is determined by their own subjective, personally decided upon preferences. Now perhaps a woman has preferences to be attracted to the person they're talking to (or perhaps not) but presumably they also have preferences to feel good about themselves and a number of other things. Again, then, even taking their self-identified welfare, we can't presume that an interaction is benefiting a woman's welfare just because they are attracted to their current conversation partner.
To put it yet another way: just because a woman finds herself attracted to a person following an interaction, it doesn't mean she doesn't wish that the interaction had been different. So the conversation may fulfill the man's interests in being attractive but it doesn't follow from the fact that the woman is attracted to him that it fulfills the woman's interests.
Of course, if you think a woman's welfare is her own problem and an interested man's only responsibility is to be attractive to the woman then you won't find this compelling but that attitude is precisely what the problem is (many people think that one should be concerned about the effects of one's interactions on others' welfare).
ETA: So to clarify: the claim was not that some women's tastes are distasteful but rather that a woman's tastes don't entirely determine her welfare, so we can't move from a claim that something is in accordance with her tastes to a claim that something is in accordance with her welfare (or, for that matter, her desires, because her tastes in men don't fully define her desires either).
If the sole determining factor of whether an interaction with a woman is desirable is whether they end up attracted to you then, yes, even the most extreme sort of pick up artistry would be unproblematic.
However, if you think that there are other factors that determine whether such an interaction is desirable (such as whether the woman is treated with respect, is not made to feel unpleasant etc) then certain sorts of pick up artistry are extremely distasteful.
For example, let's hypothetically imagine that women are more attracted to people who make them feel insecure (I take no position on the accuracy of this claim). Sure, it would just be "understanding how women work and adjusting your behaviour to be more attractive to them" if you deliberately made them feel insecure. And sure, this would be no problem if being attractive was the sole determining factor of whether the interaction was desirable. However, if you think women deserve to be treated with respect and not made to feel horrible (presuming not because they are women but just because all humans deserve this) then this interaction is extremely undesirable.
Some discussions of pick up artistry don't just blur this line but fail to even realise there is a line. To those who think women should be treated with respect, this is extremely concerning.
There have been attempts to create derivatives of CDT that work like that. That replace the "C" from conventional CDT with a type of causality that runs about in time as you mention. Such decision theories do seem to handle most of the problems that CDT fails at. Unfortunately I cannot recall the reference.
You may be thinking of Huw Price's paper available here
Thanks Pinyaka, changed for next edit (and glad to hear you're finding it useful).
Okay, well I've rewritten this for the next update in a way that hopefully resolves the issues.
If you have time, once the update is posted I'd love to know whether you think the rewrite is successful. In any case, thanks for taking the time to comment so far.
Some quotes might help.
Peterson defines an act "as a function from a set of states to a set of outcomes"
The rest of the details are contained in this quote: "The key idea in von Neumann and Morgenstern's theory is to ask the decision maker to state a set of preferences over risky acts. These acts are called lotteries, because the outcome of each act is assumed to be randomly determined by events (with known probabilities) that cannot be controlled by the decision maker".
The terminology of risky acts is more widespread than Peterson: http://staff.science.uva.nl/~stephane/Teaching/UncDec/vNM.pdf
However, I don't particularly see the need to get caught up in the details of what some particular people said: mostly I just want a clear way of saying what needs to be said.
Perhaps the best thing to do is (a) be more explicit about what lotteries are in the VNM system; and (b) be less explicit about how lotteries and acts interact. Use of the more neutral word "options" might help here [where options are the things the agent is choosing between].
Specifically, I could explicitly note that lotteries are the options on the VNM account (which is not to say that all lotteries are options but rather that all options are lotteries on this account), outline everything in terms of lotteries and then, when talking about the issue of action guidance, I can note that VNM, at least in the standard formulation, requires that an agent already has preferences over options and note that this might seem undesirable.
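To make the proposal concrete, here is a minimal sketch (the function and outcome names are my own illustrative assumptions, not Peterson's or von Neumann and Morgenstern's) of lotteries as probability distributions over outcomes, with preferences over lotteries recovered via expected utility:

```python
# A minimal sketch of the VNM setup: lotteries are probability
# distributions over outcomes, and preferences over lotteries are
# represented by expected utility.
# (Illustrative names and numbers only; not drawn from Peterson's text.)

def expected_utility(lottery, utility):
    """lottery: dict mapping outcome -> probability (summing to 1).
    utility: dict mapping outcome -> real-valued utility."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# Hypothetical utilities over three outcomes.
utility = {"house_and_money": 100, "house_only": 60, "nothing": 0}

# Two "risky acts" (lotteries): the outcome of each is determined
# randomly, with known probabilities the agent cannot control.
safe = {"house_only": 1.0}
gamble = {"house_and_money": 0.5, "nothing": 0.5}

# The agent prefers whichever lottery has the higher expected utility.
print(expected_utility(safe, utility))    # 60.0
print(expected_utility(gamble, utility))  # 50.0
```

On this framing, every option the agent chooses between is a lottery (a sure thing being the degenerate case with probability 1), which matches the suggestion that all options are lotteries without all lotteries being options.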
From memory, Nozick explicitly disclaims the idea that his view might be a response to normative uncertainty. Rather, he claims that EDT and CDT both have normative force and so should both be taken into account. While this may appear to be window dressing, this will have fairly substantial impacts. In particular, no regress threatens Nozick but the regress issue is going to need to be responded to in the normative uncertainty case.
Okay, so I've been reading over Peterson's book An Introduction to Decision Theory and he uses much the same language as that used in the FAQ with one difference: he's careful to talk about risky acts rather than just acts (when he talks about VNM, that is; at other points he simply talks about acts). This seems to be a pretty common way of talking about it (people other than Peterson use this language).
Anyway, Peterson explicitly defines a "lottery" as an act (which he defines as a function from world states to outcomes) whose outcome is risky (which is to say, is determined randomly but with known probability) [I presume by the act's outcome he means the outcome that will actually occur if that act is selected].
Would including something more explicit like this resolve your concerns or do you think that Peterson does things wrong as well (or do you think I'm misunderstanding what Peterson is doing)?
Cool, thanks for letting me know.
Point conceded (both your point and shminux's). Edited for the next update.
Thanks for the clarification.
Perhaps worth noting that earlier in the document we defined acts as functions from world states to outcomes so this seems to resolve the second concern somewhat (if the context is different then presumably this is represented by the world states being different and so there will be different functions in play and hence different acts).
In terms of the first concern, while VNM may define preferences over all lotteries, there's a sense in which, in any specific decision scenario, VNM is only appealed to in order to rank the achievable lotteries and not all of them. Of course, however, it's important to note as well that this is only part of the story.
Anyway, I changed this for the next update so as to improve clarity.
My understanding is that in the VNM system, utility is defined over lotteries. Is this the point you're contesting, or are you happy with that but unhappy with the use of the word "acts" to describe these lotteries? In other words, do you think the portrayal of the VNM system as involving preferences over lotteries is wrong, or do you think that this is right but that the way we describe it conflates two notions that should remain distinct?
I think I'm missing the point of what you're saying here so I was hoping that if I explained why I don't understand, perhaps you could clarify.
VNM-utility is unique up to a positive linear transformation. When a utility function is unique up to a positive linear transformation, it is an interval (or cardinal) scale. So VNM-utility is an interval scale.
This is the standard story about VNM-utility (which is to say, I'm not claiming this because it seems right to me but rather because this is the accepted mainstream view of VNM-utility). Given that this is a simple mathematical property, I presume the mainstream view will be correct.
So if your comment is correct in terms of the presentation in the FAQ then either we've failed to correctly define VNM-utility or we've failed to correctly define interval scales in accordance with the mainstream way of doing so (or, I've missed something). Are you able to pinpoint which of these you think has happened?
One final comment. I don't see why ratios (a-b)/(c-d) aren't meaningful. For these to be meaningful, it seems to me that it would need to be that [(La+k)-(Lb+k)]/[(Lc+k)-(Ld+k)] = (a-b)/(c-d) for all L > 0 and k (as VNM-utilities are unique up to a positive linear transformation) and it seems clear enough that this will be the case:
[(La+k)-(Lb+k)]/[(Lc+k)-(Ld+k)] = [L(a-b)]/[L(c-d)] = (a-b)/(c-d)
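The invariance can also be checked numerically; here's a quick sketch (the particular utility values and transformation coefficients are arbitrary assumptions for illustration):

```python
# Ratios of utility differences (a-b)/(c-d) are invariant under any
# positive linear (affine) transformation u -> L*u + k with L > 0,
# which is what "unique up to a positive linear transformation"
# (i.e. an interval scale) amounts to.

def ratio_of_differences(a, b, c, d):
    return (a - b) / (c - d)

# Arbitrary utility values for four lotteries.
a, b, c, d = 10.0, 4.0, 7.0, 1.0
original = ratio_of_differences(a, b, c, d)

# Apply several positive linear transformations and confirm the
# ratio of differences is unchanged.
for L, k in [(2.0, 5.0), (0.5, -3.0), (10.0, 100.0)]:
    transformed = ratio_of_differences(L*a + k, L*b + k, L*c + k, L*d + k)
    assert abs(transformed - original) < 1e-12

print(original)  # 1.0
```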
Again, could you clarify what I'm missing (I'm weaker on axiomatizations of decision theory than I am on other aspects of decision theory and you're a mathematician so I'm perfectly willing to accept that I'm missing something but it'd be great if you could explain what it is)?
Does the horizontal axis of the decision tree in section 3 represent time?
Yes and no. Yes, because presumably the agent's end result re: house and money occurs after the fire and the fire will happen after the decision to take out insurance (otherwise, there's not much point taking out insurance). No, because the diagram isn't really about time, even if there is an accidental temporal component to it. Instead, the levels of the diagram correspond to different factors of the decision scenario: the first level is about the agent's choice, the second level about the states of natures and the third about the final outcome.
Given that this is how the diagram works, smearing out the triangles would mix up the levels and damage the clarity of the diagram. To model an agent as caring about whether they were insured or not, we would simply modify the text next to the triangles to something like "Insurance, no house and $99,900", "Insurance, house and - $100" and so on (and then we would assign different utilities to the agent based partly on whether they were insured or not as well as whether they had a house and how much money they had).
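As a sketch of how the modified outcome descriptions would feed into an expected-utility calculation (the fire probability and the utility numbers here are my own assumptions for illustration, not figures from the FAQ's diagram):

```python
# Sketch of the insurance decision: expected utility of insuring vs
# not insuring, with the outcome descriptions carrying the "insured
# or not" information so the agent can care about being insured as
# such, not just about house and money.
# (Probability and utility values are assumed for illustration only.)

p_fire = 0.001  # assumed probability of the fire state of nature

utility = {
    "insurance, no house and $99,900": 40,
    "insurance, house and -$100": 99,
    "no insurance, no house and $0": 0,
    "no insurance, house and $0": 100,
}

eu_insure = (p_fire * utility["insurance, no house and $99,900"]
             + (1 - p_fire) * utility["insurance, house and -$100"])
eu_no_insure = (p_fire * utility["no insurance, no house and $0"]
                + (1 - p_fire) * utility["no insurance, house and $0"])

print(round(eu_insure, 3), round(eu_no_insure, 3))
```

Whether insuring comes out ahead then depends entirely on the utilities assigned to the four outcome descriptions, which is exactly where an agent's caring about being insured (or not) gets represented.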
I think that forgetting this point sometimes leads to misapplications of decision theory.
I agree, though I think that talking of utility rather than money solves many of these problems. After all, utility should already take into account an agent's desire to be insured etc. and so talk of utility should be less likely to fall into these traps (which isn't to say there are never any problems).
Will be fixed in the next update. Thanks for pointing it out.
Thanks. Will be fixed in next update. Thanks also for the positive comment.
Thanks, as you note, the linked comment is right.
Thanks, will be fixed in next update.
Fixed for next update. Thanks.
Thanks, fixed for the next update.
Thanks. Fixed for the next update of the FAQ.
Thanks. I've fixed this up in the next update (though it won't appear on the LW version yet).
I think this conversation is now well into the territory of diminishing return so I'll leave it at that.
Okay, perhaps I can have another go at this.
First thing to note: possible worlds can't themselves be specified at different levels of detail. When we seem to do so, we are really specifying either partial possible worlds or sets of possible worlds. As rigid designation is a claim about worlds, it can't be relative to the level of detail utilised, as it only applies to things specified at one level of detail.
Second, you still seem to be treating possible worlds as concrete things rather than something in the head (or, at least, making substantive assumptions about possible worlds and relying on these to make claims about possible worlds generally). Let's take possible worlds to be sets of propositions and truth values. In this case there's no reason to find transworld identity puzzling. H2O exists in this world just if a relevant proposition is true (like, "I am holding a glass of H2O"). There's also no room for this transworld identity to be relative to a context. Whether these things are puzzling depends on your account of possible worlds, and it seems like if your account of possible worlds can't account for transworld identity it can't do the work required of possible worlds and so it is open to the challenge that it should be abandoned in favour of some other account.
Third, it's important to distinguish questions about the way worlds are from questions about how they can be specified. It's an interesting question how we should specify individual possible worlds and another interesting question whether we often do so or whether we normally specify sets of possible worlds instead. However, difficulties with specification do not undermine the concept of a rigid designator.
Fourth, even if it were a relative matter whether H2O exists in a world this wouldn't undermine the concept of rigid designation. Rigid designation would simply imply that if this were the case then it would also be a relative matter whether water existed in that world.
The summary of what I'm trying to get at is the following: you have raised concerns for practical issues (how we specify worlds) and epistemic issues (how we know what's in worlds) but these aren't really relevant to the issue of rigid designation. So, for example, I don't think your claim that:
I think what I'm actually trying to say is that what constitutes a rigid designator, among other things, seems to depend very strongly on the resolution at which you examine possible worlds.
Follows from your argument (for the reasons I've outlined above, recap: from the fact that humans often specify sets of worlds rather than worlds nothing about rigid designation follows), even though I think your argument is an insightful one that raises interesting epistemic and practical issues for possible worlds.
I think this is getting past the point at which I can usefully contribute further, though I will note that the vast literature on the topic has dealt with this sort of issue in detail (though I don't know it well enough to comment in detail).
Saying that, I'll make one final contribution and then leave it at that: I suspect that you've misunderstood the idea of a rigid designator if you think it depends on the resolution at which you examine possible worlds. To say that something is a rigid designator is to say that it refers to the same thing in all possible worlds (note that this is a fact about language use). So to say that "water" rigidly denotes H2O is just to say that when we use the word water to refer to something in some possible world, we are talking about H2O. Issues of how precisely the details of the world are filled in are not relevant to this issue (for example, it doesn't matter what happens to the air molecules - this has no impact on the issue of rigid designation).
The point you raise is an interesting one about how we specify possible worlds but not, to my knowledge, one that's relevant to rigid designation. But beyond that I don't think I have anything more of use to contribute (simply because we've exhausted my meagre knowledge of the topic)...
I can't cite sources off-hand but this suggestion is reasonably standard but taken to be a bit of a cheat (it dodges the difficult question). For this reason it is often stipulated that no objective chance device is available to the agent or that the predictor does something truly terrible if the agent decides by such a device (perhaps takes back all the money in the boxes and the money in the agent's bank account).
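To make the stakes of the "cheat" concrete, here is a toy expected-value sketch (my own illustration, not from any cited source; the payoffs, the predictor accuracy, and especially the assumptions about how the predictor treats a randomising agent are all stipulations):

```python
# A toy expected-value calculation for Newcomb's problem (illustrative only).
# Payoffs: $1,000,000 in the opaque box if the predictor predicted one-boxing,
# $1,000 always in the transparent box. Stipulated assumptions: a coin-flipping
# agent can't be predicted better than chance, and a predictor that detects
# randomisation either defaults to filling the box or, if it "punishes"
# randomisers, takes everything back.

def expected_payoff(strategy, p=0.99, big=1_000_000, small=1_000,
                    punish_randomisers=True):
    if strategy == "one-box":
        return p * big                        # full box with probability p
    if strategy == "two-box":
        return p * small + (1 - p) * (big + small)
    if strategy == "coin-flip":
        if punish_randomisers:
            return 0.0                        # predictor claws everything back
        return big + 0.5 * small              # predictor defaults to filling the box
    raise ValueError(strategy)

for s in ("one-box", "two-box", "coin-flip"):
    print(s, expected_payoff(s))
```

Without the punishment stipulation, the coin-flipper does better than the one-boxer (hence the "cheat"); with it, randomising is clearly dominated.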
As I said, these are complex issues.
possible worlds are things that live inside the minds of agents (e.g. humans).
Yes, but almost everyone agrees with this (or at least, almost all views on possible worlds can be interpreted this way even if they can also be interpreted as claims about the existence of abstract - non-concrete - objects). There are a variety of different things that possible worlds can be even given the assumption that they exist in people's heads (almost all the disagreement about what possible worlds are is disagreement within this category rather than between this category and something else).
Water is one of the examples I considered and found incoherent. Once you start considering possible worlds with different laws of physics, it's extremely unclear to me in what sense you can identify types of particles in one world with particles in another type of world.
Two things: first, the claim that "water" rigidly designates H2O doesn't imply that water must exist in all possible worlds - just that if water exists in a possible world then it is H2O. So if we can't identify the same particles in different worlds then this just means that water exists in almost no worlds (perhaps only in our own world).
However, the view that we can't identify the same particles in other worlds is a radical one and would be a strong sign that the account of possible worlds appealed to falls short. After all, possible worlds are supposed to be about what is possible, and surely there are possibilities that revolve around the particles existing in our world - i.e. surely it's possible that I now be holding a glass of H2O. If your account of possible worlds can't cope with this possibility, it seems not to be a very useful account of possible worlds.
Further, how hard it is to identify sameness of particles across possible worlds will depend on how you take them to be "constructed" - if they are constructed by stipulation, i.e. "consider the world where I am holding a glass of H2O", then it is very easy to get sameness of particles.
I'm not saying there's no room for your criticisms, but for them to hold requires substantial metaphysical work showing that your account of possible worlds, and only your account, works - and hence that your conclusions follow.
Okay, so three things are worth clarifying up front. First, this isn't my area of expertise so anything I have to say about the matter should be taken with a pinch of salt. Second, this is a complex issue and really would require 2 or 3 sequences of material to properly outline so I wouldn't read too much into the fact that my brief comment doesn't present a substantive outline of the issue. Third, I have no settled views on the issues of rigid designators, nor am I trying to argue for a substantive position on the matter so I'm not deliberately sweeping anything under the rug (my aim was to distinguish Eliezer's use of the phrase rigid designator from the standard usage and doing so doesn't require discussion of transworld identity: Eliezer was using it to refer to issues relating to different people whereas philosophers use it to refer to issues relating to a single person - or at least that's the simplified story that captures the crucial idea).
All that said, I'll try to answer your question. First, it might help to think of rigid designators as cases where the thing picked out isn't simply identified with its broad role in the world. "The inventor of bifocals" picks out the person that plays a certain role in the world - the role of inventing bifocals - so "the inventor of bifocals" is not a rigid designator. The heuristic for identifying rigid designators, then, is that they can't just be identified by their role in the world.
Given this, what are some examples of rigid designators? Well, the answer to this question will depend on who you ask. A lot of people, following Putnam, would take "water" (and other natural kind terms) to be a rigid designator. On this view, "water" rigidly refers to H2O, regardless of whether H2O plays the "water" role in some other possible world. So imagine a possible world where some other substance, XYZ, falls from the sky, slakes thirst, fills rivers and so on (that is, XYZ fills the water role in that possible world). On the rigid designation view, XYZ would not be water. So there's one example of a rigid designator (on one view).
Kripke (in his book Naming and Necessity) defends the view that names are rigid designators - so the name "Thomas Jefferson" refers to the same person in all possible worlds (this is where issues of transworld identity become relevant). This is meant to be contrasted with a view according to which the name "John Lennon" refers to the nearest and near enough realiser of a certain description ("lead singer of the Beatles", etc.). On Kripke's view, then, there are possible worlds where John Lennon is not the lead singer of the Beatles, even though the Beatles formed and had a singer that met many of the other descriptive features of John (born in the same town and so on).
Plausibly, what you take to be a rigid designator will depend on what you take possible worlds to be and what views you have on transworld identity. Note that your comment that it seems difficult to imagine how you could go about identifying objects in different possible worlds even in principle makes a very strong assumption about the metaphysics of possible worlds. For example, this difficulty would be most noticeable if possible worlds were concrete things that were causally distinct from us (as Lewis would hold). One major challenge to Lewis's view is just this challenge. However, very few philosophers actually agree with Lewis.
So what are some other views? Well, Kripke thinks that we simply stipulate possible worlds (as I said, this isn't my area so I'm not entirely clear what he takes possible worlds to be - maximally consistent sets of sentences, perhaps - if anyone knows, I'd love to have this point clarified). That is, we say, "consider the possible world where Bill Gates won the presidency". As Kripke doesn't hold that possible worlds are real concrete entities, this stipulation isn't necessarily problematic. On Kripke's view, then, the problem of transworld identity is easy to solve.
More precisely, I do not understand how one goes about identifying objects in different possible worlds even in principle. I think that intuitions about this procedure are likely to be flawed because people do not consider possible worlds that are sufficiently different.
I don't have the time to go into more detail but it's worth noting that your comment about intuition is an important point depending on your view of what possible worlds are. However, there's definitely an overarching challenge to views according to which we should rely on our intuitions to determine what is possible.
Hope that helps clarify.
You may have resolved this now by talking to Richard (who knows more about this than me) but, in case you haven't, I'll have a shot at it.
First, the distinction: Richard is using rigid designation to talk about how a single person evaluates counterfactual scenarios, whereas you seem to be taking it as a comment about how different people use the same word.
Second, relevance: Richard's usage allows you to respond to an objection. The objection asks you to consider the counterfactual situation where you desire to murder people, and claims that murder must then be right, so the theory is extremely subjective. You can respond that "right" is a rigid designator, so it is still right not to murder in this counterfactual situation (though your counterpart there will use the word "right" differently).
Suggestion: perhaps edit the paragraph so as to discuss either this objection and defence or outline why the rigid designator view so characterised is not your view.
Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities.
I just wanted to agree with Tristanhaze here that this usage strikes me as non-standard. I want to put this in my own words so that Tristanhaze/Eliezer/others can correct me if I've got the wrong end of the stick.
If something is a rigid designator it means that it refers to the same thing in all possible worlds. To say it's non-rigid is to say it refers to different things in different possible worlds. This has nothing to do with whether different language users who use the same word must always be referring to the same thing. So "George Washington" may be a rigid designator, in that it refers to the same person in all possible worlds (bracketing issues of transworld identity), but that doesn't mean that in all possible worlds that person is called "George Washington", or that everyone who uses the name "George Washington" - in this world, another possible world, or even the actual world - must be referring to this person.
To say "water" is a rigid designator is to say that whatever possible world I am talking about, I am picking out the same thing when I use the word water (in a way that I wouldn't be when I say, "the tallest person in the world" - this would pick out different things in different worlds). But it doesn't say anything about whether I mean the same thing as other language users in this or other possible worlds.
ETA: So the relevance to the quoted section is that rigid designators aren't about whether someone that thinks of Euclidean geometry when you say "numbers" is right or wrong - it's about whether whatever they associate with that word is the same thing in all possible worlds (or whether it's a different thing in some worlds).
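One way to picture the distinction drawn above is a toy model of my own (not anything from the literature): treat a designator as a map from possible worlds to referents, with rigidity as constancy of that map.

```python
# Toy model (my own illustration): a designator as a function from possible
# worlds to referents. A rigid designator returns the same referent in every
# world; a non-rigid one varies. The worlds and referents here are made up.
worlds = ["actual", "w1", "w2"]

# "water" picks out the same stuff in every world (rigid, on Putnam's view):
water = {w: "H2O" for w in worlds}

# "the tallest person in the world" picks out whoever fills that role (non-rigid):
tallest_person = {"actual": "Alice", "w1": "Bob", "w2": "Carol"}

def is_rigid(designator):
    """A designator is rigid iff it has a single referent across all worlds."""
    return len({designator[w] for w in worlds}) == 1
```

Note that nothing in the model says anything about how *other speakers* use the word - rigidity is a fact about one speaker's term across worlds, which is exactly the point being made above.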
ETA 2: I take it that Eliezer's paragraph here is in response to comments like these. I'm in a bit of a rush and need to think about it some more but I think Richard may be making a different point here to the one Eliezer's making (on my reading). I think Richard is saying that what is "right" is rigidly determined by my current (idealised) desires - so in a possible world where I desired to murder, murder would still be wrong because "right" is a rigid designator (that is, right from the perspective of my language, a different language user - like the me that desires murder - might still use "right" to refer to something else according to which murder is right. See the point about George Washington being able to be rigid even if people in other possible worlds use that name to name someone else). On the other hand, my reading of Eliezer was that he was taking the claim that "right" (or "fair") is a rigid designator to mean something about the way different language users use the word "fair". Eliezer seemed to be suggesting that rigid designation implied that words intrinsically mean certain things and hence that rigid designation implies that if someone uses a word in a different way they are wrong (using numbers to refer to geometry). I could have misunderstood either of these two comments but if I haven't then it seems to me that Eliezer is using rigid designator in a non-standard way.
Given that this has no response to it, I'm curious as to whether Orthonormal has responded to you regarding this either off list or elsewhere?
It still may be hard to resolve when something is as simple as possible.
So modal realism (the idea that possible worlds exist concretely) has been highlighted a few times in this thread as an unparsimonious theory but Lewis has two responses to this:
1.) This is (at least mostly) quantitative unparsimony, not qualitative (lots of stuff, not lots of types of stuff). It's unclear how bad quantitative unparsimony is. Specifically, Lewis argues that there is no difference between possible worlds and actual worlds (actuality is indexical), so he doesn't postulate two types of stuff (actuality and possibility); he just postulates a lot more of the stuff that we're already committed to. Of course, he may be committed to unicorns as well as goats (which the non-realist isn't), but then you can ask whether he's really committed to more fundamental stuff than we are.
2.) Lewis argues that his theory can explain things that no-one else can so even if his theory is less parsimonious, it gives rewards in return for that cost.
Now many people will argue that Lewis is wrong, perhaps on both counts but the point is that even with the case that's been used almost as a benchmark for unparsimonious philosophy in this thread, it's not as simple as "Lewis postulates two types of stuff when he doesn't need to, therefore, clearly his theory is not as simple as possible."
In terms of Lewis, I don't know of someone criticising him for this off-hand but it's worth noting that Lewis himself (in his book On the Plurality of Worlds) recognises the parsimony objection and feels the need to defend himself against it. In other words, even those who introduce unparsimonious theories in philosophy are expected to at least defend the fact that they do so (of course, many people may fail to meet these standards but the expectation is there and theories regularly get dismissed and ignored if they don't give a good accounting of why we should accept their unparsimonious nature).
"Sensations and Brain Processes": one of Jack Smart's main grounds for accepting the identity theory of mind is based on considerations of parsimony.
Quine's paper "On What There Is" is basically an attack on views that hold that we need to accept the existence of things like Pegasus (because otherwise what are we talking about when we say "Pegasus doesn't exist"?). Perhaps a ridiculous debate, but it's worth noting that one of Quine's main motivations is that this view is extremely unparsimonious.
From memory, some proponents of EDT support this theory because they think that we can achieve the same results as CDT (which they think is right) in a more parsimonious way by doing so (no link for that however as that's just vague recollection).
I'm not actually a metaphysician so I can't give an entire roll call of examples but I'd say that the parsimony objection is the most common one I hear when I talk to metaphysicians.
Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff inside your philosophy is not mainstream.
I think I must be misunderstanding what you're saying here because something very similar to this is probably the principle accusation relied upon in metaphysical debates (if not the very top, certainly top 3). So let me outline what is standard in metaphysical discussions so that I can get clear on whether you're meaning something different.
In metaphysics, people distinguish between quantitative and qualitative parsimony. Quantitative parsimony is about the amount of stuff your theory is committed to (so a theory according to which more planets exist is less quantitatively parsimonious than an alternative). Most metaphysicians don't care about quantitative parsimony. On the other hand, qualitative parsimony is about the types of stuff that your theory is committed to. So a theory committed to both causation and time would be less qualitatively parsimonious than one that was only committed to causation (just an example, not meant to be an actual case). Qualitative parsimony is seen to be one of the key features of a desirable metaphysical theory. The accusation that a theory postulates extra ontological stuff without gaining further explanatory power is basically the go-to standard accusation against a metaphysical theory.
Fundamentality is also a major philosophical issue - the idea that some stuff you postulate is ontologically fundamental and some isn't. Fundamentality views are normally coupled with the view that what really matters is qualitative parsimony of fundamental stuff (rather than stuff generally).
So how does this differ from the claim that you're saying is not mainstream?
Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1
This isn't an area about which I know very much, but my understanding is that very few philosophers actually hold a version of internalism which is disproven by these sorts of cases (even fewer than you might expect, because those people who do hold such a view tend to get commented on more often - "look how empirical evidence disproves this philosophical view" is a popular paper-writing strategy, so people hunt for a target and then attack it, even if that target is not a good representation of the general perspective). As I said, this is not my area of expertise so I'm happy to be proven wrong on this.
I know you mention this sort of issue in the footnote but I think that still runs the risk of being misleading, making it seem that philosophers en masse hold a view that they (AFAIK) don't. This is particularly likely to happen because you cite a survey of philosophers in the same breath.
In general, I find that academic philosophy is far less bad than people on LW seem to think it is, in a large part because of a tendency on LW to focus on fringe views instead of mainstream views amongst philosophers and to misinterpret the meaning of words used by philosophers in a technical manner.
I'm not convinced that Briggs' argument succeeds but I take it that the argument is meant to apply as long as the theory ranks decisions ordinally (rather than applying only if they do so and not if they utilise more information). See my response to manfred for a few more minor details.
Whoa, no. That's a bad mantra. Wireheading, quantum immortality, doing meth - these are bad things.
Briggs is here primarily considering cases where your preferences don't change as a result of your decision (but where your credences might). If we're interested in criticising the argument precisely as stated then perhaps this is a reasonable criticism, but it's not an interesting criticism of Briggs' view, which is about how we reason in cases where our decision gives us new information about the state of the world (i.e. about changing credences, not changing utilities).
This not only requires you to throw away the cardinal information as gwern says
Again, it is not clear that this is an interesting criticism. The result doesn't rely on cardinal values but it does apply to agents with cardinal values. This makes it a stronger, more widely applicable result (rather than an uninteresting one which doesn't apply to actual theories). The result relies only on the ordinal rankings of outcomes, yet it causes problems for theories that utilise cardinal values (like decision theory). So gwern notes that "a lot of voting paradoxes are resolved with additional information. (For example, Arrow's paradox is resolved with cardinal information.)" This isn't true of Briggs' argument - it can't simply be resolved by having cardinal preferences.
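The ordinal/cardinal contrast mentioned above can be made concrete with a toy voting example (my own illustration, not from Briggs' paper; the scores are made-up numbers consistent with the ordinal ballots): three voters with cyclic ordinal preferences produce a Condorcet cycle, which cardinal scores can break.

```python
from itertools import combinations

# Three voters with cyclic ordinal preferences (a Condorcet cycle):
ordinal = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def pairwise_winner(x, y):
    """Majority winner between options x and y under the ordinal ballots."""
    x_votes = sum(ballot.index(x) < ballot.index(y) for ballot in ordinal)
    return x if x_votes > len(ordinal) / 2 else y

# Every option loses some pairwise contest: A beats B, B beats C, but C beats A.
for x, y in combinations("ABC", 2):
    print(x, "vs", y, "->", pairwise_winner(x, y))

# Hypothetical cardinal scores consistent with the same ordinal rankings:
cardinal = [{"A": 10, "B": 8, "C": 0},
            {"B": 10, "C": 2, "A": 0},
            {"C": 10, "A": 3, "B": 0}]
totals = {c: sum(v[c] for v in cardinal) for c in "ABC"}
print(max(totals, key=totals.get))  # cardinal information yields a unique winner
```

This is the sense in which some voting paradoxes dissolve once cardinal information is added; the point above is that Briggs' result is not like this, since it goes through for cardinal agents too.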
A reductio ad absurdum of this position would be that it can't even distinguish CDT from evil-CDT, which is where you always choose the locally worst option. Their result is really just the statement "some decision theories have different preference orderings,"
Again, it's not clear that this is an interesting criticism. Briggs isn't trying to develop necessary and sufficient criteria for theory adequacy, so it's no surprise that her paper doesn't determine which of CDT and evil-CDT one should follow. She's just introducing two necessary criteria for theory adequacy and presenting a proof that no theory can meet both. So both CDT and evil-CDT fail to be entirely adequate theories - that's all she is trying to establish. Of course, we also want a tool that can tell us that CDT is a more adequate theory than evil-CDT, but that's not the tool Briggs is discussing here, so it seems unreasonable to criticise her for failing to achieve some aim that's tangential to her purpose.
Egan's point is often taken to be similar to some earlier points, including Bostrom's Meta-Newcomb problem (http://www.nickbostrom.com/papers/newcomb.html).
It's worth noting that not everyone agrees that these are problems for CDT:
See James Joyce (the philosopher): http://www-personal.umich.edu/~jjoyce/papers/rscdt.pdf
See Bob Stalnaker's comment here: http://tar.weatherson.org/2004/11/29/newcomb-and-mixed-strategies/ (the whole thread is pretty good)