Is your position the same as Dennett's position (summarized in the second paragraph of the synopsis here)?
" 'What is true is already so. The coherent extrapolated volition of God doesn't make it worse' is obviusly false if and only if timeless politics is isomorphic to truth if and only if the tenth virtue of rationality is 'Let me not become attached to the map I may not want' " is obviously false.
Well, it's true.
Also, this is way smarter than the Deepak Chopra quote generator.
Yes. P2 finding this out would harm him, and couldn't possibly benefit anyone else, so if searching would lead him to believe the cube doesn't exist, it would be ethically better if he didn't search. But the harm to P2 is a result of his knowledge, not the mere fact of the cube's nonexistence. Likewise, P1 should investigate, assuming he would find the cube. The reason for this difference is that investigating would have a different effect on the mental states of P1 than it would on the mental states of P2. If the cube in U1 can't be found by P1, then the asymmetry is gone, and neither should investigate.
I would not be in favor of wireheading the human race, but I don't see how that is connected to S. If wireheading all of humanity is bad, it seems clear that it is bad because it is bad for the people being wireheaded. If this is a wireheading scenario where humanity goes extinct as a result of wireheading, then this is also bad because of the hypothetical people who would have valued being alive. There is nothing about S that stops someone from comparing the normal life they would live with a wireheaded life and saying they would prefer the normal life. This is because these two choices involve different mental states for the person, and S does not in itself place any restrictions on which mental states would be better for you to have. Rather, it states that your own mental states are the only things that can be good or bad for you.
If you think S is false, you could additionally claim that wireheading humanity is bad because the fact that humanity is wireheaded is something that almost everybody believes is bad for them, and so if humanity is wireheaded, that is very bad for many people, even if these people are not aware that humanity is wireheaded. But it seems very easy to believe that wireheading is bad for humanity without believing this claim.
Just to make sure I understand your position: Imagine two universes, U1 and U2, like the one in my original post, where P1 and P2 are unsure whether the gold cube exists. In U1 the cube exists, in U2 it does not, but they are otherwise identical (or close enough to identical that P1 and P2 have identical brain states). The Ps truly desire that the cube exist as much as anyone can desire a fact about the universe to be true. Do you claim that P1 is better off than P2? If so, do you really think that this being possible is as obvious as that 2 + 2 ≠ 3? If not, why would someone's well-being be able to depend on something other than their mental states in some situations but not this one? To me it seems very obvious that P1 and P2 have exactly equally good lives, and I am truly surprised that other people's intuitions and beliefs lean strongly the other way.
What if you're deciding whether to have sex?
I think you're misunderstanding what I meant. I'm using "Someone's utility" here to mean only how good or bad things are for that person. I am not claiming that people should (or do) only care about their own well-being, just that their well-being only depends on their own mental states. Do you still disagree with my statement given this definition of utility?
If someone kidnapped me and hooked me up to an experience machine that gave me a simulated perfect life, and then tortured my family for the rest of their lives, I claim that this would be good for me. It would be bad overall because people would be harmed (far in excess of my gains). If I was given this as an option I would not take it because I would be horrified by the idea and because I believe it would be morally wrong, but not because I believe I would be worse off if I took the deal. If someone claimed that taking this deal would be bad for their own well-being, I believe that they would be mistaken.
If someone claimed that the existence of a gold cube in a section of the universe where it would never be noticed by anyone or affect any sentient things could be a morally good thing, I would likewise claim that they are mistaken. I claim this, because regardless of how much they want the cube to exist, or how good they believe the existence of the cube to be, no one's well-being can depend on the existence of the cube. At most, someone's well-being can depend on their belief in the existence of the cube.
No, it isn't. You are claiming that P "really" wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of "the gold exists" is "the oracle said the gold exists."
I do not claim that. I claim that P believes the cube exists because the oracle says so. He could believe it exists because he saw it through a telescope, or because he saw it fly in front of his face and then away into space. Whatever reason he has for "knowing" the cube exists has some degree of uncertainty. He is happy because he has a strong belief that the gold exists. Moreover, my point stands regardless of where P gets his knowledge. Imagine, for example, that P believes strongly that the cube does not exist, because the existence of the cube violates Occam's razor. It is still the case (in my opinion) that whether he is correct does not alter his well-being.
How surprising should it be that ignoring the real world causes of something produces paradoxes?
I do not think that this is a paradox; it seems intuitively obvious to me. In fact, I'm not entirely sure that we disagree on anything. You say "P's happiness doesn't depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists." I think others on this thread would argue that P's happiness does change depending on the existence of the gold, even if what the oracle tells him is the same either way.
I actually have not a clue what this example's connection to moral realism might be,
Maybe nothing, I just suspected that moral anti-realists would be less likely to accept S. My main question is just whether other people share my intuition that S is true (and what their reasons for agreeing or disagreeing are).
P's happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.
I'm not sure I understand what you're saying. P believes that the oracle is telling him the cube exists because the cube exists. P is of course mistaken, but everything else the oracle told him was correct, so he strongly believes that the oracle will only tell him things because they are the truth. Whether this is a reasonable belief for P to have is not relevant. You seem to be saying that if something has no causal effect on someone, then it cannot affect their well-being. I agree with that, but other people do not agree with that.
I guess the realism aspect isn't as relevant as I thought it would be. I expected that any realists would believe S, and that anti-realists might or might not. I also think that not believing S would imply anti-realism, but I'm not super confident that that's true.
I would say that P and Q have equal utility until the point where their circumstances diverge, after which of course they would have different utilities. There is no reason to consider future utility when talking about current utility. So it just depends on what section of time you are looking at. If you're only looking at a segment where P and Q have identical brain states, then yes I would say they have the same utility.
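To put that a bit more formally (just a rough sketch; the notation here is mine, nothing standard):

U_t(P) = f(M_t(P))

where M_t(P) is P's complete mental state at time t and f is whatever function takes mental states to well-being. If M_t(P) = M_t(Q) for every t in some interval, then U_t(P) = U_t(Q) over that interval, no matter what is going on outside their heads or what happens after the interval ends.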
I said that there could be other reasons for P to want the cube to exist. If someone has a desire whose fulfillment will not be good for them in any way, or good for any other sentient being, that's fine, but I do not think that a desire of this type is morally relevant in any way. Further, if someone claimed to have such a desire, knowing that fulfilling it served no purpose other than simply fulfilling it, I would believe them to be confused about what desire is. Surely the desire would have to cause them at least some discomfort, or some sort of urge to fulfill it. Without that, what does desire even mean?
But that doesn't really have much to do with whether S is true. Like I said, it seems clearly true to me that identical mental states imply identical well-being. If you don't agree, I don't really have any way to convince you other than what I've already written.
I am stipulating that P really truly wants the gold to exist (in the same way that you would want there not to exist a bunch of people who are being tortured, ceteris paribus). Whether P should trust the oracle is beside the point. The difference between these scenarios is that you are correct in believing that the torture is morally bad. However, your well-being would not be affected by whether the people are being tortured, only by your belief about how likely this is. Of course, you would still try to stop the torture if you could, even if you knew that you would never know whether you were successful, but this is mainly an act of altruism.
My main point is probably better expressed as "Beings with identical mental states must be equally well off". Disagreeing with this seems absurd to me, but apparently a lot of people do not share this intuition.
Also, you could easily eliminate the oracle in the example by just stating that P spontaneously comes to believe the cube exists for no reason. Or we could imagine that P has a perfectly realistic hallucination of the oracle. The fact that P's belief is unjustified does not matter. According to S, the reasons for P's mental state are irrelevant.
Look, if it helps, you can define utility*, which is utility that doesn't depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*.
Someone can want to maximize utility**, and this is not necessarily irrational, but if they do this they are choosing to maximize something other than their own well-being.
Perhaps they are being altruistic and trying to improve someone else's well-being at the expense of their own, like in your torture example. In this example, I don't believe that most people who choose to save their family believe that they are maximizing their own well-being; I think they realize they are sacrificing their well-being (by maximizing utility** instead of utility*) in order to increase the well-being of their family members. I think that anyone who does believe they are maximizing their own well-being when saving their family is mistaken.
Perhaps they do not have any legitimate reason for wanting something other than their own well-being. Going back to the gold cube example, think of why P wants the cube to exist. P could want it to exist because knowing that gold cubes exist makes them happy. If this is the only reason, then P would probably be perfectly happy to accept a deal where their mind is altered so that they know the cube exists, even though it does not. If, however, P thinks there is something "good" about the cube existing, independent of their mind, they would (probably) not take this deal. Both of these actions are perfectly rational, given P's beliefs about morality, but in the second case, P is mistaken in thinking that the existence of the cube is good by itself. This is because in either case, after accepting the deal, P's mental state is exactly the same, so P's well-being must be exactly the same. Further, nothing else in this universe is morally relevant, and P was simply mistaken in thinking that the existence of the gold cube was a fundamentally good thing. (There might be other reasons for P to want the cube. Perhaps P just has an inexplicable urge for there to be a cube. In this case it is unclear whether they would take the deal, but taking it would surely still increase their well-being.)
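To make the distinction explicit in the notation you suggested (a rough sketch, with f and g standing in for whatever functions do the evaluating):

utility*(P) = f(M(P))
utility**(P) = g(M(P), W)

where M(P) is P's mental state and W is the state of the world outside P's mind. My claim is that well-being is utility*: the cube can only affect P through M(P), via P's belief that it exists, and never through W directly.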
Well, again, you're kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states.
It seems implausible to me that this function could exist independent of a mind or outside of a mind. You seem to be claiming that two people with identical mental states could have different levels of well-being. This seems absurd to me. I realize I am not providing much of an argument for this claim, but the idea that someone's well-being could depend upon something that has no connection with their mental states whatsoever strongly violates my moral intuitions. I expected that other people would share this intuition, but so far no one has said that they do, so perhaps this intuition is unusual. (One could argue that P is correct in believing that the cube has moral value/utility independent of any sentient being, but this seems even more absurd.)
In any case, I think S is basically equivalent to saying that utility (or moral value, however you want to define it) reduces to mental states.
P.S. I think you quoted more than you meant to above.
This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true. But it's mainly just something I was thinking about while thinking about moral realism.
I don't really know what I'm talking about when I say objective utility; I am just claiming that if such a thing exists or makes sense to talk about, it can only depend on the states of individual minds, since each mind's utility can only depend on the state of that mind, and nothing outside of the utilities of minds can be ethically relevant.
That is true, but not relevant to the point I am trying to make. If P took the first offer, they would end up exactly as well off as if they hadn't received the offer, and if P took the second offer, they would end up better off. The fact that P's beliefs don't correspond with reality does not change this. The reason that P would accept the first offer but not the second is that P believes the universe would be "better" with the cube. P does not think ey will actually be happier (or whatever) accepting offer 1, and if P does think ey will be happier, I think that is an error in moral judgment. The error is in thinking that the cube is morally relevant, when it cannot be, since P is the only morally relevant thing in this universe.
In your example, I agree that almost everyone would choose the second choice, but my point is that they will be worse off because they make that choice. It is an act of altruism, not an act which will increase their own utility. (Possibly the horror they would experience in making choice 1 would outweigh their future suffering, but after the choice is made they are definitely worse off having made the second choice.)
I say that the cube cannot be part of P's utility function, because whether the cube exists in this example is completely decoupled from whether P believes the cube exists, since P trusts the oracle completely, and the oracle is free to give false data about this particular fact. P's belief about the cube is part of the utility function, but not the actual fact of whether the cube exists.
Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".
I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but there is one statement about morality that seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent". Call this statement S. By utility I roughly mean how good or bad things are from the perspective of the agent. The following thought experiment gives a concrete example of what I mean by S.
Imagine a universe with only one sentient thing, a person named P. P desires that there exist a 1 meter cube of gold somewhere within P's lightcone. P has a (non-sentient) oracle that ey trusts completely to provide either an accurate answer or no information for whatever question ey asks. P asks it whether a 1 meter gold cube exists within eir lightcone, and the oracle says yes.
It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P, and therefore the utility of the universe. P is free to claim that eir utility depends upon the existence of the cube, but I believe P would be mistaken. P certainly desires the cube to exist, but I believe that it cannot be part of P's utility function. (I suppose it could be argued that in this case P is also mistaken about eir desire, and that desires can only really be about one's own mental state, but that's not important to my argument.) Similarly, P would be mistaken to claim that anything not part of eir mind was part of eir utility function.
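Stated schematically (this is just my own informal gloss on S, not a precise formalization): S says there is some function f such that, for any agent A,

U(A) = f(M(A))

where M(A) is the complete mental state of A and U(A) is how well off A is. Nothing outside M(A) appears as an argument, so in the example above the actual existence of the cube can change U(P) only by changing M(P), e.g. by changing what the oracle tells P.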
I'm not sure whether S in itself implies a weak form of moral realism, since it implies that statements of the form "x is not part of P's utility function" can be true. Would these statements count as ethical statements in the necessary way? It does not seem to imply that there is any objective way to compare different possible worlds though, so it doesn't hurt the anti-realist position much. Still, it does seem to provide a way to create a sort of moral partition of the world, by breaking it into individual morally relevant agents (no, I don't have a good definition for "morally relevant agent") which can be examined separately, since their utility can only depend on their map of the world and not the world itself. The objective utility of the universe can only depend on the separate utilities in each of the partitions. This leaves the question of whether it makes any sense to talk about an objective utility of the universe.
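As a rough sketch of what I mean by that partition (again, my own notation, nothing standard): if A_1, ..., A_n are the morally relevant agents, then any objective utility of the universe, if such a thing exists at all, would have to be some aggregation

U(universe) = g(U(A_1), ..., U(A_n)) = g(f(M(A_1)), ..., f(M(A_n)))

so it could only depend on the world through the individual mental states M(A_i).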
So, does anyone disagree with S? If you agree with S, are you an anti-realist?