I simplify here because a lot of people think I will have contradictory expectations for a more complex event.
But I think you're being even more picky here. Do I -expect- that increasing the amount of gold in the world will slightly affect the market value? Yes. But I haven't wished anything related to that, my wish is -only- about some gold appearing in front of me.
Having the genie magically change how much utility I get from the gold is an even more ridiculous extension. If I wish for gold, why the heck would the genie feel it was his job to change my mental state to make me like gold more?
Possibly we just think very differently, and your 'expectation' of what would happen when gold appears also includes everything you would do with that gold later, despite, among many, many things, not even knowing -when- you would speak the wish to get the gold, or what form it would appear in. And you even have in mind some specific level of happiness that you 'expect' to get from it. If so, you're right, this trick will not work for you.
The genie is, after all, all-powerful, so there are any number of subtle changes it could make that you didn't specify against that would immediately make you, or someone else, wish for the world to be destroyed. If that's the genie's goal, you have no chance. Heck, if it can choose its form it could probably appear as some psycho-linguistic anomaly that hits your retina just right to make you into a person who would wish to end the world.
Really I'm just giving the genie a chance to show me that it's a nice guy. If it's super evil I'm doomed regardless, but this wish test (hopefully) distinguishes between a benevolent genie and one that's going to just be a dick.
A wish is a pretty constrained thing, for some wishes.
If I wish for a pile of gold, my expectations probably constrain lots of externalities, like 'Nobody is hurt acquiring the gold, it isn't taken from somewhere else, it is simply generated and deposited at my feet, but not, like, crushing me, or using the molecules of my body as raw material, or really anything that kills me for that matter'. Mostly my expectations are about things that won't happen, not things that will happen that might conflict (the latter consists only of: the gold will appear before me and will be real and persistent).
If you try this with a wish for world peace, you're likely to get screwed. But I think that's a given no matter your strategy. Don't wish for goals, wish for the tools to achieve your goals - you'll probably be more satisfied with the result to boot.
I just take this as evidence that I -can't- beat the genie, and don't attempt any more wishes.
Whereas, if it's something simple then I have pretty strong evidence that the genie is -trying- to meet my wishes, that it's a benevolent genie.
Wish 1: "I wish for a paper containing the exact wording of a wish that, when spoken to you, would meet all my expectations for a wish granting X." For any value of X.
Wish 2: Profit.
Three wishes is overkill.
That's hardly a critique of the trolley problem. Special relativity itself stipulates that it doesn't apply to faster-than-light movement, but a moral theory can't say "certain unlikely or confusing situations don't count". The whole point of a moral theory is to answer those cases where intuition is insufficient, the extremes you talk about. Imagine where we'd be if people just accepted Newtonian physics, saying "It works in all practical cases, so ignore the extremes at very small sizes and very high speeds; those are faulty models". Of course we don't allow that in the sciences, so why should we in ethics?
I can attest that I had those exact reactions on reading those sections of the article. And in general I am more impressed by someone who graduated quickly than one who took longer than average, and by someone who wrote a book rather than one who hasn't. "But what if that's not the case?" is hardly a knock-down rebuttal.
I think it's more likely you're confusing the status you attribute to Kaj for the candidness and usefulness of the post with the status you would objectively add to or subtract from a person on hearing that they floundered or flourished in college.
I don't see how this is admirable at all. This is coercion.
If I work for a charitable organization, and my primary goal is to gain status and present an image as a charitable person, then efforts by you to change my mind are adversarial. Human minds are notoriously malleable, so it's likely that by insisting I do some status-less charity work you are likely to convince me on a surface level. And so I might go and do what you want, contrary to my actual goals. Thus, you have directly harmed me for the sake of your goals. In my opinion this is unacceptable.
It's excessive to claim that the hard work, introspection, and personal -change- (the hardest part) required to align your actions with a given goal are equivalent in difficulty or utility to just taking a pill.
Even if self-help techniques consistently worked, you'd still have to compare the opportunity cost of investing that effort with the apparent gains from reaching a goal. And estimating the utility of a goal is really difficult, especially when it's a goal you've never experienced before.
You are quite right. My scores correlate much better now; I retract my confusion.
I underwent a real IQ test when I was young, and so I can say that this estimation significantly overshoots my actual score. But that's because it factors in test-taking as a skill (one that I'm good at). Then again, I'm also a little shocked that the table on that site puts an SAT score of 1420 at the 99.9th percentile. At my high school there were, to my knowledge, at least 10 people with that high of a score (and that's only those I knew of), not to mention one perfect score. This is out of ~700 people. Does that mean my school was, on average, at the 90th percentile of intelligence? Or just at the 90th percentile of studying hard (much more likely I think).
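For what it's worth, a quick back-of-envelope check of that figure (a Python sketch; it assumes the site's 99.9th-percentile claim is right and pretends the school were a random sample of test-takers, which it obviously isn't):

    from math import comb

    n_students = 700
    p_top = 0.001  # 99.9th percentile means top 0.1% of test-takers

    # Expected number of 1420+ scorers in a random sample of 700:
    print(n_students * p_top)  # ~0.7, versus the 10+ actually observed

    # Chance of seeing 10 or more such scores in a truly random sample:
    p_at_least_10 = 1 - sum(comb(n_students, k) * p_top**k * (1 - p_top)**(n_students - k)
                            for k in range(10))
    print(p_at_least_10)  # on the order of 1e-9

Which just puts a number on the suspicion above: either the table's percentile is off, or the school is very far from a random sample.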
The article spends two paragraphs explaining the link between openness and disease, and then even links to the Wikipedia page for parasite load. You link to 'Inferential Distance', but this seems more like a case of 'didn't really read the article', or perhaps 'came into it with really strong preconceptions of what it would be about, and didn't bother to update them based on what was actually there'.
What kind of 'morality' are we talking about here? If we're talking about actual systems of morality, deontological/utilitarian/etc, then empathy is almost certainly not required to calculate morally correct actions. But this seems to be talking about intuitive morality. It's asking: is empathy, as a cognitive faculty, necessary in order to develop an internal moral system (that is like mine)?
I'm not sure why this is an important question. If people are acting morally, do we care if it's motivated by empathy? Or put it this way: Is it possible for a psychopath to act morally? I'd say yes, of course, no matter what you mean by morality.
I see what you're getting at with the intuitive concept (and philosophy matching how people actually are, rather than how they should be), but human imperfection seems to open the door to a whole lot of misunderstanding. Like, if someone said we were having fish for dinner, and then served duck, because they thought anything that swims is a fish, well I'd be put out to say the least.
I think my intuition is that my understanding of various concepts should approach the strictness of conceptual analysis. But maybe that's just vanity. After all, border cases can easily be specified (if we're having eel, just say 'eel' rather than 'fish').
I think this is a little unfair. For example, I know exactly what the category 'fish' contains. It contains eels and it contains flounders, without question. If someone gives me a new creature, there are things that I can do to ascertain whether it is a fish. The only question is how quickly I could do this.
We pattern-match on 'has fins', 'moves via tail', etc. because we can do that fast, and because animals with those traits are likely to share other traits like 'is bilaterally symmetrical' (and perhaps 'disease is more likely to be communicable from similarly shaped creatures'). But that doesn't mean the hard-and-fast 'fish' category is meaningless; there is a reason dolphins aren't fish.
I actually tried the 2-4-6 puzzle on both my brothers, and they both got it right because they thought there was some trick to it and so kept pressing until they were sure (and even after ~20 questions still didn't fully endorse their answers). Maybe I have extra-non-biased brothers (not too likely), or maybe the clinical 2-4-6 test is so likely to be failed because students expect a puzzle and not a trick. That is to say, you are in a position of power over them and they trust you to give them something similar to what they've been given in the past. Also there's an opportunity cost to keep guessing in the classroom setting, both because you have less class time to learn and because if other students have already stopped you might alienate them by continuing. Point is, I've seen markedly better results when this puzzle is administered in a casual or 'real-world' setting. I intend to try it on other people (friends, co-workers), and see if the trend continues. Anyone else tried it and gotten this result?
I feel obliged to point out that Socialdemocracy is working quite well in Europe and elsewhere and we owe it, among other stuff, free universal health care and paid vacations.
It's not fair to say we 'owe' Socialdemocracy for free universal health care and paid vacations, because they aren't so much effects of the system as they are fundamental tenets of the system. It's much like saying we owe FreeMarketCapitalism for free markets - without these things we wouldn't recognize it as socialism. Rather, the question is whether the marginal gain in things like quality of living are worth the marginal losses in things like autonomy. Universal health care is not an end in itself.
My point was meant in the sense that random culling for organs is not the best solution available to us. Organ growth is not that far in the future, and it's held back primarily because of moral concerns. This is not analogous to your parody, which more closely resembles something like: "any action that does not work towards achieving immortality is wrong".
The point is that people always try to find better solutions. If we lived in a world where, as a matter of fact, there is no way whatsoever to get organs for transplant victims except from living donors, then from a consequentialist standpoint some sort of random culling would in fact be the best solution. And I'm saying, that is not the world we live in.
But people still die.
I think a major part of how our instinctive morality works (and a reason humans, as a species, have been so successful) is that we don't go for cheap solutions. The most moral thing is to save everyone. The solution here is a stopgap that just diminishes the urgency of developing the technology to grow replacement organs, and even if, consequentially, it leaves more people alive in the short term, it in fact worsens our long-term life expectancy by not addressing the problem (which is that people's organs get damaged or wear out).
If a train is heading for 5 people, and you can press a switch to make it hit 1 person, the best moral decision is "I will find a way to save them all!" Even if you don't find that solution, at least you were looking!
In the 1 red/10 beans scenario, you can only win once, no matter how hard you try. With 7 red/100 beans, you simply play the game 100 times, draw all 7 red beans, and end up with 7x the money.
Unless the beans are replaced, in which case yeah, what the hell were they thinking?
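For concreteness, here's the arithmetic under the no-replacement reading, as a small Python sketch (the equal payoff per red bean is just an assumed toy payoff):

    # Two bowls; assume each red bean drawn pays out the same amount.
    bowls = {"1 red / 10 beans": (1, 10), "7 red / 100 beans": (7, 100)}

    for name, (red, total) in bowls.items():
        per_draw = red / total  # chance of red on any single draw
        exhaustive = red        # total wins if you draw until the bowl is empty
        print(f"{name}: {per_draw:.0%} per draw, {exhaustive} wins if you empty the bowl")

    # Per draw the small bowl is better (10% vs 7%); played to exhaustion, the big
    # bowl pays out seven times as much, which is the point made above.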
I'd call that character humor, where the character of the boss is funny because of his exaggerated stupidity. It wouldn't be funny if the punchline was just the boss getting hit in the face by a pie (well, beyond the inherent humor of pie-to-face situations). Besides, most of the co-workers say idiotic things too!
The high value you place on freedom may be because, in the past, freedom has tended to lead to pleasure. The idea that people are better suited to choosing how to obtain their pleasure makes sense to us now, because people usually know how best to achieve their own subjective pleasure, whereas forced pleasures often aren't that great. But by the time wireheading technology comes around, we'll probably know enough about neurology and psychology that such problems no longer exist, and a computer could well be trusted to tell you what you would most enjoy more accurately than your own expectations could.
I agree with the intuition that most people value freedom, and so would prefer a free pleasure over a forced one if the amount of pleasure was the same. But I think that it's a situational intuition, that may not hold in the future. (And is a value really a value if it's situational?)
The difficulty of solving a Rubik's cube is exactly that it doesn't respond to heuristics. A cube can be 5 moves from solved and yet look altogether a mess, whereas a cube with all but one corner correct is still some 20 moves away from complete (by the methods I looked up, at least). In general, -humans- solve a Rubik's cube by memorizing sequences of moves with certain results, and then stringing these sub-solutions together. An AI, though, probably has the computational power to brute-force a solution much faster than it could manipulate the cube.
The more interesting question (I think) is how it figures out a model for the cube in the first place. What makes the cube a good problem is that it's designed to match human pattern intuitions (in that we prefer the colors to match, and we quickly notice the seams that we can rotate through), but an AI has no such intuitions.
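To illustrate the search-versus-macros point, here's a toy Python sketch: plain breadth-first search over a tiny invented permutation puzzle standing in for the cube. (The real cube has roughly 4.3x10^19 states, so practical optimal solvers use iterative deepening with pattern databases rather than raw breadth-first search, but the principle is the same: search the move graph instead of stringing together memorized macros.)

    from collections import deque

    SOLVED = (0, 1, 2, 3, 4, 5)

    # Each "move" is a permutation: new_state[i] = old_state[perm[i]].
    # These moves are made up for the example; they are not real cube turns.
    MOVES = {
        "A": (1, 2, 0, 3, 4, 5),  # cycle the first three slots
        "B": (0, 1, 2, 4, 5, 3),  # cycle the last three slots
        "C": (3, 1, 2, 0, 5, 4),  # swap slots 0<->3 and 4<->5
    }

    def apply(state, perm):
        return tuple(state[i] for i in perm)

    def brute_force(start):
        """Shortest move sequence from start back to SOLVED, by exhaustive search."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            if state == SOLVED:
                return path
            for name, perm in MOVES.items():
                nxt = apply(state, perm)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
        return None

    # Scramble with a few moves, then recover a shortest solution (not necessarily
    # the reverse of the scramble; the search has no notion of how it got messy).
    scrambled = SOLVED
    for m in ["A", "C", "B", "A"]:
        scrambled = apply(scrambled, MOVES[m])
    print(brute_force(scrambled))

Note that the state representation and the move set are simply handed to the program here; figuring those out from scratch is the genuinely interesting part, as noted above.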
The simple answer is that your choice is also probabilistic. Let's say that your disposition is one that would make it very likely you will choose to take only box A. Then this fact about yourself becomes evidence for the proposition that A contains a million dollars. Likewise if your disposition was to take both, it would provide evidence that A was empty.
Now let's say that you're pretty damn certain that this Omega guy is who he says he is, and that he was able to predict this disposition of yours; then your decision to take only A stands as strong evidence that the box contains the million dollars. Likewise with the decision to take both.
But what if, you say, I already expected to be the kind of person who would take only box A? That is, what if the probability distribution over my expected dispositions was 95% only box A and 5% both boxes? Well then it follows that your prior over the contents of box A will be 95% that it contains the million and 5% that it is empty. And as a result, the likely case of you actually choosing to take only box A need only have a small effect on your expectation of the contents of the box (a ~.05 change to reach ~1). But if you introspect and find that really, you're the kind of person who would take both, then your expectation that the box has the million dollars will drop by exactly 19 (= .95/.05) times as much as it would have been raised by the opposite evidence (resulting in ~0 chance that it contains the million). Making the less likely choice creates a much greater change in expectation, while the more common choice induces a smaller change (since you already expected the result of that choice).
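In numbers, a minimal sketch of that update (Python; for simplicity it assumes Omega is a perfect predictor, so the prior on box A matches the prior on my own disposition):

    p_one_box = 0.95             # prior: I'm the kind of person who takes only box A
    p_million_prior = p_one_box  # perfect predictor, so same prior that A holds the million

    # Case 1: I observe myself taking only box A.
    shift_up = 1.0 - p_million_prior
    print(shift_up)              # ~0.05, a small nudge, since this was already expected

    # Case 2: I observe myself taking both boxes.
    shift_down = p_million_prior - 0.0
    print(shift_down)            # 0.95, i.e. 19 times the size of the other shift (0.95 / 0.05)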
Hope that made sense.