Does it occur to anyone else that the fable is not a warning against doing favors in general but against siding with "outsiders" against "insiders"? When the farmer protects the venomous snake from the people trying to kill it, from a human perspective he's doing a bad thing. When the heron recommends white fowl as a medicine, even if he were not himself to become a meal, he's not doing the bird community any favors. And the farmer's wife, in letting the heron go, is depriving her husband of vital medicine.
Best thought-out utopia ever:
In the Big Rock Candy Mountains, all the cops have wooden legs
And the bulldogs all have rubber teeth and the hens lay soft-boiled eggs
The farmer's trees are full of fruit and the barns are full of hay
Oh I'm bound to go where there ain't no snow
Where the rain don't fall, the wind don't blow
In the Big Rock Candy Mountains

In the Big Rock Candy Mountains, you never change your socks
And little streams of alcohol come a-trickling down the rocks
The brakemen have to tip their hats and the railroad bulls are blind
There's a lake of stew and of whiskey too
And you can paddle all around 'em in a big canoe
In the Big Rock Candy Mountains

In the Big Rock Candy Mountains the jails are made of tin,
And you can walk right out again as soon as you are in
There ain't no short-handled shovels, no axes, saws or picks,
I'm a-goin' to stay where you sleep all day
Where they hung the jerk that invented work
In the Big Rock Candy Mountains
I think the word "kill" is being grossly misused here. It's one thing to say you have no right to kill a person, something very different to say that you have a responsibility to keep a person alive.
Keep posting. Once a week, even once a month, is better than never. If you can get more authors who provide high-quality content, great, but if you can't, that's okay; don't worry about quantity.
To be precise, in every case where the environment only cares about your actions and not what algorithm you use to produce them, any algorithm that can be improved by randomization can always be improved further by derandomization.
Isn't this trivially true? Isn't the most (time) efficient algorithm always a giant lookup table?
Consider this scenario:
There are a large number of agents independently working on the same problem (for example, trying to find a string that hash-collides with some given string), but they cannot communicate in any way, they don't have any identifying information about each other, and they don't know how many other agents are working on the problem (they aren't even sure there are any). It seems to me that each agent should decide at random where to start searching, not to fool each other but to avoid pointlessly duplicating each other's work.
Are you sure there is always something better than randomness?
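To make the strategy concrete, here is a minimal sketch (my own illustration, not anything from the post): each agent, unable to coordinate, picks a random starting offset and scans forward from there, so independent agents are unlikely to duplicate each other's work. A weakened "matching prefix" criterion stands in for a genuine collision with the given string.

```python
import hashlib
import random

TARGET_PREFIX = "0000"  # hypothetical stand-in for "hash-collides with the given string"

def agent_search(max_tries=1_000_000):
    """One agent's uncoordinated search: start at a random offset, scan forward."""
    start = random.getrandbits(64)  # random starting point, chosen independently by each agent
    for i in range(max_tries):
        candidate = str(start + i)
        digest = hashlib.sha256(candidate.encode()).hexdigest()
        if digest.startswith(TARGET_PREFIX):  # "good enough" match for this sketch
            return candidate, digest
    return None  # give up after max_tries; some other agent may get lucky elsewhere

print(agent_search())
```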
I think it's worth mentioning that Kasparov will have a harder time accurately predicting your moves than you will have predicting his. Each of you knows that Kasparov will win, but this will much more likely be due to a blunder on your part than a brilliancy on his. He may well reason, "sooner or later this patzer is going to hang a piece", but he will have no way of knowing when.
Here's what I find difficult to understand from an evolutionary perspective: why do we have a sense that we ought to do what is right as opposed to what society wants us to do? Why are we even capable of making this distinction?
Cat Dancer,
The frequentist answer of 1/3 is effectively making the implicit assumption that the parent would have said "at least one boy" either if both were boys or if there were one of each, and "at least one girl" if both were girls. Eliezer2008's 1/2 answer effectively assumes that the parent would have said "at least one boy" if both were boys, "at least one girl" if both were girls, and either with equal probability if there were one of each. "No alternative" assumes the parent is constrained to (truthfully) say either "at least one boy" or "at least one girl", an assumption that strikes me as being bizarre.
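To make the difference between the two assumption sets concrete, here is a rough Monte Carlo sketch (my own, not from the thread). Under the first announcement rule the fraction of "said at least one boy" families with two boys comes out near 1/3; under the second it comes out near 1/2.

```python
import random

def simulate(n=200_000):
    rule1 = [0, 0]  # [times parent says "at least one boy", times both are boys] under rule 1
    rule2 = [0, 0]  # the same counts under rule 2
    for _ in range(n):
        kids = (random.choice("BG"), random.choice("BG"))
        both_boys = kids == ("B", "B")
        # Rule 1 (the implicit frequentist assumption): say "at least one boy"
        # whenever there is at least one boy.
        if "B" in kids:
            rule1[0] += 1
            rule1[1] += both_boys
        # Rule 2: say "boy" if both boys, "girl" if both girls,
        # and either with equal probability if there is one of each.
        said_boy = both_boys or ("B" in kids and "G" in kids and random.random() < 0.5)
        if said_boy:
            rule2[0] += 1
            rule2[1] += both_boys
    return rule1[1] / rule1[0], rule2[1] / rule2[0]

print(simulate())  # roughly (0.333, 0.5)
```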
Will Pearson, you could not be more wrong. Winning money at games of chance is precisely what probability theory was designed for.
Thank you for a correct statement of the problem which indeed gives the 1/3 answer. Here's the problem I have with the malformed version: I agree that it's reasonable to assume that if the children were a boy and a girl it is equally likely that the parent would say "at least one is a boy" as "at least one is a girl". But I guess you're assuming the parent would say "at least one boy" if both were boys, "at least one girl" if both were girls, and either "at least one boy" or "at least one girl" with equal probability in the one of each case.
That's the simplest set of assumptions consistent with the problem. But the quote itself is inconsistent with the normal rules of social interaction. Saying "at least one is a boy" takes more words to convey less information than saying "both boys" or "one of each". I think it's perfectly reasonable to draw some inference from this violation of normal social rules, although it is not clear to me what inference should be drawn.
I don't think you've given enough information to make a reasonable choice. If the results of all 20 experiments are consistent with both theories, but the second theory would not have been formulated without the data from the second set of experiments, then it stands to reason that the second theory makes more precise predictions.
If the theories are equally complex and the second makes more precise predictions, then it appears to be a better theory. If the second theory contains a bunch of ad hoc parameters to improve the fit, then it's likely a worse theory.
But of course the original question does not say that the second theory makes more precise predictions, nor that it would not have been formulated without the second set of experiments.
And are you really "exploiting" an "irrational" opponent, if the party "exploited" ends up better off? Wouldn't you end up wishing you were stupider, so you could be exploited - wishing to be unilaterally stupider, regardless of the other party's intelligence? Hence the phrase "regret of rationality"...
Eliezer, you are putting words in your opponents' mouths, then criticizing their terminology.
"Rationality" is, I think, a well-defined term in game theory; it doesn't mean the same thing as "smart". It is trivial to construct scenarios in which being known to be "rational" in the game-theory sense is harmful, but in all such cases it is being known to be rational that is harmful, not rationality itself.
Personally, I pretty much always have checked baggage, and I can always make it to baggage claim before my luggage does, so I don't really care about saving time getting off the plane. If I'm in a window seat I usually let the people behind me get off first, but if I'm in an aisle seat I don't want to block in the person in the window seat.
Which moral system the human race uses is relative, arbitrary, and meaningless, just as there's no reason for the pebble sorters to like prime numbers instead of composite numbers, perfect numbers, or even numbers
But that's clearly not true, except in the sense that it's "arbitrary" to prefer life over death. It's a pretty safe generalization that actions which are considered to be immoral are those which are considered to be likely to cause harm to others.
But which others matter, and how much, is an open question. Some would suggest that all humans matter equally and that only humans matter, but I don't buy it, and I don't think many others do either. For example, I (and I think everyone I know) would agree that we should make at least some effort to avoid causing suffering to animals, but that it would be going way too far to treat a rat or a pig as equally important as a human. I understand that there are people out there who think it's perfectly appropriate to treat a pig as nothing but a machine for turning corn into meat, and others who think we ought to consider a pig every bit the moral equal of a human being, and I acknowledge that either position is better defined and more internally consistent than my own. I can't see anything "wrong" with either extreme position, and I see no reason to believe anyone could convince the others of the "rightness" of his position, even in principle.
It's strange that these pebblesorters can be convinced by "a heap of 103 pebbles and a heap of 19 pebbles side-by-side" that 1957 is incorrect, yet don't understand that this is because 19 * 103 = 1957. Admittedly I didn't notice this myself on first reading, but I wasn't looking for a pattern.
I don't think your analogy holds up. Your pebblesorters all agree that prime numbered piles are correct and composite ones incorrect, yet are unreflective enough not to realize that's how they are making the distinction and bad enough mathematicians that they can't reliably tell whether or not large numbers are prime. If only they were smarter, all their disagreements would go away. The question of why prime piles are correct, or why piles should be made at all, would be forever unanswerable, but it wouldn't matter much.
I think with human beings the moral disagreements are fundamental. There is no equivalent of a universal belief that primality = goodness. It's not just that we make calculational errors (although of course we do). It's not just that we aren't consciously aware of the fundamental criteria by which we as individuals evaluate things as morally "good" or "bad" (although of course we aren't). Something like a universal agreement as to what these fundamental criteria are just isn't there. Not consciously, not unconsciously, not waiting to emerge, just not.
At least, I don't think it is.
I think everything you say in this post is correct. But there's nothing like a universal agreement as to what is "good", and although our ideas as to what is good will change over time, I see no reason to believe that they will converge.
There's a big difference between saying "morality is the product of human minds" and saying "morality is purely arbitrary". Similarly, there's a big difference between saying "there are objective reasons why we make the moral judgments we do" and "all moral questions have objective answers which in no way depend on human minds".
Life is not a zero sum game. I think nearly everyone would agree that it would be advantageous to nearly everyone if one could somehow guarantee that neither one's self nor one's loved ones would be killed at the cost of forgoing the ability to kill one's enemies. I think this fact, not repeated arbitrary assertion, is the basis for the nearly universal belief that "murder is wrong". I think the fact that, in many societies, refraining from killing those outside one's own tribe does nothing to prevent those outside the tribe from killing one's self or one's loved ones, and not arbitrary bigotry, is the reason that in those societies killing those outside one's tribe does not count as murder.
I think it's probably useful to taboo the word "should" for this discussion. I think when people say you "should" do X rather than Y it means something like "experience indicates X is more likely to lead to a good outcome than Y". People tend to have rule-based rather than consequence-based moral systems because the full consequences of one's actions are unforeseeable. A rule like "one shouldn't lie" comes about because experience has shown that lying often has negative consequences for the speaker and listener and possibly others as well, although the particular consequences of a particular lie may be unforeseeable.
I don't see how there can be agreement as to moral principles unless there is first a reasonably good agreement as to what constitutes good and bad final states.
Slightly OT for this thread: there should always be a prominent link on the right to the open thread. As things are, it gets heavy usage the first couple days of the month, then falls off the bottom of the page before anyone can read most of the comments. Look, it's gone again already!
I know I've said this before, but I think it was on the open thread and it fell off the bottom of the page before anyone read it.
Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"?
These really are different statements. "I am entitled to fraction x of the pie" means more or less the same as "a fair judge would assign me fraction x of the pie".
But a fair judge just means the judge has no personal relationship with any of the disputing parties and makes his decision based on some rational process, not arbitrarily. It isn't necessarily true that there's a unique solution a fair judge would decide upon. One could say that whoever saw it first or touched it first is entitled to the whole pie, or that it should be divided strictly equally, or that it should be divided on a need or merit basis, or he could even go for the gods-must-be-crazy/idiocy-of-Solomon solution and say it's better that the pie be destroyed than allowed to exist as a source of dissent. In my (admittedly spotty) knowledge of anthropology, in most traditional pie-gathering societies, if three members of a tribe found a particularly large and choice pie they would be expected to share it with the rest of the tribe, but they would have a great deal of discretion as to how the pie was divided; they'd keep most of it for themselves and their allies.
This is not to say that morality is nothing but arbitrary social convention. Some sets of rules will lead to outcomes that nearly everyone would agree are better than others. But there's no particular reason to believe that there could be rules that everyone will agree on, particularly not if they have to agree on those rules after the fact.
I see no reason to believe there is such a thing as an objective definition of "fair" in this case. The idea that an equal division is "fair" is based on the assumption that none of the three has a good argument as to why he should receive more than either of the others. If one has a reasonable argument as to why he should receive more, the fairness argument breaks down. In fact, none of the three really has a good argument as to why he is entitled to any of it, and I can't see why it would be wrong for the first one to grab it to claim the whole pie under "right of capture".
what's the standard reply to someone who says, "Friendly to who?" or "So you get to decide what's Friendly"?
This is an important question. I don't believe there is such a thing as an objective definition of friendliness, and I doubt that "reasonable" people can come to an agreement as to what friendliness means. But I'm eager to be proven wrong; keep writing.
Well, I remember wondering as a graduate student how one was supposed to go about deciding what problems to work on, and not coming up with a good answer. A fellow student suggested that your project is worth working on if you can get it funded, but I think he was kidding. Or maybe not.
Most experimentalists really aren't in the business of supporting or refuting hypotheses as such. It's more a matter of making a measurement, and yes they will be comparing their results to theoretical predictions, but ideally experimentalists should be disinterested in the result, that is, they care about making as accurate a measurement as possible but don't have any a priori preference of one value over another.
There's a way you could make the heat=motion concept much clearer to Carnot. When one studies kinematics, one generally makes the approximation that macroscopic bodies are rigid, and the motions of the body refer to center of mass motion, or perhaps rotation about some axis. If you explain that "heat" refers to the motion of the constituent particles relative to each other, I think a scientist of Carnot's day would understand the idea pretty quickly.
I think this sort of thing might be what people mean when they talk about a "bridging theory".
The essential idea behind reductionism, that if you have reliable rules for how the pieces behave then in principle you can apply them to determine how the whole behaves, has to be true. To say otherwise is to argue that the airplane can be flying while all its constituent pieces are still on the ground.
But if you can't do a calculation in practice, does it matter whether or not it would give you the right answer if you could?
It would certainly facilitate communication, though, if people could agree on what words mean rather than having personal definitions. No doubt it's unrealistic to expect everyone to agree on precisely where the boundary between yellow and orange lies, but tigers aren't even a yellowish orange.
For some reason this post reminds me of the Buddhist parable "asceticism now, nymphs later".
I don't think it's all that uncommon to begin cultivating an art for some specific purpose, proceed to cultivate it largely for its own sake, and eventually to abandon the original purpose.
It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It's not clear to me that this should be true.
The trouble with the "money pump" argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let's assume someone prefers 2B over 2A. It could be that if he were offered choice 1 "out of the blue" he would prefer 1A over 1B, yet if it were announced in advance that he would have a 2/3 chance of getting nothing and a 1/3 chance of being offered choice 1, he would decide beforehand that 1B is the better choice, and he would stick with that choice even if allowed to switch. This may seem odd, but I don't see why it's logically inconsistent.
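To spell out where that assumption does the work, assuming the usual Allais setup in which choices 2A and 2B are just 1A and 1B diluted by a 2/3 chance of getting nothing (and normalizing the utility of getting nothing to zero), the expected-utility bookkeeping is:

\[
EU(2A) = \tfrac{1}{3}\,EU(1A), \qquad EU(2B) = \tfrac{1}{3}\,EU(1B),
\]

so under expected utility $2B \succ 2A$ if and only if $1B \succ 1A$. That factor-of-$\tfrac{1}{3}$ cancellation is exactly where linearity in probability enters; without it, preferring 1A and 2B together is not obviously inconsistent.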
Martin Gardner has a chapter on these "look-see" proofs in Knotted Doughnuts and Other Mathematical Entertainments.
Ni no Tachi figured out how to use the hammer, but Bouzo only sold them without understanding their value.
"A bird in the hand is worth what you can get for it." --Ambrose Bierce
Fiction is fiction, but it seems to me that if a student objects to wearing silly clothes and his master responds by ordering him to wear yet sillier clothes, it's a lot more plausible that the student will conclude his master is a quack and drop out than that he'll decide to extend his master's teaching by taking silly clothes to a whole new level.
Maybe the whole point of this exercise is to remind us that one can't come to reliable conclusions from fictional evidence? If so, well, maybe I haven't learned anything... but at least I've learned I haven't learned anything.
"Since you are so concerned about the interactions of clothing with probability theory," Ougi said, "it should not surprise you that you must wear a special hat to understand."
But isn't this almost the exact opposite of what the student was saying? Questioning the robes indicates to me that the student felt there was not any interaction between learning probability theory and clothing, and that therefore it served some other purpose, presumably differentiating between an in group and an out group.
Or am I just nuts for trying to argue with you about the internal thoughts of your own fictional characters?
Or at least invent a cultometer, so we can check our cultemperature?
It's a bad sign if we develop identifiable cliques. Because of general attitudes, it stands to reason that agreements and disagreements won't be randomly distributed, but ideally we shouldn't "agree" or "disagree" with others because we agreed or disagreed with them in the past. It probably wouldn't be too hard to develop some sort of voting software that measured cliquishness, if there's a demand for it.
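As a rough sketch of what such a measure might look like (my own illustration, with a hypothetical data format): compute pairwise agreement rates from voting records and look for pairs whose agreement stays far from chance.

```python
from itertools import combinations

# Hypothetical vote data: votes[user][item] = +1 (upvote/agree) or -1 (downvote/disagree)
votes = {
    "user_a": {"p1": 1, "p2": -1, "p3": 1, "p4": 1},
    "user_b": {"p1": 1, "p2": -1, "p3": 1, "p4": 1},
    "user_c": {"p1": -1, "p2": 1, "p3": -1, "p4": 1},
}

def agreement(u, v):
    """Fraction of commonly-voted items on which u and v cast the same vote."""
    shared = votes[u].keys() & votes[v].keys()
    if not shared:
        return None
    return sum(votes[u][i] == votes[v][i] for i in shared) / len(shared)

# Persistent agreement near 1.0 (or 0.0) across many items would be a crude clique signal.
for u, v in combinations(votes, 2):
    print(u, v, agreement(u, v))
```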
Of course, the real disaster would be if people start saying things like "Eliezer is always right". Nobody is always right.
If you met John Barnes and he argued that he's doing the right thing, would it be appropriate to sock him in the jaw?
No, because the statement that "the only appropriate response to some arguments is a good swift sock in the jaw" is not itself one of the arguments whose appropriate response is a sock in the jaw. There may or may not be any such arguments, but socking him in the jaw is admitting that he is fundamentally right. Of course, it might be appropriate to sock him for some other reason :-)
One can argue that Buzz Aldrin had a special right to sock the guy that you or I would not. To me, claiming the moon landing was faked is just an absurd statement. Saying it in front of Buzz is unjustifiably calling the man a fraud and a liar. Buzz shouldn't have to put up with that kind of crap.
It seems to me that normative statements like "let us go and serve other gods" aren't really something you can have a rational debate about. The question comes down to "which do you love more, your god or me", and the answer should always be "God"... according to God.
Similarly, one could have a rational debate about whether a command economy will outperform a market economy or vice versa (although the empirical evidence seems pretty one-sided), but a statement like "all people ought to be socially and economically equal" seems like something that just has to be accepted or rejected.
Maybe I should have said something more like "conceivably could be" rather than "is likely to be". Certainly I didn't mean to imply that every firm in an industry will immediately copy somebody else's good idea. There isn't even a guarantee that a good idea will be recognized as one in the company in which it originates.
But the point is that ideas can be copied without anything like biological reproduction taking place. Why they so seldom are is an interesting question; I've added Deming to my "to read" list.
What corporations do is very different from biological evolution, but if a corporation develops a successful idea then it is likely to be copied by other corporations without anything like biological reproduction entering the picture.
I have a question about this picture.
Imagine you have something like a chess-playing program. It's got some sort of basic position-evaluation function, and then uses some sort of lookahead to assign values to the instrumental nodes based on the terminal nodes you anticipate along the path. But unless the game actually ends at the terminal node, it's only "terminal" in the sense that that's where you choose to stop calculating. There's nothing really special about those nodes.
Human beings are different from the chess program in that for us the game never ends; there are no "true" terminal nodes. As you point out, we care what happens after we are dead. So wouldn't it be true that in a sense there's nothing but instrumental values, and that a "terminal value" just marks a point at which we've chosen to stop calculating, rather than saying something about the situation itself?
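A minimal sketch of the point about "terminal" nodes, in the chess-program picture (the helper functions are placeholder interfaces I'm assuming, not any particular engine's API): in a depth-limited search, a node is "terminal" simply because that's where we stop calculating and fall back on the static evaluation.

```python
def minimax(position, depth, evaluate, legal_moves, apply_move, maximizing=True):
    """Depth-limited minimax. `evaluate`, `legal_moves`, and `apply_move`
    are assumed to be supplied by the caller (placeholder interfaces)."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        # "Terminal" here just means we chose to stop looking ahead
        # (or the game actually ended); the value comes from the static
        # evaluation function, assigned by fiat at the cutoff.
        return evaluate(position)
    values = [minimax(apply_move(position, m), depth - 1,
                      evaluate, legal_moves, apply_move, not maximizing)
              for m in moves]
    return max(values) if maximizing else min(values)
```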