The Contrarian Status Catch-22
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-19T22:40:51.201Z · LW · GW · Legacy · 102 comments
It used to puzzle me that Scott Aaronson still hadn't come to terms with the obvious absurdity of attempts to make quantum mechanics yield a single world.
I should have realized what was going on when I read Scott's blog post "The bullet-swallowers" in which Scott compares many-worlds to libertarianism. But light didn't dawn until my recent diavlog with Scott, where, at 50 minutes and 20 seconds, Scott says:
"What you've forced me to realize, Eliezer, and I thank you for this: What I'm uncomfortable with is not the many-worlds interpretation itself, it's the air of satisfaction that often comes with it."
-- Scott Aaronson, 50:20 in our Bloggingheads dialogue.
It doesn't show on my face (I need to learn to reveal my expressions more, people complain that I'm eerily motionless during these diavlogs) but at this point I'm thinking, Didn't Scott just outright concede the argument? (He didn't; I checked.) I mean, to me this sounds an awful lot like:
Sure, many-worlds is the simplest explanation that fits the facts, but I don't like the people who believe it.
And I strongly suspect that a lot of people out there who would refuse to identify themselves as "atheists" would say almost exactly the same thing:
What I'm uncomfortable with isn't the idea of a god-free physical universe, it's the air of satisfaction that atheists give off.
If you're a regular reader of Robin Hanson, you might essay a Hansonian explanation as follows:
Although the actual state of evidence favors many-worlds (atheism), I don't want to affiliate with other people who say so. They act all brash, arrogant, and offensive, and tend to believe and advocate other odd ideas like libertarianism. If I believed in many-worlds (atheism), that would make me part of this low-prestige group.
Or in simpler psychology:
I don't feel like I belong with the group that believes in many-worlds (atheism).
I think this might form a very general sort of status catch-22 for contrarian ideas.
When a correct contrarian idea comes along, it will have appealing qualities like simplicity and favorable evidence (in the case of factual beliefs) or high expected utility (in the case of policy proposals). When an appealing contrarian idea comes along, it will be initially supported by its appealing qualities, and opposed by the fact that it seems strange and unusual, or any other counterintuitive aspects it may have.
So initially, the group of people most likely to support the contrarian idea are the people who are - among other things - most likely to break with their herd in support of an idea that seems true or right.
These first supporters are likely to be the sort of people who - rather than being careful to speak of the new idea in the cautious tones prudent to supplicate the many high-status insiders who believe otherwise - just go around talking as if the idea had a very high probability, merely because it seems to them like the simplest explanation that fits the facts. "Arrogant", "brash", and "condescending" are some of the terms used to describe people with this poor political sense.
The people first to speak out in support of the new idea will be those less sensitive to conformity; those with more confidence in their sense of truth or rightness; those less afraid of speaking out.
And to the extent these are general character traits, such first supporters are also more likely to advocate other contrarian beliefs, like libertarianism or strident atheism or cryonics.
And once that happens, the only people who'll be willing to believe the idea will be those willing to tar themselves by affiliating with a group of arrogant nonconformists - on top of everything else!
tl;dr: When a counterintuitive new idea comes along, the first people to support it will be contrarians, and so the new idea will become associated with contrarian traits and beliefs, and people will become even more reluctant to believe it because that would affiliate them with low-prestige people/traits/beliefs.
A further remark on "airs of satisfaction": Talk about how we don't understand the Born Probabilities and there are still open questions in QM, and hence we can't accept the no-worldeaters interpretation, sounds a good deal like the criticism given to atheists who go around advocating the no-God interpretation. "But there's so much we don't know about the universe! Why are you so self-satisfied with your disbelief in God?" There's plenty we don't understand about the universe, but that doesn't mean that future discoveries are likely to reveal Jehovah any more than they're likely to reveal a collapse postulate.
Furthermore, atheists are more likely than priests to hear "But we don't know everything about the universe" or "What's with this air of satisfaction?" Similarly, it looks to me like you can get away with speaking out strongly in favor of collapse postulates and against many-worlds, and the same people won't call you on an "air of satisfaction" or say "but what about the open questions in quantum mechanics?"
This is why I think that what we have here is just a sense of someone being too confident in an unusual belief given their assigned social status, rather than a genuine sense that we can't be too confident in any statement whatever. The instinctive status hierarchy treats factual beliefs in pretty much the same way as policy proposals. Just as you need to be extremely high-status to go off and say on your own that the tribe should do something unusual, there's a similar dissonance when a low-status person goes off on their own and tells the tribe to believe something unusual, without careful compromises with other factions. It shows the one has no sense of their appropriate status in the hierarchy, and isn't sensitive to other factions' interests.
The pure, uncompromising rejection merited by hypotheses like Jehovah or collapse postulates, socially appears as a refusal to make compromises with the majority, or a lack of sufficient humility when contradicting high-prestige people. (Also priests have higher social status to start with; it's understood that their place is to say and advocate these various things; and priests are better at faking humility while going on doing whatever they were going to do anyway.) The Copenhagen interpretation of QM - however ridiculous - is recognized as a conventional non-strange belief, so no one's going to call you insufficiently humble for advocating it. That would mark them as the contrarians.
Comments sorted by top scores.
comment by Andy_McKenzie · 2009-12-20T05:58:46.264Z · LW(p) · GW(p)
Right. This is why I think it's important, and underrated, for contrarians who actually believe in the potential efficacy of their beliefs not to seem like contrarians. If you truly believe that your ideas are underrepresented, then you will promote them much better by appearing generally "normal" and passing off the underrepresented idea as a fairly typical part of your ostensibly coherent worldview. I will admit that this is more challenging.
↑ comment by MichaelVassar · 2009-12-21T07:53:12.587Z · LW(p) · GW(p)
Great debate starter.
One quibble: I don't think that it's even ostensibly normal to have, or to aspire to have, a coherent worldview.
↑ comment by CarlShulman · 2009-12-20T07:12:41.431Z · LW(p) · GW(p)
Strong agreement.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T08:37:01.916Z · LW(p) · GW(p)
Couldn't do it if I tried for a hundred years. Not disagreeing, though.
↑ comment by Tyrrell_McAllister · 2009-12-20T17:32:08.448Z · LW(p) · GW(p)
Couldn't do it if I tried for a hundred years. Not disagreeing, though.
Actually, I'd say that you do a much better job at this than many contrarians on the Internet, MW notwithstanding. At least, you have the "passing off the underrepresented idea as a fairly typical part of your ostensibly coherent worldview" part down.
↑ comment by Steve_Rayhawk · 2009-12-20T23:36:12.158Z · LW(p) · GW(p)
If you truly believe that your ideas are underrepresented, then you will promote them much better by appearing generally "normal" and passing off the underrepresented idea as a fairly typical part of your ostensibly coherent worldview.
Couldn't do it if I tried for a hundred years.
Something interesting may be going on here.
Consider the question, "Could you appear generally 'normal' and pass off the underrepresented idea as a fairly typical part of your ostensibly coherent worldview, if the fate of the world depended on it?"
I could imagine the same person giving two different answers, depending on how they understood the question to be meant.
One answer would be, "Yes: if I knew the fate of the world depended on it, then I would appear normal".
The other would be, "No, because the fate of the world could not depend on my appearing normal; in fact the fate of the world would depend on my not appearing normal".
(A third possible answer would be about one form of akrasia: "In that case I would believe that the fate of the world depended on my appearing normal, but I wouldn't alieve it enough to be able to appear normal.")
This seems psychologically parallel to the situation of the question about "Would you kill babies if it was intrinsically the right thing to do?". One answer is, "Yes: if I knew the intrinsically right thing to do was to kill babies, then I would kill babies". But the other is, "No, because killing babies would not be the intrinsically right thing to do; in fact the intrinsically right thing to do would be to not kill babies". (The third answer would be, "In that case I would believe that killing babies was intrinsically the right thing to do, but I wouldn't alieve it enough to be able to kill babies".)
Maybe the important question is: what would the meaning be of the imagined statement, "the fate of the world would depend on my not appearing normal"?
For the statement "the intrinsically right thing to do would be to not kill babies", one possible meaning would be conceptual and epistemic: "I have thought about possible ethical and meta-ethical positions, and it was impossible for killing babies to be the right thing to do." Another possible meaning would be non-epistemic and intuitive: "If someone has an argument that killing babies is the intrinsically right thing to do, then from priors, I and others would both estimate that the chances are too high that they had made a mistake in ethics or meta-ethics. If I were to agree that they could be right, that would be a concession that would be both too harmful directly and too costly to me socially."
Similarly, for the statement "the fate of the world would depend on my not appearing normal", one possible meaning would be conceptual and epistemic: "I have thought about my abilities, and I could do at most two of 'value appearing normal', 'be psychologically able to privately reason to truthful beliefs', and 'have enough mental energy left over for other work', but it was impossible for me to do all three." Another meaning would be non-epistemic and intuitive: "If someone has an argument that it would be good on net to appear normal and not contrarian, then from priors, I and others would both estimate that the chances are high that they had made a motivated mistake about how hard it is to have good epistemology. If I were to agree that they could be right, that would be a concession that would be both too harmful for me directly, and too costly socially as an understood implicit endorsement of that kind of motivated mistake."
↑ comment by matt · 2009-12-20T12:07:03.580Z · LW(p) · GW(p)
Use the try harder, Luke.
↑ comment by wedrifid · 2009-12-20T15:25:35.051Z · LW(p) · GW(p)
Use the try harder, Luke.
It's a good link. But I would strongly recommend that Eliezer not try harder to do this. Some considerations:
- Eliezer is a terrible politician. Ok, he can get by on a high IQ and plenty of energy. But if you are considering comparative advantage Eliezer should no more devote himself to political advocacy than he should create himself a car out of iron ore.
- Apart from details of presentation, the important thing is to be conformist in all areas except the one in which you make your move. This is a significant limitation on what you can achieve, particularly when what you are attempting to achieve involves interacting with physical reality and not just social reality.
- The Sesame Street approach to belief (one of these things is not like the other ones) is a status optimisation, not necessarily an optimal way to increase the influence of an idea. It involves spending years defending the positions of high status individuals and carefully avoiding all contrarian positions until you have the prestige required to make a play for your own territory. Then, you select the work of (inevitably lower status) innovators in a suitable area, present the ideas yourself, and use your prestige to ensure that your terminology becomes adopted and your papers most frequently cited. The innovators can then choose between marginalization, supplication, or moving to a new field. If any innovator happens to come up with ideas that challenge your position, you dismiss them as arrogant and smug and award status to others who, by way of supplication, do likewise.
Does this help make a contrarian idea mainstream? Perhaps. But maybe the market for the status exploitation of ideas is efficient and your participation makes no particular difference. Either way, I consider gaining power in this manner useful for achieving Eliezer's aims only in the same way it would be useful for him to gain power through selling stationery or conquering a small nation. Possibly instrumentally useful but far from comparatively advantageous.
↑ comment by Will_Newsome · 2010-09-19T08:40:48.814Z · LW(p) · GW(p)
This only works well if you're really high status in the first place. Therefore someone who reads Andy's comment should try to bootstrap the memetic fitness of their idea via adoption by progressively higher status people until they snag a Dawkins. The way to do this isn't obviously to try to appear especially high status oneself; I suspect a strong method would be to appear just high enough status to spam as many moderate-status people as possible with reasonably optimized memes and rely on a halfway decent infection rate. The disease would thus become endemic and hopefully reach fixation. One way to reach that stage would be to become high status oneself, but I'm guessing it'd be more effective to predict who will soon become high status and talk to them while they're still approachable.
(The above may be obvious but it was useful for me to think through such a memetic strategy explicitly.)
↑ comment by wedrifid · 2010-09-19T08:45:56.806Z · LW(p) · GW(p)
One way to reach that stage would be to become high status oneself, but I'm guessing it'd be more effective to predict who will soon become high status and talk to them while they're still approachable.
This ability is one that is rather useful for the goal of gaining status, too. (As well as being reflected in the mating strategy of young females.)
↑ comment by Kutta · 2009-12-20T09:34:17.223Z · LW(p) · GW(p)
But, I think, you'd better be vocal, visible and brash to some extent, or you risk science advancing only by funerals. If someone believes that replacing status quo beliefs with a correct contrarian belief is very important, then IMO her optimal strategy will be somewhere between total crackpottery and total passivity.
comment by RobinHanson · 2009-12-20T12:16:54.702Z · LW(p) · GW(p)
Most smug contrarians have many contrarian beliefs, not just one. If we average over all the various beliefs of smug contrarians, what level of accuracy will we find? (Could we find data on smug contrarians from a century ago?) I suspect accuracy will be far too low to justify such smugness. Even if we limit our attention to high IQ smug contrarians, I suspect accuracy will also be low. Yes, the typical objection to smugness is probably to the cockiness of asserting higher status than one has been granted, but the typical reason people are smugly contrarian is also probably wanting to defy current status rankings. You can't assume that because they are hypocrites and disagree with you, you are not a hypocrite.
↑ comment by MichaelVassar · 2009-12-21T07:44:40.930Z · LW(p) · GW(p)
Once again, I think that for smart (e.g. made good arguments, e.g. arguments that contemporaries who also made good arguments [recursive but sound] couldn't poke holes in) smug contrarians of a century ago this is simply wrong, though it needs a lot of cashing out in specific detailed claims.
↑ comment by Johnicholas · 2009-12-21T15:15:42.108Z · LW(p) · GW(p)
The most important point in this comment was buried towards the end.
"wanting to defy current status rankings" is an incentive; an incentive that the original article doesn't pay enough attention to.
Adding a rhetorical concession to the original article, something like: "As someone who is, academically, less-credentialed, upsetting the credentials hierarchy would be to my advantage. My subconscious may be twisting my beliefs in a self-serving direction. However [... and so on...]" might make the original article stronger.
↑ comment by wedrifid · 2009-12-21T16:05:31.847Z · LW(p) · GW(p)
Adding a rhetorical concession to the original article, something like: "As someone who is, academically, less-credentialed, upsetting the credentials hierarchy would be to my advantage. My subconscious may be twisting my beliefs in a self-serving direction. However [... and so on...]" might make the original article stronger.
That would be fake humility used for the purposes of supplication, not improving the strength of the article. The main use of such an addition would be to demonstrate Contrarian Status Catch 22b. If you are asked why you disagree with a high status person and your answer is not "I was being smug and arrogant and I must be wrong" then you are being smug and arrogant.
The effectiveness of this ploy explains why "How do you explain the fact that [the high-status person] disagrees with you?" is such a popular form of one-upmanship. It works even when there is a good answer (and usually even when the high-status person doesn't disagree).
↑ comment by Johnicholas · 2009-12-21T16:21:26.633Z · LW(p) · GW(p)
Yes, I agree. Rhetorical techniques are about persuasion, not truth. Possibly "stronger" should have been "more palatable" or "more acceptable to the non-choir".
↑ comment by wedrifid · 2009-12-21T23:08:22.039Z · LW(p) · GW(p)
or "more acceptable to the non-choir".
"More acceptable to the choir" would be more accurate. Around here the in-group signalling chorus is "other people are sycophants and I am better connected and less gullible". More generally, the choir signal is "I consider any comments that don't advocated deference to status unacceptable because I am high status myself and want the approval of those even higher". From this it is easy to extract Contrarian Status Catch 18.
↑ comment by Alexan · 2009-12-21T04:28:07.207Z · LW(p) · GW(p)
Most beliefs are wrong. We do not test medicine against nothing, but rather against a placebo. In this case we must ask ourselves whether or not holding contrarian beliefs gives a result better than comparable strategies.
The answer is yes. Let them be smug!
comment by paulfchristiano · 2010-12-24T10:53:20.647Z · LW(p) · GW(p)
Assuming collapse is quantitatively unlike assuming the existence of God. The collapse postulate is extremely unlikely a priori, in terms of the usual probabilities humans deal with, but the magnitude of its unlikeliness is nowhere near that of an intelligent creator. Collapse is easy to describe (if you are careful and use thermodynamic degrees of freedom rather than consciousness to check when it will happen) and consistent with all observations. Of course MWI is way simpler, so I would bet on it against overwhelming odds; so would Scott Aaronson. That said, we still have huge unexplained things (the Born probabilities; the coexistence of GR and quantum mechanics) whose a priori improbability is very large compared to the gap between many worlds and collapse. So it is conceivable that we could discover a formulation with collapse which accounts for the Born probabilities and which is simpler than any formulation without collapse. I would bet anything against this, but it is certainly more likely than discovering that the world is flat or that God exists.
Human experience with quantum mechanics has been confined to experiments with extremely low entanglement. The question we are asking is: do the laws of physics we have observed so far continue to hold up in the high entanglement limit? This is very much analogous to asking: do the laws of physics we have observed so far continue to hold up in the high energy limit? Even if you expect the answer to be yes, there is broad consensus among physicists that investigating the question is worthwhile. I also suspect that if you knew more physics you would be less confident in your position.
(I recently took a class from Scott Aaronson which very effectively caused me to stop being confused about quantum mechanics, and which made me understand precisely how obvious MWI is. I think that his pedagogy is one of the most powerful forces in the world right now for getting students to understand quantum mechanics and see that MWI is obvious. I think you are basically manufacturing disagreement, which you can do successfully mostly because Scott Aaronson expresses himself less precisely than you do.)
↑ comment by shokwave · 2010-12-24T11:28:46.080Z · LW(p) · GW(p)
Assuming collapse is quantitatively unlike assuming the existence of God.
Weird, because assuming the MWI is quantitatively like assuming atheism.
Your post seems to suggest that Scott Aaronson agrees with the MWI, but the blog post and Eliezer's post seem to suggest he doesn't - but your post also suggests that Eliezer's post is not evidence in this case. I am confused about what Scott actually believes.
↑ comment by paulfchristiano · 2010-12-24T18:31:58.938Z · LW(p) · GW(p)
MWI is qualitatively like atheism, but the weight of evidence on your side (as compared to the next fundamentally different alternative) is so dissimilar that you should have qualitatively different beliefs about what updates may occur in the future. I guess there is also a qualitative difference; there are very many hypotheses as simple or simpler than theism which are equally "consistent" with observation, while there are relatively few as simple or simpler than collapse. Again, this is not to say that MWI isn't overwhelmingly likely, but if we were to discover some evidence that its predictions stopped working in the high entanglement limit, some form of collapse is a reasonably likely departure from the simplest model. If we were to discover some evidence for, say, the intelligent design of life on Earth, a Christian god is still a spectacularly unlikely departure from the simplest model.
↑ comment by shokwave · 2010-12-25T01:44:04.041Z · LW(p) · GW(p)
I don't feel that the weight of the evidence is much different. For both atheism and MWI, I basically have Occam's Razor plus some stretching on part of the opposing theories. "The way the world is underneath our observations" is really hard to get any hard evidence on.
↑ comment by paulfchristiano · 2010-12-25T07:56:45.897Z · LW(p) · GW(p)
Occam's Razor doesn't just tell you which option is more likely---it tells you how much more likely it is. The hypothesis that God exists is way, way more complicated than the alternative (I would guess billions or trillions of bits more complicated). The hypothesis that collapse occurs is more complicated than the alternative, but the difference isn't in the same ballpark. Maybe you doubt that it is possible to concisely describe collapse, in which case we just have a factual disagreement.
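(To make "how much more likely" concrete: under the minimum-description-length reading this comment gestures at, each extra bit of description halves the prior. A minimal sketch, with bit counts invented purely for illustration:)

```python
# Solomonoff-style gloss: a hypothesis that needs delta_bits more bits
# to describe starts with prior odds of roughly 2**(-delta_bits) against
# it. The bit counts below are illustrative assumptions, not measurements.

def prior_odds_against(delta_bits: int) -> float:
    """Prior odds against the hypothesis that is delta_bits more complex."""
    return 2.0 ** (-delta_bits)

print(prior_odds_against(100))            # if collapse costs ~100 extra bits:
                                          # ~8e-31, tiny but conceivably
                                          # overcome by strong evidence
print(prior_odds_against(1_000_000_000))  # if theism costs ~1e9 extra bits:
                                          # underflows to 0.0; no realistic
                                          # evidence stream closes this gap
```

The comparison survives the made-up numbers: both priors are minuscule, but minuscule on wildly different scales.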
comment by Mitchell_Porter · 2009-12-20T08:48:10.802Z · LW(p) · GW(p)
The quantum interpretation debate is merely the illustrative starting point for this article, so perhaps it is boorish of me to focus on it. But...
the obvious absurdity of attempts to make quantum mechanics yield a single world.
Even if some form of many worlds is correct, this is going too far.
Eliezer, so far as I can see, in your life you have gone from a belief in objective-collapse theories, derived from Penrose, to a belief in many worlds, derived perhaps from your extropian peers; and you always pose the theoretical choice as a choice between "collapse" and "no-collapse". You do not seem to have ever seriously considered - for example - whether the wavefunction is not real at all, but simply a construct like a probability distribution.
There are at least two ways of attempting "to make quantum mechanics yield a single world" which are obviously not absurd: zigzag interpretations and quantum causal histories. Since these hyperbolic assertions of yours about the superiority and obviousness of many-worlds frequently show up in your discourses on rationality, you really need at some point to reexamine your thinking on this issue. Perhaps you have managed to draw useful lessons for yourself even while getting it wrong, or perhaps there are new lessons to be found in discovering how you got to this point, but you are getting it wrong.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T17:40:54.002Z · LW(p) · GW(p)
We have seemingly "fundamentally unpredictable" events at exactly the point where ordinary quantum mechanics predicts the world ought to split, and there's no way to have those events take place in a single global world without violating Special Relativity. Leaving aside other considerations such as having the laws of physical causality be local in the configuration space, the interpretation of the above evidence is obvious and there's simply no reason whatsoever at all to privilege the hypothesis of a single world. I call it for many-worlds. It's over.
↑ comment by Mitchell_Porter · 2009-12-21T02:49:47.927Z · LW(p) · GW(p)
Ordinary quantum mechanics does not predict that the world splits, no more than does ordinary probability theory.
The zigzag interpretations are entirely relativistic, since the essence of a zigzag interpretation is that you have ordinary space-time with local causality, but you have causal chains that run backwards as well as forwards in time, and a zigzag in time is what gives you unusual spacelike correlation.
A "quantum causal history" (see previous link) is something like a cellular automaton with no fixed grid structure, no universal time, and locally evolving Hilbert spaces which fuse and join.
These three ideas - many worlds, zigzag, QCH - all define research programs rather than completed theories. The latter two are single-world theories and they even approximate locality in spacetime (and not just "in configuration space"). You should think about them some time.
↑ comment by PhilGoetz · 2009-12-21T05:52:06.729Z · LW(p) · GW(p)
I haven't seen you take into account the relative costs of error of the two beliefs.
A few months ago, I asked:
Suppose Omega or one of its ilk says to you, "Here's a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."
How many cards do you draw?
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I'm uncomfortable about playing the lottery with the universe if it's the only one we've got.
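(A quick simulation makes the expected-utility claim concrete. This is a toy sketch under my own assumptions: the 40% of cards with neither symbol do nothing, and death sets utility to zero.)

```python
import random

def play(n_draws: int, utility: float = 1.0) -> float:
    """One run of Omega's game: each draw is a skull with probability 0.1
    (death; utility 0 in this toy model) or a star with probability 0.5
    (utility doubles); the remaining 40% of cards do nothing."""
    for _ in range(n_draws):
        r = random.random()
        if r < 0.1:
            return 0.0        # skull: dead
        if r < 0.6:
            utility *= 2.0    # star: doubled
    return utility

def mean_utility(n_draws: int, trials: int = 100_000) -> float:
    return sum(play(n_draws) for _ in range(trials)) / trials

# Each draw multiplies expected utility by 0.1*0 + 0.5*2 + 0.4*1 = 1.4,
# so a naive expected-utility maximizer never stops drawing, even though
# the probability of surviving n draws is only 0.9**n.
for n in (1, 5, 10, 20):
    print(n, round(mean_utility(n), 2))   # noisy estimates of 1.4**n
```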
If a rational, ethical one-worlds believer doesn't continue drawing cards as long as they can, in a situation where the many-worlds believer would, then we have an asymmetry in the cost of error. Building an FAI that believes in one world, when many worlds is true, causes (possibly very great) inefficiency and repression to delay the destruction of all life. Building an FAI that believes in many worlds, when one world is true, results in annihilating all life in short order. This large asymmetry is enough to compensate for a large asymmetry in probabilities.
(My gut instinct is that there is no asymmetry, and that having a lot of worlds shouldn't make you any more careless with any of them. But that's just my gut instinct.)
Also, I think that you can't, at present, both be rational about updating in response to the beliefs of others, and dismiss one-world theory as dead.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-21T17:18:28.685Z · LW(p) · GW(p)
Not only is "What do we believe?" a theoretically distinct question from "What do I do about it?", but by your logic we should also refuse to believe in spatially infinite universes and inflationary universes, since they also have lots of copies of us.
↑ comment by PhilGoetz · 2009-12-21T18:03:18.368Z · LW(p) · GW(p)
Not only is "What do we believe?" a theoretically distinct question from "What do I do about it?"
"What do we believe?" is a distinct question; and asking it is comitting an error of rationality. The limitations of our minds often force us to use "belief" as a heuristic; but we should remember that it is fundamentally an error, particularly when the consequences are large.
You don't do the expected-cost analysis when investigating a theory; you should do it before dismissing a theory. Because if someday you build an AI, and hardcode in the many-worlds assumption because many years before you dismissed the one-world hypothesis from your mind and have not considered it since, you will be committing a grave Bayesian error, with possibly disastrous consequences.
(My cost-of-error statements above are for you specifically. Most people aren't planning to build a singleton.)
↑ comment by benelliott · 2011-08-10T21:14:26.604Z · LW(p) · GW(p)
I can't speak for Eliezer, but if I were building a singleton I probably wouldn't hard-code my own particular scientific beliefs into it, and even if I did I certainly wouldn't program in any theory at 100% confidence.
↑ comment by wedrifid · 2009-12-21T06:04:45.920Z · LW(p) · GW(p)
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I'm uncomfortable about playing the lottery with the universe if it's the only one we've got.
Omega clearly has more than one universe up his sleeve. It doesn't take too many doublings of my utility function before a further double would require more entropy than is contained in this one. Just how many galaxies' worth of matter perfectly optimised for my benefit do I really need?
The problem here is that it is hard to imagine Omega actually being able to double utility. Doubling utility is hard. It really would be worth the risk of gambling indefinitely if Omega actually had the power to do what he promised. If it isn't worth the risk, then by definition you have your utility function wrong. In fact, if exactly half of the cards killed you and the other half doubled utility, it would still be worth gambling unless you assign exactly 0 utility to anything else in the universe in the case of your death.
↑ comment by Strange7 · 2010-12-24T12:23:46.547Z · LW(p) · GW(p)
It doesn't take too many doublings of my utility function before a further double would require more entropy than is contained in this one.
Omega knows you'll draw a skull before you get that many doublings.
↑ comment by wedrifid · 2010-12-24T13:26:13.516Z · LW(p) · GW(p)
Omega knows you'll draw a skull before you get that many doublings.
That would be a different problem. Either the participant is informed that the probability distribution in question has anthropic bias based on the gamemaster's limits or the gamemaster is not Omega-like.
↑ comment by shokwave · 2010-12-24T14:35:42.497Z · LW(p) · GW(p)
I think that someone who believes in many-worlds will keep drawing cards until they die.
You have to include the presumption that there is a quantum variable that conditions the skull card, and there is a question about whether a non-quantum event strongly conditioned on a quantum event counts for quantum immortality ... but assume Omega can do this.
The payoff, then, looks like it favors going to an arbitrarily high number given that quantum immortality is true. Honestly, my gut response is that I would go to either 3 draws, 9 draws, or 13 draws depending on how risk-averse I felt and how much utility I expected as my baseline (a twice-as-high utility before doubling lets me go one doubling less).
I think this says that my understanding of utility falls prey to diminishing returns when it shouldn't (partially a problem with utility itself), and that I don't really believe in quantum immortality - because I am choosing a response that is optimal for a non-quantum immortality scenario.
But in any reasonable situation where I encounter this scenario, my response is accurate: it takes into account my uncertainty about immortality (requires a few more things than just the MWI) and also accounts for me updating my beliefs about quantum immortality based on evidence from the bet. That any agent, even an arbitrarily powerful one, is willing to bet an arbitrarily large number of doublings of my utility against quantum immortality is phenomenal evidence against it. Phenomenal. Utility is so complicated, and doubling just gets insane so quickly.
↑ comment by wedrifid · 2010-12-24T15:18:59.170Z · LW(p) · GW(p)
You have to include the presumption that there is a quantum variable that conditions the skull card, and there is a question about whether a non-quantum event strongly conditioned on a quantum event counts for quantum immortality ... but assume Omega can do this.
Neither the problem itself nor this response need make any mention of quantum immortality. Given an understanding of many-worlds, 'belief in quantum immortality' is just a statement about preferences given a certain type of scenario. There isn't some kind of special phenomenon involved; it's just a matter of choosing what sort of preferences you have over future branches.
That any agent, even an arbitrarily powerful one, is willing to bet an arbitrarily large number of doublings of my utility against quantum immortality is phenomenal evidence against it. Phenomenal.
No, no, no! Apart from being completely capricious with essentially arbitrary motivations, they aren't betting against quantum immortality. They are betting a chance of killing someone against a chance of making ridiculous changes to the universe. QI just doesn't play a part in their payoffs at all.
↑ comment by timtyler · 2009-12-20T19:28:59.009Z · LW(p) · GW(p)
Your reference for "zigzag interpretations" is your own blog comment?!?
↑ comment by Mitchell_Porter · 2009-12-21T02:56:34.290Z · LW(p) · GW(p)
I particularly wanted people to see a zigzag explanation of quantum computing. Anyway, see John Cramer, Mark Hadley, Huw Price, Wheeler-Feynman for examples. Like many worlds, the idea comes in various flavors.
↑ comment by Psy-Kosh · 2009-12-20T15:14:29.277Z · LW(p) · GW(p)
Hrm... looking at QCH... it doesn't seem to even claim to be a single world model in the first place. (just began reading it, though)
Actually, for that matter, I think I'm misunderstanding the notion of inextendibility. Wouldn't local finiteness ensure that every non-finite directed path is future-inextendible?
↑ comment by Mitchell_Porter · 2009-12-21T03:16:07.399Z · LW(p) · GW(p)
QCH is arguably a formalism rather than a philosophy. I think you could actually repurpose the QCH formalism for many worlds, by way of "consistent histories". A "consistent history" is coarse-grained by classical standards; a space-time in which properties are specified only here and there, rather than everywhere. If you added a causal structure connecting those specified patches, you'd get something like a QCH. However, the authors are not many-worlders and an individual QCH should be a self-sufficient thing.
I think you are right. But the notion of inextendibility is meant to apply both to causal histories that continue forever and to causal histories that have terminal future events. It's a step toward defining the analogue of past and future light-cones in a discrete event-geometry.
↑ comment by Psy-Kosh · 2009-12-21T04:20:11.128Z · LW(p) · GW(p)
Well, what precisely is meant to be excluded when intersections with inextendible paths are the only ones that matter? (ugh... I just had the thought that this conversation might summon the spambots :)) Anyways, why only the inextendible paths?
As for the rest, I want to read through and understand the paper before I comment on it. I came to a halt early on due to being confused about extendibility.
However, my overall question is whether the idea on its own naturally produces a single history, or whether it still needs some sort of "collapse" or other contrived mechanism to do so.
↑ comment by Mitchell_Porter · 2009-12-21T06:10:01.246Z · LW(p) · GW(p)
This earlier paper might also help.
The big picture is that we are constructing something like special relativity's notion of time, for any partially ordered set. For any p and q in the set, either p comes before q, p comes after q, or p and q have no order relation. That last possibility is the analogue of spacelike separation. If you then have p, q, r, s..., all of which are pairwise spacelike, you have something resembling a spacelike slice. But you want it to be maximal for the analogy to be complete. The bit about intersecting all inextendible paths is one form of maximality - it's saying that every maximal timelike path intersects your spacelike slice.
Then, having learnt to think of a poset as a little space-time, you associate Hilbert spaces with the elements of the poset, and something like a local Schrodinger evolution with each succession step. But the steps can be one-to-many or many-to-one, which is why they use the broader notion of "completely positive mapping" rather than unitary mapping. You can also define the total Hilbert space on a spacelike slice by taking the tensor product of the little Hilbert spaces, and the evolution from one spacelike slice to a later one by algebraically composing all the individual mappings. All in all, it's like quantum field theory constructed on a directed graph rather than on a continuous space.
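(For concreteness, here is a toy sketch of the purely order-theoretic part, on an invented four-event causal set; it illustrates only the "spacelike slice intersects every inextendible path" condition, not anything taken from the actual papers.)

```python
from itertools import combinations

# Invented toy causal set: a diamond-shaped DAG, node -> direct successors.
# 'b' and 'c' have no order relation: the analogue of spacelike separation.
succ = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}

def inextendible_paths(node):
    """All maximal future-directed paths starting at `node`."""
    if not succ[node]:
        return [[node]]
    return [[node] + rest for s in succ[node] for rest in inextendible_paths(s)]

paths = inextendible_paths('a')   # [['a', 'b', 'd'], ['a', 'c', 'd']]

def precedes(u, v):
    """True if there is a directed path from u to v (u causally precedes v)."""
    return u == v or any(precedes(s, v) for s in succ[u])

def is_spacelike_slice(nodes):
    """Pairwise order-unrelated (an antichain), AND intersecting every
    inextendible path: the maximality condition described above."""
    antichain = all(not precedes(x, y) and not precedes(y, x)
                    for x, y in combinations(nodes, 2))
    hits_every_path = all(any(n in nodes for n in path) for path in paths)
    return antichain and hits_every_path

print(is_spacelike_slice({'b', 'c'}))  # True: a maximal antichain, a "time slice"
print(is_spacelike_slice({'b'}))       # False: misses the path through 'c'
```

Attaching a small Hilbert space to each node and a completely positive map to each succession step, as described above, would then be layered on top of this order-theoretic skeleton.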
my overall question is whether the idea on its own naturally produces a single history, or whether it still needs some sort of "collapse" or other contrived mechanism to do so.
I find it hard to say how naturally it does so. The paper is motivated by the problem that the Wheeler-deWitt equation in quantum cosmology only applies to "globally hyperbolic" spacetimes. It's an exercise in developing a more general formalism. So it's not written in order to promote a particular quantum interpretation. It's written in the standard way - "observables" are what's real, quantum states are just guides to what the observables will do.
A given history will attach a quantum state to every node in the causal graph. Under the orthodox interpretation, the reality at each node does not consist of the associated quantum state vector, but rather local observables taking specific values. Just to be concrete, since this must sound very abstract, let's talk in terms of qubits. Suppose we have a QCH with a qubit state at every node. Orthodoxy says that these qubit "states" are not the actual states, the actuality everywhere is just 0 or 1. A many-worlds interpretation would have to say those maximal spacelike tensor products are the real states. But when we evolve that state to the next spacelike slice, it should usually become an unfactorizable superposition. This is in contradiction with the QCH philosophy of specifying a definite qubit state at each node. So it's as if there's a collapse assumption built in - only I don't think it's a necessary assumption. You should be able to talk about a reduced density matrix at each node instead, and still use the formalism.
For me the ontological significance of QCH is not that it inherently prefers a single-world interpretation, but just that it shows an alternative midway between many worlds and classical spacetime - a causal grid of quasi-local state vectors. But the QCH formalism is still a long way from actually giving us quantum gravity, which was the objective. So it has to be considered unproven work in progress.
comment by PhilGoetz · 2009-12-21T06:00:09.200Z · LW(p) · GW(p)
What I'm uncomfortable with isn't the idea of a god-free physical universe, it's the air of satisfaction that atheists give off.
There are many who are uncomfortable with the air of satisfaction that the faithful give off. I'm not convinced there's an asymmetry here.
↑ comment by benelliott · 2011-08-10T19:55:22.395Z · LW(p) · GW(p)
The difference, hopefully, is that while LWers may be uncomfortable with the air of satisfaction the faithful give off, this is not the reason why they are atheists (in practice they may or may not actually live up to this lofty ideal).
comment by SilasBarta · 2009-12-20T06:37:16.683Z · LW(p) · GW(p)
The instinctive status hierarchy treats factual beliefs in pretty much the same way as policy proposals.
Bingo. Like I've harped on and on about, humans don't naturally decouple beliefs from values, or ought from is. If an ought (esp. involving distribution of resources) hinges on an "is", it's too often the "is" that gets adjusted, self-servingly, rather than the ought.
Take note, Wei_Dai and everyone who uses could-should-agents as models of humans.
↑ comment by wedrifid · 2009-12-20T13:39:06.299Z · LW(p) · GW(p)
Take note, Wei_Dai and everyone who uses could-should-agents as models of humans.
I agree with your point about the is/ought non-distinction. When you refer to CSAs, are you just emphasising the extent to which humans diverge from that idealized model?
For my part, I find the CSA model interesting but don't find CSAs a remotely useful way to model humans. But that is probably because 'could and should' are the easy part, and I need other models to predict the 'but probably will' bit.
↑ comment by SilasBarta · 2009-12-20T14:47:37.241Z · LW(p) · GW(p)
Yes, I think humans are hard to model as CSAs (because they don't cleanly cut "is" from "ought"), but my other problem with the model is that, AFAICT, anything can be equivalently expressed as a CSA, so I want to know an example of a system, preferably intelligent, that is not a CSA, so I know what I'm differentiating it from.
comment by spencerth · 2009-12-24T20:56:19.167Z · LW(p) · GW(p)
I came up with a quote for a closely related issue:
"Don't let the fact that idiots agree with you be the sole thing that makes you change your mind, else all you'll have gained is a different set of idiots who agree with you."
Naive people (particularly contrarians) put into a situation where they aren't sure which ideas are truly "in" or "out" or "popular" may become highly confused and find themselves switching sides frequently. After joining a "side", then being agreed with by people whose arguments were poor in support of something good, they find themselves making an argument like "Wow. So many idiots support this! There's no way this can be good." only to find out after switching sides again that the same thing keeps happening. Why? It's because there are likely complete fools who support every cause you might consider good.
Bottom line: consider arguments on their merits, and avoid automatically thinking that they're bad (or good) simply because of who believes them or the (bad) arguments made on behalf of the idea. That's difficult, but if you don't, you wind up with situations similar to Eliezer's.
comment by RobinHanson · 2009-12-21T02:17:48.390Z · LW(p) · GW(p)
As the post is written, Eliezer seems too quick to presume that if people do not want to affiliate with a belief because of the low status of folks who hold that belief, that must mean those people admit that the evidence favors the belief. Yes, if the belief happens to be correct, then having the wrong sort of early enthusiasts will sadly discourage others from accepting it. But this same process will also play out if the belief is wrong.
Note that not all new beliefs first gain the interest of low status folks - many new beliefs are first championed by high status groups. So we should ask: what sorts of beliefs tend to be first championed by low status folks? If those beliefs tend to be less true on average than the beliefs first championed by high status folks, then the usual status moves would actually make epistemic sense.
↑ comment by MichaelVassar · 2009-12-21T07:40:45.156Z · LW(p) · GW(p)
I'd like details here. The second paragraph seems basically false to me, though vague enough that it's not clearly false and specific examples would be of some use. Relative status hasn't been discussed, for instance. I'm also not sure what we are counting as a 'new belief'. The lowest status folks almost never advocate new beliefs.
comment by RHollerith (rhollerith_dot_com) · 2009-12-20T03:41:59.572Z · LW(p) · GW(p)
Thanks for writing this, Eliezer.
I consider it so informative I wrote a note to myself to re-read it in 6 months.
↑ comment by MichaelGR · 2009-12-20T06:29:45.149Z · LW(p) · GW(p)
I wrote a note to myself to re-read it in 6 months.
That's actually not a bad idea. I think I'll adopt it. Easy enough to do with iCal or Google Calendar.
↑ comment by RHollerith (rhollerith_dot_com) · 2009-12-20T06:39:39.567Z · LW(p) · GW(p)
That's actually not a bad idea. I think I'll adopt it. Easy enough to do with iCal or Google Calendar.
You might find it informative to know that personal-productivity guru David Allen advises against putting on one's calendar anything that will not "die" if you do not get to it that day.
The note I made went into a file that I will review July 2010. (I make such a file for every 3-month period coming up.)
↑ comment by randallsquared · 2009-12-22T15:27:03.433Z · LW(p) · GW(p)
So... you keep some of your calendar events in files by month, in order to get around a self-imposed limitation on another part of your calendar, which part you call "my calendar"?
↑ comment by RHollerith (rhollerith_dot_com) · 2009-12-23T00:51:49.542Z · LW(p) · GW(p)
Hi Randall Randall! If you and I were to arrange to meet the first Wednesday after the holidays, I would put a few words about the meeting in my calendar. Do you really want my calendar to be cluttered up with "nice to have" and "we should re-read this sometime" reminders? Aren't you a little worried that the reminder of our meeting will get lost in the noise, and I will neglect to go?
And if you aren't worried because you know that if I do not show then I'll never see my leather jacket again, or whatever, then I would be worried. And I do not like to worry: I want to know that whatever reminder I put in my calendar will definitely be seen by me on the day I intended for me to see it. And that means never putting so many things on my calendar that I start to regret consulting it. In fact, sometimes a whole month goes by in which there are zero words and zero marks in my calendar for the whole month (but then I lead an unusually unstructured life).
That file named July 2010? Well, first, there is no file named June 2010 or May 2010: there is only one for every 3 months. Moreover, past experience suggests that I probably will not get around to looking in there till September or December.
comment by Shalmanese · 2009-12-20T09:23:04.921Z · LW(p) · GW(p)
Huh? That is not at all what I read from Scott Aaronson on this and I don't see how your interpretation can be supported upon a close reading.
My interpretation of this is that people who are smugly contrarian suffer from their own rationality bias, one that leads them to a higher likelihood of truth but at the cost of a much, much higher variance.
Sure, the smug contrarians taught us to wash our hands between surgeries & discovered America, but they were also the ones who ushered in the French Revolution, the Cambodian Genocide & the Zimbabwe Land Reforms.
↑ comment by MichaelVassar · 2009-12-21T07:45:46.486Z · LW(p) · GW(p)
Huh? America? By being smart?
Ditto land reform.
↑ comment by Paul Crowley (ciphergoth) · 2009-12-20T10:15:03.516Z · LW(p) · GW(p)
re "higher likelihood of truth" - the great majority of contrarians are crazy and wrong.
comment by peteshnick · 2009-12-20T03:29:54.474Z · LW(p) · GW(p)
Hi Eliezer,
I personally like many worlds because it helps me count on quantum immortality in case I get hit in the head by a falling 2x4 before the singularity comes. However, I was disturbed when I read about the Ashfar experiment, as it seems to disprove many worlds... I couldn't find anything on your blog about it... What do you think about it?
↑ comment by JGWeissman · 2009-12-20T05:10:41.892Z · LW(p) · GW(p)
I personally like many worlds because it helps me count on quantum immortality ...
I worry whenever someone proclaims to "like" a theory because it predicts high utility, that they are conflating the concepts of "it would be nice for me if this theory were true", and "I believe this theory is true".
↑ comment by peteshnick · 2009-12-20T21:06:24.631Z · LW(p) · GW(p)
Hi to all... First off, sorry for the spelling error. Second, the link given above in one of the responses to my original post (given again below) explains why the experiment doesn't jibe with Many Worlds (or Copenhagen), at least in the opinion of the author. Third, sorry for not giving more background on the matter - I thought it was fairly well known. It's basically an experiment that seems to contradict the principle of complementarity, as it seems to reveal wave and particle features at the same time.
Once again, for background:
Wikipedia entry: http://en.wikipedia.org/wiki/Afshar_experiment
John Cramer on why it contradicts MW: http://www.analogsf.com/0410/altview2.shtml
I'm not saying I buy Cramer's argument - I was just curious as to other people's opinions.
Thanks!
↑ comment by cousin_it · 2009-12-20T15:11:05.101Z · LW(p) · GW(p)
How does the Ashfar experiment disprove many worlds?
↑ comment by Tyrrell_McAllister · 2009-12-20T17:26:26.800Z · LW(p) · GW(p)
I'm curious, too. peteshnick, I think that you weren't downvoted for contradicting the conventional wisdom here on LW. You were downvoted for referring to a topic that is too obscure without providing sufficient background for casual readers.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T21:14:30.527Z · LW(p) · GW(p)
I've got no idea why pete was downvoted, seemed like an honest question to me.
↑ comment by Tyrrell_McAllister · 2009-12-21T03:04:24.575Z · LW(p) · GW(p)
I didn't downvote peteshnick, but I didn't upvote either, for the reason I gave.
↑ comment by [deleted] · 2009-12-20T19:15:31.715Z · LW(p) · GW(p)
After a bit of searching, I think peteshnick is talking about the Afshar experiment. The wikipedia article is fascinating, but I don't really understand the issue. It only mentions many-worlds briefly, but includes a link to the creator of another interpretation saying that the experiment exposes a failure of both MWI and Copenhagen to match the math.
comment by Blueberry · 2009-12-20T01:21:43.504Z · LW(p) · GW(p)
But is MWI really contrarian? At least according to the Wikipedia page, several polls have shown a majority of quantum physicists accept it.
What may be bothering people about MWI is the question of what it actually means for these other worlds to exist: are they just theoretical constructs, or are they actually "out there" even though we can't ever interact with them?
Also, I've seen a great deal of confusion about what worlds actually exist: a number of people seem to think that MWI means that any world you can verbally describe exists somewhere, rather than just worlds that have split off from quantum events. No, there is not necessarily a world out there in which the Nazis won WWII.
↑ comment by Nick_Tarleton · 2009-12-20T01:31:06.800Z · LW(p) · GW(p)
a number of people seem to think that MWI means that any world you can verbally describe exists somewhere, rather than just worlds that have split off from quantum events. No, there is not necessarily a world out there in which the Nazis won WWII.
But all but the most fundamental (e.g. particle masses) asymmetry/"randomness" in the world comes from quantum events, no? Which would imply that every physically possible world (up to the size of the universe, if it's finite) exists in the wavefunction under a pretty broad definition of "physically possible", including Nazi victories and simulations of all physically impossible but logically possible worlds.
↑ comment by Roko · 2009-12-20T01:55:57.001Z · LW(p) · GW(p)
But all but the most fundamental (e.g. particle masses) asymmetry/"randomness" in the world comes from quantum events, no?
Correct. There is a branch of the wavefunction where a large sperm whale and a bowl of petunias were spontaneously created 500 miles above the city of Copenhagen exactly 10 minutes ago.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T01:57:00.632Z · LW(p) · GW(p)
Not if Robin's right about mangled worlds.
↑ comment by Blueberry · 2009-12-20T02:11:18.935Z · LW(p) · GW(p)
Even if Robin is not right about mangled worlds, I don't think this is right. Why would there necessarily be a quantum event that produced a sperm whale and a bowl of petunias? ("Oh no, not again.")
↑ comment by Nick_Tarleton · 2009-12-20T02:33:49.492Z · LW(p) · GW(p)
Nearby atoms tunneling into the appropriate places.
↑ comment by MichaelVassar · 2009-12-21T08:05:25.487Z · LW(p) · GW(p)
But tunneling, and atoms even, are classical concepts, not QM concepts. I'm confused here, but I suspect that others are as well. My impression is that the best pseudo-classical perspective is that anything can appear anywhere at any time with low probability (!!) but that it's confusion to think in terms of classical objects doing anything but following classical physics as the classical particles, indeed the classical objects, simply ARE the math of classical physics which is approximated by QM.
This is a nit to pick perhaps, but it seems to me that the proper understanding of Boltzmann brains involves this line of attack as well as a Turing Machine refoundation.
↑ comment by Blueberry · 2009-12-21T10:26:59.112Z · LW(p) · GW(p)
I've yet to see a clear answer on this. My understanding was that quantum events occur when a particle interacts with another particle in such a way as to yield different outcomes depending on its spin (essentially "measuring" the spin). There are exactly two possibilities and the world only splits into two.
It's not at all clear that you can arrange the possibilities into a world where a sperm whale spontaneously appears. I would love to see a correction or further explanation.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-12-22T00:22:14.358Z · LW(p) · GW(p)
Spin is popular for examples and experiments, but it's not fundamentally special; all physical properties are subject to QM in the same way.
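A toy illustration of this (arbitrary numbers, just to show the point): "measure" a discretized position instead of a spin and you get one branch per distinguishable value, not a built-in two-way split.

```python
import numpy as np

# A "measurement" of position over N bins yields N branches weighted by
# |amplitude|^2 - nothing forces exactly two outcomes. The wavepacket
# below is arbitrary, not a model of any real system.

N = 8
x = np.arange(N)
psi = np.exp(-0.5 * ((x - 3.5) / 1.5) ** 2).astype(complex)
psi /= np.linalg.norm(psi)  # normalize so the weights sum to 1

branch_weights = np.abs(psi) ** 2
print(branch_weights)        # eight branches, one per position bin
print(branch_weights.sum())  # 1.0
```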
Have you read the Quantum Physics Sequence?
↑ comment by MichaelVassar · 2009-12-21T07:57:24.849Z · LW(p) · GW(p)
I don't think that this is certain at all. Also, our intuitions of what counts as logically possible are terribly, terribly unreliable.
comment by byrnema · 2009-12-21T21:12:22.713Z · LW(p) · GW(p)
I don't agree with the interpretation of Scott's comment.
Scott Aaronson doesn't object to Many Worlds because he associates it with particular contrarian group characteristics; he objects to it because (1) by his own way of thinking, he disagrees with Many Worlds, and (2) he doesn't like the idea that people who do believe Many Worlds, by another way of thinking, are projecting superiority.
"What you've forced me to realize, Eliezer, and I thank you for this: What I'm uncomfortable with is not the many-worlds interpretation itself, it's the air of satisfaction that often comes with it."
So I believe that with this comment, Scott Aaronson was saying that while he can, with some effort, relate to believing Many Worlds, he is aware that this requires a subtle difference in thinking from his usual way of thinking. Perhaps he sees both ways of thinking as viable, but there is an irritation that goes with thinking expansively, especially if the other group is smug about its point of view.
And we do find different ways of thinking threatening, and feel smug about our own way of thinking. It’s really quite general. For example: people who dislike Britney Spears are often smug about not liking her. This seems strange when you consider that music is just meant to be enjoyed -- why should it be a source of pride to enjoy or not enjoy a particular music style?
What we “feel” is logical or natural varies from person to person, and as humans we form general classes, of which you and Scott occupy separate ones. Libertarian, Democrat, Republican, physical materialist, and theist are some of these classes. Instead of focusing on the science of Many Worlds, I think it would be more interesting to characterize the different aesthetics involved in different opinions about Many Worlds, and to recognize that these are subjective differences.
comment by pozorvlak · 2009-12-20T13:28:54.504Z · LW(p) · GW(p)
I think that, while there's some truth in what you say, you're twisting yourself into intellectual knots to avoid having to reify (and thus admit to) arrogance. As far as atheism goes, I think you were much more on the money with your post about untheism and antitheism: in a secular society, untheism is rarely an issue, but antitheism (like all proselytising belief-systems) is very annoying to the recipients.
comment by Mitchell_Porter · 2009-12-20T03:21:43.378Z · LW(p) · GW(p)
to me this sounds an awful lot like: "Sure, many-worlds is the simplest explanation that fits the facts, but I don't like the people who believe it."
Then, sorry to put it this way, but you have a hearing problem! :-) All he said was "many worlds could be true". Not "I would believe it if only I could let myself do so". He's agnostic about the truth; he considers many worlds a possibility; and he's put off by people who consider it a certainty.
Replies from: CarlShulman↑ comment by CarlShulman · 2009-12-20T05:28:00.543Z · LW(p) · GW(p)
Eliezer said this immediately before the quoted text! :-)
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2009-12-20T06:41:36.738Z · LW(p) · GW(p)
Where?
comment by happyseaurchin · 2009-12-29T15:42:50.603Z · LW(p) · GW(p)
I prefer to abstract the dynamic to "oppositional state" rather than personify into a "contrarian". That is, a contrarian is someone who places themselves in an oppositional state to another.
comment by sheldon · 2009-12-20T04:43:05.821Z · LW(p) · GW(p)
many-worlds is the simplest explanation that fits the facts,
I find it utterly baffling that anyone could say such a thing. Many worlds, the "simplest" explanation? And what facts does it fit at all, let alone as the best possible fit?
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2009-12-20T05:12:13.369Z · LW(p) · GW(p)
Hi, Sheldon. Eliezer has previously defended many worlds at length. Also, Less Wrong uses Markdown syntax: enclose text in *asterisks* to get italics.
Replies from: sheldon↑ comment by sheldon · 2009-12-20T22:56:56.172Z · LW(p) · GW(p)
Fine, but the notion that "many worlds" is the "simplest" explanation for anything seems absurd. "Many worlds" is the most extravagant and the least simple explanation that could ever be conceived.
Replies from: simpleton↑ comment by simpleton · 2009-12-20T23:28:37.757Z · LW(p) · GW(p)
Quite the opposite, under the technical definition of simplicity in the context of Occam's Razor.
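A toy way to see it (illustrative only, not a formal construction): on the algorithmic reading of Occam's Razor, simplicity is the length of the program that generates the predictions, not the size or number of things generated.

```python
from itertools import product

def all_worlds(n):
    # Enumerates every n-bit "world". The output is huge (2**n worlds),
    # but the program stays the same few lines for any n.
    return ["".join(bits) for bits in product("01", repeat=n)]

def one_world():
    # Singling out one specific world means spelling it out bit by bit:
    # the program grows with the world's description, even though the
    # output is smaller.
    return ["0110100110010110"]  # an arbitrary hardcoded 16-bit world

print(len(all_worlds(16)))  # 65536 worlds from a short program
print(len(one_world()))     # 1 world, but every bit had to be written in
```

By this measure, "the wavefunction just evolves" is a shorter program than "the wavefunction evolves, plus an extra rule that deletes all branches but one."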
comment by Thomas · 2009-12-21T11:06:08.279Z · LW(p) · GW(p)
Just a question for MWI advocates.
If this world W1 has a parallel world W2, which in turn has a parallel world W3 that W1 doesn't have - this being the very difference between W1 and W2 - is W3 second-order parallel to us?
Replies from: benelliott, orthonormal↑ comment by benelliott · 2011-08-10T19:54:05.143Z · LW(p) · GW(p)
MWI does not work like you think it does. The relation "X is parallel to Y" is not defined in a meaningful non-vacuous way.
Replies from: Thomas↑ comment by Thomas · 2011-08-10T20:22:54.778Z · LW(p) · GW(p)
Really? Not defined? So, there are no parallel worlds?
Or what?
Replies from: benelliott, wedrifid↑ comment by benelliott · 2011-08-10T21:30:09.455Z · LW(p) · GW(p)
According to MWI, more than one world exists. I don't understand why this means some worlds must be 'parallel' to others, or even what it would mean for two worlds to be 'parallel'. (I understand the concept as it relates to lines; is there another scientific meaning?)
Lots of humans exist, but we don't talk about "parallel humans". What do you mean by doing so with worlds?
Replies from: Thomas↑ comment by Thomas · 2011-08-11T06:03:54.865Z · LW(p) · GW(p)
It's practically a synonym. 'Parallel worlds' is the same thing as the MWI.
At least according to this:
http://en.wikipedia.org/wiki/Many-worlds_interpretation
There are 24 occurrences of "parallel worlds" or "parallel universes" in this article. Are they wrong about that?
Replies from: benelliott, wedrifid↑ comment by benelliott · 2011-08-11T11:06:30.160Z · LW(p) · GW(p)
The article contains 36 occurrences of the word 'collapse', but this certainly does not mean MWI and Collapse are the same interpretation.
The article's own use of 'parallel worlds' appears to mean 'any world other than the one we currently occupy', so "X is parallel to Y" means "X and Y both exist and X != Y".
Using this definition, we can answer your question quite easily: if W1 is parallel to W2 and W2 is parallel to W3 but W1 is not parallel to W3, we can deduce that W1 = W3; so 'second order parallel' just means identical.
You see what I mean when I say that treating parallel as a relation on worlds is a pretty vacuous way of defining it. Essentially you have drawn the complete graph of worlds, which in terms of information is pretty much equivalent to the empty graph of worlds.
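A tiny sanity check of that (hypothetical world labels, purely illustrative):

```python
worlds = {"W1", "W2", "W3"}

# Under the reading "X is parallel to Y" iff "both exist and X != Y",
# the relation is just the complete directed graph on the worlds:
parallel = {(x, y) for x in worlds for y in worlds if x != y}

# It can be rebuilt from the bare set alone, so writing it down adds
# no information - which is what makes the definition vacuous:
all_pairs = {(x, y) for x in worlds for y in worlds}
diagonal = {(w, w) for w in worlds}
assert parallel == all_pairs - diagonal

# And "W1 is not parallel to W3" collapses to "W1 == W3", which is why
# "second order parallel" reduces to plain identity.
```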
↑ comment by wedrifid · 2011-08-10T21:17:40.168Z · LW(p) · GW(p)
Really? Not defined? So, there are no parallel worlds?
Or what?
This question did not seem unreasonable to me! Somewhat hard to answer though, without just explaining quantum mechanics. Try reading this introduction.
↑ comment by orthonormal · 2012-04-19T03:40:04.749Z · LW(p) · GW(p)
I don't know if you're still as confused as you were when you wrote this, but you were most likely thinking of modal logic when you talked of worlds in terms of defining properties like "having a parallel world". Modal logic is confusing to the philosophers who use it, and quite probably entirely useless. Many-worlds is about interpreting the Schrödinger wavefunction as the universe (and locating ourselves as tiny patterns concealed in various corners of it), not about verbal descriptions of worlds. Does that help explain the disagreement?