Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds
post by Vladimir_Nesov · 2009-08-16T16:06:18.646Z · LW · GW · Legacy · 102 comments
Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics
Sections of the diavlog:
- When will we build the first superintelligence?
- Why quantum computing isn’t a recipe for robot apocalypse
- How to guilt-trip a machine
- The evolutionary psychology of artificial intelligence
- Eliezer contends many-worlds is obviously correct
- Scott contends many-worlds is ridiculous (but might still be true)
102 comments
Comments sorted by top scores.
comment by cousin_it · 2009-08-16T18:02:47.401Z · LW(p) · GW(p)
Upvoted, but it wasn't nearly as fascinating as I'd hoped, because it was all on our home turf. Eliezer reiterated familiar OB/LW arguments, Aaronson fought a rearguard action without saying anything game-changing. Supporting link for the first (and most interesting to me) disagreement: Aaronson's "The Singularity Is Far".
Replies from: billswift, MichaelGR, None↑ comment by billswift · 2009-08-17T02:38:20.595Z · LW(p) · GW(p)
I have a significant disagreement with this from that link:
I see a few fragile and improbable victories against a backdrop of malice, stupidity, and greed—the tiny amount of good humans have accomplished in constant danger of drowning in a sea of blood and tears
Since destroying things is MUCH easier than building, if humans weren't substantially inclined toward helpful and constructive values, civilization would never have existed in the first place nor could it continue to exist at all.
↑ comment by MichaelGR · 2009-08-19T23:01:47.164Z · LW(p) · GW(p)
Maybe I'm the only one, but I'd like to see a video of Eliezer alone. Just him talking about whatever he finds interesting these days.
I'm suggesting this because so far all the 2-way dialogs I've seen end up with Eliezer talking about 1/4 of the time, and most of what he's saying is correcting what the other person has said. So we end up with not much original Eliezer, which is what I'd really be interested in hearing.
↑ comment by [deleted] · 2009-08-16T18:25:19.042Z · LW(p) · GW(p)
I agree. I stopped watching about five minutes into it when it became clear that EY and Scott were just going to spend a lot of time going back-and-forth.
Nothing game-changing indeed. Debate someone who substantially disagrees with you, EY.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-16T20:46:59.295Z · LW(p) · GW(p)
Sorry about that. Our first diavlog was better, IMHO, and included some material about whether rationality benefits a rationalist - but that diavlog was lost due to audio problems. Maybe we should do another for topics that would interest our respective readers. What would you want me to talk about with Scott?
Replies from: eirenicon, psb, marks, None↑ comment by eirenicon · 2009-08-16T22:11:28.925Z · LW(p) · GW(p)
I'd like you to talk about subjects that you firmly disagree on but think the other party has the best chance of persuading you of. To my mind, debates are more useful (and interesting) when arguments are conceded than when the debaters agree to disagree. Plus, I think that when smart, rational people are disadvantaged in a discussion, they are more likely to come up with fresh and compelling arguments. Find out where your weaknesses and Scott's strengths coincide (and vice versa) and you'll both come out of the debate stronger for it. I wouldn't suggest this to just anyone but I know that (unlike most debaters, unlike most people) you're both eager to admit when you're wrong.
(I dearly love to argue, and I'm probably too good at it for my own good, but oh how difficult it can be to admit defeat at the end of an argument even when I started silently agreeing with my opponent halfway through! I grew up in an argumentative household where winning the debate was everything and it was a big step for me when I started admitting I was wrong, and even bigger when I started doing it when I knew it, not a half hour and two-thousand words of bullshit later. I was having an argument with my father about astrophysics a couple months ago, and it had gotten quite heated even though I suspected he was right. I hadn't followed up, but the next time I saw him he showed me a couple diagrams he'd worked out. It took me thirty seconds to say, "Wow, I really was totally wrong about that. Well done." He looked at me like a boxer who enters the ring ready for ten rounds and then flattens his opponent while the bell's still ringing. No particular reason for this anecdote, just felt like sharing.)
↑ comment by [deleted] · 2009-08-17T01:43:23.070Z · LW(p) · GW(p)
It's okay.
What do you disagree with Scott over? I don't regularly read Shtetl-Optimized, and the only thing I associate with him is a deep belief that P != NP.
I don't really know much about his FAI/AGI leanings. I guess I'll go read his blog a bit.
comment by Vladimir_Nesov · 2009-08-16T16:23:05.444Z · LW(p) · GW(p)
At one point in the dialog, Scott raises what I think is a valid objection to the "nine people in the basement" picture of FAI's development. He points out that it's not how science progresses, and so not how he expects this novel development to happen.
If we consider FAI as a mathematical problem that requires a substantial depth of understanding beyond what's already there to get right, any isolated effort is likely hopeless. Mathematical progress is a global effort. I can sorta expect a basement scenario if most of the required math happens to be already developed, so that the remaining challenge is to find the relevant math, assemble it in the right way, and see the answer. But that doesn't sound very likely.
Alternatively, a "team in the basement" could wait for the right breakthrough in mainstream mathematics and, being prepared, apply it to the problem faster than anyone else. This seems more realistic, but may require the mainstream to know what to look for. Which involves playing with existential risk.
Replies from: wedrifid, timtyler↑ comment by wedrifid · 2009-08-29T14:00:14.534Z · LW(p) · GW(p)
At one point in the dialog, Scott raises what I think is a valid objection to the "nine people in the basement" picture of FAI's development. He points out that it's not how science progresses, and so not how he expects this novel development to happen.
I would like to hear more from Eliezer on just how likely he thinks the 'nine people in the basement' development scenario is.
My own impression is that a more gradual development of GAI is more likely, but that 'basement development' is the only way there is even a remote possibility that the development will not lead to rapid human extinction. That would make the 'nine people in the basement' picture either wishful thinking or 'my best plan of action', depending on whether or not we are Eliezer.
↑ comment by timtyler · 2009-08-16T23:30:19.776Z · LW(p) · GW(p)
"Breakthroughs" are not really how synthetic intelligence has progressed so far. Look at speech recognition, for example. So far, that has mostly been a long, gradual slog. Maybe we are doing it wrong - and there is an easier way. However, that's not an isolated example - and if there are easier ways, we don't seem to be very good at finding them.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-08-16T23:52:41.829Z · LW(p) · GW(p)
Of course, "breakthroughs" is a cumulative impression: now you don't know how to solve the problem or even how to state it, and 10 years later you do.
Replies from: timtyler↑ comment by timtyler · 2009-08-17T00:19:42.067Z · LW(p) · GW(p)
The idea of a "breakthrough" denotes a sudden leap forwards. There have been some of those.
One might cite back propagation, for example - but big breakthroughs seem rare, and most progress seems attributable to other factors - much as Robin Hanson claims happens in general: "in large systems most innovation value comes from many small innovations".
comment by Furcas · 2009-08-16T20:20:02.523Z · LW(p) · GW(p)
Well, that was interesting, if a little bland. I think the main problem was that Scott is the kind of guy who likes to find points of agreement more than points of disagreement, which works fine for everyday life, but not so well for this kind of debate.
By the way, I noticed that this was "sponsored" by the Templeton Foundation, which I and many other people who care about the truth find deeply repulsive.
Replies from: byrnema, thomblake, timtyler↑ comment by byrnema · 2009-08-16T21:08:47.281Z · LW(p) · GW(p)
In response to this comment, I just spent some time on the Templeton Foundation web page to see why you don't like them. Wow, interesting. It's clear why you wouldn't like them. They're like the Less Wrong antithesis. They seem to have a completely opposite POV, but judging from the comments I've read so far, are quite intellectual as well. I spent the summer reading Less Wrong... I think I'll give these guys some time (and form my own conclusion).
Replies from: Furcas↑ comment by thomblake · 2009-08-17T14:41:58.172Z · LW(p) · GW(p)
Why be repulsed at the Templeton Foundation?
It seems like they're mostly on the up-and-up.
Replies from: Furcas↑ comment by Furcas · 2009-08-17T17:43:44.997Z · LW(p) · GW(p)
The purpose of the Templeton Foundation is to blur the line (in people's minds) between science and religion. I'm sure you know how it goes: Science and religion are Different Ways Of Knowing The Same Truth, Blah Blah Blah™.
A few years ago they were fairly straightforward about it (it was practically spelled out on their website), but after being subjected to a lot of criticism by secular scientists and philosophers, they've been going about it much more sneakily. They sponsor respectable events and fund respectable science to earn credibility, and spend that credibility on stuff like this and other sneaky attempts to make it seem like among first-rate scientists/philosophers/epistemic authority figures, there's only a small minority that views religion as utter hogwash. There's also the Templeton Prize, that rewards scientists who've said something appropriately respectful about religion, and many other lesser brib... I mean gifts. All of this hidden behind an interest in what they call "The Big Questions", by which they mean, "Questions to which the answer is God".
Replies from: thomblake↑ comment by thomblake · 2009-08-17T18:01:09.043Z · LW(p) · GW(p)
It doesn't seem to me they're doing anything terribly subversive. Even the thing you linked to didn't look too bad - they even have Christopher Hitchens up there.
It seems like some sort of newagey softboiled ecumenical pantheism might just be the way to cut the knot between angry atheists and angry theists. Pragmatism moves me to think they're on the right side here.
Replies from: Furcas, timtyler↑ comment by Furcas · 2009-08-17T18:22:16.827Z · LW(p) · GW(p)
It doesn't seem to me they're doing anything terribly subversive. Even the thing you linked to didn't look too bad - they even have Christopher Hitchens up there.
Like I said, they're a sneaky bunch. Out of 13 contributors, they invite three or four forthright atheists, just to make it seem like they're being fair. The rest are theists (one Muslim and lots of Christians) or 'faitheists', agnostics and pantheists who believe in belief.
It seems like some sort of newagey softboiled ecumenical pantheism might just be the way to cut the knot between angry atheists and angry theists. Pragmatism moves me to think they're on the right side here.
First, the Templeton Foundation's current president, John Templeton Jr., is an evangelical Christian. The softboiled pantheism you think you're seeing is Christianity hidden by prodigious volumes of smoke.
Second, whatever happened to caring about the truth? Would you also say that belief in a cube-shaped Earth might just be the way to cut the knot between angry round-Earthers and angry flat-Earthers?
Replies from: Wei_Dai, Alicorn, thomblake↑ comment by Wei Dai (Wei_Dai) · 2009-08-17T18:51:33.979Z · LW(p) · GW(p)
It's interesting to compare the 1996 Templeton site:
The Templeton Prize for Progress in Religion (especially spiritual information through science) is awarded each year to a living person who shows extraordinary originality in advancing humankind's understanding of God.
to the current site:
The Prize is intended to recognize exemplary achievement in work related to life's spiritual dimension.
Another one. Old:
- Create and fund projects forging stronger relationships and insights linking the sciences and all religions
- Apply scientific methodology to the study of religious and spiritual subjects
- Support progress in religion by increasing the body of spiritual information through scientific research
- Encourage a greater appreciation of the importance of the free enterprise system and the values that support it
- Promote character and value development in educational institutions
New:
Established in 1987, the Foundation’s mission is to serve as a philanthropic catalyst for discovery in areas engaging life’s biggest questions. These questions range from explorations into the laws of nature and the universe to questions on the nature of love, gratitude, forgiveness and creativity.
ETA: I wonder what LessWrong will look like in 13 years. :)
Replies from: PhilGoetz, Furcas↑ comment by PhilGoetz · 2009-08-17T21:01:44.565Z · LW(p) · GW(p)
If you look at the history of the Templeton Prize and their other endeavors, you will find that they never gave an award or a grant to anybody who came up with the "wrong answers". I mean, if they were really interested in "engaging life's biggest questions" they would have given a Templeton to Dawkins for "The God Delusion".
↑ comment by Alicorn · 2009-08-17T18:41:02.071Z · LW(p) · GW(p)
I did a little poking on Wikipedia.
- An atheist, culturally Jewish
- A Dominican friar
- A Methodist
- A possible Muslim, although the Wikipedia page doesn't come out and actually say it and there's some evidence that he is a non-theist and critical of Islam
- A non-theist with a Christian upbringing and general theist sympathies
- An atheist raised Orthodox Jewish
- Christopher-freakin'-Hitchens
- A Church of England priest
- Another atheist
- Unclear what Jerome Groopman is
- Another atheist
- A Catholic
- A guy with a very nontraditional definition of God, sort of reminiscent of what byrnema has said
Given the demographics of the population at large and the content of the question the contributors were answering, I think four actual Christians out of thirteen contributors is very modest.
Replies from: PhilGoetz, Furcas↑ comment by PhilGoetz · 2009-08-17T21:07:10.872Z · LW(p) · GW(p)
Look at the past winners of the Templeton Prize. If you look at the winners before 2000, a lot of them were evangelists who had nothing to do with science+religion: Pandurang Shastri Athavale, Bill Bright, Billy Graham, Chuck Colson, Kyung-Chik Han, Mother Teresa.
↑ comment by Furcas · 2009-08-17T19:20:21.456Z · LW(p) · GW(p)
Like I said, three or four forthright atheists (depending on what you think of Michael Shermer), the rest are theists or faitheists.
I mean, just take a quick look at the essays (not the titles). Only three answer the question, "Does science make belief in God obsolete?" with a clear Yes. Shermer is less clear, but let's count him as a Yes. The remaining nine answer with No.
Replies from: thomblake, Alicorn↑ comment by thomblake · 2009-08-17T19:29:30.156Z · LW(p) · GW(p)
I must say, I'd answer "No" straightforwardly to that question. While it may be the case that belief in God is 'obsolete', I think what that question means at least needs some unpacking (How is a belief obsolete? Is that a category mistake?), and I don't think science is necessarily what makes that belief 'obsolete'.
Reason, perhaps, or good philosophy, might do the trick.
↑ comment by Alicorn · 2009-08-17T19:25:24.658Z · LW(p) · GW(p)
The question was not, "Does science make it clear that it is an error to believe in God?" I have not read the essays, but if I were answering the question about whether religion is obsolete, I doubt my answer would be interpreted as an unambiguous Yes. Obsolescence isn't about accuracy, it's about consensus of historicity over contemporary usefulness.
↑ comment by thomblake · 2009-08-17T18:32:36.270Z · LW(p) · GW(p)
First, the Templeton Foundation's current president, John Templeton Jr., is an evangelical Christian. The softboiled pantheism you think you're seeing is Christianity hidden by prodigious volumes of smoke.
Well most of the pantheism I've encountered comes from the Christian worldview. And that sounds like an ad-hominem to me... the Foundation doesn't seem to be coming from an evangelical Christian viewpoint in general, and it's certainly not its stated mission.
Second, whatever happened to caring about the truth? Would you also say that belief in a cube-shaped Earth might just be the way to cut the knot between angry round-Earthers and angry flat-Earthers?
If nothing really turned on the question of the Earth's shape, then sure.
To give the classic Pragmatist example, people used to kill each other over the question of transubstantiation of the Eucharist. One side said that the Eucharist is just bread, symbolizing the body and blood of Christ. The other side said that the Eucharist is really the body and blood of Christ, but for all practical purposes (and under any scientific scrutiny) is indistinguishable from bread. It seems like insisting that one side or the other was wrong on this question is the wrong way to go, as nothing really turns on it and they're both saying roughly the same thing.
Better to just 'live and let live' and let 'truth' go this time, in favor of actually making things better. If people do end up making 'God' mean something vacuous, then there's no harm in letting them say it.
Replies from: Furcas↑ comment by Furcas · 2009-08-17T19:05:29.778Z · LW(p) · GW(p)
And that sounds like an ad-hominem to me... the Foundation doesn't seem to be coming from an evangelical Christian viewpoint in general, and it's certainly not its stated mission.
Taking a person's most fundamental beliefs into account when trying to figure out what their true intentions are is not an ad hominem, it's common sense.
To give the classic Pragmatist example, people used to kill each other over the question of transubstantiation of the Eucharist. (...) It seems like insisting that one side or the other was wrong on this question is the wrong way to go, as nothing really turns on it and they're both saying roughly the same thing.
That's short-sighted. Nothing may really turn on the question of transubstantiation, but there's a lot that turns on the cognitive processes that led millions of people to believe that a cracker is the body of a magical Jewish half-deity.
I'm all in favor of "actually making things better", but the middle-of-the-road solution that the Templeton Foundation is (outwardly, deceitfully) espousing won't do that. Middle-of-the-road solutions are easy, they allow us to avoid sounding shrill, strident, and militant, but easiness is not effectiveness.
If people do end up making 'God' mean something vacuous, then there's no harm in letting them say it.
There is harm, because people who don't mean something vacuous by 'God' like to give the impression that they do to shield themselves against criticism. And thanks to 'pragmatism', it usually works.
Replies from: thomblake↑ comment by thomblake · 2009-08-17T19:11:55.412Z · LW(p) · GW(p)
There is harm, because people who don't mean something vacuous by 'God' like to give the impression that they do to shield themselves against criticism. And thanks to 'pragmatism', it usually works.
If theists need to pretend to be atheists to be taken seriously, then we've already won.
Replies from: Furcas↑ comment by Furcas · 2009-08-17T19:34:07.095Z · LW(p) · GW(p)
I didn't think that by a vacuous God you meant a non-existent God.
Obviously, theists don't need to pretend to be atheists: Theism is respected by everyone except a small minority of neo-militant ultra-materialist fundamentalist atheists. To be taken seriously, theists merely need to be (or pretend to be, in the presence of critics) moderates, i.e. believers in a God that acts in a very subtle way and conforms to modern secular morality.
So no, "we" haven't won. The limited form of insanity we call faith is still the norm and is still respected.
↑ comment by timtyler · 2009-08-17T20:09:08.454Z · LW(p) · GW(p)
They seem like dark forces to me. The more dangerous for conveying an innocuous appearance. Religion in scientific clothing.
Replies from: thomblake↑ comment by thomblake · 2009-08-17T20:15:51.853Z · LW(p) · GW(p)
If 'seem like dark forces' is the best you can come up with, then it sounds like you're on no better ground than the theists.
It doesn't seem to me that they're "religion in scientific clothing", but rather an institution that cares about lots of big questions, some of which have traditionally been (and are still) answered primarily by religious sources.
You can't just excise a whole part of the human experience and not expect to lose something good. Diversity is sometimes far more valuable than optimality.
Replies from: Cyan, timtyler↑ comment by timtyler · 2009-08-17T20:27:26.901Z · LW(p) · GW(p)
Right, well, I have limited resources to spend on criticising their particular perversion of science. The purpose of the Templeton Foundation is to blur the line between straightforward science and explicitly religious activity, making it seem like the two enterprises are part of one big undertaking. It's an enterprise I find noxious.
↑ comment by timtyler · 2009-08-16T23:31:51.738Z · LW(p) · GW(p)
It was whaaaat? Where are you getting that from?
Replies from: Furcas↑ comment by Furcas · 2009-08-16T23:35:39.187Z · LW(p) · GW(p)
Click on the link and look to the right side of the video.
Replies from: timtyler↑ comment by timtyler · 2009-08-16T23:54:24.859Z · LW(p) · GW(p)
Thanks! I see! So that's these videos:
http://www.bloggingheads.tv/percontations/
Ironically, the participants discuss the Templeton Foundation 18 minutes in - did they know? ;-)
John Horgan explains how he rationalises taking the Templeton Foundation's money here:
http://www.edge.org/3rd_culture/horgan06/horgan06_index.html
Replies from: timtyler↑ comment by timtyler · 2009-08-17T00:34:52.267Z · LW(p) · GW(p)
Wow! Are these folks all on the Templeton Foundation's payroll?
http://www.templeton.org/evolution/
I wondered why Robert Wright had bothered to write a whole book about god! ;-)
comment by Christian_Szegedy · 2009-08-18T20:07:49.407Z · LW(p) · GW(p)
I liked the discussion, especially the final part on the many world interpretation (MWI).
I had the impression that Eliezer had a better understanding of quantum mechanics (QM); however, I found one of his remarks very misleading (and it rightly confused Scott): Eliezer seemed to argue that MWI somehow resolves the difficulty of unifying QM with general relativity (GR) by resolving non-locality.
It is true that non-locality is resolved by Everett's interpretation, but the real problem with QM+GR is that the renormalization of quantized gravity does not seem to work out mathematically, at least not in a straightforward manner. Moreover, MWI requires gravity to be quantized, and therefore forces physicists to come up with a more elaborate solution.
Anyways, I agree with Eliezer on the other arguments in favor of MWI (linearity, locality, objectivity, etc...), but think that making overreaching remarks rendered his position at least a bit suspect for no good reason.
To be fair: MWI has its own technical quirks (e.g. choice of basis, explanation of probabilities,...) but they don't seem to be as fundamental as those of the classical interpretation. However the discussion would have been more interesting if Scott could have brought up those points rather than the purely philosophical issues.
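[Editor's gloss, not part of the original comment: the standard way to see why "the renormalization does not work out" is simple power counting. Newton's constant carries negative mass dimension, and a coupling of negative mass dimension renders the perturbative expansion non-renormalizable, requiring infinitely many counterterms:

```latex
S_{EH} = \frac{1}{16\pi G_N}\int d^4x\,\sqrt{-g}\,R ,
\qquad
G_N = \frac{1}{M_{\mathrm{Pl}}^2}
\;\Rightarrow\; [G_N] = (\text{mass})^{-2}.
```
]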
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-18T21:06:31.159Z · LW(p) · GW(p)
"relativity" was meant to refer to SR not GR
Replies from: Christian_Szegedy↑ comment by Christian_Szegedy · 2009-08-19T06:04:39.042Z · LW(p) · GW(p)
Sorry, it seems I was too sloppy. I must even revise my opinion of Scott, who seemed to represent a very reasonable point of view, although (I agree with you) he tries to conform a bit too much for my taste as well.
Still, I have a very specific intuitive suspicion about the MWI: if physics is so extremely generous and powerful that it spits out all those universes with ease, why does it not allow us to solve exponential problems?
How come our world has such very special physics that it allows us to construct machines that are slightly more powerful than Turing machines (in an asymptotic sense), yet still does not make exponential (or even NP-complete) problems tractable?
It looks like a strange twist of nature that we have this really special physics that allows us to construct computational processes in this very narrow middle ground of asymptotic complexity: generating all those exponentially many universes, but not allowing their inhabitants to exploit them algorithmically to the full extent.
Can't it be that our world still has to obey certain complexity limits, and that some of the universes have to be pruned away for some reason?
Replies from: Eliezer_Yudkowsky, timtyler↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-19T15:39:13.391Z · LW(p) · GW(p)
This is a fascinating way of looking at it.
My first thought was to reply, "Yes, most worlds may need to be pruned a la Hanson's mangled worlds, but that doesn't mean you can end up with a single global world without violating Special Relativity, linearity, unitarity, continuity, CPT invariance, etc."
But on second thought this seems to be arguing even further, for the sort of deep revolution in QM that Scott wants - a reformulation that would nakedly expose the computational limits, and make the ontology no more extravagant than the fastest computation it can manage within a single world's quantum computer. So this would have to reduce the proliferation of worlds to sub-exponential, if I understand it correctly, based on the strange reasoning that if we can't do exponential computations in one world then this should be nakedly revealed in a sub-exponential global universe.
But you still cannot end up with a single world, for all the reasons already given - and quantum computers do not seem to be merely as powerful as classical computers, they do speed things up. So that argues that the ontology should be more than polynomial, even if sub-truly-exponential.
Replies from: Christian_Szegedy, Christian_Szegedy↑ comment by Christian_Szegedy · 2009-08-20T21:40:14.777Z · LW(p) · GW(p)
Thanks. I was not aware that Scott has the same concerns based on computational complexity that I have.
I am not even sure that the ontology needs to rely on non-classical capabilities. If our multiverse is a super-sophisticated branch-and-bound type algorithm for some purpose, then it could still be the fastest, albeit super-polynomial, algorithm.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-20T22:14:18.028Z · LW(p) · GW(p)
I was not aware that Scott has the same concerns based on computational complexity that I have.
Don't know if he does. I just mean that Scott wants a deep revolution in general, not that particular deep revolution.
↑ comment by Christian_Szegedy · 2009-08-21T20:16:15.992Z · LW(p) · GW(p)
Some other thoughts about the MWI, that come to my mind after a bit more thinking:
Here is a version of the Schroedinger's cat experiment that would let anyone test the MWI for himself: Let us devise a quantum process that has a 99 percent probability of releasing into a room a nerve gas that kills humans without any pain. If I were really sure of the MWI, I would have no problem going into the room and pressing the button to start the experiment. In my own experience I would simply come out of the room unscathed for certain, as that would be the only world I would experience. OTOH, if I really did come out of the room as if nothing had happened, I could deduce with high probability that the MWI is correct. (If not: just repeat the experiment a couple of times...)
I must admit, I am not really keen on doing the experiment. Why? Am I really so unconvinced about the MWI? What are my reasons not to perform it, even if I were 100% sure?
Another variation of the above line of thought: suppose it is 2020 and, since 2008, year after year, the Large Hadron Collider has had all kinds of random-looking technical defects that prevented it from performing the planned experiments at the 7 TeV scale. Finally a physicist comes up with a convincing calculation showing that the probability that the collider will produce a black hole is much, much higher than anticipated, and that the chances that the Earth is destroyed are significant.
Would that be a convincing demonstration of the MWI? Even without the calculation, should we insist on trying to fix the LHC if we experience this pattern of breakdowns for years?
Replies from: bstark
comment by BrandonReinhart · 2009-08-18T19:30:49.042Z · LW(p) · GW(p)
I picked up a copy of Jaynes off of ebay for a good price ($35.98). There are 2 copies left in that auction. Someone here might be interested:
http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=280380684353
No need to vote this comment up or down.
comment by timtyler · 2009-08-16T23:24:37.285Z · LW(p) · GW(p)
I note that the Born probabilities were claimed to have been derived from decision theory for the MWI in 2007 by Wallace and Deutsch:
“Probabilities used to be regarded as the biggest problem for Everett, but ironically, they are now its most powerful success” - David Deutsch.
"In a September 2007 conference David Wallace reported on what is claimed to be a proof by Deutsch and himself of the Born Rule starting from Everettian assumptions. The status of these arguments remains highly controversial."
comment by timtyler · 2009-08-18T10:26:57.842Z · LW(p) · GW(p)
Robot ant: http://www.youtube.com/watch?v=0jyBiECoS3Q
I would say real ants are currently waaay ahead of robot ant controllers.
On the other hand - like EY says - there's a whole bunch of things that we can do which ants can't. So it is not trivial to compare.
Replies from: wedrifid
comment by shirisaya · 2009-08-17T20:16:25.071Z · LW(p) · GW(p)
On the issue of many-worlds, I must just be slow, because I can't see how it is "obviously" correct. It certainly seems both self-consistent and consistent with observation, but I don't see how this in particular puts it so far ahead of other ways of understanding QM as to be the default view. If anyone knows of a really good summary, for somebody who's actually studied physics, of why MWI is so great (and sadly, Eliezer's posts here and on overcomingbias don't do it for me), I would greatly appreciate the pointer.
In particular, two things that I have a hard time wrapping my head around are:
- If multiple worlds really are "splitting" from our own, how is this accomplished without serious violations of mass and energy conservation? (I'm sure somebody has treated this somewhere since it's so basic, but I've never seen it.)
- Even assuming everything else is fine, the actual mechanism by which worlds diverge has to be spelled out. (Maybe it is spelled out somewhere; if so, please help me end my ignorance.)
I'll admit that I haven't actually spent a great deal of time considering the issue, but I've never come across answers to basic questions of this sort.
Replies from: Douglas_Knight, timtyler, Z_M_Davis, byrnema, timtyler↑ comment by Douglas_Knight · 2009-08-18T04:18:54.375Z · LW(p) · GW(p)
if multiple worlds really are "splitting"
What if instead of talking about "many worlds" we just said "no collapse"? There's just this state, and it evolves according to Schroedinger's equation. Then of course there's conservation of energy.
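[Editor's gloss, not part of the original comment: for a closed system with a time-independent Hamiltonian, bare Schroedinger evolution already conserves the expected energy, with no reference to worlds at all:

```latex
i\hbar\,\frac{d}{dt}\lvert\psi\rangle = H\lvert\psi\rangle
\quad\Longrightarrow\quad
\frac{d}{dt}\,\langle\psi\vert H\vert\psi\rangle
= \frac{1}{i\hbar}\,\langle\psi\vert [H,H]\vert\psi\rangle = 0 .
```
]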
Replies from: shirisaya↑ comment by shirisaya · 2009-08-18T14:34:05.482Z · LW(p) · GW(p)
Sure, I'm certainly not saying that the Copenhagen interpretation is correct, and my understanding is that a decoherence view is both more useful and simpler. MWI (at least as I understand it) is a significantly stronger claim. When we take the probabilities that come from wave state amplitudes as observed frequencies among actually existing "worlds" then we are claiming that there are many different versions of me that actually exist. It's this last part that I find a bit of a stretch.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-08-18T18:13:40.516Z · LW(p) · GW(p)
If many different versions of you existing bothers you, does Schroedinger's cat bother you?
To the extent that MWI is a stronger claim than "no collapse," it's purely interpretative. It certainly doesn't posit any "splitting" beyond vanilla QM. Questions about conservation of energy suggest that you don't get this.
↑ comment by timtyler · 2009-08-17T21:05:46.531Z · LW(p) · GW(p)
For energy conservation see:
http://www.hedweb.com/manworld.htm#violate
The main reason for following the MWI is Occam's razor:
http://www.hedweb.com/manworld.htm#ockham%27s
Replies from: shirisaya↑ comment by shirisaya · 2009-08-18T03:21:33.909Z · LW(p) · GW(p)
Thank you, this is exactly the type of linking that I was looking for. Unfortunately, the FAQ that you so kindly provided isn't providing the rigor that I'm looking for. In fact, for the energy conservation portion, I think (although I'm by no means certain) that the argument has been simplified to the point that the explanation being offered isn't true.
I guess what I'd really like is an explanation of MWI that actually ties the math and the explanations together closely. (I think that I'm expressing myself poorly, so I'm sorry if my point seems muddled, but I'd actually like to really understand what Eliezer seems to find so obvious.)
Replies from: timtyler↑ comment by timtyler · 2009-08-18T08:02:19.809Z · LW(p) · GW(p)
The first sentence lays out the issue:
"the law conservation of energy is based on observations within each world. All observations within each world are consistent with conservation of energy, therefore energy is conserved."
Conservation of energy takes place within worlds, not between them.
FWIW, I first learned about the MWI from: Paul C.W. Davies' book: "Other Worlds" - waay back in the 1980s. It was quite readable - and one of the better popular books on QM from that era. It succeeded in conveying the "Occam" advantage of the theory.
Replies from: shirisaya, shirisaya↑ comment by shirisaya · 2009-08-18T14:56:16.602Z · LW(p) · GW(p)
OK, if that's really what it takes I guess I'll leave it at that. But I don't see the loss of generality from conservation laws operating on any closed system as a good thing, and I can't understand how weighting a world (that is claimed to actually exist) by a probability measure (that I've seen claimed to be meant as observed frequencies) is actually a reasonable thing to do.
I would actually like to understand this, and I suspect strongly that I'm missing something basic. Unfortunately, I don't have the time to make my ignorance suitable for public consumption, but if anyone would like to help enlighten me privately, I'd be delighted.
↑ comment by shirisaya · 2009-08-18T14:52:22.260Z · LW(p) · GW(p)
Ok, but this isn't actually making the case for MWI better to my mind. Instead of mass and energy being conserved in any closed system, they are now only conserved in closed systems up to the "size" of a "world". I don't see how this loss of generality (especially since "worlds" tend to "split" into things that must now be treated independently despite coming from the same source) is a good thing.
I actually want to understand this correctly and I strongly suspect that I'm missing something basic. Unfortunately, I don't really have the time to express my ignorance well in a public forum, but if anyone is willing to discuss privately, I'd be delighted.
↑ comment by Z_M_Davis · 2009-08-18T03:56:33.073Z · LW(p) · GW(p)
If anyone knows of a really good summary for somebody who's actually studied physics on why MWI is so great (and sadly, Eliezer's posts here and on overcomingbias don't do it for me) I would greatly appreciate the pointer.
You say Eliezer's posts didn't do it for you, but how much of it did you read? In particular, the point about parsimony favoring MWI is explained in "Decoherence is Simple". As for the mechanism of world divergence, I think the answer is that "worlds" are not an ontologically basic element of the theory. Rather, the theory is about complex amplitude in configuration space, and then from our perspective embedded within the physics, the evolution of the wavefunction seems like "worlds" "splitting."
Replies from: shirisaya↑ comment by shirisaya · 2009-08-18T15:04:37.137Z · LW(p) · GW(p)
You say Eliezer's posts didn't do it for you, but how much of it did you read?
I have read every post on overcomingbias and I'm pretty sure I've read every top-level post by Eliezer on Less Wrong. Although I very much enjoyed Eliezer's posts on the issue, they were intended for a wide audience and I'm looking for a technical discussion.
↑ comment by byrnema · 2009-08-18T00:23:24.356Z · LW(p) · GW(p)
I think that the many world hypothesis is aesthetic because it doesn't break symmetry. Suppose that in some set-up a particle can move down one path to the right or another path to the left and there are exactly equal probabilities of either path being taken. Choosing one of the paths -- by any mechanism -- seems arbitrary. It is more logical that both paths are taken. But the two possibilities can't interact: two different worlds.
In the world we experience, objects do occasionally move to the right. If there is not an alternate reality in which the object moved to the left, eventually, with either that object's movement, or the object that pushed it, or the object that pushed that, and so on, you have to explain how symmetry was ever broken in the first place.
Physicists don't like spontaneous breaking of symmetry. So much so, that the idea of many worlds suddenly seems totally reasonable.
Later edit: This is similar to the argument Eliezer made, in more detail and with more physics here.
Replies from: shirisaya↑ comment by shirisaya · 2009-08-18T03:13:57.627Z · LW(p) · GW(p)
In my understanding, what you have presented is an argument for why MWI is interesting (it has strong aesthetic appeal) and why it's worth looking into seriously (it doesn't seem to have spontaneous breaking of symmetry).
What I'm looking for is a compilation of reasons that I should believe that it is true, basically a list of problems with other interpretations and how MWI fixes it along with refutations of common objections to MWI. I should also note that I'm explicitly asking for rigorous arguments (I actually am a physicist and I'd like to see the math) and not just casual arguments that make things seem plausible.
Replies from: byrnema↑ comment by byrnema · 2009-08-18T13:11:45.298Z · LW(p) · GW(p)
I should also note that I'm explicitly asking for rigorous arguments
Many worlds is an interpretation of quantum mechanics. QM stays exactly the same; mathematics, evidence and everything. Whether an interpretation is plausible really just depends on what is aesthetic and what makes sense to you. I explained why some other physicists find Many Worlds reasonable. It's always going to be this nebulous opinion-based "support" because it's not a matter of empirical fact -- unless it ever turned out there is some way the worlds interact.
In my understanding, what you have presented is an argument for why MWI is interesting (it has strong aesthetic appeal) and why it's worth looking into seriously (it doesn't seem to have spontaneous breaking of symmetry).
You've made a distinction between MWI being aesthetic and MWI being worth looking into seriously, which makes it sound like you think the argument to avoid spontaneous breaking of symmetry is more than just an aesthetic one. Can you pinpoint the physical reason why we like to avoid it? (I was wondering before.)
And then a question for the physical materialists: Why do you feel comfortable discussing multiple worlds; with it being an interpretation rather than an empirical fact? Or do you think there could ever be evidence one way or the other? (I just read Decoherence is Falsifiable and Testable and I believe Eliezer is saying that Many Worlds is a logical deduction of QM, so that having a non-many-world-theory would require additional postulates and evidence.)
Replies from: timtyler↑ comment by timtyler · 2009-08-18T13:40:48.850Z · LW(p) · GW(p)
Uh huh. See:
"What unique predictions does many-worlds make?"
"Could we detect other Everett-worlds?"
"Many worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett)"
Replies from: byrnema↑ comment by timtyler · 2009-08-18T10:38:21.653Z · LW(p) · GW(p)
It mostly revolves around the idea of collapse. There's no experimental evidence for a collapse. In the MWI, there's no collapse. If we find evidence for a collapse someday, we will have to discard the MWI. However, people have been looking for a while now - and there's no sign of a collapse so far. So, applying Occam's razor, you get the MWI - or something similar.
comment by timtyler · 2009-08-17T07:54:53.968Z · LW(p) · GW(p)
Dennett and Hofstadter have "extremely large" estimates of the time to intelligent machines as well. I expect such estimates will prove to be wrong - but it is true that we don't know much about the size of the target in the search space - or how rough that space is - so almost any estimate is defensible.
comment by timtyler · 2009-08-18T09:59:30.804Z · LW(p) · GW(p)
Time symmetry is probably not a big selling point of the classical formulation of the MWI. What with all those worlds in the future that don't exist in the past.
OK - no information is created or destroyed - so it's technically reversible - but that's not quite the same thing as temporal symmetry.
It would be better if it were formulated so there were lots of worlds in the past too. You don't lose anything that way - AFAICS.
comment by timtyler · 2009-08-18T09:40:42.248Z · LW(p) · GW(p)
The discussion got a bit sidetracked around about when EY asked something like:
If you are assuming that you can give the machine one value and have it stable, why assume that there are all these other values coming into it which you can't control.
...about 27 minutes in.
Scott said something about that being how humans work. That could be expanded on a bit:
In biology, it's hard to build values in explicitly, since the genes have limited control over the brain - since the brain is a big self-organising system. It's as though the genes can determine the initial developmental trajectory - but then there's the wind to deal with.
If machine intelligence turns out to work much like that, then we may have similar difficulties building in machine values. If we can find a way of getting the machines to absorb values from surrounding agents, then that might save a lot of trouble.
Humans get many of their values from surrounding humans - via human culture. Were it not for that we would be like our cannibal ancestors from 1MY ago. Conscience and guilt are some of the mechanisms used to absorb those values. Evolution built those in - rather than all the details of the values of human society. It would have been technically difficult to build those in - and the result would have been inflexible. Instead it built a learning machine - and allowed the details of the values of human society to be one of the things learned.
Machine intelligence is quite likely to work along those lines if it is built on a connectionist model - where the brain grows from a simple initial state. There, we can't easily wire in the details of particular values - since it is so hard to understand the details of what is going on. However, we can wire in some gross values - pain, suffering, irritation, etc. Guilt is basically a way of applying negative reinforcement to past actions. It's a fairly primitive value - the kind that it is easier to build in.
comment by timtyler · 2009-08-17T09:19:59.514Z · LW(p) · GW(p)
I'm not sure the halved doubling time for quantum computers is right.
Maybe I'm not getting into the spirit of accepting the proposed counterfactuals - but is quantum computer performance doubling regularly at all? It seems more as though it is jammed up against decoherence problems already.
Replies from: Douglas_Knight, marc↑ comment by Douglas_Knight · 2009-08-18T04:36:31.367Z · LW(p) · GW(p)
It's a purely theoretical counterfactual about the combination of Moore's law and Grover's algorithm.
Moore's law says that the computer becomes twice as efficient in 18 months. Grover's algorithm says that the time taken by a quantum computer to solve SAT is the square root of the time required by a classical computer. Thus in 18 months, Moore's law of hardware should make the quantum computer 4 times as fast.
Replies from: pengvado, timtyler↑ comment by pengvado · 2009-08-18T08:19:58.198Z · LW(p) · GW(p)
Assume the number of quantum gate-ops per second doubles every 18 months. Assume SAT is O(2^n) on a classical computer and O(2^(n/2)) by Grover's. Then the maximum feasible problem size on a classical computer increases by 1 every 18 months, and on a quantum computer increases by 2. No factors of anything involved.
Alternately, if you measure a fixed problem size, then by assumption speed doubles for both.
So where does 4x come from?
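[Editor's illustration, not from either commenter: a minimal numeric sketch of the bookkeeping in this exchange, under pengvado's stated assumptions. The function name and starting budget are arbitrary choices for the example.]

```python
# Sketch of the Moore's-law-plus-Grover bookkeeping discussed above.
# Assumptions (pengvado's): classical SAT cost ~ 2**n operations,
# Grover-assisted cost ~ 2**(n / 2), hardware budget doubles every 18 months.

def feasible_n(budget_ops, quantum=False):
    """Largest instance size n whose cost fits within budget_ops."""
    n = 0
    while (2 ** ((n + 1) / 2) if quantum else 2 ** (n + 1)) <= budget_ops:
        n += 1
    return n

budget = 2.0 ** 40                    # arbitrary starting budget
for months in (0, 18, 36):
    b = budget * 2 ** (months // 18)  # one hardware doubling per 18 months
    print(months, feasible_n(b), feasible_n(b, quantum=True))

# Prints: 0 40 80 / 18 41 82 / 36 42 84 -- feasible n grows by +1 per
# doubling classically and +2 with Grover, while a fixed instance just
# gets 2x faster on both machines. A "4x per 18 months" reading only
# appears if you measure the quantum machine in classical-equivalent ops.
```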
↑ comment by Douglas_Knight · 2009-08-18T17:59:14.236Z · LW(p) · GW(p)
It just comes from treating classical computers as the correct measuring stick. It would be more precise to refer, as you do, to 18 months as the "add one" time rather than the doubling time. But if you do call it the doubling time, then for quantum computers it becomes the 4x time. Of course, it's not uniform - it doesn't apply to problems in P.
Replies from: timtyler↑ comment by timtyler · 2009-08-18T18:28:15.078Z · LW(p) · GW(p)
With classical computers, Moore's law improves serial and parallel performance simultaneously - by making components smaller.
With quantum computers, serial and parallel performance are decoupled - more qubits improve parallel performance, and miniaturisation has no effect on the number of qubits but improves serial processing performance. So there are two largely independent means of speeding up quantum computing. Which one supposedly doubles twice as fast as classical computers? Neither - AFAICS.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-08-18T21:25:25.599Z · LW(p) · GW(p)
Sorry, my original response should have been "yes, you aren't getting into the spirit of the counterfactual."
↑ comment by timtyler · 2009-08-18T07:39:55.623Z · LW(p) · GW(p)
Well, I can see what math was done. The problem is the false assertion. I learned in math classes that if you accept one false thing, you can prove everything, and consequently your understanding of the difference between what's true and what's not dwindles to zero. You can't just believe one false thing.
If we actually "switched to quantum computers" it isn't clear we would get an exponential trajectory at all - due to the proximity of physical limits. If we did get an exponential trajectory, I can see no coherent reason for thinking the doubling time would relate to that of classical computers - because the technology is quite different. Currently, quantum computers grow mostly by adding qubits - not by the shrinking in component size that drives Moore's law in classical computers. That increases their quantum-parallelism, but doesn't affect their speed.
↑ comment by marc · 2009-08-17T23:23:56.490Z · LW(p) · GW(p)
I guess that quantum computers halve the doubling time, as compared to a classical computer, because every extra qubit squares the available state space. This could give the factor two in the exponential of Moore's law.
Quantum computing performance currently isn't doubling but it isn't jammed either. Decoherence is no longer considered to be a fundamental limit, it's more a practical inconvenience. The change that brought this about was the invention of quantum error correcting codes.
However experimental physicists are still searching for the ideal practical implementation. You might compare the situation to that of the pre-silicon days of classical computing. Until this gets sorted I doubt there will be any Moore's law type growth.
Replies from: timtyler↑ comment by timtyler · 2009-08-18T07:55:27.328Z · LW(p) · GW(p)
I looked at:
http://en.wikipedia.org/wiki/Quantum_error_correction
The bit about the threshold theorem looks interesting.
However, I would be more impressed by a working implementation ;-)
comment by timtyler · 2009-08-17T08:10:14.102Z · LW(p) · GW(p)
Scott cites the Doomsday Argument in his "The Singularity Is Far":
http://scottaaronson.com/blog/?p=346
Surely that is a mistake. The Doomsday Argument may suggest that the days of humans like us may be numbered, but doesn't say much more than that - in particular it can't be used to argue against a long and rich future filled with angelic manifestations. So: it is poor evidence against a relatively near era of transcension.
comment by dmfdmf · 2009-08-17T07:51:13.560Z · LW(p) · GW(p)
Am I missing something here? EY and SA were discussing the advance of computer technology, the end of Moore's rule-of-thumb, quantum computing, Big Blue, etc. It seems to me that AI is an epistemological problem, not an issue of more computing power. Getting Big Blue to go down all the possible branches is not really intelligence at all. Don't we need a theory of knowledge first? I'm new here, so this has probably already been discussed, but what about free will? How do AI researchers address that issue?
I'm with SA on the MWI of QM. I think EY is throwing the scientific baby out with the physics bath water. It seems to me that the MWI is committing the mind projection fallacy or the fallacy of the primacy of consciousness. I also agree with whoever said (paraphrased) that all these interpretations of QM just differ on where they hide the contradictions... they are all unsatisfactory and it will take a genius to figure it out.
Replies from: timtyler↑ comment by timtyler · 2009-08-17T07:59:08.295Z · LW(p) · GW(p)
Neither consciousness nor mind are primary in the MWI - so I can't see where you are getting that from.
Replies from: dmfdmf↑ comment by dmfdmf · 2009-08-17T19:44:17.854Z · LW(p) · GW(p)
It's not an explicit form of Primacy of Consciousness like prayer or wishing. It's implicit in QM and its basic premises. One example of an implicit form of PoC is to project properties or aspects of consciousness onto reality, treating them as metaphysical and not epistemological factors. I think the ancient philosophers got hung up on this when debating whether a color like "red" was in the object or the subject. This went round and round for a few hundred years until someone pointed out that it's both (the form/object distinction).
Jaynes covers a similar idea in his book and articles, where he ascribes this error to traditional frequentists who hold probabilities to be a property of things (a metaphysical concept) instead of a measure or property of our lack of knowledge (an epistemological, Bayesian concept). Moreover, committing the PoC error will lead you to supernaturalism eventually, so MWI is just a logical outcome of that error.
Replies from: Douglas_Knight, timtyler↑ comment by Douglas_Knight · 2009-08-18T04:24:57.652Z · LW(p) · GW(p)
One example of an implicit form of PoC is to project properties or aspects of consciousness onto reality and treating them as metaphysical and not epistemological factors.
you mean like collapse?
Replies from: dmfdmf↑ comment by dmfdmf · 2009-08-18T06:04:32.826Z · LW(p) · GW(p)
Could be but I don't know QM well enough to say for sure.
If I understand it correctly, the collapse of the wave function is when the probabilities change at the moment of observation or measurement. So if one holds that the wave collapse is a metaphysical event (and you agree with Jaynes that probabilities are epistemological) then that would be a case of what Jaynes called the mind projection fallacy. Much of the debates in QM regarding wave collapse revolve around exactly this point. Of course, camps have formed on both sides of the dichotomy and I don't think it can be resolved by just asserting that probabilities are epistemological. The error is deeper than that and I suspect QM needs to be derived from bayesian principles but I am not sure that bayesian probability theory is yet up to the task. The situation is very similar to the ancient debates on whether color was a property of the object or in the mind, which makes me think there is an object/subject distinction that is being missed.
Replies from: Eliezer_Yudkowsky, timtyler↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-18T07:39:36.030Z · LW(p) · GW(p)
Just read the Less Wrong sequence on QM. All the answers to your questions may be found there. I consider myself an aspiring disciple of Jaynes, probably as versed as any living human being in the ways of the Mind Projection Fallacy, and MWI is the version of QM which does not have such difficulties.
You've certainly arrived at the correct website to find the answers that you in particular seek, fellow Bayesian and Jaynesian; but you're being voted down because you haven't read the existing material.
↑ comment by timtyler · 2009-08-17T20:46:04.665Z · LW(p) · GW(p)
So: you know all about the mind projection fallacy - but don't seem to be able to find a coherent way to link it to the MWI, even though you seem to want to do that. I don't know what your motives are - and so don't see the point.
Replies from: dmfdmf↑ comment by dmfdmf · 2009-08-17T22:41:02.388Z · LW(p) · GW(p)
Of course my motives are irrelevant here, but for the record I am trying to understand epistemology and its application to myself and, ultimately, to AI. How about you, what are your motives?
Not knowing the exact details of where the PoC flaw is in QM is not a devastating criticism of my point, though your tone seems to suggest that you think it is. Why does the USPTO no longer accept applications for perpetual motion machines? Because they violate the first and/or second laws of thermo; no need to dig further into the details. This is just how principles work, and once a fundamental error is identified then that's it, end of discussion... unless I were a physicist and wanted to dig in and take a crack at resolving the QM quandaries, which I do not. Jaynes left us a pretty large clue that the PoC error probably lies in the misuse of probability theory, as he described. As a non-physicist, that's all (and more) that I need to know.
Replies from: Cyan, JGWeissman, timtyler↑ comment by Cyan · 2009-08-17T22:51:18.103Z · LW(p) · GW(p)
If you can't tell us why Primacy of Consciousness is necessary for MWI, then we have no grounds for doubting MWI on the basis of your argument. It's like saying that X is a perpetual motion machine and therefore impossible, and then when asked in what way is X a perpetual motion machine, replying that it's implicitly a perpetual motion machine and you can't relate the exact details.
Replies from: dmfdmf↑ comment by dmfdmf · 2009-08-18T02:50:14.327Z · LW(p) · GW(p)
The proof is left as an exercise for the reader ;-)
Seriously, I can't explain the whole chain of thought to you. I made my claim that MWI is implicitly PoC and is a rejection of science and amounts to a supernatural theory. I gave examples of the difference between implicit and explicit PoC errors and gave an historical example where philosophy got hung up on an implicit error. I also cited Jaynes' argument on how traditional probability theory (on which QM rests) projects probabilities onto reality when they are in fact epistemological measures. And I gave an example of how to use principles to avoid unnecessary work such as examining every single case of perpetual motion. And finally I explained that I am not a physicist and have no obligation or desire to find the specific error.
However, if you decide to do the proof as an exercise, I will add the following as a hint:
We have this great theory in QM that allows us to make all kinds of calculations and predictions in the microscopic world. It does not integrate with our best theories of the macroscopic world and cosmology -- Special and General Relativity. Moreover, there are various "interpretations" of QM that do not change the calcs but are attempts to bring meaning and understanding to QM. But they all fail in various ways, leaving us with unsatisfactory and contradictory choices between causality -vs- acausality, locality -vs- non-locality, faster than light -vs- c as a limit, one reality -vs- many realities, etc.
Nevertheless, I think these different interpretations of QM should be studied, because such understanding and perspectives will lead someone to finding the error that gives rise to all the contradictions and false alternatives. Finally, I don't need to be a physicist, nor find the specific error, to state with certainty that either QM or GR (or both) are wrong, and that the answer, whatever it turns out to be, will be consistent with both of these theories.
Replies from: Cyan↑ comment by JGWeissman · 2009-08-21T01:55:36.836Z · LW(p) · GW(p)
What principle do you believe that MWI is violating that is analogous to a perpetual motion machine violating conservation of energy?
In the case of the perpetual motion machine, it is easy to see that the described system violates energy conservation, because you can compare the energy in the system at different times. From this global violation, one can deduce that there was a mistake somewhere in the calculations that predicted it for a system that follows the physical laws that imply conservation of energy.
So, what is the global problem with MWI that leads you to believe that it has a PoC flaw?
↑ comment by timtyler · 2009-08-18T07:14:48.385Z · LW(p) · GW(p)
Probably mostly to learn things - though you would have to consult with my shrink for more details. Of course I'm not doing that in this thread - I guess that, here I'm trying to help you out on this issue while showing that I know what I'm talking about. Maybe someday, someone can return the favour - if they see me talking nonsense.
Or maybe it's just a case of:
http://mohel.dk/grafik/andet/Someone_Is_Wrong_On_The_Internet.jpg
Jaynes' criticism doesn't apply to the MWI. The MWI doesn't involve probabilities - it's a deterministic theory:
http://www.hedweb.com/manworld.htm#deterministic
Replies from: dmfdmf