Posts

CRISPR opens up new genetic engineering potential 2013-11-19T00:56:42.202Z · score: 5 (8 votes)
Brain-Brain communication 2011-12-09T17:05:47.957Z · score: 10 (13 votes)
Genetically Engineered Intelligence 2010-12-05T10:19:29.237Z · score: 18 (19 votes)
Print ready version of The Sequences 2010-11-06T01:21:20.328Z · score: 16 (16 votes)

Comments

Comment by jordan on Build Small Skills in the Right Order · 2014-07-11T21:12:35.645Z · score: 0 (0 votes) · LW · GW

Ah, I should have guessed that 'Immersion Learning' had been co-opted a few times before. My above use is my own coinage. By it I just mean jumping in and being exposed to everything you can and letting your brain sort it out, rather than methodically building a cathedral of understanding, one block at a time.

Comment by jordan on Build Small Skills in the Right Order · 2014-07-11T02:56:41.741Z · score: 1 (1 votes) · LW · GW

This is how I prefer to learn as well. I call it "Immersion Learning".

For example, during my first year of Algebra, I carried a Calculus textbook with me to class, and read whenever I was bored. I read through the whole textbook that semester, and understood maybe 20%. I didn't bother doing any problems, and when I tried I was totally incapable, but that was OK. The next semester I read through a Calc II and Calc III textbook. Afterward I decided I was going to take the AP Calculus exam. I bought a prep book and started doing calculus problems for the first time in my life, and found that mastering the techniques came naturally. A few weeks later I passed the AP exam.

I think this works because knowledge (at least as it exists in brains) is not highly structured. It's a giant associative mess. As with learning a language, the best way is to be immersed, and let the entire associative mess emerge simultaneously.

Learn the shape of the forest before the lay of the trees. Afterward you can do targeted study to patch up your makeshift map.

Comment by jordan on White Lies · 2014-02-10T20:25:19.805Z · score: 0 (0 votes) · LW · GW

I do this as well, but I don't "lie" (from the perspective of my core values).

I empathetically accept the other person's ethics and decisions. I allow that common connection to genuinely color my tone and physical expressions, which seems to build rapport just as well as actually verbalizing agreement. When I find myself about to verbalize agreement of something I don't actually believe, I consciously pull back. The trick is being able to pull back without losing your empathetic connection.

Anecdotally, I find that I can verbalize disagreement, and as long as I maintain the tone and physical signals of agreement (or 'acceptance', perhaps, but I think 'agreement' is more true), the other person remains open.

Comment by jordan on CRISPR opens up new genetic engineering potential · 2013-11-19T16:31:45.140Z · score: 0 (0 votes) · LW · GW

Awesome, thanks for the detailed response. After reading about CRISPR's natural role in bacteria I was curious whether it would have targeting limitations. It sounds like it does (it needs a GGG triplet), but that in practice this isn't a big deal.

You still need to get this system into a cell -- that's an issue as always, I agree -- but the reduced chance of unwanted mutation seems like a big step forward over retroviruses.

Thanks again for the great write up!

Comment by jordan on CRISPR opens up new genetic engineering potential · 2013-11-19T08:24:14.731Z · score: 0 (0 votes) · LW · GW

I'm very curious how many genes can be targeted usefully. One paper succeeded in targeting 5 simultaneously in a mouse model. Given the purported accuracy, that is already game changing, but if we can do 100 or 200 then maybe we can do more than merely eliminate some simple single-gene disorders.

Comment by jordan on Nonperson Predicates · 2013-01-21T20:27:52.955Z · score: 3 (3 votes) · LW · GW

Scary enough for ya?

Sufficiently scary, yes.

That is equivalent to saying you can't understand how mathematics could be a construct; or how mathematical anti-realism could possibly be true.

I assign a respectable probability to anti-realism, and hold no disrespect for anyone who is an anti-realist, but I don't understand how anti-realism can be true. I've never heard a plausible model for why one thing should exist but not another. Tegmarkism sweeps away that problem, leaving the new problem of how to measure probability (why do I have the subjective experience of probability that I do when there are so many versions of myself?). I don't have a satisfactory answer to that question, but it feels like a real question, with meat to get at, whereas in an anti-realist universe the question of why some things exist and others don't seems completely hopeless.

Comment by jordan on Firewalling the Optimal from the Rational · 2012-10-09T15:55:26.269Z · score: 10 (10 votes) · LW · GW

I had a similar experience the first time I supplemented magnesium. Long lasting, non-jittery energy spike. I felt stronger (and empirically could in fact lift more weight), felt better, and was extremely happy. The effect decreased the next few times. After 4 doses (of 50% RDA, spread out over 2 weeks) I began to have adverse effects, including heart palpitations, weakness, and a "sense of impending doom".

I wonder if there is a general physiological response to a sudden swing in electrolyte balance that causes the positive effect, rather than the removal of a deficiency.

Comment by jordan on Plastination is maturing and needs funding, says Hanson · 2012-06-23T04:38:58.686Z · score: 1 (1 votes) · LW · GW

If you wipe out the chemical gradient information then how do you know what sorts of ways that the dendrites should regrow in the weeks and months post-resuscitation?

If I wake up and I feel like myself on a second to second basis, I will not be upset if my path through mind space is drastically altered on a time scale of weeks and months, so long as it doesn't lead me to insanity. Hell, I hope I'll be able to drastically change my mind on that time scale anyway once I'm uploaded.

Comment by jordan on Plastination is maturing and needs funding, says Hanson · 2012-06-23T04:30:56.196Z · score: 2 (2 votes) · LW · GW

if a problem doesn't appear quickly, then it probably isn't that important...

I agree completely, especially about how close we probably are to a successful Biosphere, but just to throw out an example where this is wrong: vitamin B-12 deficiency usually takes a decade to demonstrate symptoms, and is fatal.

Comment by jordan on The problem with too many rational memes · 2012-01-19T00:46:18.948Z · score: 1 (1 votes) · LW · GW

It is dangerous in the same way as bringing John Q. Snodgrass to trial for murder. We might overweight evidence in favor of the hypothesis.

Human intuition is a valuable heuristic. As a mathematician I constantly entertain hypotheses I don't believe to be true, for the simple reason that my intuition presented them to be considered. I don't believe I would be at all effective otherwise (although I did just now entertain the hypothesis, despite my lack of belief!)

Comment by jordan on Allen & Wallach on Friendly AI · 2011-12-18T08:03:56.304Z · score: 0 (0 votes) · LW · GW

firstly, a lot of aspects would not necessarily scale up to a smarter system, and it's sometimes hard to tell what generalizes and what doesn't.

I agree, but certainly trying to solve the problem without any hands-on knowledge is more difficult.

Secondly, it's very very hard to pinpoint the "intelligence" of a program without running it

I agree, there is a risk that the first AGI we build will be intelligent enough to skillfully manipulate us. I think the chances are quite small. I find it difficult to imagine skipping dog-level and human-level intelligence and jumping straight to superhuman intelligence, but it is certainly possible.

Comment by jordan on Allen & Wallach on Friendly AI · 2011-12-16T15:42:10.285Z · score: 5 (7 votes) · LW · GW

I agree with Allen and Wallach here. We don't know what an AGI is going to look like. Maybe the idea of a utility maximizer is unfeasible, and the AGIs we are capable of building end up operating in a fundamentally different way (more like a human brain, perhaps). Maybe morality compatible with our own desires can only exist in a fuzzy form at a very high level of abstraction, effectively precluding mathematically precise statements about its behavior (like in a human brain).

These possibilities don't seem trivial to me, and would undermine results from friendliness theory. Why not instead develop a sub-superintelligent AI first (perhaps an intelligence intentionally less than human), so that we can observe directly what the system looks like before we attempt to redesign it for greater safety?

Comment by jordan on Brain-Brain communication · 2011-12-13T02:30:08.269Z · score: 1 (1 votes) · LW · GW

Very interesting. It appears my own model of the brain included a false dichotomy.

If modules are not genetically hardwired, but rather develop as they adapt to specific stimuli, then we should expect infants to have more homogeneous brains. Is that the case?

Comment by jordan on The Gift We Give Tomorrow, Spoken Word [Finished?] · 2011-12-03T19:54:12.244Z · score: 2 (2 votes) · LW · GW

When I read it I was imagining something tongue-in-cheek like Pirates of Penzance. Dr. Seuss would have the advantage of great illustrations, though.

Comment by jordan on The Gift We Give Tomorrow, Spoken Word [Finished?] · 2011-12-03T05:25:39.707Z · score: 9 (9 votes) · LW · GW

I request a full play, sir.

Comment by jordan on OPERA Confirms: Neutrinos Travel Faster Than Light · 2011-11-19T19:29:40.816Z · score: 0 (0 votes) · LW · GW

Very good sir!

Comment by jordan on OPERA Confirms: Neutrinos Travel Faster Than Light · 2011-11-19T00:03:21.934Z · score: 2 (2 votes) · LW · GW

I'm aware that we've calculated 'c' both by directly measuring the speed of light (to high precision) and indirectly via various formulas from relativity (we've directly measured time dilation, for instance, which lets you estimate c), but are the indirect measurements really accurate to parts per million?

Comment by jordan on OPERA Confirms: Neutrinos Travel Faster Than Light · 2011-11-18T16:16:18.470Z · score: 7 (9 votes) · LW · GW

If everywhere in physics where we say "the speed of light" we instead say "the cosmic speed limit", and from this experiment we determine that the cosmic speed limit is slightly higher than the speed of light, does that really change physics all that much?

Comment by jordan on Whole Brain Emulation: Looking At Progress On C. elgans · 2011-10-30T18:54:16.186Z · score: 3 (5 votes) · LW · GW

I was disappointed when I first looked into the C. elegans emulation progress. Now I'm not so sure it's a bad sign. It seems to me that at only 302 neurons the nervous system is probably far from the dominant system of the organism. Even with a perfect emulation of the neurons, it's not clear to me if the resulting model would be meaningful in any way. You would need to model the whole organism, and that seems very hard.

Contrast that with a mammal, where the brain is sophisticated enough to do things independently of feedback from the body, and where we can see these large-scale neural patterns with scanners. If we uploaded a mouse brain, presumably we could get a rough idea that the emulation was working without ever hooking it up to a virtual body.

Comment by jordan on Rationality Drugs · 2011-10-01T20:54:33.161Z · score: 7 (7 votes) · LW · GW

(This is exacerbated by the fact that when I'm sleep-deprived, I tend to feel lousy and wanting to doze off through the day, but then in the evening I suddenly start feeling perfectly OK and not wanting to sleep at all.)

I suffer from this as well. It is my totally unsubstantiated theory that this is a stress response. Throughout the whole day your body is tired and telling you to go to sleep, but the Conscious High Command keeps pressing the KEEP-GOING-NO-MATTER-WHAT button until your body decides it must be in a war zone and kicks in with cortisol or adrenaline or whatever.

Comment by jordan on Knowledge is Worth Paying For · 2011-09-21T19:18:03.474Z · score: 2 (2 votes) · LW · GW

Hear, hear. I encourage everyone to buddy up with an academic and use that academic's library's access to journals.

Comment by jordan on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-21T04:44:28.938Z · score: 2 (2 votes) · LW · GW

I propose that the rational act is to investigate approaches to greater than human intelligence which would succeed.

This. I'm flabbergasted this isn't pursued further.

Comment by jordan on Rationality Quotes August 2011 · 2011-08-05T22:33:18.353Z · score: 1 (1 votes) · LW · GW

Definitely works better than any supplement or herbal remedy I've tried, but I usually don't feel rested the next day.

Comment by jordan on Rationality Quotes August 2011 · 2011-08-05T20:51:06.149Z · score: 1 (1 votes) · LW · GW

Fully agree, especially because I suffer from chronic insomnia =D

Comment by jordan on Rationality Quotes August 2011 · 2011-08-05T20:05:37.439Z · score: 0 (0 votes) · LW · GW

All i'm saying is that people attribute evolutionary reasons to things that have many separate causes and are unproven because they think they understand it.

I agree, however, reverse stupidity is not intelligence. You say

the behavioral pattern that polyphasic sleep requires isn't evolved into our system its just a natural response to the natural light patterns of our world

but this seems like an unsubstantiated claim, just as much as people claiming sleep must be an evolved behavior. I agree that sleep is at least partially behavioral, but it's unclear to me that there isn't an evolved component. See this blurb from Wikipedia, which suggests that human sleep patterns are not completely dependent on external stimuli.

Comment by jordan on Rationality Quotes August 2011 · 2011-08-05T01:01:44.296Z · score: 3 (3 votes) · LW · GW

I would discount polyphasic sleep as being natural on grounds of my current knowledge of anthropology. As far as I know there are no known human cultures that engage in polyphasic sleep (not counting biphasic sleep). That seems like pretty strong evidence that it isn't behavioral, it's physiological, which in turn suggests (but doesn't guarantee) an evolutionary basis for human sleep patterns. Of course, some amount of human sleep patterns is behavioral, e.g. the siesta.

Comment by jordan on The limits of introspection · 2011-07-16T23:50:24.166Z · score: 3 (5 votes) · LW · GW

Great post, great review of the literature.

Where do you get most of your references? Do you wade through the literature, or do you use review papers? I'd love to see a book length compilation with the same density as this post.

Comment by jordan on Preference For (Many) Future Worlds · 2011-07-16T18:59:40.714Z · score: 0 (2 votes) · LW · GW

you had better start imagining pretty hard and consider every possible unexpected event of that order of improbability, including black swans

With QS you must guard yourself against all local Everett branches. Those branches could conceivably contain black swans, like a few electrons tunneling out of a circuit preventing a CPU from performing correctly. Even that is a 1:1,000,000,000 or more event. But they will not contain something macroscopic.

If I look around and notice no one nearby, I might say "I am only 99% confident that there isn't anyone near." If I then sample all local branches (with a device that has a 1:1,000,000 fail rate), killing myself in those branches in which no one appears, what is the probability that I will find myself in a branch with another person nearby? I would say about 1%. The presence or absence of another person should behave classically for the small numbers we are talking about. Quantum probabilities are different from my own Bayesian probabilities.
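To make the contrast concrete, here is a toy Bayes calculation using the comment's own illustrative numbers (1% credence that someone is nearby, a 1:1,000,000 device fail rate — both assumptions from the example, not real figures). It shows what post-selection would imply if presence were quantum-random, versus the classical-fact view the comment takes:

```python
# Toy numbers from the example above (assumptions for illustration only):
p_person = 0.01        # Bayesian credence that someone is actually nearby
p_device_fail = 1e-6   # chance the branch-sampling device fails to fire

# If "someone nearby" were a genuine quantum coin flipped in each branch,
# post-selecting on survival would almost guarantee company:
p_survive = p_person + (1 - p_person) * p_device_fail
p_person_given_survive = p_person / p_survive
print(p_person_given_survive)  # ~0.9999

# But the presence of a person is classically fixed across local branches:
# either (nearly) every branch contains one, or (nearly) none do. Sampling
# branches cannot change that fact, so the subjective probability of waking
# next to someone remains near the Bayesian prior of ~1%.
```

The gap between the ~0.9999 post-selection figure and the ~1% classical answer is exactly the "quantum probabilities are different from my own Bayesian probabilities" point.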

In short, while some failure modes will become more common, others will not.

Comment by jordan on Preference For (Many) Future Worlds · 2011-07-16T02:35:25.404Z · score: 3 (5 votes) · LW · GW

If I were to build a death machine it would be based on high explosives. I would encase my head in a mound of C4 clay (or perhaps a less stable material). The machine could fail, most likely at the detonator, but it's difficult to imagine how it could maim me.

Comment by jordan on Rationalist Judo, or Using the Availability Heuristic to Win · 2011-07-15T17:29:07.618Z · score: 4 (4 votes) · LW · GW

It's difficult for my brain to parse a sentence with 'alieve'. I guess I've watched too many commercials, and my brain associates 'Aleve' with 'relieve', which has an approximately opposite meaning. I have to mentally substitute 'alieve' with something like 'actually believe' in order to comfortably read the sentence.

Comment by jordan on So, I guess the site redesign is live? · 2011-06-22T05:01:34.283Z · score: 9 (9 votes) · LW · GW

The borders on comments are fairly ugly, and far too thick. When I go to view all my comments, the way they are listed there is much more aesthetically pleasing.

I like the new header. The footer is a great improvement.

Mixed feelings about the thumbs up/down icons. I like icons, and they are smaller than the text "Vote up" and "Vote down", but they actually end up taking more space than the text, because their vertical height is greater. Perhaps they can be shrunk a bit and placed in the title line of the comment, along with the permalink and reply icon? You could potentially hide all the icons unless you're mousing over the comment, to avoid clutter.

Comment by jordan on Bias in capital project decision making · 2011-05-27T02:28:13.329Z · score: 2 (4 votes) · LW · GW

I think an unforeseeable edge case or bug that requires deep refactoring and severely cuts into allotted development time fits the bill for a black swan dead on.

Comment by jordan on Beginning resources for CEV research · 2011-05-16T06:11:07.931Z · score: 0 (0 votes) · LW · GW

What if the subreddit was an actual reddit subreddit?

Comment by jordan on Scholarship: How to Do It Efficiently · 2011-05-10T06:48:36.977Z · score: 6 (6 votes) · LW · GW

That is the heart of the social engineering problem at hand.

Programmers gain status by creating and contributing to open source projects, and by answering questions on StackOverflow, etc. I think that is a stable equilibrium, both for programmers and for academics. The question is how to get to that equilibrium in the first place.

First, I think it needs to become generally accepted that the current equilibrium is broken and that there are alternatives. To that end I encourage all academics to discuss it as openly as possible. Once that happens I think (hope) it will just be a matter of high status individuals throwing their weight around properly.

Comment by jordan on Scholarship: How to Do It Efficiently · 2011-05-10T05:47:03.563Z · score: 3 (3 votes) · LW · GW

Can you comment on what the end goal is for all your scholarship, aside from satisfaction?

Comment by jordan on Scholarship: How to Do It Efficiently · 2011-05-10T05:05:01.678Z · score: 15 (21 votes) · LW · GW

I lament this state of affairs with the subdued passion of a 1000 brown dwarf suns.

It's ridiculous that Wikipedia is more structured and useful than most of the academic literature. I would like to start some kind of academic movement, whereby we reject closed journals, embrace the open source mentality, and collaborate on up-to-date and awesome wikis covering every modern research area.

Comment by jordan on SIAI - An Examination · 2011-05-07T21:26:14.127Z · score: 2 (2 votes) · LW · GW

You're right, the guideline is not too well worded. You should probably replace "what you wouldn't eat raw" with "what would be toxic to eat raw".

Meat is edible raw. There's nothing inherently toxic about uncooked meat. Many other foods require cooking to diminish their toxicity (potatoes, grains, legumes). There's definitely concern about parasites in raw meat, but parasites are not an inherent quality of the meat itself.

There's actually a whole raw paleo sub-subculture. I wouldn't recommend it personally, and I'm not keen to try it myself, but it's there.

Comment by jordan on SIAI - An Examination · 2011-05-07T19:31:53.333Z · score: 4 (4 votes) · LW · GW

I think it's likely humans are evolved to eat cooked food. The guideline 'don't eat anything you wouldn't eat raw' isn't intended to dissuade people from eating cooked food, but rather to serve as a heuristic for foods that were probably less commonly eaten by our ancestors. It's unclear to me how accurate the heuristic is. A big counterexample is tubers. Tubers are widely eaten by modern hunter-gatherers and are toxic when uncooked.

Comment by jordan on SIAI - An Examination · 2011-05-07T08:36:27.176Z · score: 4 (4 votes) · LW · GW

There isn't really a rigorous definition of the diet. One guideline some people use is that you shouldn't eat anything you wouldn't eat raw, which excludes beans. Coffee beans aren't actually beans though. I wouldn't be surprised if some people consider coffee not paleo, but there are big names in the paleo scene that drink coffee (Kurt Harris, Art de Vany).

Really, I would say paleo is more a philosophy for how to go about homing in on a diet, rather than a particular diet in and of itself. There are hard lines, like chocolate muffins. I don't think coffee is close to that line, though.

Comment by jordan on SIAI - An Examination · 2011-05-07T08:24:00.314Z · score: 6 (6 votes) · LW · GW

Fluid dynamics. Considering jumping over to computational neuroscience.

I've put some serious thought into a paleo coffee shop. It's definitely on my list of potential extra-academic endeavors if I end up leaving my ivory tower.

Comment by jordan on Experiment Idea Thread - Spring 2011 · 2011-05-07T01:36:16.212Z · score: 1 (1 votes) · LW · GW

I should have done some more due diligence before suggesting my idea:

http://www.cs.ucla.edu/~sblee/Papers/mobicom09-wnoc.pdf

Edit: I was originally concerned about bandwidth, but the above article claims

On-chip wireless channel capacity. Because of such low signal loss over on-chip wireless channels and new techniques in generating terahertz signals on-chip [14,31], the on-chip wireless network becomes feasible. In addition, it is possible to switch a CMOS transistor as fast as 500 GHz at 32 nm CMOS [21], thus allowing us to implement a large number of high frequency bands for the onchip wireless network. Following a rule of thumb in RF design, the maximum available bandwidth is 10% of the carrier frequency. For example, with a carrier frequency of 300 GHz, the data rate of each channel can be as large as 30 Gbps. Using a 32 nm CMOS process, there will be total of 16 available channels, from 100 GHz to 500 GHz, for the on-chip wireless network, and each channel can transmit at 10 to 20 Gbps. In the 1000-core CMPs design, the total aggregate data rate can be as high as 320 Gbps with 16 TX’s and 64 RX’s.
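As a quick sanity check of the quoted figures (my own back-of-the-envelope sketch, assuming roughly 1 bit/s per Hz of channel bandwidth, which is what the paper's numbers imply), the rule of thumb and the aggregate rate are internally consistent:

```python
GHz = 1e9

def channel_bandwidth_hz(carrier_hz, fraction=0.10):
    """RF rule of thumb quoted above: usable bandwidth ~= 10% of the carrier."""
    return fraction * carrier_hz

# 300 GHz carrier -> 30 GHz of bandwidth -> ~30 Gbps at ~1 bit/s per Hz
bw = channel_bandwidth_hz(300 * GHz)
print(bw / GHz)  # 30.0

# 16 channels at 20 Gbps each gives the quoted 320 Gbps aggregate
print(16 * 20)   # 320
```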

Comment by jordan on SIAI - An Examination · 2011-05-06T22:43:18.765Z · score: 11 (11 votes) · LW · GW

Luckily a juicy porterhouse steak is a nice stand-in for a triple chocolate muffin. Unfortunately they don't tend to sell them at coffee shops.

Perhaps I'll end my career as a mathematician to start a paleo coffee shop.

Comment by jordan on Experiment Idea Thread - Spring 2011 · 2011-05-06T21:17:19.200Z · score: 0 (2 votes) · LW · GW

Field: Electrical Engineering. No idea how practical this is though:

An important problem with increasing the number of cores on a chip is having enough bandwidth between the cores. Some people are working on in-silicon optical channels, which seems promising. Instead of this, would it be possible for the different cores to communicate with each other wirelessly? This requires integrated transmitters and receivers, but I believe both exist.

Comment by jordan on The Cognitive Costs to Doing Things · 2011-05-03T18:48:02.651Z · score: 6 (6 votes) · LW · GW

This is a great list. I think it's too easy to focus on will power alone.

I've been training myself for years to be able to work longer hours. I've built up to the point where I can work for 12-16 hours straight, every day. Unfortunately I'm only now realizing the extent of the other costs. During weeks or months when I'm working hard, I have begun to notice many things:

  • I find it much harder to remain completely calm and respectful while interacting with people close to me. (I used to pride myself on my levelheadedness in interpersonal relationships)
  • I find myself physically tense. I have to consciously relax muscles, especially facial muscles. (I used to pride myself on being very relaxed, physically and emotionally)
  • I'm much more neurotic, and frequently lose sleep analyzing whatever problem I'm working on.
  • Thoughts of opportunity cost are ever prevalent, and wear me out emotionally.

I've been trying to develop a personal philosophy, in contrast to Eliezer's Extraordinary Effort idea, that stresses not having an emotional stake in what I work on, especially if I work on it 12 hours a day (I call it Directed Apathy). I've had some mild success, but in the end it may be that I just need to work less in order to stay sane.

Comment by jordan on The Cognitive Costs to Doing Things · 2011-05-03T18:27:56.720Z · score: 0 (0 votes) · LW · GW

Sounds like something that could be useful for rationality boot camp.

I'd love to do some rejection therapy. There might need to be some caution in applying it in a group setting though. I know for me it would be much easier (and hence much less useful) to do things like asking for a discount if there is a social group behind me to back me up (even if they are out of sight).

Comment by jordan on Essay-Question Poll: Dietary Choices · 2011-04-27T18:18:05.057Z · score: 0 (0 votes) · LW · GW

You're getting into dangerous philosophical territory here, which is not at all easy to resolve. If there are two animals with very similar brain states are they distinct animals? If not, have we doubled the subjective chance of an animal experiencing the state of the doubled animal? These aren't straightforward questions at all. See the Anthropic Trilemma.

I'm not sure how anyone could argue that bringing more suffering animals into the world is good. I support humane treatment of livestock, which I think makes for a net positive regardless of how the Anthropic Trilemma pans out:

If it turns out that most animals are so similar as to not count for distinct entities, but subjective probabilities still exist, so that increasing the percentage of animals in one state increases the chances of experiencing that state for an animal, then it is a good thing to raise lots of animals in a humane fashion.

If it turns out that animals aren't distinct and subjective probabilities can't be affected, then it seems the entire moral quandary disappears. The subjective experience of animals is forever fixed, regardless of our actions, so even factory farming wouldn't be unethical (although I would still support humane treatment of animals because I believe it makes for a healthier meal for me).

If it turns out that every animal is a unique entity, then the moral question must come down to individual cases. Should I bring this potential animal into existence? In this case I believe a close proxy for this question is: if this animal already exists, is it worse for it to have never existed? In the case of a humanely raised animal, I believe the answer is 'yes' to both of these questions.

Comment by jordan on Heading Toward: No-Nonsense Metaethics · 2011-04-24T22:14:01.407Z · score: 5 (5 votes) · LW · GW

This is a more reasonable and measured reply. Negative comments are great, so long as they have substance.

Comment by jordan on Philosophy: A Diseased Discipline · 2011-03-29T00:50:40.600Z · score: 13 (13 votes) · LW · GW

You could probably find other philosophers to help out. The end result, if supported properly by Eliezer, could be very helpful to SIAI's cause.

If SIAI donations could be earmarked for this purpose I would double my monthly contribution.

Comment by jordan on On Being Decoherent · 2011-03-20T19:58:49.918Z · score: 0 (0 votes) · LW · GW

I think that's just a common misunderstanding most people have of MWI, unfortunately. Visualizing a giant decohering phase space is much harder than imagining parallel universes splitting off. I'm fairly certain that Eliezer's presentation of MWI is the standard one though (excepting his discussion of timeless physics perhaps).

Comment by jordan on On Being Decoherent · 2011-03-20T19:25:55.989Z · score: 1 (1 votes) · LW · GW

Upvoted, although my understanding is that there is no difference between Eliezer's MWI and canonical MWI as originally presented by Everett. Am I mistaken?