Posts

TAI Safety Bibliographic Database 2020-12-22T17:42:05.595Z
Review of "Lifecycle Investing" 2020-04-12T03:12:58.874Z
Welcome to SSC-LW Toronto [Edit With Your Details] 2018-05-06T20:17:22.917Z

Comments

Comment by JessRiedel on Natural Abstractions: Key claims, Theorems, and Critiques · 2024-02-29T15:01:14.675Z · LW · GW

(Self-promotion warning.) Alexander Gietelink Oldenziel pointed me toward this post after hearing me describe my physics research and noticing some potential similarities, especially with the Redundant Information Hypothesis.  If you'll forgive me, I'd like to point to a few ideas in my field (many not associated with me!) that might be useful. Sorry in advance if these connections end up being too tenuous.

In short, I work on mathematically formalizing the intuitive idea of wavefunction branches, and a big part of my approach is based on finding variables that are special because they are redundantly recorded in many spatially disjoint systems.  The redundancy aspects are inspired by some of the work done by Wojciech Zurek (my advisor) and collaborators on quantum Darwinism. (Don't read too much into the name; it's all about redundancy, not mutation.) Although I personally have concentrated on using redundancy to identify quantum variables that behave classically without necessarily being of interest to cognitive systems, the importance of redundancy for intuitively establishing "objectivity" among intelligent beings is a big motivation for Zurek.

Building on work by Brandao et al., Xiao-Liang Qi & Dan Ranard made use of the idea of "quantum Markov blankets" in formalizing certain aspects of quantum Darwinism. I think these are playing a very similar role to the (classical) Markov blankets discussed above.

In the section "Definitions depend on choice of variables" of the current post, the authors argue that Wentworth's construction depends on a choice of variables, and that without a preferred choice it's not clear that the ideas are robust.  So it's maybe worth noting that a similar issue arises in the definition of wavefunction branches.  The approach several researchers (including me) have been taking is to ground the preferred variables in spatial locality, which is about as fundamental a constraint as you can get in physics.  More specifically, the idea is that the wavefunction branch decomposition should be invariant under arbitrary local operations ("unitaries") on each patch of space, but not invariant under operations that mix up different spatial regions.
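In symbols (my paraphrase, not a quote from that literature): writing the wavefunction as a sum of branches, the requirement is roughly

```latex
|\psi\rangle = \sum_i |\psi_i\rangle, \qquad
U = \bigotimes_x U_x \;\Rightarrow\;
U|\psi\rangle = \sum_i \big(U|\psi_i\rangle\big)
\text{ is again the branch decomposition,}
```

where the $U_x$ act on disjoint spatial patches $x$; a generic unitary $V$ entangling different patches is not required to map branches to branches.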

Another basic physics idea that might be relevant is hydrodynamic variables and the relevant transport phenomena. Indeed, Wentworth brings up several special cases (e.g., temperature, center-of-mass momentum, pressure), and he correctly notes that their important role can be traced back to their local conservation (in time, not just under re-sampling).  However, while very-non-exhaustively browsing through his other posts on LW it seemed as if he didn't bring up what is often considered their most important practical feature: predictability.  Basically, the idea is this: Out of the set of all possible variables one might use to describe a system, most of them cannot be used on their own to reliably predict forward time evolution because they depend on the many other variables in a non-Markovian way. But hydro variables have closed equations of motion, which can be deterministic or stochastic but at the least are Markovian. Furthermore, the rest of the variables in the system (i.e., all the chaotic microscopic degrees of freedom) are usually "as random as possible" -- and therefore unnecessary to simulate -- in the sense that it's infeasible to distinguish them from being in equilibrium (subject, of course, to the constraints implied by the values of the conserved quantities). This formalism is very broad, extending well beyond fluid dynamics despite the name "hydro".
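As a crude toy (mine, not from the hydro literature): a locally conserved quantity obeys a closed, predictable law even while the microscopic state scrambles.

```python
import random

random.seed(0)
v = [random.gauss(0, 1) for _ in range(1000)]  # microscopic velocities

def collide(vel):
    # Random pairwise "collision": exchanges momentum between two particles
    # while conserving the total (a stand-in for microscopic chaos).
    i, j = random.sample(range(len(vel)), 2)
    dp = random.uniform(0, 1) * (vel[i] - vel[j])
    vel[i] -= dp
    vel[j] += dp

p_total = sum(v)
for _ in range(10_000):
    collide(v)

# The microstate is thoroughly scrambled, but the hydro variable (total
# momentum) followed a closed law -- here, trivially, a constant.
print(abs(sum(v) - p_total) < 1e-8)  # True
```

Predicting any individual velocity after all those collisions is hopeless; predicting the conserved sum is trivial. That asymmetry is the "predictability" point.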

Comment by JessRiedel on Natural Latents: The Math · 2024-02-27T05:52:11.556Z · LW · GW

> Further, assume that  mediates between  and  (third diagram below).

I can't tell if X is supposed to be another variable, distinct from X_1 and X_2, or if it's supposed to be X=(X_1,X_2), or what. EDIT: From reading further it looks like X=(X_1,X_2). This should be clarified where the variables are first introduced. Just to make it clear that this is not obvious even within the field of Bayes nets: I open up Pearl's "Causality" to page 17 and see "In Figure 1.2, X={X_2} and Y={X_3} are d-separated by Z={X_1}", i.e., X is not assumed to be a vector (X_1, X_2, ...).  And obviously there is more variability in other fields.

Comment by JessRiedel on Making every researcher seek grants is a broken model · 2024-01-29T22:14:54.989Z · LW · GW

Other examples:

  • “Career politician” is something of a slur. It seems widely accepted (though maybe you dispute?) that folks who specialize in politics certainly become better at winning politics (“more effective”) but that also this selects for politicians who are less honest or otherwise not well aligned with their constituents.

  • Tech startups still led by their technical CEO are somehow better than those where they have been replaced with a “career CEO”. Obviously there are selection effects, but the career CEOs are generally believed to be more short-term- and power-focused.

People have tried to fix these problems by putting constraints on managers (either through norms/stigmas about “non-technical” managers or explicit requirements that managers must, e.g., have a PhD). And probably these have helped some (although they tend to get Goodharted, e.g., people who get MDs in order to run medical companies without any desire to practice medicine). And certainly there are times when technical people are bad managers and do more damage than their knowledge can possibly make up for.

But like, this tension between technical knowledge and specializing in management (or grant evaluation) seems like the crux of the issue that must be addressed head-on in any theorizing about the problem.

Comment by JessRiedel on Introduction to Cartesian Frames · 2024-01-29T05:34:02.940Z · LW · GW

> Note that I'm specifically not referring to the elements of  as "actions" or "outputs"; rather, the elements of  are possible ways the agent can choose to be.

I don't know what distinction is being drawn here.  You probably need an example to illustrate.

Comment by JessRiedel on Making every researcher seek grants is a broken model · 2024-01-29T02:01:18.997Z · LW · GW

Once you eliminate the requirement that the manager be a practicing scientist, the roles will become filled with people who like managing, and are good at politics, rather than doing science. I’m surprised this is controversial. There is a reason the chair of academic departments is almost always a rotating prof in the department, rather than a permanent administrator. (Note: “was once a professor” is not considered sufficient to prevent this. Rather, profs understand that serving as chair for a couple years before rotating back into research is an unpleasant but necessary duty.)

We see this with doctors too. As the US medical system consolidates, and private practices are squeezed to a tinier and tinier fraction of docs, slowly but surely all the docs become employees of hospitals and the people in charge are MBA-types. Some of them have MDs, and once practiced medicine, but they specialize in management and they don’t come back.

You can of course argue that the downside is worth the benefits. But the existence and size of the downside are pretty clear from history, and need to be addressed in such a system.

Comment by JessRiedel on Making every researcher seek grants is a broken model · 2024-01-28T17:19:29.311Z · LW · GW

Letting people specialize as “science managers” sounds in practice like transferring the reins from scientists to MBAs, as was much maligned at Boeing. Similarly, having grants distributed by people who aren’t practicing scientists sounds like a great way to avoid professional financial retaliation and replace it with politicians setting the direction of funding.

Comment by JessRiedel on President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence · 2023-10-31T21:29:23.536Z · LW · GW

UK’s proposal for a joint safety institute seems maybe more notable:

> Sunak will use the second day of Britain's upcoming two-day AI summit to gather “like-minded countries” and executives from the leading AI companies to set out a roadmap for an AI Safety Institute, according to five people familiar with the government’s plans.
>
> The body would assist governments in evaluating national security risks associated with frontier models, which are the most advanced forms of the technology.
>
> The idea is that the institute could emerge from what is now the United Kingdom’s government’s Frontier AI Taskforce, which is currently in talks with major AI companies Anthropic, DeepMind and OpenAI to gain access to their models. An Anthropic spokesperson said the company is still working out the details of access, but that it is “in discussions about providing API access.”

https://www.politico.eu/article/uk-pitch-ai-safety-institute-rishi-sunak/

Comment by JessRiedel on Brief Notes on Transformers · 2023-10-10T23:13:24.780Z · LW · GW

> The softmax acts on the whole matrix

Isn't the softmax applied vector-wise, thereby breaking the apparent transpose symmetry? 
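A quick NumPy check (my sketch, with a stand-in score matrix) that row-wise softmax does not commute with transposition:

```python
import numpy as np

def softmax_rows(a):
    # Softmax applied independently along each row, as in attention.
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # stand-in attention scores

lhs = softmax_rows(A).T
rhs = softmax_rows(A.T)
print(np.allclose(lhs, rhs))  # False: row-wise softmax breaks the symmetry
```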

Comment by JessRiedel on Human values & biases are inaccessible to the genome · 2023-09-18T18:00:01.543Z · LW · GW

Strictly speaking, the plot could be 100% noise without error bars, sample size, or similar info. So maybe worth including that.

Comment by JessRiedel on Review of "Lifecycle Investing" · 2023-08-29T14:47:11.346Z · LW · GW

No. All the forms of leverage advocated in the book (e.g., call options and buying stocks on margin) at worst take your portfolio to zero if there is a huge market downturn. The book of course advocates keeping a safe rainy-day fund for basic expenses, like everyone else. So you don’t ever require a bailout. The idea is that having your retirement fund go to zero in your early twenties is hardly catastrophic, and the older you get the less leveraged you should be.
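To illustrate the bounded downside (toy numbers mine; this is the call-option form of leverage, ignoring the premium already paid):

```python
def call_payoff(spot, strike):
    # A long call can expire worthless, but never worse than worthless:
    # the holder is never obligated to buy, so the payoff floors at zero.
    return max(spot - strike, 0.0)

# Even in a crash the position's value bottoms out at zero -- you owe nothing.
print(call_payoff(spot=40.0, strike=100.0))   # 0.0
print(call_payoff(spot=150.0, strike=100.0))  # 50.0
```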

Comment by JessRiedel on Feynman Paths · 2022-09-29T03:34:31.307Z · LW · GW

You're drawing a philosophical distinction based on a particular ontology of the wavefunction.  A simpler version arises in classical electromagnetism: we can integrate out the charges and describe the world entirely as an evolving state of the E&M field with the charges acting as weird source terms, or we can do the opposite and integrate out the E&M field to get a theory of charges moving with weird force laws.  These are all equivalent descriptions in that they are observationally indistinguishable.

Comment by JessRiedel on The Best Software For Every Need · 2022-08-05T20:08:52.726Z · LW · GW

Does excalidraw have an advantage over a slides editor like PowerPoint or Keynote?

Comment by JessRiedel on The Best Software For Every Need · 2022-08-05T20:06:03.238Z · LW · GW

Let me also endorse the usefulness of AlternativeTo.net .  Highly recommended.

Comment by JessRiedel on The Case for Extreme Vaccine Effectiveness · 2021-04-19T03:45:22.713Z · LW · GW

You've given some toy numbers as a demonstration that the claim needn't necessarily be undermined, but the question is whether it's undermined by the actual numbers.

Comment by JessRiedel on The Case for Extreme Vaccine Effectiveness · 2021-04-13T23:16:25.289Z · LW · GW

> Of course, the outcomes we’re interested in are hospitalization, severe Covid, and death. I’d expect the false positives on these to be lower than for having Covid at all, but across tens of thousands of people (the Israel study did still have thousands even in later periods), it’s not crazy that some people would be very ill with pneumonia and also get a false positive on Covid.

Does this observation undermine the claim of a general trend in effectiveness with increasing severity of disease? That is, if false positives bias the measured effectiveness downward, and if false positives are more frequent with less severe disease, then the upward trend is less robust and our use of it to extrapolate into places where the error bars are naively large is less convincing.
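A toy calculation (numbers entirely mine, not from the Israel study) of how the same false-positive count drags measured effectiveness down much more where true case counts are small:

```python
def measured_ve(true_vax, true_unvax, false_pos):
    # Measured effectiveness when both (equal-size) arms pick up the same
    # number of false positives on top of their true case counts.
    return 1 - (true_vax + false_pos) / (true_unvax + false_pos)

# Hypothetical counts per arm for a vaccine that is truly 95% effective
# against both mild and severe disease:
print(measured_ve(true_vax=50, true_unvax=1000, false_pos=10))  # ~0.94 (mild)
print(measured_ve(true_vax=1,  true_unvax=20,   false_pos=10))  # ~0.63 (severe)
```

Same true effectiveness, but the rarer (severe) endpoint looks far worse, which is the worry about extrapolating the severity trend.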

Comment by JessRiedel on TAI Safety Bibliographic Database · 2020-12-23T01:42:23.123Z · LW · GW

The automated tools on Zotero are good enough now that getting the complete bibtex information doesn't really make it much easier.  I can convert a DOI or arXiv number into a complete listing with one click, and I can do the same with a paper title in 2-3 clicks.  The laborious part is (1) interacting with each author and (2) classifying/categorizing the paper.

Comment by JessRiedel on TAI Safety Bibliographic Database · 2020-12-23T00:56:25.312Z · LW · GW

Looks fine, thanks.

Comment by JessRiedel on TAI Safety Bibliographic Database · 2020-12-23T00:54:15.559Z · LW · GW

Does the org have an official stance?  I've seen people write it both ways.  Happy to defer to you on this, so I've edited.

Comment by JessRiedel on TAI Safety Bibliographic Database · 2020-12-22T20:55:10.497Z · LW · GW

If we decide to expand the database in 2021 to attempt comprehensive coverage of blog posts, then a machine-readable citation system would be extremely helpful.  However, to do that we would need to decide on some method for sorting/filtering the posts, which is going to depend on what the community finds most interesting.  E.g., do we want to compare blog posts to journal articles, or should the analyses remain mostly separate?  Are we going to crowd-source the filtering by category and organization, or use some sort of automated guessing based on authorship tags on the post? How expansive should the database be regarding topic?

Currently of the 358 web items in our database, almost half (161) are blog posts from AI Alignment Forum (106), LessWrong (38), or Effective Altruism Forum (17).  (I emphasize that, as mentioned in the post, our inclusion procedure for web content was pretty random.) Since these don't collect citations on GoogleScholar, some sort of data on them (# comments and upvotes) would be very useful to surface the most notable posts.

Comment by JessRiedel on Search versus design · 2020-08-22T19:15:44.002Z · LW · GW

Somewhat contra Alex's example of a tree, I am struck by the comprehensibility of biological organisms. If, before I knew any biology, you had told me only that (1) animals are mechanistic, (2) are in fact composed of trillions of microscopic machines, and (3) were the result of a search process like evolution, then the first time I looked at the inside of an animal I think I would have expected absolutely *nothing* that could be macroscopically understood. I would have expected a crazy mesh of magic material that operated at a level way outside my ability to understand without (impossibly) constructing a mental model of the entire thing from the bottom up. And indeed, if animals had been designed through a one-shot unstructured search I think this is what they would be like.

In reality, of course, animals have macroscopic parts that can be partially understood. There's a tube food passes through, with multiple food-processing organs attached. There are bones for structure, muscles to pull, and tendons to transmit that force. And the main computation for directing the animal on large scales takes place in a central location (the brain).

We can tell and understand a post-hoc story about why the animal works, as a machine, and it's sorta right. That animals have a strong amount of design, making this possible, seems to be related to the iterated search-and-evaluate process that evolution used; it was not one-shot unstructured search.

At the least, this suggests that if search vs. design identifies a good dichotomy or a good axis, it is an axis/dichotomy that is fundamental, and arises way before human-level intelligence.

Comment by JessRiedel on Review of "Lifecycle Investing" · 2020-04-12T15:50:40.440Z · LW · GW

Agreed. The optimal amount of leverage is of course going to be very dependent on one's model and assumptions, but the fact that a young investor with 100% equities does better *on the margin* by adding a bit of leverage is very robust.
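One way to see the marginal claim is the standard expected-log-growth approximation (parameter values are illustrative assumptions of mine, not the book's):

```python
def log_growth(leverage, equity_premium=0.05, sigma=0.16, riskfree=0.01):
    # Expected log growth of a continuously rebalanced leveraged portfolio:
    # expected return minus half the variance drag.
    mean_return = riskfree + leverage * equity_premium
    variance_drag = 0.5 * (leverage * sigma) ** 2
    return mean_return - variance_drag

print(log_growth(1.0))  # 100% equities
print(log_growth(1.5))  # higher: a bit of leverage helps on the margin
```

Under any parameters with equity_premium > sigma**2, the derivative at leverage 1 is positive, which is the robustness point: the exact optimum moves with your assumptions, but "a young 100%-equities investor gains from the first bit of leverage" survives a wide range of them.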

Comment by JessRiedel on Review of "Lifecycle Investing" · 2020-04-12T15:46:03.829Z · LW · GW

I endorse ESRogs' replies. I'll just add some minor points.

1. Nothing in this book or the lifecycle strategy rests on anything specific to the US stock market. As I said in my review

> The fact that, when young, you are buying stocks on margin makes it tempting to interpret this strategy as only good when one is not very risk averse or when the stock market has a good century. But for any time-homogeneous view you have on what stocks will do in the future, there is a version of this strategy that is better than a conventional strategy. (A large fraction of casual critics seem to miss this point.)

If you are bearish on stocks as a whole, this is incorporated by you choosing a lower equity premium and hence lower overall stock allocation. This choice is independent of the central theoretical idea of the book.

2. Yours is a criticism of all modeling and is not specific to the lifecycle strategy.

3. As ESRogs mentioned, neither this book nor my review has the timing you suggest, so the psychoanalysis of proponents of this strategy appears inconsistent.

4. I acknowledged this sort of argument in my review, and indeed argued that the best approach hinges on such correlations. But consider: even in the extreme case where I believe my future income is highly correlated with the stock market and is just as volatile, the lifecycle strategy recommends that my equity exposure should start low when I'm young and then increase with age, in opposition to conventional strategies! So even if you take a different set of starting assumptions from the authors, you still get a deep insight from their basic framework.

Comment by JessRiedel on SARS-CoV-2 pool-testing algorithm puzzle · 2020-03-21T19:26:47.553Z · LW · GW

> The problem is that there are other RNA viruses besides SARS-CoV-2, such as influenza, and depending when in the disease course the samples were taken, the amount of irrelevant RNA might exceed the amount of SARS-CoV-2 RNA by orders of magnitude

There is going to be tons of RNA in saliva from sources besides SARS-CoV-2 always. Bits of RNA are floating around everywhere. Yes, there is some minimum threshold of SARS-CoV-2 density at which the test will fail to detect it, but this should just scale up by a factor of N when pooling over N people. I don't see why other RNA those people have will be a problem any more than the other sources of RNA in a single person are a problem for a non-pooled test.
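Back-of-the-envelope (mine, not from the thread): since PCR roughly doubles the target each cycle, an N-fold dilution from pooling just shifts detection about log2(N) cycles later rather than destroying it:

```python
import math

def extra_pcr_cycles(pool_size):
    # PCR amplification roughly doubles the target each cycle, so diluting
    # one positive sample into a pool of N raises the detection cycle
    # threshold by about log2(N).
    return math.log2(pool_size)

print(extra_pcr_cycles(32))  # 5.0: a 32-sample pool costs ~5 extra cycles
```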

Comment by JessRiedel on Why would panic during this coronavirus pandemic be a bad thing? · 2020-03-08T17:34:25.450Z · LW · GW

> "The government" in the US certainly doesn't have the authority to do most of these things.

Both the federal and state governments have vast powers during public health emergencies. For instance, the Supreme Court has made clear that the government can hold you down and vaccinate you against your will. Likewise, the Army (not just National Guard) can be deployed to enforce laws, including curfew and other quarantine laws.

Yes, it's unclear whether government officials would be willing to use these options, and how much the public would resist them, but the formal authority is definitely there.

Comment by JessRiedel on Alignment Newsletter #13: 07/02/18 · 2018-07-31T16:19:28.381Z · LW · GW

Hi Rohin, are older version of the newsletter available?

Also:

This sounds mostly like a claim that it is more computationally expensive to deal with hidden information and long term planning.

One consideration: When you are exploring a tree of possibilities, every bit of missing information means you need to double the size of the tree. So it could be that hidden information leads to an exponential explosion in search cost in the absence of hidden-information-specific search strategies. Although strictly speaking this is just a case of something being "more computationally expensive", exponential penalties generically push things from being feasible to infeasible.
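A toy count of the blowup (numbers mine): a depth-d tree with branching b, where the searcher must additionally branch on every unknown bit:

```python
def search_tree_size(branching, depth, hidden_bits):
    # Every unresolved bit of hidden information doubles the tree the
    # searcher must consider, on top of the usual b^d game tree.
    return branching ** depth * 2 ** hidden_bits

print(search_tree_size(10, 4, 0))   # 10000 nodes with full information
print(search_tree_size(10, 4, 20))  # 10485760000 with 20 hidden bits
```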

Comment by JessRiedel on The simple picture on AI safety · 2018-05-28T01:53:41.894Z · LW · GW

What is the core problem of your autonomous driving group?!

Comment by JessRiedel on Less Wrong: Progress Report · 2009-04-25T19:46:33.000Z · LW · GW

Marshall, I would keep in mind that good intentions are not sufficient for getting your comments up-voted. They need to contribute to the discussion. Since your account was deleted, we can't judge one way or the other.

Comment by JessRiedel on Less Wrong: Progress Report · 2009-04-25T07:00:25.000Z · LW · GW

I think there is some truth to Marshall's critique and that the situation could be easily improved by making it clear (either on the "about" page or in some other high-visibility note) what the guidelines for voting are. That means guidelines would have to be agreed upon. Until that happens, I suspect people will continue to just vote up comments they agree with, stifling debate.

I've previously suggested a change to the voting system, but this might require more man-power to implement than is available.

Comment by JessRiedel on Issues, Bugs, and Requested Features · 2009-02-27T15:49:21.239Z · LW · GW

It seems like the only criterion for the rating of a comment/post should be the degree to which it contributes to healthy discussion (well-explained, on-topic, not completely stupid). However, there is a strong tendency for people to vote on comments based on whether or not they agree with them, which is very bad for healthy discussion. It discourages new ideas and drives away visitors with differing opinions when they see a page full of highly rated comments for a particular viewpoint (cf. reddit).

The feature I would recommend most for this website is a dual voting feature: one vote up/down for the quality of the post/comment, and one for whether you agree or disagree with it. This would allow quality, disagreeable comments to float to the top while allowing everyone to satisfy their urge to express their opinion. It also would force people to make a cognitive distinction between the two categories.

Even people like me who try to base their ratings independent of their agreement with the comment are biased in their assessment of the quality. It would be very healthy to read a comment you agree with and would normally upvote (because your quality standards have been biased downward) only to see that a large fraction of the community finds the argument poor.

Incidentally, you might allow voting for humor or on-topic-ness so that people can (say) still be funny every once in a while without directly contributing to the current discussion per se.

(Sorry that was so long. It was something I had been thinking about for awhile.)
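A minimal sketch of the data model the dual-vote proposal implies (all names mine):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    quality: int = 0    # votes on whether the comment is well-argued
    agreement: int = 0  # votes on whether readers agree with it

comments = [
    Comment("unpopular but well-argued", quality=12, agreement=-8),
    Comment("popular but weakly argued", quality=2, agreement=15),
]

# Rank by quality alone, so disagreeable-but-good comments float to the
# top while the agreement tally is displayed separately.
ranked = sorted(comments, key=lambda c: c.quality, reverse=True)
print([c.text for c in ranked])  # well-argued first, despite net disagreement
```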

Comment by JessRiedel on You Only Live Twice · 2008-12-12T21:03:28.000Z · LW · GW

I'm confused. What is the relationship between Alcor and the Cryonics Institute? Is it either-or? What is the purpose of yearly fees to them if you can just take out insurance which will cover all the costs in the event of your death?

Comment by JessRiedel on Magical Categories · 2008-08-24T21:12:51.000Z · LW · GW

Eliezer, I believe that your belittling tone is conducive to neither a healthy debate nor a readable blog post. I suspect that your attitude is borne out of just frustration, not contempt, but I would still strongly encourage you to write more civilly. It's not just a matter of being nice; rudeness prevents both the speaker and the listener from thinking clearly and objectively, and it doesn't contribute to anything.

Comment by JessRiedel on Timeless Physics · 2008-05-27T19:10:40.000Z · LW · GW

Günther: Of course my comments about Barbour were (partially) ad hominem. The point was not to criticize his work, but to criticize this post. Very few people are qualified to assess the merit of Barbour's work. This includes, with respect, Eliezer. In the absence of expertise, the rational thinker must defer to the experts. The experts have found nothing of note in Barbour's work.

Albert Einstein was not performing philosophy when he developed GR. He was motivated by a philosophical insight and then did physics.

Comment by JessRiedel on Timeless Physics · 2008-05-27T11:45:25.000Z · LW · GW

You've drawn many vague conclusions (read: words, not equations or experimental predictions) about the nature of reality from a vague idea promoted by a non-academic. It smacks strongly of pseudo-science.

Julian Barbour's work is unconventional. Many of his papers border on philosophy and most are not published in prominent journals. His first idea, that time is simply another coordinate parameterizing a mathematical object (like a manifold in GR) and that its specialness is an illusion, is ancient. His second idea, that any theory more fundamental than QM or GR will necessarily feature time only in a relational sense (in contrast to the commonly accepted, and beautiful, gauge freedom of all time and space coordinates), is interesting and possibly true, but it is most likely not profound. I can't read all of his papers, so perhaps he has some worthwhile work.

This post, however, appears to be completely without substance. What is the point?

  • That the universe as we understand it is best described by a timeless mathematical object (e.g., a manifold equipped with some quantum fields)? This viewpoint, taken by most physicists, has been around since at least the 1920s. While profound, it has little to do with Barbour's work, which seems to be the focus of this post.

  • That the next theory of physics should be expressed with a "relational approach"? This is a nice idea, but one which has (to my knowledge) produced no objective progress in formulating a successor to GR or QM. There are a thousand approaches out there with similar promise...and similar results. I can't help but feel that you are wading into waters which are above your expertise.

Comment by JessRiedel on Faster Than Science · 2008-05-20T04:00:18.000Z · LW · GW

I definitely agree that there is truth to Max Planck's assertion. And indeed, the Copenhagen interpretation was untenable as soon as it was put forth. However, Everett's initial theory was also very unsatisfying. It only became (somewhat) attractive with the much later development of decoherence theory, which first made plausible the claim that no-collapse QM evolution could explain our experiences. (For most physicists who examine it seriously, the claim is still very questionable).

Hence, the gradual increase in acceptance of the MW interpretation is a product both of the old guard dying off and the development of better theoretical support for MW.

Comment by JessRiedel on Decoherence is Falsifiable and Testable · 2008-05-07T22:25:07.000Z · LW · GW

Psy-Kosh: Oh, I almost forgot to answer your questions. Experimental results are still several years distant. The basic idea is to fabricate a tiny cantilever with an even tinier mirror attached to its end. Then, you position that mirror at one end of a photon cavity (the other end being a regular fixed mirror). If you then send a photon into the cavity through a half-silvered third mirror--so that it will be in a superposition of being in and not in the cavity--then the cantilever will be put into a correlated superposition: it will be vibrating if the photon is in the cavity and it will be still if the photon is not. Of course, the really, really super-hard part is getting all this to happen without the state decohering before you see anything interesting.

Robin Z: The motivation for suspecting that something funny happens as you try to scale up decoherence to full-blown many-worlds comes from the serious problems that many-worlds has. Beyond the issue with predicting the Born postulate, there are serious conceptual problems with defining individual worlds, even emergently.

The motivation for doing this experiment is even more clear: (1) The many-worlds interpretation is a fantastically profound statement about our universe and therefore demands that fantastic experimental work be done to confirm it as best as is possible. (For instance, despite the fact that I very confidently expect Bell's inequality to continue to hold after each tenuous experimental loophole is closed, I still consider it an excellent use of my tax dollars that these experiments continue to be improved.) (2) Fundamental new regimes in physics should always be probed, especially at this daunting time in the history of physics where we seem able to predict nearly everything we see around us but unable to extend our theories to in-principle-testable but currently inaccessible regimes. (3) It's just plain cool.

Comment by JessRiedel on Decoherence is Falsifiable and Testable · 2008-05-07T21:55:39.000Z · LW · GW

Psy-Kosh: It is an awesome experiment. Here are links to Bouwmeester's home page, the original proposal, and the latest update on cooling the cantilever. (Bouwmeester has perhaps the most annoying web interface of any serious scientist. Click in the upper left on "research" and then the lower right on "macroscopic quantum superposition". Also, the last article appeared in Nature and may not be accessible without a subscription.)

Obviously, this is a very hard experiment and success is not assured.

Also, you might be interested to know that at least one other group, Jack Harris's at Yale, is doing similar work.

Comment by JessRiedel on Decoherence is Falsifiable and Testable · 2008-05-07T09:25:15.000Z · LW · GW

Excellent post, Eliezer. I have just a small quibble: it should be made clear that decoherence and the many-worlds interpretation are logically distinct. Many physicists, especially condensed matter physicists working on quantum computation/information, use models of microscopic decoherence on a daily basis while remaining agnostic about collapse. These models of decoherence (used for so-called "partial measurement") are directly experimentally testable.

Maybe a better term for what you are talking about is macroscopic decoherence. As of right now, no one has ever created serious macroscopic superpositions. Macroscopic decoherence, and hence the many-worlds interpretation, rely on extrapolating microscopically observed phenomena.

If there's one lesson we can take from the history of physics, it's that every time new experimental "regimes" are probed (e.g., large velocities, small sizes, large mass densities, large energies), phenomena are observed which lead to new theories (special relativity, quantum mechanics, general relativity, and the standard model, respectively). This is part of the reason I find it likely that the peculiar implications of uncollapsed unitary evolution are simply artifacts of using quantum mechanics outside its regime of applicability.

Here at UC Santa Barbara, Dirk Bouwmeester is trying to probe this macroscopic regime by superposing a cantilever that is ~50 microns across--big enough to see with an optical microscope!

Comment by JessRiedel on On Being Decoherent · 2008-04-27T22:38:13.000Z · LW · GW

"And both spatial infinity and inflation are standard in the current model of physics."

As mentioned by a commenter above, spatial infinity is by no means required or implied by physical observation. Non-compact space-times are allowed by general relativity, but so are compact tori (a very real possibility) or a plethora of bizarre geometries which have been ruled out by experimental evidence.

Inflation is an interesting theory which agrees well with the small (relative to other areas of physics) amount of cosmological data which has been collected. However, the data by no means implies inflation. In fact, the term "inflation" refers to a huge zoo of models which have many unexplained parameters which can be tuned to fit the data. Physicists are far from absolutely confident in the inflationary picture.

Furthermore, there are serious, serious problems with Many Worlds Interpretation (and likewise for Mangled Worlds), which you neglect to mention here.

I enjoy your take on quantum mechanics, Eliezer, and I recommend this blog to everyone I know. I agree with you that Copenhagen is untenable and that MWI is the current best idea. But you talk about some of your ideas as if they were obvious and accepted by anyone who isn't an idiot. This does your readers a disservice.

I realize that this is a blog and not a refereed journal, so I can't expect you to follow all the rules. But I can appeal to your commitment to honesty in asking you to express the uncertainty of your ideas and to defer when necessary to the academic establishment.

Comment by JessRiedel on Which Basis Is More Fundamental? · 2008-04-24T08:45:07.000Z · LW · GW

Eliezer: "I wouldn't be surprised to learn that there is some known better way of looking at quantum mechanics than the position basis, some view whose mathematical components are relativistically invariant and locally causal."

There is. Quantum Field Theory takes place on the full spacetime of special relativity, and it is completely Lorentz covariant. Quantum Mechanics is a low-speed approximation of QFT and necessarily chooses a reference frame, destroying covariance.

Hal Finney: The Schrödinger equation (and its relativistic generalization) dictates local evolution of the wavefunction. Non-locality comes about during the measurement process, which is not well understood.

Comment by JessRiedel on Decoherence · 2008-04-22T11:53:21.000Z · LW · GW

CPT symmetry is required by Quantum Field Theory, not General Relativity.

Comment by JessRiedel on Feynman Paths · 2008-04-17T07:18:41.000Z · LW · GW

The Feynman path integral (PI) and Schrödinger's equation (SE) are completely equivalent formulations of QM in the sense that they give the same time evolution of an initial state. They have exactly the same information content. It's true that you can derive SE from the PI, while the reverse derivation isn't very natural. On the other hand, the PI is mathematically completely non-rigorous (roughly, the space of paths is too large) while SE evolution can be made precise.

Practically, the PI cannot be used to exactly solve almost any problem except the harmonic oscillator. This is a serious handicap in QM, where SE can be used to solve many problems exactly. But in quantum field theory, all the calculations are perturbations around harmonic oscillators, so the PI is very useful.

Many physicists would agree that the PI is more "fundamental" because it gives insight into QFT and theoretical physics. But the distinction is largely a matter of taste.
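For what it's worth, the equivalence can be checked numerically. Below is a rough sketch of my own construction (not anything from the post): a discretized 1-D harmonic oscillator evolved two ways, directly via exp(-iHt) and via a Trotterized product of kinetic and potential steps, which is the discrete-time skeleton of the path integral. The two agree to high fidelity:

```python
import numpy as np

# Discretized 1-D harmonic oscillator: H = T + V on a grid (hbar = m = 1).
n, L = 64, 10.0
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]
V = np.diag(0.5 * x**2)
# Second-difference approximation to the kinetic term -0.5 d^2/dx^2.
T = -0.5 * (np.diag(np.ones(n-1), 1) - 2*np.eye(n)
            + np.diag(np.ones(n-1), -1)) / dx**2

def expmH(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, U = np.linalg.eigh(H)
    return (U * np.exp(-1j * w * t)) @ U.conj().T

# Initial Gaussian wave packet, displaced from the well's center.
psi0 = np.exp(-(x - 1.0)**2)
psi0 = psi0 / np.linalg.norm(psi0)

t, steps = 1.0, 200
dt = t / steps

# (1) Direct Schrodinger evolution under the full Hamiltonian.
psi_se = expmH(T + V, t) @ psi0

# (2) Trotterized evolution: alternating potential and kinetic steps,
# the discrete-time structure underlying the path integral.
step = expmH(V, dt/2) @ expmH(T, dt) @ expmH(V, dt/2)
psi_pi = np.linalg.matrix_power(step, steps) @ psi0

print(abs(np.vdot(psi_se, psi_pi)))  # fidelity very close to 1
```

The small residual discrepancy is just the Trotter error, which vanishes as the time step shrinks; both routes encode exactly the same time evolution.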

Comment by JessRiedel on The Quantum Arena · 2008-04-16T05:14:37.000Z · LW · GW

Psy-Kosh: Position space is special because it has a notion of locality. Two particles at the same location can interact even if they are traveling at different speeds, but two particles far from each other cannot interact even if they are traveling at the same speed.

The field, defined everywhere on the 4-D spacetime manifold, is "reality" (up until the magical measurement happens, at least). You can construct different initial value problems (e.g. if the universe is such-and-such at a particular time, how will it evolve?) by taking different slices of the spacetime. Just because there are many ways to pose an initial value problem for the same spacetime history doesn't mean there isn't one field which is reality.

Eliezer is obviously unable to address all these issues here, as they are well outside his intended scope.

Comment by JessRiedel on The Quantum Arena · 2008-04-15T21:19:10.000Z · LW · GW

Chris, in case you didn't see me ask you last time...

http://www.overcomingbias.com/2008/04/philosophy-meet.html#comment-110472438

do you know of a good survey of decoherence?

Comment by JessRiedel on The Quantum Arena · 2008-04-15T21:06:05.000Z · LW · GW

Psy-Kosh: In Quantum Field Theory, the fields (the analog of wavefunctions in non-relativistic Quantum Mechanics) evolve locally on the spacetime. This is given a precise, observer-independent (i.e. covariant) meaning. This property reduces to the spatially local evolution of the wavefunction in QM which Eliezer is describing. Further, this indeed identifies position space as "special", compared to momentum space or any other decomposition of the Hilbert space.

Eliezer: The wavefunctions in QM (and the fields in QFT) evolve locally under normal (unitary) evolution. However, Bell-type experiments show that wavefunction collapse is a non-local process (be it the preposterous Copenhagen-style collapse or some flavor of decoherence). As far as I have read, the source of this non-locality is not understood.
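To make the Bell-type claim concrete, here is a small illustrative sketch of my own (not from the post) computing the CHSH correlator for a singlet state. The quantum value 2√2 exceeds the bound of 2 that any local hidden-variable model must satisfy:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2): produced by perfectly local unitary
# evolution, yet its measurement statistics admit no local hidden variables.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

def E(a, b):
    """Correlation of joint spin measurements at angles a and b."""
    return np.real(singlet @ np.kron(spin(a), spin(b)) @ singlet)

# Standard CHSH angle choices.
a, a2, b, b2 = 0, np.pi/2, np.pi/4, -np.pi/4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2), exceeding the classical bound of 2
```

Note the tension the comment describes: every line of this evolution is local, yet the resulting correlations are ones no locally-determined hidden values could reproduce.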

Comment by JessRiedel on Where Philosophy Meets Science · 2008-04-13T01:03:50.000Z · LW · GW

Chris, could you recommend an introduction to decoherence for a grad student in physics? I am dumbstruck by how difficult it is to learn about it and the seeming lack of an authoritative consensus. Is there a proper review article? Is full-on decoherence taught in any physics grad classes, anywhere?

Comment by JessRiedel on Configurations and Amplitude · 2008-04-10T23:34:09.000Z · LW · GW

Psy-Kosh: I have never heard of anyone successfully formulating quantum (or classical) mechanics without the full spectrum of real numbers. You can't even have simple things, like a right triangle with two unit legs (whose hypotenuse is √2), without irrational numbers to "fill in the gaps". Any finite-set formulation of QM would look very different from what we understand now.

Comment by JessRiedel on Configurations and Amplitude · 2008-04-10T18:33:50.000Z · LW · GW

Psy-Kosh, when QM is formulated rigorously (something that is rarely done, and only by mathematical physicists) the amplitudes must be able to take on any number in the complex plane, not just the rationals.

Sebastian Hagen, I believe Eliezer is explaining to us the best model physicists have for the way the world works on the (sorta) lowest level we understand, not his personal beliefs on the nature of reality. This model must include the irrationals, to be self-consistent. This does not prevent the universe from being discretized (no uncountable sets) on a more fundamental level than QM.

Comment by JessRiedel on Configurations and Amplitude · 2008-04-10T18:18:47.000Z · LW · GW

I guess, Eliezer, that I would be concerned about convincing everyone that the universe runs along like a computer, computing amplitudes locally (which seems to be the gist of your discussion). To do so would certainly make people feel like QM isn't confusing; it would just be wave mechanics. But this would give people a false confidence, I think, and is not how the universe appears to operate.

But this is the first post, so I'll try to confine my criticism until you've wrapped up your discussion.

Comment by JessRiedel on Configurations and Amplitude · 2008-04-10T08:07:09.000Z · LW · GW

Eliezer, in case you plan to discuss Bell's-inequality-type experiments in future posts, I suggest that you use the GHZ state (not the EPR pair) to show how local realism is ruled out in QM. The GHZ state gives a much cleaner result, which is not obscured by the statistics inherent in Bell's inequality.
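For readers who want to see the GHZ argument concretely, here is a short sketch of my own (not from the comment): the GHZ state gives deterministic eigenvalues for the observables XXX, XYY, YXY, and YYX that no assignment of local hidden values ±1 can reproduce:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])

# GHZ state (|000> + |111>)/sqrt(2).
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expect(ops):
    """Expectation value of a three-qubit product observable in the GHZ state."""
    M = np.kron(np.kron(ops[0], ops[1]), ops[2])
    return np.real(ghz.conj() @ M @ ghz)

print(expect([X, X, X]))  # +1
print(expect([X, Y, Y]))  # -1
print(expect([Y, X, Y]))  # -1
print(expect([Y, Y, X]))  # -1
# Local hidden values x_i, y_i = +/-1 would force
# (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = x1 x2 x3, i.e. (-1)^3 = -1,
# but QM predicts XXX = +1 with certainty: a single-shot contradiction,
# with no inequality statistics needed.
```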