## Posts

## Comments

**scott_aaronson2** on The Weighted Majority Algorithm · 2008-11-14T17:36:13.000Z · score: 1 (1 vote)

Will: Yeah, it's 1/4, thanks. I somehow have a blind spot when it comes to constants. ;-)

**scott_aaronson2** on The Weighted Majority Algorithm · 2008-11-14T16:43:34.000Z · score: 5 (7 votes)

Silas: Look, *as soon as you find a 1 bit*, you've solved the problem with certainty. You have no remaining doubt about the answer. And the expected number of queries until you find a 1 bit is O(1) (or 2 to be exact). Why? Because with each query, your probability of finding a 1 bit is 1/2. Therefore, the probability that you need t or more queries is (1/2)^(t-1). So you get a geometric series that sums to 2.

(Note: I was careful to say you succeed with certainty after an *expected* number of queries independent of n -- not that there's a constant c such that you succeed with certainty after c queries, which would be a different claim!)
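As a quick numerical check of the geometric-series claim (my own sketch, not part of the original thread): with per-query success probability p, the expected number of queries is Σ_{t≥1} Pr[T ≥ t] = Σ (1-p)^(t-1) = 1/p. The comment above uses p = 1/2; the follow-up correction earlier in the thread ("Yeah, it's 1/4") gives p = 1/4.

```python
# Expected number of queries until the first success, when each query
# independently succeeds with probability p:
#   E[T] = sum over t >= 1 of Pr[T >= t] = sum of (1-p)^(t-1) = 1/p.
def expected_queries(p, terms=10_000):
    return sum((1 - p) ** (t - 1) for t in range(1, terms + 1))

print(expected_queries(0.5))   # ~2.0, the constant used above
print(expected_queries(0.25))  # ~4.0, with the corrected 1/4 per query
```

The truncation at 10,000 terms is far below floating-point precision, so the partial sums agree with 1/p essentially exactly.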

And yes, I absolutely assume that the adversary knows the algorithm, because your algorithm is a fixed piece of information! Once again, the point is not that the universe is out to get us -- it's that we'd like to succeed even if it *is* out to get us. In particular, we don't want to assume that we know the probability distribution over inputs, and whether it might happen to be especially bad for our algorithm. Randomness is useful precisely for "smearing out" over the set of possible environments, when you're completely ignorant about the environment.

If you're not willing to think this way, the price you pay for Bayesian purity is to give up the whole theory of randomized algorithms, with all the advances it's led to even in deterministic algorithms, and to lack the conceptual tools even to *ask* basic questions like P versus BPP.

It feels strange to be explaining CS101 as if I were defending some embattled ideological claim -- but it's good to be reminded why it took people a long time to get these things right.

**scott_aaronson2** on The Weighted Majority Algorithm · 2008-11-14T14:59:05.000Z · score: 3 (3 votes)

Silas: "Solve" = for a worst-case string (in both the deterministic and randomized cases). In the randomized case, just keep picking random bits and querying them. After O(1) queries, with high probability you'll have queried either a 1 in the left half or a 1 in the right half, at which point you're done.

As far as I know this problem doesn't have a name. But it's how (for example) you construct an oracle separating P from ZPP.

**scott_aaronson2** on The Weighted Majority Algorithm · 2008-11-14T09:02:18.000Z · score: 7 (7 votes)

Don: When you fix the goalposts, make sure someone can't kick the ball straight in! :-) Suppose you're given an n-bit string, and you're promised that exactly n/4 of the bits are 1, and they're either all in the left half of the string or all in the right half. The problem is to decide which. It's clear that any deterministic algorithm needs to examine at least n/4 + 1 of the bits to solve this problem. On the other hand, a randomized sampling algorithm can solve the problem *with certainty* after looking at only O(1) bits on average.
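The randomized sampling algorithm described above can be sketched in a few lines of Python (my own illustration, not from the thread; the helper name is mine). Since n/4 of the bits are 1, each uniformly random query finds a 1 with probability about 1/4 — the corrected constant from the reply at the top of this thread — so roughly 4 queries suffice on average, and the answer, once found, is certain:

```python
import random

def solve_left_or_right(s):
    """Given bits with exactly n/4 ones, all in one half, query uniformly
    random positions until a 1 is found; its location answers the
    question with certainty."""
    n = len(s)
    queries = 0
    while True:
        i = random.randrange(n)
        queries += 1
        if s[i] == 1:
            return ("left" if i < n // 2 else "right"), queries

random.seed(0)
n = 1024
left_instance = [1] * (n // 4) + [0] * (3 * n // 4)  # ones in left half
trials, total = 20000, 0
for _ in range(trials):
    answer, q = solve_left_or_right(left_instance)
    assert answer == "left"   # never wrong, only the runtime is random
    total += q
print(total / trials)  # about 4 queries on average, independent of n
```

Any deterministic algorithm, by contrast, can be forced to read n/4 + 1 bits before seeing a single 1.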

Eliezer: I often tell people that theoretical computer science is basically mathematicized paranoia, and that this is the reason why Israelis so dominate the field. You're absolutely right: we *do* typically assume the environment is an adversarial superintelligence. But that's not because we literally think it *is* one, it's because we don't presume to know which distribution over inputs the environment is going to throw at us. (That is, we lack the self-confidence to impose any particular prior on the inputs.) We *do* often assume that, if we generate random bits ourselves, then the environment isn't going to magically take those bits into account when deciding which input to throw at us. (Indeed, if we like, we can easily generate the random bits *after* seeing the input -- not that it should make a difference.)

Average-case analysis is also well-established and used a great deal. But in those cases where you *can* solve a problem without having to assume a particular distribution over inputs, why complicate things unnecessarily by making such an assumption? Who needs the risk?

**scott_aaronson2** on The Weighted Majority Algorithm · 2008-11-14T01:26:51.000Z · score: 7 (9 votes)

*I am interested in what Scott Aaronson says to this.*

I fear Eliezer will get annoyed with me again :), but R and Stephen basically nailed it. Randomness provably never helps in average-case complexity (i.e., where you fix the probability distribution over inputs) -- since given any ensemble of strategies, by convexity there must be at least one deterministic strategy in the ensemble that does at least as well as the average.

On the other hand, if you care about the *worst*-case running time, then there are settings (such as query complexity) where randomness provably *does* help. For example, suppose you're given n bits, you're promised that either n/3 or 2n/3 of the bits are 1's, and your task is to decide which. Any deterministic strategy to solve this problem clearly requires looking at 2n/3 + 1 of the bits. On the other hand, a randomized sampling strategy only has to look at O(1) bits to succeed with high probability.
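As a concrete sketch of the sampling strategy (mine, not from the original comment; the function name and sample size are illustrative), estimate the fraction of 1's from a constant number of random queries and compare it to 1/2. By a Chernoff/Hoeffding bound, the error probability decays exponentially in the number of samples, independent of n:

```python
import random

def guess_fraction(s, k=81):
    """Sample k random positions (with replacement); guess '2/3 ones'
    iff more than half the sampled bits are 1. The error probability
    decays exponentially in k, independent of n."""
    ones = sum(s[random.randrange(len(s))] for _ in range(k))
    return "2/3" if ones * 2 > k else "1/3"

random.seed(1)
n = 3000
third = [1] * (n // 3) + [0] * (2 * n // 3)
two_thirds = [1] * (2 * n // 3) + [0] * (n // 3)
trials = 2000
correct = sum(guess_fraction(third) == "1/3" for _ in range(trials))
correct += sum(guess_fraction(two_thirds) == "2/3" for _ in range(trials))
print(correct / (2 * trials))  # very close to 1.0
```

A deterministic algorithm gets no such shortcut: an adversary can answer its first 2n/3 queries with bits consistent with both possibilities.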

Whether randomness ever helps in *worst-case polynomial-time computation* is the P versus BPP question, which is in the same league as P versus NP. It's conjectured that P=BPP (i.e., randomness never saves more than a polynomial). This is known to be true if really good pseudorandom generators exist, and such PRG's can be constructed if certain problems that seem to require exponentially large circuits really do require them (see this paper by Impagliazzo and Wigderson). But we don't seem close to proving P=BPP unconditionally.

**scott_aaronson2** on Complexity and Intelligence · 2008-11-04T18:15:26.000Z · score: 1 (1 vote)

(the superscripts didn't show up: that was N^googol and 2^N)

**scott_aaronson2** on Complexity and Intelligence · 2008-11-04T18:13:43.000Z · score: 3 (3 votes)

Um, except that we *also* don't know whether there are computations that can be checked in N time but only performed in N^googol time. The situation is qualitatively the same as for N versus 2^N.

**scott_aaronson2** on Complexity and Intelligence · 2008-11-04T17:41:06.000Z · score: 2 (2 votes)

*Otherwise, of course a larger environment can outsmart you mathematically.*

No, not of course. For example, suppose P were equal to PSPACE. Then even though a larger environment could fundamentally outsmart you mathematically (say by solving the halting problem), it couldn't *prove* to you that it was doing so. In other words, the situation with polynomial-time computation would be more-or-less the same as it is with unlimited computation: *superintelligent machines could only prove their superintelligence to other superintelligent machines.*

That the situation with efficient computation appears to be *different*---i.e., that it appears superintelligent machines can indeed prove their superintelligence to fundamentally dumber machines---is (if true) a profound fact about the world that seems worth calling attention to. Sure, of course you can nullify it by assuming away all complexity considerations, but why? :-)

**scott_aaronson2** on Complexity and Intelligence · 2008-11-04T12:56:04.000Z · score: 5 (5 votes)

*In fact, it's just bloody hard to fundamentally increase your ability to solve math problems in a way that "no closed system can do" just by opening the system. So far as I can tell, it basically requires that the environment be magic and that you be born with faith in this fact.*

As Wei mentioned, P≠NP is basically the conjecture that this isn't true: i.e., that you can exponentially increase your ability to solve math problems by your environment being magic and your *not* being born with faith in that fact. So for example, if your environment immediately inverted any one-way function, that would be evidence (no faith required) that your environment is not merely 'slightly' smarter than you are, but *astoundingly* smarter. In qualitative terms, I think it would be almost as astounding as if the environment solved the halting problem.

**scott_aaronson2** on My Childhood Death Spiral · 2008-09-16T00:34:46.000Z · score: 1 (1 vote)

Carl: I'm not sure, but I'd certainly *try* such a pill were the effects reversible.

**scott_aaronson2** on My Childhood Death Spiral · 2008-09-15T22:47:33.000Z · score: 3 (3 votes)

*I don't have a problem, my environment has a problem.*

Eliezer, I'm in complete sympathy with that attitude. I've had only limited success so far at nerdifying the rest of the world, but I'll keep at it!

**scott_aaronson2** on My Childhood Death Spiral · 2008-09-15T22:36:24.000Z · score: 6 (6 votes)

Lara: As far as I can tell, there are four basic problems.

First, if adults constantly praise and reward you for solving math problems, writing stories, and so on, then you aren't *forced* to develop interpersonal skills to the same extent most kids are. You have a separate source of self-worth, and it may be too late by the time you realize that source isn't enough. (Incidentally, the sort of interpersonal skills I'm talking about often gets conflated with *caring for others' welfare*, which then leads to moral condemnation of nerds as egotistical and aloof. But the two qualities seem completely unrelated to me. As often as not, those who are most skilled at convincing others to go along with them also care about others the least.) Of course, the same might in principle be true for any unusual talent, including musical or athletic talent---except that the latter are understood and rewarded by one's peer group in a way that intellectual skills aren't.

Second, math, physics, and so on can simply be *fun*, independently of whatever self-worth one derives from them. In this they're no different from tennis or basket weaving or any other activity that some people enjoy. The trouble, again, is that while math and physics are reasonably well-rewarded economically, they're not rewarded socially. And therefore, deriving pleasure from them can have the same sorts of social implications as deriving pleasure from heroin.

Third, even if you manage to overcome these handicaps, other people won't *know* you have, and will be guided by the reigning stereotypes. They might decide before talking to you that you couldn't possibly have anything in common with them. Naturally, this sort of thing can be overcome given enough social skill, but it's another obstacle.

The fourth problem is specific to technical fields (rather than literary ones), and is just the well-known gender imbalance in those fields.

Given all of this, what's surprising is not that so many "intelligence-centric types" are unhappy, but rather that *in spite of it* many manage to live reasonably happy lives. That's the interesting part! :-)

**scott_aaronson2** on My Childhood Death Spiral · 2008-09-15T20:31:42.000Z · score: 3 (3 votes)

*how much money (or other utility-bearing fruit) would you demand (or pay Scott) to take a drug which lowered your IQ by x pts?*

Here's the funny thing: given who I am now, I would *not* pay to have my IQ lowered, and indeed would pay good money to avoid having it lowered, or even to have it raised. But I would also pay to have been, since early childhood, the sort of person who didn't have such an intelligence-centric set of priorities. I'm not transitive in my preferences; I don't want to want what I want.

**scott_aaronson2** on My Childhood Death Spiral · 2008-09-15T18:56:13.000Z · score: 0 (0 votes)

Eliezer, I don't think there's a *necessary* tradeoff between intelligence (the academic rather than interpersonal kind) and happiness at the far nerd end of the spectrum---just that the way society is currently organized, it seems to be both true and common knowledge that there is (cf. Lara Foster's comment). Though despite the temptation, I can't justify dwelling on this phenomenon for too long---any more than on physical appearance, parental wealth, or any other aspect of our lives that we might love to "choose wisely" but can't. Unlike many other accidents of birth, one could even regard this one as "cosmically justified" if one saw intelligence as having a value of its own, independent from happiness. If you disagree, then yes, I might need a better argument than Pfffft.

**scott_aaronson2** on My Childhood Death Spiral · 2008-09-15T17:50:12.000Z · score: 1 (1 vote)

*We are the cards we are dealt, and intelligence is the unfairest of all those cards.*

I completely agree with that statement, though my interpretation of it might be the opposite of Eliezer's. From *The Simpsons*:

Lisa: Dad, as intelligence goes up, happiness often goes down. In fact, I made a graph! [She holds up a decreasing, concave-upward graph on axes marked "intelligence" and "happiness".]

Lisa: [sadly] I make a lot of graphs.

**scott_aaronson2** on Hiroshima Day · 2008-08-07T21:49:06.000Z · score: 0 (0 votes)

*Apparently many people just don't have a mental bin for global risks to humanity, only counting up the casualties to their own tribe and country. Either that or they're just short-term thinkers.*

Eliezer, I certainly worry about global risks to humanity, but I also worry about the "paradoxes" of utilitarian ethics. E.g., would you advocate killing an innocent person if long-term considerations convinced you it would have a 0.00001% chance of saving the human race? I'm pretty sure most people wouldn't, and if asked to give a reason, might say that they don't trust anyone to estimate such small probabilities correctly.

**scott_aaronson2** on Hiroshima Day · 2008-08-07T05:15:24.000Z · score: 6 (6 votes)

I would have exploded the first bomb over the ocean, and only then used it against cities if Japan still hadn't surrendered. No matter how many arguments I read about this, I still can't understand the downsides of that route, besides the cost of a 'wasted bomb.'

But what's just as tragic as the bomb having been used in anger is that it wasn't finished 2-3 years earlier -- in which case it could have saved tens of millions of lives.

**scott_aaronson2** on Humans in Funny Suits · 2008-07-31T08:14:40.000Z · score: 1 (1 vote)

*What you seem to want is an intelligence that is non-human but still close enough to human that we can communicate with it. Although it's not clear what we'd have to talk about, once we get past the Pythagorean theorem.*

How about P vs. NP? :-)

**scott_aaronson2** on Humans in Funny Suits · 2008-07-31T00:50:55.000Z · score: 2 (2 votes)

Can't we imagine the SF writers reasoning that they're never going to succeed *anyway* in creating "real aliens," so they might as well abandon that goal from the outset and concentrate on telling a good story? Absent actual knowledge of alien intelligences, perhaps the best one can ever hope to do is to write "hypothetical humans": beings that are postulated to differ from humans in just one or two important respects that the writer wants to explore. (A good example is the middle third of The Gods Themselves, which delves into the family dynamics of aliens with three sexes instead of two, and one of the best pieces of SF I've read---not that I've read a huge amount.) Of course, most SF (like Star Wars) doesn't *even* do that, and is just about humans with magic powers, terrible dialogue, and funny ears. I guess Star Trek deserves credit for at least *occasionally* challenging its audience, insofar as that's possible with mass-market movies and TV.

**scott_aaronson2** on Possibility and Could-ness · 2008-06-14T10:58:45.000Z · score: 1 (1 vote)

*The algorithm has to assume many different possible actions as having been taken, and extrapolate their consequences, and then choose an action whose consequences match the goal ... The algorithm, therefore, cannot produce an output without extrapolating the consequences of itself producing many different outputs.*

It seems like you need to talk about our "internal state space", not our internal algorithms -- since as you pointed out yourself, our internal algorithms might never enumerate many possibilities (jumping off a cliff while wearing a clown suit) that we still regard as possible. (Indeed, they won't enumerate many possibilities at all, if they do anything even slightly clever like local search or dynamic programming.)

Otherwise, if you're not willing to talk about a state space independent of algorithms that search through it, then your account of counterfactuals and free will would seem to be at the mercy of algorithmic efficiency! Are more choices "possible" for an exponential-time algorithm than for a polynomial-time one?

**scott_aaronson2** on Principles of Disagreement · 2008-06-02T20:12:18.000Z · score: 2 (2 votes)

Eliezer: Yeah, I understand. I was making a sort of meta-joke, that you shouldn't trust me over Gell-Mann about particle physics *even after accounting for* the fact that I say that and would be correspondingly reluctant to disagree...

**scott_aaronson2** on Principles of Disagreement · 2008-06-02T10:44:17.000Z · score: 4 (4 votes)

Particle masses?? Definitely go with Gell-Mann.

**scott_aaronson2** on Einstein's Speed · 2008-05-22T04:47:27.000Z · score: 7 (8 votes)

*When Einstein invented General Relativity, he had almost no experimental data to go on, except the precession of Mercury's perihelion. And (AFAIK) Einstein did not use that data, except at the end.*

Eliezer, I'd love to believe that too, but from the accounts I've read I don't think it's quite right. Because of his "hole argument", Einstein took a long detour from the correct path in 1913-1915. During that time, he abandoned his principle of general covariance, and tried to find field equations that would "work well enough in practice anyway." Apparently, one of the main reasons he finally abandoned that line of thought, and returned to general covariance, is that he was getting a prediction for Mercury's perihelion motion that was too small by a factor of 2.

So is it possible that not even Einstein was a Bayesian superintelligence?

**scott_aaronson2** on Science Doesn't Trust Your Rationality · 2008-05-14T10:41:24.000Z · score: 4 (4 votes)

*Incidentally, it looks to me like you should be able to test macroscopic decoherence. Eventually. You just need nanotechnological precision, very low temperatures, and perhaps a clear area of interstellar (intergalactic?) space.*

Short of that, building a scalable quantum computer would be another (possibly easier!) way to experiment with macroscopic coherence. The difference is that with quantum computing, you wouldn't even *try* to isolate a quantum system perfectly from its environment. Instead you'd use really clever error-correction to encode quantum information in nonlocal degrees of freedom, in such a way that it can survive the decoherence of (say) any 1% of the qubits.

**scott_aaronson2** on If Many-Worlds Had Come First · 2008-05-11T02:12:49.000Z · score: 15 (14 votes)

Inspired by this post, I was reading some of the history today, and I learned something that surprised me: in all of his writings, Bohr apparently never once talked about the "collapse of the wavefunction," or the disappearance of all but one measurement outcome, or any similar formulation. Indeed, Huve Erett's theory would have struck the historical Bohr as complete nonsense, since Bohr didn't believe that wavefunctions were real in the first place -- there was nothing to collapse!

So it might be that MWI proponents (and Bohmians, for that matter) underestimate just how non-realist Bohr really was. They ask themselves: "what would the world have to *be like* if Copenhagenism were true?" -- and the answer they come up with involves wavefunction collapse, which strikes them as absurd, so then that's what they criticize. But the whole point of Bohr's philosophy was that you don't even *ask* such questions. (Needless to say, this is not a ringing endorsement of his philosophy.)

Incidentally, I'm skeptical of the idea that MWI never even *occurred* to Bohr, Heisenberg, Schrödinger, or von Neumann. I conjecture that something like it *must* have occurred to them, as an obvious *reductio ad absurdum* -- further underscoring (in their minds) why one shouldn't regard the wavefunction as "real". Does anyone have any historical evidence either way?

**scott_aaronson2** on Spooky Action at a Distance: The No-Communication Theorem · 2008-05-05T22:00:48.000Z · score: 1 (1 vote)

*Since the laws of probability and rationality are LAWS rather than "just good ideas", it isn't entirely shocking that there'd be some mathematical object that would seem to act like the place where the territory and map meet. More to the point, some mathematical object related to the physics that says "this is the most accurate your map can possibly be given the information of whatever is going on with this part/factor of reality."*

That's a beautiful way of putting it, which expresses what I was trying to say much better than I did.

**scott_aaronson2** on Spooky Action at a Distance: The No-Communication Theorem · 2008-05-05T15:07:03.000Z · score: 0 (1 vote)

Mitchell: No, *even if* you want to think of the position basis as the only "real" one, how does that let you decompose any density matrix uniquely into pure states? Sure, it suggests a unique decomposition of the maximally mixed state, but how would you decompose (for example) ((1/2,1/4),(1/4,1/2))?
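To make the non-uniqueness concrete, here's a short NumPy check (my own sketch, not from the thread): the matrix above can be written as an orthogonal mixture of |+〉 and |-〉, or equally well as an equal mixture of two *non-orthogonal* pure states, and nothing in the formalism picks one decomposition out as the "real" one.

```python
import numpy as np

rho = np.array([[0.5, 0.25], [0.25, 0.5]])

# Decomposition 1: the eigendecomposition, a mixture of |+> and |->.
plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
rho1 = 0.75 * np.outer(plus, plus) + 0.25 * np.outer(minus, minus)

# Decomposition 2: an equal mixture of two non-orthogonal pure states,
# obtained by "rotating" the sub-normalized eigenvectors with a Hadamard.
v1 = (np.sqrt(0.75) * plus + np.sqrt(0.25) * minus) / np.sqrt(2)
v2 = (np.sqrt(0.75) * plus - np.sqrt(0.25) * minus) / np.sqrt(2)
rho2 = np.outer(v1, v1) + np.outer(v2, v2)   # each term carries weight 1/2

assert np.allclose(rho, rho1) and np.allclose(rho, rho2)
print(v1 @ v2)  # nonzero: the second ensemble's states aren't orthogonal
```

In general, the valid ensemble decompositions of a density matrix correspond to the ways of "unitarily mixing" its sub-normalized eigenvectors, so there's a continuum of them.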

**scott_aaronson2** on Spooky Action at a Distance: The No-Communication Theorem · 2008-05-05T12:10:11.000Z · score: 2 (3 votes)

As for your pedagogical question, Eliezer -- well, the gift of explaining mathematical concepts verbally is an incredibly rare one (I wish every day I were better at it). I don't think *most* textbook writers are being deliberately obscure; I just think they're following the path of least resistance, which is to present the math and hope each individual reader (after working it through) will have his or her own forehead-slapping "aha!" moment. Often (as with your calculus textbook) that's a serious abdication of authorial responsibility, but in some cases there might *really not* be any faster way.

**scott_aaronson2** on Spooky Action at a Distance: The No-Communication Theorem · 2008-05-05T11:51:23.000Z · score: 3 (3 votes)

Psy-Kosh: Tr_A just means the operation that "traces out" (i.e., discards) the A subsystem, leaving only the B subsystem. So for example, if you applied Tr_A to the state |0〉|1〉, you would get |1〉. If you applied it to |0〉|0〉+|1〉|1〉, you would get a classical probability distribution that's half |0〉 and half |1〉. Mathematically, it means starting with a density matrix ρ_AB for the joint quantum state, and then producing a new density matrix ρ_B for B only by summing over the A-indices (sort of like tensor contraction in GR, if that helps).

Eliezer: The best way I can think of to explain a density matrix is, it's what you'd inevitably come up with if you tried to encode *all information locally available to you about a quantum state* (i.e., all information needed to calculate the probabilities of local measurement outcomes) in a succinct way. (In fact it's the most succinct possible way.)

You can see it as the quantum generalization of a probability distribution, where the diagonal entries represent the probabilities of various measurement outcomes if you measure in the "standard basis" (i.e., whatever basis the matrix happens to be presented in). If you measure in a different orthogonal basis, identified with some unitary matrix U, then you have to "rotate" the density matrix ρ to UρU* before measuring it (where U* is U's conjugate transpose). In that case, the "off-diagonal entries" of ρ (which intuitively encode different pairs of basis states' "potential for interfering with each other") become relevant.

If you understand (1) why density matrices give you back the usual Born rule when ρ=|ψ〉〈ψ| is a pure state, and (2) why an equal mixture of |0〉 and |1〉 leads to exactly the same density matrix as an equal mixture of |0〉+|1〉 and |0〉-|1〉, then you're a large part of the way to understanding density matrices.
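Both of those facts are easy to verify numerically. A minimal NumPy sketch (mine, not Scott's): the two preparations in (2) give the identical density matrix, so no measurement can ever tell them apart, and the rotated diagonal gives the Born-rule probabilities for a measurement in another basis.

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus  = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

# (2): an equal mixture of |0> and |1> ...
rho_a = 0.5 * np.outer(zero, zero) + 0.5 * np.outer(one, one)
# ... and of |0>+|1> and |0>-|1> give exactly the same density matrix
# (the maximally mixed state I/2), hence identical predictions.
rho_b = 0.5 * np.outer(plus, plus) + 0.5 * np.outer(minus, minus)
assert np.allclose(rho_a, rho_b)

# (1): for a pure state rho = |psi><psi|, the diagonal of U rho U*
# gives the measurement probabilities in the basis defined by U.
psi = plus
rho = np.outer(psi, psi.conj())
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # rotate to the {+,-} basis
probs = np.diag(H @ rho @ H.conj().T).real
print(probs)  # outcome |+> occurs with certainty, as the Born rule says
```

The off-diagonal entries of rho are what make rho_a and the pure state |+〉〈+| differ, even though their diagonals agree.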

One could argue that density matrices *must* reflect part of the "fundamental nature of QM," since they're too indispensable not to. Alas, as long as you insist on sharply distinguishing between the "really real" and the "merely mathematical," density matrices might always cause trouble, since (as we were discussing a while ago) a density matrix is a strange sort of hybrid of amplitude vector with probability distribution, and the way you pick apart the amplitude vector part from the probability distribution part is badly non-unique. Think of someone who says: "I understand what a complex number *does* -- how to add and multiply one, etc. -- but what does it *mean*?" It means what it does, and so too with density matrices.

**scott_aaronson2** on Spooky Action at a Distance: The No-Communication Theorem · 2008-05-05T03:48:12.000Z · score: 11 (11 votes)

Eliezer, I know your feelings about density matrices, but this is exactly the sort of thing they were designed for. Let ρ_AB be the joint quantum state of two systems A and B, and let U_A be a unitary operation that acts only on the A subsystem. Then the fact that U_A is trace-preserving implies that Tr_A[U_A ρ_AB U_A*] = ρ_B; in other words, U_A has no effect whatsoever on the quantum state at B. Intuitively, applying U_A to the joint density matrix ρ_AB can only scramble around matrix entries within each "block" of constant B-value. Since U_A is unitary, the trace of each of these blocks remains unchanged, so each entry (ρ_B)_ij of the local density matrix at B (obtained by tracing over a block) also remains unchanged. Since all we needed about U_A was that it was trace-preserving, this can readily be generalized from unitaries to arbitrary quantum operations, including measurements. There: we just proved the no-communication theorem, without getting our hands dirty with a single concrete example! :-)
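The theorem is also easy to check numerically. A small NumPy sketch of my own (the helper names are mine; the random unitary uses the standard QR-with-phase-fix construction, and the partial trace is a reshape-and-trace over the A indices): apply a random unitary to A alone and confirm that B's reduced density matrix doesn't budge.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    # A random d-dimensional mixed state (normalized Wishart matrix).
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = m @ m.conj().T
    return rho / np.trace(rho)

def random_unitary(d):
    # QR decomposition of a Gaussian matrix, with phases fixed so the
    # result is (approximately Haar-)uniformly distributed.
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

def trace_out_A(rho_ab, dA, dB):
    # Partial trace over A: contract the two A indices against each other.
    return rho_ab.reshape(dA, dB, dA, dB).trace(axis1=0, axis2=2)

dA = dB = 2
rho_ab = random_density_matrix(dA * dB)
U_a = np.kron(random_unitary(dA), np.eye(dB))   # acts on A only

rho_b_before = trace_out_A(rho_ab, dA, dB)
rho_b_after  = trace_out_A(U_a @ rho_ab @ U_a.conj().T, dA, dB)
assert np.allclose(rho_b_before, rho_b_after)   # B's state is untouched
```

Running the same check with a measurement channel on A (a sum of projector terms) in place of U_a gives the same result, matching the generalization in the comment above.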

**scott_aaronson2** on Feynman Paths · 2008-04-17T17:47:17.000Z · score: 10 (11 votes)

As Jess says, Schrödinger and Feynman are formally equivalent: either can be derived from the other. So if the question of which is more "fundamental" can be answered at all, it will have to be from other considerations. My own favorite way to think about the difference between the two pictures is in terms of computational complexity. The Schrödinger equation can be seen as telling us that quantum computers can be simulated by classical computers in *exponential time*: just write out the whole amplitude vector to reasonable precision, which takes exponentially many floating-point numbers, then update it step by step. The Feynman path integral can be seen as telling us that quantum computers can be simulated by classical computers in *polynomial space*: just add up the amplitudes of all paths leading to the quantum computer accepting, reusing the same memory from one path to another. Since polynomial space is contained in exponential time, the Feynman picture yields the better simulation -- and on that basis, one could argue that it's the more "fundamental" of the two representations.
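Here's a toy illustration of the two simulation styles (my own sketch; function names are mine) on a 2-qubit circuit: the Schrödinger-style simulation stores the whole 2^n-entry amplitude vector and updates it gate by gate, while the Feynman-style simulation recomputes each output amplitude as a sum over "paths" of intermediate basis states, never storing more than one path at a time.

```python
import numpy as np
from itertools import product

# A toy 2-qubit circuit; each gate is given as a full 4x4 matrix.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
gates = [np.kron(H, np.eye(2)), CNOT]   # prepares a Bell state from |00>

def schrodinger(gates, x0, dim):
    # Exponential space: keep the full amplitude vector, one pass over gates.
    psi = np.zeros(dim)
    psi[x0] = 1.0
    for g in gates:
        psi = g @ psi
    return psi

def feynman_amp(gates, x0, y, dim):
    # Polynomial space: amplitude <y|G_T...G_1|x0> as a sum over all
    # sequences of intermediate basis states, reusing the same memory.
    total = 0.0
    for path in product(range(dim), repeat=len(gates) - 1):
        states = (x0,) + path + (y,)
        amp = 1.0
        for g, (a, b) in zip(gates, zip(states, states[1:])):
            amp *= g[b, a]        # matrix element for the step a -> b
        total += amp
    return total

dim = 4
psi = schrodinger(gates, 0, dim)
for y in range(dim):
    assert np.isclose(psi[y], feynman_amp(gates, 0, y, dim))
print(psi.round(3))  # roughly [0.707, 0, 0, 0.707]: the Bell state
```

For 2 qubits both methods are trivial, but the scaling is the point: the vector has 2^n entries, while the path sum needs only poly(n, T) memory at the cost of exponentially many paths.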

**scott_aaronson2** on Can You Prove Two Particles Are Identical? · 2008-04-15T16:36:01.000Z · score: 1 (1 vote)

*I take it that the theory doesn't tell us determinately that any given particle absolutely lacks any more fundamental structure. How could it, even in principle?*

Paul N., you're right that QM can't rule out the electron having a more fundamental structure -- but it can tell us that whatever that structure might be, it's the *same* from one electron to the next! Why? Because we're talking about a theory in which whether two states of the universe are "the same" or "different" is a primitive with testable consequences, and this is true not because of some "add-on law" that physicists made up but because of the theory's structure. In particular, if two electrons had some definite property that differed even in the hundredth decimal place, then you wouldn't get an interference pattern when you switched the electrons, but as a matter of fact you do. I know Eliezer doesn't want people to see QM as "bizarre," but if thinking of it that way helps you accept this as a fact, go ahead!

**scott_aaronson2** on Can You Prove Two Particles Are Identical? · 2008-04-15T08:23:17.000Z · score: 7 (7 votes)

*Scott, I can't imagine any possible overthrow of QM that would resurrect the idea of two electrons having distinct individual identities.*

Nor can I! Wise Bayes-Master, I was simply trying to follow your own dictum that an inability to imagine something is a fact about us and not the world.

(For technical reasons set out elsewhere, I have difficulty imagining *any* theory superseding QM -- so once I'm asked to condition on that happening, there's very little I'm willing to say about what the new theory might entail.)

**scott_aaronson2** on Can You Prove Two Particles Are Identical? · 2008-04-15T03:18:33.000Z · score: 7 (5 votes)

Wiseman, you say rather dismissively that, yes, "according to a *specific* theory" the particles are identical. But that's *already* a huge deal! For me the point is that, before quantum mechanics, no one had even *imagined* a theoretical framework that could force two particles to be identical in all respects. (If you don't understand how QM actually does this, reread Eliezer's posts.) Obviously, if QM were overthrown then we'd have to revisit all these questions -- but even the fact that a framework like QM is *possible* represents a major philosophical discovery that came to us by way of physics.

**scott_aaronson2** on Can You Prove Two Particles Are Identical? · 2008-04-14T16:23:46.000Z · score: 4 (4 votes)

Now I'm curious about the historical question: is there any philosopher in the pre-quantum era who actually made Bob's argument? I don't doubt that if you *asked* the question, a philosopher might have responded much as Bob has. But did the question actually *occur* to anyone?

**scott_aaronson2** on Quantum Explanations · 2008-04-10T15:11:09.000Z · score: 2 (2 votes)

*though it is moderately troubling if we haven't found a covariant way to describe such a situation of real things we are uncertain about.*

What's worse, Bell's Theorem implies that in some sense such a description can't exist.

**scott_aaronson2** on Quantum Explanations · 2008-04-10T13:57:51.000Z · score: 2 (1 vote)

*the fact that two different situations of uncertainty over true states lead to the same physical predictions isn't obviously a reason to reject that type of view regarding what is real.*

Sorry, I meant to add: in Einstein's version, the problem is that which of the two "situations of uncertainty" is the right one to talk about could depend on what someone does to another quantum system light-years away. And therefore, nature is going to have to propagate updates about what's "really real" faster than the speed of light.

**scott_aaronson2** on Quantum Explanations · 2008-04-10T13:38:54.000Z · score: 1 (1 vote)

Robin, a good place to start would be pretty much any paper Chris Fuchs has ever written. See for example this one (p. 9-12). As Chris points out, the argument from the non-uniqueness of mixed state decompositions basically goes back to Einstein (in a version involving two-particle entanglement). From a modern perspective, where Einstein went wrong was in his further inference that QM therefore has to be incomplete.

**scott_aaronson2** on Quantum Explanations · 2008-04-10T12:44:44.000Z · score: 4 (3 votes) · LW · GW

Ben and Eliezer: Any reply puts me in great danger of violating the spirit of Eliezer's rule that non-realists hold their fire! (I say the spirit and not the letter, since I'm not actually a non-realist myself, just an equal-opportunity kibitzer.)

OK, quickly. Sure, an interesting question *for subjectivists* is how to deal with pure states, but an interesting question *for realists* is how to deal with mixed states! The issue is that you can't just say a density matrix ρ represents a statistical ensemble over "true states of the world" and be done with it, since then you have to make a completely arbitrary, physically-unmotivated choice for whether those true states lie in the {0,1} basis, the {+,-} basis, etc. In an interpretations-of-QM seminar at Berkeley, we spent pretty much the entire semester arguing about this and nothing else! Yes, it got tiresome, and no, I wasn't even suggesting that Eliezer bring in mixed states before people understood the fundamentals. I was just alluding to it as a key thing to get to eventually, that's all.
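The non-uniqueness being argued about is easy to exhibit numerically. Here is a small illustrative sketch (not from the original comment): it builds the mixed state ρ = Σ p_i |ψ_i⟩⟨ψ_i| from two physically different ensembles, an equal mixture of the {0,1} basis states and an equal mixture of the {+,-} basis states, and shows they produce the same density matrix.

```python
import math

def outer(v):
    """Outer product |v><v| for a real 2-vector."""
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

def mix(ensemble):
    """Density matrix of an ensemble given as (probability, state) pairs."""
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for p, v in ensemble:
        op = outer(v)
        for i in range(2):
            for j in range(2):
                rho[i][j] += p * op[i][j]
    return rho

s = 1 / math.sqrt(2)
zero, one = [1.0, 0.0], [0.0, 1.0]   # the {0,1} basis
plus, minus = [s, s], [s, -s]        # the {+,-} basis

rho_z = mix([(0.5, zero), (0.5, one)])
rho_x = mix([(0.5, plus), (0.5, minus)])

# Both ensembles give the maximally mixed state I/2.
print([[round(x, 9) for x in row] for row in rho_z])  # [[0.5, 0.0], [0.0, 0.5]]
print([[round(x, 9) for x in row] for row in rho_x])  # [[0.5, 0.0], [0.0, 0.5]]
```

Since no measurement can distinguish the two preparations, a realist who wants ρ to stand for ignorance about a "true state" has no physical grounds for picking one ensemble over the other.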

**scott_aaronson2** on Quantum Explanations · 2008-04-10T04:50:53.000Z · score: 4 (3 votes) · LW · GW

Tom: Yes, for as long as QM has been around people have tried to hitch doofus ideas about "mind influencing reality" to it -- and for those of us who spend a significant part of our lives fighting such idiocy, it'll be great to see Eliezer bring his considerable didactic skills to the fight.

I was talking about something completely different: namely, the philosophical debate about whether we should regard a quantum state as what's really out there (like a coin), or as our *description* of what's out there (like a probability distribution over coin flips). Neither view implies any ability to change the world just by wishing it, any more than you can bias a coin flip just by changing your probability estimate. But (unless I misread him) Eliezer was promising to come down hard in favor of the former view, and I was pointing to mixed states as the battlefield where the two views really meet in an interesting way.

**scott_aaronson2** on Quantum Explanations · 2008-04-10T03:13:23.000Z · score: 4 (5 votes) · LW · GW

*When I talk about quantum mechanics, I am of course using words and stating my beliefs; but those words and beliefs refer directly to the territory, they are not about my or anyone else's knowledge ... Saying, "This coin has a 50% probability of landing heads", rather than "I assign 50% probability to the coin landing heads", is technically (though rather nitpickingly) a mind projection fallacy; you are talking about your beliefs as if they were directly in the coin.*

The fun part, of course, will be to see how you handle mixed states, where the "map" and the "territory" get scrambled together into a non-uniquely-decomposable linear-algebraic soup...

**scott_aaronson2** on Natural Selection's Speed Limit and Complexity Bound · 2007-11-05T20:39:55.000Z · score: 1 (1 votes) · LW · GW

*Remember, folks, evolution doesn't work for the good of the species, and there's no Evolution Fairy who tries to ensure its own continued freedom of action. It's just a statistical property of some genes winning out over others.*

Right, but if the mutation rate for a given biochemistry is itself relatively immutable, then this might be a case where group selection actually works. In other words, one can imagine RNA, DNA, and other replicators fighting it out in the primordial soup, with the winning replicator being the one with the best mutation properties.

**scott_aaronson2** on Einstein's Arrogance · 2007-09-25T19:54:06.000Z · score: 18 (18 votes) · LW · GW

*The Einstein field equation itself is actually extremely simple:*

G = 8*pi*T

Sure, if we don't mind that G and T take a full page to write out in terms of the derivatives of the metric tensor. By this logic *every* equation is extremely simple -- it simply asserts that A=B for some A,B. :-)
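To unpack the joke: in units with G_N = c = 1, the compact equation G = 8πT abbreviates the tensor equation G_{μν} = 8π T_{μν}, and the Einstein tensor G on the left is itself shorthand for (standard definitions, not from the original comment):

```latex
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu},
\qquad
R_{\mu\nu} = \partial_\lambda \Gamma^{\lambda}_{\mu\nu}
           - \partial_\nu \Gamma^{\lambda}_{\mu\lambda}
           + \Gamma^{\lambda}_{\lambda\sigma}\Gamma^{\sigma}_{\mu\nu}
           - \Gamma^{\lambda}_{\nu\sigma}\Gamma^{\sigma}_{\mu\lambda},
```

where the Christoffel symbols are in turn built from first derivatives of the metric, Γ^λ_{μν} = ½ g^{λσ}(∂_μ g_{νσ} + ∂_ν g_{μσ} - ∂_σ g_{μν}). So the one-line equation is really ten coupled, nonlinear, second-order PDEs in the metric components, which is exactly the "full page" being alluded to.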