Most people will do very bad things, including mob violence, if they are peer-pressured enough.
Shouldn't this be weighed against the good things people do if they are peer-pressured? I think there's value in not conforming, but if all cultures have peer pressure, there needs to be a careful analysis of the pros and cons instead of simply striving for immunity from it.
I'm not sure anybody "just" innately lacks the machinery to be peer-pressured.
My first thought here isn't of autists but of psychopaths.
Specifically, I would love to see a better argument for it being ahead of Helion (if it is actually ahead, which would be a surprise and a major update for me).
I agree with Jeffrey Heninger's response to your comment. Here is a (somewhat polemical) video which illustrates the challenges of Helion's unusual D-He3 approach compared to the standard D-T approach which CFS follows. It illustrates some of Jeffrey's points and makes further claims, e.g. that Helion's current operational proof-of-concept reactor Trenta is far from adequate for scaling up to a productive reactor once safety and regulatory demands are taken into account (though I haven't looked into whether CFS might be affected by this just the same).
For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn't, so you either reject the null, or fail to reject the null, a binary outcome.
Ritualistic hypothesis testing with significance thresholds is mostly used in the social sciences, psychology, and medicine, and not so much in the hard sciences (although arbitrary thresholds like 5 sigma are used in physics to claim the discovery of new elementary particles, they rarely show up in physics papers). Since it requires deliberate effort to get into the mindset of the null ritual, I don't think that technically and scientifically minded people just start thinking like this.
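For concreteness, the decision step of the null ritual the quote describes boils down to something like this (a minimal sketch with simulated data; the threshold of 0.05 is just the conventional choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.4, scale=1.0, size=30)

# The continuous p-value gets collapsed into a binary reject / fail-to-reject call.
t, p = stats.ttest_ind(group_a, group_b)
alpha = 0.05  # the conventional (and arbitrary) significance threshold
print(p, "reject the null" if p < alpha else "fail to reject the null")
```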
I think that the simple explanation that the effect of improving code quality is harder to measure and communicate to management is sufficient to explain your observations. To get evidence one way or another, we could also look at what people do when the incentives are changed. I think that few people are more likely to make small performance improvements than improve code quality in personal projects.
Unfortunately, what I would call the bailey is quite common on Lesswrong. It doesn't take much digging to find quotes like this in the Sequences and beyond:
This is a shocking notion; it implies that all our twins in the other worlds— all the different versions of ourselves that are constantly split off, [...]
Thanks, I see we already had a similar argument in the past.
I think there's a bit of motte and bailey going on with the MWI. The controversy and philosophical questions are about multiple branches / worlds / versions of persons being ontological units. When we try to make things rigorous, only the wave function of the universe remains as a coherent ontological concept. But if we don't have a clear way from the latter to the former, we can't really say clear things about the parts which are philosophically interesting.
I’m reluctant to engage with extraordinarily contrived scenarios in which magical 2nd-law-of-thermodynamics-violating contraptions cause “branches” to interfere.
Agreed. Roland Omnes tries to calculate how big Wigner's measurement apparatus needs to be in order to measure his friend and gets 10^(10^18) degrees of freedom ("The Interpretation of Quantum Mechanics", section 7.8).
But if we are going to engage with those scenarios anyway, then we should never have referred to them as “branches” in the first place, ...
Well, that's one of the problems of the MWI: how do we know when we should speak of branches? Decoherence works very well for all practical purposes but it is a continuous process so there isn't a point in time where a single branch actually splits into two. How can we claim ontology here?
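To make the "continuous process" point concrete, here is a toy sketch (my own illustration with made-up numbers, not a derivation): the interference terms of the system's reduced density matrix get suppressed exponentially in the size of the environment, but they never become exactly zero.

```python
import numpy as np

# A system qubit starts in the superposition (|0> + |1>)/sqrt(2) and gets entangled
# with N environment qubits, each of which ends up in a slightly different state
# depending on the system state. The off-diagonal ("interference") term of the
# system's reduced density matrix is then multiplied by the environment overlap
# <E_0|E_1> = cos(theta)^N, which shrinks exponentially but never reaches zero.
theta = 0.1  # how strongly one environment qubit distinguishes the two system states
for N in (1, 10, 100, 1000):
    overlap = np.cos(theta) ** N
    rho_system = 0.5 * np.array([[1.0, overlap],
                                 [overlap, 1.0]])
    print(N, rho_system[0, 1])  # cross-term shrinks fast but is never exactly zero
```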
I'm not an expert but I would say that I have a decent understanding of how things work on a technical level. Since you are asking very general questions, I'm going to give quite general thoughts.
(1) The central innovation of the blockchain is the proof-of-work mechanism. It is an ingenious idea which tackles a specific problem (finding consensus between possibly adversarial parties in a global setting without an external source of trust). A toy sketch of the mechanism follows after these points.
(2) Since Bitcoin has made the blockchain popular, everybody wants to have the specific problem it allegedly solves but almost nobody does.
(3) Proof-of-work has a certain empirical track record. This is mostly for cryptocurrencies and in the regime where the main profit of the transaction validators is from minting new coins.
(4) Proof-of-work isn't sustainable. The things which it secures (Bitcoin, smart contracts, etc.) are secure only as long as an increasing amount of energy is put into the system. Sure, traditional institutions like the government or a central bank also use a lot of energy but they can't be abandoned as easily as a blockchain because they are monopolies which have proven to be robust over long time scales. Proof-of-work becomes increasingly unstable when the monetary incentive for the people doing the validations goes down.
(5) Other proposed consensus mechanisms (proofs-of-something-else) remove the central innovation of the blockchain. I don't see them as ingenious ideas like proof-of-work but mostly as wishful thinking, an attempt to have one's cake and eat it too. I'm open to changing my mind here but I don't see any evidence yet.
(6) I don't share the optimism that clever technological solutions which bypass trust will lead to a flourishing society. I think the empirical link between interpersonal trust and a flourishing society is strong. Also, insofar as trust in people and institutions is bypassed, it is replaced by trust in code. I think it is worthwhile to spell this out explicitly.
(7) Comparisons with the .com bubble don't seem sensible to me. Bitcoin has been popular for ten years now and I still only see pyramid schemes and no sensible applications of the blockchain. Bitcoin and NFTs aren't used, people invest in them and hold them. Crypto right now is almost completely about the anticipated future value of things. In contrast, when the .com bubble became a bubble there were many websites which were heavily used at the time.
(8) Moxie Marlinspike, the founder of Signal, also makes some interesting points regarding web3: we already have an example of a decentralized system becoming widespread, the internet. Did people take matters into their own hands and run their own servers? No. What happened is the emergence of centralized platforms, and the same thing is already happening with blockchains. I think at least some of the potential people see in blockchains won't be realized because of this dynamic.
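As a toy illustration of point (1): proof-of-work works because finding a nonce that gives a block's hash a prescribed form requires brute force, while checking a proposed nonce is cheap. A minimal sketch (not how Bitcoin does it in detail, just the core idea):

```python
import hashlib
import itertools

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce so that the block's hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # expensive to find ...

nonce = mine("some transactions", difficulty=4)
# ... but cheap for anyone else to verify:
print(nonce, hashlib.sha256(f"some transactions{nonce}".encode()).hexdigest())
```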
Rick Beato has a video about people losing their absolute pitch with age (it seems to happen to everyone eventually). There are a lot of anecdata by people who have experienced this both in the video and in the comments.
Some report that after experiencing a shift in their absolute pitch, all music sounds wrong. Some of them adapted somehow (it's unclear to me how much development of relative abilities was involved) and others report not having noticed that their absolute pitch has shifted. Some report that only after they've lost their absolute pitch completely, they were able to develop certain relative pitch abilities.
Overall, people's reported experiences in the comments vary a lot. I wouldn't draw strong conclusions from them. In any case, I find it fascinating to read about these perceptions.
I am quite skeptical that hearing like a person with absolute pitch can be learned because it seems to be somewhat incompatible with relative pitch.
People with absolute pitch report that if a piece of music is played with a slightly lower or higher pitch, it sounds out of tune. If this feeling stays throughout the piece this means that the person doesn't hear relatively. So even if a relative pitch person would learn to name played notes absolutely, I don't think the hearing experience would be the same.
So I think you can't have both absolute pitch and relative pitch in the full sense. (I do think that you can improve at naming played notes, singing notes correctly without a reference note from outside your body, etc.)
Thanks for this pointer. I might check it out when their website is up again.
Many characteristics have been proposed as significant, for example:
- It's better if fingers have less traveling to do.
- It's better if consecutive taps are done with different fingers or, better yet, different hands.
- It's better if common keys are near the fingers' natural resting places.
- It's better to avoid stretching and overusing the pinky finger, which is the weakest of the five.
Just an anecdotal experience: I, too, have wrist problems. I have tried touch typing with 10 fingers a couple of times and my problems got worse each time. My experience agrees with the point about the pinky above but many consecutive taps with non-pinky fingers on the same hand also make my wrist problems worse. If traveling less means more of those, I prefer traveling more. (But consecutive taps on different hands are good for me.)
Since many consecutive taps with different fingers on the same hand seem to be part of the idea behind all keyboard layouts, I expect the costs of switching from the standard layout to an idiosyncratic one to outweigh the benefits.
For now, I have given up on using all 10 fingers. My current typing system is a 3+1 finger system with a little bit of hawking. I'd like to be able to touch type perfectly but this seems to be quite hard without using 10 fingers. I don't feel very limited by my typing speed, though.
You might also be interested in "General Bayesian Theories and the Emergence of the Exclusivity Principle" by Chiribella et al., which claims that quantum theory is the most general theory which satisfies Bayesian consistency conditions.
By now, there are actually quite a few attempts to reconstruct quantum theory from more "reasonable" axioms besides Hardy's. You can track the references in the paper above to find some more of them.
As you learn more about most systems, the likelihood ratio should likely go down for each additional point of evidence.
I'd be interested to see the assumptions which go into this. As Stuart has pointed out, it's got to do with how correlated the evidence is. And for fat-tailed distributions we probably should expect to be surprised at a constant rate.
Note you can still get massive updates if B' is pretty independent of B. So if someone brings in camera footage of the crime, that has no connection with the previous witness's trustworthiness, and can throw the odds strongly in one direction or another (in equation, independence means that P(B'|H,B)/P(B'|¬H,B) = P(B'|H)/P(B'|¬H)).
Thanks, I think this is the crucial point for me. I was implicitly operating under the assumption that the evidence is uncorrelated which is of course not warranted in most cases.
So if we have already updated on a lot of evidence, it is often reasonable to expect that part of what future evidence can tell us is already included in these updates. I think I wouldn't say that the likelihood ratio is independent of the prior anymore. In most cases, they have a common dependency on past evidence.
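A toy calculation of this (the numbers are entirely made up) in the odds form of Bayes' theorem, posterior odds = prior odds × likelihood ratio:

```python
prior_odds = 1.0   # 1:1 on the hypothesis H before any testimony

# First witness: assume their testimony is 4x more likely under H than under not-H.
odds_after_first = prior_odds * 4.0              # 4:1

# Second piece of evidence, case A: independent of the first (e.g. camera footage),
# so its likelihood ratio is unaffected by the earlier update.
odds_independent = odds_after_first * 4.0        # 16:1

# Case B: strongly correlated with the first witness (they compared notes beforehand),
# so most of what it could tell us is already priced in and its ratio is close to 1.
odds_correlated = odds_after_first * 1.5         # 6:1

print(odds_independent, odds_correlated)
```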
From the article:
At this point, I think I am somewhat below Nate Silver’s 60% odds that the virus escaped from the lab, and put myself at about 40%, but I haven’t looked carefully and this probability is weakly held.
Quite off-topic: what does it mean from a Bayesian perspective to hold a probability weakly vs. confidently? Likelihood ratios for updating are independent of the prior, so a weakly-held probability should update exactly like a confidently-held one. Is there a way to quantify the "strongness" with which one holds a probability?
Thanks for your answer. Part of the problem might have been that I wasn't that proficient with vim. When I reconfigured the clashing key bindings of the IDE I sometimes unknowingly overwrote a vim command which turned out to be useful later on. So I had to reconfigure numerous times which annoyed me so much that I abandoned the approach at the time.
A question for the people who use vim keybindings in IDEs: how do you deal with keybindings for IDE tasks which are not part of vim (like using the debugger, refactoring, code completion, etc.)? The last time I tried to use vim bindings in an IDE, there was quite a bit of overlap with these, so I found myself coming up with compromise systems which didn't work that well because they weren't coherent.
At least for me, I think the question of whether I was buying too much for myself in a situation of limited supplies was more important for the decision than the fear of being perceived as weird. This of course depends on how limited the supplies actually were at the time of buying, but I think it is generally important to distinguish between the shame of possibly profiting at the expense of others and the "pure" weirdness of the action.
We have reason to believe that peptide vaccines will work particularly well here, because we're targeting a respiratory infection, and the peptide vaccine delivery mechanism targets respiratory tissue instead of blood.
Just a minor point: by delivery mechanism, are you talking about inserting the peptides through the nose à la RadVac? If I understand correctly, Werner Stöcker injects his peptide-based vaccine.
You could also turn this question around. If you find it somewhat plausible that self-adjoint operators represent physical quantities, that eigenvalues represent measurement outcomes, and that eigenvectors represent states associated with these outcomes (per the arguments I have given in my other post), one could picture a situation where systems hop from eigenvector to eigenvector through time. From this point of view, continuous evolution between states is the strange thing.
The paper by Hardy I cited in another answer to you tries to make QM as similar to a classical probabilistic framework as possible and the sole difference between his two frameworks is that there are continuous transformations between states in the quantum case. (But notice that he works in a finite-dimensional setting which doesn't easily permit important features of QM like the canonical commutation relations).
There are remaining open questions concerning quantum mechanics, certainly, but I don't really see any remaining open questions concerning the Everett interpretation.
“Valid” is a strong word, but other reasons I've seen include classical prejudice, historical prejudice, dogmatic falsificationism, etc.
Thanks for answering. I didn't find a better word but I think you understood me right.
So you basically think that the case is settled. I don't agree with this opinion.
I'm not convinced of the validity of the derivations of the Born rule (see IV.C.2 of this for some criticism in the literature). I also see valid philosophical reasons for preferring other interpretations (like quantum Bayesianism, aka QBism).
I don't have a strong opinion on what is the "correct" interpretation myself. I am much more interested in what they actually say, in their relationships, and in understanding why people hold them. After all, they are empirically indistinguishable.
Honestly, though, as I mention in the paper, my sense is that most big name physicists that you might have heard of (Hawking, Feynman, Gell-Mann, etc.) have expressed support for Everett, so it's really only more of a problem among your average physicist that probably just doesn't pay that much attention to interpretations of quantum mechanics.
There are other big name physicists who don't agree (Penrose, Weinberg) and I don't think you are right about Feynman (see "Feynman said that the concept of a "universal wave function" has serious conceptual difficulties." from here). Also in the actual quantum foundations research community, there's a great diversity of opinion regarding interpretations (see this poll).
I think it makes more sense to think of MWI as "first many, then even more many," at which point questions of "when does the split happen?" feel less interesting, because the original state is no longer as special. [...] If time isn't quantized, then this has to be spread across continuous space, and so thinking of there being a countable number of worlds is right out.
What I called the "nice ontology" isn't so much about the number of worlds or even countability but about whether the worlds are well-defined. The MWI gives up a unique reality for things. The desirable feature of the "nice ontology" is that the theory tells us what a "version" of a thing is. As we all seem to agree, the MWI doesn't do this.
If it doesn't do this, what's the justification for speaking of different versions in the first place? I think pure MWI only makes sense as "first one, then one". After all, there's just the universal wave function evolving, and pure MWI doesn't give us any reason to take a part of this wave function and say there are many versions of this.
I agree that the question "how many worlds are there" doesn't have a well-defined answer in the MWI. I disagree that it is a meaningless question.
From the bird's-eye view, the ontology of the MWI seems pretty clear: the universal wavefunction is happily evolving (or is it?). From the frog's-eye view, the ontology is less clear. The usual account of an experiment goes like this:
- The system and the observer come together and interact.
- This leads to entanglement and decoherence in a certain basis.
- In the final state, we have a branch for each measurement outcome, i.e. there are now multiple versions of the observer.
This seems to suggest a nice ontology: first there's one observer, then the universe splits and afterwards we have a certain number of versions of the observer. I think questions like "When does the split happen?" and "How many versions?" are important because they would have well-defined answers if the nice ontology was tenable.
Unfortunately it isn't, so the ontology is muddled. We have to use terms like "approximately zero" and "for all practical purposes", which takes us most of the way back to giving the person who determines which approximations are appropriate and what is practical - aka the observer - an important part in the whole affair.
There isn't a sharp line for when the cross-terms are negligible enough to properly use the word "branch", but there are exponential effects such that it's very clearly appropriate in the real-world cases of interest.
I agree that it isn't a problem for practical purposes but if we are talking about a fundamental theory about reality shouldn't questions like "How many worlds are there?" have unambiguous answers?
Right, but (before reading your post) I had assumed that the eigenvectors somehow "popped out" of the Everett interpretation.
This is a bit of a tangent but decoherence isn't exclusive to the Everett interpretation. Decoherence is itself a measurable physical process independent of the interpretation one favors. So explanations which rely on decoherence are part of all interpretations.
I mean in the setup you describe there isn't any reason why we can't call the "state space" the observer space and the observer "the system being studied" and then write down the same system from the other point of view...
In the derivations of decoherence, you make certain approximations which, loosely speaking, depend on the environment being big relative to the quantum system. If you swap the roles, these approximations aren't valid anymore. I'm not sure if we are on the same page regarding decoherence, though (see my other reply to your post).
What goes wrong if we just take our "base states" as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?
You might be interested in Lucien Hardy's attempt to find a more intuitive set of axioms for QM compared to the abstractness of the usual presentation: https://arxiv.org/abs/quant-ph/0101012
This
Ok, now comes the trick: we assume that observation doesn't change the system
and this
I think the basic point is that if you start by distinguishing your eigenfunctions, then you naturally get out distinguished eigenfunctions.
don't sound correct to me.
The basis in which the diagonalization happens isn't put in at the beginning. It is determined by the nature of the interaction between the system and its environment. See "environment-induced superselection", or "einselection" for short.
I mean I could accept that the Schrödinger equation gives the evolution of the wave-function, but why care about its eigenfunctions so much?
I'm not sure if this will be satisfying to you but I like to think about it like this:
- Experiments show that the order of quantum measurements matters. The mathematical representation of the physical quantities needs to take this into account. Matrices are one simple kind of non-commutative object.
- If physical quantities are represented by matrices, the possible measurement outcomes need to be encoded in there somehow. They also need to be real. Both conditions are satisfied by the eigenvalues of self-adjoint matrices.
- Experiments show that if we immediately repeat a measurement, we get the same outcome again. So if eigenvalues represent measurement outcomes, the state of the system after the measurement must be related to them somehow. Taking the eigenvectors of the matrix as these post-measurement states is a simple realization of this.
This isn't a derivation but it makes the mathematical structure of QM somewhat plausible to me.
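A small numerical check of the three points above, using the Pauli matrices as the simplest self-adjoint observables for a spin-1/2 system:

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# 1) The order of operations matters: the matrices don't commute.
print(np.allclose(sigma_x @ sigma_z, sigma_z @ sigma_x))  # False

# 2) Measurement outcomes are real: eigenvalues of a self-adjoint matrix are real.
eigenvalues, eigenvectors = np.linalg.eigh(sigma_z)
print(eigenvalues)  # [-1.  1.]

# 3) Repeatability: take the post-measurement state to be the eigenvector belonging
#    to the observed eigenvalue; measuring sigma_z again then yields the same outcome.
post_state = eigenvectors[:, 1]      # eigenvector with eigenvalue +1
print(sigma_z @ post_state)          # equals +1 * post_state
```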
Do you see any technical or conceptual challenges which the MWI has yet to address or do you think it is a well-defined interpretation with no open questions?
What's your model for why people are not satisfied with the MWI? The obvious ones are 1) dislike for a many worlds ontology and 2) ignorance of the arguments. Do you think there are other valid reasons?
Like you, people aren't surprised by the outcome of your experiment. The surprising thing happens only if you consider more complicated situations. The easiest situations where surprising things happen are these two:
1) Measure the spins of the two entangled particles in three suitably different directions. From the correlations of the observed outcomes you can calculate a number known as the CHSH-correlator S. This number is larger than any model where the individual outcomes were locally predetermined permits. An accessible discussion of this is given in David Mermin's Quantum Mysteries for Anyone. The best discussion of the actual physics I know of is by Travis Norsen in his book Foundations of Quantum Mechanics.
2) Measure the spins of three entangled particles in two suitably different directions. There, you get a certain combination of outcomes which is impossible in any classical model. So you don't need statistics, just a single observation of the classically impossible event. This is discussed in David Mermin's Quantum Mysteries Revisited.
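To make the numbers in 1) concrete: for the singlet state, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between spin measurements along directions at angles a and b. Plugging in the standard CHSH angle choices gives |S| = 2*sqrt(2), above the bound of 2 that any locally predetermined-outcome model must satisfy:

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation between spin measurements at angles a and b."""
    return -np.cos(a - b)

# Standard CHSH measurement settings (in radians).
a1, a2 = 0.0, np.pi / 2            # first particle's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # second particle's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.83, while local hidden-variable models give |S| <= 2
```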
No. The property which you are describing is not "mixedness" (technical term: "purity"). That the state vector in question can't be written as a tensor product of state vectors makes it an *entangled* state.
Mixed states are states which cannot be represented by *any* state vector. You need a density matrix in order to write them down.
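A quick numerical illustration of the distinction: the Bell state is entangled but still pure (purity Tr(rho^2) = 1), while a 50/50 classical mixture of |00> and |11> is a mixed state (purity 1/2) which no state vector can represent:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket00 = np.kron(ket0, ket0)
ket11 = np.kron(ket1, ket1)

# Entangled but pure: the Bell state (|00> + |11>)/sqrt(2).
bell = (ket00 + ket11) / np.sqrt(2)
rho_bell = np.outer(bell, bell)

# Mixed: a 50/50 classical mixture of |00><00| and |11><11|.
rho_mixed = 0.5 * np.outer(ket00, ket00) + 0.5 * np.outer(ket11, ket11)

def purity(rho):
    return np.trace(rho @ rho).real

print(purity(rho_bell))   # 1.0 -> pure state (though entangled)
print(purity(rho_mixed))  # 0.5 -> mixed state, needs a density matrix
```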
Smolin's book has inspired me to begin working on a theory of quantum gravity. I'll need to learn new things like quantum field theory.
If you don't know Quantum Field Theory, I don't see how you can possibly understand why General Relativity and Quantum Theory are difficult to reconcile. If true, how are you able to work on the solution to a problem you don't understand?
In Smolin's view, the scientific establishment is good at making small iterations to existing theories and bad at creating radically new theories.
I agree with this.
It's therefore not implausible that the solution to quantum gravity could come from a decade of solitary amateur work by someone totally outside the scientific establishment.
To me, this sounds very implausible. Although the scientific establishment isn't geared towards creating radically new theories, I think it is even harder to create such ideas from the outside. I agree that most researchers in academia are narrowly specialized and not interested in challenging widely shared assumptions, but the people who do are also in academia. I think that you focus too much on the question-the-orthodoxy part. In order to come up with something useful, you need to develop a deep understanding and to bounce ideas around in a fertile environment. I think that both have become increasingly difficult for people outside of academia because of the complexity of the concepts involved.
The evidence you cite doesn't seem to support your assertion: although Rovelli holds some idiosyncratic ideas, his career path led him through typical prestigious institutions. So he certainly cannot be considered to stand "totally outside the scientific establishment".
I think so, too, but I don't know it (Eliezer's Sequence on QM is still on my reading list). Given the importance people around here put on Bayes theorem, I find it quite surprising that the idea of a quantum generalization -which is what QBism is about- isn't discussed here apart from a handful of isolated comments. Two notable papers in this direction are
Einstein was a realist who was upset that the only interpretation available to him was anti-realist. Saying that he took the wavefunction as object of knowledge is technically true, ie, false.
I agree that my phrasing was a bit misleading here. Reading it again, it sounds like Einstein wasn't a realist, which of course is false. For him, QM was a purely statistical theory which needed to be supplemented by a more fundamental realistic theory (a view which was proven untenable only in 2012 by Pusey, Barrett and Rudolph).
Thanks for conceding that the Copenhagen interpretation has meant many things. Do you notice how many people deny that? It worries me.
I don't know how many people really deny this. Sure, people often talk about "the" Copenhagen interpretation but most physicists use it only as a vague label because they don't care much about interpretations. Who do you have in mind denying this and what exactly worries you?
I don't think that the QM example is like the others. Explaining this requires a bit of detail.
From section V.:
My understanding of the multiverse debate is that it works the same way. Scientists observe the behavior of particles, and find that a multiverse explains that behavior more simply and elegantly than not-a-multiverse.
That's not an accurate description of the state of affairs.
In order to calculate correct predictions for experiments, you have to use the probabilistic Born rule (and the collapse postulate for sequential measurements). That these can be derived from the Many Worlds interpretation (MWI) is a conjecture which hasn't been proved in a universally accepted way.
So we have an interpretation which works but is considered inelegant by many, and we have an interpretation which is simple and elegant but is only conjectured to work. Considering the nature of the problems with the proofs, it is questionable whether the MWI can retain its elegant simplicity if it is made to work (see below).
One (doubtless exaggerated) way I’ve heard multiverse proponents explain their position is like this: in certain situations the math declares two contradictory answers – in the classic example, Schrodinger’s cat will be both alive and dead. But when we open the box, we see only a dead cat or an alive cat, not both. Multiverse opponents say “Some unknown force steps in at the last second and destroys one of the possibility branches”. Multiverse proponents say “No it doesn’t, both possibility branches happen exactly the way the math says, and we end up in one of them.”
What I find interesting is that Copenhagen-style interpretations looked ugly to me at first but got more sensible the more I learned about them. With most other interpretations it is the reverse: initially, they looked very compelling, but the intuitive pictures are often hard to make rigorous. For example, if you try to describe the branching process mathematically, it isn't possible to say when exactly the branches are splitting, or even that they are splitting in an unambiguous way at all. Without introducing something like the observer, who sets a natural scale for when it is okay to approximate certain values by zero, it is very difficult to speak of different worlds consistently. But then the simplicity of the MWI is greatly reduced and the difference to a Copenhagenish point of view is much more subtle.
Generally, regarding the interpretation of QM, there are two camps: realists who take the wave function as a real physical object (Schrödinger, Bohm, Everett) and people who take the wavefunction as an object of knowledge (Bohr, Einstein, Heisenberg, Fuchs).
If the multiverse opponent describes the situation involving "some unknown force" he is also in the realist camp and not a proponent of a Copenhagenish position. The most modern Copenhagenish position would be QBism which asserts "whenever I learn something new by means of a measurement, I update". From this point of view, QM is a generalization of probability theory, the wavefunction (or probability amplitude) is the object of knowledge which replaces ordinary probabilities, and the collapse rule is a generalized form of Bayesian updating. That doesn't seem less sensible to me than your description of the multiverse proponent. Of course, there's also a bullet to bite here: the abandonment of a mathematical layer below the level of (generalized) probabilities.
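Here is how I would illustrate the analogy numerically (my framing of the standard projective update rule, not a claim about QBism's full machinery): classical conditioning renormalizes the probabilities consistent with the observed event, and the collapse rule rho -> P rho P / Tr(P rho P) does the analogous thing for a density matrix.

```python
import numpy as np

# Classical Bayesian conditioning: condition a distribution over {0, 1, 2, 3}
# on the event "the outcome is even".
p = np.array([0.1, 0.2, 0.3, 0.4])
event = np.array([1.0, 0.0, 1.0, 0.0])      # indicator of the event
p_conditioned = (p * event) / (p * event).sum()

# Quantum analogue: update a density matrix after observing the outcome
# associated with the projector P (the Lueders rule).
rho = np.array([[0.5, 0.4], [0.4, 0.5]])    # a valid single-qubit state
P = np.array([[1.0, 0.0], [0.0, 0.0]])      # projector onto |0>
rho_updated = P @ rho @ P / np.trace(P @ rho @ P)

print(p_conditioned)   # [0.25 0.   0.75 0.  ]
print(rho_updated)     # [[1. 0.] [0. 0.]]
```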
The important point is that this is not about which position is simpler than the other but about a deep divide in the philosophical underpinnings of science.
Taking this exaggerated dumbed-down account as exactly right, this sounds about as hard as the dinosaurs-vs-Satan example, in terms of figuring out which is more Occam’s Razor compliant. I’m sure the reality is more nuanced, but I think it can be judged by the same process. Perhaps this is the kind of reasoning that only gets us to a 90% probability there is a multiverse, rather than a 99.999999% one. But I think determining that theories have 90% probability is a reasonable scientific thing to do.
As per what I have written above, I think that there's a crucial difference between the examples of the fossils and the sphinx on the one hand and the interpretation of QM on the other. Which interpretation of QM one prefers is connected to one's position on deep philosophical questions like "Is reductionism true?", "Is Nature fundamentally mathematical?", "What is consciousness?", etc. So the statement "[there's a] 90% probability there is a multiverse" is connected to statements of the form "there's a 90% probability that reductionism is true". Whether such statements are meaningful seems much more questionable to me than in the case of your other examples.
Does the book talk about schizophrenia? I'm a bit skeptical that coherence therapy and IFS can be used to heal it but I'm quite interested in hearing your thoughts about schizophrenia in relation to subagent models.
Thanks!