## Comments

**zaq** on The Quantum Physics Sequence · 2017-05-24T01:38:25.956Z · score: 0 (0 votes) · LW · GW

I see three distinct issues with the argument you present.

First is line 1 of your reasoning. A finite universe does not entail a finite configuration space. I think the cleanest way to see this is through superposition. If |A> and |B> are two orthogonal states in the configuration space, then so are all states of the form a|A> + b|B>, where a and b are complex numbers with |a|^2 + |b|^2 = 1. There are infinitely many such pairs of numbers, so even from just two orthogonal states we can build an infinite configuration space. That said, there's something called Poincaré recurrence which is sort of what you want here, except...
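The superposition point is easy to check numerically. A toy sketch (the 2-dimensional space and the theta parameterization are just illustrative choices of mine):

```python
import math

# Two orthonormal basis states of a toy 2-dimensional configuration space,
# represented as complex amplitude vectors.
A = (1 + 0j, 0 + 0j)
B = (0 + 0j, 1 + 0j)

# A one-parameter family of superpositions a|A> + b|B> with |a|^2 + |b|^2 = 1:
# every distinct theta gives a distinct, properly normalized state, so two
# orthogonal states already generate a continuum of configurations.
states = []
for k in range(5):
    theta = k * math.pi / 8
    a, b = math.cos(theta), math.sin(theta)
    psi = tuple(a * x + b * y for x, y in zip(A, B))
    norm = sum(abs(c) ** 2 for c in psi)  # <psi|psi>
    assert abs(norm - 1.0) < 1e-12
    states.append(psi)

assert len(set(states)) == 5  # five distinct normalized states
```

Sampling five values of theta is arbitrary; any real theta works, which is the continuum the comment describes.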

Line 4 is in error. Even if you did have a finite configuration space, a non-static point could just evolve in a loop, which need not cover every element of the configuration space. Two distinct points could evolve in loops that never go anywhere near each other.

Finally, even if you could guarantee that two distinct points would each eventually evolve through some common point A, line 6 does not necessarily follow because it is technically possible to have a situation where both evolutions do in fact reach A infinitely many times, but never simultaneously. Admittedly though, it would require fine-tuning to ensure that two initially-distinct states *never* hit "nearly A" at the same time, which might be enough.

**zaq** on The Cartoon Guide to Löb's Theorem · 2017-05-12T16:38:47.197Z · score: 0 (0 votes) · LW · GW

Wow. I've never run into a text using "we have" as assuming something's provability, rather than assuming its truth.

So the application of the deduction theorem is just plain wrong then? If what you actually get via Löb's theorem is ◻((◻C)->C) -> ◻C, then the deduction theorem does *not* give the claimed ((◻C)->C)->C, but instead gives ◻((◻C)->C)->C, from which the next inference does not follow.

**zaq** on Probability is Subjectively Objective · 2016-04-13T04:57:35.166Z · score: 1 (1 votes) · LW · GW

The issue is not want of an explanation for the phenomenon, away or otherwise. We have an explanation of the phenomenon, in fact we have several. That's not the issue. What I'm talking about here is the inherent, not-a-result-of-my-limited-knowledge probabilities that are a part of *all* explanations of the phenomenon.

Past me apparently insisted on trying to explain this in terminology that works well in collapse or pilot-wave models, but not in many-worlds models. Sorry about that. To try and clear this up, let me go through a "guess the beam-splitter result" game in many-worlds terminology and compare that to a "guess the trillionth digit of pi" game in the same terminology.

Aside: Technically it's the amplitudes that split in many-worlds models, and somehow these amplitudes are multiplied by their complex conjugates to get you answers to questions about guessing games (*no* model has an explanation for that part). As is common around these parts, I'm going to ignore this and talk as if it's the probabilities themselves that split. I guess nobody likes writing "square root" all the time.

Set up a 50/50 beam-splitter. Put a detector in one path and block the other. Write your choice of "Detected" or "Not Detected" on a piece of paper. Now fire a single photon. In Everett-speak, half of the yous end up in branches where the photon's path matches your guess, and half of the yous don't. The 50/50 nature of this split remains even if you know the exact quantum state of the photon beforehand. Furthermore, the yous that try to use all their physics knowledge to predict their observations have no larger a proportion of success than the yous that make their predictions by flipping a coin, always guessing "Detected," or employing literally *any* other strategy that generates valid guesses. The 50/50 value of this branching process is *completely* decoupled from your predictions, no matter what information you use to make those predictions.

Compare this to the process of guessing the trillionth digit of pi. If you make your guess by rolling a quantum die, then 1 out of 10 yous will end up in a branch where your guess matches the actual trillionth digit of pi. If you instead use those algorithms you know to calculate a guess, and you code/run them correctly, then basically all of the yous end up in a branch where your guess is correct.

We now see the fundamental difference. Changing your guessing strategy results in different correct/incorrect branching ratios for the "guess the trillionth digit of pi" game but *not* for the "guess the beam-splitter result" game. This is the Everett-speak version of saying that the beam-splitter's 50/50 odds is a property of the universe while the trillionth digit of pi's 1/10 odds is a function of our (current) ignorance. You can opt to replace "odds" with "branching ratios" and declare that there is no probability of any kind, but that just seems like semantics to me. In particular, the example of the trillionth digit of pi should not be what prompts this decision. Even in the many-worlds model there's still a fundamental difference between that and the quantum processes that physicists cite as intrinsically random.
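The difference between the two games can be sketched with a toy Monte Carlo (the specific digit below is a placeholder of mine, not the actual trillionth digit of pi):

```python
import random

random.seed(0)
TRIALS = 100_000

# Beam-splitter game: the outcome is irreducibly 50/50, so every guessing
# strategy lands near a 50% success ratio.
strategies = {
    "always 'Detected'": lambda: "Detected",
    "coin flip": lambda: random.choice(["Detected", "Not Detected"]),
}
for name, guess in strategies.items():
    hits = sum(guess() == random.choice(["Detected", "Not Detected"])
               for _ in range(TRIALS))
    assert abs(hits / TRIALS - 0.5) < 0.01, name

# Digit-guessing game: the success ratio DOES depend on strategy.
hidden_digit = 7  # placeholder for "the trillionth digit of pi"
random_hits = sum(random.randrange(10) == hidden_digit for _ in range(TRIALS))
computed_hits = TRIALS  # an algorithm that computes the digit always matches
assert abs(random_hits / TRIALS - 0.1) < 0.01
```

Swapping strategies moves the digit game's ratio between ~10% and 100%, while nothing moves the beam-splitter game off 50%.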

**zaq** on Newcomb's Problem and Regret of Rationality · 2016-04-11T21:25:11.468Z · score: 0 (0 votes) · LW · GW

I two-box.

Three days later, "Omega" appears in the sky and makes an announcement. "Greetings, earthlings. I am sorry to say that I have lied to you. I am actually Alpha, a galactic superintelligence who hates that Omega asshole. I came to predict your species' reaction to my arch-nemesis Omega, and I must say that I am disappointed. So many of you chose the obviously-irrational single-box strategy that I must decree your species unworthy of this universe. Goodbye."

Giant laser beam then obliterates earth. I die wishing I'd done more to warn the world of this highly-improbable threat.

TLDR: I don't buy this post's argument that I should become the type of agent that sees one-boxing on Newcomb-like problems as rational. It is trivial to construct any number of no-less plausible scenarios where a superintelligence descends from the heavens and puts a few thousand people through Newcomb's problem before suddenly annihilating those who one-box. The presented argument for becoming the type of agent that Omega predicts will one-box can be equally used to argue for becoming the type of agent that Alpha predicts will two-box. Why then should it sway me in either direction?

**zaq** on Forcing Anthropics: Boltzmann Brains · 2016-02-05T00:14:59.824Z · score: 0 (0 votes) · LW · GW

"Why did the universe seem to start from a condition of low entropy?"

I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.

Alternative hypothesis: The universe began in a state of maximal entropy. This maximum value was "low" compared to present day because the early universe was small. As the universe expands, its maximum entropy grows. Its realized entropy also grows, just not as fast as its maximal entropy.

**zaq** on An Intuitive Explanation of Solomonoff Induction · 2015-10-22T23:13:25.540Z · score: 0 (1 votes) · LW · GW

"Specifically, going between two universal machines cannot increase the hypothesis length any more than the length of the compiler from one machine to the other. This length is fixed, independent of the hypothesis, so the more data you use, the less this difference matters."

This doesn't completely resolve my concern here, as there are infinitely many possible Turing machines. If you pick one and I'm free to pick any other, is there a bound on the length of the compiler? If not, then I don't see how the compiler length placing a bound on any specific change in Turing machine makes the problem of which machine to use irrelevant.

To be clear: I am aware that starting with different machines, the process of updating on shared observations will eventually lead us to similar distributions even if we started with wildly different priors. My concern is that if "wildly different" is unbounded then "eventually" might also be unbounded even for a fixed value of "similar." If this does indeed happen, then it's not clear to me how Solomonoff induction does anything more useful than "Pick your favorite normalized distribution without any 0s or 1s and then update via Bayes."
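The convergence claim can be illustrated with a toy grid model: two sharply disagreeing priors over a coin's bias (stand-ins for two different universal machines) updated on the same data. The grid, the exponent 8, and the 2000-flip horizon are all arbitrary choices of mine:

```python
import random

random.seed(1)
GRID = [i / 100 for i in range(1, 100)]   # hypotheses: coin bias p

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

# Two wildly different priors over the bias.
prior_a = normalize([p ** 8 for p in GRID])        # heavily favors high bias
prior_b = normalize([(1 - p) ** 8 for p in GRID])  # heavily favors low bias

post_a, post_b = prior_a[:], prior_b[:]
for _ in range(2000):                     # shared observations of a fair coin
    heads = random.random() < 0.5
    like = GRID if heads else [1 - p for p in GRID]
    post_a = normalize([w * l for w, l in zip(post_a, like)])
    post_b = normalize([w * l for w, l in zip(post_b, like)])

def l1(u, v):
    return sum(abs(x - y) for x, y in zip(u, v))

gap_before, gap_after = l1(prior_a, prior_b), l1(post_a, post_b)
assert gap_after < gap_before / 4   # shared evidence pulls posteriors together
```

The worry in the comment is exactly that nothing here bounds how many observations the shrinkage takes when the initial gap is allowed to be arbitrarily extreme.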

Edit: Also thanks for the intro. It's a lot more accessible than anything else I've encountered on the topic.

**zaq** on Probability is Subjectively Objective · 2015-10-01T20:27:32.326Z · score: 0 (0 votes) · LW · GW

You've only moved the problem down one step.

Five years ago I sat in a lab with a beam-splitter and a single-photon multiplier tube. I watched as the SPMT clicked half the time and didn't click half the time, with no way to predict which I would observe. You're claiming that the tube clicked every time, and that the part of me that noticed one half is very disconnected from the part of me that noticed the other half. The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his memory right now.

Take the me sitting here right now, with the memory of the specific half of the clicks he has right now. As far as we understand physics, he *can't* postdict which memory that should have been. Even in your model, he can postdict that there will be many branches of him with each possible memory, but he can't postdict *which* of those branches he'll be - only the probability of him being any one of the branches.

**zaq** on 2014 Less Wrong Census/Survey · 2014-10-28T01:07:01.750Z · score: 27 (27 votes) · LW · GW

Did the survey, except digit ratio due to lack of precision measuring devices.

As for feedback, I had some trouble interpreting a few of the questions. There were some times when you defined terms like human biodiversity, and I agreed with some of the claims in the definition but not others; since I had no real way to weight the claims by importance, it was difficult for me to turn my conclusions into a single confidence measurement. I also had no idea whether the best-selling computer game question was supposed to account for inflation or general growth of the videogame market, nor whether we were measuring in terms of copies sold or revenue earned or something else entirely, nor whether console games or games that "sell" for $0 counted. I ended up copping out by listing a game that is technically included in a bit of software I knew sold very well for its time (and not for free), but the software was not sold as a computer game.

Also, a weird thing happened with the calibration questions. When I was very unsure which of a large number of possible answers was correct, and especially if I wasn't even sure how many possible answers there were, I found myself wanting to write an answer that was obviously impossible (like writing "Mars" for Obama's birth state) and putting a 0 for the calibration. I didn't actually do this, but it sure was tempting.

**zaq** on Probability is Subjectively Objective · 2014-10-27T23:08:57.538Z · score: 0 (0 votes) · LW · GW

The Many Physicists description never talked about the electron only going one way. It talked about detecting the electron. There's no metaphysics there, only experiment. Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time. You may say that the electron goes both ways every time, but we still only have the detector firing half the time. We also cannot predict which half of the trials will have the detector firing and which won't. And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability is NOT coming from ignorance of hidden properties or variables but from the fundamental way the universe works.

**zaq** on Occam's Razor · 2014-04-15T18:22:37.316Z · score: 0 (0 votes) · LW · GW

I don't think this is what's actually going on in the brains of most humans.

Suppose there were ten random people who each told you that gravity would be suddenly reversing soon, but each one predicted a different month. For simplicity, person 1 predicts the gravity reversal will come in 1 month, person 2 predicts it will come in 2 months, etc.

Now you wait a month, and there's no gravity reversal, so clearly person 1 is wrong. You wait another month, and clearly person 2 is wrong. Then person 3 is proved wrong, as is person 4 and then 5 and then 6 and 7 and 8 and 9. And so when you approach the 10-month mark, you probably aren't going to be expecting a gravity-reversal.

Now, do you not suspect the gravity-reversal at month ten simply because it's not as simple as saying "there will never be a gravity reversal," or is your dismissal substantially motivated by the fact that the claim type-matches nine other claims that have already been disproven? I think that in practice most people end up adopting the latter approach.

**zaq** on Newcomb's Problem and Regret of Rationality · 2014-04-14T20:57:55.624Z · score: 0 (0 votes) · LW · GW

Suppose my decision algorithm for the "both boxes are transparent" case is to take only box B if and only if it is empty, and to take both boxes if and only if box B has a million dollars in it. How does Omega respond? No matter how it handles box B, its implied prediction will be wrong.

Perhaps just as slippery, what if my algorithm is to take only box B if and only if it contains a million dollars, and to take both boxes if and only if box B is empty? In this case, anything Omega predicts will be accurate, so what prediction does it make?

Come to think of it, I could implement the second algorithm (and maybe the first) if a million dollars weighs enough compared to the boxes. Suppose my decision algorithm outputs: "Grab box B and test its weight, and maybe shake it a bit. If it clearly has a million dollars in it, take only box B. Otherwise, take both boxes." If that's my algorithm, then I don't think the problem actually tells us what Omega predicts, and thus what outcome I'm getting.

**zaq** on A Rationalist's Account of Objectification? · 2014-03-20T16:30:28.213Z · score: 6 (6 votes) · LW · GW

The problem isn't objectification of women, it's a lack of non-objectified female characters.

Men are objectified a *lot* in media. As a simple example, the overwhelming majority of mooks are male, and these characters exist solely to be mowed down so the audience can see how awesome the hero(ine) is (or sometimes how dangerous the villain is). They are hapless, often unthinking and with basically no backstory to speak of. Most of the time they aren't even given names. So why doesn't this common male objectification bring outrage?

I think the reason is that there are also plenty of male characters who aren't objectified. Male characters with clear agency abound in fiction, far more so than female characters. And this way, male viewers can identify with the agency-bearing male characters, and the objectified mooks become far less problematic.

The issue isn't with there merely being a bunch of objectified female characters. The issue is that until very recently, objectified characters were pretty much all that women got. If we get a healthy number of non-objectified female characters with clear agency, who obtain value in a myriad of ways (and not just by being sexy), then the objectified ones won't be nearly as problematic.

**zaq** on 2013 Less Wrong Census/Survey · 2013-11-22T22:23:02.704Z · score: 28 (28 votes) · LW · GW

Took the survey. I definitely did have an IQ test when I was a kid, but I don't think anyone ever told me the results and if they did I sure don't remember it.

Also, as a scientist I counted my various research techniques as new methods that help make my beliefs more accurate, which means I put something like 2/day for trying them and 1/week for them working. In hindsight I'm guessing this interpretation is not what you meant, and that science in general might count as ONE method altogether.

**zaq** on Can You Prove Two Particles Are Identical? · 2013-11-14T23:14:03.225Z · score: 0 (0 votes) · LW · GW

But there's also the observed matter-antimatter asymmetry. Observations strongly indicate that right now we have a lot more electrons than positrons. If it was just one electron going back and forth in time (and occasionally being a photon), we'd expect at most one extra electron.

Not to mention the fact that positrons = electrons going backwards in time only works if you ignore gravity.

**zaq** on Can You Prove Two Particles Are Identical? · 2013-11-14T23:10:48.737Z · score: 0 (0 votes) · LW · GW

There's also the observed matter-antimatter asymmetry. Even if you want to argue that virtual electrons aren't real and thus don't count, it still seems to be the case that there are a lot more electrons than positrons. If it was just one electron going back and forth in time, we'd expect at most one extra electron.

Not to mention the fact that positrons = electrons going backwards in time only works if you ignore gravity.

**zaq** on Timeless Identity · 2013-10-10T22:32:03.495Z · score: 2 (4 votes) · LW · GW

Eliezer, why no mention of the no-cloning theorem?

Also, some thoughts this has triggered:

Distinguishability can be shown to exist for some types of objects in just the same way that it can be shown to not exist for electrons. Flip two coins. If the coins are indistinguishable, then the HT state is the same as the TH state, and you only have three possible states. But if the coins are distinguishable, then HT is not TH, and there are four possible states. You can experimentally verify that the probability obeys the latter situation, and not the former. And of course, you can experimentally verify that electron pairs obey the former situation, and not the latter. This is probably just because the coins are qualitatively distinct, while the electrons are not.
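The state-counting is simple enough to enumerate. A sketch, modeling distinguishability as ordered pairs vs. multisets:

```python
from itertools import product

# Distinguishable coins: HT and TH are different outcomes -> 4 states,
# so P(one head, one tail) = 2/4 = 1/2.
distinguishable = set(product("HT", repeat=2))
assert len(distinguishable) == 4

# Indistinguishable objects: only the multiset of faces matters -> 3 states
# {HH, HT, TT}, and a uniform distribution over them would give 1/3 instead.
indistinguishable = {tuple(sorted(pair)) for pair in product("HT", repeat=2)}
assert len(indistinguishable) == 3
```

Measuring whether the mixed outcome shows up with frequency 1/2 or 1/3 is the experimental test the comment describes.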

But it seems that if you did make a quantum copy (no-cloning theorem be damned!) then after a bit of interaction with the different environments, the two would become distinguishable (on the basis of developing different qualitative identities) and start behaving more like the coins than the electrons. In fact, if you're actually using the lightspeed limit then the reconstructed you would be several years younger, and immediately distinguishable from what the scanned you has since evolved into. At the time of reconstruction, the two are already acting like coins and not electrons. Does this break the argument? I'm not really sure, because the reconstructed you at the time of reconstruction would still be indistinguishable from the you at the time of scanning, if you could somehow get them both around at the same time.

Bonus! The reconstructed you could be seen to have a very qualitatively different time-evolution. The scanned you evolves throughout its entire history via a Hamiltonian which itself changes continuously as scanned-you moves continuously through your environment. Reconstructed you, however, has a clear discontinuity in its Hamiltonian at the time of reconstruction (the state is effectively instantly moved from one environment into a completely different environment). The state of the reconstructed you would still evolve continuously, it would just have a discontinuous derivative. So I'm not really sure if reconstructed you would fail to pass the bar of having a "continuity of identity" that a lot of people talk about when dealing with the concept of self. My gut says no, but I'm not sure why.

**zaq** on Timeless Identity · 2013-10-10T21:58:04.992Z · score: 1 (1 votes) · LW · GW

Okay, we need to be really careful about this.

If you sign up for cryonics at time T1, then the not-signed-up branch has lower amplitude after T1 than it had before T1. But this is very different from saying that the not-signed up branch has lower amplitude after T1 than it would have had after T1 if you had not signed up for cryonics at T1. In fact, the latter statement is necessarily false if physics really is timeless.

I think this latter point is what the other posters are driving at. It is true that if there is a branch at T1 where some yous go down a path where they sign up and others don't, then the amplitude for not-signed-up is lower after T1. But this happens even if *this particular you* doesn't go down the signed-up branch. What matters is that the branch point occurs, not which one any specific you takes.

In other words, amplitude is always being seeped from the not-signed-up branch, even if some particular you keeps not leaving that branch.

**zaq** on Pascal's Mugging: Tiny Probabilities of Vast Utilities · 2013-09-27T17:11:40.565Z · score: 1 (3 votes) · LW · GW

Edit: Looks like I was assuming probability distributions for which Lim (Y -> infinity) of Y*P(Y) is well defined. This turns out to be monotonic series or some similar class (thanks shinoteki).

I think it's still the case that a probability distribution that would lead to TraderJoe's claim of P(Y)*Y tending to infinity as Y grows would be un-normalizable. You can of course have a distribution for which this limit is undefined, but that's a different story.

**zaq** on Beauty quips, "I'd shut up and multiply!" · 2013-08-05T23:45:01.343Z · score: 0 (0 votes) · LW · GW

You can have a credence of 1/2 for heads in the absence of which-day knowledge, but for consistency you will also need P(Heads | Monday) = 2/3 and P(Monday) = 3/4. Neither of these match frequentist notions unless you count each awakening after a Tails result as half a result (in which case they both match frequentist notions).
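A quick consistency check of those numbers (exact arithmetic, with the Tails branch's probability split across its two awakenings):

```python
from fractions import Fraction as F

# Halfer bookkeeping: P(Heads) = 1/2; on Tails, each awakening
# (Monday, Tuesday) carries half of that branch's probability.
p = {
    ("Heads", "Monday"): F(1, 2),
    ("Tails", "Monday"): F(1, 4),
    ("Tails", "Tuesday"): F(1, 4),
}
assert sum(p.values()) == 1

p_monday = p[("Heads", "Monday")] + p[("Tails", "Monday")]
p_heads_given_monday = p[("Heads", "Monday")] / p_monday

assert p_monday == F(3, 4)                # P(Monday) = 3/4
assert p_heads_given_monday == F(2, 3)    # P(Heads | Monday) = 2/3
```

The "half a result" weighting is exactly the 1/4 + 1/4 split in the table above.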

**zaq** on Why Are Individual IQ Differences OK? · 2013-07-28T17:05:26.385Z · score: 1 (1 votes) · LW · GW

With individual differences, people are being judged as individuals, and on the basis of their individual capabilities.

With racial differences, people are being judged as members of a race, and not on the basis of their individual capabilities.

At least, that's the fear.

**zaq** on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-18T04:50:21.431Z · score: 0 (0 votes) · LW · GW

But what numbers are you allowed to start with on the computation? Why can't I say that, for example, 12,345,346,437,682,315,436 is one of the numbers I can do computation from (as a starting point), and thus it has extremely small complexity?

**zaq** on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-17T17:34:58.349Z · score: 0 (0 votes) · LW · GW

I'm not familiar with Kolmogorov complexity, but isn't the apparent simplicity of 3^^^3 just an artifact of what notation we happen to have invented? I mean, "^^^" is not really a basic operation in arithmetic. We have a nice compact way of describing what steps are needed to get from a number we intuitively grok, 3, to 3^^^3, but I'm not sure it's safe to say that makes it simple in any significant way. For one thing, what would make 3 a simple number in the first place?
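For what it's worth, the "short description" intuition can be made concrete: a few lines pin down 3^^^3 exactly even though the number itself is unevaluable. A sketch using the Knuth up-arrow convention (where ^^ means two arrows):

```python
def up(a, n, b):
    """Knuth up-arrow: a followed by n arrows, then b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

assert up(3, 1, 3) == 27                 # 3^3
assert up(3, 2, 2) == 27                 # 3^^2 = 3^3
assert up(3, 2, 3) == 3 ** 27            # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a tower of ~7.6 trillion 3s. The short program above
# specifies it exactly, but no physical computer could evaluate it.
```

Whether this counts as "simple" still depends on treating ** and recursion as basic, which is the notational arbitrariness the comment is questioning.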

**zaq** on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-14T05:21:33.965Z · score: 8 (8 votes) · LW · GW

Just thought of something:

How sure are we that P(there are N people) is not at least as small as 1/N for sufficiently large N, even without a leverage penalty? The OP seems to be arguing that the complexity penalty on the prior is insufficient to generate this low probability, since it doesn't take much additional complexity to generate scenarios with arbitrarily more people. Yet it seems to me that after some sufficiently large number, P(there are N people) *must* drop faster than 1/N. This is because our prior must be normalized. That is:

Sum(all non-negative integers N) of P(there are N people) = 1.

If there was some integer M such that for all n > M, P(there are n people) >= 1/n, the above sum would not converge. If we are to have a normalized prior, there must be a faster-than-1/N falloff to the function P(there are N people).

In fact, if one demands that my priors indicate a finite expected number of people in the universe/multiverse, then my priors must diminish faster than 1/N^2 (so that the sum of N*P(there are N people) converges).

TL;DR: If your priors are such that the probability of there being 3^^^3 people is not smaller than 1/(3^^^3), then you don't have a normalized distribution of priors. If your priors are such that the probability of there being 3^^^3 people is not smaller than 1/((3^^^3)^2), then your expected number of people in the multiverse is divergent/infinite.
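The convergence facts being used are the standard p-series results, visible from partial sums:

```python
import math

def partial_sum(f, terms):
    return sum(f(n) for n in range(1, terms + 1))

# sum(1/n) diverges (grows like ln N), so P(N people) >= 1/N for all large N
# cannot be part of a normalized prior.
h_small = partial_sum(lambda n: 1 / n, 1_000)
h_large = partial_sum(lambda n: 1 / n, 1_000_000)
assert h_large - h_small > 6          # still growing, by ~ln(1000) ~ 6.9

# sum(1/n^2) converges (to pi^2/6), so a faster-than-1/N^2 falloff keeps
# the expected population sum(N * P(N)) finite.
s = partial_sum(lambda n: 1 / n ** 2, 1_000_000)
assert abs(s - math.pi ** 2 / 6) < 1e-5
```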

**zaq** on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-09T18:15:04.240Z · score: 3 (3 votes) · LW · GW

Just gonna jot down some thoughts here. First a layout of the problem.

- Expected utility is a product of two numbers: the probability of the event times the utility generated by the event.
- Traditionally speaking, when the event is claimed to affect 3^^^3 people, the utility generated is on the order of 3^^^3.
- Traditionally speaking, there's nothing about the 3^^^3 people that requires a super-exponentially large extension to the complexity of the system (the universe/multiverse/etc.), so the probability of the event does *not* scale like 1/(3^^^3).
- Thus the expected payoff becomes enormous, and you should pay the dude $5.
- If you actually follow this, you'll be mugged by random strangers offering to save 3^^^3 people or whatever super-exponential numbers they can come up with.

In order to avoid being mugged, your suggestion is to apply a scale penalty (leverage penalty) to the *probability*. You then notice that this has some very strange effects on your epistemology: you become incapable of ever believing the $5 will actually help no matter how much evidence you're given, even though evidence can make the expected payoff large. You then respond to *this* problem with what appears to be an excuse to be illogical and/or non-Bayesian at times (due to finite computing power).

It seems to me that an alternative would be to rescale the *utility* value instead of the probability. This way, you wouldn't run into any epistemic issues anywhere, because you aren't messing with the epistemics.

I'm not proposing we rescale Utility(save X people) by a factor 1/X, as that would make Utility(save X people) = Utility(save 1 person) all the time, which is obviously problematic. Rather, my idea is to make Utility a *per capita* quantity. That way, when the random hobo tells you he'll save 3^^^3 people, he's making a claim that requires there to be at least 3^^^3 people to save. If this does turn out to be true, keeping your Utility as a per capita quantity will require a rescaling on the order of 1/(3^^^3) to account for the now-much-larger population. This gives you a small expected payoff without requiring problematically small prior probabilities.

It seems we humans may already do a rescaling of this kind anyway. We tend to value rare things more than we would if they were common, tend to protect an endangered species more than we would if it weren't endangered, and so on. But I'll be honest and say that I haven't really thought the consequences of this utility re-scaling through very much. It just seems that if you need to rescale a product of two numbers and rescaling one of the numbers causes problems, we may as well try rescaling the other and see where it leads.
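A toy version of the proposed rescaling, with manageable made-up numbers (the probability and population below are purely illustrative):

```python
from fractions import Fraction as F

N = 10 ** 6                    # mugger claims to save N people
p_claim_true = F(1, 1000)      # some not-astronomically-small probability

# Naive total utility: the expected payoff scales with N, so a big enough
# claim always wins -> pay the mugger.
naive_expected = p_claim_true * N
assert naive_expected == 1000

# Per-capita utility: if the claim is true, the population is at least N,
# so saving N people is worth N/N = 1 per-capita util. The expected payoff
# no longer grows with the size of the claim.
per_capita_expected = p_claim_true * F(N, N)
assert per_capita_expected == F(1, 1000)
```

The point of the sketch is only that the rescaling kills the growth-with-N in the payoff without touching the probability assignment.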

Any thoughts?

**zaq** on The Cartoon Guide to Löb's Theorem · 2013-04-09T17:48:40.098Z · score: -3 (5 votes) · LW · GW

A simple explanation of a flaw that makes no reference to Löb's Theorem, meta-anythings, or anything complicated. Of course, spoilers.

"Let ◻Z stand for the proposition "Z is provable". Löb's Theorem shows that, whenever we have ((◻C)->C), we can prove C."

This statement is the source of the problem. For ease of typing, I'm going to use C' = "We can prove C". What you have here is (C'->C)->C'. Using material implication, we replace all structures of the form A->B with ~A or B (~ = negation). This gives:

~(~C' or C) or C'

Using De Morgan's laws, we have ~(A or B) = ~A and ~B, yielding:

(C' and ~C) or C'

By absorption this whole expression reduces to just C', so the statement (C' -> C) -> C' evaluates to true ONLY when C' is true. You then proceed to try and apply it where C' is false. In other words, you have a false premise. Either you can in fact prove that 2=1, or it is not in fact the case that (C' -> C) -> C' when C is "2=1".
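The truth-table claim is mechanical to verify:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# (C' -> C) -> C' has the same truth value as C' in every case, i.e. as a
# propositional formula it reduces to C' itself.
for c_prime, c in product([False, True], repeat=2):
    assert implies(implies(c_prime, c), c_prime) == c_prime
```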

PS: I didn't actually need to read Löb's Theorem or even know what it was about to find this flaw. I suspect the passage quoted is in fact not the result of Löb's Theorem. You can probably dig into Löb's Theorem to pinpoint *why* it is not the result, but meh.

**zaq** on Boredom vs. Scope Insensitivity · 2013-01-16T22:34:26.028Z · score: 0 (0 votes) · LW · GW

Uh... what?

Sqrt(a few billion + n) is approximately Sqrt(a few billion). Increasing functions with diminishing returns don't approach linearity at large values; their growth becomes really small (way sub-linear, nearly constant) at high values.

This may be an accurate description of what's going on (if, say, our value for re-watching movies falls off slower than our value for saving multiple lives), but it does not at all strike me as an argument for treating lives as linear. In fact, it strikes me as an argument for treating life-saving as *more* sub-linear than movie-watching.
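The near-flatness at scale is easy to quantify (sqrt utility assumed, "a few billion" picked arbitrarily):

```python
import math

base = 3_000_000_000  # "a few billion"

# Marginal value of one more unit under sqrt utility, at scale vs. at zero:
marginal_at_scale = math.sqrt(base + 1) - math.sqrt(base)
marginal_at_zero = math.sqrt(1) - math.sqrt(0)

# ~9e-6 vs. 1: growth at scale is nearly constant, not nearly linear.
assert marginal_at_scale < 1e-5
assert marginal_at_zero == 1.0
```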

**zaq** on Nonperson Predicates · 2012-10-22T19:21:45.480Z · score: 0 (0 votes) · LW · GW

Food for thought:

This whole post seems to assign moral values to actions, rather than states. If it is morally negative to end a simulated person's existence, does this mean something different than saying that the universe without that simulated person has a lower moral value than the universe with that person's existence? If not, doesn't that give us a moral obligation to create and maintain all the simulations we can, rather than *avoiding* their creation? The more I think about this post, the more it seems that the optimum response is to simulate as many super-happy people as possible, and to hell with the non-simulated world (assuming the simulated people would vastly outweigh the non-simulated people in terms of 'amount experienced').

You are going to die, and there's nothing your parents can do to stop that. Was it morally wrong for them to bring about your existence in the first place?

Suppose some people have crippling disabilities that cause large amounts of suffering in their lives (arguably, some people do). If we could detect the inevitable development of such disabilities at an early embryonic stage, would we be morally obligated to abort the fetuses?

If an FAI is going to run a large number of simulations, is there some Law of Large Numbers result that tells us that the simulations experiencing great amounts of pleasure match or overwhelm the simulations experiencing great amounts of pain (or could we construct the algorithms in such a way as to produce this result)? If so, we may be morally obligated to *not* solve this problem.

Assuming you support people's "right to die," what if we simply ensured that all simulated agents ask to be deleted at the end of their run? (I am here reminded of a vegetarian friend of mine who decided the meat industry would be even *more* horrible if we managed to engineer cows that asked to be eaten.)

**zaq** on Probability is Subjectively Objective · 2012-08-09T16:25:51.348Z · score: 1 (1 votes) · LW · GW

This is silly. To say that there is some probability in the universe is not to say that everything has randomness to it. People arguing that there is intrinsic probability in physics don't argue that this intrinsic probability finds its way into the trillionth digit of pi.

Many Physicists: If I fire a single electron at two slits, with a detector placed immediately after one of the slits, then I detect the electron half the time. Furthermore, leading physics indicates that no amount of information will ever allow me to accurately predict which trials will result in a detected electron; I can determine a 50/50 chance for detection/non-detection, and that's the limit of predictability. Thus it's safe to say that the 50/50 is a property of the experimental setup, and not a property of how much I know about the setup.

Pretty Much Zero Physicists: The above indicates that the trillionth digit of pi is in a superposition until we calculate it, at which point it collapses to a single value.
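The contrast can be made concrete with a toy sketch (my own illustration, not from the comment): the detection outcome is modeled as a fresh 50/50 draw each trial, while a digit of pi is fully determined before anyone computes it. Here I use Machin's formula for pi; the pseudo-random draw merely stands in for irreducible quantum randomness.

```python
import random
from decimal import Decimal, getcontext

# A 50/50 detection is a property of the setup: each trial is a fresh draw
# (pseudo-random here, standing in for irreducible quantum randomness).
detected = random.random() < 0.5

def arctan_inv(x, digits):
    # arctan(1/x) via its Taylor series, in Decimal arithmetic
    eps = Decimal(10) ** -(digits + 5)
    total, term, n = Decimal(0), Decimal(1) / x, 0
    while abs(term) > eps:
        total += term / (2 * n + 1)
        term = -term / (x * x)
        n += 1
    return total

def pi_to(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    getcontext().prec = digits + 10
    return 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)

# No superposition here: the digits are fixed whether or not we compute them.
assert str(pi_to(30)).startswith("3.14159265358979")
```

The point of the sketch: re-running the first line gives different answers; re-running the last line never does.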

**zaq**on SotW: Be Specific · 2012-04-20T17:59:43.249Z · score: 0 (0 votes) · LW · GW

Replace "the next two seconds" with "the two seconds subsequent to my finishing this wish description"

**zaq**on SotW: Be Specific · 2012-04-14T06:01:57.900Z · score: 0 (0 votes) · LW · GW

Constraint: Within the next two seconds, you must perform only the tasks listed, which you must perform in the specified order.

Task 1. Exchange your definition of decrease with your definition of increase.
Task 2. --insert wish here--
Task 3. Self-terminate.

This is of course assuming that I don't particularly care for the genie's life.

**zaq**on Timeless Physics · 2012-04-06T20:19:01.264Z · score: 0 (0 votes) · LW · GW

Uh... what?

c is the speed of light. It's an observable. If I change c, I've made an observable change in the universe --> universe no longer looks the same?

Or are you saying that we'll change t and c both, but the measured speed of light will become some function of c and t that works out to remain the same? As in, c is no longer the measured speed of light (in a vacuum)? Then can't I just identify the difference between this universe and the t -> 2t universe by seeing whether or not c is the speed of light?

I also think you're stuck on restricting yourself only to E&M using Special Relativity. If you take t -> 2t you change the metric from Minkowski space to some other space, and that means that you'll have gravitational effects where there previously weren't gravitational effects. You might be able to salvage that in some way, but it's going to be a lot more complicated than just changing the value for c. The only thing I can think of is to re-define the 4-vector dot-product and the transformation laws for objects with Lorentz indices, and even that might not end up being consistent.
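A quick numerical sketch of why the rescaling is not a symmetry (toy values of my own, units with c = 1; the `interval` function is just the flat-space line element):

```python
import math

# Minkowski interval s^2 = -(c*t)^2 + x^2, in units with c = 1
def interval(t, x):
    return -t**2 + x**2

t, x = 3.0, 1.0
s2 = interval(t, x)  # -8.0

# A Lorentz boost with velocity v preserves the interval...
v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2)
t_b = gamma * (t - v * x)
x_b = gamma * (x - v * t)
assert abs(interval(t_b, x_b) - s2) < 1e-9

# ...but the rescaling t -> 2t does not, so it is not a symmetry
# of the Minkowski line element:
assert interval(2 * t, x) != s2  # -35.0 vs. -8.0
```

Boosts leave s^2 fixed by construction; doubling t alone changes it, which is the sense in which the metric is no longer Minkowski.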

**zaq**on Timeless Physics · 2012-03-30T22:34:14.684Z · score: 1 (1 votes) · LW · GW

A couple of things:

- You begin by describing time translation invariance, even relating it to space translation invariance. This is all well and good, except that you then ask:

"Does it make sense to say that the global rate of motion could slow down, or speed up, over the whole universe at once—so that all the particles arrive at the same final configuration, in twice as much time, or half as much time? You couldn't measure it with any clock, because the ticking of the clock would slow down too."

This one doesn't make as much sense to me. This is not just a translation but is actually a re-scaling. If you rescale time separately from space then you will have problems, because you will qualitatively change the metric (special relativity under t -> 2t no longer uses a Minkowski metric). This in turn changes the geometric structure of spacetime. If you rescale both time and space then you have a conformal transformation, but this transformation is not a Lorentz transformation. I'm not so sure physics is invariant under such transformations.

- The electroweak force has been observed to violate both charge conjugation symmetry and parity symmetry. However, any Lorentz invariant physics must be symmetric under CPT (charge conjugation + parity + time reversal). Thus if our universe is Lorentz invariant, it is not time-reversal invariant. So you will at least need to keep the direction of time, even if you are able to otherwise eliminate t.

"@Stirling: If you took one world and extrapolated backward, you'd get many pasts. If you take the many worlds and extrapolate backward, all but one of the resulting pasts will cancel out! Quantum mechanics is time-symmetric."

Um... no. As I explained above, Lorentz invariance plus CP violation in electroweak experiments indicate that the universe is not invariant under time-reversal. http://en.wikipedia.org/wiki/CP_violation

Eh... correction. Quantum Mechanics may be time-symmetric, but quantum field theories including weak interactions are not.

**zaq**on The Futility of Emergence · 2010-11-21T22:07:44.572Z · score: 1 (1 votes) · LW · GW

The even/odd attribute of a collection of marbles is not an emergent phenomenon. This is because as I gradually (one by one) remove marbles from the collection, the collection has a meaningful even/odd attribute all the way down, no matter how few marbles remain. If an attribute remains meaningful at all scales, then that attribute is not emergent.

If the accuracy of fluid mechanics were nearly 100% for 500+ water molecules and then suddenly dropped to something like 10% at 499 water molecules, then I would not count fluid mechanics as an emergent phenomenon. I guess I would word this as "no jump discontinuities in the accuracy vs. scale graph."
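That criterion can be phrased as a toy check (the numbers below are invented purely for illustration): treat accuracy as a function of scale and test for a jump discontinuity.

```python
# Toy "accuracy vs. scale" curves; the numbers are invented for illustration.
def has_jump(accuracies, threshold=0.5):
    # Jump discontinuity: accuracy changes sharply between adjacent scales.
    return any(abs(a - b) > threshold
               for a, b in zip(accuracies, accuracies[1:]))

parity_accuracy = [1.0] * 10              # even/odd is exact at every size
sudden_fluid    = [0.1] * 4 + [0.99] * 6  # hypothetical cliff at ~500 molecules

assert not has_jump(parity_accuracy)  # not emergent by this criterion
assert has_jump(sudden_fluid)         # would count as emergent
```

A gradually degrading description (accuracy falling smoothly as molecules are removed) would also pass the no-jump test, which matches the marble example: meaningfulness at all scales, no cliff.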