I already have this and it's horrible.
What about the fact that the best compression algorithm may be insanely expensive to run? We know the math that describes the behavior of quarks, which is to say, we can in principle generate the results of all possible experiments with quarks by solving a few equations. However, doing computations with the theory is extremely expensive: it takes something like 10^15 floating-point operations to compute, say, some basic properties of the proton to 1% accuracy.
I'm pretty sure the cost of resurrection isn't his true rejection; his true rejection is more like 'point and laugh at weirdos'.
Also for a number of commenters in the linked thread, the true rejection seems to be, "By freezing yourself you are claiming that you deserve something no one else gets, in this case immortality."
Am I mistaken in thinking that all you'd need to do is build the centrifuge with an angled floor, so the net force experienced from gravity and (illusory) centrifugal force is straight "down" into it?
Sure, this would work in principle. But I guess it would be fantastically expensive compared to a simple building. The centrifuge would need to be really big and, unlike in 0g, would have to be powered by a big motor and supported against Mars gravity. And Mars gravity isn't that low, so it's unclear why you'd want to pay this expense.
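To get a feel for the numbers, here's a rough back-of-the-envelope sketch in Python; the 100 m radius is an arbitrary assumption, and engineering details are ignored entirely:

```python
import math

# Rough numbers for a Mars centrifuge whose angled floor gives 1g "down".
# The 100 m radius is an arbitrary assumption for illustration.
g_mars, g_earth = 3.71, 9.81   # m/s^2
radius = 100.0                 # m

# Centripetal acceleration needed so that gravity + centrifugal = 1g net:
a_c = math.sqrt(g_earth**2 - g_mars**2)

# Floor tilt from horizontal, so the net force is perpendicular to the floor:
tilt = math.degrees(math.atan2(a_c, g_mars))

rim_speed = math.sqrt(a_c * radius)            # m/s
rpm = 60 * rim_speed / (2 * math.pi * radius)

print(f"a_c = {a_c:.1f} m/s^2, tilt = {tilt:.0f} deg, "
      f"rim speed = {rim_speed:.0f} m/s, {rpm:.1f} rpm")
# -> a_c ≈ 9.1 m/s^2, tilt ≈ 68 deg, rim speed ≈ 30 m/s, ~2.9 rpm
```

A structure hundreds of meters across with its rim moving at highway speeds, supported against 0.38g, gives a sense of why this loses to a simple building.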
n/t
The inscription is not in the Latin alphabet.
A big pie, rotating in the sky, should have an apparently shorter circumference than a non-rotating one, even though both have the same radius.
I can't swallow this. Not because it is weird, but because it is inconsistent.
There is no inconsistency. In one case you are measuring the circumference with moving rulers, while in the other case you are measuring the circumference with stationary rulers. It's not inconsistent for these two different measurements to give different results.
You don't need GR for a rotating disk; you only need GR when there is gravity.
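For concreteness, the standard textbook version of the two measurements: in the lab frame the rim still has circumference \(C = 2\pi r\), but a ruler carried along the rim moves at \(v = \omega r\) and is length-contracted, so more rulers fit around the rim and the co-rotating measurement gives

\[ C' = \gamma \cdot 2\pi r = \frac{2\pi r}{\sqrt{1 - \omega^2 r^2/c^2}} > 2\pi r. \]

(So the co-rotating circumference actually comes out longer, not shorter; but either way there is no inconsistency, because these are two different measurements.)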
Having dabbled a bit in evolutionary simulations, I find that, once you have unicellular organisms, the emergence of cooperation between them is only a matter of time, and from there multicellular organisms form and cell specialization based on division of labor begins.
I'm very curious: in what evolutionary simulations have you seen these phenomena evolve?
This looks fun! I will participate.
A computer is no more conscious than a rock rolling down a hill - we program it by putting sticks in the rock's way to guide it down a different path.
Careful!--a lot of people will bite the bullet and call the rock+stick system conscious if you put a complicated enough pattern of sticks in front of it and provide the rock+stick system with enough input and output channels by which it can interact with its surroundings.
This doesn't seem like a good analogy to any real-world situation. The null hypothesis ("the coin really has two tails") predicts the exact same outcome every time, so every experiment should get a p-value of 1, unless the null hypothesis is false, in which case someone will eventually get a p-value of 0. This is a bit of a pathological case that bears little resemblance to real statistical studies.
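A minimal sketch of the degenerate p-value computation described above (the helper name is mine):

```python
def p_value_two_tails(flips):
    """p-value under H0: "the coin has two tails", i.e. P(tails) = 1 per flip.

    Under H0 the all-tails sequence is the only possible outcome (p = 1),
    and any sequence containing a head has probability 0 under H0 (p = 0).
    """
    return 0.0 if "H" in flips else 1.0

print(p_value_two_tails("TTTTTT"))  # 1.0, every experiment, forever...
print(p_value_two_tails("TTTHTT"))  # 0.0: the null is refuted outright
```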
The analogy seems pretty nice. The argument seems to be that, based on the historical record, we're doomed to collective inaction in the face of even extraordinarily dangerous risks. I agree that the case of nukes does provide some evidence for this.
I think you paint things a little too grimly, though. We have done at least a little bit to try to mitigate the risks of this particular technology: there are ongoing efforts to prevent proliferation of nuclear weapons and reduce nuclear stockpiles. And maybe a greater risk really would provoke a more serious response.
I think the Born rule falls out pretty nicely in the Bohmian interpretation.
What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw?
Having recognized this danger, you should probably be more skeptical of verbal arguments.
This is essentially the standard argument for why we have to quantize gravity. If the sources of the gravitational field can be in superposition, then it must be possible to superpose two different gravitational fields. But (as I think you acknowledge) this doesn't mean that quantum mechanical deviations from GR have to be detectable at low energies.
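To make the first step explicit: if the source mass can be prepared in

\[ |\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|x_1\rangle + |x_2\rangle\big), \]

then consistency requires the field to become correlated with the position, ending up in a superposition of the configurations \(g_1\) and \(g_2\) sourced by the two locations; no single classical field is compatible with both branches.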
I'd be interested to know what the correlation with financial success is for additional IQ above the mean among Ivy Leaguers.
I'm pretty sure I've seen a paper discussing this and probably you can find data if you google around for "iq income correlation" and similar.
Plus, it's actually classical: it yields a full explanation of the real, physical, deterministic phenomena underlying apparently quantum ones.
Note that because of Bell's theorem, any classical system is going to have real trouble emulating all of quantum mechanics; entanglement is going to trip it up. I know you said "replicate many aspects of quantum mechanics," but it's probably important to emphasize that this sort of thing is not going to lead to a classical model underlying all of QM.
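The quantitative version of this obstruction is the CHSH inequality: any local classical (hidden-variable) model must satisfy

\[ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2, \]

while measurements on an entangled pair reach \(2\sqrt{2}\) at suitable angles, so the classical emulation provably breaks down exactly where entanglement matters.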
I read it as saying that people have many interests in common, so pursuing "selfish" interests can also be altruistic to some extent.
every time we discover something new we find that there are more questions than answers
I don't think that's really true though. The advances in physics that have been worth celebrating--Newtonian mechanics, Maxwellian electromagnetism, Einsteinian relativity, the electroweak theory, QCD, etc.--have been those that answer lots and lots of questions at once and raise only a few new questions like "why this theory?" and "what about higher energies?". Now we're at the point where the Standard Model and GR together answer almost any question you can ask about how the world works, and there are relatively few questions remaining, like the problem of quantum gravity. Think how much narrower and more neatly posed this problem is compared to the pre-Newtonian problem of explaining all of Nature!
Fair enough. I can see the appeal of your view if you don't think there's a theory of everything. But given the success of fundamental physics so far, I find it hard to believe that there isn't such a theory!
What would it mean then for a Universe to not "run on math"? In this approach it means that in such a universe no subsystem can contain a model, no matter how coarse, of a larger system. In other words, such a universe is completely unpredictable from the inside. Such a universe cannot contain agents, intelligence or even the simplest life forms.
I think when we say that the universe "runs on math," part of what we mean is that we can use simple mathematical laws to predict (in principle) all aspects of the universe. We suspect that there is a lossless compression algorithm, i.e., a theory of everything. This is a much stronger statement than just claiming that the universe contains some predictable regularities, and is part of what makes the Platonic ideas you are arguing against seem appealing.
We could imagine a universe in which physics found lots of approximate patterns that held most of the time and then got stuck, with no hint of any underlying order and simplicity. In such a universe we would probably not be so impressed with the idea of the universe "running on math" and these Platonic ideas might be less appealing.
Quantum fluctuations are not dynamical processes inherent to a system, but instead reflect the statistical nature of measurement outcomes.
I'm no expert at all, but while that sounds agreeable on an intuitive level, I've read that the opposite is true, i.e., that QM processes are inherently fuzzy.
I don't quite understand why you think that this is the opposite of what you quoted. The point is that the "inherent fuzziness" is there, but it is not because of literal unobserved "fluctuations" of the system over time. Speaking of "fluctuations" as if they were actual processes happening in time is poetic language (and physicists understand that it is poetic language; the process of trying to explain QM to lay audiences generates a huge number of attractive but incomplete oversimplifications like this one).
something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it
I don't know if I agree with your estimate of the relative probabilities, but I admit that I exaggerated slightly to make my point. I agree that this strategy is at least worth thinking about, especially if you think it is at all plausible that we are in a simulation. Something along these lines is the only one of the listed strategies that I thought had any merit.
A priori it seems hugely unlikely that with all of our ingenuity we can only come up with two plausible strategies for living forever (religion and cryonics)
I agree, and I also think we should try to think up other strategies. Here are some that people have already come up with besides cryonics and religion:
Figure out how to cure aging before you die.
Figure out how to upload brains before you die.
Create a powerful AI and delegate the problem to it (complementary to cryonics if the AI will only be created after you die).
Personally, I don't find any of the strategies you mention to be plausible enough to be worth thinking about for more than a few seconds. (Most of them seem obviously insufficient to preserve anything I would identify as "me.") I'm worried this may produce the opposite of this post's intended effect, because it may seem to provide evidence that strategies besides cryonics can be easily dismissed.
"There are numbers you can't remember if I tell them to you" is not at all the same claim that "there are ideas I can't explain to you."
But they might be related. Perhaps there are interesting and useful concepts that would take, say, 100,000 pages of English text to write down, such that each page cannot be understood without holding most of the rest of the text in working memory, and such that no useful, shorter, higher-level version of the concept exists.
Humans can only think about things that can be taken one small piece at a time, because our working memories are pretty small. It's plausible to me that there are atomic ideas that are simply too big to fit in a human's working memory, and which do need to be held in your head at one time in order to be understood.
I can't turn it into equations.
Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
I see a future pattern emerging in the United States:
Few atheists among an overwhelming Christian majority -> shrinking Christianity, growing atheism -> atheist tribalism growing well-connected and strong -> natural tribal impulse to not tolerate different voices -> war between atheists and Christians.
The last arrow seems like quite a jump. In the US we try to restrain the impulse to intolerance with protections for free speech and such. Do you think these protections are likely to fail? Why are religious divisions going to cause a war when other divisions such as the political left vs. right haven't? Why do you think a religious war is likely in the US when European countries with much higher rates of atheism haven't experienced such wars and don't seem likely to?
Don't try to say this won't happen, and that Rationalists will always allow other people to believe differently. Coherent Extrapolated Volition, Politics is the Mind Killer, and Eliezer's success in creating LW and the rationalist movement say otherwise.
I'm confused; what do these three things you cite have to do with intolerance of religious views?
I don't think that line makes him a compatibilist, because I don't think that's the notion of free will under discussion.
What exactly is the notion of free will that is under discussion? Or equivalently, can you explain what a "true" compatibilist position might look like? You cited this paper as an example of a "traditionally compatibilist view," but I'm afraid I didn't get much from it. I found it too dense to extract any meaning in the time I was willing to spend reading it, and it seemed to make some assertions that, as I interpreted them, were straightforwardly false.
I'd find a simple explanation of a "traditional compatibilist" position very helpful.
I think this is his conclusion:
...if we want to know which meaning to attach to a confusing sensation, we should ask why the sensation is there, and under what conditions it is present or absent.
Then I could say something like: "This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals."
This is a condition that can fail in the presence of jail cells, or a decision so overwhelmingly forced that I never perceived any uncertainty about it.
There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity. I have no problems about saying that I have "free will" appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices.
Certainly I do not "lack free will" if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.
Usually I don't talk about "free will" at all, of course! That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition. The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window.
But I generally prefer to reinterpret my sensations sensibly, as opposed to refuting a confused interpretation and then calling the sensation "false".
This sounds pretty compatibilist to me. EY gives a definition of free will that is manifestly compatible with determinism. Elsewhere in that post he argues that different definitions of free will are nonsensical and are generated by misleading intuitions.
But as the quote demonstrates, and as discussed in a different post, EY is less interested in providing a definition for free will and then asserting that people do or do not possess free will, and more interested in explaining in detail where all the intuitions about free will come from, and therefore why people talk about free will. He suggests that if you can explain what caused you to ask the question "do we have free will?" in the first place, you may not need to even bother to answer the question.
my confidence that the ultimately correct and most useful Next Great Discovery (e.g. any method to control gravity) will not come from a physics department is above 50%.
If you care to expand on this, I'm curious to hear your reasoning.
What does this mean?
Computer simulation of the strong interaction part of the Standard Model is a big research area: you may want to read about lattice QCD. I've written a simple lattice QCD simulation in a few hundred lines of code. If you Google a bit you can probably find some example code. The rest of the Standard Model has essentially the same structure and would only be a few more lines of code.
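To give a flavor of what such a simulation looks like, here's a minimal sketch, with the caveat that it's a toy 2D compact U(1) pure-gauge model rather than QCD itself (real lattice QCD uses SU(3) matrices on a 4D lattice plus fermion fields, but the Metropolis-on-links structure is the same):

```python
import numpy as np

# Toy 2D compact U(1) pure-gauge theory with the Wilson action, updated
# by Metropolis. A pedagogical skeleton only, not QCD.
L, beta, n_sweeps = 16, 2.0, 200
rng = np.random.default_rng(0)
# theta[mu, x, y] is the U(1) link angle leaving site (x, y) in direction mu.
theta = rng.uniform(0, 2 * np.pi, size=(2, L, L))

def plaquette(th, x, y):
    """Angle around the elementary square whose lower-left corner is (x, y)."""
    return (th[0, x, y] + th[1, (x + 1) % L, y]
            - th[0, x, (y + 1) % L] - th[1, x, y])

def local_action(th, mu, x, y):
    """beta * sum of (1 - cos) over the two plaquettes containing this link."""
    if mu == 0:
        plaqs = [plaquette(th, x, y), plaquette(th, x, (y - 1) % L)]
    else:
        plaqs = [plaquette(th, x, y), plaquette(th, (x - 1) % L, y)]
    return beta * sum(1 - np.cos(p) for p in plaqs)

for sweep in range(n_sweeps):
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old = theta[mu, x, y]
                s_old = local_action(theta, mu, x, y)
                theta[mu, x, y] = old + rng.uniform(-0.5, 0.5)
                # Metropolis: accept with probability min(1, exp(-dS)).
                if rng.random() >= np.exp(s_old - local_action(theta, mu, x, y)):
                    theta[mu, x, y] = old  # reject
    if (sweep + 1) % 50 == 0:
        avg = np.mean([np.cos(plaquette(theta, x, y))
                       for x in range(L) for y in range(L)])
        print(f"sweep {sweep + 1}: <cos(plaquette)> = {avg:.4f}")
```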
I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post.
Yes; I meant for the phrase "divide up food equally" to be shorthand for something more correct but less compact, like "a complicated algorithm whose rough outline includes parts like, '...When a group of people are dividing up resources, divide them according to the following weighted combination of need, ownership, equality, who discovered the resources first, ...'"
I think your discussions of metaethics might be improved by rigorously avoiding words like "fair," "right," "better," "moral," "good," etc. I like the idea that "fair" points to a logical algorithm whose properties we can discuss objectively, but when you insist on using the word "fair," and no other word, as your pointer to this algorithm, people inevitably get confused. It seems like you are insisting that words have objective meanings, or that your morality is universally compelling, or something. You can and do explicitly deny these, but when you continue to rely exclusively on the word "fair" as if there is only one concept that that word can possibly point to, it's not clear what your alternative is.
Whereas if you use different symbols as pointers to your algorithms, the message (as I understand it) becomes much clearer. Translate something like:
Fair is dividing up food equally. Now, is dividing up the pie equally objectively fair? Yes: someone who wants to divide up the pie differently is talking about something other than fairness. So the assertion "dividing the pie equally is fair" is objectively true.
into
Define XYZZY as the algorithm "divide up food equally." Now, is dividing up the pie equally objectively XYZZY? Of course it is: that's a direct logical consequence of how I just defined XYZZY. Someone who wants to divide the pie differently is using an algorithm that is not XYZZY. The assertion "dividing up the pie equally is XYZZY" is as objective as the assertion "S0+S0=SS0"--someone who rejects the latter is not doing Peano arithmetic. By the way, when I personally say the word "fair," I mean "XYZZY."
I suspect that wording things like this has less potential to trip people up: it's much easier to reason logically about XYZZY than about fairness, even if both words are supposed to be pointers to the same concept.
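A toy sketch of the same move in code (all names mine, obviously not anyone's actual metaethics):

```python
def xyzzy(pie, people):
    """XYZZY: divide the pie equally among the people."""
    share = pie / len(people)
    return {person: share for person in people}

shares = xyzzy(1.0, ["A", "B", "C"])
# "Dividing the pie equally is XYZZY" is now a theorem about this function,
# true by construction, with no claim that everyone must *want* XYZZY.
assert len(set(shares.values())) == 1
```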
Upvoted because of the frank and detailed reduction of pleasure, pain, and preferences in general.
This seems very insightful to me. In physics, it's definitely my experience that over time I gain fluency with more and more powerful concepts that let me derive new things in much faster and simpler ways. And I find myself consciously working ideas over in my mind with, I think, the explicit goal of advancing this process.
The funny thing about this is that before I gain these "superpowers," I'll read an explanation in a textbook, which is in terms of high-level ideas that I haven't completely grasped yet, so the reading doesn't help as much as it should. The book claims, "this follows immediately from Lorentz invariance," and I don't really see what's going on. Then, later, after I've understood those ideas, I find myself explaining things to myself in much the same words as the textbook: "I see! It's simple! It follows immediately from Lorentz invariance!"--but now this really is an explanation, and the words have a lot more meaning.
I'm reminded of the Interdict of Merlin in HPMOR.
I'm not disputing that we should factor in the lost utility from the future-that-would-have-been.
The issue for me is not the lost utility of the averted future lives. I just assign high negative utility to death itself, whenever it happens to someone who doesn't want to die, anywhere in the future history of the universe. [To be clear, by "future history of the universe" I mean everything that ever gets simulated by the simulator's computer, if our universe is a simulation.]
That's the negative utility I'm weighing against whatever utility we gain by time traveling. My moral calculus is balancing
[Future in which 1 billion die by nuclear war, plus 10^20 years (say) of human history afterwards] vs. [Future in which 6 billion die by being erased from disk, plus 10^20 years (say) of human history afterwards].
I could be persuaded to favor the second option only if the expected value of the 10^20 years of future human history is significantly better on the right side. But the expected value of that difference would have to outweigh 5 billion deaths.
But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree?
Yes, I disagree. Have you dedicated your life to having as many children as possible? I haven't, because I feel zero moral obligation toward children who don't exist, and feel zero guilt about "destroying" their nonexistent future.
I am having my doubts that time travel is even a coherent concept.
But Eliezer gave you a constructive example in the post!
I compute utility as a function of the entire future history of the universe and not just its state at a given time. I don't see why this can't fall under the umbrella of "utilitarianism." Anyway, if your utility function doesn't do this, how do you decide at what time to compute utility? Are you optimizing the expected value of the state of the universe 10 years from now? 10,000? 10^100? Just optimize all of it.
If you could push a button and avert nuclear war, saving billions, would you?
Of course.
Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war?
Because if time travel works by destroying universes, it causes many more deaths than it averts. To be explicit about assumptions, if our universe is being simulated on someone's computer I think it's immoral for the simulator to discard the current state of the simulation and restart it from a modified version of a past saved state, because this is tantamount to killing everyone in the current state.
[A qualification: erasing, say, the last 150 years is at least as bad as killing billions of humans, since there's essentially zero chance that the people alive today will still exist in the new timeline. But the badness of reverting and overwriting the last N seconds of the universe probably tends to zero as N tends to zero.]
either way there is an equal set of people-who-won't-exist. It's only a bad thing if you have some reason to favor the status-quo of "A exists"
My morality has a significant "status quo bias" in this sense. I don't feel bad about not bringing into being people who don't currently exist, which is why I'm not on a long-term crusade to increase the population as much as possible. Meanwhile I do feel bad about ending the existence of people who do exist, even if it's quick and painless.
More generally, I care about the process by which we get to some world-state, not just the desirability of the world-state. Even if B is better than A, getting from A to B requires a lot of deaths.
Suppose we pick out one of the histories marked with a 1 and look at it. It seems to contain a description of people who remember experiencing time travel.
Now, were their experiences real? Did we make them real by marking them with a 1 - by applying the logical filter using a causal computer?
I'd suggest that if this is a meaningful question at all, it's a question about morality. There's no doubt about the outcome of any empirical test we could perform in this situation. The only reason we care about the answer to such questions is to decide whether it's morally right to run this sort of simulation, and what moral obligations we would have to the simulated people.
Looked at this way, I think the answer to the original question is to write out your moral code, look at the part where it talks about something like "the well-being of conscious entities," taboo "conscious entities," and then rewrite that section of your moral code in clearer language. If you do this properly you will get something that tells you whether the simulated people are morally significant.
Thanks, I wish someone had pointed out this isomorphism to me earlier. I think angles might well be more intuitive than correlation coefficients.
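For anyone else who finds this helpful, the isomorphism is easy to check numerically; a quick sketch (random data, nothing special about the numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.6 * x + 0.8 * rng.normal(size=100)

# Pearson correlation...
r = np.corrcoef(x, y)[0, 1]

# ...equals the cosine of the angle between the mean-centered data vectors.
xc, yc = x - x.mean(), y - y.mean()
cos_angle = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

print(r, cos_angle, np.degrees(np.arccos(cos_angle)))
```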
The examples make the point that it's possible to be too pessimistic, and too confident in that pessimism. However, maybe we can figure out when we should be confidently pessimistic.
For example, we can be very confidently pessimistic about the prospects for squaring the circle or inventing perpetual motion. Here we have mathematical proofs of impossibility. I think we can be almost as confidently pessimistic about the near-term prospects for practical near-light-speed travel. Here we have a good understanding of the scope of the problem and of the capabilities of all practical sources of propulsion, and we can see that those capabilities are nowhere near enough.
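To make the propulsion point concrete, a back-of-the-envelope sketch (the 1000 kg payload and 0.5c target are arbitrary assumptions, and propellant is ignored entirely, which only makes things worse):

```python
# Relativistic kinetic energy for a small probe at half the speed of light.
c = 2.998e8    # m/s
m = 1000.0     # kg (assumed payload; a crewed ship would be far heavier)
v = 0.5 * c
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
kinetic_energy = (gamma - 1.0) * m * c ** 2
print(f"{kinetic_energy:.2e} J")  # ~1.4e19 J: a few percent of annual
                                  # world energy production, for one probe
```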
Let's not just leave it at "it's possible to be too pessimistic." How can we identify problems about which we can be confidently pessimistic?
How can utilities not be comparable in terms of multiplication?
"The utility of A is twice the utility of B" is not a statement that remains true if we add the same constant to both utilities, so it's not an obviously meaningful statement. We can make the ratio come out however we want by performing an overall shift of the utility function. The fact that we think of utilities as cardinal numbers doesn't mean we assign any meaning to ratios of utilities. But it seemed that you were trying to say that a person with a logarithmic utility function assesses $10^9 as having twice the utility of $50k.
Yes, clearly my Google-fu is lacking. I think I searched for phrases like "sun went around the Earth," which fails because your quote has "sun went round the Earth."
Thanks; I thought it was likely to have been posted, but I tried to search for it and didn't find it.
"Tell me," the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, "why do people always say it was natural for man to assume that the sun went around the Earth rather than that the Earth was rotating?"
His friend replied, "Well, obviously because it just looks as though the Sun is going around the Earth."
Wittgenstein responded, "Well, what would it have looked like if it had looked as though the Earth was rotating?"
-related by Richard Dawkins in The God Delusion
I have an objection to this:
So branching is the consequence of a particular type of physical process: the "measurement" of a microscopic superposition by its macroscopic environment. Not all physical processes are of this type, and it's not at all obvious to me that the sorts of processes usually involved in our deaths are of this sort.
I think that essentially all processes involving macroscopic objects are of this type. My understanding is that the wave function of a macroscopic system at nonzero temperature is constantly fissioning into vast numbers of decoherent sub-regions, i.e., "worlds." These worlds start out similar to each other, but we should expect differences to amplify over time. And, of course, each new world immediately begins fissioning into vast numbers of "sub-worlds."
So, while in one world you might get run over by a bus, there is e.g. another world that separated from that one a year ago in which the bus is late and you survive. Plus huge numbers of other possibilities.
In this vast profusion of different worlds, for any given death there's essentially always another branch in which that death was averted.